Science.gov

Sample records for adaptive kernel method

  1. Adaptive Kernel Based Machine Learning Methods

    DTIC Science & Technology

    2012-10-15

    multiscale collocation method with a matrix compression strategy to discretize the system of integral equations and then use the multilevel...augmentation method to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are developed for these methods. The...performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed

  2. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data, to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology to use kernel regression methods as an effective tool for facies delineation. The method uses both the spatial and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations ran in a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method is demonstrated to significantly improve when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curves performance.
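
    A minimal sketch of the core idea (not the authors' code): each sample location gets an anisotropic Gaussian kernel whose steering matrix comes from the covariance of its nearest neighbours, and every grid point is assigned the facies with the largest kernel-weighted vote. Function names and parameter values below are illustrative assumptions.

      import numpy as np

      def steering_kernel_classify(coords, labels, grid, k=10, h=1.0):
          """Classify grid points by locally adaptive (steering) kernel voting.

          coords : (n, 2) sample locations
          labels : (n,) non-negative integer facies codes
          grid   : (m, 2) points to classify
          """
          coords, grid = np.asarray(coords, float), np.asarray(grid, float)
          n = len(coords)
          steer = np.empty((n, 2, 2))
          for i in range(n):
              d = np.linalg.norm(coords - coords[i], axis=1)
              nbrs = coords[np.argsort(d)[1:k + 1]]            # k nearest neighbours
              cov = np.cov(nbrs.T) + 1e-6 * np.eye(2)          # regularised local covariance
              steer[i] = np.linalg.inv(cov)                    # steers the kernel along local continuity

          out = np.empty(len(grid), dtype=int)
          for j, x in enumerate(grid):
              diff = coords - x
              q = np.einsum('ni,nij,nj->n', diff, steer, diff) # squared Mahalanobis distances
              w = np.exp(-0.5 * q / h ** 2)                    # anisotropic Gaussian weights
              out[j] = np.argmax(np.bincount(labels, weights=w))
          return out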

  3. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods

    PubMed Central

    Schmidt, Johannes F. M.; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveal improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform methods using standard compressed sensing and ℓ1-regularized parallel imaging methods. PMID:27116675
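
    The interleaving of an artifact-removal step with k-space data consistency can be sketched generically as below; the denoise callable stands in for the block-matching/kernel-PCA thresholding step described in the abstract, which is not reproduced here, and single-coil Cartesian sampling is assumed for simplicity.

      import numpy as np

      def interleaved_recon(y, mask, denoise, n_iter=30):
          """Alternate an image-domain denoising step with k-space data consistency.

          y       : undersampled k-space data (zero-filled where not acquired)
          mask    : boolean sampling mask, True where data were acquired
          denoise : callable image -> image (placeholder for the kernel-PCA block step)
          """
          x = np.fft.ifft2(y)                      # zero-filled initial image
          for _ in range(n_iter):
              x = denoise(x)                       # remove undersampling artifacts
              k = np.fft.fft2(x)
              k[mask] = y[mask]                    # enforce consistency with acquired samples
              x = np.fft.ifft2(k)
          return x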

  4. Adaptive wiener image restoration kernel

    SciTech Connect

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
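
    A minimal frequency-domain sketch of Wiener restoration, assuming the optical transfer function and the noise and object power spectra are already available; the patent's adaptive construction of these quantities is not reproduced here.

      import numpy as np

      def wiener_restore(image, otf, noise_psd, signal_psd):
          """Restore a degraded image with the classical Wiener kernel.

          image      : observed 2-D image
          otf        : optical transfer function, same shape as the image's FFT
          noise_psd  : noise power spectrum (array or scalar estimate)
          signal_psd : object power spectrum (array or scalar estimate)
          """
          G = np.fft.fft2(image)
          # Wiener filter: conj(H) / (|H|^2 + Pn/Ps)
          W = np.conj(otf) / (np.abs(otf) ** 2 + noise_psd / signal_psd)
          return np.real(np.fft.ifft2(W * G))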

  5. An adaptive kernel smoothing method for classifying Austrosimulium tillyardianum (Diptera: Simuliidae) larval instars.

    PubMed

    Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.
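
    The general idea can be illustrated with an Abramson-style variable-bandwidth kernel density estimate of a single measurement (e.g., head capsule width): local minima of the adaptive density estimate serve as candidate cut points between instars. The paper's Brooks'-rule-based bandwidth selector and classification standard are not reproduced, and the pilot bandwidth below is an illustrative assumption.

      import numpy as np

      def adaptive_kde(x, grid, h0=0.05):
          """Variable-bandwidth Gaussian KDE (Abramson-style local bandwidths)."""
          x, grid = np.asarray(x, float), np.asarray(grid, float)
          # pilot fixed-bandwidth density at the data points
          pilot = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h0) ** 2).mean(1) / (h0 * np.sqrt(2 * np.pi))
          lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5   # local bandwidth factors
          h = h0 * lam                                             # one bandwidth per observation
          d = (grid[:, None] - x[None, :]) / h[None, :]
          return (np.exp(-0.5 * d ** 2) / (h * np.sqrt(2 * np.pi))).mean(1)

      def instar_cuts(x, n_grid=512):
          """Candidate instar boundaries = interior local minima of the adaptive density."""
          grid = np.linspace(np.min(x), np.max(x), n_grid)
          f = adaptive_kde(x, grid)
          is_min = (f[1:-1] < f[:-2]) & (f[1:-1] < f[2:])
          return grid[1:-1][is_min]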

  6. An Adaptive Kernel Smoothing Method for Classifying Austrosimulium tillyardianum (Diptera: Simuliidae) Larval Instars

    PubMed Central

    Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao

    2015-01-01

    In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689

  7. The Adaptive Kernel Neural Network

    DTIC Science & Technology

    1989-10-01

    A neural network architecture for clustering and classification is described. The Adaptive Kernel Neural Network (AKNN) is a density estimation...classification layer. The AKNN retains the inherent parallelism common in neural network models. Its relationship to the kernel estimator allows the network to

  8. Adaptive reproducing kernel particle method for extraction of the cortical surface.

    PubMed

    Xu, Meihe; Thompson, Paul M; Toga, Arthur W

    2006-06-01

    We propose a novel adaptive approach based on the Reproducing Kernel Particle Method (RKPM) to extract the cortical surfaces of the brain from three-dimensional (3-D) magnetic resonance images (MRIs). To formulate the discrete equations of the deformable model, a flexible particle shape function is employed in the Galerkin approximation of the weak form of the equilibrium equations. The proposed support generation method ensures that the supports of all particles cover the entire computational domain. The deformable model is adaptively adjusted by dilating the shape function and by inserting or merging particles in high-curvature regions or regions stopped by the target boundary. The shape function of the particle with a dilation parameter is adaptively constructed in response to particle insertion or merging. The proposed method offers flexibility in representing highly convolved structures and in refining the deformable models. Self-intersection of the surface, during evolution, is prevented by tracing backward along the gradient descent direction from the crest interface of the distance field, which is computed by fast marching. These operations involve a significant computational cost. The initial model for the deformable surface is simple and requires no prior knowledge of the segmented structure. No specific template is required, e.g., an average cortical surface obtained from many subjects. The extracted cortical surface efficiently localizes the depths of the cerebral sulci, unlike some other active surface approaches that penalize regions of high curvature. Comparisons with manually segmented landmark data are provided to demonstrate the high accuracy of the proposed method. We also compare the proposed method to the finite element method, and to a commonly used cortical surface extraction approach, the CRUISE method. We also show that the independence of the shape functions of the RKPM from the underlying mesh enhances the convergence speed of the deformable

  9. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    DTIC Science & Technology

    2015-06-01

    ARL-TR-7315, June 2015, US Army Research Laboratory. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method, by Richard H Haney and Dale Shires.

  10. Analog forecasting with dynamics-adapted kernels

    NASA Astrophysics Data System (ADS)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
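
    A small sketch of kernel-weighted analog forecasting with Takens delay-coordinate embedding, assuming a scalar historical time series; a plain Gaussian similarity kernel replaces the dynamics-dependent kernels of the paper, and the bandwidth eps is an illustrative assumption.

      import numpy as np

      def delay_embed(series, q):
          """Map a scalar series into q-dimensional delay coordinates (Takens embedding)."""
          series = np.asarray(series, float)
          return np.stack([series[i:i + q] for i in range(len(series) - q + 1)])

      def kernel_analog_forecast(history, current, q=5, lead=10, eps=1.0):
          """Forecast `lead` steps ahead as a kernel-weighted ensemble of historical analogs.

          history : scalar training time series
          current : the most recent q observations (the initial data)
          """
          history = np.asarray(history, float)
          X = delay_embed(history, q)
          X = X[:len(X) - lead]                           # keep analogs whose future is in the record
          targets = history[q - 1 + lead: q - 1 + lead + len(X)]
          w = np.exp(-np.sum((X - np.asarray(current, float)) ** 2, axis=1) / eps)
          w /= w.sum()
          return np.dot(w, targets)                       # weighted ensemble forecast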

  11. Kernel Manifold Alignment for Domain Adaptation.

    PubMed

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensor characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of the same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold's inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors' knowledge, this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational

  12. Kernel Manifold Alignment for Domain Adaptation

    PubMed Central

    Tuia, Devis; Camps-Valls, Gustau

    2016-01-01

    The wealth of sensory data coming from different modalities has opened numerous opportunities for data analysis. The data are of increasing volume, complexity and dimensionality, thus calling for new methodological innovations towards multimodal data processing. However, multimodal architectures must rely on models able to adapt to changes in the data distribution. Differences in the density functions can be due to changes in acquisition conditions (pose, illumination), sensor characteristics (number of channels, resolution) or different views (e.g. street level vs. aerial views of the same building). We call these different acquisition modes domains, and refer to the adaptation problem as domain adaptation. In this paper, instead of adapting the trained models themselves, we alternatively focus on finding mappings of the data sources into a common, semantically meaningful, representation domain. This field of manifold alignment extends traditional techniques in statistics such as canonical correlation analysis (CCA) to deal with nonlinear adaptation and possibly non-corresponding data pairs between the domains. We introduce a kernel method for manifold alignment (KEMA) that can match an arbitrary number of data sources without needing corresponding pairs, just a few labeled examples in all domains. KEMA has interesting properties: 1) it generalizes other manifold alignment methods, 2) it can align manifolds of very different complexities, performing a discriminative alignment preserving each manifold's inner structure, 3) it can define a domain-specific metric to cope with multimodal specificities, 4) it can align data spaces of different dimensionality, 5) it is robust to strong nonlinear feature deformations, and 6) it is closed-form invertible, which allows transfer across domains and data synthesis. To the authors’ knowledge, this is the first method addressing all these important issues at once. We also present a reduced-rank version of KEMA for computational

  13. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
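
    A toy sketch of the (non-adaptive) scatter kernel superposition step: scatter is estimated as the convolution of an amplitude-weighted primary estimate with a scatter kernel and subtracted from the measured projection, iterating to refine the primary. In ASKS/fASKS the kernel shape would additionally adapt to local object thickness; the amplitude value below is purely illustrative.

      import numpy as np
      from scipy.signal import fftconvolve

      def sks_correct(projection, kernel, amplitude=0.3, n_iter=5):
          """Estimate the primary signal by iteratively subtracting convolved scatter.

          projection : measured 2-D projection (primary + scatter)
          kernel     : normalised scatter point-spread kernel
          amplitude  : scatter-to-primary amplitude factor (illustrative)
          """
          primary = projection.astype(float).copy()
          for _ in range(n_iter):
              scatter = amplitude * fftconvolve(primary, kernel, mode='same')
              primary = projection - scatter        # refined primary estimate
          return primary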

  14. Robust visual tracking via adaptive kernelized correlation filter

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Wang, Desheng; Liao, Qingmin

    2016-10-01

    Correlation filter based trackers have proved to be very efficient and robust in object tracking, with performance competitive with state-of-the-art trackers. In this paper, we propose a novel object tracking method named Adaptive Kernelized Correlation Filter (AKCF) that incorporates the Kernelized Correlation Filter (KCF) with the Structured Output Support Vector Machines (SOSVM) learning method in a collaborative and adaptive way, which can effectively handle severe object appearance changes with low computational cost. AKCF works by dynamically adjusting the learning rate of KCF and reversely verifying the intermediate tracking result with an online SOSVM classifier. Meanwhile, we bring Color Names into this formulation to effectively boost performance owing to the rich feature information they encode. Experimental results on several challenging benchmark datasets reveal that our approach outperforms numerous state-of-the-art trackers.

  15. Kernel current source density method.

    PubMed

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.
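
    kCSD can be read as kernel ridge regression from measured potentials to source density. The 1-D sketch below follows that structure with Gaussian source basis functions, but replaces the true Poisson-equation forward model with a wider Gaussian as a crude placeholder; the widths and regularisation constant are illustrative assumptions.

      import numpy as np

      def kcsd_1d(elec_pos, potentials, est_grid, n_basis=64, R=0.2, lam=1e-3):
          """Toy 1-D kernel current source density estimate (kernel ridge regression form)."""
          elec_pos = np.asarray(elec_pos, float)
          est_grid = np.asarray(est_grid, float)
          centres = np.linspace(est_grid.min(), est_grid.max(), n_basis)
          # Gaussian source basis; the 'forward' potentials are modelled by a wider
          # Gaussian purely as a placeholder for the true forward operator.
          basis = lambda x: np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * R ** 2))
          fwd = lambda x: np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * (3 * R) ** 2))
          A = fwd(elec_pos)                         # basis sources -> electrode potentials
          K = A @ A.T                               # potential-potential kernel
          K_t = basis(est_grid) @ A.T               # source-potential cross kernel
          beta = np.linalg.solve(K + lam * np.eye(len(elec_pos)), np.asarray(potentials, float))
          return K_t @ beta                         # CSD estimate on est_grid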

  16. An Adaptive Genetic Association Test Using Double Kernel Machines.

    PubMed

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway as well as test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.

  17. Kernel method for corrections to scaling.

    PubMed

    Harada, Kenji

    2015-07-01

    Scaling analysis, in which one infers scaling exponents and a scaling function in a scaling law from given data, is a powerful tool for determining universal properties of critical phenomena in many fields of science. However, there are corrections to scaling in many cases, and then the inference problem becomes ill-posed by an uncontrollable irrelevant scaling variable. We propose a new kernel method based on Gaussian process regression to fix this problem generally. We test the performance of the new kernel method for some example cases. In all cases, when the precision of the example data increases, inference results of the new kernel method correctly converge. Because there is no limitation in the new kernel method for the scaling function even with corrections to scaling, unlike in the conventional method, the new kernel method can be widely applied to real data in critical phenomena.
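
    The basic machinery, without corrections to scaling, can be sketched as a Gaussian-process scaling collapse: rescale the data with trial exponents and score the collapse by the GP marginal likelihood, then optimise over the exponents. The variable names and the ansatz y * L**(-b) = F(t * L**a) are illustrative assumptions, not the paper's exact formulation.

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      def neg_collapse_score(params, L, t, y):
          """Negative GP log marginal likelihood of the rescaled data.

          Scaling ansatz (no correction terms): y * L**(-b) = F(t * L**a),
          with the scaling function F modelled nonparametrically by a GP.
          """
          a, b = params
          X = (t * L ** a).reshape(-1, 1)
          Y = y * L ** (-b)
          gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
          gp.fit(X, Y)
          return -gp.log_marginal_likelihood_value_

      # Usage sketch: res = minimize(neg_collapse_score, x0=[1.0, 0.1],
      #                              args=(L, t, y), method='Nelder-Mead')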

  18. Online learning control using adaptive critic designs with sparse kernel machines.

    PubMed

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.

  19. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    PubMed

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced-dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick, in contrast to the kernel trick. With this technique, the applicability of kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
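
    The construction is concrete enough to sketch directly (kernel centering omitted): eigendecompose the kernel matrix, keep the positive eigenvalues, and use them to give every training sample explicit coordinates whose dot products reproduce the kernel; new samples are mapped from their kernel evaluations against the training set.

      import numpy as np

      def npt_fit(K, tol=1e-10):
          """Explicit coordinates of training samples from their kernel matrix K.

          Returns Y (columns are mapped training points, with Y.T @ Y ~ K on the
          retained eigen-directions) plus the quantities needed to map new samples.
          """
          lam, V = np.linalg.eigh(K)
          keep = lam > tol                         # effective dimensionality < n_samples
          lam, V = lam[keep], V[:, keep]
          Y = np.sqrt(lam)[:, None] * V.T          # training coordinates
          return Y, V, 1.0 / np.sqrt(lam)

      def npt_transform(k_new, V, inv_sqrt_lam):
          """Map a new sample from its kernel evaluations k_new[i] = k(x_i, x_new)."""
          return inv_sqrt_lam * (V.T @ k_new)

      # Any linear algorithm (PCA, LDA, least squares, ...) run on these coordinates
      # then behaves like its kernelized counterpart.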

  20. A novel kernel extreme learning machine algorithm based on self-adaptive artificial bee colony optimisation strategy

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Ji, Jin-Chao

    2016-04-01

    In this paper, we propose a novel learning algorithm, named SABC-MKELM, based on a kernel extreme learning machine (KELM) method for single-hidden-layer feedforward networks. In SABC-MKELM, a combination of Gaussian kernels is used as the activation function of KELM instead of simple fixed kernel learning, where the related parameters of the kernels and the weights of the kernels can be optimised simultaneously by a novel self-adaptive artificial bee colony (SABC) approach. SABC-MKELM outperforms six other state-of-the-art approaches in general, as it can effectively determine solution updating strategies and suitable parameters to produce a flexible kernel function involved in SABC. Simulations have demonstrated that the proposed algorithm not only self-adaptively determines suitable parameters and solution updating strategies by learning from previous experience, but also achieves better generalisation performance than several related methods, and the results show good stability of the proposed algorithm.

  1. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    PubMed

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2016-12-07

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization to iteratively improve the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.

  2. Kernel methods for phenotyping complex plant architecture.

    PubMed

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits were generally correlated to each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) over an artificial dataset of simulated inflorescences with different types of flower distribution, which is coded as a sequence of flower-number per node along a shoot. The ability to discriminate the different inflorescence types by SVM and KPCA is illustrated. We then apply the KPCA representation to the real dataset of rose inflorescence shoots (n=1460) obtained from a 98 F1 hybrid mapping population. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL, which was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structure, graphs) phenotypic traits.

  3. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  4. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
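
    The kernelized ML-EM update is compact enough to sketch. Assuming a dense system matrix P and a kernel matrix K built from the anatomical prior (both illustrative placeholders for the real sparse operators), the image is parameterised as x = K @ alpha and the standard EM update is applied to alpha:

      import numpy as np

      def kernel_em(y, P, K, n_iter=50, eps=1e-12):
          """Kernelized ML-EM sketch: reconstruct x = K @ alpha from sinogram counts y.

          P : (n_bins, n_voxels) system matrix, K : (n_voxels, n_voxels) kernel matrix
          built from anatomical features (its construction is not shown here).
          """
          alpha = np.ones(K.shape[1])
          sens = K.T @ (P.T @ np.ones(len(y))) + eps        # sensitivity in alpha space
          for _ in range(n_iter):
              proj = P @ (K @ alpha) + eps                  # forward projection
              alpha *= (K.T @ (P.T @ (y / proj))) / sens    # multiplicative EM update
          return K @ alpha                                  # reconstructed image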

  5. Comparison of Kernel Equating and Item Response Theory Equating Methods

    ERIC Educational Resources Information Center

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  6. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    PubMed

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as the standard support vector machine and support vector regression training take O(N^3) time and O(N^2) space in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naive method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked up with kernel density estimation (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m^3), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has performance comparable with the state-of-the-art methods and is effective for a wide range of kernel methods in achieving fast learning on large data sets.
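
    The practical effect described here, trading O(N^3) training for O(m^3) on a reduced set, can be illustrated with a crude stand-in for the KDE-guided reduction: train the kernel machine on a random subsample of size m (scikit-learn's SVC is assumed; the sampling strategy of FastKDE itself is not reproduced).

      import numpy as np
      from sklearn.svm import SVC

      def fast_kernel_fit(X, y, m=2000, seed=0):
          """Train an RBF-kernel SVM on a random subsample of size m (naive data reduction)."""
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
          clf = SVC(kernel='rbf', C=1.0, gamma='scale')     # illustrative hyperparameters
          return clf.fit(X[idx], y[idx])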

  7. Protoribosome by quantum kernel energy method.

    PubMed

    Huang, Lulu; Krupkin, Miri; Bashan, Anat; Yonath, Ada; Massa, Lou

    2013-09-10

    Experimental evidence suggests the existence of an RNA molecular prebiotic entity, called by us the "protoribosome," which may have evolved in the RNA world before evolution of the genetic code and proteins. This vestige of the RNA world, which possesses all of the capabilities required for peptide bond formation, seems to be still functioning in the heart of all contemporary ribosomes. Within the modern ribosome this remnant includes the peptidyl transferase center. Its highly conserved nucleotide sequence is suggestive of its robustness under diverse environmental conditions, and hence of its prebiotic origin. Its twofold pseudosymmetry suggests that this entity could have been a dimer of self-folding RNA units that formed a pocket within which two activated amino acids might be accommodated, similar to the binding mode of modern tRNA molecules that carry amino acids or peptidyl moieties. Using quantum mechanics and crystal coordinates, this work studies the question of whether the putative protoribosome has the properties necessary to function as an evolutionary precursor to the modern ribosome. The quantum model used in the calculations is density functional theory--B3LYP/3-21G*, implemented using the kernel energy method to make the computations practical and efficient. It turns out that the necessary conditions that would characterize a practicable protoribosome--namely (i) energetic structural stability and (ii) energetically stable attachment to substrates--are both well satisfied.

  8. Cold-moderator scattering kernel methods

    SciTech Connect

    MacFarlane, R. E.

    1998-01-01

    An accurate representation of the scattering of neutrons by the materials used to build cold sources at neutron scattering facilities is important for the initial design and optimization of a cold source, and for the analysis of experimental results obtained using the cold source. In practice, this requires a good representation of the physics of scattering from the material, a method to convert this into observable quantities (such as scattering cross sections), and a method to use the results in a neutron transport code (such as the MCNP Monte Carlo code). At Los Alamos, the authors have been developing these capabilities over the last ten years. The final set of cold-moderator evaluations, together with evaluations for conventional moderator materials, was released in 1994. These materials have been processed into MCNP data files using the NJOY Nuclear Data Processing System. Over the course of this work, they were able to develop a new module for NJOY called LEAPR based on the LEAP + ADDELT code from the UK as modified by D.J. Picton for cold-moderator calculations. Much of the physics for methane came from Picton's work. The liquid hydrogen work was originally based on a code using the Young-Koppel approach that went through a number of hands in Europe (including Rolf Neef and Guy Robert). It was generalized and extended for LEAPR, and depends strongly on work by Keinert and Sax of the University of Stuttgart. Thus, their collection of cold-moderator scattering kernels is truly an international effort, and they are glad to be able to return the enhanced evaluations and processing techniques to the international community. In this paper, they give sections on the major cold moderator materials (namely, solid methane, liquid methane, and liquid hydrogen) using each section to introduce the relevant physics for that material and to show typical results.

  9. LoCoH: nonparameteric kernel methods for constructing home ranges and utilization distributions.

    PubMed

    Getz, Wayne M; Fortmann-Roe, Scott; Cross, Paul C; Lyons, Andrew J; Ryan, Sadie J; Wilmers, Christopher C

    2007-02-14

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: "fixed sphere-of-influence," or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an "adaptive sphere-of-influence," or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original "fixed-number-of-points," or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).
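
    A minimal k-LoCoH sketch (requires the shapely package): build the convex hull of each point and its k-1 nearest neighbours, order the hulls by area, and union them until the chosen fraction of points is covered. Parameter values are illustrative; the r- and a-LoCoH variants would only change how each local point set is chosen.

      import numpy as np
      from shapely.geometry import MultiPoint, Point
      from shapely.ops import unary_union

      def k_locoh_isopleth(points, k=10, isopleth=0.95):
          """Union of local convex hulls covering `isopleth` of the points (k-LoCoH)."""
          pts = np.asarray(points, dtype=float)
          hulls = []
          for p in pts:
              d = np.linalg.norm(pts - p, axis=1)
              local = pts[np.argsort(d)[:k]]                   # the point and its k-1 nearest neighbours
              hulls.append(MultiPoint([tuple(q) for q in local]).convex_hull)
          hulls.sort(key=lambda h: h.area)                     # smallest (densest-use) hulls first
          union, target = None, isopleth * len(pts)
          for h in hulls:
              union = h if union is None else unary_union([union, h])
              covered = sum(union.intersects(Point(*p)) for p in pts)
              if covered >= target:
                  break
          return union                                         # home-range polygon at this isopleth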

  10. LoCoH: Nonparameteric Kernel Methods for Constructing Home Ranges and Utilization Distributions

    PubMed Central

    Getz, Wayne M.; Fortmann-Roe, Scott; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu). PMID:17299587

  11. LoCoH: Non-parameteric kernel methods for constructing home ranges and utilization distributions

    USGS Publications Warehouse

    Getz, Wayne M.; Fortmann-Roe, Scott; Cross, Paul C.; Lyons, Andrew J.; Ryan, Sadie J.; Wilmers, Christopher C.

    2007-01-01

    Parametric kernel methods currently dominate the literature regarding the construction of animal home ranges (HRs) and utilization distributions (UDs). These methods frequently fail to capture the kinds of hard boundaries common to many natural systems. Recently a local convex hull (LoCoH) nonparametric kernel method, which generalizes the minimum convex polygon (MCP) method, was shown to be more appropriate than parametric kernel methods for constructing HRs and UDs, because of its ability to identify hard boundaries (e.g., rivers, cliff edges) and convergence to the true distribution as sample size increases. Here we extend the LoCoH in two ways: “fixed sphere-of-influence,” or r-LoCoH (kernels constructed from all points within a fixed radius r of each reference point), and an “adaptive sphere-of-influence,” or a-LoCoH (kernels constructed from all points within a radius a such that the distances of all points within the radius to the reference point sum to a value less than or equal to a), and compare them to the original “fixed-number-of-points,” or k-LoCoH (all kernels constructed from k-1 nearest neighbors of root points). We also compare these nonparametric LoCoH to parametric kernel methods using manufactured data and data collected from GPS collars on African buffalo in the Kruger National Park, South Africa. Our results demonstrate that LoCoH methods are superior to parametric kernel methods in estimating areas used by animals, excluding unused areas (holes) and, generally, in constructing UDs and HRs arising from the movement of animals influenced by hard boundaries and irregular structures (e.g., rocky outcrops). We also demonstrate that a-LoCoH is generally superior to k- and r-LoCoH (with software for all three methods available at http://locoh.cnr.berkeley.edu).

  12. Super-resolution reconstruction algorithm based on adaptive convolution kernel size selection

    NASA Astrophysics Data System (ADS)

    Gao, Hang; Chen, Qian; Sui, Xiubao; Zeng, Junjie; Zhao, Yao

    2016-09-01

    Restricted by detector technology and the optical diffraction limit, the spatial resolution of infrared imaging systems is difficult to improve significantly. Super-resolution (SR) reconstruction is an effective way to address this problem. Among SR approaches, the algorithm based on multichannel blind deconvolution (MBD) estimates the convolution kernel only from the low-resolution observation images, with appropriate regularization constraints introduced by a priori assumptions, to realize high-resolution image restoration. The algorithm has been shown effective when each channel is prime. In this paper, we use significant edges to estimate the convolution kernel and introduce an adaptive convolution kernel size selection mechanism, according to the uncertainty of the convolution kernel size in MBD processing. To reduce the interference of noise, we amend the convolution kernel in an iterative process, and finally restore a clear image. Experimental results show that the algorithm can meet the convergence requirement of the convolution kernel estimation.

  13. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    DTIC Science & Technology

    2016-01-05

    Final report, Aug 2014 to May 2015; approved for public release, distribution unlimited. The research objective of this proposal was to develop a predictive Bayesian...events (and subsequently, their likelihood of occurrence) based on historical evidence of the counts of previous event occurrences. The novel Bayesian...

  14. Intelligent classification methods of grain kernels using computer vision analysis

    NASA Astrophysics Data System (ADS)

    Lee, Choon Young; Yan, Lei; Wang, Tianfeng; Lee, Sang Ryong; Park, Cheol Woo

    2011-06-01

    In this paper, a digital image analysis method was developed to classify seven kinds of individual grain kernels (common rice, glutinous rice, rough rice, brown rice, buckwheat, common barley and glutinous barley) widely planted in Korea. A total of 2800 color images of individual grain kernels were acquired as a data set. Seven color and ten morphological features were extracted and processed by linear discriminant analysis to improve the efficiency of the identification process. The output features from linear discriminant analysis were used as input to the four-layer back-propagation network to classify different grain kernel varieties. The data set was divided into three groups: 70% for training, 20% for validation, and 10% for testing the network. The classification experimental results show that the proposed method is able to classify the grain kernel varieties efficiently.

  15. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  16. Adaptive diffusion kernel learning from biological networks for protein function prediction

    PubMed Central

    Sun, Liang; Ji, Shuiwang; Ye, Jieping

    2008-01-01

    Background: Machine-learning tools have gained considerable attention during the last few years for analyzing biological networks for protein function prediction. Kernel methods are suitable for learning from graph-based data such as biological networks, as they only require the abstraction of the similarities between objects into the kernel matrix. One key issue in kernel methods is the selection of a good kernel function. Diffusion kernels, the discretization of the familiar Gaussian kernel of Euclidean space, are commonly used for graph-based data. Results: In this paper, we address the issue of learning an optimal diffusion kernel, in the form of a convex combination of a set of pre-specified kernels constructed from biological networks, for protein function prediction. Most prior work on this kernel learning task focuses on variants of the loss function based on Support Vector Machines (SVM). Their extensions to other loss functions such as the one based on Kullback-Leibler (KL) divergence, which is more suitable for mining biological networks, lead to expensive optimization problems. By exploiting the special structure of the diffusion kernel, we show that this KL divergence based kernel learning problem can be formulated as a simple optimization problem, which can then be solved efficiently. It is further extended to the multi-task case where we predict multiple functions of a protein simultaneously. We evaluate the efficiency and effectiveness of the proposed algorithms using two benchmark data sets. Conclusion: Results show that the performance of the linearly combined diffusion kernel is better than that of every single candidate diffusion kernel. When the number of tasks is large, the algorithms based on multiple tasks are favored due to their competitive recognition performance and small computational costs. PMID:18366736
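
    The objects involved are easy to write down: a diffusion kernel is the matrix exponential of the negative graph Laplacian, and the learned kernel is a convex combination of several such kernels. The sketch below takes the combination weights as given; the paper's contribution is optimising them against an SVM- or KL-divergence-based criterion, which is not reproduced here.

      import numpy as np
      from scipy.linalg import expm

      def diffusion_kernel(adj, beta):
          """Diffusion kernel exp(-beta * L) of a graph with adjacency matrix `adj`."""
          L = np.diag(adj.sum(axis=1)) - adj          # combinatorial graph Laplacian
          return expm(-beta * L)

      def combined_kernel(adj, betas, weights):
          """Convex combination of pre-specified diffusion kernels (weights assumed given)."""
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()
          return sum(wi * diffusion_kernel(adj, b) for wi, b in zip(w, betas))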

  17. Optimizing spatial filters with kernel methods for BCI applications

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacai; Tang, Jianjun; Yao, Li

    2007-11-01

    A Brain Computer Interface (BCI) is a communication or control system in which the user's messages or commands do not depend on the brain's normal output channels. The key step in BCI technology is to find a reliable method to detect particular brain signals, such as the alpha, beta and mu components in EEG/ECoG trials, and then translate them into usable control signals. In this paper, our objective is to introduce a novel approach that is able to extract discriminative patterns from non-stationary EEG signals based on common spatial patterns (CSP) analysis combined with kernel methods. The basic idea of our kernel CSP method is to perform a nonlinear form of CSP by the use of kernel methods that can efficiently compute the common and distinct components in high-dimensional feature spaces related to the input space by some nonlinear map. The algorithm described here is tested off-line with dataset I from the BCI Competition 2005. Our experiments show that the spatial filters employed with kernel CSP can effectively extract discriminatory information from single-trial ECoG recorded during imagined movements. The high recognition rates and the computational simplicity of the "kernel trick" make it a promising method for BCI systems.

  18. Enrichment of the finite element method with reproducing kernel particle method

    SciTech Connect

    Chen, Y.; Liu, W.K.; Uras, R.A.

    1995-07-01

    Based on the reproducing kernel particle method, an enrichment procedure is introduced to enhance the effectiveness of the finite element method. The basic concepts of the reproducing kernel particle method are briefly reviewed. By adopting the well-known completeness requirements, a generalized form of the reproducing kernel particle method is developed. Through a combination of these two methods, their unique advantages can be utilized. An alternative approach, the multiple field method, is also introduced.

  19. A new orientation-adaptive interpolation method.

    PubMed

    Wang, Qing; Ward, Rabab Kreidieh

    2007-04-01

    We propose an isophote-oriented, orientation-adaptive interpolation method. The proposed method employs an interpolation kernel that adapts to the local orientation of isophotes, and the pixel values are obtained through an oriented, bilinear interpolation. We show that, by doing so, the curvature of the interpolated isophotes is reduced, and, thus, zigzagging artifacts are largely suppressed. Analysis and experiments show that images interpolated using the proposed method are visually pleasing and almost artifact free.

  20. A method of smoothed particle hydrodynamics using spheroidal kernels

    NASA Technical Reports Server (NTRS)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.

  1. Kernel based model parametrization and adaptation with applications to battery management systems

    NASA Astrophysics Data System (ADS)

    Weng, Caihao

    With the widespread use of energy storage systems, battery state of health (SOH) monitoring has become one of the most crucial challenges in power and energy research, as SOH significantly affects the performance and life cycle of batteries as well as the systems they are interacting with. Identifying the SOH and adapting the battery energy/power management system accordingly are thus two important challenges for applications such as electric vehicles, smart buildings and hybrid power systems. This dissertation focuses on the identification of lithium ion battery capacity fading, and proposes an on-board implementable model parametrization and adaptation framework for SOH monitoring. Both parametric and non-parametric approaches that are based on kernel functions are explored for the modeling of battery charging data and aging signature extraction. A unified parametric open circuit voltage model is first developed to improve the accuracy of battery state estimation. Several analytical and numerical methods are then investigated for the non-parametric modeling of battery data, among which the support vector regression (SVR) algorithm is shown to be the most robust and consistent approach with respect to data sizes and ranges. For data collected on LiFePO4 cells, it is shown that the model developed with the SVR approach is able to predict the battery capacity fading with less than 2% error. Moreover, motivated by the initial success of applying kernel based modeling methods for battery SOH monitoring, this dissertation further exploits the parametric SVR representation for real-time battery characterization supported by test data. Through the study of the invariant properties of the support vectors, a kernel based model parametrization and adaptation framework is developed. The high dimensional optimization problem in the learning algorithm could be reformulated as a parameter estimation problem that can be solved by standard estimation algorithms such as the
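
    The non-parametric modelling step can be sketched with scikit-learn's SVR; the feature construction from charging curves and the cell data are placeholders here, and the hyperparameters are illustrative rather than the values used in the dissertation.

      import numpy as np
      from sklearn.svm import SVR

      def fit_capacity_model(charge_features, capacities):
          """Fit an RBF-kernel SVR mapping charging-curve features to measured capacity."""
          model = SVR(kernel='rbf', C=10.0, epsilon=0.01)   # illustrative hyperparameters
          return model.fit(charge_features, capacities)

      # Usage sketch: estimate capacity fade for a new charging curve
      # model = fit_capacity_model(X_train, y_train)
      # capacity_est = model.predict(x_new.reshape(1, -1))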

  2. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies have been devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information in the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension to out-of-sample data points and thus cannot be applied to inductive learning. In this paper, we show how to make nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper-reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over state-of-the-art parametric kernel methods.

  3. Image classification with densely sampled image windows and generalized adaptive multiple kernel learning.

    PubMed

    Yan, Shengye; Xu, Xinxing; Xu, Dong; Lin, Stephen; Li, Xuelong

    2015-03-01

    We present a framework for image classification that extends beyond the window sampling of fixed spatial pyramids and is supported by a new learning algorithm. Based on the observation that fixed spatial pyramids sample a rather limited subset of the possible image windows, we propose a method that accounts for a comprehensive set of windows densely sampled over location, size, and aspect ratio. A concise high-level image feature is derived to effectively deal with this large set of windows, and this higher level of abstraction offers both efficient handling of the dense samples and reduced sensitivity to misalignment. In addition to dense window sampling, we introduce generalized adaptive l(p)-norm multiple kernel learning (GA-MKL) to learn a robust classifier based on multiple base kernels constructed from the new image features and multiple sets of prelearned classifiers from other classes. With GA-MKL, multiple levels of image features are effectively fused, and information is shared among different classifiers. Extensive evaluation on benchmark datasets for object recognition (Caltech256 and Caltech101) and scene recognition (15Scenes) demonstrates that the proposed method outperforms the state-of-the-art under a broad range of settings.

  4. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest cross-validated classification accuracy; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieves the highest classification performance.
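    As a rough sketch of the recommendation step only, the snippet below maps data-set meta-features to a binary vector of applicable kernels with a multi-label classifier; the meta-features, labels, and the random-forest base learner are placeholders rather than the authors' configuration.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.multioutput import MultiOutputClassifier

        KERNELS = ["linear", "poly", "rbf", "sigmoid", "laplace"]

        # Meta-knowledge base (placeholder): one row of data-set characteristics
        # per data set, and one binary label per kernel marking whether that
        # kernel performed acceptably on the data set.
        rng = np.random.default_rng(1)
        meta_X = rng.normal(size=(132, 5))
        meta_Y = (rng.random(size=(132, len(KERNELS))) > 0.5).astype(int)

        recommender = MultiOutputClassifier(RandomForestClassifier(n_estimators=200))
        recommender.fit(meta_X, meta_Y)

        new_dataset = rng.normal(size=(1, 5))             # characteristics of a new data set
        applicable = recommender.predict(new_dataset)[0]
        print("recommended kernels:", [k for k, a in zip(KERNELS, applicable) if a])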

  5. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest cross-validated classification accuracy; they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieves the highest classification performance. PMID:25893896

  6. Hardness methods for testing maize kernels.

    PubMed

    Fox, Glen; Manley, Marena

    2009-07-08

    Maize is a highly important crop in many countries around the world, both through the sale of the crop to domestic processors for the production of maize products and as a staple food for subsistence farms in developing countries. In many countries, there have been long-term research efforts to develop a suitable hardness method that could assist the maize industry in improving processing efficiency as well as possibly providing a quality specification for maize growers, which could attract a premium. This paper focuses specifically on hardness and reviews a number of methodologies used internationally as well as the important biochemical aspects of maize that contribute to hardness. Numerous foods are produced from maize, and hardness has been described as having an impact on food quality. However, the basis and measurement of hardness are very general and would apply to any use of maize from any country. From the published literature, it would appear that one of the simpler methods used to measure hardness is a grinding step followed by a sieving step, using multiple sieve sizes. This allows the range in hardness within a sample as well as the average particle size and/or coarse/fine ratio to be calculated. Any of these parameters could easily be used as reference values for the development of near-infrared (NIR) spectroscopy calibrations. The development of precise NIR calibrations will provide an excellent tool for breeders, handlers, and processors to deliver specific cultivars in the case of growers and bulk loads in the case of handlers, thereby ensuring the most efficient use of maize by domestic and international processors. This paper also considers previous research describing the biochemical aspects of maize that have been related to maize hardness. Both starch and protein affect hardness, with most research focusing on the storage proteins (zeins). Both the content and composition of the zein fractions affect

  7. Kernel Methods on Spike Train Space for Neuroscience: A Tutorial

    NASA Astrophysics Data System (ADS)

    Park, Il Memming; Seth, Sohan; Paiva, Antonio R. C.; Li, Lin; Principe, Jose C.

    2013-07-01

    Over the last decade several positive definite kernels have been proposed to treat spike trains as objects in Hilbert space. However, for the most part, such attempts still remain a mere curiosity for both computational neuroscientists and signal processing experts. This tutorial illustrates why kernel methods can, and have already started to, change the way spike trains are analyzed and processed. The presentation incorporates simple mathematical analogies and convincing practical examples in an attempt to show the yet unexplored potential of positive definite functions to quantify point processes. It also provides a detailed overview of the current state of the art and future challenges with the hope of engaging the readers in active participation.

  8. A kernel machine-based fMRI physiological noise removal method.

    PubMed

    Song, Xiaomu; Chen, Nan-kuei; Gaur, Pooja

    2014-02-01

    Functional magnetic resonance imaging (fMRI) with blood oxygenation level dependent (BOLD) contrast is a powerful tool for noninvasive mapping of brain function under task and resting states. The removal of cardiac- and respiration-induced physiological noise in fMRI data has been a significant challenge as fMRI studies seek to achieve higher spatial resolutions and characterize more subtle neuronal changes. The low temporal sampling rate of most multi-slice fMRI experiments often causes aliasing of physiological noise into the frequency range of the BOLD activation signal. In addition, changes in heartbeat and respiration patterns also generate physiological fluctuations with frequencies similar to those of BOLD activation. Most existing physiological noise-removal methods either place restrictive limitations on image acquisition or utilize filtering or regression based post-processing algorithms, which cannot distinguish the frequency-overlapping BOLD activation and physiological noise. In this work, we address the challenge of physiological noise removal via the kernel machine technique: a nonlinear kernel machine method, kernel principal component analysis, is used with a specifically identified kernel function to differentiate the BOLD signal from physiological noise in the same frequency range. The proposed method was evaluated in human fMRI data acquired from multiple task-related and resting state fMRI experiments. A comparison study was also performed with an existing adaptive filtering method. The results indicate that the proposed method can effectively identify and reduce the physiological noise in fMRI data. The comparison study shows that the proposed method can provide comparable or better noise removal performance than the adaptive filtering approach.
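    A hedged sketch of the general kernel-PCA denoising pattern the abstract refers to: project voxel time courses onto a few leading nonlinear components and reconstruct from them; the synthetic data, the RBF kernel and its parameters are assumptions, not the specifically identified kernel used in the paper.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        # Synthetic fMRI-like data: each row is a voxel time course made of a slow
        # BOLD-like component, a faster "physiological" oscillation, and noise.
        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 300.0, 600)
        bold = np.sin(2 * np.pi * 0.01 * t)
        physio = 0.5 * np.sin(2 * np.pi * 0.3 * t)
        data = bold + physio + 0.1 * rng.normal(size=(50, t.size))

        kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3,
                         fit_inverse_transform=True)       # enables pre-image mapping
        scores = kpca.fit_transform(data)
        reconstructed = kpca.inverse_transform(scores)      # keep leading components only
        print(reconstructed.shape)                           # (50, 600)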

  9. A Kernel Machine-based fMRI Physiological Noise Removal Method

    PubMed Central

    Song, Xiaomu; Chen, Nan-kuei; Gaur, Pooja

    2013-01-01

    Functional magnetic resonance imaging (fMRI) technique with blood oxygenation level dependent (BOLD) contrast is a powerful tool for noninvasive mapping of brain function under task and resting states. The removal of cardiac- and respiration-induced physiological noise in fMRI data has been a significant challenge as fMRI studies seek to achieve higher spatial resolutions and characterize more subtle neuronal changes. The low temporal sampling rate of most multi-slice fMRI experiments often causes aliasing of physiological noise into the frequency range of BOLD activation signal. In addition, changes of heartbeat and respiration patterns also generate physiological fluctuations that have similar frequencies with BOLD activation. Most existing physiological noise-removal methods either place restrictive limitations on image acquisition or utilize filtering or regression based post-processing algorithms, which cannot distinguish the frequency-overlapping BOLD activation and the physiological noise. In this work, we address the challenge of physiological noise removal via the kernel machine technique, where a nonlinear kernel machine technique, kernel principal component analysis, is used with a specifically identified kernel function to differentiate BOLD signal from the physiological noise of the frequency. The proposed method was evaluated in human fMRI data acquired from multiple task-related and resting state fMRI experiments. A comparison study was also performed with an existing adaptive filtering method. The results indicate that the proposed method can effectively identify and reduce the physiological noise in fMRI data. The comparison study shows that the proposed method can provide comparable or better noise removal performance than the adaptive filtering approach. PMID:24321306

  10. NIRS method for precise identification of Fusarium damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Development of scab resistant wheat varieties may be enhanced by non-destructive evaluation of kernels for Fusarium damaged kernels (FDKs) and deoxynivalenol (DON) levels. Fusarium infection generally affects kernel appearance, but insect damage and other fungi can cause similar symptoms. Also, some...

  11. Kernel weights optimization for error diffusion halftoning method

    NASA Astrophysics Data System (ADS)

    Fedoseev, Victor

    2015-02-01

    This paper describes a study to find the best error diffusion kernel for digital halftoning under various restrictions on the number of non-zero kernel coefficients and their set of values. As an objective measure of quality, WSNR (weighted signal-to-noise ratio) was used. The problem of multidimensional optimization was solved numerically using several well-known algorithms: Nelder-Mead, BFGS, and others. The study found a kernel function that provides a quality gain of about 5% in comparison with the best of the commonly used kernels, the one introduced by Floyd and Steinberg. Other kernels obtained allow the computational complexity of the halftoning process to be reduced significantly without loss of quality.
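    A minimal sketch of the optimization loop described above, assuming a simple four-tap error-diffusion kernel and using a low-pass-filtered error energy as a crude stand-in for WSNR; the objective, test image and starting weights are illustrative only.

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        image = gaussian_filter(rng.random((64, 64)), 3)   # smooth test image in [0, 1]

        def error_diffuse(img, w):
            """Error diffusion with a 4-tap kernel (right, lower-left, below, lower-right)."""
            f = img.copy()
            out = np.zeros_like(f)
            h, width = f.shape
            for y in range(h):
                for x in range(width):
                    out[y, x] = 1.0 if f[y, x] >= 0.5 else 0.0
                    e = f[y, x] - out[y, x]
                    if x + 1 < width:                f[y, x + 1] += e * w[0]
                    if y + 1 < h and x > 0:          f[y + 1, x - 1] += e * w[1]
                    if y + 1 < h:                    f[y + 1, x] += e * w[2]
                    if y + 1 < h and x + 1 < width:  f[y + 1, x + 1] += e * w[3]
            return out

        def objective(w):
            w = np.abs(w) / np.abs(w).sum()                # keep the weights normalised
            halftone = error_diffuse(image, w)
            # crude perceptual stand-in: low-pass filtered error energy
            return np.mean(gaussian_filter(halftone - image, 1.5) ** 2)

        res = minimize(objective, x0=[7/16, 3/16, 5/16, 1/16], method="Nelder-Mead")
        print("optimised weights:", np.abs(res.x) / np.abs(res.x).sum())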

  12. Hierarchical Adaptive Regression Kernels for Regression with Functional Predictors

    PubMed Central

    Woodard, Dawn B.; Crainiceanu, Ciprian; Ruppert, David

    2013-01-01

    We propose a new method for regression using a parsimonious and scientifically interpretable representation of functional predictors. Our approach is designed for data that exhibit features such as spikes, dips, and plateaus whose frequency, location, size, and shape vary stochastically across subjects. We propose Bayesian inference of the joint functional and exposure models, and give a method for efficient computation. We contrast our approach with existing state-of-the-art methods for regression with functional predictors, and show that our method is more effective and efficient for data that include features occurring at varying locations. We apply our methodology to a large and complex dataset from the Sleep Heart Health Study, to quantify the association between sleep characteristics and health outcomes. Software and technical appendices are provided in online supplemental materials. PMID:24293988

  13. Sensitivity kernels for viscoelastic loading based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written \delta J = \int_{M_S} K_{\eta}\, \delta\ln\eta \, \mathrm{d}V + \int_{t_0}^{t_1}\!\int_{\partial M} K_{\dot{\sigma}}\, \delta\dot{\sigma}\, \mathrm{d}S\, \mathrm{d}t, where \delta\ln\eta = \delta\eta/\eta denotes relative viscosity variations in solid regions M_S, dV is the volume element, \delta\dot{\sigma} is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface ∂M and for times [t_0, t_1], and dS is the surface element on ∂M. The `viscosity

  14. An improved method for rebinning kernels from cylindrical to Cartesian coordinates.

    PubMed

    Rathee, S; McClean, B A; Field, C

    1993-01-01

    This paper describes the errors in rebinning photon dose point spread functions and pencil beam kernels (PBKs) from cylindrical to Cartesian coordinates. An area overlap method, which assumes that the fractional energy deposited per unit volume remains constant within cylindrical voxels, produces large deviations (up to 20%) in the rebinned Cartesian voxels while conserving the total energy. A modified area overlap method is presented that allows the fractional energy deposited per unit volume within cylindrical voxels to vary according to an interpolating function. This method rebins the kernels accurately in each Cartesian voxel while conserving the total energy. The dose distributions were computed for a partially blocked beam of uniform fluence using the Cartesian coordinate kernel and the kernels rebinned by both methods. The kernel rebinned by the modified area overlap method gave errors of less than 1.7%, while the kernel rebinned by the area overlap method gave errors of up to 4.4%.

  15. Real-time detection of generic objects using objectness estimation and locally adaptive regression kernels matching

    NASA Astrophysics Data System (ADS)

    Zheng, Zhihui; Gao, Lei; Xiao, Liping; Zhou, Bin; Gao, Shibo

    2015-12-01

    Our purpose is to develop a detection algorithm capable of searching for generic interest objects in real time without large training sets and long-time training stages. Instead of the classical sliding window object detection paradigm, we employ an objectness measure to produce a small set of candidate windows efficiently using Binarized Normed Gradients and a Laplacian of Gaussian-like filter. We then extract Locally Adaptive Regression Kernels (LARKs) as descriptors both from a model image and the candidate windows which measure the likeness of a pixel to its surroundings. Using a matrix cosine similarity measure, the algorithm yields a scalar resemblance map, indicating the likelihood of similarity between the model and the candidate windows. By employing nonparametric significance tests and non-maxima suppression, we detect the presence of objects similar to the given model. Experiments show that the proposed detection paradigm can automatically detect the presence, the number, as well as location of similar objects to the given model. The high quality and efficiency of our method make it suitable for real time multi-category object detection applications.

  16. Kernel energy method applied to vesicular stomatitis virus nucleoprotein

    PubMed Central

    Huang, Lulu; Massa, Lou; Karle, Jerome

    2009-01-01

    The kernel energy method (KEM) is applied to the vesicular stomatitis virus (VSV) nucleoprotein (PDB ID code 2QVJ). The calculations employ atomic coordinates from the crystal structure at 2.8-Å resolution, except for the hydrogen atoms, whose positions were modeled by using the computer program HYPERCHEM. The calculated KEM ab initio limited basis Hartree-Fock energy for the full 33,175 atom molecule (including hydrogen atoms) is obtained. In the KEM, a full biological molecule is represented by smaller “kernels” of atoms, greatly simplifying the calculations. Collections of kernels are well suited for parallel computation. VSV consists of five similar chains, and we obtain the energy of each chain. Interchain hydrogen bonds contribute to the interaction energy between the chains. These hydrogen bond energies are calculated in Hartree-Fock (HF) and Møller-Plesset perturbation theory to second order (MP2) approximations by using 6–31G** basis orbitals. The correlation energy, included in MP2, is a significant factor in the interchain hydrogen bond energies. PMID:19188588

  17. Scene sketch generation using mixture of gradient kernels and adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Paheding, Sidike; Essa, Almabrok; Asari, Vijayan

    2016-04-01

    This paper presents a simple but effective algorithm for scene sketch generation from input images. The proposed algorithm combines the edge magnitudes of directional Prewitt differential gradient kernels with Kirsch kernels at each pixel position, and then encodes them into an eight-bit binary code which encompasses local edge and texture information. In this binary encoding step, relative variance is employed to determine the object shape in each local region. Using relative variance makes object sketch extraction fully adaptive to any shape structure. Furthermore, the proposed technique does not require any parameter to adjust the output and is robust to edge density and noise. Two standard databases are used to show the effectiveness of the proposed framework.
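    To make the encoding step concrete, here is a simplified sketch that convolves an image with eight Kirsch compass kernels and sets one bit per direction; the paper additionally mixes in Prewitt responses and uses relative variance for thresholding, whereas this sketch thresholds against the local mean response, so treat it as an approximation.

        import numpy as np
        from scipy.ndimage import convolve

        def kirsch_kernels():
            """Eight Kirsch compass kernels, one per 45-degree direction."""
            k = np.array([[ 5.,  5.,  5.],
                          [-3.,  0., -3.],
                          [-3., -3., -3.]])
            kernels = []
            for _ in range(8):
                kernels.append(k)
                # rotate the outer ring of the mask by 45 degrees
                k = np.array([[k[1, 0], k[0, 0], k[0, 1]],
                              [k[2, 0], k[1, 1], k[0, 2]],
                              [k[2, 1], k[2, 2], k[1, 2]]])
            return kernels

        def eight_bit_edge_code(img):
            img = img.astype(float)
            responses = np.stack([np.abs(convolve(img, k)) for k in kirsch_kernels()])
            mean_resp = responses.mean(axis=0)
            code = np.zeros(img.shape, dtype=int)
            for d in range(8):
                # set bit d where the d-th directional response exceeds the local mean
                code += (responses[d] > mean_resp).astype(int) << d
            return code.astype(np.uint8)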

  18. Integral Transform Methods: A Critical Review of Various Kernels

    NASA Astrophysics Data System (ADS)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.

  19. Osteoarthritis classification using self organizing map based on gabor kernel and contrast-limited adaptive histogram equalization.

    PubMed

    Anifah, Lilik; Purnama, I Ketut Eddy; Hariadi, Mochamad; Purnomo, Mauridhi Hery

    2013-01-01

    Localization is the first step in osteoarthritis (OA) classification. Manual classification, however, is time-consuming, tedious, and expensive. The proposed system is designed as a decision support system for medical doctors to classify the severity of knee OA. A method is proposed here to localize the joint space area and then classify OA into KL-Grade 0, KL-Grade 1, KL-Grade 2, KL-Grade 3 and KL-Grade 4 in four steps: preprocessing, segmentation, feature extraction, and classification. In the proposed system, right and left knee detection was performed by employing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and template matching. The Gabor kernel, row sum graph and moment methods were used to localize the joint space area of the knee. CLAHE is used in the preprocessing step, i.e., to normalize the varied intensities. The segmentation process was conducted using the Gabor kernel, template matching, row sum graph and gray level center of mass method. GLCM features (contrast, correlation, energy, and homogeneity) were employed as training data. Overall, 50 data were evaluated for training and 258 data for testing. Experimental results showed the best performance using the Gabor kernel with parameters α=8, θ=0, Ψ=[0 π/2], γ=0.8, N=4, with 5000 iterations, a momentum value of 0.5 and α0=0.6 for the classification process. The run gave a classification accuracy of 93.8% for KL-Grade 0, 70% for KL-Grade 1, 4% for KL-Grade 2, 10% for KL-Grade 3 and 88.9% for KL-Grade 4.

  20. Osteoarthritis Classification Using Self Organizing Map Based on Gabor Kernel and Contrast-Limited Adaptive Histogram Equalization

    PubMed Central

    Anifah, Lilik; Purnama, I Ketut Eddy; Hariadi, Mochamad; Purnomo, Mauridhi Hery

    2013-01-01

    Localization is the first step in osteoarthritis (OA) classification. Manual classification, however, is time-consuming, tedious, and expensive. The proposed system is designed as a decision support system for medical doctors to classify the severity of knee OA. A method is proposed here to localize the joint space area and then classify OA into KL-Grade 0, KL-Grade 1, KL-Grade 2, KL-Grade 3 and KL-Grade 4 in four steps: preprocessing, segmentation, feature extraction, and classification. In the proposed system, right and left knee detection was performed by employing Contrast-Limited Adaptive Histogram Equalization (CLAHE) and template matching. The Gabor kernel, row sum graph and moment methods were used to localize the joint space area of the knee. CLAHE is used in the preprocessing step, i.e., to normalize the varied intensities. The segmentation process was conducted using the Gabor kernel, template matching, row sum graph and gray level center of mass method. GLCM features (contrast, correlation, energy, and homogeneity) were employed as training data. Overall, 50 data were evaluated for training and 258 data for testing. Experimental results showed the best performance using the Gabor kernel with parameters α=8, θ=0, Ψ=[0 π/2], γ=0.8, N=4, with 5000 iterations, a momentum value of 0.5 and α0=0.6 for the classification process. The run gave a classification accuracy of 93.8% for KL-Grade 0, 70% for KL-Grade 1, 4% for KL-Grade 2, 10% for KL-Grade 3 and 88.9% for KL-Grade 4. PMID:23525188

  1. Adaptive inverse control of neural spatiotemporal spike patterns with a reproducing kernel Hilbert space (RKHS) framework.

    PubMed

    Li, Lin; Park, Il Memming; Brockmeier, Austin; Chen, Badong; Seth, Sohan; Francis, Joseph T; Sanchez, Justin C; Príncipe, José C

    2013-07-01

    The precise control of spiking in a population of neurons via applied electrical stimulation is a challenge due to the sparseness of spiking responses and neural system plasticity. We pose neural stimulation as a system control problem where the system input is a multidimensional time-varying signal representing the stimulation, and the output is a set of spike trains; the goal is to drive the output such that the elicited population spiking activity is as close as possible to some desired activity, where closeness is defined by a cost function. If the neural system can be described by a time-invariant (homogeneous) model, then offline procedures can be used to derive the control procedure; however, for arbitrary neural systems this is not tractable. Furthermore, standard control methodologies are not suited to directly operate on spike trains that represent both the target and elicited system response. In this paper, we propose a multiple-input multiple-output (MIMO) adaptive inverse control scheme that operates on spike trains in a reproducing kernel Hilbert space (RKHS). The control scheme uses an inverse controller to approximate the inverse of the neural circuit. The proposed control system takes advantage of the precise timing of the neural events by using a Schoenberg kernel defined directly in the space of spike trains. The Schoenberg kernel maps the spike train to an RKHS and allows a linear algorithm to control the nonlinear neural system without the danger of converging to local minima. During operation, the adaptation of the controller minimizes a difference, defined in the spike train RKHS, between the system response and the target response, and keeps the inverse controller close to the inverse of the current neural circuit, which enables adaptation to neural perturbations. The results on a realistic synthetic neural circuit show that the inverse controller based on the Schoenberg kernel outperforms the decoding accuracy of other models based on the conventional rate

  2. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050

  3. Kernel method based human model for enhancing interactive evolutionary optimization.

    PubMed

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly.

  4. Probabilistic seismic hazard assessment of Italy using kernel estimation methods

    NASA Astrophysics Data System (ADS)

    Zuccolo, Elisa; Corigliano, Mirko; Lai, Carlo G.

    2013-07-01

    A representation of seismic hazard is proposed for Italy based on the zone-free approach developed by Woo (BSSA 86(2):353-362, 1996a), which relies on a kernel estimation method governed by concepts of fractal geometry and self-organized seismicity and does not require the definition of seismogenic zoning. The purpose is to assess the influence of seismogenic zoning on the results obtained for the probabilistic seismic hazard analysis (PSHA) of Italy using the standard Cornell method. The hazard has been estimated for outcropping rock site conditions in terms of maps and uniform hazard spectra for a selected site, with 10 % probability of exceedance in 50 years. Both spectral acceleration and spectral displacement have been considered as ground motion parameters. Differences in the results of PSHA between the two methods are compared and discussed. The analysis shows that, in areas such as Italy, characterized by a reliable earthquake catalog and in which faults are generally not easily identifiable, a zone-free approach can be considered a valuable tool to address epistemic uncertainty within a logic tree framework.

  5. Discriminating between HuR and TTP binding sites using the k-spectrum kernel method

    PubMed Central

    Goldberg, Debra S.; Dowell, Robin

    2017-01-01

    Background The RNA binding proteins (RBPs) human antigen R (HuR) and Tristetraprolin (TTP) are known to exhibit competitive binding but have opposing effects on the bound messenger RNA (mRNA). How cells discriminate between the two proteins is an interesting problem. Machine learning approaches, such as support vector machines (SVMs), may be useful in the identification of discriminative features. However, these methods have yet to be applied to studies of RNA binding protein motifs. Results Applying the k-spectrum kernel to a support vector machine (SVM), we first verified the published binding sites of both HuR and TTP. Additional feature engineering highlighted the U-rich binding preference of HuR and the AU-rich binding preference of TTP. Domain adaptation along with multi-task learning was used to predict the common binding sites. Conclusion The distinction between HuR and TTP binding appears to rest on subtle sequence-content features. HuR prefers strongly U-rich sequences, whereas TTP prefers AU-rich sequences; with increasing A content, sequences are more likely to be bound only by TTP. Our model is consistent with competitive binding of the two proteins, particularly at intermediate AU-balanced sequences. This suggests that fine changes in the A/U balance within an untranslated region (UTR) can alter the binding and subsequent stability of the message. Both feature engineering and domain adaptation emphasized the extent to which these proteins recognize similar general sequence features. This work suggests that the k-spectrum kernel method could be useful when studying RNA binding proteins and that domain adaptation techniques such as feature augmentation could be employed, particularly when examining RBPs with similar binding preferences. PMID:28333956
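    For readers unfamiliar with it, a minimal sketch of the k-spectrum kernel itself follows: each sequence is represented by its k-mer count vector and the kernel is the inner product of those vectors, here plugged into scikit-learn's SVC via a precomputed Gram matrix; the toy sequences and labels are invented for illustration.

        import itertools
        import numpy as np
        from sklearn.svm import SVC

        def kmer_counts(seq, k=3, alphabet="ACGU"):
            """Count vector over all possible k-mers of the given alphabet."""
            kmers = ["".join(p) for p in itertools.product(alphabet, repeat=k)]
            index = {km: i for i, km in enumerate(kmers)}
            v = np.zeros(len(kmers))
            for i in range(len(seq) - k + 1):
                v[index[seq[i:i + k]]] += 1
            return v

        def spectrum_kernel(seqs_a, seqs_b, k=3):
            A = np.array([kmer_counts(s, k) for s in seqs_a])
            B = np.array([kmer_counts(s, k) for s in seqs_b])
            return A @ B.T                                  # Gram matrix of count vectors

        # Toy U-rich ("HuR-like") vs AU-rich ("TTP-like") training sequences.
        train = ["UUUUUUAUUU", "UUUAUUUUUU", "AUUUAUUUAU", "AUUUAAUUUA"]
        labels = [0, 0, 1, 1]
        clf = SVC(kernel="precomputed").fit(spectrum_kernel(train, train), labels)
        print(clf.predict(spectrum_kernel(["UUAUUUAUUU"], train)))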

  6. A Comparison of the Kernel Equating Method with Traditional Equating Methods Using SAT[R] Data

    ERIC Educational Resources Information Center

    Liu, Jinghua; Low, Albert C.

    2008-01-01

    This study applied kernel equating (KE) in two scenarios: equating to a very similar population and equating to a very different population, referred to as a distant population, using SAT[R] data. The KE results were compared to the results obtained from analogous traditional equating methods in both scenarios. The results indicate that KE results…

  7. Adaptive Algebraic Multigrid Methods

    SciTech Connect

    Brezina, M; Falgout, R; MacLachlan, S; Manteuffel, T; McCormick, S; Ruge, J

    2004-04-09

    Our ability to simulate physical processes numerically is constrained by our ability to solve the resulting linear systems, prompting substantial research into the development of multiscale iterative methods capable of solving these linear systems with an optimal amount of effort. Overcoming the limitations of geometric multigrid methods to simple geometries and differential equations, algebraic multigrid methods construct the multigrid hierarchy based only on the given matrix. While this allows for efficient black-box solution of the linear systems associated with discretizations of many elliptic differential equations, it also results in a lack of robustness due to assumptions made on the near-null spaces of these matrices. This paper introduces an extension to algebraic multigrid methods that removes the need to make such assumptions by utilizing an adaptive process. The principles which guide the adaptivity are highlighted, as well as their application to algebraic multigrid solution of certain symmetric positive-definite linear systems.

  8. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels

    PubMed Central

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378

  9. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    PubMed

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  10. Autonomic function assessment in Parkinson's disease patients using the kernel method and entrainment techniques.

    PubMed

    Kamal, Ahmed K

    2007-01-01

    The experimental procedure of lowering and raising a leg while the subject is in the supine position is used to stimulate and entrain the autonomic nervous system of fifteen untreated patients with Parkinson's disease and fifteen age- and sex-matched control subjects. The assessment of autonomic function for each group is achieved using an algorithm based on Volterra kernel estimation. By applying this algorithm and considering the process of lowering and raising a leg as the stimulus input and the heart rate variability (HRV) signal as the output for system identification, a mathematical model is expressed as integral equations. The integral equations are fixed for control subjects and Parkinson's disease patients so that the identification method reduces to the determination of the values within the integrals, called kernels, resulting in integral equations whose input-output behavior is nearly identical to that of the system in both healthy subjects and Parkinson's disease patients. The model for each group contains a linear part (first-order kernel) and a quadratic part (second-order kernel). A difference equation model was employed to represent the system for both control subjects and patients with Parkinson's disease. The results show significant differences in the first-order kernel (impulse response) and second-order kernel (mesh diagram) for each group. Using the first- and second-order kernels, it is possible to assess autonomic function qualitatively and quantitatively in both groups.

  11. Aflatoxin detection in whole corn kernels using hyperspectral methods

    NASA Astrophysics Data System (ADS)

    Casasent, David; Chen, Xue-Wen

    2004-03-01

    Hyperspectral (HS) data for the inspection of whole corn kernels for aflatoxin is considered. The high-dimensionality of HS data requires feature extraction or selection for good classifier generalization. For fast and inexpensive data collection, only several features (λ responses) can be used. These are obtained by feature selection from the full HS response. A new high dimensionality branch and bound (HDBB) feature selection algorithm is used; it is found to be optimum, fast and very efficient. Initial results indicate that HS data is very promising for aflatoxin detection in whole kernel corn.

  12. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    DTIC Science & Technology

    2006-01-01

    successfully used by the machine learning community for pattern recognition and image denoising [14]. A Gaussian kernel was used by Cremers et al. [8] for ... matrix M, where \phi_i \in R^{Nd}. Using Singular Value Decomposition (SVD), the covariance matrix \frac{1}{n} M M^T is decomposed as U \Sigma U^T = \frac{1}{n} M M^T (1), where U is a

  13. The collapsed cone algorithm for 192Ir dosimetry using phantom-size adaptive multiple-scatter point kernels

    NASA Astrophysics Data System (ADS)

    Carlsson Tedgren, Åsa; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-01

    /phantom for which low doses at phantom edges can be overestimated by 2-5 %. It would be possible to improve the situation by using a point kernel for multiple-scatter dose adapted to the patient/phantom dimensions at hand.

  14. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    PubMed

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    /phantom for which low doses at phantom edges can be overestimated by 2-5 %. It would be possible to improve the situation by using a point kernel for multiple-scatter dose adapted to the patient/phantom dimensions at hand.

  15. Accelerated Adaptive Integration Method

    PubMed Central

    2015-01-01

    Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083

  16. Single-trial EEG-based emotion recognition using kernel Eigen-emotion pattern and adaptive support vector machine.

    PubMed

    Liu, Yi-Hung; Wu, Chien-Te; Kao, Yung-Hwa; Chen, Ya-Ting

    2013-01-01

    Single-trial electroencephalography (EEG)-based emotion recognition enables us to perform fast and direct assessments of human emotional states. However, previous works suggest that a great improvement in the classification accuracy of valence and arousal levels is still needed. To address this, we propose a novel emotional EEG feature extraction method: the kernel Eigen-emotion pattern (KEEP). An adaptive SVM is also proposed to deal with the problem of learning from imbalanced emotional EEG data sets. In this study, a set of pictures from the IAPS is used for emotion induction. Results based on seven participants show that KEEP gives much better classification results than the widely used EEG frequency band power features. Also, the adaptive SVM greatly improves the classification performance of the commonly adopted SVM classifier. Combined use of KEEP and the adaptive SVM can achieve high average valence and arousal classification rates of 73.42% and 73.57%. The highest classification rates for valence and arousal are 80% and 79%, respectively. The results are very promising.

  17. Kernel density estimator methods for Monte Carlo radiation transport

    NASA Astrophysics Data System (ADS)

    Banerjee, Kaushik

    In this dissertation, the Kernel Density Estimator (KDE), a nonparametric probability density estimator, is studied and used to represent global Monte Carlo (MC) tallies. KDE is also employed to remove the singularities from two important Monte Carlo tallies, namely the point detector and surface crossing flux tallies. Finally, KDE is also applied to accelerate the Monte Carlo fission source iteration for criticality problems. In conventional MC calculations, histograms, which divide the phase space into multiple bins, are used to represent global tallies. Partitioning the phase space into bins can add significant overhead to the MC simulation, and the histogram provides only a first-order approximation to the underlying distribution. The KDE method is attractive because it can estimate MC tallies at any location within the required domain without any particular bin structure. Post-processing of the KDE tallies is sufficient to extract detailed, higher order tally information for an arbitrary grid. The quantitative and numerical convergence properties of KDE tallies are also investigated and shown to be superior to conventional histograms as well as to the functional expansion tally developed by Griesheimer. Monte Carlo point detector and surface crossing flux tallies are two widely used tallies, but they suffer from an unbounded variance. As a result, the central limit theorem cannot be used for these tallies to estimate confidence intervals. By construction, KDE tallies can be directly used to estimate flux at a point, but the variance of this point estimate does not converge as 1/N, which is not unexpected for a point quantity. However, an improved approach is to modify both point detector and surface crossing flux tallies directly by using KDE within a variance reduction approach, taking advantage of the fact that KDE estimates the underlying probability density function. This methodology is demonstrated by several numerical examples and demonstrates that
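    A small sketch contrasting a histogram tally with a KDE tally, using SciPy's Gaussian KDE as a stand-in for the dissertation's estimator; the 1D "collision site" data are synthetic placeholders.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Synthetic 1D "collision sites" from a Monte Carlo run (placeholder data).
        rng = np.random.default_rng(4)
        sites = np.concatenate([rng.normal(2.0, 0.5, 5000),
                                rng.normal(6.0, 1.0, 3000)])

        # Histogram tally: resolution is fixed by the chosen bin structure.
        hist, edges = np.histogram(sites, bins=20, density=True)

        # KDE tally: can be evaluated at any point, no bin structure required.
        kde = gaussian_kde(sites)
        x = np.linspace(0.0, 10.0, 501)
        density = kde(x)
        print(hist.max(), density.max())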

  18. Development of a single kernel analysis method for detection of 2-acetyl-1-pyrroline in aromatic rice germplasm

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Solid-phase microextraction (SPME) in conjunction with GC/MS was used to distinguish non-aromatic rice (Oryza sativa, L.) kernels from aromatic rice kernels. In this method, single kernels along with 10 µl of 0.1 ng 2,4,6-trimethylpyridine (TMP) were placed in sealed vials and heated to 80 °C for 18...

  19. Kernel phase and kernel amplitude in Fizeau imaging

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin J. S.

    2016-12-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.

  20. Advances in Adaptive Control Methods

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2009-01-01

    This poster presentation describes recent advances in adaptive control technology developed by NASA. Optimal Control Modification is a novel adaptive law that can improve performance and robustness of adaptive control systems. A new technique has been developed to provide an analytical method for computing time delay stability margin for adaptive control systems.

  1. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1].
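    A brief sketch of the requirement stated above, under the assumption of a simple linear (genetic-relationship-style) kernel: build the similarity matrix from a centered genotype matrix and check numerically that it is positive semidefinite; the data and scaling are illustrative.

        import numpy as np

        rng = np.random.default_rng(5)
        genotypes = rng.integers(0, 3, size=(100, 500)).astype(float)  # 0/1/2 allele counts

        # Centre each marker and form a linear similarity kernel between subjects.
        Z = genotypes - genotypes.mean(axis=0)
        K = Z @ Z.T / Z.shape[1]

        # A valid kernel must produce a positive semidefinite matrix over all pairs.
        eigvals = np.linalg.eigvalsh(K)
        print("smallest eigenvalue:", eigvals.min())       # >= 0 up to round-off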

  2. Active impulsive noise control using maximum correntropy with adaptive kernel size

    NASA Astrophysics Data System (ADS)

    Lu, Lu; Zhao, Haiquan

    2017-03-01

    Active noise control (ANC) based on the principle of superposition is an attractive method for attenuating noise signals. However, impulsive noise in ANC systems degrades the performance of the controller. In this paper, a filtered-x recursive maximum correntropy (FxRMC) algorithm is proposed based on the maximum correntropy criterion (MCC) to reduce the effect of outliers. The proposed FxRMC algorithm does not require any a priori information about the noise characteristics and outperforms the filtered-x least mean square (FxLMS) algorithm for impulsive noise. Meanwhile, in order to adjust the kernel size of the FxRMC algorithm online, a recursive approach is proposed that takes into account past estimates of the error signal over a sliding window. Simulation and experimental results in the context of active impulsive noise control demonstrate that the proposed algorithms achieve much better performance than existing algorithms in various noise environments.
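    To illustrate the criterion rather than the exact filtered-x recursive algorithm, here is a hedged sketch of an LMS-style weight update under maximum correntropy, with the kernel size re-estimated from a sliding window of recent errors; the update rule, window heuristic, and parameter values are assumptions.

        import numpy as np

        def mcc_adaptive_filter(x, d, taps=8, mu=0.05, window=50):
            """Stochastic-gradient adaptive FIR filter under the MCC (sketch).

            The Gaussian factor exp(-e^2 / (2 sigma^2)) down-weights impulsive
            (outlier) errors so they barely move the filter coefficients.
            """
            w = np.zeros(taps)
            recent = []
            for n in range(taps, len(x)):
                u = x[n - taps:n][::-1]                 # most recent inputs first
                e = d[n] - w @ u
                recent.append(e)
                if len(recent) > window:
                    recent.pop(0)
                sigma = max(np.std(recent), 1e-3)       # adaptive kernel size (heuristic)
                g = np.exp(-e * e / (2.0 * sigma * sigma))
                w += mu * g * e * u                     # correntropy-weighted step
            return w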

  3. Segmentation of Brain Tissues from Magnetic Resonance Images Using Adaptively Regularized Kernel-Based Fuzzy C-Means Clustering.

    PubMed

    Elazab, Ahmed; Wang, Changmiao; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Hu, Qingmao

    2015-01-01

    An adaptively regularized kernel-based fuzzy C-means clustering framework is proposed for segmentation of brain magnetic resonance images. The framework can take the form of three algorithms in which the local average grayscale is replaced by the grayscale of the average filter, median filter, and devised weighted images, respectively. The algorithms employ the heterogeneity of grayscales in the neighborhood, exploit this measure for local contextual information, and replace the standard Euclidean distance with Gaussian radial basis kernel functions. The main advantages are adaptiveness to local context, enhanced robustness to preserve image details, independence of clustering parameters, and decreased computational costs. The algorithms have been validated against both synthetic and clinical magnetic resonance images with different types and levels of noise and compared with 6 recent soft clustering algorithms. Experimental results show that the proposed algorithms are superior in preserving image details and segmentation accuracy while maintaining a low computational complexity.

  4. Segmentation of Brain Tissues from Magnetic Resonance Images Using Adaptively Regularized Kernel-Based Fuzzy C-Means Clustering

    PubMed Central

    Wang, Changmiao; Jia, Fucang; Wu, Jianhuang; Li, Guanglin

    2015-01-01

    An adaptively regularized kernel-based fuzzy C-means clustering framework is proposed for segmentation of brain magnetic resonance images. The framework can take the form of three algorithms in which the local average grayscale is replaced by the grayscale of the average filter, median filter, and devised weighted images, respectively. The algorithms employ the heterogeneity of grayscales in the neighborhood, exploit this measure for local contextual information, and replace the standard Euclidean distance with Gaussian radial basis kernel functions. The main advantages are adaptiveness to local context, enhanced robustness to preserve image details, independence of clustering parameters, and decreased computational costs. The algorithms have been validated against both synthetic and clinical magnetic resonance images with different types and levels of noise and compared with 6 recent soft clustering algorithms. Experimental results show that the proposed algorithms are superior in preserving image details and segmentation accuracy while maintaining a low computational complexity. PMID:26793269

  5. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows the interpolation quality to be increased by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids, while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
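
    The family of sinc kernels mentioned here can be written, up to a dimension-dependent normalization constant, as S_n(q) = [sin(pi q/2)/(pi q/2)]^n for q = r/h in [0, 2]. A small unnormalized sketch follows; the compact support of 2h and the omission of the normalization constant are simplifications made for brevity.

        import numpy as np

        def sinc_kernel(q, n):
            # q : particle separation over smoothing length (r / h)
            # n : sinc exponent, the single parameter varied to adapt the interpolation quality
            q = np.asarray(q, dtype=float)
            w = np.zeros_like(q)
            inside = (q >= 0.0) & (q <= 2.0)
            # numpy's sinc is the normalized sinc, np.sinc(x) = sin(pi x)/(pi x)
            w[inside] = np.sinc(q[inside] / 2.0) ** n
            return w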

  6. A Fast Multiple-Kernel Method with Applications to Detect Gene-Environment Interaction

    PubMed Central

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M.; Worrall, Bradford B.; Williams, Stephen R.; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-01-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. While most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multi-kernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multi-factor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the EM algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous “fastKM” algorithm for multi-kernel analysis that is based on a low-rank approximation to the nuisance-effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. PMID:26139508
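
    The abstract does not specify how the low-rank approximation is built, so the sketch below simply truncates an eigendecomposition of a nuisance-effect kernel matrix; the target rank and the use of numpy's eigh are assumptions.

        import numpy as np

        def low_rank_factor(K, rank):
            # K: n x n positive semi-definite kernel matrix; returns F with K ~ F @ F.T
            vals, vecs = np.linalg.eigh(K)                 # ascending eigenvalues
            idx = np.argsort(vals)[::-1][:rank]            # keep the `rank` largest
            lam = np.clip(vals[idx], 0.0, None)
            return vecs[:, idx] * np.sqrt(lam)             # n x rank factor

    Downstream variance-component or score-test computations can then work with the n-by-rank factor rather than the full n-by-n matrix, which is where the computational savings come from.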

  7. A spectral-spatial kernel-based method for hyperspectral imagery classification

    NASA Astrophysics Data System (ADS)

    Li, Li; Ge, Hongwei; Gao, Jianqiang

    2017-02-01

    Spectral-based classification methods have gained increasing attention in hyperspectral imagery classification. Nevertheless, spectral information alone cannot fully represent the inherent spatial distribution of the imagery. In this paper, a spectral-spatial kernel-based method for hyperspectral imagery classification is proposed. Firstly, the spatial feature is extracted using area median filtering (AMF). Secondly, the result of the AMF is used to construct spatial feature patches for different window sizes. Finally, using the kernel technique, the spectral and spatial features are jointly used for classification through a support vector machine (SVM) formulation. The proposed method is therefore called the spectral-spatial kernel-based support vector machine (SSF-SVM). To evaluate the proposed method, experiments are performed on three hyperspectral images. The experimental results show that the proposed technique improves performance in most of the real-world classification problems considered.
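
    A rough sketch of the spectral-spatial feature construction is shown below. It uses a per-band median filter as a stand-in for the area median filtering step and a stacked-feature RBF SVM rather than a composite-kernel formulation; the window size and SVM hyper-parameters are assumptions.

        import numpy as np
        from scipy.ndimage import median_filter
        from sklearn.svm import SVC

        def spectral_spatial_features(cube, window=5):
            # cube: (rows, cols, bands) hyperspectral image
            spatial = np.stack([median_filter(cube[:, :, b], size=window)
                                for b in range(cube.shape[2])], axis=-1)
            feats = np.concatenate([cube, spatial], axis=-1)   # spectrum + filtered spectrum
            return feats.reshape(-1, feats.shape[-1])

        # feats = spectral_spatial_features(cube)
        # mask = labels.ravel() > 0                            # 0 = unlabeled background
        # clf = SVC(kernel='rbf', C=100, gamma='scale').fit(feats[mask], labels.ravel()[mask])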

  8. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫V Km δlnm d³x + ∫Σ Kd δlnd d²x + ∫ΣFS K∇d · ∇Σδlnd d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd. We demonstrate also how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  9. 3D MR image denoising using rough set and kernel PCA method.

    PubMed

    Phophalia, Ashish; Mitra, Suman K

    2017-02-01

    In this paper, we present a two-stage method, using kernel principal component analysis (KPCA) and rough set theory (RST), for denoising volumetric MRI data. An RST-based clustering technique is used for voxel-based processing. The method groups similar voxels (3D cubes) using class and edge information derived from the noisy input. Each cluster thus formed is represented by a basis vector. These vectors are then projected into kernel space, and PCA is performed in the feature space. The work is motivated by the idea that, under Rician noise, MRI data may be non-linear, and that kernel mapping helps define a linear separation between these clusters/basis vectors used for image denoising. We further investigate various kernels for Rician noise at different noise levels. The best kernel is then selected on the basis of performance in terms of PSNR and structural similarity (SSIM) measures. The method is compared with state-of-the-art methods under various measures on synthetic and real databases.
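
    For the kernel-PCA stage alone, a minimal sketch using scikit-learn's pre-image reconstruction is given below; the RST clustering, the choice of kernel per noise level, and the parameter values are not reproduced and should be read as assumptions.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        def kpca_denoise(patches, n_components=8, gamma=1e-3):
            # patches: (n_patches, n_voxels) array of similar (clustered) 3-D cubes
            kpca = KernelPCA(n_components=n_components, kernel='rbf', gamma=gamma,
                             fit_inverse_transform=True, alpha=1e-3)
            coded = kpca.fit_transform(patches)        # project onto leading components
            return kpca.inverse_transform(coded)       # approximate pre-images = denoised patches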

  10. Early discriminant method of infected kernel based on the erosion effects of laser ultrasonics

    NASA Astrophysics Data System (ADS)

    Fan, Chao

    2015-07-01

    To discriminate infected wheat kernels as early as possible, a new detection method for hidden insects, especially in their egg and larval stages, is put forward in this paper based on the erosion effect of laser ultrasonics. The surface of the grain is exposed to a pulsed laser, whose energy is absorbed and excites an ultrasonic wave, and the infected kernel can be recognized by appropriate signal analysis. Firstly, the detection principle is derived from the classical wave equation and the experimental platform is established. Then, the detected ultrasonic signal is processed both in the time domain and the frequency domain using the FFT and DCT, and six significant features are selected as the characteristic parameters of the signal by stepwise discriminant analysis. Finally, a BP neural network is designed using these six parameters as input to separate the infected kernels from normal ones. Numerous experiments were performed using twenty wheat varieties; the results show that infected kernels can be recognized effectively, with a false negative error of 12% and a false positive error of 9%, demonstrating that the discriminant method for infected kernels based on the erosion effect of laser ultrasonics is feasible.

  11. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    PubMed

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared to the best kernel for a particular scenario but has much greater power than poor kernel choices.

  12. A texture-based rolling bearing fault diagnosis scheme using adaptive optimal kernel time frequency representation and uniform local binary patterns

    NASA Astrophysics Data System (ADS)

    Chen, Haizhou; Wang, Jiaxu; Li, Junyang; Tang, Baoping

    2017-03-01

    This paper presents a new scheme for rolling bearing fault diagnosis using texture features extracted from time-frequency representations (TFRs) of the signal. To derive the proposed texture features, firstly the adaptive optimal kernel time frequency representation (AOK-TFR) is applied to extract TFRs of the signal, which essentially describe the energy distribution characteristics of the signal over the time and frequency domains. Since the AOK-TFR uses a signal-dependent radially Gaussian kernel that adapts over time, it can track minor variations in the signal and provide an excellent time-frequency concentration in noisy environments. Simulation experiments are also performed to compare it with common time-frequency analysis methods under different noise conditions. Secondly, the uniform local binary pattern (uLBP), which is a computationally simple and noise-resistant texture analysis method, is used to calculate histograms from the TFRs to characterize rolling bearing fault information. Finally, the obtained histogram feature vectors are input into a multi-SVM classifier for pattern recognition. We validate the effectiveness of the proposed scheme through several experiments, and comparative results demonstrate that the new fault diagnosis technique performs better than most state-of-the-art techniques; we also find that the proposed algorithm possesses adaptivity and noise resistance qualities that could be very useful in real industrial applications.
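
    The texture-feature step can be sketched as follows, assuming a precomputed time-frequency matrix; the AOK-TFR computation itself is not shown, and the neighbourhood parameters P and R are illustrative.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def ulbp_histogram(tfr, P=8, R=1):
            # tfr: 2-D time-frequency representation of one vibration segment
            img = np.uint8(255 * (tfr - tfr.min()) / (np.ptp(tfr) + 1e-12))
            codes = local_binary_pattern(img, P, R, method='uniform')
            hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
            return hist        # length P + 2 feature vector passed to the multi-SVM classifier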

  13. Kernel methods for HyMap imagery knowledge discovery

    NASA Astrophysics Data System (ADS)

    Camps-Valls, Gustavo; Gomez-Chova, Luis; Calpe-Maravilla, Javier; Soria-Olivas, Emilio; Martin-Guerrero, Jose D.; Moreno, Jose

    2004-02-01

    In this paper, we propose a kernel-based approach for hyperspectral knowledge discovery, which is defined as a process that involves three steps: pre-processing, modeling and analysis of the classifier. Firstly, we select the most representative bands analyzing the surrogate and main splits of a Classification And Regression Trees (CART) approach. This yields three datasets with different reduced input dimensionality (6, 3 and 2 bands, respectively) along with the original one (128 bands). Secondly, we develop several crop cover classifiers for each of them. We use Support Vector Machines (SVM) and analyze its performance in terms of efficiency and robustness, as compared to multilayer perceptrons (MLP) and radial basis functions (RBF) neural networks. Suitability to real-time working conditions, whenever a preprocessing stage is not possible, is evaluated by considering models with and without the CART-based feature selection stage. Finally, we analyze the support vectors distribution in the input space and through Principal Component Analysis (PCA) in order to gain knowledge about the problem. Several conclusions are drawn: (1) SVM yield better outcomes than neural networks; (2) training neural models is unfeasible when working with high dimensional spaces; (3) SVM perform similarly in the four classification scenarios, which indicates that noisy bands are successfully detected and (4) relevant bands for the classification are identified.

  14. A Fourier-series-based kernel-independent fast multipole method

    SciTech Connect

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-07-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is diagonal and does not dominate the arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the performance of FKI-FMM in accuracy and efficiency.

  15. Parallel multilevel adaptive methods

    NASA Technical Reports Server (NTRS)

    Dowell, B.; Govett, M.; Mccormick, S.; Quinlan, D.

    1989-01-01

    The progress of a project for the design and analysis of a multilevel adaptive algorithm (AFAC/HM/) targeted for the Navier Stokes Computer is discussed. The results of initial timing tests of AFAC, coupled with multigrid and an efficient load balancer, on a 16-node Intel iPSC/2 hypercube are included. The results of timing tests are presented.

  16. A simple and fast method for computing the relativistic Compton Scattering Kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Kershaw, David S.; Prasad, Manoj K.; Beason, J. Douglas

    1986-01-01

    The Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution is analytically reduced to a single integral, which can then be rapidly evaluated in a variety of ways. A particularly fast method for numerically computing this single integral is presented. This is, to the authors' knowledge, the first correct computation of the Compton scattering kernel.

  17. Standard Errors of the Kernel Equating Methods under the Common-Item Design.

    ERIC Educational Resources Information Center

    Liou, Michelle; And Others

    This research derives simplified formulas for computing the standard error of the frequency estimation method for equating score distributions that are continuized using a uniform or Gaussian kernel function (P. W. Holland, B. F. King, and D. T. Thayer, 1989; Holland and Thayer, 1987). The simplified formulas are applicable to equating both the…

  18. A 3-dimensional Bergman kernel method with applications to rectangular domains

    NASA Astrophysics Data System (ADS)

    Bock, S.; Falcao, M. I.; Gurlebeck, K.; Malonek, H.

    2006-05-01

    In this paper we revisit the so-called Bergman kernel method--BKM--for solving conformal mapping problems and propose a generalized BKM-approach to extend the theory to three-dimensional mapping problems. A special software package for quaternions was developed for the numerical experiments.

  19. Modelling of the control of heart rate by breathing using a kernel method.

    PubMed

    Ahmed, A K; Fakhouri, S Y; Harness, J B; Mearns, A J

    1986-03-07

    The process from breathing (input) to heart rate (output) in man is considered for system identification through its input-output relationship, using a mathematical model expressed as integral equations. The form of the integral equation is fixed so that identification reduces to determining the functions inside the integrals, called kernels, resulting in an integral equation whose input-output behaviour is nearly identical to that of the system. This paper uses a Volterra-series kernel identification algorithm which greatly reduces the computational burden and removes the restriction of using white Gaussian input as the test signal. A second-order model is the most appropriate for a good estimate of the system dynamics. The model contains a linear part (first-order kernel) and a quadratic part (second-order kernel) in parallel, and so allows the linear and non-linear elements of the process to be separated. The response of the linear term exhibits the oscillatory and underdamped nature of the system. Applying breathing as input to the system produces an oscillatory term, which may be attributed to the sinus node of the heart being sensitive to the modulating signal, the breathing wave. The negative on-diagonal terms appear to cause the dynamic asymmetry of the total response of the system, which opposes the oscillatory nature of the first kernel and is related to the restraining force present in the respiratory heart rate system. The presence of positive off-diagonal terms in the second-order kernel of respiratory control of heart rate indicates an escape-like phenomenon in the system.
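
    The second-order model described above can be written as y(n) = sum_i h1(i) x(n-i) + sum_i sum_j h2(i, j) x(n-i) x(n-j), with x the breathing signal and y the heart-rate variation. The sketch below only evaluates such a model for given kernels; the identification algorithm itself is not reproduced, and the kernel memory length is an assumption.

        import numpy as np

        def volterra2_response(x, h1, h2):
            # x  : input (breathing) samples
            # h1 : first-order kernel, length M
            # h2 : second-order kernel, M x M (on-diagonal and off-diagonal terms)
            N, M = len(x), len(h1)
            y = np.zeros(N)
            for n in range(N):
                past = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])
                y[n] = h1 @ past + past @ h2 @ past
            return y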

  20. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    PubMed

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, the bootstrap method is introduced and a numerical discrimination for the transition type is proposed.

  1. The Continuized Log-Linear Method: An Alternative to the Kernel Method of Continuization in Test Equating

    ERIC Educational Resources Information Center

    Wang, Tianyou

    2008-01-01

    Von Davier, Holland, and Thayer (2004) laid out a five-step framework of test equating that can be applied to various data collection designs and equating methods. In the continuization step, they presented an adjusted Gaussian kernel method that preserves the first two moments. This article proposes an alternative continuization method that…

  2. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    PubMed

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level.

  3. The complex variable reproducing kernel particle method for elasto-plasticity problems

    NASA Astrophysics Data System (ADS)

    Chen, Li; Cheng, Yumin

    2010-05-01

    On the basis of the reproducing kernel particle method (RKPM) and using complex variable theory, the complex variable reproducing kernel particle method (CVRKPM) is discussed in this paper. The advantage of the CVRKPM is that the correction function of a two-dimensional problem is formed with a one-dimensional basis function when the shape function is constructed. The CVRKPM is then applied to solve two-dimensional elasto-plasticity problems. The Galerkin weak form is employed to obtain the discretized system equations, and the penalty method is used to apply the essential boundary conditions. The CVRKPM for two-dimensional elasto-plasticity problems is thus formed, the corresponding formulae are obtained, and the Newton-Raphson method is used in the numerical implementation. Three numerical examples show that the method presented in this paper is effective for elasto-plasticity analysis.

  4. The context-tree kernel for strings.

    PubMed

    Cuturi, Marco; Vert, Jean-Philippe

    2005-10-01

    We propose a new kernel for strings which borrows ideas and techniques from information theory and data compression. This kernel can be used in combination with any kernel method, in particular Support Vector Machines for string classification, with notable applications in proteomics. By using a Bayesian averaging framework with conjugate priors on a class of Markovian models known as probabilistic suffix trees or context-trees, we compute the value of this kernel in linear time and space while only using the information contained in the spectrum of the considered strings. This is ensured through an adaptation of a compression method known as the context-tree weighting algorithm. Encouraging classification results are reported on a standard protein homology detection experiment, showing that the context-tree kernel performs well with respect to other state-of-the-art methods while using no biological prior knowledge.

  5. A new efficient hybrid intelligent method for nonlinear dynamical systems identification: The Wavelet Kernel Fuzzy Neural Network

    NASA Astrophysics Data System (ADS)

    Loussifi, Hichem; Nouri, Khaled; Benhadj Braiek, Naceur

    2016-03-01

    In this paper a hybrid computational intelligent approach of combining kernel methods with wavelet Multi-resolution Analysis (MRA) is presented for fuzzy wavelet network construction and initialization. Mother wavelets are used as activation functions for the neural network structure, and as kernel functions in the machine learning process. By choosing precise values of scale parameters based on the windowed scalogram representation of the Continuous Wavelet Transform (CWT), a set of kernel parameters is taken to construct the proposed Wavelet Kernel based Fuzzy Neural Network (WK-FNN) with an efficient initialization technique based on the use of wavelet kernels in Support Vector Machine for Regression (SVMR). Simulation examples are given to test usability and effectiveness of the proposed hybrid intelligent method in the system identification of dynamic plants and in the prediction of a chaotic time series. It is seen that the proposed WK-FNN achieves higher accuracy and has good performance as compared to other methods.

  6. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System.

    PubMed

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by fusing the BPAs. Experimental results show that the proposed framework is not only capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with traditional methods.

  7. The method of tailored sensitivity kernels for GRACE mass change estimates

    NASA Astrophysics Data System (ADS)

    Groh, Andreas; Horwath, Martin

    2016-04-01

    To infer mass changes (such as mass changes of an ice sheet) from time series of GRACE spherical harmonic solutions, two basic approaches (with many variants) exist: The regional integration approach (or direct approach) is based on surface mass changes (equivalent water height, EWH) from GRACE and integrates those with specific integration kernels. The forward modeling approach (or mascon approach, or inverse approach) prescribes a finite set of mass change patterns and adjusts the amplitudes of those patterns (in a least squares sense) to the GRACE gravity field changes. The present study reviews the theoretical framework of both approaches. We recall that forward modeling approaches ultimately estimate mass changes by linear functionals of the gravity field changes. Therefore, they implicitly apply sensitivity kernels and may be considered as special realizations of the regional integration approach. We show examples for sensitivity kernels intrinsic to forward modeling approaches. We then propose to directly tailor sensitivity kernels (or in other words: mass change estimators) by a formal optimization procedure that minimizes the sum of propagated GRACE solution errors and leakage errors. This approach involves the incorporation of information on the structure of GRACE errors and the structure of those mass change signals that are most relevant for leakage errors. We discuss the realization of this method, as applied within the ESA "Antarctic Ice Sheet CCI (Climate Change Initiative)" project. Finally, results for the Antarctic Ice Sheet in terms of time series of mass changes of individual drainage basins and time series of gridded EWH changes are presented.

  8. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System

    PubMed Central

    Yuan, Xianfeng; Song, Mumin; Zhou, Fengyu; Chen, Zhumin; Li, Yan

    2015-01-01

    Wheeled robots have been successfully applied in many areas, such as industrial handling vehicles and wheeled service robots. To improve the safety and reliability of wheeled robots, this paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers that arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by fusing the BPAs. Experimental results show that the proposed framework is not only capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with traditional methods. PMID:26229526

  9. Verification and large deformation analysis using the reproducing kernel particle method

    SciTech Connect

    Beckwith, Frank

    2015-09-01

    The reproducing kernel particle method (RKPM) is a meshless method used to solve general boundary value problems using the principle of virtual work. RKPM corrects the kernel approximation by introducing reproducing conditions which force the method to be complete to arbitrary-order polynomials selected by the user. Effort in recent years has led to the implementation of RKPM within the Sierra/SM physics software framework. The purpose of this report is to investigate the convergence of RKPM for verification and validation purposes, as well as to demonstrate the large-deformation capability of RKPM in problems where the finite element method is known to experience difficulty. Results from analyses using RKPM are compared against finite element analysis. A host of issues associated with RKPM are identified, and a number of potential improvements are discussed for future work.

  10. A kernel method for calculating effective radiative forcing in transient climate simulations

    NASA Astrophysics Data System (ADS)

    Larson, E. J. L.; Portmann, R. W.

    2015-12-01

    Effective radiative forcing (ERF) is calculated as the flux change at the top of the atmosphere, after allowing for fast adjustments, due to a forcing agent such as greenhouse gases or volcanic events. Accurate estimates of the ERF are necessary in order to understand the drivers of climate change. ERF cannot be observed directly and is difficult to estimate from indirect observations due to the complexity of climate responses to individual forcing factors. We present a new method of calculating ERF using a kernel populated from a time series of a model variable (e.g. global mean surface temperature) in a CO2 step change experiment. The top of atmosphere (TOA) radiative imbalance has the best noise tolerance for retrieving the ERF of the model variables we tested. We compare the kernel method with the energy balance method for estimating ERF in the CMIP5 models. The energy balance method uses the regression between the TOA imbalance and temperature change in a CO2 step change experiment to estimate the climate feedback parameter. It then assumes the feedback parameter is constant to calculate the forcing time series. This method is sensitive to the number of years chosen for the regression, and the nonlinearity in the regression leads to a bias. We quantify the sensitivities and biases of these methods and compare their estimates of forcing. The kernel method is more accurate for models in which a linear fit is a poor approximation for the relationship between temperature change and TOA imbalance.

  11. Scalable Kernel Methods and Algorithms for General Sequence Analysis

    ERIC Educational Resources Information Center

    Kuksa, Pavel

    2011-01-01

    Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…

  12. Single corn kernel aflatoxin B1 extraction and analysis method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Aflatoxins are highly carcinogenic compounds produced by the fungus Aspergillus flavus. Aspergillus flavus is a phytopathogenic fungus that commonly infects crops such as cotton, peanuts, and maize. The goal was to design an effective sample preparation method and analysis for the extraction of afla...

  13. Noninvasive reconstruction of cardiac transmembrane potentials using a kernelized extreme learning method

    NASA Astrophysics Data System (ADS)

    Jiang, Mingfeng; Zhang, Heng; Zhu, Lingyan; Cao, Li; Wang, Yaming; Xia, Ling; Gong, Yinglan

    2015-04-01

    Non-invasively reconstructing the cardiac transmembrane potentials (TMPs) from body surface potentials can be treated as a regression problem. The support vector regression (SVR) method is often used to solve such regression problems; however, the computational cost of the SVR training algorithm is usually high. In this paper, another learning algorithm, termed the extreme learning machine (ELM), is proposed to reconstruct the cardiac transmembrane potentials. Moreover, ELM can be extended to single-hidden-layer feedforward neural networks with a kernel matrix (kernelized ELM), which can achieve good generalization performance at a fast learning speed. Based on realistic heart-torso models, one normal and two abnormal ventricular activation cases are used for training and testing the regression model. The experimental results show that the ELM method achieves better regression performance than the single SVR method in terms of TMP reconstruction accuracy and reconstruction speed. Moreover, compared with the ELM method, the kernelized ELM method features better approximation and generalization ability when reconstructing the TMPs.
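
    A compact sketch of a kernelized ELM regressor is given below: training reduces to solving (K + I/C) beta = T and prediction to k(x)' beta. The RBF kernel, the regularization constant C, and the interface are assumptions, not the authors' implementation.

        import numpy as np

        def rbf(A, B, gamma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        class KernelELM:
            def __init__(self, C=1e3, gamma=0.1):
                self.C, self.gamma = C, gamma
            def fit(self, X, T):
                # X: training body-surface potentials, T: target transmembrane potentials
                self.X = X
                K = rbf(X, X, self.gamma)
                self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
                return self
            def predict(self, Xnew):
                return rbf(Xnew, self.X, self.gamma) @ self.beta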

  14. Adaptive learning in complex reproducing kernel Hilbert spaces employing Wirtinger's subgradients.

    PubMed

    Bouboulis, Pantelis; Slavakis, Konstantinos; Theodoridis, Sergios

    2012-03-01

    This paper presents a wide framework for non-linear online supervised learning tasks in the context of complex-valued signal processing. The (complex) input data are mapped into a complex reproducing kernel Hilbert space (RKHS), where the learning phase takes place. Both pure complex kernels and real kernels (via the complexification trick) can be employed. Moreover, any convex, continuous and not necessarily differentiable function can be used to measure the loss between the output of the specific system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in an analytic form. In order to derive the subgradients analytically, the principles of the (recently developed) Wirtinger's calculus in complex RKHS are exploited. Furthermore, both linear and widely linear (in RKHS) estimation filters are considered. To cope with the problem of increasing memory requirements, which is present in almost all online schemes in RKHS, a sparsification scheme based on projection onto closed balls has been adopted. We demonstrate the effectiveness of the proposed framework in a non-linear channel identification task, a non-linear channel equalization problem and a quadrature phase shift keying equalization scheme, using both circular and non-circular synthetic signal sources.

  15. A robust, high-throughput method for computing maize ear, cob, and kernel attributes automatically from images.

    PubMed

    Miller, Nathan D; Haase, Nicholas J; Lee, Jonghyun; Kaeppler, Shawn M; de Leon, Natalia; Spalding, Edgar P

    2017-01-01

    Grain yield of the maize plant depends on the sizes, shapes, and numbers of ears and the kernels they bear. An automated pipeline that can measure these components of yield from easily-obtained digital images is needed to advance our understanding of this globally important crop. Here we present three custom algorithms designed to compute such yield components automatically from digital images acquired by a low-cost platform. One algorithm determines the average space each kernel occupies along the cob axis using a sliding-window Fourier transform analysis of image intensity features. A second counts individual kernels removed from ears, including those in clusters. A third measures each kernel's major and minor axis after a Bayesian analysis of contour points identifies the kernel tip. Dimensionless ear and kernel shape traits that may interrelate yield components are measured by principal components analysis of contour point sets. Increased objectivity and speed compared to typical manual methods are achieved without loss of accuracy as evidenced by high correlations with ground truth measurements and simulated data. Millimeter-scale differences among ear, cob, and kernel traits that ranged more than 2.5-fold across a diverse group of inbred maize lines were resolved. This system for measuring maize ear, cob, and kernel attributes is being used by multiple research groups as an automated Web service running on community high-throughput computing and distributed data storage infrastructure. Users may create their own workflow using the source code that is staged for download on a public repository.
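
    The first algorithm's core idea, recovering the average kernel pitch from the dominant spatial frequency of an intensity profile along the cob axis, can be sketched as below. A single windowed FFT is used instead of the sliding-window analysis of the record, and the pixel-scale argument is a hypothetical parameter.

        import numpy as np

        def kernel_pitch_mm(profile, px_per_mm):
            # profile: 1-D mean image intensity along the cob axis
            x = (profile - profile.mean()) * np.hanning(len(profile))
            spec = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(profile), d=1.0)   # cycles per pixel
            k = np.argmax(spec[1:]) + 1                    # skip the DC bin
            return 1.0 / freqs[k] / px_per_mm              # pixels per kernel -> mm per kernel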

  16. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice

    PubMed Central

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel “trick” concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods; least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available. PMID:27555865
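
    Outside the KRMM package itself, the Gaussian-kernel RKHS regression compared in this study can be sketched with scikit-learn's kernel ridge regression, which solves the same regularized RKHS problem; the marker coding, bandwidth, and regularization strength below are assumptions.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_score

        def rkhs_cv_r2(X, y, alpha=1.0, gamma=None):
            # X: (lines, markers) genotype matrix coded 0/1/2, y: phenotypes
            gamma = gamma if gamma is not None else 1.0 / X.shape[1]
            model = KernelRidge(kernel='rbf', gamma=gamma, alpha=alpha)   # Gaussian kernel
            return cross_val_score(model, X, y, cv=5, scoring='r2').mean()

    With kernel='linear', the same estimator acts as a ridge penalty on marker effects, which is closely related to the GBLUP model mentioned in the abstract.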

  17. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods; least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.

  18. A Novel Cortical Thickness Estimation Method based on Volumetric Laplace-Beltrami Operator and Heat Kernel

    PubMed Central

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J.; Wang, Yalin

    2015-01-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and heat kernel theory, to capture the grey matter geometry information from in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator, and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on heat kernel diffusion. Thereby we can calculate the cortical thickness between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure, and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer’s disease (AD) and mild cognitive impairment (MCI) on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant differences among patients with AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces. PMID:25700360

  19. A novel cortical thickness estimation method based on volumetric Laplace-Beltrami operator and heat kernel.

    PubMed

    Wang, Gang; Zhang, Xiaofeng; Su, Qingtang; Shi, Jie; Caselli, Richard J; Wang, Yalin

    2015-05-01

    Cortical thickness estimation in magnetic resonance imaging (MRI) is an important technique for research on brain development and neurodegenerative diseases. This paper presents a heat kernel based cortical thickness estimation algorithm, which is driven by the graph spectrum and heat kernel theory, to capture the gray matter geometry information from in vivo brain magnetic resonance (MR) images. First, we construct a tetrahedral mesh that matches the MR images and reflects the inherent geometric characteristics. Second, the harmonic field is computed by the volumetric Laplace-Beltrami operator, and the direction of the streamline is obtained by tracing the maximum heat transfer probability based on heat kernel diffusion. Thereby we can calculate the cortical thickness between corresponding points on the pial and white matter surfaces. The new method relies on intrinsic brain geometry structure, and the computation is robust and accurate. To validate our algorithm, we apply it to study the thickness differences associated with Alzheimer's disease (AD) and mild cognitive impairment (MCI) on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Our preliminary experimental results on 151 subjects (51 AD, 45 MCI, 55 controls) show that the new algorithm may successfully detect statistically significant differences among patients with AD, MCI and healthy control subjects. Our computational framework is efficient and very general. It has the potential to be used for thickness estimation on any biological structures with clearly defined inner and outer surfaces.

  20. A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4

    NASA Technical Reports Server (NTRS)

    Park, Young-Keun; Fahrenthold, Eric P.

    2004-01-01

    An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.

  1. An adaptive level set method

    SciTech Connect

    Milne, Roger Brent

    1995-12-01

    This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.

  2. Kernel reconstruction methods for Doppler broadening - Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    NASA Astrophysics Data System (ADS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-04-01

    This article establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T - namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin ,Tmax ]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [ 300 K , 3000 K ] with only 9 reference temperatures.
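
    The L2-optimal coefficients described above follow from projecting the target-temperature kernel onto the span of the reference-temperature kernels, i.e. solving the Gram system G a = b with G_jk = <k_Tj, k_Tk> and b_j = <k_T, k_Tj>. The sketch below assumes the kernels have already been sampled on a common grid, which is an implementation convenience rather than the article's formulation.

        import numpy as np

        def l2_interp_coeffs(ref_kernels, target_kernel):
            # ref_kernels  : (J, M) reference-temperature kernels sampled on a common grid
            # target_kernel: (M,) kernel at the temperature T to be interpolated
            G = ref_kernels @ ref_kernels.T        # Gram matrix of discretized inner products
            b = ref_kernels @ target_kernel
            return np.linalg.solve(G, b)           # coefficients a_j of the linear combination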

  3. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    DOE PAGES

    Ducru, Pablo; Josey, Colin; Dibert, Karia; ...

    2017-01-25

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T — namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin,Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [300 K,3000 K] with only 9 reference temperatures.

  4. Method of adaptive artificial viscosity

    NASA Astrophysics Data System (ADS)

    Popov, I. V.; Fryazinov, I. V.

    2011-09-01

    A new finite-difference method for the numerical solution of gas dynamics equations is proposed. The method is a uniform monotone finite-difference scheme with second-order approximation in time and space outside the domains of shock and compression waves. It is based on introducing an adaptive artificial viscosity (AAV) into the gas dynamics equations. In this paper, the method is analyzed for 2D geometry. Test computations of the movement of contact discontinuities and shock waves and of the breakup of discontinuities are presented.

  5. Development of low-frequency kernel-function aerodynamics for comparison with time-dependent finite-difference methods

    NASA Technical Reports Server (NTRS)

    Bland, S. R.

    1982-01-01

    Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time-dependent terms are omitted from the governing equations. Kernel functions are derived for two-dimensional subsonic flow and provide accurate solutions of the linearized potential equation with the same time-dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low-frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low-frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.

  6. Bioactive compounds in cashew nut (Anacardium occidentale L.) kernels: effect of different shelling methods.

    PubMed

    Trox, Jennifer; Vadivel, Vellingiri; Vetter, Walter; Stuetz, Wolfgang; Scherbaum, Veronika; Gola, Ute; Nohr, Donatus; Biesalski, Hans Konrad

    2010-05-12

    In the present study, the effects of various conventional shelling methods (oil-bath roasting, direct steam roasting, drying, and open pan roasting) as well as a novel "Flores" hand-cracking method on the levels of bioactive compounds of cashew nut kernels were investigated. The raw cashew nut kernels were found to possess appreciable levels of certain bioactive compounds such as beta-carotene (9.57 microg/100 g of DM), lutein (30.29 microg/100 g of DM), zeaxanthin (0.56 microg/100 g of DM), alpha-tocopherol (0.29 mg/100 g of DM), gamma-tocopherol (1.10 mg/100 g of DM), thiamin (1.08 mg/100 g of DM), stearic acid (4.96 g/100 g of DM), oleic acid (21.87 g/100 g of DM), and linoleic acid (5.55 g/100 g of DM). All of the conventional shelling methods including oil-bath roasting, steam roasting, drying, and open pan roasting revealed a significant reduction, whereas the Flores hand-cracking method exhibited similar levels of carotenoids, thiamin, and unsaturated fatty acids in cashew nuts when compared to raw unprocessed samples.

  7. Introducing kernel based morphology as an enhancement method for mass classification on mammography.

    PubMed

    Amirzadi, Azardokht; Azmi, Reza

    2013-04-01

    Since mammography images have low contrast, applying enhancement techniques as a pre-processing step is widely recommended for the classification of abnormal lesions into benign or malignant. A new kind of structural enhancement based on a morphological operator is proposed, which introduces an optimal Gaussian kernel primitive; the kernel parameters are optimized using a genetic algorithm. We also take advantage of optical density (OD) images to improve the diagnosis rate. The proposed enhancement method is applied to both the gray level (GL) images and their OD values; as a result, morphological patterns become bolder on the GL images, and local binary patterns are then extracted from these images. Applying the enhancement method to OD images increases the differences between values, so a thresholding method is applied to remove some background pixels. The pixels that are more likely to belong to a mass are retained, and statistical texture features are extracted from their equivalent GL images. A support vector machine is used for both approaches, and the final decision is made by combining the two classifiers. The classification performance is evaluated by Az, the area under the receiver operating characteristic curve. The designed method yields Az = 0.9231, which demonstrates good results.

  8. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, the properties of which are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  9. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2016-01-01

    A piezo-resistive pressure sensor is made of silicon, the properties of which are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (a Radial Basis Function (RBF) kernel) and a global kernel (a polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research. PMID:27754428
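
    The hybrid kernel described in the two records above simply combines a local RBF kernel with a global polynomial kernel before it enters the LSSVM. A minimal sketch of that ingredient in Python, assuming a fixed weighted sum of the two kernels and an LSSVM regression solved as a linear system (the chaotic ions motion hyper-parameter search is not reproduced; all parameter values below are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Local kernel: exp(-||x - y||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def poly_kernel(X, Y, degree=2, c=1.0):
    # Global kernel: (x . y + c)^degree
    return (X @ Y.T + c) ** degree

def hybrid_kernel(X, Y, w=0.7, sigma=1.0, degree=2, c=1.0):
    # Convex combination of the local (RBF) and global (polynomial) kernels
    return w * rbf_kernel(X, Y, sigma) + (1.0 - w) * poly_kernel(X, Y, degree, c)

def lssvm_fit(X, y, gamma=10.0, **kernel_params):
    """Solve the standard LSSVM linear system for the bias and dual coefficients."""
    n = len(y)
    K = hybrid_kernel(X, X, **kernel_params)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, **kernel_params):
    return hybrid_kernel(X_new, X_train, **kernel_params) @ alpha + b

# Toy usage: learn a temperature-dependent drift from synthetic calibration data
rng = np.random.default_rng(0)
T = rng.uniform(-20, 80, size=(80, 1))          # temperature readings
drift = 0.02 * T[:, 0] ** 2 - 0.5 * T[:, 0] + rng.normal(0, 1, 80)
b, alpha = lssvm_fit(T, drift, gamma=50.0, w=0.7, sigma=15.0)
print(lssvm_predict(T, b, alpha, np.array([[25.0]]), w=0.7, sigma=15.0))
```

    In the papers, the kernel weight, the kernel parameters, and the regularization constant gamma are what the chaotic ions motion algorithm searches over; here they are simply fixed.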

  10. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
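
    One concrete way to obtain a data-adaptive Mercer kernel from a mixture density, in the spirit of the record above, is to define k(x, x') as the probability that x and x' were generated by the same mixture component, i.e., the inner product of the posterior responsibility vectors. The sketch below uses scikit-learn's GaussianMixture; it does not reproduce the Bayesian prior encoding or the AUTOBAYES-generated code described above, and the component count and data are hypothetical:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(X, Y, gmm):
    """k(x, y) = sum_k P(k | x) P(k | y): a Gram matrix of posterior responsibilities.

    Being an inner product of finite-dimensional feature vectors, the resulting
    kernel is symmetric positive semi-definite, hence a valid Mercer kernel.
    """
    Rx = gmm.predict_proba(X)          # shape (n_x, n_components)
    Ry = gmm.predict_proba(Y)          # shape (n_y, n_components)
    return Rx @ Ry.T

# Toy usage: fit a mixture on unlabeled data, then reuse the kernel downstream
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
K = mixture_density_kernel(X, X, gmm)
print(K.shape, K[:2, :2])
```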

  11. The complex variable reproducing kernel particle method for the analysis of Kirchhoff plates

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.; Ma, H. P.

    2015-03-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for the bending problem of arbitrary Kirchhoff plates is presented. The advantage of the CVRKPM is that the shape function of a two-dimensional problem is obtained using a one-dimensional basis function. The CVRKPM is used to form the approximation function of the deflection of a Kirchhoff plate, the Galerkin weak form of the bending problem of Kirchhoff plates is adopted to obtain the discretized system equations, and the penalty method is employed to enforce the essential boundary conditions; the corresponding formulae of the CVRKPM for the bending problem of Kirchhoff plates are then presented in detail. Several numerical examples of Kirchhoff plates with different geometry and loads are given to demonstrate that the CVRKPM in this paper has higher computational precision and efficiency than the reproducing kernel particle method under the same node distribution. The influences of the basis function, weight function, scaling factor, node distribution, and penalty factor on the computational precision of the CVRKPM are also discussed.

  12. An effective meshfree reproducing kernel method for buckling analysis of cylindrical shells with and without cutouts

    NASA Astrophysics Data System (ADS)

    Sadamoto, S.; Ozdemir, M.; Tanaka, S.; Taniguchi, K.; Yu, T. T.; Bui, T. Q.

    2017-02-01

    The paper is concerned with eigen buckling analysis of curvilinear shells with and without cutouts by an effective meshfree method. In particular, shallow shell, cylinder and perforated cylinder buckling problems are considered. A Galerkin meshfree reproducing kernel (RK) approach is then developed. The present meshfree curvilinear shell model is based on Reissner-Mindlin plate formulation, which allows the transverse shear deformation of the curved shells. There are five degrees of freedom per node (i.e., three displacements and two rotations). In this setting, the meshfree interpolation functions are derived from the RK. A singular kernel is introduced to impose the essential boundary conditions because of the RK shape functions, which do not automatically possess the Kronecker delta property. The stiffness matrix is derived using the stabilized conforming nodal integration technique. A convected coordinate system is introduced into the formulation to deal with the curvilinear surface. More importantly, the RKs taken here are used not only for the interpolation of the curved geometry, but also for the approximation of field variables. Several numerical examples with shallow shells and full cylinder models are considered, and the critical buckling loads and their buckling mode shapes are calculated by the meshfree eigenvalue analysis and examined. To show the accuracy and performance of the developed meshfree method, the computed critical buckling loads and mode shapes are compared with reference solutions based on boundary domain element, finite element and analytical methods.

  13. Multiple genetic variant association testing by collapsing and kernel methods with pedigree or population structured data.

    PubMed

    Schaid, Daniel J; McDonnell, Shannon K; Sinnwell, Jason P; Thibodeau, Stephen N

    2013-07-01

    Searching for rare genetic variants associated with complex diseases can be facilitated by enriching for diseased carriers of rare variants by sampling cases from pedigrees enriched for disease, possibly with related or unrelated controls. This strategy, however, complicates analyses because of shared genetic ancestry, as well as linkage disequilibrium among genetic markers. To overcome these problems, we developed broad classes of "burden" statistics and kernel statistics, extending commonly used methods for unrelated case-control data to allow for known pedigree relationships, for autosomes and the X chromosome. Furthermore, by replacing pedigree-based genetic correlation matrices with estimates of genetic relationships based on large-scale genomic data, our methods can be used to account for population-structured data. By simulations, we show that the type I error rates of our developed methods are near the asymptotic nominal levels, allowing rapid computation of P-values. Our simulations also show that a linear weighted kernel statistic is generally more powerful than a weighted "burden" statistic. Because the proposed statistics are rapid to compute, they can be readily used for large-scale screening of the association of genomic sequence data with disease status.
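
    In its simplest unrelated-samples form, the weighted kernel statistic referred to above is a variance-component score statistic of the form Q = r' K r, with r the residuals from a null model and K a weighted linear genotype kernel. The sketch below shows only that quantity; it does not include the pedigree or population-structure adjustments developed in the paper, and the weighting scheme and null model are illustrative:

```python
import numpy as np

def weighted_linear_kernel(G, weights):
    """K = (G W)(G W)^T for a genotype matrix G (subjects x variants) and per-variant weights."""
    GW = G * weights                               # scale each variant column
    return GW @ GW.T

def kernel_score_statistic(y, G, weights):
    """Q = r^T K r with residuals r from a null model (here: intercept only)."""
    r = y - y.mean()
    return float(r @ weighted_linear_kernel(G, weights) @ r)

# Toy usage: 200 unrelated subjects, 30 rare variants, rarest variants upweighted
rng = np.random.default_rng(8)
maf = rng.uniform(0.005, 0.05, 30)
G = rng.binomial(2, maf, size=(200, 30)).astype(float)
weights = 1.0 / np.sqrt(maf * (1.0 - maf))         # one common weighting choice
y = 0.4 * G[:, 0] + rng.normal(0, 1, 200)          # phenotype with one causal variant
print(f"kernel score statistic Q = {kernel_score_statistic(y, G, weights):.1f}")
```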

  14. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  15. A Distributed Learning Method for ℓ 1 -Regularized Kernel Machine over Wireless Sensor Networks.

    PubMed

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-07-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ 1 norm regularization ( ℓ 1 -regularized) is investigated, and a novel distributed learning algorithm for the ℓ 1 -regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost.

  16. Adaptive kernel-based image denoising employing semi-parametric regularization.

    PubMed

    Bouboulis, Pantelis; Slavakis, Konstantinos; Theodoridis, Sergios

    2010-06-01

    The main contribution of this paper is the development of a novel approach, based on the theory of Reproducing Kernel Hilbert Spaces (RKHS), for the problem of noise removal in the spatial domain. The proposed methodology has the advantage that it is able to remove any kind of additive noise (impulse, Gaussian, uniform, etc.) from any digital image, in contrast to the most commonly used denoising techniques, which are noise dependent. The problem is cast as an optimization task in an RKHS, by taking advantage of the celebrated Representer Theorem in its semi-parametric formulation. The semi-parametric formulation, although known in theory, has so far, to our knowledge, found limited application. However, in the image denoising problem, its use is dictated by the nature of the problem itself. The need for edge preservation naturally leads to such a modeling. Examples verify that in the presence of Gaussian noise the proposed methodology performs well compared to wavelet-based techniques and outperforms them significantly in the presence of impulse or mixed noise.
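
    As a loose illustration of kernel regression for denoising in the spatial domain, one can fit a kernel ridge regressor from pixel coordinates to noisy intensities and take the smooth fit as the denoised patch. This is only a stand-in for the semi-parametric RKHS formulation above (it omits the parametric, edge-preserving part); the kernel, bandwidth, and regularization values are hypothetical:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def denoise_patch(patch, gamma=0.05, alpha=0.1):
    """Fit intensity = f(row, col) with an RBF-kernel ridge regressor and return the fit."""
    h, w = patch.shape
    rows, cols = np.mgrid[0:h, 0:w]
    coords = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
    model = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha)
    model.fit(coords, patch.ravel())
    return model.predict(coords).reshape(h, w)

# Toy usage: a smooth ramp corrupted by sparse impulse noise
rng = np.random.default_rng(2)
clean = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05              # 5% impulse noise
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())
print("mean absolute error after smoothing:", np.abs(denoise_patch(noisy) - clean).mean())
```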

  17. A high-order fast method for computing convolution integral with smooth kernel

    NASA Astrophysics Data System (ADS)

    Qiang, Ji

    2010-02-01

    In this paper we report on a high-order fast method to numerically calculate a convolution integral with a smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(Nlog(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.

  18. A high-order fast method for computing convolution integral with smooth kernel

    SciTech Connect

    Qiang, Ji

    2009-09-28

    In this paper we report on a high-order fast method to numerically calculate a convolution integral with a smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(Nlog(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h^4), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
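
    The two records above describe the same algorithm: apply Newton-Cotes quadrature weights to the integrand and evaluate the resulting discrete sum as an FFT-based linear convolution. A minimal sketch of the three-point (Simpson) variant on a uniform grid follows; the kernel and integrand are illustrative choices, not those of the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_simpson(kernel_vals, f_vals, h):
    """Approximate g(x_i) = int K(x_i - y) f(y) dy with Simpson weights and an FFT.

    Both K and f are sampled on the same uniform grid of spacing h; the composite
    Simpson rule needs an odd number of points.
    """
    n = len(f_vals)
    assert n % 2 == 1, "composite Simpson rule needs an odd number of points"
    w = np.ones(n)
    w[1:-1:2] = 4.0                  # Simpson weights 1, 4, 2, 4, ..., 4, 1
    w[2:-1:2] = 2.0
    w *= h / 3.0
    # Non-periodic (linear) convolution of the weighted integrand with the kernel
    return fftconvolve(kernel_vals, w * f_vals, mode="same")

# Toy usage: a one-dimensional continuous Gauss transform of a narrow bump
h = 0.01
x = np.arange(-2, 2 + h, h)                        # 401 grid points (odd)
f = np.exp(-10.0 * x ** 2)
K = np.exp(-(x ** 2) / 0.1) / np.sqrt(0.1 * np.pi)
g = convolve_simpson(K, f, h)
print(g[len(g) // 2])                              # value of the transform at x = 0
```

    The FFT keeps the cost at O(N log N) regardless of the quadrature order; switching to a higher-order Newton-Cotes rule only changes the weight vector.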

  19. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which performs kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.
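
    The subspace-via-sampling step in the record above is closely related to Nyström-type kernel approximation, where the full n x n kernel matrix is replaced by a low-rank factorization built from a small set of sampled landmark points. The generic sketch below shows that ingredient only, not the AKCL/PAKCL algorithms themselves; the landmark count and kernel parameters are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nystrom_features(X, n_landmarks=50, sigma=1.0, seed=0):
    """Low-rank feature map Phi with Phi @ Phi.T approximating the full kernel matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    landmarks = X[idx]
    K_nm = rbf(X, landmarks, sigma)                # n x m cross-kernel
    K_mm = rbf(landmarks, landmarks, sigma)        # m x m landmark kernel
    vals, vecs = np.linalg.eigh(K_mm)              # K ~ K_nm K_mm^{-1} K_nm^T
    vals = np.clip(vals, 1e-12, None)
    K_mm_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return K_nm @ K_mm_inv_sqrt                    # Phi = K_nm K_mm^{-1/2}

# Toy usage: cluster in the approximate feature space instead of the full kernel space
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (500, 2)), rng.normal(3, 0.5, (500, 2))])
Phi = nystrom_features(X, n_landmarks=40, sigma=1.0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Phi)
print(np.bincount(labels))
```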

  20. A Generalized Grid-Based Fast Multipole Method for Integrating Helmholtz Kernels.

    PubMed

    Parkkinen, Pauli; Losilla, Sergio A; Solala, Eelis; Toivanen, Elias A; Xu, Wen-Hua; Sundholm, Dage

    2017-02-14

    A grid-based fast multipole method (GB-FMM) for optimizing three-dimensional (3D) numerical molecular orbitals in the bubbles and cube double basis has been developed and implemented. The present GB-FMM method is a generalization of our recently published GB-FMM approach for numerically calculating electrostatic potentials and two-electron interaction energies. The orbital optimization is performed by integrating the Helmholtz kernel in the double basis. The steep part of the functions in the vicinity of the nuclei is represented by one-center bubbles functions, whereas the remaining cube part is expanded on an equidistant 3D grid. The integration of the bubbles part is treated by using one-center expansions of the Helmholtz kernel in spherical harmonics multiplied with modified spherical Bessel functions of the first and second kind, analogously to the numerical inward and outward integration approach for calculating two-electron interaction potentials in atomic structure calculations. The expressions and algorithms for massively parallel calculations on general purpose graphics processing units (GPGPU) are described. The accuracy and the correctness of the implementation have been checked by performing Hartree-Fock self-consistent-field calculations (HF-SCF) on H2, H2O, and CO. Our calculations show that an accuracy of 10^-4 to 10^-7 E_h can be reached in HF-SCF calculations on general molecules.

  1. Multi-class Mode of Action Classification of Toxic Compounds Using Logic Based Kernel Methods.

    PubMed

    Lodhi, Huma; Muggleton, Stephen; Sternberg, Mike J E

    2010-09-17

    Toxicity prediction is essential for drug design and development of effective therapeutics. In this paper we present an in silico strategy, to identify the mode of action of toxic compounds, that is based on the use of a novel logic based kernel method. The technique uses support vector machines in conjunction with the kernels constructed from first order rules induced by an Inductive Logic Programming system. It constructs multi-class models by using a divide and conquer reduction strategy that splits multi-classes into binary groups and solves each individual problem recursively hence generating an underlying decision list structure. In order to evaluate the effectiveness of the approach for chemoinformatics problems like predictive toxicology, we apply it to toxicity classification in aquatic systems. The method is used to identify and classify 442 compounds with respect to the mode of action. The experimental results show that the technique successfully classifies toxic compounds and can be useful in assessing environmental risks. Experimental comparison of the performance of the proposed multi-class scheme with the standard multi-class Inductive Logic Programming algorithm and multi-class Support Vector Machine yields statistically significant results and demonstrates the potential power and benefits of the approach in identifying compounds of various toxic mechanisms.

  2. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel

  3. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel
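
    The MR-derived spatial basis in these two records follows the general "kernel method" recipe for guided PET reconstruction: each voxel gets a feature vector computed from the co-registered MR image, a sparse Gaussian kernel matrix is built over nearest neighbours in feature space, and the image estimate is represented as K @ alpha. The sketch below shows only that kernel construction (no temporal basis, system model, or EM update); the feature dimension, neighbour count, and sigma are hypothetical:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mr_kernel_matrix(mr_features, n_neighbors=20, sigma=1.0):
    """Row-normalized kernel matrix K from MR-derived feature vectors (one per voxel).

    K_ij = exp(-||f_i - f_j||^2 / (2 sigma^2)) for the n_neighbors nearest voxels j
    of voxel i, and 0 elsewhere; each row is normalized to sum to one.
    """
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(mr_features)
    dist, idx = nn.kneighbors(mr_features)
    n = len(mr_features)
    K = np.zeros((n, n))
    rows = np.repeat(np.arange(n), n_neighbors)
    K[rows, idx.ravel()] = np.exp(-dist.ravel() ** 2 / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

# Toy usage: the image estimate is modelled as K @ alpha, with alpha the coefficients
rng = np.random.default_rng(9)
mr = rng.normal(0, 1, (500, 3))       # e.g., 3 MR-patch features per voxel
K = mr_kernel_matrix(mr, n_neighbors=10, sigma=0.5)
alpha = rng.random(500)
pet_estimate = K @ alpha
print(pet_estimate.shape)
```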

  4. Robust Optimal Adaptive Control Method with Large Adaptive Gain

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2009-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time delay margin.

  5. Multi-feature-based robust face detection and coarse alignment method via multiple kernel learning

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Di; He, Jun; Yu, Lejun; Wu, Xuewen

    2015-10-01

    Face detection and alignment are two crucial tasks in face recognition, which is a hot topic in the field of defense and security, whether for public safety and personal property or for information and communication security. Common approaches to these tasks in recent years fall into three types: template matching-based, knowledge-based, and machine learning-based, which are often multi-step, computationally expensive, or lacking in robustness. After a deep analysis of a large number of Chinese face images without hats, we propose a novel face detection and coarse alignment method inspired by those three types of methods. It fuses multiple features using the Simple Multiple Kernel Learning (SimpleMKL) algorithm. The proposed method is compared with competitive and related algorithms, and is demonstrated to achieve promising results.
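
    The core ingredient of SimpleMKL-style fusion is a convex combination of per-feature kernel matrices fed to a support vector machine. The sketch below uses fixed illustrative weights and a precomputed-kernel SVM rather than the actual SimpleMKL weight optimization; the two synthetic feature blocks, the weights, and all parameters are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def combined_kernel(blocks_a, blocks_b, weights):
    """K = sum_m beta_m K_m: one kernel per feature block, beta_m >= 0, sum beta_m = 1."""
    kernels = [
        rbf_kernel(blocks_a[0], blocks_b[0], gamma=0.5),
        polynomial_kernel(blocks_a[1], blocks_b[1], degree=2),
    ]
    return sum(beta * K for beta, K in zip(weights, kernels))

# Toy usage with two synthetic "feature types" (say, texture and shape descriptors)
rng = np.random.default_rng(4)
n = 200
y = rng.integers(0, 2, n)
texture = rng.normal(y[:, None], 1.0, (n, 10))     # informative block
shape = rng.normal(0.0, 1.0, (n, 5))               # mostly uninformative block
train_blocks, y_train = [texture[:150], shape[:150]], y[:150]
test_blocks, y_test = [texture[150:], shape[150:]], y[150:]

beta = [0.8, 0.2]                                  # fixed weights (MKL would learn these)
K_train = combined_kernel(train_blocks, train_blocks, beta)
K_test = combined_kernel(test_blocks, train_blocks, beta)
clf = SVC(kernel="precomputed", C=1.0).fit(K_train, y_train)
print("test accuracy:", clf.score(K_test, y_test))
```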

  6. Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel

    SciTech Connect

    Blair, J.; Machorro, E.; Luttman, A.

    2013-03-01

    The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.

  7. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.

  8. Construction of a consistent high-definition spatio-temporal atlas of the developing brain using adaptive kernel regression.

    PubMed

    Serag, Ahmed; Aljabar, Paul; Ball, Gareth; Counsell, Serena J; Boardman, James P; Rutherford, Mary A; Edwards, A David; Hajnal, Joseph V; Rueckert, Daniel

    2012-02-01

    Medical imaging has shown that, during early development, the brain undergoes more changes in size, shape and appearance than at any other time in life. A better understanding of brain development requires a spatio-temporal atlas that characterizes the dynamic changes during this period. In this paper we present an approach for constructing a 4D atlas of the developing brain, between 28 and 44 weeks post-menstrual age at time of scan, using T1 and T2 weighted MR images from 204 premature neonates. The method used for the creation of the average 4D atlas utilizes non-rigid registration between all pairs of images to eliminate bias in the atlas toward any of the original images. In addition, kernel regression is used to produce age-dependent anatomical templates. A novelty in our approach is the use of a time-varying kernel width, to overcome the variations in the distribution of subjects at different ages. This leads to an atlas that retains a consistent level of detail at every time-point. Comparisons between the resulting atlas and atlases constructed using affine and non-rigid registration are presented. The resulting 4D atlas has greater anatomic definition than currently available 4D atlases created using various affine and non-rigid registration approaches, an important factor in improving registrations between the atlas and individual subjects. Also, the resulting 4D atlas can serve as a good representative of the population of interest as it reflects both global and local changes. The atlas is publicly available at www.brain-development.org.
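
    The age-dependent templates above come down to kernel regression over post-menstrual age with a bandwidth that adapts to the local density of subjects. A minimal one-dimensional sketch of that weighting scheme, applied to scalar measurements rather than registered images, is shown below; the k-nearest-neighbour rule for the adaptive width is an illustrative choice, not necessarily the one used in the paper:

```python
import numpy as np

def adaptive_kernel_regression(ages, values, query_ages, k=10):
    """Nadaraya-Watson estimate with a per-query Gaussian bandwidth.

    The bandwidth at each query age is the distance to the k-th nearest subject,
    so sparsely sampled ages get wider kernels and densely sampled ages narrower
    ones, keeping a comparable effective sample size at every time point.
    """
    estimates = []
    for t in query_ages:
        d = np.abs(ages - t)
        h = max(np.sort(d)[k - 1], 1e-6)           # locally adaptive bandwidth
        w = np.exp(-0.5 * (d / h) ** 2)
        estimates.append(np.sum(w * values) / np.sum(w))
    return np.array(estimates)

# Toy usage: a brain measurement vs. age, unevenly sampled between 28 and 44 weeks
rng = np.random.default_rng(5)
ages = np.concatenate([rng.uniform(28, 36, 150), rng.uniform(36, 44, 50)])
volume = 20.0 * (ages - 28) + rng.normal(0, 15, ages.size)   # synthetic trend
grid = np.linspace(28, 44, 9)
print(adaptive_kernel_regression(ages, volume, grid, k=15))
```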

  9. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    ERIC Educational Resources Information Center

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  10. Prediction of posttranslational modification sites from amino acid sequences with kernel methods.

    PubMed

    Xu, Yan; Wang, Xiaobo; Wang, Yongcui; Tian, Yingjie; Shao, Xiaojian; Wu, Ling-Yun; Deng, Naiyang

    2014-03-07

    Post-translational modification (PTM) is the chemical modification of a protein after its translation and one of the later steps in protein biosynthesis for many proteins. It plays an important role in modifying the end product of gene expression and contributes to biological processes and disease conditions. However, the experimental methods for identifying PTM sites are both costly and time-consuming. Hence, computational methods are highly desired. In this work, a novel encoding method, PSPM (position-specific propensity matrices), is developed. Then a support vector machine (SVM) with the kernel matrix computed by PSPM is applied to predict the PTM sites. The experimental results indicate that the performance of the new method is better than or comparable with that of existing methods. Therefore, the new method is a useful computational resource for the identification of PTM sites. A unified standalone software package, PTMPred, is developed. It can be used to predict all types of PTM sites if the user provides the training datasets. The software can be freely downloaded from http://www.aporc.org/doc/wiki/PTMPred.

  11. On radiative transfer using synthetic kernel and simplified spherical harmonics methods in linearly anisotropically scattering media

    NASA Astrophysics Data System (ADS)

    Altaç, Zekeriya

    2014-11-01

    The Synthetic Kernel (SKN) method is applied to a 3D absorbing, emitting and linearly anisotropically scattering inhomogeneous medium. The standard SKN approximation is applied only to the diffusive components of the radiative transfer equations. An alternative SKN (SKN*) method is also derived in full 3-D generality by extending the approximation to the direct wall contributions. Complete sets of boundary conditions for both SKN approaches are rigorously obtained. The simplified spherical harmonics (P_{2N-1} or SP_{2N-1}) and simplified double spherical harmonics (DP_{N-1} or SDP_{N-1}) equations for a linearly anisotropically scattering homogeneous medium are also derived. The resulting full P_{2N-1} and DP_{N-1} (or SP_{2N-1} and SDP_{N-1}) equations are cast as diagonalized second-order coupled diffusion-like equations. By this analysis, it is shown that the SKN method is a high-order approximation and, simply by the selection of full- or half-range Gauss-Legendre quadratures, the SKN* equations become identical to the P_{2N-1} or DP_{N-1} (or SP_{2N-1} or SDP_{N-1}) equations. Numerical verification of all methods presented is carried out using a 1D participating isotropic slab medium. The SKN method proves to be more accurate than the SKN* approximation, but it is analytically more involved. It is shown that the SKN* method with the proposed BCs converges with increasing order of approximation, and the BCs are applicable to SP_N or SDP_N methods.

  12. Super-Drizzle: Applications of Adaptive Kernel Regression in Astronomical Imaging

    DTIC Science & Technology

    2006-01-01

    and reconstruction. In contrast to the parametric methods, which rely on a specific model of the signal of interest, non-parametric methods rely on...the data itself to dictate the structure of the model, in which case this implicit model is referred to as a regression function. We promote the use...takes advantage of a generic model that is appropriate for reconstructing images contaminated with different noise models, including additive Gaussian

  13. Modern industrial simulation tools: Kernel-level integration of high performance parallel processing, object-oriented numerics, and adaptive finite element analysis. Final report, July 16, 1993--September 30, 1997

    SciTech Connect

    Deb, M.K.; Kennon, S.R.

    1998-04-01

    A cooperative R&D effort between industry and the US government, this project, under the HPPP (High Performance Parallel Processing) initiative of the Dept. of Energy, started the investigations into parallel object-oriented (OO) numerics. The basic goal was to research and utilize the emerging technologies to create a physics-independent computational kernel for applications using adaptive finite element method. The industrial team included Computational Mechanics Co., Inc. (COMCO) of Austin, TX (as the primary contractor), Scientific Computing Associates, Inc. (SCA) of New Haven, CT, Texaco and CONVEX. Sandia National Laboratory (Albq., NM) was the technology partner from the government side. COMCO had the responsibility of the main kernel design and development, SCA had the lead in parallel solver technology, and guidance on OO technologies was Sandia's main expertise in this venture. CONVEX and Texaco supported the partnership by hardware resource and application knowledge, respectively. As such, a minimum of fifty-percent cost-sharing was provided by the industry partnership during this project. This report describes the R&D activities and provides some details about the prototype kernel and example applications.

  14. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    PubMed

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-03-01

    Full Monte Carlo (FMC) calculation of dose distribution has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced to the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a result of the dose kernel calculation using the SMC method with that using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we have compared results of the dose calculation using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In practice, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphic processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproduction of the complex dose distribution more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine. PACS number(s): 87.55.Gh.

  15. Application of dose kernel calculation using a simplified Monte Carlo method to treatment plan for scanned proton beams.

    PubMed

    Mizutani, Shohei; Takada, Yoshihisa; Kohno, Ryosuke; Hotta, Kenji; Tansho, Ryohei; Akimoto, Tetsuo

    2016-03-08

    Full Monte Carlo (FMC) calculation of dose distribution has been recognized to have superior accuracy compared with the pencil beam algorithm (PBA). However, since FMC methods require long calculation times, it is difficult to apply them to routine treatment planning at present. In order to improve the situation, a simplified Monte Carlo (SMC) method has been introduced to the dose kernel calculation applicable to the dose optimization procedure for proton pencil beam scanning. We have evaluated the accuracy of the SMC calculation by comparing a result of the dose kernel calculation using the SMC method with that using the FMC method in an inhomogeneous phantom. The dose distribution obtained by the SMC method was in good agreement with that obtained by the FMC method. To assess the usefulness of SMC calculation in clinical situations, we have compared results of the dose calculation using the SMC with those using the PBA method for three clinical cases of tumor treatment. The dose distributions calculated with the PBA dose kernels appear to be homogeneous in the planning target volumes (PTVs). In practice, the dose distributions calculated with the SMC dose kernels with the spot weights optimized with the PBA method show largely inhomogeneous dose distributions in the PTVs, while those with the spot weights optimized with the SMC method have moderately homogeneous distributions in the PTVs. Calculation using the SMC method is faster than that using GEANT4 by three orders of magnitude. In addition, the graphic processing unit (GPU) boosts the calculation speed by 13 times for treatment planning using the SMC method. Hence, the SMC method will be applicable to routine clinical treatment planning for reproduction of the complex dose distribution more accurately than the PBA method in a reasonably short time by use of the GPU-based calculation engine.

  16. A Kernel-free Boundary Integral Method for Elliptic Boundary Value Problems ⋆

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2013-01-01

    This paper presents a class of kernel-free boundary integral (KFBI) methods for general elliptic boundary value problems (BVPs). The boundary integral equations reformulated from the BVPs are solved iteratively with the GMRES method. During the iteration, the boundary and volume integrals involving Green's functions are approximated by structured grid-based numerical solutions, which avoids the need to know the analytical expressions of Green's functions. The KFBI method assumes that the larger regular domain, which embeds the original complex domain, can be easily partitioned into a hierarchy of structured grids so that fast elliptic solvers such as the fast Fourier transform (FFT) based Poisson/Helmholtz solvers or those based on geometric multigrid iterations are applicable. The structured grid-based solutions are obtained with standard finite difference method (FDM) or finite element method (FEM), where the right hand side of the resulting linear system is appropriately modified at irregular grid nodes to recover the formal accuracy of the underlying numerical scheme. Numerical results demonstrating the efficiency and accuracy of the KFBI methods are presented. It is observed that the number of GMRES iterations used by the method for solving isotropic and moderately anisotropic BVPs is independent of the sizes of the grids that are employed to approximate the boundary and volume integrals. With the standard second-order FEMs and FDMs, the KFBI method shows a second-order convergence rate in accuracy for all of the tested Dirichlet/Neumann BVPs when the anisotropy of the diffusion tensor is not too strong. PMID:23519600

  17. Haplotype Kernel Association Test as a Powerful Method to Identify Chromosomal Regions Harboring Uncommon Causal Variants

    PubMed Central

    Lin, Wan-Yu; Yi, Nengjun; Lou, Xiang-Yang; Zhi, Degui; Zhang, Kui; Gao, Guimin; Tiwari, Hemant K.; Liu, Nianjun

    2014-01-01

    For most complex diseases, the fraction of heritability that can be explained by the variants discovered from genome-wide association studies is minor. Although the so-called ‘rare variants’ (minor allele frequency [MAF] < 1%) have attracted increasing attention, they are unlikely to account for much of the ‘missing heritability’ because very few people may carry these rare variants. The genetic variants that are likely to fill in the ‘missing heritability’ include uncommon causal variants (MAF < 5%), which are generally untyped in association studies using tagging single-nucleotide polymorphisms (SNPs) or commercial SNP arrays. Developing powerful statistical methods can help to identify chromosomal regions harboring uncommon causal variants, while bypassing the genome-wide or exome-wide next-generation sequencing. In this work, we propose a haplotype kernel association test (HKAT) that is equivalent to testing the variance component of random effects for distinct haplotypes. With an appropriate weighting scheme given to haplotypes, we can further enhance the ability of HKAT to detect uncommon causal variants. With scenarios simulated according to the population genetics theory, HKAT is shown to be a powerful method for detecting chromosomal regions harboring uncommon causal variants. PMID:23740760

  18. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    NASA Astrophysics Data System (ADS)

    Xiang, Hao; Chen, Bin

    2015-02-01

    The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has superiority in incompressible flow simulation and simple programming. However, the crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor, because of particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivative, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ̇|) Δ (where τ is the shear stress tensor, μ is the viscosity, |γ̇| is the shear rate, and Δ is the strain-rate tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container filling process of a Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and different filling processes are obtained in good agreement with previous results, which verifies the validity of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (We is the Weber number, Fr is the Froude number).

  19. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    ERIC Educational Resources Information Center

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  20. Kernel bandwidth optimization in spike rate estimation.

    PubMed

    Shimazaki, Hideaki; Shinomoto, Shigeru

    2010-08-01

    A kernel smoother and a time-histogram are classical tools for estimating an instantaneous rate of spike occurrences. We recently established a method for selecting the bin width of the time-histogram, based on the principle of minimizing the mean integrated square error (MISE) between the estimated rate and the unknown underlying rate. Here we apply the same optimization principle to the kernel density estimation in selecting the width or "bandwidth" of the kernel, and further extend the algorithm to allow a variable bandwidth, in conformity with the data. The variable kernel has the potential to accurately grasp non-stationary phenomena, such as abrupt changes in the firing rate, which we often encounter in neuroscience. In order to avoid possible overfitting that may take place due to excessive freedom, we introduced a stiffness constant for bandwidth variability. Our method automatically adjusts the stiffness constant, thereby adapting to the entire set of spike data. It is revealed that the classical kernel smoother may exhibit goodness-of-fit comparable to, or even better than, that of modern sophisticated rate estimation methods, provided that the bandwidth is selected properly for a given set of spike data, according to the optimization methods presented here.
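
    A minimal sketch of the fixed-bandwidth part of this idea: pick the Gaussian kernel width that minimizes a cross-validation estimate of the MISE computed directly from spike-time differences. This is the generic least-squares cross-validation cost for an unnormalized kernel rate estimate; the paper's exact cost function, stiffness constant, and variable-bandwidth extension are not reproduced, and the spike data are synthetic:

```python
import numpy as np

def gauss(x, w):
    return np.exp(-0.5 * (x / w) ** 2) / (np.sqrt(2.0 * np.pi) * w)

def lscv_cost(spike_times, w):
    """Cost for the kernel rate estimate lambda_hat(t) = sum_i k_w(t - t_i).

    cost(w) = integral of lambda_hat(t)^2 dt - 2 * sum_{i != j} k_w(t_i - t_j),
    using the fact that two Gaussians of width w convolve to one of width sqrt(2)*w.
    """
    d = spike_times[:, None] - spike_times[None, :]
    term1 = gauss(d, np.sqrt(2.0) * w).sum()
    term2 = gauss(d, w)[~np.eye(len(spike_times), dtype=bool)].sum()
    return term1 - 2.0 * term2

def optimize_bandwidth(spike_times, candidates):
    costs = [lscv_cost(spike_times, w) for w in candidates]
    return candidates[int(np.argmin(costs))]

# Toy usage: spikes drawn from a rate with an abrupt change at t = 1 s
rng = np.random.default_rng(6)
spikes = np.sort(np.concatenate([rng.uniform(0, 1, 30), rng.uniform(1, 2, 120)]))
widths = np.logspace(-2, 0, 30)                    # candidate widths: 10 ms to 1 s
w_opt = optimize_bandwidth(spikes, widths)
rate_at = lambda t: gauss(t - spikes, w_opt).sum() # rate estimate in spikes/s
print(f"selected bandwidth: {w_opt:.3f} s, estimated rate at t = 1.5 s: {rate_at(1.5):.1f}")
```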

  1. Tomography, Adjoint Methods, Time-Reversal, and Banana-Doughnut Kernels

    NASA Astrophysics Data System (ADS)

    Tape, C.; Tromp, J.; Liu, Q.

    2004-12-01

    We demonstrate that Fréchet derivatives for tomographic inversions may be obtained based upon just two calculations for each earthquake: one calculation for the current model and a second, 'adjoint', calculation that uses time-reversed signals at the receivers as simultaneous, fictitious sources. For a given model m, we consider objective functions χ(m) that minimize differences between waveforms, traveltimes, or amplitudes. We show that the Fréchet derivatives of such objective functions may be written in the generic form δχ = ∫_V K_m(x) δ ln m(x) d³x, where δ ln m = δm/m denotes the relative model perturbation. The volumetric kernel K_m is defined throughout the model volume V and is determined by time-integrated products between spatial and temporal derivatives of the regular displacement field s and the adjoint displacement field s† obtained by using time-reversed signals at the receivers as simultaneous sources. In waveform tomography the time-reversed signal consists of differences between the data and the synthetics, in traveltime tomography it is determined by synthetic velocities, and in amplitude tomography it is controlled by synthetic displacements. For each event, the construction of the kernel K_m requires one forward calculation for the regular field s and one adjoint calculation involving the fields s and s†. For multiple events the kernels are simply summed. The final summed kernel is controlled by the distribution of events and stations and thus determines image resolution. In the case of traveltime tomography, the kernels K_m are weighted combinations of banana-doughnut kernels. We demonstrate also how amplitude anomalies may be inverted for lateral variations in elastic and anelastic structure. The theory is illustrated based upon 2D spectral-element simulations.

  2. Study on preparation method of Zanthoxylum bungeanum seeds kernel oil with zero trans-fatty acids.

    PubMed

    Liu, Tong; Yao, Shi-Yong; Yin, Zhong-Yi; Zheng, Xu-Xu; Shen, Yu

    2016-04-01

    The seed of Zanthoxylum bungeanum (Z. bungeanum) is a by-product of pepper production and is rich in unsaturated fatty acids, cellulose, and protein. The seed oil obtained from the traditional production process, by squeezing or solvent extraction, is of poor quality and cannot be used as an edible oil. In this paper, a new preparation method for Z. bungeanum seed kernel oil (ZSKO) was developed by comparing the advantages and disadvantages of alkali saponification-cold squeezing, alkali saponification-solvent extraction, and alkali saponification-supercritical fluid extraction with carbon dioxide (SFE-CO2). The results showed that alkali saponification-cold squeezing was the optimal preparation method for ZSKO, which consisted of the following steps: the Z. bungeanum seed was pretreated by alkali saponification under the conditions of 10% NaOH (w/w), a solution temperature of 80 °C, and a saponification reaction time of 45 min; the pretreated seed was separated by filtering, water washing, and overnight drying at 50 °C; repeated squeezing was then applied at 60 °C with 15% moisture content until no more oil was produced; and the ZSKO was finally recovered using a centrifuge. The produced ZSKO contained more than 90% unsaturated fatty acids and no trans-fatty acids, and was verified to be a good edible oil with low acid and peroxide values. It was demonstrated that the alkali saponification-cold squeezing process could be scaled up and applied to the industrialized production of ZSKO.

  3. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  4. The Method of Adaptive Comparative Judgement

    ERIC Educational Resources Information Center

    Pollitt, Alastair

    2012-01-01

    Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better.…

  5. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2015-12-22

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels. This makes the choice of kernels less crucial. After parameter learning, the kernel saliencies of the irrelevant kernels go to zero. Thus, it is also easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  6. Multiple Kernel Point Set Registration.

    PubMed

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels. This makes the choice of kernels less crucial. After parameter learning, the kernel saliencies of the irrelevant kernels go to zero. Thus, it is also easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  7. Effects of sample size on KERNEL home range estimates

    USGS Publications Warehouse

    Seaman, D.E.; Millspaugh, J.J.; Kernohan, Brian J.; Brundige, Gary C.; Raedeke, Kenneth J.; Gitzen, Robert A.

    1999-01-01

Kernel methods for estimating home range are being used increasingly in wildlife research, but the effect of sample size on their accuracy is not known. We used computer simulations of 10-200 points/home range and compared accuracy of home range estimates produced by fixed and adaptive kernels with the reference (REF) and least-squares cross-validation (LSCV) methods for determining the amount of smoothing. Simulated home ranges varied from simple to complex shapes created by mixing bivariate normal distributions. We used the size of the 95% home range area and the relative mean squared error of the surface fit to assess the accuracy of the kernel home range estimates. For both measures, the bias and variance approached an asymptote at about 50 observations/home range. The fixed kernel with smoothing selected by LSCV provided the least-biased estimates of the 95% home range area. All kernel methods produced similar surface fit for most simulations, but the fixed kernel with LSCV had the lowest frequency and magnitude of very poor estimates. We reviewed 101 papers published in The Journal of Wildlife Management (JWM) between 1980 and 1997 that estimated animal home ranges. A minority of these papers used nonparametric utilization distribution (UD) estimators, and most did not adequately report sample sizes. We recommend that home range studies using kernel estimates use LSCV to determine the amount of smoothing, obtain a minimum of 30 observations per animal (but preferably ≥50), and report sample sizes in published results.
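
    The fixed-kernel estimator with LSCV smoothing that the study recommends is easy to prototype. The sketch below is a minimal illustration, not the authors' code: the bivariate Gaussian kernel with a single isotropic bandwidth, the bandwidth grid, and the simulated relocations are all assumptions made for the example.

```python
import numpy as np

def gauss2(d2, h):
    """Bivariate isotropic Gaussian kernel for squared distances d2 and bandwidth h."""
    return np.exp(-d2 / (2.0 * h**2)) / (2.0 * np.pi * h**2)

def lscv_score(points, h):
    """Least-squares cross-validation score for a fixed bivariate Gaussian kernel."""
    n = len(points)
    diff = points[:, None, :] - points[None, :, :]
    d2 = np.sum(diff**2, axis=-1)                      # pairwise squared distances
    term1 = gauss2(d2, np.sqrt(2.0) * h).sum() / n**2  # integral of fhat^2 (kernel convolution)
    k = gauss2(d2, h)
    np.fill_diagonal(k, 0.0)                           # leave-one-out density at each point
    term2 = 2.0 * k.sum() / (n * (n - 1))
    return term1 - term2

def lscv_bandwidth(points, grid):
    """Pick the bandwidth on `grid` that minimises the LSCV score."""
    scores = [lscv_score(points, h) for h in grid]
    return grid[int(np.argmin(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    locs = rng.normal(size=(50, 2))                    # 50 simulated relocations
    h = lscv_bandwidth(locs, np.linspace(0.1, 1.5, 30))
    print("LSCV bandwidth:", h)
```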

  8. Kernel Optimization in Discriminant Analysis

    PubMed Central

    You, Di; Hamsici, Onur C.; Martinez, Aleix M.

    2011-01-01

    Kernel mapping is one of the most used approaches to intrinsically derive nonlinear classifiers. The idea is to use a kernel function which maps the original nonlinearly separable problem to a space of intrinsically larger dimensionality where the classes are linearly separable. A major problem in the design of kernel methods is to find the kernel parameters that make the problem linear in the mapped representation. This paper derives the first criterion that specifically aims to find a kernel representation where the Bayes classifier becomes linear. We illustrate how this result can be successfully applied in several kernel discriminant analysis algorithms. Experimental results using a large number of databases and classifiers demonstrate the utility of the proposed approach. The paper also shows (theoretically and experimentally) that a kernel version of Subclass Discriminant Analysis yields the highest recognition rates. PMID:20820072

  9. An accurate method for evaluating the kernel of the integral equation relating lift to downwash in unsteady potential flow

    NASA Technical Reports Server (NTRS)

    Desmarais, R. N.

    1982-01-01

    The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares so the method is completely automated. Exponential approximates generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
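
    The construction described above — exponentials whose exponents form a geometric sequence, with coefficients fitted by least squares — can be sketched briefly. In this illustration the target function, the number of terms, and the way the exponent multiplier is chosen (a simple scan rather than a full nonlinear fit) are assumptions, not the parameters of the report.

```python
import numpy as np

def fit_exponential_sum(f, u, n_terms, ratio, multipliers):
    """Fit f(u) ~ sum_k a_k * exp(-b * ratio**k * u), where the exponents form a
    geometric sequence; the linear coefficients a_k come from least squares and
    the exponent multiplier b is chosen by scanning the candidates in `multipliers`."""
    best = None
    for b in multipliers:
        exps = b * ratio ** np.arange(n_terms)          # geometric exponent sequence
        basis = np.exp(-np.outer(u, exps))              # design matrix, shape (m, n_terms)
        coef, *_ = np.linalg.lstsq(basis, f(u), rcond=None)
        err = np.max(np.abs(basis @ coef - f(u)))
        if best is None or err < best[0]:
            best = (err, b, coef)
    return best                                          # (max error, multiplier, coefficients)

if __name__ == "__main__":
    target = lambda u: 1.0 / np.sqrt(1.0 + u**2)         # an algebraic factor to approximate
    u = np.linspace(0.0, 20.0, 2000)
    err, b, coef = fit_exponential_sum(target, u, n_terms=8, ratio=0.7,
                                       multipliers=np.linspace(0.05, 2.0, 40))
    print(f"multiplier b = {b:.3f}, max abs error = {err:.2e}")
```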

  10. Abundance estimation of solid and liquid mixtures in hyperspectral imagery with albedo-based and kernel-based methods

    NASA Astrophysics Data System (ADS)

    Rand, Robert S.; Resmini, Ronald G.; Allen, David W.

    2016-09-01

    This study investigates methods for characterizing materials that are mixtures of granular solids, or mixtures of liquids, which may be linear or non-linear. Linear mixtures of materials in a scene are often the result of areal mixing, where the pixel size of a sensor is relatively large so they contain patches of different materials within them. Non-linear mixtures are likely to occur with microscopic mixtures of solids, such as mixtures of powders, or mixtures of liquids, or wherever complex scattering of light occurs. This study considers two approaches for use as generalized methods for un-mixing pixels in a scene that may be linear or non-linear. One method is based on earlier studies that indicate non-linear mixtures in reflectance space are approximately linear in albedo space. This method converts reflectance to single-scattering albedo (SSA) according to Hapke theory assuming bidirectional scattering at nadir look angles and uses a constrained linear model on the computed albedo values. The other method is motivated by the same idea, but uses a kernel that seeks to capture the linear behavior of albedo in non-linear mixtures of materials. The behavior of the kernel method can be highly dependent on the value of a parameter, gamma, which provides flexibility for the kernel method to respond to both linear and non-linear phenomena. Our study pays particular attention to this parameter for responding to linear and non-linear mixtures. Laboratory experiments on both granular solids and liquid solutions are performed with scenes of hyperspectral data.

  11. Nondestructive In Situ Measurement Method for Kernel Moisture Content in Corn Ear.

    PubMed

    Zhang, Han-Lin; Ma, Qin; Fan, Li-Feng; Zhao, Peng-Fei; Wang, Jian-Xu; Zhang, Xiao-Dong; Zhu, De-Hai; Huang, Lan; Zhao, Dong-Jie; Wang, Zhong-Yi

    2016-12-20

    Moisture content is an important factor in corn breeding and cultivation. A corn breed with low moisture at harvest is beneficial for mechanical operations, reduces drying and storage costs after harvesting and, thus, reduces energy consumption. Nondestructive measurement of kernel moisture in an intact corn ear allows us to select corn varieties with seeds that have high dehydration speeds in the mature period. We designed a sensor using a ring electrode pair for nondestructive measurement of the kernel moisture in a corn ear based on a high-frequency detection circuit. Through experiments using the effective scope of the electrodes' electric field, we confirmed that the moisture in the corn cob has little effect on corn kernel moisture measurement. Before the sensor was applied in practice, we investigated temperature and conductivity effects on the output impedance. Results showed that the temperature was linearly related to the output impedance (both real and imaginary parts) of the measurement electrodes and the detection circuit's output voltage. However, the conductivity has a non-monotonic dependence on the output impedance (both real and imaginary parts) of the measurement electrodes and the output voltage of the high-frequency detection circuit. Therefore, we reduced the effect of conductivity on the measurement results through measurement frequency selection. Corn moisture measurement results showed a quadric regression between corn ear moisture and the imaginary part of the output impedance, and there is also a quadric regression between corn kernel moisture and the high-frequency detection circuit output voltage at 100 MHz. In this study, two corn breeds were measured using our sensor and gave R² values for the quadric regression equation of 0.7853 and 0.8496.

  12. Nondestructive In Situ Measurement Method for Kernel Moisture Content in Corn Ear

    PubMed Central

    Zhang, Han-Lin; Ma, Qin; Fan, Li-Feng; Zhao, Peng-Fei; Wang, Jian-Xu; Zhang, Xiao-Dong; Zhu, De-Hai; Huang, Lan; Zhao, Dong-Jie; Wang, Zhong-Yi

    2016-01-01

    Moisture content is an important factor in corn breeding and cultivation. A corn breed with low moisture at harvest is beneficial for mechanical operations, reduces drying and storage costs after harvesting and, thus, reduces energy consumption. Nondestructive measurement of kernel moisture in an intact corn ear allows us to select corn varieties with seeds that have high dehydration speeds in the mature period. We designed a sensor using a ring electrode pair for nondestructive measurement of the kernel moisture in a corn ear based on a high-frequency detection circuit. Through experiments using the effective scope of the electrodes’ electric field, we confirmed that the moisture in the corn cob has little effect on corn kernel moisture measurement. Before the sensor was applied in practice, we investigated temperature and conductivity effects on the output impedance. Results showed that the temperature was linearly related to the output impedance (both real and imaginary parts) of the measurement electrodes and the detection circuit’s output voltage. However, the conductivity has a non-monotonic dependence on the output impedance (both real and imaginary parts) of the measurement electrodes and the output voltage of the high-frequency detection circuit. Therefore, we reduced the effect of conductivity on the measurement results through measurement frequency selection. Corn moisture measurement results showed a quadric regression between corn ear moisture and the imaginary part of the output impedance, and there is also a quadric regression between corn kernel moisture and the high-frequency detection circuit output voltage at 100 MHz. In this study, two corn breeds were measured using our sensor and gave R2 values for the quadric regression equation of 0.7853 and 0.8496. PMID:27999404

  13. A simple method for computing the relativistic Compton scattering kernel for radiative transfer

    NASA Technical Reports Server (NTRS)

    Prasad, M. K.; Kershaw, D. S.; Beason, J. D.

    1986-01-01

    Correct computation of the Compton scattering kernel (CSK), defined to be the Klein-Nishina differential cross section averaged over a relativistic Maxwellian electron distribution, is reported. The CSK is analytically reduced to a single integral, which can then be rapidly evaluated using a power series expansion, asymptotic series, and rational approximation for sigma(s). The CSK calculation has application to production codes that aim at understanding certain astrophysical, laser fusion, and nuclear weapons effects phenomena.

  14. Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases

    SciTech Connect

    Archibald, Richard K; Fann, George I; Shelton Jr, William Allison

    2011-01-01

    We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.

  15. A new method for evaluation of the resistance to rice kernel cracking based on moisture absorption in brown rice under controlled conditions

    PubMed Central

    Hayashi, Takeshi; Kobayashi, Asako; Tomita, Katsura; Shimizu, Toyohiro

    2015-01-01

    We developed and evaluated the effectiveness of a new method to detect differences among rice cultivars in their resistance to kernel cracking. The method induces kernel cracking under laboratory controlled condition by moisture absorption to brown rice. The optimal moisture absorption conditions were determined using two japonica cultivars, ‘Nipponbare’ as a cracking-resistant cultivar and ‘Yamahikari’ as a cracking-susceptible cultivar: 12% initial moisture content of the brown rice, a temperature of 25°C, a duration of 5 h, and only a single absorption treatment. We then evaluated the effectiveness of these conditions using 12 japonica cultivars. The proportion of cracked kernels was significantly correlated with the mean 10-day maximum temperature after heading. In addition, the correlation between the proportions of cracked kernels in the 2 years of the study was higher than that for values obtained using the traditional late harvest method. The new moisture absorption method could stably evaluate the resistance to kernel cracking, and will help breeders to develop future cultivars with less cracking of the kernels. PMID:26719740

  16. Power Series Approximation for the Correlation Kernel Leading to Kohn-Sham Methods Combining Accuracy, Computational Efficiency, and General Applicability

    NASA Astrophysics Data System (ADS)

    Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas

    2016-09-01

    A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.

  17. An Ensemble Approach to Building Mercer Kernels with Prior Information

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
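
    One way to realize a kernel of this flavor is to average posterior-agreement Gram matrices over an ensemble of bootstrap-fitted mixture models; each term is an inner product of posterior vectors, so the result is symmetric positive semidefinite (Mercer). This is only a hedged sketch, not the AUTOBAYES-generated implementation: the use of scikit-learn Gaussian mixtures, the bootstrap resampling, and all parameter values are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_density_kernel(X, n_models=10, n_components=3, seed=0):
    """Ensemble Mercer kernel: average inner product of mixture-posterior vectors.
    Each bootstrap replicate fits a Gaussian mixture; K is symmetric PSD because
    every term is a Gram matrix of posterior feature vectors."""
    rng = np.random.default_rng(seed)
    n = len(X)
    K = np.zeros((n, n))
    for m in range(n_models):
        idx = rng.integers(0, n, size=n)                  # bootstrap resample
        gmm = GaussianMixture(n_components=n_components, random_state=m).fit(X[idx])
        P = gmm.predict_proba(X)                          # (n, n_components) posteriors
        K += P @ P.T
    return K / n_models

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
    K = mixture_density_kernel(X)
    print("kernel shape:", K.shape, "min eigenvalue:", np.linalg.eigvalsh(K).min().round(6))
```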

  18. Adaptive density estimator for galaxy surveys

    NASA Astrophysics Data System (ADS)

    Saar, Enn

    2016-10-01

Galaxy number or luminosity density serves as a basis for many structure classification algorithms. Several methods are used to estimate this density. Among them, kernel methods probably have the best statistical properties and also allow us to estimate the local sample errors of the estimate. We introduce a kernel density estimator with an adaptive data-driven anisotropic kernel, describe its properties and demonstrate the wealth of additional information it gives us about the local properties of the galaxy distribution.
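
    A data-driven anisotropic kernel can be illustrated by giving every data point a Gaussian kernel whose covariance is estimated from its nearest neighbours, so the kernel stretches along the direction of strongest local correlation. This generic sketch is not the estimator of the paper; the neighbour count, the global scale factor, and the toy data are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_anisotropic_kde(data, query, k_neighbors=15, scale=0.5):
    """Density estimate where each data point carries a Gaussian kernel whose
    covariance is the (scaled) sample covariance of its k nearest neighbours."""
    n, d = data.shape
    tree = cKDTree(data)
    dens = np.zeros(len(query))
    for x in data:
        _, idx = tree.query(x, k=k_neighbors)
        cov = np.cov(data[idx].T) * scale**2 + 1e-9 * np.eye(d)  # local anisotropic bandwidth
        inv, det = np.linalg.inv(cov), np.linalg.det(cov)
        diff = query - x
        q = np.einsum('ij,jk,ik->i', diff, inv, diff)            # squared Mahalanobis distances
        dens += np.exp(-0.5 * q) / np.sqrt((2 * np.pi) ** d * det)
    return dens / n

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=300)
    grid = np.array([[0.0, 0.0], [2.0, 2.0]])
    print(adaptive_anisotropic_kde(pts, grid))
```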

  19. Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings

    NASA Astrophysics Data System (ADS)

    Slavakis, Konstantinos; Theodoridis, Sergios

    2008-12-01

    Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.

  20. An automatic locally-adaptive method to estimate heavily-tailed breakthrough curves from particle distributions

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Fernàndez-Garcia, Daniel

    2013-09-01

Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate a specific portion of the BTCs. Although global methods offer a valid approach to estimate the early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore, a new method is proposed that combines the strengths of both KDE approaches. The proposed approach is universal and only needs one parameter (α), which depends slightly on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5.
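
    The locally adaptive half of that combination is typically an Abramson-type bandwidth, which widens kernels where the pilot density is low (the tails) and narrows them near the peak. The sketch below shows only that classical ingredient, not the α-based blending proposed in the paper; the pilot bandwidth and the synthetic arrival times are assumptions.

```python
import numpy as np

def adaptive_btc(arrival_times, t_grid, h_pilot):
    """Reconstruct a breakthrough curve from particle arrival times using
    Abramson-style locally adaptive bandwidths: wider kernels where the pilot
    density is low (late-time tail), narrower kernels near the peak."""
    t = np.asarray(arrival_times)
    # pilot (global, fixed-bandwidth) density evaluated at the particle positions
    pilot = np.mean(np.exp(-0.5 * ((t[:, None] - t[None, :]) / h_pilot) ** 2), axis=1) \
            / (np.sqrt(2 * np.pi) * h_pilot)
    g = np.exp(np.mean(np.log(pilot)))                 # geometric mean of the pilot density
    h_i = h_pilot * np.sqrt(g / pilot)                 # per-particle local bandwidths
    u = (t_grid[:, None] - t[None, :]) / h_i[None, :]
    return np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h_i[None, :]), axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    times = rng.lognormal(mean=1.0, sigma=0.8, size=2000)   # heavy-tailed arrivals
    grid = np.linspace(0.0, 30.0, 300)
    btc = adaptive_btc(times, grid, h_pilot=0.3)
    print("BTC mass on the grid:", (btc.sum() * (grid[1] - grid[0])).round(3))
```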

  1. Local Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  2. KERNEL PHASE IN FIZEAU INTERFEROMETRY

    SciTech Connect

    Martinache, Frantz

    2010-11-20

    The detection of high contrast companions at small angular separation appears feasible in conventional direct images using the self-calibration properties of interferometric observable quantities. The friendly notion of closure phase, which is key to the recent observational successes of non-redundant aperture masking interferometry used with adaptive optics, appears to be one example of a wide family of observable quantities that are not contaminated by phase noise. In the high-Strehl regime, soon to be available thanks to the coming generation of extreme adaptive optics systems on ground-based telescopes, and already available from space, closure phase like information can be extracted from any direct image, even taken with a redundant aperture. These new phase-noise immune observable quantities, called kernel phases, are determined a priori from the knowledge of the geometry of the pupil only. Re-analysis of archive data acquired with the Hubble Space Telescope NICMOS instrument using this new kernel-phase algorithm demonstrates the power of the method as it clearly detects and locates with milliarcsecond precision a known companion to a star at angular separation less than the diffraction limit.
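
    The linear-algebra core of the idea is that the measured Fourier phases behave, to first order, as Φ ≈ Φ₀ + A·φ, where A maps pupil-plane phase errors to Fourier phases; any operator K whose rows span the left null space of A gives observables K·Φ = K·Φ₀ that are immune to pupil phase noise. The sketch below uses a made-up toy transfer matrix, not the NICMOS pupil model, and omits the construction of A from a real aperture geometry.

```python
import numpy as np
from scipy.linalg import null_space

def kernel_phase_operator(A):
    """Rows of K span the left null space of the transfer matrix A, so that
    K @ (phi0 + A @ pupil_errors) = K @ phi0: phase-noise-immune observables."""
    return null_space(A.T).T

if __name__ == "__main__":
    # Toy transfer matrix for a small, partly redundant aperture (assumed numbers).
    A = np.array([[ 1, -1,  0,  0],
                  [ 0,  1, -1,  0],
                  [ 0,  0,  1, -1],
                  [ 1,  0, -1,  0],
                  [ 0,  1,  0, -1],
                  [ 1,  0,  0, -1]], dtype=float)
    K = kernel_phase_operator(A)
    rng = np.random.default_rng(4)
    phi0 = rng.normal(size=6)                    # intrinsic (target) Fourier phases
    pupil_err = rng.normal(scale=0.3, size=4)    # wavefront phase errors
    measured = phi0 + A @ pupil_err
    print("kernel phases:      ", np.round(K @ measured, 6))
    print("noise-free kernels: ", np.round(K @ phi0, 6))  # identical to the line above
```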

  3. Kernel principal component analysis residual diagnosis (KPCARD): An automated method for cosmic ray artifact removal in Raman spectra.

    PubMed

    Li, Boyan; Calvet, Amandine; Casamayou-Boucau, Yannick; Ryder, Alan G

    2016-03-24

A new, fully automated, rapid method, referred to as kernel principal component analysis residual diagnosis (KPCARD), is proposed for removing cosmic ray artifacts (CRAs) in Raman spectra, and in particular for large Raman imaging datasets. KPCARD identifies CRAs via a statistical analysis of the residuals obtained at each wavenumber in the spectra. The method utilizes the stochastic nature of CRAs; therefore, the most significant components in principal component analysis (PCA) of large numbers of Raman spectra should not contain any CRAs. The process worked by first implementing kernel PCA (kPCA) on all the Raman mapping data and second accurately estimating the inter- and intra-spectrum noise to generate two threshold values. CRA identification was then achieved by using the threshold values to evaluate the residuals for each spectrum and assess if a CRA was present. CRA correction was achieved by spectral replacement, where the nearest neighbor (NN) spectrum (the one most spectroscopically similar to the CRA-contaminated spectrum) and the principal components (PCs) obtained by kPCA were both used to generate a robust best-fit curve to the CRA-contaminated spectrum. This best-fit spectrum then replaced the CRA-contaminated spectrum in the dataset. KPCARD efficacy was demonstrated by using simulated data and real Raman spectra collected from solid-state materials. The results showed that KPCARD was fast (<1 min per 8400 spectra), accurate, precise, and suitable for the automated correction of very large (>1 million) Raman datasets.

  4. Domain adaptive boosting method and its applications

    NASA Astrophysics Data System (ADS)

    Geng, Jie; Miao, Zhenjiang

    2015-03-01

    Differences of data distributions widely exist among datasets, i.e., domains. For many pattern recognition, nature language processing, and content-based analysis systems, a decrease in performance caused by the domain differences between the training and testing datasets is still a notable problem. We propose a domain adaptation method called domain adaptive boosting (DAB). It is based on the AdaBoost approach with extensions to cover the domain differences between the source and target domains. Two main stages are contained in this approach: source-domain clustering and source-domain sample selection. By iteratively adding the selected training samples from the source domain, the discrimination model is able to achieve better domain adaptation performance based on a small validation set. The DAB algorithm is suitable for the domains with large scale samples and easy to extend for multisource adaptation. We implement this method on three computer vision systems: the skin detection model in single images, the video concept detection model, and the object classification model. In the experiments, we compare the performances of several commonly used methods and the proposed DAB. Under most situations, the DAB is superior.

  5. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively timely, computationally expensive, grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provided a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large error regions to attract other points and points in the low error region to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial step, is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and the last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration

  6. The implementation of binned Kernel density estimation to determine open clusters' proper motions: validation of the method

    NASA Astrophysics Data System (ADS)

    Priyatikanto, R.; Arifyanto, M. I.

    2015-01-01

Stellar membership determination of an open cluster is an important step before further analysis. Basically, there are two classes of membership determination methods: parametric and non-parametric. In this study, an alternative non-parametric method based on Binned Kernel Density Estimation that accounts for measurement errors (simply called BKDE-e) is proposed. This method is applied to proper motion data to determine the cluster's membership kinematically and to estimate the average proper motion of the cluster. Monte Carlo simulations show that the average proper motion determined using this proposed method is statistically more accurate than that from an ordinary Kernel Density Estimator (KDE). By including measurement errors in the calculation, the mode location of the resulting density estimate is less sensitive to non-physical or stochastic fluctuations than ordinary KDE, which excludes measurement errors. For a typical mean measurement error of 7 mas/yr, BKDE-e suppresses the potential for miscalculation by a factor of two compared to KDE. With a median accuracy of about 93%, the BKDE-e method is comparable in accuracy to a parametric method (the modified Sanders algorithm). Application to real data from The Fourth USNO CCD Astrograph Catalog (UCAC4), in particular to NGC 2682, is also performed. The mode of the member-star distribution on the vector point diagram is located at μ_α cos δ = -9.94 ± 0.85 mas/yr and μ_δ = -4.92 ± 0.88 mas/yr. Although the BKDE-e performance does not surpass the parametric approach, it offers a new way of doing membership analysis, expandable to astrometric and photometric data or even to binary cluster searches.

  7. Prioritizing individual genetic variants after kernel machine testing using variable selection.

    PubMed

    He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C

    2016-12-01

Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level and does not directly indicate which SNP(s) in an identified set actually drive the associations. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs, and fills the gap between SNP set analysis and biological functional studies. Both simulation studies and a real data application are used to demonstrate the proposed approach.
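
    The two kernels named in the abstract have simple closed forms for genotypes coded 0/1/2: the linear kernel is an inner product of genotype vectors, and the IBS kernel counts shared alleles per SNP. The sketch below uses made-up genotypes and no variant weighting, which are assumptions made for illustration.

```python
import numpy as np

def linear_kernel(G):
    """Linear kernel on a genotype matrix G (n subjects x p SNPs, coded 0/1/2)."""
    return G @ G.T

def ibs_kernel(G):
    """Identity-by-state kernel: average number of shared alleles per SNP,
    scaled to [0, 1] (two shared alleles per SNP is the maximum)."""
    n, p = G.shape
    diff = np.abs(G[:, None, :] - G[None, :, :])   # 0, 1 or 2 differing alleles per SNP
    return (2 * p - diff.sum(axis=-1)) / (2 * p)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    G = rng.integers(0, 3, size=(6, 10))           # 6 subjects, 10 SNPs
    print("linear kernel:\n", linear_kernel(G))
    print("IBS kernel:\n", np.round(ibs_kernel(G), 3))
```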

  8. Modeling particle-laden turbulent flows with two-way coupling using a high-order kernel density function method

    NASA Astrophysics Data System (ADS)

    Smith, Timothy; Lu, Xiaoyi; Ranjan, Reetesh; Pantano, Carlos

    2016-11-01

    We describe a two-way coupled turbulent dispersed flow computational model using a high-order kernel density function (KDF) method. The carrier-phase solution is obtained using a high-order spatial and temporal incompressible Navier-Stokes solver while the KDF dispersed-phase solver uses the high-order Legendre WENO method. The computational approach is used to model carrier-phase turbulence modulation by the dispersed phase, and particle dispersion by turbulence as a function of momentum coupling strength (particle loading) and number of KDF basis functions. The use of several KDF's allows the model to capture statistical effects of particle trajectory crossing to high degree. Details of the numerical implementation and the coupling between the incompressible flow and dispersed-phase solvers will be discussed, and results at a range of Reynolds numbers will be presented. This work was supported by the National Science Foundation under Grant DMS-1318161.

  9. Robust signal reconstruction for condition monitoring of industrial components via a modified Auto Associative Kernel Regression method

    NASA Astrophysics Data System (ADS)

    Baraldi, Piero; Di Maio, Francesco; Turati, Pietro; Zio, Enrico

    2015-08-01

In this work, we propose a modification of the traditional Auto Associative Kernel Regression (AAKR) method that enhances the signal reconstruction robustness, i.e., the capability of reconstructing abnormal signals to the values expected in normal conditions. The modification is based on the definition of a new procedure for the computation of the similarity between the present measurements and the historical patterns used to perform the signal reconstructions. The underlying conjecture for this is that malfunctions causing variations of a small number of signals are more frequent than those causing variations of a large number of signals. The proposed method has been applied to real normal condition data collected in an industrial plant for energy production. Its performance has been verified on synthetic and real malfunctions. The obtained results show an improvement in the early detection of abnormal conditions and the correct identification of the signals responsible for triggering the detection.
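
    The traditional AAKR step that the modification builds on reconstructs the expected (normal-condition) signal vector as a kernel-weighted average of historical memory patterns, with weights given by a Gaussian kernel of the distance between the query and each pattern. The sketch below shows only this baseline step; the modified similarity measure of the paper is not reproduced, and the bandwidth and toy data are assumptions.

```python
import numpy as np

def aakr_reconstruct(memory, query, bandwidth):
    """Traditional AAKR: reconstruct the expected (normal-condition) signal values
    as a Gaussian-kernel weighted average of the historical memory patterns."""
    d2 = np.sum((memory - query) ** 2, axis=1)            # squared distances to memory vectors
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w /= w.sum()
    return w @ memory                                      # weighted average pattern

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    normal_data = rng.normal(loc=[10.0, 50.0, 1.0], scale=0.2, size=(200, 3))
    faulty = np.array([10.1, 55.0, 1.0])                   # signal 2 has drifted abnormally
    print("reconstruction:", np.round(aakr_reconstruct(normal_data, faulty, 0.5), 2))
```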

  10. Adaptive Method for Nonsmooth Nonnegative Matrix Factorization.

    PubMed

    Yang, Zuyuan; Xiang, Yong; Xie, Kan; Lai, Yue

    2017-04-01

Nonnegative matrix factorization (NMF) is an emerging tool for meaningful low-rank matrix representation. In NMF, explicit constraints are usually required, such that NMF generates desired products (or factorizations), especially when the products have significant sparseness features. It is known that the ability of NMF to learn sparse representations can be improved by embedding a smoothness factor between the products. Motivated by this result, we propose an adaptive nonsmooth NMF (Ans-NMF) method in this paper. In our method, the embedded factor is obtained by using a data-related approach, so it matches well with the underlying products, implying a superior faithfulness of the representations. Besides, owing to the use of an adaptive selection scheme for this factor, the sparseness of the products can be separately constrained, leading to wider applicability and interpretability. Furthermore, since the adaptive selection scheme is carried out by solving a series of typical linear programming problems, it can be easily implemented. Simulations using computer-generated data and real-world data show the advantages of the proposed Ans-NMF method over the state-of-the-art methods.
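
    The smoothness factor referred to above is, in the classical nonsmooth NMF formulation, a fixed matrix S inserted between the factors, V ≈ W S H, with S = (1 − θ)I + (θ/q)·11ᵀ. The sketch below implements only that fixed-θ baseline with standard multiplicative updates; the adaptive, data-related selection of the factor that defines Ans-NMF is not reproduced, and all parameter values are assumptions.

```python
import numpy as np

def nsnmf(V, rank, theta=0.5, iters=500, seed=0):
    """Classical nonsmooth NMF: a smoothing matrix S is inserted between the factors,
    V ~ W S H, which pushes sparseness into W and H (multiplicative updates)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W, H = rng.random((n, rank)), rng.random((rank, m))
    S = (1 - theta) * np.eye(rank) + (theta / rank) * np.ones((rank, rank))
    eps = 1e-12
    for _ in range(iters):
        Ws = W @ S
        H *= (Ws.T @ V) / (Ws.T @ Ws @ H + eps)    # update H with the smoothed W
        Hs = S @ H
        W *= (V @ Hs.T) / (W @ Hs @ Hs.T + eps)    # update W with the smoothed H
    return W, S, H

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    V = np.abs(rng.random((30, 3)) @ rng.random((3, 40)))   # synthetic nonnegative data
    W, S, H = nsnmf(V, rank=3)
    print("relative reconstruction error:", np.linalg.norm(V - W @ S @ H) / np.linalg.norm(V))
```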

  11. Fully adaptive algorithms for multivariate integral equations using the non-standard form and multiwavelets with applications to the Poisson and bound-state Helmholtz kernels in three dimensions

    NASA Astrophysics Data System (ADS)

    Frediani, Luca; Fossgaard, Eirik; Flå, Tor; Ruud, Kenneth

    2013-07-01

We have developed and implemented a general formalism for fast numerical solution of time-independent linear partial differential equations as well as integral equations through the application of numerically separable integral operators in d ≥ 1 dimensions using the non-standard (NS) form. The proposed formalism is universal, compact and oriented towards the practical implementation into a working code using multiwavelets. The formalism is applied to the case of Poisson and bound-state Helmholtz operators in d = 3. Our algorithms are fully adaptive in the sense that the grid supporting each function is obtained on the fly while the function is being computed. In particular, when the function g = O f is obtained by applying an integral operator O, the corresponding grid is not obtained by transferring the grid from the input function f. This aspect has significant implications that will be discussed in the numerical section. The operator kernels are represented in a separated form with finite but arbitrary precision using Gaussian functions. Such a representation combined with the NS form allows us to build a sparse, banded representation of Green's operator kernel. We have implemented a code for the application of such operators in a separated NS form to a multivariate function in a finite but, in principle, arbitrary number of dimensions. The error of the method is controlled, while the low complexity of the numerical algorithm is kept. The implemented code explicitly computes all the 2^(2d) components of the d-dimensional operator. Our algorithms are described in detail in the paper through pseudo-code examples. The final goal of our work is to be able to apply this method to build a fast and accurate Kohn-Sham solver for density functional theory.

  12. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  13. Parallel adaptive wavelet collocation method for PDEs

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  14. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    SciTech Connect

    Huang, J; Followill, D; Howell, R; Liu, X; Mirkovic, D; Stingo, F; Kry, S

    2015-06-15

Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE’s monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus

  15. An Evaluation of the Kernel Equating Method: A Special Study with Pseudotests Constructed from Real Test Data. Research Report. ETS RR-06-02

    ERIC Educational Resources Information Center

    von Davier, Alina A.; Holland, Paul W.; Livingston, Samuel A.; Casabianca, Jodi; Grant, Mary C.; Martin, Kathleen

    2006-01-01

    This study examines how closely the kernel equating (KE) method (von Davier, Holland, & Thayer, 2004a) approximates the results of other observed-score equating methods--equipercentile and linear equatings. The study used pseudotests constructed of item responses from a real test to simulate three equating designs: an equivalent groups (EG)…

  16. Adaptive envelope protection methods for aircraft

    NASA Astrophysics Data System (ADS)

    Unnikrishnan, Suraj

    Carefree handling refers to the ability of a pilot to operate an aircraft without the need to continuously monitor aircraft operating limits. At the heart of all carefree handling or maneuvering systems, also referred to as envelope protection systems, are algorithms and methods for predicting future limit violations. Recently, envelope protection methods that have gained more acceptance, translate limit proximity information to its equivalent in the control channel. Envelope protection algorithms either use very small prediction horizon or are static methods with no capability to adapt to changes in system configurations. Adaptive approaches maximizing prediction horizon such as dynamic trim, are only applicable to steady-state-response critical limit parameters. In this thesis, a new adaptive envelope protection method is developed that is applicable to steady-state and transient response critical limit parameters. The approach is based upon devising the most aggressive optimal control profile to the limit boundary and using it to compute control limits. Pilot-in-the-loop evaluations of the proposed approach are conducted at the Georgia Tech Carefree Maneuver lab for transient longitudinal hub moment limit protection. Carefree maneuvering is the dual of carefree handling in the realm of autonomous Uninhabited Aerial Vehicles (UAVs). Designing a flight control system to fully and effectively utilize the operational flight envelope is very difficult. With the increasing role and demands for extreme maneuverability there is a need for developing envelope protection methods for autonomous UAVs. In this thesis, a full-authority automatic envelope protection method is proposed for limit protection in UAVs. The approach uses adaptive estimate of limit parameter dynamics and finite-time horizon predictions to detect impending limit boundary violations. Limit violations are prevented by treating the limit boundary as an obstacle and by correcting nominal control

  17. Ensemble transform sensitivity method for adaptive observations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Xie, Yuanfu; Wang, Hongli; Chen, Dehui; Toth, Zoltan

    2016-01-01

    The Ensemble Transform (ET) method has been shown to be useful in providing guidance for adaptive observation deployment. It predicts forecast error variance reduction for each possible deployment using its corresponding transformation matrix in an ensemble subspace. In this paper, a new ET-based sensitivity (ETS) method, which calculates the gradient of forecast error variance reduction in terms of analysis error variance reduction, is proposed to specify regions for possible adaptive observations. ETS is a first order approximation of the ET; it requires just one calculation of a transformation matrix, increasing computational efficiency (60%-80% reduction in computational cost). An explicit mathematical formulation of the ETS gradient is derived and described. Both the ET and ETS methods are applied to the Hurricane Irene (2011) case and a heavy rainfall case for comparison. The numerical results imply that the sensitive areas estimated by the ETS and ET are similar. However, ETS is much more efficient, particularly when the resolution is higher and the number of ensemble members is larger.

  18. Generalized hidden-mapping ridge regression, knowledge-leveraged inductive transfer learning for neural networks, fuzzy systems and kernel methods.

    PubMed

    Deng, Zhaohong; Choi, Kup-Sze; Jiang, Yizhang; Wang, Shitong

    2014-12-01

    Inductive transfer learning has attracted increasing attention for the training of effective model in the target domain by leveraging the information in the source domain. However, most transfer learning methods are developed for a specific model, such as the commonly used support vector machine, which makes the methods applicable only to the adopted models. In this regard, the generalized hidden-mapping ridge regression (GHRR) method is introduced in order to train various types of classical intelligence models, including neural networks, fuzzy logical systems and kernel methods. Furthermore, the knowledge-leverage based transfer learning mechanism is integrated with GHRR to realize the inductive transfer learning method called transfer GHRR (TGHRR). Since the information from the induced knowledge is much clearer and more concise than that from the data in the source domain, it is more convenient to control and balance the similarity and difference of data distributions between the source and target domains. The proposed GHRR and TGHRR algorithms have been evaluated experimentally by performing regression and classification on synthetic and real world datasets. The results demonstrate that the performance of TGHRR is competitive with or even superior to existing state-of-the-art inductive transfer learning algorithms.

  19. Weighted Bergman kernels and virtual Bergman kernels

    NASA Astrophysics Data System (ADS)

    Roos, Guy

    2005-12-01

    We introduce the notion of "virtual Bergman kernel" and apply it to the computation of the Bergman kernel of "domains inflated by Hermitian balls", in particular when the base domain is a bounded symmetric domain.

  20. Adaptive method with intercessory feedback control for an intelligent agent

    DOEpatents

    Goldsmith, Steven Y.

    2004-06-22

    An adaptive architecture method with feedback control for an intelligent agent provides for adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. An adaptive architecture method with feedback control for multiple intelligent agents provides for coordinating and adaptively integrating reflexive and deliberative responses to a stimulus according to a goal. Re-programming of the adaptive architecture is through a nexus which coordinates reflexive and deliberator components.

  1. Adaptive Accommodation Control Method for Complex Assembly

    NASA Astrophysics Data System (ADS)

    Kang, Sungchul; Kim, Munsang; Park, Shinsuk

Robotic systems have been used to automate assembly tasks in manufacturing and in teleoperation. Conventional robotic systems, however, have been ineffective in controlling contact force in multiple contact states of complex assembly that involves interactions between complex-shaped parts. Unlike robots, humans excel at complex assembly tasks by utilizing their intrinsic impedance, force and torque sensation, and tactile contact cues. By examining the human behavior in assembling complex parts, this study proposes a novel geometry-independent control method for robotic assembly using an adaptive accommodation (or damping) algorithm. Two important conditions for complex assembly, target approachability and bounded contact force, can be met by the proposed control scheme. It generates target-approachable motion that leads the object to move closer to a desired target position, while the contact force is kept under a predetermined value. Experimental results from complex assembly tests have confirmed the feasibility and applicability of the proposed method.
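
    The accommodation (damping) idea can be shown as a one-dimensional toy loop: the commanded velocity approaches the target but shrinks toward zero as the measured contact force approaches its allowed bound, so the motion stays target-approachable while the force stays bounded. The gains, force limit, and spring-like contact model below are assumptions for illustration, not the controller of the paper.

```python
def accommodation_step(x, x_target, f_contact, f_max, k_approach=0.5, k_force=0.05):
    """Accommodation (damping) control: move toward the target, but let the commanded
    velocity shrink to zero as the measured contact force approaches its allowed bound."""
    v_free = k_approach * (x_target - x)       # nominal approach velocity
    v_allowed = k_force * (f_max - f_contact)  # velocity budget left by the force bound
    return min(v_free, v_allowed)

if __name__ == "__main__":
    x, x_target = 0.0, 1.0                     # the target lies behind a compliant surface
    wall, stiffness, f_max, dt = 0.8, 400.0, 10.0, 0.01
    for _ in range(600):
        f = stiffness * max(x - wall, 0.0)     # simple spring-like contact force model
        x += dt * accommodation_step(x, x_target, f, f_max)
    print(f"final position {x:.3f} (target {x_target}), "
          f"contact force {stiffness * max(x - wall, 0.0):.1f} N")
```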

  2. Total phenolics, antioxidant activity, and functional properties of 'Tommy Atkins' mango peel and kernel as affected by drying methods.

    PubMed

    Sogi, Dalbir Singh; Siddiq, Muhammad; Greiby, Ibrahim; Dolan, Kirk D

    2013-12-01

Mango processing produces a significant amount of waste (peels and kernels) that can be utilized for the production of value-added ingredients for various food applications. Mango peel and kernel were dried using different techniques, such as freeze drying, hot air, vacuum and infrared drying. Freeze dried mango waste had higher antioxidant properties than those from other techniques. The ORAC values of peel and kernel varied from 418-776 and 1547-1819 μmol TE/g db. The solubility of freeze dried peel and kernel powder was the highest. The water and oil absorption indices of mango waste powders ranged from 1.83 to 6.05 and from 1.66 to 3.10, respectively. Freeze dried powders had the lowest bulk density values among the different techniques tried. The cabinet dried waste powders can be potentially used in food products to enhance their nutritional and antioxidant properties.

  3. Adapting implicit methods to parallel processors

    SciTech Connect

    Reeves, L.; McMillin, B.; Okunbor, D.; Riggins, D.

    1994-12-31

When numerically solving many types of partial differential equations, it is advantageous to use implicit methods because of their better stability and more flexible parameter choice (e.g., larger time steps). However, since implicit methods usually require simultaneous knowledge of the entire computational domain, these methods are difficult to implement directly on distributed memory parallel processors. This leads to infrequent use of implicit methods on parallel/distributed systems. The usual implementation of implicit methods is inefficient due to the nature of parallel systems where it is common to take the computational domain and distribute the grid points over the processors so as to maintain a relatively even workload per processor. This creates a problem at the locations in the domain where adjacent points are not on the same processor. In order for the values at these points to be calculated, messages have to be exchanged between the corresponding processors. Without special adaptation, this will result in idle processors during part of the computation, and as the number of idle processors increases, the effective speedup gained from using a parallel processor decreases.

  4. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  5. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  6. Adaptive filtering for the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Marié, Simon; Gloerfelt, Xavier

    2017-03-01

In this study, a new selective filtering technique is proposed for the Lattice Boltzmann Method. This technique is based on an adaptive implementation of the selective filter coefficient σ. The proposed model makes the latter coefficient dependent on the shear stress in order to confine the spatial filtering to sheared regions where numerical instabilities may occur. Different parameters are tested on 2D test-cases sensitive to numerical stability and on a 3D decaying Taylor-Green vortex. The results are compared to the classical static filtering technique and to the use of a standard subgrid-scale model and give significant improvements, in particular for low-order filters consistent with the LBM stencil.

  7. Combustion Control of Diesel Engine using Feedback Error Learning with Kernel Online Learning Approach

    NASA Astrophysics Data System (ADS)

    Widayaka, Elfady Satya; Ohmori, Hiromitsu

    2016-09-01

This paper shows how to design a Multivariable Model Reference Adaptive Control System (MRACS) for the “Tokyo University discrete-time engine model” proposed by Yasuda et al. (2014). The controller configuration has the structure of “Feedback Error Learning” (FEL), and the adaptive law is based on a kernel method. Simulation results indicate that “kernelized” adaptive controllers can improve the tracking performance, the speed of convergence, and the robustness to disturbances.
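
    A generic kernel-based adaptive law of the kind referred to is kernel least-mean-squares (KLMS), in which the learned map is a growing kernel expansion whose coefficients are set from the feedback error. The sketch below is that generic online learner, not the controller of the paper; the Gaussian kernel width, the step size, and the toy target map are assumptions.

```python
import numpy as np

class KernelLMS:
    """Online kernel least-mean-squares: f(x) = sum_i a_i k(x, c_i),
    with one new centre added per sample and coefficient a_i = step * error."""
    def __init__(self, step=0.2, width=1.0):
        self.step, self.width = step, width
        self.centers, self.coeffs = [], []

    def _k(self, x, c):
        return np.exp(-np.sum((x - c) ** 2) / (2.0 * self.width ** 2))

    def predict(self, x):
        return sum(a * self._k(x, c) for a, c in zip(self.coeffs, self.centers))

    def update(self, x, error):
        self.centers.append(np.array(x, dtype=float))
        self.coeffs.append(self.step * error)

if __name__ == "__main__":
    # Learn a nonlinear map from the feedback error, as in feedback-error-learning.
    target = lambda x: np.sin(3 * x[0]) + 0.5 * x[0]
    model, rng = KernelLMS(), np.random.default_rng(7)
    for _ in range(500):
        x = rng.uniform(-2, 2, size=1)
        err = target(x) - model.predict(x)       # feedback error drives the update
        model.update(x, err)
    test = np.array([0.7])
    print("prediction:", round(model.predict(test), 3), "target:", round(target(test), 3))
```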

  8. Ab initio-driven nuclear energy density functional method. A proposal for safe/correlated/improvable parametrizations of the off-diagonal EDF kernels

    NASA Astrophysics Data System (ADS)

    Duguet, T.; Bender, M.; Ebran, J.-P.; Lesinski, T.; Somà, V.

    2015-12-01

    This programmatic paper lays down the possibility to reconcile the necessity to resum many-body correlations into the energy kernel with the fact that safe multi-reference energy density functional (EDF) calculations cannot be achieved whenever the Pauli principle is not enforced, as is for example the case when many-body correlations are parametrized under the form of empirical density dependencies. Our proposal is to exploit a newly developed ab initio many-body formalism to guide the construction of safe, explicitly correlated and systematically improvable parametrizations of the off-diagonal energy and norm kernels that lie at the heart of the nuclear EDF method. The many-body formalism of interest relies on the concepts of symmetry breaking and restoration that have made the fortune of the nuclear EDF method and is, as such, amenable to this guidance. After elaborating on our proposal, we briefly outline the project we plan to execute in the years to come.

  9. Large-scale prediction of disulphide bridges using kernel methods, two-dimensional recursive neural networks, and weighted graph matching.

    PubMed

    Cheng, Jianlin; Saigo, Hiroto; Baldi, Pierre

    2006-03-15

    The formation of disulphide bridges between cysteines plays an important role in protein folding, structure, function, and evolution. Here, we develop new methods for predicting disulphide bridges in proteins. We first build a large curated data set of proteins containing disulphide bridges to extract relevant statistics. We then use kernel methods to predict whether a given protein chain contains intrachain disulphide bridges or not, and recursive neural networks to predict the bonding probabilities of each pair of cysteines in the chain. These probabilities in turn lead to an accurate estimation of the total number of disulphide bridges and to a weighted graph matching problem that can be addressed efficiently to infer the global disulphide bridge connectivity pattern. This approach can be applied both in situations where the bonded state of each cysteine is known and in ab initio mode where the state is unknown. Furthermore, it can easily cope with chains containing an arbitrary number of disulphide bridges, overcoming one of the major limitations of previous approaches. It can classify individual cysteine residues as bonded or nonbonded with 87% specificity and 89% sensitivity. The estimate for the total number of bridges in each chain is correct 71% of the time, and within one of the true value over 94% of the time. The prediction of the overall disulphide connectivity pattern is exact in about 51% of the chains. In addition to using profiles in the input to leverage evolutionary information, including true (but not predicted) secondary structure and solvent accessibility information yields small but noticeable improvements. Finally, once the system is trained, predictions can be computed rapidly on a proteomic or protein-engineering scale. The disulphide bridge prediction server (DIpro), software, and datasets are available through www.igb.uci.edu/servers/psss.html.

  10. Biological Rhythms Modelisation of Vigilance and Sleep in Microgravity State with COSINOR and Volterra's Kernels Methods

    NASA Astrophysics Data System (ADS)

    Gaudeua de Gerlicz, C.; Golding, J. G.; Bobola, Ph.; Moutarde, C.; Naji, S.

    2008-06-01

    Spaceflight under microgravity causes biological and physiological imbalances in human beings. Many studies have already been published on this topic, especially concerning sleep disturbances and circadian rhythms (vigilance-sleep alternation, body temperature, ...). Factors like space motion sickness, noise, or excitement can cause severe sleep disturbances. For a stay of longer than four months in space, gradual increases in the planned duration of sleep were reported. [1] The average sleep in orbit was more than 1.5 hours shorter than during control periods on Earth, where sleep averaged 7.9 hours. [2] Alertness and calmness showed a clear circadian pattern of 24 h but with a phase delay of 4 h; calmness also showed a biphasic (12 h) component. Mean sleep duration was 6.4 h, structured by 3-5 non-REM/REM cycles. Models of the neurophysiological mechanisms of stress and of the interactions between various physiological and psychological rhythm variables have already been obtained with the COSINOR method. [3]

  11. Do we really need a large number of particles to simulate bimolecular reactive transport with random walk methods? A kernel density estimation approach

    NASA Astrophysics Data System (ADS)

    Rahbaralam, Maryam; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier

    2015-12-01

    Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is in the order of 10⁶-10⁹, the number of reactive molecules even in diluted systems might be in the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, but also may lead to an improper reproduction of the mixing process, limited by diffusion. Recent works have used this effect as a proxy to model incomplete mixing in porous media. In this work, we propose using a Kernel Density Estimation (KDE) of the concentrations that allows getting the expected results for a well-mixed solution with a limited number of particles. The idea consists of treating each particle as a sample drawn from the pool of molecules that it represents; this way, the actual location of a tracked particle is seen as a sample drawn from the density function of the location of molecules represented by that given particle, rigorously represented by a kernel density function. The probability of reaction can be obtained by combining the kernels associated with two potentially reactive particles. We demonstrate that the observed deviation in the reaction vs. time curves in numerical experiments reported in the literature could be attributed to the statistical method used to reconstruct concentrations (fixed particle support) from discrete particle distributions, and not to the occurrence of true incomplete mixing. We further explore the evolution of the kernel size with time, linking it to the diffusion process. Our results show that KDEs are powerful tools to improve computational efficiency and robustness in reactive transport simulations, and indicate that incomplete mixing in diluted systems should be modeled based on alternative mechanistic models and not on a
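
    A minimal illustration of the kernel reconstruction idea, assuming a 1-D particle cloud and scipy's default (Scott's rule) bandwidth rather than the diffusion-linked kernel size discussed in the abstract.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # synthetic 1-D particle cloud standing in for a plume of reactive molecules
    rng = np.random.default_rng(0)
    particles = rng.normal(loc=0.0, scale=1.0, size=500)

    # kernel reconstruction of the concentration field: each particle is treated
    # as a sample from the density of the molecules it represents
    kde = gaussian_kde(particles)            # bandwidth from Scott's rule by default
    x = np.linspace(-4.0, 4.0, 201)
    c_kde = kde(x)

    # fixed-support (binned) reconstruction for comparison, which the abstract
    # argues can mimic incomplete mixing when too few particles are used
    c_hist, edges = np.histogram(particles, bins=40, range=(-4.0, 4.0), density=True)
    ```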

  12. Adaptive numerical methods for partial differential equations

    SciTech Connect

    Cololla, P.

    1995-07-01

    This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
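
    A 1-D flag-and-refine toy illustrating the "refine only where flagged" idea; it is not Berger's block-structured AMR, and the jump-based error indicator and tolerance are illustrative assumptions.

    ```python
    import numpy as np

    def refine_flagged_cells(x, u, tol=0.05):
        """Insert midpoints only in cells whose jump-based error indicator exceeds tol.
        A 1-D toy version of 'refine only where flagged'."""
        indicator = np.abs(np.diff(u))                 # crude per-cell error indicator
        new_x = [x[0]]
        for i, flagged in enumerate(indicator > tol):
            if flagged:
                new_x.append(0.5 * (x[i] + x[i + 1]))  # add a midpoint in flagged cells
            new_x.append(x[i + 1])
        return np.array(new_x)

    # usage: a steep front triggers local refinement, smooth regions stay coarse
    x = np.linspace(0.0, 1.0, 41)
    u = np.tanh(50.0 * (x - 0.5))
    x_refined = refine_flagged_cells(x, u)
    ```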

  13. Estimation of mass ratio of the total kernels within a sample of in-shell peanuts using RF Impedance Method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    It would be useful to know the total kernel mass within a given mass of peanuts (mass ratio) while the peanuts are bought or being processed. In this work, the possibility of finding this mass ratio while the peanuts were in their shells was investigated. Capacitance, phase angle and dissipation fa...

  14. New Equating Methods and Their Relationships with Levine Observed Score Linear Equating under the Kernel Equating Framework

    ERIC Educational Resources Information Center

    Chen, Haiwen; Holland, Paul

    2010-01-01

    In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, that we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean preserving linear transformation of…

  15. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.

  16. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  17. Improving the Bandwidth Selection in Kernel Equating

    ERIC Educational Resources Information Center

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
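
    For orientation, Silverman's rule of thumb for a univariate sample is h = 0.9 min(s, IQR/1.34) n^(-1/5). The sketch below computes that quantity for a synthetic score distribution; it illustrates the rule itself, not the authors' exact proposal for the equating context.

    ```python
    import numpy as np

    def silverman_bandwidth(scores):
        """Silverman's rule-of-thumb bandwidth: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
        scores = np.asarray(scores, dtype=float)
        n = scores.size
        sd = scores.std(ddof=1)
        iqr = np.subtract(*np.percentile(scores, [75, 25]))
        return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)

    # usage: bandwidth for continuizing an observed test-score distribution
    rng = np.random.default_rng(1)
    scores = rng.normal(loc=30.0, scale=6.0, size=2000).round()
    h = silverman_bandwidth(scores)
    ```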

  18. Robotic intelligence kernel

    DOEpatents

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.

  19. Cross-domain question classification in community question answering via kernel mapping

    NASA Astrophysics Data System (ADS)

    Su, Lei; Hu, Zuoliang; Yang, Bin; Li, Yiyang; Chen, Jun

    2015-10-01

    An increasingly popular method for retrieving information is via community question answering (CQA) systems such as Yahoo! Answers and Baidu Knows. In CQA, question classification plays an important role in finding the answers. However, labeled training examples for a statistical question classifier are fairly expensive to obtain, as they require experienced human effort. Meanwhile, unlabeled data are readily available. This paper employs the method of domain adaptation via kernel mapping to solve this problem. In detail, the kernel approach is utilized to map the target-domain data and the source-domain data into a common space, where the question classifiers are trained under closer conditional probabilities. The kernel mapping function is constructed from domain knowledge. Therefore, domain knowledge can be transferred from the labeled examples in the source domain to the unlabeled ones in the target domain. The statistical training model can be improved by using a large number of unlabeled data. Meanwhile, the Hadoop Platform is used to construct the mapping mechanism to reduce the time complexity. Map/Reduce enables kernel mapping for domain adaptation to run in parallel on the Hadoop Platform. Experimental results show that the accuracy of question classification can be improved by the method of kernel mapping. Furthermore, the parallel method on the Hadoop Platform can effectively schedule the computing resources to reduce the running time.

  20. A Method for Severely Constrained Item Selection in Adaptive Testing.

    ERIC Educational Resources Information Center

    Stocking, Martha L.; Swanson, Len

    1993-01-01

    A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)

  1. Solution-adaptive finite element method in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1993-01-01

    Some recent results obtained using solution-adaptive finite element method in linear elastic two-dimensional fracture mechanics problems are presented. The focus is on the basic issue of adaptive finite element method for validating the applications of new methodology to fracture mechanics problems by computing demonstration problems and comparing the stress intensity factors to analytical results.

  2. Adaptive method for electron bunch profile prediction

    SciTech Connect

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET. © 2015 authors. Published by the American Physical Society.

  3. Adaptive method for electron bunch profile prediction

    NASA Astrophysics Data System (ADS)

    Scheinker, Alexander; Gessner, Spencer

    2015-10-01

    We report on an experiment performed at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC National Accelerator Laboratory, in which a new adaptive control algorithm, one with known, bounded update rates, despite operating on analytically unknown cost functions, was utilized in order to provide quasi-real-time bunch property estimates of the electron beam. Multiple parameters, such as arbitrary rf phase settings and other time-varying accelerator properties, were simultaneously tuned in order to match a simulated bunch energy spectrum with a measured energy spectrum. The simple adaptive scheme was digitally implemented using matlab and the experimental physics and industrial control system. The main result is a nonintrusive, nondestructive, real-time diagnostic scheme for prediction of bunch profiles, as well as other beam parameters, the precise control of which are important for the plasma wakefield acceleration experiments being explored at FACET.

  4. Adaptive finite element methods in electrochemistry.

    PubMed

    Gavaghan, David J; Gillow, Kathryn; Süli, Endre

    2006-12-05

    In this article, we review some of our previous work that considers the general problem of numerical simulation of the currents at microelectrodes using an adaptive finite element approach. Microelectrodes typically consist of an electrode embedded (or recessed) in an insulating material. For all such electrodes, numerical simulation is made difficult by the presence of a boundary singularity at the electrode edge (where the electrode meets the insulator), manifested by the large increase in the current density at this point, often referred to as the edge effect. Our approach to overcoming this problem has involved the derivation of an a posteriori bound on the error in the numerical approximation for the current that can be used to drive an adaptive mesh-generation algorithm, allowing calculation of the quantity of interest (the current) to within a prescribed tolerance. We illustrate the generic applicability of the approach by considering a broad range of steady-state applications of the technique.

  5. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among others. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256

  6. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among others. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
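
    The two records above describe RankSVM trained with a primal truncated Newton solver; the sketch below only illustrates the underlying kernel-approximation idea (Nyström and random Fourier features feeding a linear model) using scikit-learn on a plain classification task, not the pairwise ranking objective.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.kernel_approximation import Nystroem, RBFSampler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # Nystroem: low-rank approximation of the RBF kernel built from landmark points
    nystroem_model = make_pipeline(
        Nystroem(gamma=0.1, n_components=100, random_state=0), LinearSVC())
    nystroem_model.fit(X, y)

    # Random Fourier features: explicit randomized feature map for the RBF kernel
    rff_model = make_pipeline(
        RBFSampler(gamma=0.1, n_components=100, random_state=0), LinearSVC())
    rff_model.fit(X, y)
    ```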

  7. Adaptive methods, rolling contact, and nonclassical friction laws

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1989-01-01

    Results and methods on three different areas of contemporary research are outlined. These include adaptive methods, the rolling contact problem for finite deformation of a hyperelastic or viscoelastic cylinder, and non-classical friction laws for modeling dynamic friction phenomena.

  8. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.

    PubMed

    Poon, Art F Y

    2015-09-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods.

  9. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology

    PubMed Central

    Poon, Art F.Y.

    2015-01-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this “kernel-ABC” method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. PMID:26006189
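
    A generic ABC rejection sampler illustrating the simulate-compare-accept loop; the tree simulator and the kernel-induced tree distance of the paper are replaced by hypothetical stand-ins (simulate, distance), so this is only a sketch of the framework, not kernel-ABC itself.

    ```python
    import numpy as np

    def abc_rejection(observed, simulate, distance, prior_sampler, n_draws=5000, eps=0.1):
        """Generic ABC rejection: keep parameter draws whose simulated data fall
        within eps of the observed data under the supplied distance."""
        accepted = []
        for _ in range(n_draws):
            theta = prior_sampler()
            if distance(observed, simulate(theta)) < eps:
                accepted.append(theta)
        return np.array(accepted)

    # Hypothetical stand-ins: in kernel-ABC the distance would come from a tree
    # kernel k via d(T1, T2) = sqrt(k(T1,T1) + k(T2,T2) - 2 k(T1,T2)).
    rng = np.random.default_rng(0)
    observed_summary = 2.0
    simulate = lambda theta: rng.normal(theta, 1.0, size=50).mean()
    distance = lambda a, b: abs(a - b)
    prior_sampler = lambda: rng.uniform(0.0, 5.0)
    posterior_draws = abc_rejection(observed_summary, simulate, distance, prior_sampler)
    ```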

  10. Semisupervised kernel matrix learning by kernel propagation.

    PubMed

    Hu, Enliang; Chen, Songcan; Zhang, Daoqiang; Yin, Xuesong

    2010-11-01

    The goal of semisupervised kernel matrix learning (SS-KML) is to learn a kernel matrix on all the given samples, on which only a little supervised information, such as class labels or pairwise constraints, is provided. Despite extensive research, the performance of SS-KML still leaves some room for improvement in terms of effectiveness and efficiency. For example, a recent pairwise constraints propagation (PCP) algorithm has formulated SS-KML as a semidefinite programming (SDP) problem, but its computation is very expensive, which undoubtedly restricts PCP's scalability in practice. In this paper, a novel algorithm, called kernel propagation (KP), is proposed to improve the comprehensive performance of SS-KML. The main idea of KP is first to learn a small-sized sub-kernel matrix (named the seed-kernel matrix) and then propagate it into a larger-sized full-kernel matrix. Specifically, the implementation of KP consists of three stages: 1) separate the supervised sample (sub)set X(l) from the full sample set X; 2) learn a seed-kernel matrix on X(l) by solving a small-scale SDP problem; and 3) propagate the learnt seed-kernel matrix into a full-kernel matrix on X. Furthermore, following the idea in KP, we naturally develop two conveniently realizable out-of-sample extensions for KML: one is a batch-style extension, and the other is an online-style extension. The experiments demonstrate that KP is encouraging in both effectiveness and efficiency compared with three state-of-the-art algorithms, and its related out-of-sample extensions are promising too.

  11. An Adaptive Discontinuous Galerkin Method for Modeling Atmospheric Convection (Preprint)

    DTIC Science & Technology

    2011-04-13

    One important question for any adaptive numerical model is: how accurate is the adaptive method? A comparison criterion is defined and used for sensitivity studies, which include a comparison between a simulation on an adaptive mesh and a simulation on a uniform mesh, and a sensitivity study concerning the size of the refinement region.

  12. Adaptable radiation monitoring system and method

    DOEpatents

    Archer, Daniel E.; Beauchamp, Brock R.; Mauger, G. Joseph; Nelson, Karl E.; Mercer, Michael B.; Pletcher, David C.; Riot, Vincent J.; Schek, James L.; Knapp, David A.

    2006-06-20

    A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.

  13. Adaptive computational methods for aerothermal heating analysis

    NASA Technical Reports Server (NTRS)

    Price, John M.; Oden, J. Tinsley

    1988-01-01

    The development of adaptive gridding techniques for finite-element analysis of fluid dynamics equations is described. The developmental work was done with the Euler equations with concentration on shock and inviscid flow field capturing. Ultimately this methodology is to be applied to a viscous analysis for the purpose of predicting accurate aerothermal loads on complex shapes subjected to high speed flow environments. The development of local error estimate strategies as a basis for refinement strategies is discussed, as well as the refinement strategies themselves. The application of the strategies to triangular elements and a finite-element flux-corrected-transport numerical scheme are presented. The implementation of these strategies in the GIM/PAGE code for 2-D and 3-D applications is documented and demonstrated.

  14. An adaptive pseudospectral method for discontinuous problems

    NASA Technical Reports Server (NTRS)

    Augenbaum, Jeffrey M.

    1988-01-01

    The accuracy of adaptively chosen, mapped polynomial approximations is studied for functions with steep gradients or discontinuities. It is shown that, for steep gradient functions, one can obtain spectral accuracy in the original coordinate system by using polynomial approximations in a transformed coordinate system with substantially fewer collocation points than are necessary using polynomial expansion directly in the original, physical, coordinate system. It is also shown that one can avoid the usual Gibbs oscillation associated with steep gradient solutions of hyperbolic pde's by approximation in suitably chosen coordinate systems. Continuous, high gradient solutions are computed with spectral accuracy (as measured in the physical coordinate system). Discontinuous solutions associated with nonlinear hyperbolic equations can be accurately computed by using an artificial viscosity chosen to smooth out the solution in the mapped, computational domain. Thus, shocks can be effectively resolved on a scale that is subgrid to the resolution available with collocation only in the physical domain. Examples with Fourier and Chebyshev collocation are given.

  15. Density Estimation with Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Macready, William G.

    2003-01-01

    We present a new method for density estimation based on Mercer kernels. The density estimate can be understood as the density induced on a data manifold by a mixture of Gaussians fit in a feature space. As is usual, the feature space and data manifold are defined with any suitable positive-definite kernel function. We modify the standard EM algorithm for mixtures of Gaussians to infer the parameters of the density. One benefit of the approach is its conceptual simplicity and uniform applicability over many different types of data. Preliminary results are presented for a number of simple problems.

  16. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  17. Hybrid Adaptive Ray-Moment Method (HARM2): A highly parallel method for radiation hydrodynamics on adaptive grids

    NASA Astrophysics Data System (ADS)

    Rosen, A. L.; Krumholz, M. R.; Oishi, J. S.; Lee, A. T.; Klein, R. I.

    2017-02-01

    We present a highly-parallel multi-frequency hybrid radiation hydrodynamics algorithm that combines a spatially-adaptive long characteristics method for the radiation field from point sources with a moment method that handles the diffuse radiation field produced by a volume-filling fluid. Our Hybrid Adaptive Ray-Moment Method (HARM2) operates on patch-based adaptive grids, is compatible with asynchronous time stepping, and works with any moment method. In comparison to previous long characteristics methods, we have greatly improved the parallel performance of the adaptive long-characteristics method by developing a new completely asynchronous and non-blocking communication algorithm. As a result of this improvement, our implementation achieves near-perfect scaling up to O(10³) processors on distributed memory machines. We present a series of tests to demonstrate the accuracy and performance of the method.

  18. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time-dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  19. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  20. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    NASA Astrophysics Data System (ADS)

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-01

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. The results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  1. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    DOE PAGES

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; ...

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  2. Kernel-Based Equiprobabilistic Topographic Map Formation.

    PubMed

    Van Hulle MM

    1998-09-15

    We introduce a new unsupervised competitive learning rule, the kernel-based maximum entropy learning rule (kMER), which performs equiprobabilistic topographic map formation in regular, fixed-topology lattices, for use with nonparametric density estimation as well as nonparametric regression analysis. The receptive fields of the formal neurons are overlapping radially symmetric kernels, compatible with radial basis functions (RBFs); but unlike other learning schemes, the radii of these kernels do not have to be chosen in an ad hoc manner: the radii are adapted to the local input density, together with the weight vectors that define the kernel centers, so as to produce maps of which the neurons have an equal probability to be active (equiprobabilistic maps). Both an "online" and a "batch" version of the learning rule are introduced, which are applied to nonparametric density estimation and regression, respectively. The application envisaged is blind source separation (BSS) from nonlinear, noisy mixtures.

  3. Dynamic Adaptive Runtime Systems for Advanced Multipole Method-based Science Achievement

    NASA Astrophysics Data System (ADS)

    Debuhr, Jackson; Anderson, Matthew; Sterling, Thomas; Zhang, Bo

    2015-04-01

    Multipole methods are a key computational kernel for a large class of scientific applications spanning multiple disciplines. Yet many of these applications are strong scaling constrained when using conventional programming practices. Hardware parallelism continues to grow, emphasizing medium and fine-grained thread parallelism rather than the coarse-grained process parallelism favored by conventional programming practices. Emerging, dynamic task management execution models can go beyond these conventional practices to significantly improve both efficiency and scalability for algorithms like multipole methods which exhibit irregular and time-varying execution properties. We present a new scientific library, DASHMM, built on the ParalleX HPX-5 runtime system, which explores the use of dynamic adaptive runtime techniques to improve scalability and efficiency for multipole-method based scientific computing. DASHMM allows application scientists to rapidly create custom, scalable, and efficient multipole methods, especially targeting the Fast Multipole Method and the Barnes-Hut N-body algorithm. After a discussion of the system and its goals, some application examples will be presented.

  4. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    PubMed

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.
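
    A generic scikit-learn PLS regression sketch linking spectra to a density-like response on synthetic data; the wavelength grid, number of components, and data are assumptions for illustration, not the published calibration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # synthetic stand-ins: 300 kernels x 500 NIR wavelengths, density in g/cm^3
    rng = np.random.default_rng(0)
    spectra = rng.normal(size=(300, 500))
    density = 1.25 + 0.02 * spectra[:, 100] + rng.normal(scale=0.02, size=300)

    X_train, X_test, y_train, y_test = train_test_split(spectra, density, random_state=0)
    pls = PLSRegression(n_components=10)
    pls.fit(X_train, y_train)
    r2 = pls.score(X_test, y_test)   # analogous to the R² statistics reported above
    ```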

  5. Self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation theorem and the exact-exchange kernel

    SciTech Connect

    Bleiziffer, Patrick; Krug, Marcel; Görling, Andreas

    2015-06-28

    A self-consistent Kohn-Sham method based on the adiabatic-connection fluctuation-dissipation (ACFD) theorem, employing the frequency-dependent exact exchange kernel f_x, is presented. The resulting SC-exact-exchange-only (EXX)-ACFD method leads to even more accurate correlation potentials than those obtained within the direct random phase approximation (dRPA). In contrast to dRPA methods, not only the Coulomb kernel but also the exact exchange kernel f_x is taken into account in the EXX-ACFD correlation, which results in a method that, unlike dRPA methods, is free of self-correlations, i.e., a method that treats exactly all one-electron systems, like, e.g., the hydrogen atom. The self-consistent evaluation of EXX-ACFD total energies improves the accuracy compared to EXX-ACFD total energies evaluated non-self-consistently with EXX or dRPA orbitals and eigenvalues. Reaction energies of a set of small molecules, for which highly accurate experimental reference data are available, are calculated and compared to quantum chemistry methods like Møller-Plesset perturbation theory of second order (MP2) or coupled cluster methods [CCSD, coupled cluster singles, doubles, and perturbative triples (CCSD(T))]. Moreover, we compare our methods to other ACFD variants like dRPA combined with perturbative corrections such as the second order screened exchange corrections or a renormalized singles correction. Similarly, the performance of our EXX-ACFD methods is investigated for the non-covalently bonded dimers of the S22 reference set and for potential energy curves of noble gas, water, and benzene dimers. The computational effort of the SC-EXX-ACFD method exhibits the same scaling of N⁵ with respect to the system size N as the non-self-consistent evaluation of only the EXX-ACFD correlation energy; however, the prefactor increases significantly. Reaction energies from the SC-EXX-ACFD method deviate quite little from EXX-ACFD energies obtained non

  6. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to a heterogeneous medium and to an actual field case in South America.

  7. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time-step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.

  8. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    PubMed

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  9. A steady and oscillatory kernel function method for interfering surfaces in subsonic, transonic and supersonic flow. [prediction analysis techniques for airfoils

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1976-01-01

    The theory, results and user instructions for an aerodynamic computer program are presented. The theory is based on linear lifting surface theory, and the method is the kernel function. The program is applicable to multiple interfering surfaces which may be coplanar or noncoplanar. Local linearization was used to treat nonuniform flow problems without shocks. For cases with imbedded shocks, the appropriate boundary conditions were added to account for the flow discontinuities. The data describing nonuniform flow fields must be input from some other source such as an experiment or a finite difference solution. The results are in the form of small linear perturbations about nonlinear flow fields. The method was applied to a wide variety of problems for which it is demonstrated to be significantly superior to the uniform flow method. Program user instructions are given for easy access.

  10. A New Kernel-Based Fuzzy Level Set Method for Automated Segmentation of Medical Images in the Presence of Intensity Inhomogeneity

    PubMed Central

    Shanbehzadeh, Jamshid

    2014-01-01

    Researchers have recently applied an integrative approach to automate medical image segmentation, benefiting from available methods while eliminating their disadvantages. Intensity inhomogeneity is a challenging and open problem in this area which has received less attention from this approach. It has considerable effects on segmentation accuracy. This paper proposes a new kernel-based fuzzy level set algorithm within an integrative approach to deal with this problem. It can directly evolve from the initial level set obtained by Gaussian Kernel-Based Fuzzy C-Means (GKFCM). The controlling parameters of the level set evolution are also estimated from the results of GKFCM. Moreover, the proposed algorithm is enhanced with locally regularized evolution based on an image model that describes the composition of real-world images, in which intensity inhomogeneity is assumed to be a component of an image. Such improvements make level set manipulation easier and lead to more robust segmentation under intensity inhomogeneity. The proposed algorithm has valuable benefits including automation, invariance to intensity inhomogeneity, and high accuracy. Performance evaluation of the proposed algorithm was carried out on medical images from different modalities. The results confirm its effectiveness for medical image segmentation.

  11. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
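
    A schematic of the model-selection step only: under an i.i.d. Gaussian-error assumption the Akaike information criterion for a least-squares fit can be written AIC = n ln(RSS/n) + 2k, and the simplest adequate candidate is the one minimizing it. The candidate kernel models below are hypothetical placeholders, not the paper's kernel design algorithms.

    ```python
    import numpy as np

    def aic_gaussian(residuals, n_params):
        """AIC for a least-squares fit under i.i.d. Gaussian errors:
        AIC = n * ln(RSS / n) + 2 * k."""
        n = residuals.size
        rss = float(np.sum(residuals ** 2))
        return n * np.log(rss / n) + 2 * n_params

    def select_kernel_model(target, candidate_fits):
        """Pick the candidate (fitted model image, number of kernel parameters)
        with the smallest AIC."""
        scores = [aic_gaussian((target - fit).ravel(), k) for fit, k in candidate_fits]
        return int(np.argmin(scores))

    # usage with hypothetical candidate model images of growing complexity
    target = np.random.default_rng(0).normal(size=(64, 64))
    candidates = [(np.zeros_like(target), 1),
                  (np.full_like(target, target.mean()), 2)]
    best = select_kernel_model(target, candidates)
    ```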

  12. Efficient χ² Kernel Linearization via Random Feature Maps.

    PubMed

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping could pose computational challenges in high-dimensional settings as it expands the original features to a higher dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of the χ² kernel SVMs at almost no cost in testing accuracy.
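
    The paper's method relies on sparse random projections; as a point of comparison only, the sketch below uses the standard explicit additive-χ² feature map available in scikit-learn (AdditiveChi2Sampler) in front of a linear SVM, which is a different, deterministic linearization of the same kernel.

    ```python
    import numpy as np
    from sklearn.kernel_approximation import AdditiveChi2Sampler
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # histogram-like (nonnegative) features, as typically used with the chi2 kernel
    rng = np.random.default_rng(0)
    X = rng.random((1000, 64))
    y = (X[:, :32].sum(axis=1) > X[:, 32:].sum(axis=1)).astype(int)

    # explicit feature map approximating the additive chi2 kernel, then a linear SVM
    model = make_pipeline(AdditiveChi2Sampler(sample_steps=2), LinearSVC())
    model.fit(X, y)
    accuracy = model.score(X, y)
    ```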

  13. Iterative software kernels

    SciTech Connect

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: "Current status of user level sparse BLAS"; "Current status of the sparse BLAS toolkit"; and "Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit".

  14. Learning with Box Kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-04-12

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, since the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given which dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  15. Learning with box kernels.

    PubMed

    Melacci, Stefano; Gori, Marco

    2013-11-01

    Supervised examples and prior knowledge on regions of the input space have been profitably integrated in kernel machines to improve the performance of classifiers in different real-world contexts. The proposed solutions, which rely on the unified supervision of points and sets, have been mostly based on specific optimization schemes in which, as usual, the kernel function operates on points only. In this paper, arguments from variational calculus are used to support the choice of a special class of kernels, referred to as box kernels, which emerges directly from the choice of the kernel function associated with a regularization operator. It is proven that there is no need to search for kernels to incorporate the structure deriving from the supervision of regions of the input space, because the optimal kernel arises as a consequence of the chosen regularization operator. Although most of the given results hold for sets, we focus attention on boxes, whose labeling is associated with their propositional description. Based on different assumptions, some representer theorems are given that dictate the structure of the solution in terms of box kernel expansion. Successful results are given for problems of medical diagnosis, image, and text categorization.

  16. Kernel Affine Projection Algorithms

    NASA Astrophysics Data System (ADS)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
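
    KAPA generalizes the kernel least-mean-square (KLMS) algorithm mentioned above; below is a minimal KLMS learner with a Gaussian kernel and no sparsification, the simpler member of the family, with step size and kernel width chosen arbitrarily for illustration.

    ```python
    import numpy as np

    class KLMS:
        """Minimal kernel least-mean-square filter with a Gaussian kernel."""
        def __init__(self, step_size=0.2, gamma=1.0):
            self.eta, self.gamma = step_size, gamma
            self.centers, self.coeffs = [], []

        def _kernel(self, x, c):
            return np.exp(-self.gamma * np.sum((x - c) ** 2))

        def predict(self, x):
            return sum(a * self._kernel(x, c) for a, c in zip(self.coeffs, self.centers))

        def update(self, x, d):
            e = d - self.predict(x)           # prediction error on the new sample
            self.centers.append(np.asarray(x, dtype=float))
            self.coeffs.append(self.eta * e)  # a new unit is allocated per sample
            return e

    # usage: online identification of a static nonlinearity from streaming samples
    rng = np.random.default_rng(0)
    f = KLMS()
    for _ in range(300):
        x = rng.uniform(-2.0, 2.0, size=1)
        f.update(x, np.sin(3.0 * x[0]) + 0.05 * rng.normal())
    ```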

  17. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real-world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal polynomials of several variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that with some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  18. Multivariate phenotype association analysis by marker-set kernel machine regression.

    PubMed

    Maity, Arnab; Sullivan, Patrick F; Tzeng, Jun-Ying

    2012-11-01

    Genetic studies of complex diseases often collect multiple phenotypes relevant to the disorders. As these phenotypes can be correlated and share common genetic mechanisms, jointly analyzing these traits may bring more power to detect genes influencing individual or multiple phenotypes. Building on the advances of multivariate phenotype approaches and multimarker kernel machine regression, we construct a kernel-machine-based multivariate regression to facilitate the joint evaluation of multimarker effects on multiple phenotypes. The kernel machine serves as a powerful dimension-reduction tool to capture complex effects among markers. The multivariate framework incorporates the potentially correlated multidimensional phenotypic information and accommodates common or different environmental covariates for each trait. We derive the multivariate kernel machine test based on a score-like statistic, and conduct simulations to evaluate the validity and efficacy of the method. We also study the performance of the commonly adopted strategies for kernel machine analysis on multiple phenotypes, including the multiple univariate kernel machine tests with original phenotypes or with their principal components. Our results suggest that none of these approaches has the uniformly best power, and the optimal test depends on the magnitude of the phenotype correlation and the effect patterns. However, the multivariate test remains a reasonable approach when the multiple phenotypes have no or only mild correlations, and gives the best power once the correlation becomes stronger or when there exist genes that affect more than one phenotype. We illustrate the utility of the multivariate kernel machine method through the Clinical Antipsychotic Trials of Intervention Effectiveness antibody study.

  19. Online Sequential Extreme Learning Machine With Kernels.

    PubMed

    Scardapane, Simone; Comminiello, Danilo; Scarpiniti, Michele; Uncini, Aurelio

    2015-09-01

    The extreme learning machine (ELM) was recently proposed as a unifying framework for different families of learning algorithms. The classical ELM model consists of a linear combination of a fixed number of nonlinear expansions of the input vector. Learning in ELM is hence equivalent to finding the optimal weights that minimize the error on a dataset. The update works in batch mode, either with explicit feature mappings or with implicit mappings defined by kernels. Although an online version has been proposed for the former, no work has been done up to this point for the latter, and whether an efficient learning algorithm for online kernel-based ELM exists remains an open problem. By explicating some connections between nonlinear adaptive filtering and ELM theory, in this brief, we present an algorithm for this task. In particular, we propose a straightforward extension of the well-known kernel recursive least-squares, belonging to the kernel adaptive filtering (KAF) family, to the ELM framework. We call the resulting algorithm the kernel online sequential ELM (KOS-ELM). Moreover, we consider two different criteria used in the KAF field to obtain sparse filters and extend them to our context. We show that KOS-ELM, with these sparsification criteria integrated, can result in a highly efficient algorithm in terms of both generalization error and training time. Empirical evaluations demonstrate interesting results on some benchmark datasets.
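
    The sketch below illustrates, under stated assumptions, the plain kernel recursive least-squares recursion that KOS-ELM builds on (without the sparsification criteria discussed in the paper); the class name OnlineKRLS, the regularization constant, and the kernel width are hypothetical choices made only for this example.

        import numpy as np

        def rbf(a, b, width=0.5):
            return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2.0 * width ** 2))

        class OnlineKRLS:
            # Grows a kernel expansion one sample at a time by rank-one updates
            # of the inverse regularized kernel matrix (no sparsification).
            def __init__(self, lam=1e-2, width=0.5):
                self.lam, self.width = lam, width
                self.centers, self.y = [], []
                self.Qinv = None                     # inverse of (K + lam * I)

            def update(self, x, d):
                if not self.centers:
                    k0 = rbf(x, x, self.width) + self.lam
                    self.Qinv = np.array([[1.0 / k0]])
                else:
                    C = np.asarray(self.centers)
                    k = rbf(C, x, self.width)        # kernel vector to old centers
                    z = self.Qinv @ k
                    s = rbf(x, x, self.width) + self.lam - k @ z
                    top = self.Qinv + np.outer(z, z) / s
                    self.Qinv = np.block([[top, -z[:, None] / s],
                                          [-z[None, :] / s, np.array([[1.0 / s]])]])
                self.centers.append(x)
                self.y.append(d)

            def predict(self, x):
                C = np.asarray(self.centers)
                alpha = self.Qinv @ np.asarray(self.y)
                return rbf(C, x, self.width) @ alpha

        rng = np.random.default_rng(1)
        X = rng.uniform(-1.0, 1.0, size=(200, 1))
        t = np.sinc(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)
        model = OnlineKRLS()
        for xi, ti in zip(X, t):
            model.update(xi, ti)
        print("prediction at 0.3:", model.predict(np.array([0.3])))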

  20. Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling

    NASA Astrophysics Data System (ADS)

    Davis, B. N.; LeVeque, R. J.

    2016-12-01

    One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
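
    A minimal sketch of the flagging idea (not the GeoClaw implementation): given the forward solution at time t and the adjoint solution at the corresponding reversed time, cells are refined where the magnitude of their inner product, an estimate of the influence on the target area, exceeds a tolerance. The array shapes, the synthetic 1-D data, and the tolerance are assumptions made for illustration.

        import numpy as np

        def flag_cells(forward_q, adjoint_q, tol=1e-3):
            # forward_q, adjoint_q: arrays of shape (n_cells, n_vars) holding the
            # forward solution at time t and the adjoint solution at T_final - t.
            # A cell is flagged for refinement when the magnitude of their inner
            # product (estimated influence on the target area) exceeds tol.
            influence = np.abs(np.sum(forward_q * adjoint_q, axis=1))
            return influence > tol

        # illustrative use with synthetic 1-D data
        n = 400
        x = np.linspace(0.0, 1.0, n)
        forward_q = np.column_stack([np.exp(-200 * (x - 0.30) ** 2),   # wave height
                                     np.exp(-200 * (x - 0.30) ** 2)])  # momentum
        adjoint_q = np.column_stack([np.exp(-200 * (x - 0.35) ** 2),
                                     np.exp(-200 * (x - 0.35) ** 2)])
        print("cells flagged for refinement:", int(np.sum(flag_cells(forward_q, adjoint_q))))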

  1. Studies of an Adaptive Kaczmarz Method for Electrical Impedance Imaging

    NASA Astrophysics Data System (ADS)

    Li, Taoran; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.

    2013-04-01

    We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term J^T J, which can be expensive in terms of memory storage in large-scale problems, we propose to solve the inverse problem by adaptively updating both the optimal current pattern with improved distinguishability and the conductivity estimate at each iteration. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation and the Kaczmarz method can produce accurate and stable solutions adaptively compared to traditional Kaczmarz and Gauss-Newton type methods. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results.
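
    For reference, the classical Kaczmarz row-projection step that the adaptive variant builds on can be sketched as follows; a plain linear system stands in for the linearized EIT problem, and the sweep count and problem sizes are illustrative assumptions.

        import numpy as np

        def kaczmarz(A, b, sweeps=50, x0=None):
            # Classical Kaczmarz iteration: project the current estimate onto the
            # hyperplane defined by one row of A x = b at a time.
            m, n = A.shape
            x = np.zeros(n) if x0 is None else x0.copy()
            for _ in range(sweeps):
                for i in range(m):
                    a = A[i]
                    x += (b[i] - a @ x) / (a @ a) * a
            return x

        rng = np.random.default_rng(2)
        A = rng.standard_normal((60, 30))
        x_true = rng.standard_normal(30)
        b = A @ x_true
        x_hat = kaczmarz(A, b, sweeps=200)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))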

  2. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or other approaches. In preclinical and clinical applications, the K-Means method requires prior estimation of parameters such as the number of clusters and appropriate initial values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial for registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnoses and for estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies with spherical targets are therefore conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. PET images of a rat brain are used to compare, by volume rendering, the shape and area of the cerebral cortex segmented by the K-Means method and by the proposed method. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than the K-Means method.
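
    A hedged sketch of the two ingredients, using scikit-learn's GaussianMixture and KernelDensity on synthetic intensities; the bandwidth, component count, and labeling rule are illustrative assumptions and do not reproduce the paper's exact hybrid scheme.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.neighbors import KernelDensity

        rng = np.random.default_rng(3)
        # synthetic "PET" intensities: background plus a hot region
        background = rng.normal(1.0, 0.2, size=5000)
        hot_region = rng.normal(3.0, 0.4, size=500)
        intensities = np.concatenate([background, hot_region]).reshape(-1, 1)

        # kernel density estimate of the intensity distribution (smooths the data
        # alongside the mixture fit)
        kde = KernelDensity(kernel="gaussian", bandwidth=0.1).fit(intensities)
        grid = np.linspace(intensities.min(), intensities.max(), 200).reshape(-1, 1)
        density = np.exp(kde.score_samples(grid))

        # two-component Gaussian mixture gives a soft foreground/background split
        gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
        labels = gmm.predict(intensities)
        foreground = labels == np.argmax(gmm.means_.ravel())
        print("voxels assigned to the hot cluster:", int(foreground.sum()))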

  3. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.

    1998-12-10

    OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  4. An improved adaptive IHS method for image fusion

    NASA Astrophysics Data System (ADS)

    Wang, Ting

    2015-12-01

    An improved adaptive intensity-hue-saturation (IHS) method for image fusion is proposed in this paper, based on the adaptive IHS (AIHS) method and its improved variant (IAIHS). In the improved method, the weighting matrix, which decides how much spatial detail from the panchromatic (Pan) image should be injected into the multispectral (MS) image, is defined on the basis of the linear relationship between the edges of the Pan and MS images. At the same time, a modulation parameter t is used to balance the spatial and spectral resolution of the fused image. Experiments showed that the improved method improves spectral quality and maintains spatial resolution compared with the AIHS and IAIHS methods.
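
    A schematic detail-injection step in the adaptive-IHS spirit is sketched below; the edge-based weight used here is only a placeholder for the paper's weighting matrix, and the function name aihs_fuse and the value of the modulation parameter t are assumptions made for illustration.

        import numpy as np

        def aihs_fuse(ms, pan, t=1.0):
            # Generic adaptive-IHS style fusion: compute an intensity component
            # from the MS bands, then inject the Pan detail (Pan - I) into each
            # band through a weight map.  Here the weight is a simple edge
            # strength (placeholder for the paper's linear edge model) and t
            # modulates how much spatial detail is injected.
            intensity = ms.mean(axis=0)                   # (H, W) intensity component
            gy, gx = np.gradient(pan)
            edges = np.sqrt(gx ** 2 + gy ** 2)
            weight = edges / (edges.max() + 1e-12)        # stronger injection at edges
            detail = pan - intensity
            return np.stack([band + t * weight * detail for band in ms])

        # toy example: 3-band MS image and a sharper Pan image
        rng = np.random.default_rng(4)
        pan = rng.random((64, 64))
        ms = np.stack([0.8 * pan + 0.1 * rng.random((64, 64)) for _ in range(3)])
        fused = aihs_fuse(ms, pan, t=0.8)
        print("fused image shape:", fused.shape)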

  5. RKF-PCA: robust kernel fuzzy PCA.

    PubMed

    Heo, Gyeongyong; Gader, Paul; Frigui, Hichem

    2009-01-01

    Principal component analysis (PCA) is a mathematical method that reduces the dimensionality of the data while retaining most of the variation in the data. Although PCA has been applied in many areas successfully, it suffers from sensitivity to noise and is limited to linear principal components. The noise sensitivity problem comes from the least-squares measure used in PCA and the limitation to linear components originates from the fact that PCA uses an affine transform defined by eigenvectors of the covariance matrix and the mean of the data. In this paper, a robust kernel PCA method that extends the kernel PCA and uses fuzzy memberships is introduced to tackle the two problems simultaneously. We first introduce an iterative method to find robust principal components, called Robust Fuzzy PCA (RF-PCA), which has a connection with robust statistics and entropy regularization. The RF-PCA method is then extended to a non-linear one, Robust Kernel Fuzzy PCA (RKF-PCA), using kernels. The modified kernel used in the RKF-PCA satisfies Mercer's condition, which means that the derivation of the K-PCA is also valid for the RKF-PCA. Formal analyses and experimental results suggest that the RKF-PCA is an efficient non-linear dimension reduction method and is more noise-robust than the original kernel PCA.

  6. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2002-10-19

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  7. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2004-01-28

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  8. Multiple collaborative kernel tracking.

    PubMed

    Fan, Zhimin; Yang, Ming; Wu, Ying

    2007-07-01

    Those motion parameters that cannot be recovered from image measurements are unobservable in the visual dynamic system. This paper studies this important issue of singularity in the context of kernel-based tracking and presents a novel approach that is based on a motion field representation which employs redundant but sparsely correlated local motion parameters instead of compact but uncorrelated global ones. This approach makes it easy to design fully observable kernel-based motion estimators. This paper shows that these high-dimensional motion fields can be estimated efficiently by the collaboration among a set of simpler local kernel-based motion estimators, which makes the new approach very practical.

  9. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

    Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitations of atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory.

  10. Adaptive computational methods for SSME internal flow analysis

    NASA Technical Reports Server (NTRS)

    Oden, J. T.

    1986-01-01

    Adaptive finite element methods for the analysis of classes of problems in compressible and incompressible flow of interest in SSME (space shuttle main engine) analysis and design are described. The general objective of the adaptive methods is to improve and to quantify the quality of numerical solutions to the governing partial differential equations of fluid dynamics in two-dimensional cases. There are several different families of adaptive schemes that can be used to improve the quality of solutions in complex flow simulations. Among these are: (1) r-methods (node-redistribution or moving mesh methods) in which a fixed number of nodal points is allowed to migrate to points in the mesh where high error is detected; (2) h-methods, in which the mesh size h is automatically refined to reduce local error; and (3) p-methods, in which the local degree p of the finite element approximation is increased to reduce local error. Two of the three basic techniques have been studied in this project: an r-method for steady Euler equations in two dimensions and a p-method for transient, laminar, viscous incompressible flow. Numerical results are presented. A brief introduction to residual methods of a posteriori error estimation is also given and some pertinent conclusions of the study are listed.

  11. Adaptive clustering and adaptive weighting methods to detect disease associated rare variants.

    PubMed

    Sha, Qiuying; Wang, Shuaicheng; Zhang, Shuanglin

    2013-03-01

    Current statistical methods to test association between rare variants and phenotypes are essentially group-wise methods that collapse or aggregate all variants in a predefined group into a single variant. Compared with variant-by-variant methods, group-wise methods have their advantages. However, two factors may affect the power of these methods. One is that some of the causal variants may be protective. When both risk and protective variants are present, collapsing or aggregating all variants loses power because the effects of risk and protective variants counteract each other. The other is that not all variants in the group are causal; rather, a large proportion is believed to be neutral. When a large proportion of variants are neutral, collapsing or aggregating all variants may not be an optimal solution. We propose two alternative methods, the adaptive clustering (AC) method and the adaptive weighting (AW) method, aimed at testing rare variant association in the presence of neutral and/or protective variants. Both AC and AW are applicable to quantitative traits as well as qualitative traits. Results of extensive simulation studies show that AC and AW have similar power, and both have clear advantages in power and computational efficiency compared with existing group-wise methods and existing data-driven methods that allow for neutral and protective variants. We recommend the AW method because it is computationally more efficient than the AC method.

  12. A generalized pyramid matching kernel for human action recognition in realistic videos.

    PubMed

    Zhu, Jun; Zhou, Quan; Zou, Weijia; Zhang, Rui; Zhang, Wenjun

    2013-10-24

    Human action recognition is an increasingly important research topic in the fields of video sensing, analysis and understanding. Owing to unconstrained sensing conditions, realistic videos exhibit large intra-class variations and inter-class ambiguities, which hinder improvements in recognition performance for recent vision-based action recognition systems. In this paper, we propose a generalized pyramid matching kernel (GPMK) for recognizing human actions in realistic videos, based on a multi-channel "bag of words" representation constructed from local spatial-temporal features of video clips. As an extension to the spatial-temporal pyramid matching (STPM) kernel, the GPMK leverages heterogeneous visual cues in multiple feature descriptor types and spatial-temporal grid granularity levels, to build a valid similarity metric between two video clips for kernel-based classification. Instead of the predefined and fixed weights used in STPM, we present a simple, yet effective, method to compute adaptive channel weights of GPMK based on the kernel target alignment from training data. It incorporates prior knowledge and the data-driven information of different channels in a principled way. Experimental results on three challenging video datasets (Hollywood2, YouTube, and HMDB51) validate the superiority of our GPMK over the traditional STPM kernel for realistic human action recognition and surpass state-of-the-art results in the literature.

  13. Adaptive windowed range-constrained Otsu method using local information

    NASA Astrophysics Data System (ADS)

    Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie

    2016-01-01

    An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well on complicated images is analyzed, and the influences of global and local thresholding on segmentation are compared. Second, we propose two methods that adaptively change the size of the local window according to local information, and analyze their characteristics; specifically, the number of edge pixels in the local window of the binarized variance image is used to adaptively adjust the window size. Finally, the superiority of the proposed method over the range-constrained Otsu method, the active contour model, the double Otsu method, Bradley's method, and distance-regularized level set evolution is demonstrated. Experiments confirm that the proposed method preserves more detail and achieves a much more satisfactory area overlap measure than the other conventional methods.
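
    For orientation, the global Otsu criterion at the core of the method (choosing the gray level that maximizes the between-class variance of the histogram) can be sketched as follows; the adaptive windowing and range constraint of the paper are not reproduced here, and the synthetic intensities are illustrative.

        import numpy as np

        def otsu_threshold(image, bins=256):
            # Classical Otsu: maximize the between-class variance over the histogram.
            hist, edges = np.histogram(image.ravel(), bins=bins)
            p = hist.astype(float) / hist.sum()
            levels = 0.5 * (edges[:-1] + edges[1:])
            w0 = np.cumsum(p)                    # class-0 probability up to each level
            w1 = 1.0 - w0
            mu = np.cumsum(p * levels)           # cumulative mean
            mu_total = mu[-1]
            valid = (w0 > 0) & (w1 > 0)
            between = np.zeros_like(w0)
            between[valid] = (mu_total * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
            return levels[np.argmax(between)]

        rng = np.random.default_rng(5)
        img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(170, 15, 5000)])
        print("Otsu threshold:", otsu_threshold(img))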

  14. New developments in adaptive methods for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Oden, J. T.; Bass, Jon M.

    1990-01-01

    New developments in a posteriori error estimates, smart algorithms, and h- and h-p adaptive finite element methods are discussed in the context of two- and three-dimensional compressible and incompressible flow simulations. Applications to rotor-stator interaction, rotorcraft aerodynamics, shock and viscous boundary layer interaction and fluid-structure interaction problems are discussed.

  15. Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.

    ERIC Educational Resources Information Center

    Butler, Ronald W.

    The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
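
    A minimal scalar Kalman filter predict/update cycle for the dynamic linear model mentioned above is sketched below; the model coefficients and noise variances are illustrative assumptions, and the report's adaptive likelihood machinery is not reproduced.

        import numpy as np

        def kalman_filter(observations, F=1.0, H=1.0, Q=1e-3, R=0.25, x0=0.0, P0=1.0):
            # Scalar dynamic linear model: x_t = F x_{t-1} + w,  y_t = H x_t + v.
            # Returns the filtered state estimates.
            x, P = x0, P0
            estimates = []
            for y in observations:
                # predict
                x = F * x
                P = F * P * F + Q
                # update
                K = P * H / (H * P * H + R)      # Kalman gain
                x = x + K * (y - H * x)
                P = (1.0 - K * H) * P
                estimates.append(x)
            return np.array(estimates)

        rng = np.random.default_rng(6)
        truth = np.cumsum(0.05 * rng.standard_normal(200))    # slowly drifting state
        obs = truth + 0.5 * rng.standard_normal(200)          # noisy observations
        print("final filtered estimate vs truth:", kalman_filter(obs)[-1], truth[-1])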

  16. A Conditional Exposure Control Method for Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Finkelman, Matthew; Nering, Michael L.; Roussos, Louis A.

    2009-01-01

    In computerized adaptive testing (CAT), ensuring the security of test items is a crucial practical consideration. A common approach to reducing item theft is to define maximum item exposure rates, i.e., to limit the proportion of examinees to whom a given item can be administered. Numerous methods for controlling exposure rates have been proposed…

  17. Robotic Intelligence Kernel: Communications

    SciTech Connect

    Walton, Mike C.

    2009-09-16

    The INL Robotic Intelligence Kernel-Comms is the communication server that transmits information between one or more robots using the RIK and one or more user interfaces. It supports event handling and multiple hardware communication protocols.

  18. The Palomar kernel-phase experiment: testing kernel phase interferometry for ground-based astronomical observations

    NASA Astrophysics Data System (ADS)

    Pope, Benjamin; Tuthill, Peter; Hinkley, Sasha; Ireland, Michael J.; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D.; Martinache, Frantz

    2016-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently developed adaptive optics post-processing technique, called kernel-phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here, we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the α Ophiuchi binary system near periastron, using the Palomar 200-Inch Telescope. This is the first case in which kernel phase has been used with a full aperture to resolve a system close to the diffraction limit with ground-based extreme adaptive optics observations. Excellent agreement in astrometric quantities is found between kernel phase and masking, and kernel phase significantly outperforms PSF fitting and bispectral analysis, demonstrating its viability as an alternative to conventional non-redundant masking under appropriate conditions.

  19. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  20. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  1. Robotic Intelligence Kernel: Driver

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Driver is built on top of the RIK-A and implements a dynamic autonomy structure. The RIK-D is used to orchestrate hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a single cognitive behavior kernel that provides intrinsic intelligence for a wide variety of unmanned ground vehicle systems.

  2. Method and system for environmentally adaptive fault tolerant computing

    NASA Technical Reports Server (NTRS)

    Copenhaver, Jason L. (Inventor); Ramos, Jeremy (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  3. Workshop on adaptive grid methods for fusion plasmas

    SciTech Connect

    Wiley, J.C.

    1995-07-01

    The author describes a general hp finite element method with adaptive grids. The code was based on the work of Oden et al. The term hp refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used as part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  4. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  5. Quantum kernel applications in medicinal chemistry.

    PubMed

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
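
    The commonly cited double-kernel combination rule of the KEM can be sketched as follows; this is a hedged illustration, and the dictionary-based bookkeeping and the toy energies are assumptions made for this example only.

        from itertools import combinations

        def kem_total_energy(single_energies, double_energies):
            # Kernel energy method combination rule (double-kernel form):
            #   E_total ~ sum over kernel pairs E_ab - (n - 2) * sum over kernels E_a
            # single_energies: dict {a: E_a}; double_energies: dict {(a, b): E_ab}.
            n = len(single_energies)
            pair_sum = sum(double_energies[pair]
                           for pair in combinations(sorted(single_energies), 2))
            return pair_sum - (n - 2) * sum(single_energies.values())

        # toy check: if every pair energy is just E_a + E_b, the rule returns the plain sum
        singles = {1: -10.0, 2: -12.0, 3: -9.0}
        doubles = {(a, b): singles[a] + singles[b]
                   for (a, b) in combinations(sorted(singles), 2)}
        print(kem_total_energy(singles, doubles))   # -31.0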

  6. A Novel Framework for Learning Geometry-Aware Kernels.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    The data from real world usually have nonlinear geometric structure, which are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for the learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in the machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most of the geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension for out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, in which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. Especially, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  7. Free energy calculations: an efficient adaptive biasing potential method.

    PubMed

    Dickson, Bradley M; Legoll, Frédéric; Lelièvre, Tony; Stoltz, Gabriel; Fleurat-Lessard, Paul

    2010-05-06

    We develop an efficient sampling and free energy calculation technique within the adaptive biasing potential (ABP) framework. By mollifying the density of states we obtain an approximate free energy and an adaptive bias potential that is computed directly from the population along the coordinates of the free energy. Because of the mollifier, the bias potential is "nonlocal", and its gradient admits a simple analytic expression. A single observation of the reaction coordinate can thus be used to update the approximate free energy at every point within a neighborhood of the observation. This greatly reduces the equilibration time of the adaptive bias potential. This approximation introduces two parameters: strength of mollification and the zero of energy of the bias potential. While we observe that the approximate free energy is a very good estimate of the actual free energy for a large range of mollification strength, we demonstrate that the errors associated with the mollification may be removed via deconvolution. The zero of energy of the bias potential, which is easy to choose, influences the speed of convergence but not the limiting accuracy. This method is simple to apply to free energy or mean force computation in multiple dimensions and does not involve second derivatives of the reaction coordinates, matrix manipulations nor on-the-fly adaptation of parameters. For the alanine dipeptide test case, the new method is found to gain as much as a factor of 10 in efficiency as compared to two basic implementations of the adaptive biasing force methods, and it is shown to be as efficient as well-tempered metadynamics with the postprocess deconvolution giving a clear advantage to the mollified density of states method.
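
    The general idea of an adaptive bias built from a mollified record of visited reaction-coordinate values can be sketched schematically as follows; the Gaussian mollifier width, the constant c standing in for the zero of energy, and the log form of the bias are illustrative assumptions, not the authors' exact mollified-density-of-states estimator.

        import numpy as np

        def mollified_density(samples, grid, width=0.1):
            # Gaussian-mollified estimate of how often each reaction-coordinate
            # value has been visited.
            diff = grid[:, None] - np.asarray(samples)[None, :]
            return np.exp(-0.5 * (diff / width) ** 2).sum(axis=1)

        def bias_potential(samples, grid, kT=1.0, width=0.1, c=1.0):
            # Schematic adaptive bias: kT * log(c + mollified density), so that
            # frequently visited regions are pushed up while rarely visited regions
            # are left nearly untouched; c plays the role of the zero of energy.
            return kT * np.log(c + mollified_density(samples, grid, width))

        grid = np.linspace(-2.0, 2.0, 200)
        visited = np.random.default_rng(7).normal(-1.0, 0.2, size=500)  # trapped in one well
        bias = bias_potential(visited, grid)
        print("bias near the visited well vs far away:", bias[50].round(2), bias[-1].round(2))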

  8. An Adaptive Cross-Architecture Combination Method for Graph Traversal

    SciTech Connect

    You, Yang; Song, Shuaiwen; Kerbyson, Darren J.

    2014-06-18

    Breadth-First Search (BFS) is widely used in many real-world applications including computational biology, social networks, and electronic design automation. The combination method, using both top-down and bottom-up techniques, is the most effective BFS approach. However, current combination methods rely on trial-and-error and exhaustive search to locate the optimal switching point, which may cause significant runtime overhead. To solve this problem, we design an adaptive method based on regression analysis to predict an optimal switching point for the combination method at runtime within less than 0.1% of the BFS execution time.
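
    The underlying top-down/bottom-up switch can be sketched as follows; the fixed frontier-size fraction used to trigger the switch is a common stand-in and is precisely the heuristic the paper replaces with a runtime regression model, and the small example graph is illustrative.

        def hybrid_bfs(adj, source, switch_fraction=0.05):
            # Direction-optimizing BFS sketch: start top-down (expand the frontier),
            # switch to bottom-up (every unvisited vertex scans for a visited parent)
            # once the frontier exceeds a fraction of the graph.
            n = len(adj)
            dist = {source: 0}
            frontier = {source}
            level = 0
            while frontier:
                level += 1
                if len(frontier) < switch_fraction * n:        # top-down step
                    nxt = set()
                    for u in frontier:
                        for v in adj[u]:
                            if v not in dist:
                                dist[v] = level
                                nxt.add(v)
                else:                                          # bottom-up step
                    nxt = set()
                    for v in range(n):
                        if v not in dist and any(u in frontier for u in adj[v]):
                            dist[v] = level
                            nxt.add(v)
                frontier = nxt
            return dist

        # small undirected example graph as an adjacency list
        adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        print(hybrid_bfs(adj, 0))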

  9. Adaptive Kaczmarz Method for Image Reconstruction in Electrical Impedance Tomography

    PubMed Central

    Li, Taoran; Kao, Tzu-Jen; Isaacson, David; Newell, Jonathan C.; Saulnier, Gary J.

    2013-01-01

    We present an adaptive Kaczmarz method for solving the inverse problem in electrical impedance tomography and determining the conductivity distribution inside an object from electrical measurements made on the surface. To best characterize an unknown conductivity distribution and avoid inverting the Jacobian-related term J^T J, which could be expensive in terms of computational cost and memory in large-scale problems, we propose solving the inverse problem by applying the optimal current patterns for distinguishing the actual conductivity from the conductivity estimate between each iteration of the block Kaczmarz algorithm. With a novel subset scheme, the memory-efficient reconstruction algorithm which appropriately combines the optimal current pattern generation with the Kaczmarz method can produce more accurate and stable solutions adaptively as compared to traditional Kaczmarz and Gauss-Newton type methods. Choices of initial current pattern estimates are discussed in the paper. Several reconstruction image metrics are used to quantitatively evaluate the performance of the simulation results. PMID:23718952

  10. New variable selection method using interval segmentation purity with application to blockwise kernel transform support vector machine classification of high-dimensional microarray data.

    PubMed

    Tang, Li-Juan; Du, Wen; Fu, Hai-Yan; Jiang, Jian-Hui; Wu, Hai-Long; Shen, Guo-Li; Yu, Ru-Qin

    2009-08-01

    One problem with discriminant analysis of microarray data is representation of each sample by a large number of genes that are possibly irrelevant, insignificant, or redundant. Methods of variable selection are, therefore, of great significance in microarray data analysis. A new method for key gene selection has been proposed on the basis of interval segmentation purity, defined as the purity of samples belonging to a certain class within intervals segmented by a mode search algorithm. This method identifies key variables most discriminative for each class, which offers the possibility of unraveling the biological implications of the selected genes. A salient advantage of the new strategy over existing methods is the capability of selecting genes that, though possibly exhibiting a multimodal distribution, are the most discriminative for the classes of interest, considering that the expression levels of some genes may reflect systematic differences among within-class samples derived from different pathogenic mechanisms. On the basis of the key genes selected for individual classes, a support vector machine with block-wise kernel transform is developed for the classification of different classes. The combination of the proposed gene mining approach with the support vector machine is demonstrated on cancer classification using two public data sets. The results reveal that significant genes have been identified for each class, and the classification model shows satisfactory performance in training and prediction for both data sets.

  11. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    SciTech Connect

    Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron

    1998-12-08

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  12. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test.
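
    A small-scale, hedged sketch of the rank truncated product idea with adaptive truncation and permutation-based calibration is given below; the truncation points, the reuse of a single layer of permutations, and the synthetic p-values are simplifying assumptions, not the published ARTP software.

        import numpy as np

        def rtp_stat(pvals, k):
            # Rank truncated product statistic: product of the k smallest p-values,
            # handled on the log scale for numerical stability.
            return np.sum(np.log(np.sort(pvals)[:k]))

        def artp_pvalue(p_obs, p_perm, ks=(1, 2, 5, 10)):
            # p_obs: length-m vector of observed SNP p-values.
            # p_perm: (B, m) matrix of p-values from B phenotype permutations.
            # Returns the ARTP p-value: significance of the best truncation point,
            # recalibrated for the adaptive selection using the same permutations.
            B, m = p_perm.shape
            ks = [k for k in ks if k <= m]
            obs = np.array([rtp_stat(p_obs, k) for k in ks])
            perm = np.array([[rtp_stat(p_perm[b], k) for k in ks] for b in range(B)])
            # per-truncation-point p-values for the observed data and each permutation
            obs_p = (perm <= obs).mean(axis=0)
            perm_p = np.array([(perm <= perm[b]).mean(axis=0) for b in range(B)])
            return float((perm_p.min(axis=1) <= obs_p.min()).mean())

        rng = np.random.default_rng(8)
        m, B = 20, 500
        p_obs = np.concatenate([rng.uniform(0, 0.01, 3), rng.uniform(0, 1, m - 3)])  # 3 signals
        p_perm = rng.uniform(0, 1, size=(B, m))
        print("ARTP p-value:", artp_pvalue(p_obs, p_perm))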

  13. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  14. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  15. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Technical Reports Server (NTRS)

    Kallinderis, Y.

    1995-01-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  16. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  17. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Astrophysics Data System (ADS)

    Kallinderis, Y.

    1995-10-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  18. Space-time adaptive numerical methods for geophysical applications.

    PubMed

    Castro, C E; Käser, M; Toro, E F

    2009-11-28

    In this paper we present high-order formulations of the finite volume and discontinuous Galerkin finite-element methods for wave propagation problems with a space-time adaptation technique using unstructured meshes in order to reduce computational cost without reducing accuracy. Both methods can be derived in a similar mathematical framework and are identical in their first-order version. In their extension to higher order accuracy in space and time, both methods use spatial polynomials of higher degree inside each element, a high-order solution of the generalized Riemann problem and a high-order time integration method based on the Taylor series expansion. The static adaptation strategy uses locally refined high-resolution meshes in areas with low wave speeds to improve the approximation quality. Furthermore, the time step length is chosen in a locally adaptive manner such that the solution is evolved explicitly in time by an optimal time step determined by a local stability criterion. After validating the numerical approach, both schemes are applied to geophysical wave propagation problems such as tsunami waves and seismic waves, comparing the new approach with the classical global time-stepping technique. The problem of mesh partitioning for large-scale applications on multi-processor architectures is discussed and a new mesh partition approach is proposed and tested to further reduce computational cost.

  19. Developing new online calibration methods for multidimensional computerized adaptive testing.

    PubMed

    Chen, Ping; Wang, Chun; Xin, Tao; Chang, Hua-Hua

    2017-02-01

    Multidimensional computerized adaptive testing (MCAT) has received increasing attention over the past few years in educational measurement. As in all other formats of CAT, item replenishment is an essential part of MCAT for item bank maintenance and management, governing the retirement of overexposed or obsolete items over time and their replacement with new ones. Moreover, calibration precision of the new items will directly affect the estimation accuracy of examinees' ability vectors. In unidimensional CAT (UCAT) and cognitive diagnostic CAT, online calibration techniques have been developed to effectively calibrate new items. However, there has been very little discussion of online calibration in MCAT in the literature. Thus, this paper proposes new online calibration methods for MCAT based upon some popular methods used in UCAT. Three representative methods, Method A, the 'one EM cycle' method and the 'multiple EM cycles' method, are generalized to MCAT. Three simulation studies were conducted to compare the three new methods by manipulating three factors (test length, item bank design, and level of correlation between coordinate dimensions). The results showed that all the new methods were able to recover the item parameters accurately, and the adaptive online calibration designs showed some improvements compared to the random design under most conditions.

  20. A simplified self-adaptive grid method, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, C.; Venkatapathy, E.

    1989-01-01

    The formulation of the Self-Adaptive Grid Evolution (SAGE) code, based on the work of Nakahashi and Deiwert, is described in the first section of this document. The second section is presented in the form of a user guide which explains the input and execution of the code, and provides many examples. Application of the SAGE code, by Ames Research Center and by others, in the solution of various flow problems has been an indication of the code's general utility and success. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for single, zonal, and multiple grids. Modifications to the methodology and the simplified input options make this current version a flexible and user-friendly code.

  1. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals for multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using either median filtering or the method of bilateral spatial contrast.
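
    As a concrete reference point for the classical adaptive algorithms mentioned above, the following is a minimal sketch of the Capon (MVDR) spatial spectrum for a linear equidistant array; the array size, element spacing and diagonal loading are assumptions for the example, not parameters from the review:

```python
import numpy as np

def capon_spectrum(R, n_sensors, d_over_lambda, angles_deg, loading=1e-3):
    """Capon (MVDR) spatial power estimate P(theta) = 1 / (a^H R^-1 a)."""
    R_loaded = R + loading * np.trace(R).real / n_sensors * np.eye(n_sensors)
    R_inv = np.linalg.inv(R_loaded)
    k = np.arange(n_sensors)
    power = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
        power.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return np.array(power)

# Example: sample covariance from simulated snapshots of a 16-element array.
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((16, 200)) + 1j * rng.standard_normal((16, 200))
R = snapshots @ snapshots.conj().T / snapshots.shape[1]
spectrum = capon_spectrum(R, 16, 0.5, np.linspace(-90, 90, 181))
```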

  2. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    Having an understanding of UNICOS Kernel Internals is valuable information. However, having the knowledge is only half the value. The second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code will be discussed for accessing kernel information. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  3. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    NASA Astrophysics Data System (ADS)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing
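
    A minimal sketch of a delta-function-basis kernel fit with a simple Tikhonov-style penalty; the identity regularization below is a simplification standing in for the paper's regularization scheme, and the window size and λ value are illustrative only:

```python
import numpy as np

def fit_matching_kernel(template, science, half_size=3, lam=0.3):
    """Delta-function-basis PSF-matching kernel with a simple Tikhonov penalty."""
    k = 2 * half_size + 1
    pad = half_size
    columns = []
    # Each kernel pixel acts as a delta-function basis element: its column in the
    # design matrix is the template image shifted by the corresponding offset.
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            shifted = np.roll(np.roll(template, dy, axis=0), dx, axis=1)
            columns.append(shifted[pad:-pad, pad:-pad].ravel())
    A = np.stack(columns, axis=1)                # (n_pixels, k*k) design matrix
    b = science[pad:-pad, pad:-pad].ravel()
    # Identity penalty on kernel pixels; a gradient operator could be substituted
    # to penalize kernel roughness rather than amplitude.
    L = np.eye(k * k)
    coeffs = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)
    return coeffs.reshape(k, k)

# Convolving `template` with the returned kernel should approximate `science`;
# lam trades variance in the kernel against variance in the difference image.
```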

  4. Reversed-phase vortex-assisted liquid-liquid microextraction: a new sample preparation method for the determination of amygdalin in oil and kernel samples.

    PubMed

    Hosseini, Mohammad; Heydari, Rouhollah; Alimoradi, Mohammad

    2015-02-01

    A novel, simple, and rapid reversed-phase vortex-assisted liquid-liquid microextraction coupled with high-performance liquid chromatography has been introduced for the extraction, clean-up, and preconcentration of amygdalin in oil and kernel samples. In this technique, deionized water was used as the extracting solvent. Unlike the reversed-phase dispersive liquid-liquid microextraction, dispersive solvent was eliminated in the proposed method. Various parameters that affected the extraction efficiency, such as extracting solvent volume and its pH, vortex, and centrifuging times were evaluated and optimized. The calibration curve shows good linearity (r² = 0.9955) and precision (RSD < 5.2%) in the range of 0.07-20 μg/mL. The limit of detection and limit of quantitation were 0.02 and 0.07 μg/mL, respectively. The recoveries were in the range of 96.0-102.0% with relative standard deviation values ranging from 4.0 to 5.1%. Unlike the conventional extraction methods for plant extracts, no evaporative and re-solubilizing operations were needed in the proposed technique.

  5. Adaptive optics image restoration algorithm based on wavefront reconstruction and adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Li, Dongming; Zhang, Lijuan; Wang, Ting; Liu, Huan; Yang, Jinhua; Chen, Guifen

    2016-11-01

    To improve adaptive optics (AO) image quality, we study an AO image restoration algorithm based on wavefront reconstruction technology and an adaptive total variation (TV) method. First, wavefront reconstruction using Zernike polynomials provides an initial estimate of the point spread function (PSF). Then, we develop iterative solutions for AO image restoration, addressing the joint deconvolution problem. Image restoration experiments are performed to verify the restoration effect of the proposed algorithm. The experimental results show that, compared with the RL-IBD and Wiener-IBD algorithms, the GMG measures (for a real AO image) obtained with our algorithm are increased by 36.92% and 27.44%, respectively, the computation times are decreased by 7.2% and 3.4%, respectively, and the estimation accuracy is significantly improved.

  6. Grid adaptation and remapping for arbitrary lagrangian eulerian (ALE) methods

    SciTech Connect

    Lapenta, G. M.

    2002-01-01

    Methods to include automatic grid adaptation tools within the Arbitrary Lagrangian Eulerian (ALE) method are described. Two main developments will be described. First, a new grid adaptation approach is described, based on an automatic and accurate estimate of the local truncation error. Second, a new method to remap the information between two grids is presented, based on the MPDATA approach. The Arbitrary Lagrangian Eulerian (ALE) method solves hyperbolic equations by splitting the operators into two phases. First, in the Lagrangian phase, the equations under consideration are written in a Lagrangian frame and are discretized. In this phase, the grid moves with the solution, the velocity of each node being the local fluid velocity. Second, in the Eulerian phase, a new grid is generated and the information is transferred to the new grid. The advantage of considering this second step is the possibility of avoiding mesh distortion and tangling typical of pure Lagrangian methods. The second phase of the ALE method is the primary topic of the present communication. In the Eulerian phase two tasks need to be completed. First, a new grid needs to be created (we will refer to this task as rezoning). Second, the information is transferred from the grid available at the end of the Lagrangian phase to the new grid (we will refer to this task as remapping). New techniques are presented for the two tasks of the Eulerian phase: rezoning and remapping.

  7. A novel adaptive force control method for IPMC manipulation

    NASA Astrophysics Data System (ADS)

    Hao, Lina; Sun, Zhiyong; Li, Zhi; Su, Yunquan; Gao, Jianchao

    2012-07-01

    IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V) and can be operated in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, bio-manipulation, etc. Until now, most existing methods for IPMC manipulation have relied on displacement control rather than direct force control; however, under most conditions the success rate of manipulating tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to hold cells. Like most EAPs, IPMC exhibits creep: the generated force changes with time, and the creep model is influenced by changes in water content and other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF: adaptive integral periodic output feedback control), based on a creep model whose parameters are obtained with the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulations and experiments of micro-force-tracking tests are carried out, with results confirming that the proposed control method is viable.
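
    The FRLS on-line identification mentioned above is, in essence, recursive least squares with a forgetting factor. The following is a generic sketch of that estimator; the regressor construction and the underlying creep-model structure are placeholders and would have to follow the paper's model:

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with a forgetting factor for on-line identification."""

    def __init__(self, n_params, forgetting=0.98, p0=1e3):
        self.theta = np.zeros(n_params)       # parameter estimates
        self.P = p0 * np.eye(n_params)        # covariance of the estimates
        self.lam = forgetting

    def update(self, phi, y):
        """phi: regressor vector, y: measured output (e.g. blocking force)."""
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - phi @ self.theta
        self.theta = self.theta + gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta
```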

  8. Kernel mucking in top

    SciTech Connect

    LeFebvre, W.

    1994-08-01

    For many years, the popular program top has aided system administrators in examining process resource usage on their machines. Yet few are familiar with the techniques involved in obtaining this information. Most of what is displayed by top is available only in the dark recesses of kernel memory. Extracting this information requires familiarity not only with how bytes are read from the kernel, but also what data needs to be read. The wide variety of systems and variants of the Unix operating system in today's marketplace makes writing such a program very challenging. This paper explores the tremendous diversity in kernel information across the many platforms and the solutions employed by top to achieve and maintain ease of portability in the presence of such divergent systems.

  9. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.

    1979-01-01

    The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
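
    The core of the MMAC identification step is a Bayesian reweighting of the candidate models by the likelihood of their Kalman-filter residuals, followed by probability-weighted blending of the per-model controls. A minimal sketch of that bookkeeping (not the actual F-8C implementation) could look like this:

```python
import numpy as np

def update_model_probabilities(probs, residuals, residual_covs):
    """One MMAC step: reweight each model by the likelihood of its filter residual."""
    likelihoods = []
    for r, S in zip(residuals, residual_covs):
        d = len(r)
        norm = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(S))
        likelihoods.append(norm * np.exp(-0.5 * r @ np.linalg.solve(S, r)))
    probs = probs * np.array(likelihoods)
    return probs / probs.sum()

def blended_control(probs, controls):
    """MMAC control: probability-weighted sum of the per-model controller outputs."""
    return sum(p * u for p, u in zip(probs, np.asarray(controls)))
```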

  10. Kernel-Based Reconstruction of Graph Signals

    NASA Astrophysics Data System (ADS)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
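
    A minimal sketch of the representer-theorem estimator discussed above, using a Laplacian-based diffusion kernel as one possible kernel choice; the kernel, the regularization weight and the toy graph are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def graph_diffusion_kernel(L, sigma2=1.0):
    """Kernel matrix K = expm(-sigma2 * L) built from the graph Laplacian L."""
    w, V = np.linalg.eigh(L)
    return (V * np.exp(-sigma2 * w)) @ V.T

def kernel_ridge_reconstruct(K, sampled, y, lam=1e-2):
    """Representer-theorem estimate f = K[:, S] (K[S, S] + lam*I)^-1 y."""
    Kss = K[np.ix_(sampled, sampled)]
    alpha = np.linalg.solve(Kss + lam * np.eye(len(sampled)), y)
    return K[:, sampled] @ alpha

# Tiny example on a 4-node path graph with noisy observations on two vertices.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A
K = graph_diffusion_kernel(L)
f_hat = kernel_ridge_reconstruct(K, sampled=[0, 2], y=np.array([1.0, -0.5]))
```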

  11. A two-dimensional adaptive mesh generation method

    NASA Astrophysics Data System (ADS)

    Altas, Irfan; Stephenson, John W.

    1991-05-01

    The present, two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh-points, and is applicable with or without boundary-fitted coordinate-generation procedures. The cases of differential equation discretization by, on the one hand, classical difference formulas designed for uniform meshes, and on the other the present difference formulas, are illustrated through the application of the method to the Hiemenz flow for which the Navier-Stokes equation's exact solution is known, as well as to a two-dimensional viscous internal flow problem.

  12. An adaptive penalty method for DIRECT algorithm in engineering optimization

    NASA Astrophysics Data System (ADS)

    Vilaça, Rita; Rocha, Ana Maria A. C.

    2012-09-01

    The most common approach for solving constrained optimization problems is based on penalty functions, where the constrained problem is transformed into a sequence of unconstrained problems by penalizing the objective function when constraints are violated. In this paper, we analyze the implementation of an adaptive penalty method, within the DIRECT algorithm, in which the constraints that are more difficult to satisfy are given relatively higher penalty values. In order to assess the applicability and performance of the proposed method, some benchmark problems from engineering design optimization are considered.

  13. Adaptive Current Control Method for Hybrid Active Power Filter

    NASA Astrophysics Data System (ADS)

    Chau, Minh Thuyen

    2016-09-01

    This paper proposes an adaptive current control method for the Hybrid Active Power Filter (HAPF). It consists of a fuzzy-neural controller, an identification and prediction model, and a cost function. The fuzzy-neural controller parameters are adjusted according to a cost-minimization criterion. As a result, the proposed control method is able to adapt its online control to variations of the load harmonic currents. Compared to a single fuzzy logic control method, the proposed control method shows the advantages of better dynamic response, smaller steady-state compensation error, better online control capability, and more effective harmonic cancellation. Simulation and experimental results have demonstrated the effectiveness of the proposed control method.

  14. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  15. A novel adaptive noise filtering method for SAR images

    NASA Astrophysics Data System (ADS)

    Li, Weibin; He, Mingyi

    2009-08-01

    In most applications, signals or images are corrupted by additive noise. As a result, there are many methods to remove additive noise, while few approaches work well for multiplicative noise. This paper presents an improved MAP-based filter for multiplicative noise using an adaptive window denoising technique. A Gamma noise model is discussed, and a preprocessing technique to differentiate matured and un-matured pixels is applied to obtain an accurate estimate of the Equivalent Number of Looks. Adaptive local window growth and three different denoising strategies are then applied to smooth the noise while preserving subtle image information according to local statistics. The simulation results show that the performance is better than that of existing filters. Several image experiments demonstrate its theoretical performance.
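
    The adaptive-window MAP filter is related to classical local-statistics speckle filters. The following Lee-type sketch with a fixed window illustrates the weighting by local versus noise variation coefficients, assuming the Equivalent Number of Looks has already been estimated; the adaptive window growth and matured-pixel preprocessing of the paper are not reproduced here:

```python
import numpy as np

def lee_like_filter(img, enl, half_win=2):
    """Local-statistics filter for multiplicative (Gamma/speckle) noise."""
    out = img.astype(float).copy()
    cu2 = 1.0 / enl                       # squared variation coefficient of the noise
    h, w = img.shape
    for i in range(half_win, h - half_win):
        for j in range(half_win, w - half_win):
            win = img[i - half_win:i + half_win + 1, j - half_win:j + half_win + 1]
            mean, var = win.mean(), win.var()
            ci2 = var / (mean * mean + 1e-12)          # local variation coefficient
            k = max(0.0, ci2 - cu2) / (ci2 + 1e-12)    # weight -> 0 in homogeneous areas
            out[i, j] = mean + k * (img[i, j] - mean)
    return out
```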

  16. A Kernel-based Account of Bibliometric Measures

    NASA Astrophysics Data System (ADS)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.

  17. Planetary gearbox fault diagnosis using an adaptive stochastic resonance method

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia

    2013-07-01

    Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. The tough operating conditions of heavy duty and intensive impact loads may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include selection of sensitive measurement locations, investigation of vibration transmission paths and weak feature extraction. One of them is how to effectively discover the weak characteristics from noisy signals of faulty components in planetary gearboxes. To address the issue in fault diagnosis of planetary gearboxes, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, and therefore the faults can be diagnosed accurately. A planetary gearbox test rig is established and experiments with sun gear faults including a chipped tooth and a missing tooth are conducted. The vibration signals are collected under loaded conditions at various motor speeds. The proposed method is used to process the collected signals and the results of feature extraction and fault diagnosis demonstrate its effectiveness.

  18. Adaptation of fast marching methods to intracellular signaling

    NASA Astrophysics Data System (ADS)

    Chikando, Aristide C.; Kinser, Jason M.

    2006-02-01

    Imaging of signaling phenomena within the intracellular domain is a well studied field. Signaling is the process by which all living cells communicate with their environment and with each other. In the case of signaling calcium waves, numerous computational models based on solving homogeneous reaction diffusion equations have been developed. Typically, the reaction diffusion approach consists of solving systems of partial differential equations at each update step. The traditional methods used to solve these reaction diffusion equations are very computationally expensive since they must employ small time steps in order to reduce the computational error. The presented research suggests the application of fast marching methods to imaging signaling calcium waves, more specifically fertilization calcium waves, in Xenopus laevis eggs. The fast marching approach provides fast and efficient means of tracking the evolution of monotonically advancing fronts. A model that employs biophysical properties of intracellular calcium signaling, and adapts fast marching methods to tracking the propagation of signaling calcium waves is presented. The developed model is used to reproduce simulation results obtained with reaction diffusion based model. Results obtained with our model agree with both the results obtained with reaction diffusion based models, and confocal microscopy observations during in vivo experiments. The adaptation of fast marching methods to intracellular protein or macromolecule trafficking is also briefly explored.

  19. Robust time and frequency domain estimation methods in adaptive control

    NASA Technical Reports Server (NTRS)

    Lamaire, Richard Orville

    1987-01-01

    A robust identification method was developed for use in an adaptive control system. The type of estimator is called the robust estimator, since it is robust to the effects of both unmodeled dynamics and an unmeasurable disturbance. The development of the robust estimator was motivated by a need to provide guarantees in the identification part of an adaptive controller. To enable the design of a robust control system, a nominal model as well as a frequency-domain bounding function on the modeling uncertainty associated with this nominal model must be provided. Two estimation methods are presented for finding parameter estimates, and, hence, a nominal model. One of these methods is based on the well developed field of time-domain parameter estimation. In a second method of finding parameter estimates, a type of weighted least-squares fitting to a frequency-domain estimated model is used. The frequency-domain estimator is shown to perform better, in general, than the time-domain parameter estimator. In addition, a methodology for finding a frequency-domain bounding function on the disturbance is used to compute a frequency-domain bounding function on the additive modeling error due to the effects of the disturbance and the use of finite-length data. The performance of the robust estimator in both open-loop and closed-loop situations is examined through the use of simulations.

  20. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS are often naturally occurring in plants and microorganisms. In a previous study, we found that AQS were produced by an enzymatic browning reaction in Chinese chestnut kernels. To find out whether a non-enzymatic browning reaction in the kernels could produce AQS too, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatic browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two compounds of AQS, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained a high amount of AQS. Thus, we confirmed that AQS can be produced during both the enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  1. Robotic Intelligence Kernel: Architecture

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel Architecture (RIK-A) is a multi-level architecture that supports a dynamic autonomy structure. The RIK-A is used to coalesce hardware for sensing and action as well as software components for perception, communication, behavior and world modeling into a framework that can be used to create behaviors for humans to interact with the robot.

  2. Robotic Intelligence Kernel: Visualization

    SciTech Connect

    2009-09-16

    The INL Robotic Intelligence Kernel-Visualization is the software that supports the user interface. It uses the RIK-C software to communicate information to and from the robot. The RIK-V illustrates the data in a 3D display and provides an operating picture wherein the user can task the robot.

  3. The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering

    NASA Astrophysics Data System (ADS)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2016-04-01

    Earthquake declustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and subsequent declustering of earthquake catalogues plays a crucial role in determining the magnitude-dependent earthquake return period and its respective spatial variation. Various methods have been developed by other researchers to address this issue, with complexity ranging from rather simple statistical window methods to complex epidemic-type models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. Hereby, an adaptive search algorithm for data point clusters is adopted. It uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the search space is adjusted with respect to these directional properties. In the case of rapid subsequent ruptures like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in

  4. A decentralized adaptive robust method for chaos control.

    PubMed

    Kobravi, Hamid-Reza; Erfanian, Abbas

    2009-09-01

    This paper presents a control strategy, which is based on sliding mode control, adaptive control, and fuzzy logic system for controlling the chaotic dynamics. We consider this control paradigm in chaotic systems where the equations of motion are not known. The proposed control strategy is robust against the external noise disturbance and system parameter variations and can be used to convert the chaotic orbits not only to the desired periodic ones but also to any desired chaotic motions. Simulation results of controlling some typical higher order chaotic systems demonstrate the effectiveness of the proposed control method.

  5. Adaptive grid methods for RLV environment assessment and nozzle analysis

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh J.

    1996-01-01

    Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation

  6. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
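
    Under a Gaussian kernel density estimate, the quadratic (order-2 Renyi) entropy has a closed form, because the integral of the squared density reduces to a sum of pairwise kernel evaluations. A minimal univariate sketch follows, with an illustrative Silverman-type bandwidth rule rather than the optimal bandwidth results of the paper:

```python
import numpy as np

def quadratic_entropy(x, bandwidth=None):
    """Renyi order-2 entropy -log( integral f_hat(x)^2 dx ) under a Gaussian KDE.

    The squared-density integral has the closed form
    (1/n^2) * sum_ij N(x_i - x_j; 0, 2*h^2).
    """
    x = np.asarray(x, float)
    n = x.size
    if bandwidth is None:
        bandwidth = 1.06 * x.std() * n ** (-1 / 5)  # Silverman-type rule (illustrative)
    diff = x[:, None] - x[None, :]
    var2 = 2.0 * bandwidth ** 2
    pairwise = np.exp(-diff ** 2 / (2 * var2)) / np.sqrt(2 * np.pi * var2)
    info_potential = pairwise.mean()   # information potential, related to the FT index
    return -np.log(info_potential)

# Example: entropy of simulated RR intervals (synthetic data, for illustration only).
rr_intervals = np.random.default_rng(1).normal(0.8, 0.05, size=300)
print(quadratic_entropy(rr_intervals))
```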

  7. Geometric tree kernels: classification of COPD from airway tree geometry.

    PubMed

    Feragen, Aasa; Petersen, Jens; Grimm, Dominik; Dirksen, Asger; Pedersen, Jesper Holst; Borgwardt, Karsten; de Bruijne, Marleen

    2013-01-01

    Methodological contributions: This paper introduces a family of kernels for analyzing (anatomical) trees endowed with vector valued measurements made along the tree. While state-of-the-art graph and tree kernels use combinatorial tree/graph structure with discrete node and edge labels, the kernels presented in this paper can include geometric information such as branch shape, branch radius or other vector valued properties. In addition to being flexible in their ability to model different types of attributes, the presented kernels are computationally efficient and some of them can easily be computed for large datasets (N ~ 10,000) of trees with 30-600 branches. Combining the kernels with standard machine learning tools enables us to analyze the relation between disease and anatomical tree structure and geometry. Experimental results: The kernels are used to compare airway trees segmented from low-dose CT, endowed with branch shape descriptors and airway wall area percentage measurements made along the tree. Using kernelized hypothesis testing we show that the geometric airway trees are significantly differently distributed in patients with Chronic Obstructive Pulmonary Disease (COPD) than in healthy individuals. The geometric tree kernels also give a significant increase in the classification accuracy of COPD from geometric tree structure endowed with airway wall thickness measurements in comparison with state-of-the-art methods, giving further insight into the relationship between airway wall thickness and COPD. Software: Software for computing kernels and statistical tests is available at http://image.diku.dk/aasa/software.php.

  8. Turbulence profiling methods applied to ESO's adaptive optics facility

    NASA Astrophysics Data System (ADS)

    Valenzuela, Javier; Béchet, Clémentine; Garcia-Rissmann, Aurea; Gonté, Frédéric; Kolb, Johann; Le Louarn, Miska; Neichel, Benoît; Madec, Pierre-Yves; Guesalaga, Andrés.

    2014-07-01

    Two algorithms were recently studied for Cn2 profiling from wide-field Adaptive Optics (AO) measurements on GeMS (Gemini Multi-Conjugate AO system). They both rely on the Slope Detection and Ranging (SLODAR) approach, using spatial covariances of the measurements issued from various wavefront sensors. The first algorithm estimates the Cn2 profile by applying the truncated least-squares inverse of a matrix modeling the response of slope covariances to various turbulent layer heights. In the second method, the profile is estimated by deconvolution of these spatial cross-covariances of slopes. We compare these methods in the new configuration of ESO Adaptive Optics Facility (AOF), a high-order multiple laser system under integration. For this, we use measurements simulated by the AO cluster of ESO. The impact of the measurement noise and of the outer scale of the atmospheric turbulence is analyzed. The important influence of the outer scale on the results leads to the development of a new step for outer scale fitting included in each algorithm. This increases the reliability and robustness of the turbulence strength and profile estimations.
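
    The first algorithm amounts to applying a truncated least-squares (truncated-SVD) pseudo-inverse of the modeled response matrix to the measured covariances. A generic sketch of that inversion step, with the response matrix and covariance vector assumed to come from the SLODAR model and the wavefront-sensor data:

```python
import numpy as np

def truncated_lstsq(M, b, n_modes):
    """Truncated-SVD pseudo-inverse solution of M x = b, keeping n_modes singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_modes] = 1.0 / s[:n_modes]
    return Vt.T @ (s_inv * (U.T @ b))

# M: modeled response of slope cross-covariances to unit-strength layers at trial heights,
# b: measured covariances; the recovered x approximates the Cn2 profile.
```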

  9. An adaptive stepsize method for the chemical Langevin equation.

    PubMed

    Ilie, Silvana; Teslya, Alexandra

    2012-05-14

    Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
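
    A minimal scalar sketch of an adaptive-stepsize Milstein integrator using step doubling for the local error estimate; the error controller and the tolerance are illustrative and simpler than the scheme analyzed in the paper:

```python
import numpy as np

def milstein_step(x, dt, dw, drift, diff, diff_dx):
    """One Milstein step for dX = drift(X) dt + diff(X) dW (scalar)."""
    return x + drift(x) * dt + diff(x) * dw + 0.5 * diff(x) * diff_dx(x) * (dw * dw - dt)

def adaptive_milstein(x0, t_end, drift, diff, diff_dx, dt0=1e-3, tol=1e-4, seed=0):
    """Adaptive-stepsize Milstein integration via step doubling (scalar SDE sketch)."""
    rng = np.random.default_rng(seed)
    t, x, dt = 0.0, x0, dt0
    path = [(t, x)]
    while t < t_end:
        dt = min(dt, t_end - t)
        dw1 = rng.normal(0.0, np.sqrt(dt / 2))
        dw2 = rng.normal(0.0, np.sqrt(dt / 2))
        coarse = milstein_step(x, dt, dw1 + dw2, drift, diff, diff_dx)
        fine = milstein_step(x, dt / 2, dw1, drift, diff, diff_dx)
        fine = milstein_step(fine, dt / 2, dw2, drift, diff, diff_dx)
        err = abs(coarse - fine)           # local error estimate from step doubling
        if err <= tol:                     # accept the (more accurate) fine solution
            t, x = t + dt, fine
            path.append((t, x))
        dt *= 0.9 * min(2.0, max(0.2, (tol / (err + 1e-16)) ** 0.5))
    return path

# Example (illustrative, not one of the paper's test systems):
# dX = (k1 - k2*X) dt + sqrt(k1 + k2*X) dW would be passed in as drift, diff, diff_dx.
```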

  10. High speed sorting of Fusarium-damaged wheat kernels

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recent studies have found that resistance to Fusarium fungal infection can be inherited in wheat from one generation to another. However, there is not yet available a cost effective method to separate Fusarium-damaged wheat kernels from undamaged kernels so that wheat breeders can take advantage of...

  11. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    ERIC Educational Resources Information Center

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  12. Kernel regression based feature extraction for 3D MR image denoising.

    PubMed

    López-Rubio, Ezequiel; Florentín-Núñez, María Nieves

    2011-08-01

    Kernel regression is a non-parametric estimation technique which has been successfully applied to image denoising and enhancement in recent times. Magnetic resonance 3D image denoising has two features that distinguish it from other typical image denoising applications, namely the tridimensional structure of the images and the nature of the noise, which is Rician rather than Gaussian or impulsive. Here we propose a principled way to adapt the general kernel regression framework to this particular problem. Our noise removal system is rooted on a zeroth order 3D kernel regression, which computes a weighted average of the pixels over a regression window. We propose to obtain the weights from the similarities among small sized feature vectors associated to each pixel. In turn, these features come from a second order 3D kernel regression estimation of the original image values and gradient vectors. By considering directional information in the weight computation, this approach substantially enhances the performance of the filter. Moreover, Rician noise level is automatically estimated without any need of human intervention, i.e. our method is fully automated. Experimental results over synthetic and real images demonstrate that our proposal achieves good performance with respect to the other MRI denoising filters being compared.
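
    A minimal 2D sketch of the zeroth-order step: each pixel is replaced by a weighted average over a regression window, with weights derived from the similarity of small feature vectors. Here raw image patches stand in for the second-order kernel regression features used in the paper, and Rician noise handling is omitted:

```python
import numpy as np

def feature_weighted_average(img, win=5, patch=3, h=0.1):
    """Zeroth-order kernel regression with weights from patch (feature) similarity."""
    half_w, half_p = win // 2, patch // 2
    pad = half_w + half_p
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pad, j + pad
            ref = padded[ci - half_p:ci + half_p + 1, cj - half_p:cj + half_p + 1]
            num = den = 0.0
            for di in range(-half_w, half_w + 1):
                for dj in range(-half_w, half_w + 1):
                    ni, nj = ci + di, cj + dj
                    nbr = padded[ni - half_p:ni + half_p + 1, nj - half_p:nj + half_p + 1]
                    w = np.exp(-np.sum((ref - nbr) ** 2) / (h * h))
                    num += w * padded[ni, nj]
                    den += w
            out[i, j] = num / den
    return out
```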

  13. Reduction in redundancy of multichannel telemetric information by the method of adaptive discretization with associative sorting

    NASA Technical Reports Server (NTRS)

    Kantor, A. V.; Timonin, V. G.; Azarova, Y. S.

    1974-01-01

    The method of adaptive discretization is the most promising for elimination of redundancy from telemetry messages characterized by signal shape. Adaptive discretization with associative sorting was considered as a way to avoid the shortcomings of adaptive discretization with buffer smoothing and adaptive discretization with logical switching in on-board information compression devices (OICD) in spacecraft. Mathematical investigations of OICD are presented.

  14. Fractal Weyl law for Linux Kernel architecture

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65 that corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with the fractal dimension d < 2.

  15. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, only the global spatial structure of the point sets is considered, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined in the expectation-maximization algorithm. In image registration applications, the block-divided salient image disk extraction method is designed to detect sparse salient image features and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical images and remote sensing images show that the proposed method can significantly improve the matching performance.

  16. Research on PGNAA adaptive analysis method with BP neural network

    NASA Astrophysics Data System (ADS)

    Peng, Ke-Xin; Yang, Jian-Bo; Tuo, Xian-Guo; Du, Hua; Zhang, Rui-Xue

    2016-11-01

    A new approach to the problem of spectral analysis in prompt gamma neutron activation analysis (PGNAA) is developed and demonstrated. It applies a BP neural network to PGNAA energy spectrum analysis based on Monte Carlo (MC) simulation, with the following main tasks: (1) completing the MC simulation of a PGNAA spectrum library, where the mass fractions of the elements Si, Ca and Fe are set from 0.00 to 0.45 in steps of 0.05 and each sample is simulated using MCNP; (2) establishing the BP model for adaptive quantitative analysis of the PGNAA energy spectrum, where the peak areas of eight characteristic gamma rays corresponding to eight elements are calculated for each of the 1000 samples and for the standard sample; (3) verifying the viability of the adaptive quantitative analysis algorithm, using 68 samples successively. Results show that the precision obtained when using the neural network to calculate the content of each element is significantly higher than that of the MCLLS method.
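
    In the spirit of steps (2)-(3), the following is a minimal one-hidden-layer back-propagation regressor mapping characteristic peak areas to element mass fractions; the layer size, learning rate and the assumption that inputs are already normalized (e.g. to the standard sample) are illustrative choices, not the paper's settings:

```python
import numpy as np

def train_bp_network(X, Y, hidden=16, lr=0.05, epochs=5000, seed=0):
    """One-hidden-layer back-propagation network.

    X: (n_samples, 8) normalized peak areas; Y: (n_samples, n_elements) mass fractions.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden layer
        P = H @ W2 + b2                   # linear output (predicted mass fractions)
        err = P - Y                       # gradient of mean-squared error
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)  # back-propagate through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return W1, b1, W2, b2

def predict(X, params):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2
```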

  17. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  18. Gaussian kernel width optimization for sparse Bayesian learning.

    PubMed

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.

  19. Sparse diffraction imaging method using an adaptive reweighting homotopy algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Qiu, Zhen

    2017-02-01

    Seismic diffractions carry valuable information from subsurface small-scale geologic discontinuities, such as faults, cavities and other features associated with hydrocarbon reservoirs. However, seismic imaging methods mainly use reflection theory for constructing imaging models, which imposes a smoothness constraint on the imaging conditions. In fact, diffractors occupy only a small fraction of an imaging model and possess discontinuous characteristics. Mathematically, this kind of phenomenon can be described by sparse optimization theory. Therefore, we propose a diffraction imaging method based on a sparsity-constrained model for studying diffractors. A reweighted L2-norm and L1-norm minimization model is investigated, where the L2 term enforces a least-squares misfit between modeled and observed diffractions and the L1 term imposes sparsity on the solution. In order to solve this model efficiently, we use an adaptive reweighting homotopy algorithm that updates the solution by tracking a path along inexpensive homotopy steps. Numerical examples and a field data application demonstrate the feasibility of the proposed method and show its significance for detecting small-scale discontinuities in a seismic section. The proposed method improves the focusing of diffractions and reduces migration artifacts.
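
    To convey the flavour of the reweighted L2/L1 model, here is an iterative soft-thresholding sketch with weights updated from the current solution; it is a simple stand-in for the adaptive reweighting homotopy solver actually used, and A denotes a generic forward modeling operator in matrix form:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_ista(A, b, lam=0.1, outer=5, inner=100, eps=1e-3):
    """min_x 0.5*||A x - b||^2 + lam * sum_i w_i |x_i|, with w_i ~ 1/(|x_i| + eps)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    w = np.ones_like(x)
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - step * grad, step * lam * w)
        w = 1.0 / (np.abs(x) + eps)          # reweighting: favour the current sparse support
    return x
```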

  20. An adaptive Cartesian grid generation method for Dirty geometry

    NASA Astrophysics Data System (ADS)

    Wang, Z. J.; Srinivasan, Kumar

    2002-07-01

    Traditional structured and unstructured grid generation methods need a water-tight boundary surface grid to start. Therefore, these methods are named boundary to interior (B2I) approaches. Although these methods have achieved great success in fluid flow simulations, the grid generation process can still be very time consuming if non-water-tight geometries are given. Significant user time can be taken to repair or clean a dirty geometry with cracks, overlaps or invalid manifolds before grid generation can take place. In this paper, we advocate a different approach in grid generation, namely the interior to boundary (I2B) approach. With an I2B approach, the computational grid is first generated inside the computational domain. Then this grid is intelligently connected to the boundary, and the boundary grid is a result of this connection. A significant advantage of the I2B approach is that dirty geometries can be handled without cleaning or repairing, dramatically reducing grid generation time. An I2B adaptive Cartesian grid generation method is developed in this paper to handle dirty geometries without geometry repair. Compared with a B2I approach, the grid generation time with the I2B approach for a complex automotive engine can be reduced by three orders of magnitude.

  1. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    PubMed

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

    This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

  2. Full Waveform Inversion Using Waveform Sensitivity Kernels

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), where some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be carried out first without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  3. A method of camera calibration with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Yan, Shu-hua; Wang, Guo-chao; Zhou, Chun-lei

    2009-07-01

    In order to calculate the camera parameters correctly, we must determine the accurate coordinates of certain points in the image plane. Corners are important features in 2D images. Generally speaking, they are points that have high curvature and lie at the junction of image regions of different brightness, so corner detection is already widely used in many fields. In this paper we use the pinhole camera model and the SUSAN corner detection algorithm to calibrate the camera. When using the SUSAN corner detection algorithm, we propose an approach to retrieve the gray difference threshold adaptively. That makes it possible to pick up the correct inner chessboard corners under all kinds of gray contrast. Experimental results show this method to be feasible.

  4. A forward method for optimal stochastic nonlinear and adaptive control

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    1988-01-01

    A computational approach is taken to solve the optimal nonlinear stochastic control problem. The approach is to systematically solve the stochastic dynamic programming equations forward in time, using a nested stochastic approximation technique. Although computationally intensive, this provides a straightforward numerical solution for this class of problems and provides an alternative to the usual dimensionality problem associated with solving the dynamic programming equations backward in time. It is shown that the cost degrades monotonically as the complexity of the algorithm is reduced. This provides a strategy for suboptimal control with clear performance/computation tradeoffs. A numerical study focusing on a generic optimal stochastic adaptive control example is included to demonstrate the feasibility of the method.

  5. Computing the roots of complex orthogonal and kernel polynomials

    SciTech Connect

    Saylor, P.E.; Smolarski, D.C.

    1988-01-01

    A method is presented to compute the roots of complex orthogonal and kernel polynomials. An important application of complex kernel polynomials is the acceleration of iterative methods for the solution of nonsymmetric linear equations. In the real case, the roots of orthogonal polynomials coincide with the eigenvalues of the Jacobi matrix, a symmetric tridiagonal matrix obtained from the defining three-term recurrence relationship for the orthogonal polynomials. In the real case kernel polynomials are orthogonal. The Stieltjes procedure is an algorithm to compute the roots of orthogonal and kernel polynomials based on these facts. In the complex case, the Jacobi matrix generalizes to a Hessenberg matrix, the eigenvalues of which are roots of either orthogonal or kernel polynomials. The resulting algorithm generalizes the Stieltjes procedure. It may not be defined in the case of kernel polynomials, a consequence of the fact that they are orthogonal with respect to a nonpositive bilinear form. (Another consequence is that kernel polynomials need not be of exact degree.) A second algorithm that is always defined is presented for kernel polynomials. Numerical examples are described.
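
    The real-case fact the paper builds on can be shown in a few lines: the roots of the degree-n orthogonal polynomial are the eigenvalues of the symmetric tridiagonal Jacobi matrix built from the three-term recurrence (illustrated here for Legendre polynomials; the complex kernel-polynomial case instead leads to a Hessenberg matrix):

```python
import numpy as np

def legendre_roots(n):
    """Roots of the degree-n Legendre polynomial as eigenvalues of the Jacobi matrix."""
    k = np.arange(1, n)
    off_diag = k / np.sqrt(4.0 * k * k - 1.0)  # recurrence coefficients; diagonal is zero
    J = np.diag(off_diag, 1) + np.diag(off_diag, -1)
    return np.sort(np.linalg.eigvalsh(J))

# Cross-check against numpy's Gauss-Legendre nodes, which are the same roots.
nodes, _ = np.polynomial.legendre.leggauss(5)
assert np.allclose(legendre_roots(5), np.sort(nodes))
```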

  6. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.

  7. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  8. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
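
    The contrast between cell-center sampling and cell-integration can be reproduced with a short sketch. The following Python example, which is only an illustration under its own assumptions and not the authors' goal-seeking correction, discretizes a 1-D Gaussian dispersal kernel both ways and convolves it repeatedly to show how a small kernel (σ much smaller than the cell size) misrepresents the spread of the population.

      import numpy as np
      from math import erf, sqrt

      def gaussian_kernel_1d(sigma, half_width, method="integrate"):
          """Discretise a zero-mean Gaussian on unit grid cells centred at integer offsets."""
          centers = np.arange(-half_width, half_width + 1, dtype=float)
          if method == "center":
              k = np.exp(-0.5 * (centers / sigma) ** 2)            # sample the density at cell centres
          else:
              cdf = lambda x: 0.5 * (1.0 + erf(x / (sigma * sqrt(2.0))))
              k = np.array([cdf(c + 0.5) - cdf(c - 0.5) for c in centers])  # integrate over each cell
          return k / k.sum()                                        # normalise to conserve mass

      sigma, steps = 0.2, 50                                        # kernel much smaller than one cell
      spread = {}
      for name in ("center", "integrate"):
          k = gaussian_kernel_1d(sigma, 4, name)
          dist = np.zeros(501); dist[250] = 1.0
          for _ in range(steps):
              dist = np.convolve(dist, k, mode="same")              # repeated dispersal steps
          x = np.arange(dist.size) - 250
          spread[name] = np.sqrt((dist * x ** 2).sum())             # empirical spread after 50 steps
      # analytic reference spread would be sqrt(steps) * sigma; both discretizations deviate for small sigma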

  9. Kernel machine SNP-set testing under multiple candidate kernels.

    PubMed

    Wu, Michael C; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M; Harmon, Quaker E; Lin, Xinyi; Engel, Stephanie M; Molldrem, Jeffrey J; Armistead, Paul M

    2013-04-01

    Joint testing for the cumulative effect of multiple single-nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large-scale genetic association studies. The kernel machine (KM)-testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori because this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest P-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power vs. using the best candidate kernel.

  10. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit distribution and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  11. Evaluation of Adaptive Subdivision Method on Mobile Device

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Isa, Siti Aida Mohd; Rehman, Amjad; Saba, Tanzila

    2013-06-01

    Recently, there have been significant improvements in the capabilities of mobile devices, but rendering a large 3D object is still tedious because of the resource constraints of mobile devices. To reduce the storage requirement, the 3D object is simplified, but certain areas of curvature are compromised and the surface will not be smooth. Therefore a method to smooth selected areas of curvature is implemented. One of the popular methods is the adaptive subdivision method. Experiments are performed using two datasets, with results based on processing time, rendering speed and the appearance of the object on the devices. The results show a drop in frame rate performance due to the increase in the number of triangles with each level of iteration, while the processing time for generating the new mesh also increases significantly. Since there is a difference in screen size between the devices, the surface on the iPhone appears to have more triangles and be more compact than the surface displayed on the iPad.

  12. Training Lp norm multiple kernel learning in the primal.

    PubMed

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method where one alternately solves SVMs in the dual and updates kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method can be obtained analytically. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may obtain a type of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method.

  13. Parametric kernel-driven active contours for image segmentation

    NASA Astrophysics Data System (ADS)

    Wu, Qiongzhi; Fang, Jiangxiong

    2012-10-01

    We investigated a parametric kernel-driven active contour (PKAC) model, which implicitly maps the image data via a kernel function and models the mapped data as piecewise constant. The proposed model consists of a curve evolution functional with three terms: global kernel-driven and local kernel-driven terms, which evaluate the deviation of the mapped image data within each region from the piecewise constant model, and a regularization term expressed as the length of the evolution curves. In the local kernel-driven term, the proposed model can effectively segment images with intensity inhomogeneity by incorporating the local image information. By balancing the weight between the global kernel-driven term and the local kernel-driven term, the proposed model can segment images with either intensity homogeneity or intensity inhomogeneity. To ensure the smoothness of the level set function and reduce the computational cost, the distance regularizing term is applied to penalize the deviation of the level set function and eliminate the requirement of re-initialization. Compared with the local image fitting model and local binary fitting model, experimental results show the advantages of the proposed method in terms of computational efficiency and accuracy.

  14. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  15. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored...

  16. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel,...

  17. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, Joseph Thaddeus

    1998-01-01

    A new adaptive optics system and method of operation, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G′ = (I − X(XᵀX)⁻¹Xᵀ)G(I − A)

  18. Method for removing tilt control in adaptive optics systems

    DOEpatents

    Salmon, J.T.

    1998-04-28

    A new adaptive optics system and method of operation are disclosed, whereby the method removes tilt control, and includes the steps of using a steering mirror to steer a wavefront in the desired direction, for aiming an impinging aberrated light beam in the direction of a deformable mirror. The deformable mirror has its surface deformed selectively by means of a plurality of actuators, and compensates, at least partially, for existing aberrations in the light beam. The light beam is split into an output beam and a sample beam, and the sample beam is sampled using a wavefront sensor. The sampled signals are converted into corresponding electrical signals for driving a controller, which, in turn, drives the deformable mirror in a feedback loop in response to the sampled signals, for compensating for aberrations in the wavefront. To this purpose, a displacement error (gradient) of the wavefront is measured, and adjusted by a modified gain matrix, which satisfies the following equation: G′ = (I − X(XᵀX)⁻¹Xᵀ)G(I − A).
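
    The quoted gain modification is a left-projection that removes the components spanned by the columns of X (the tilt modes) from the gain matrix. The short Python sketch below simply evaluates that expression with placeholder matrices and checks the projection property; G, X and A here are arbitrary stand-ins, not values from the patent.

      import numpy as np

      def remove_tilt_gain(G, X, A):
          """Apply the gain modification G' = (I - X (X^T X)^{-1} X^T) G (I - A)."""
          n = G.shape[0]
          P = X @ np.linalg.inv(X.T @ X) @ X.T          # orthogonal projector onto span(X)
          return (np.eye(n) - P) @ G @ (np.eye(G.shape[1]) - A)

      # toy example: 6 sensor channels, tilt described by two mode columns
      rng = np.random.default_rng(1)
      G = rng.standard_normal((6, 6))
      X = rng.standard_normal((6, 2))
      A = 0.1 * np.eye(6)
      G_mod = remove_tilt_gain(G, X, A)
      # the modified gain produces no output component along the tilt modes
      assert np.allclose(X.T @ G_mod, 0.0, atol=1e-10)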

  19. Adapted G-mode Clustering Method applied to Asteroid Taxonomy

    NASA Astrophysics Data System (ADS)

    Hasselmann, Pedro H.; Carvano, Jorge M.; Lazzaro, D.

    2013-11-01

    The original G-mode was a clustering method developed by A. I. Gavrishin in the late 60's for geochemical classification of rocks, but it was also applied to asteroid photometry, cosmic rays, lunar samples and planetary science spectroscopy data. In this work, we used an adapted version to classify the asteroid photometry from the SDSS Moving Objects Catalog. The method works by identifying normal distributions in a multidimensional space of variables. The identification starts by locating a set of points with the smallest mutual distance in the sample, which is a problem when the data are not planar. Here we present a modified version of the G-mode algorithm, previously written in FORTRAN 77 and now reimplemented in Python 2.7 using the NumPy, SciPy and Matplotlib packages. NumPy was used for array and matrix manipulation and Matplotlib for plot control. SciPy played an important role in speeding up G-mode: Scipy.spatial.distance.mahalanobis was chosen as the distance estimator and Numpy.histogramdd was applied to find the initial seeds from which clusters evolve. SciPy was also used to quickly produce dendrograms showing the distances among clusters. Finally, results for asteroid taxonomy and tests for different sample sizes and implementations are presented.

  20. A Self-Adaptive Projection and Contraction Method for Linear Complementarity Problems

    SciTech Connect

    Liao Lizhi Wang Shengli

    2003-10-15

    In this paper we develop a self-adaptive projection and contraction method for the linear complementarity problem (LCP). This method improves the practical performance of the modified projection and contraction method by adopting a self-adaptive technique. The global convergence of our new method is proved under mild assumptions. Our numerical tests clearly demonstrate the necessity and effectiveness of our proposed method.

  1. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images.

  2. Adaptable Metadata Rich IO Methods for Portable High Performance IO

    SciTech Connect

    Lofstead, J.; Zheng, Fang; Klasky, Scott A; Schwan, Karsten

    2009-01-01

    Since IO performance on HPC machines strongly depends on machine characteristics and configuration, it is important to carefully tune IO libraries and make good use of appropriate library APIs. For instance, on current petascale machines, independent IO tends to outperform collective IO, in part due to bottlenecks at the metadata server. The problem is exacerbated by scaling issues, since each IO library scales differently on each machine, and typically, operates efficiently to different levels of scaling on different machines. With scientific codes being run on a variety of HPC resources, efficient code execution requires us to address three important issues: (1) end users should be able to select the most efficient IO methods for their codes, with minimal effort in terms of code updates or alterations; (2) such performance-driven choices should not prevent data from being stored in the desired file formats, since those are crucial for later data analysis; and (3) it is important to have efficient ways of identifying and selecting certain data for analysis, to help end users cope with the flood of data produced by high end codes. This paper employs ADIOS, the ADaptable IO System, as an IO API to address (1)-(3) above. Concerning (1), ADIOS makes it possible to independently select the IO methods being used by each grouping of data in an application, so that end users can use those IO methods that exhibit best performance based on both IO patterns and the underlying hardware. In this paper, we also use this facility of ADIOS to experimentally evaluate on petascale machines alternative methods for high performance IO. Specific examples studied include methods that use strong file consistency vs. delayed parallel data consistency, as that provided by MPI-IO or POSIX IO. Concerning (2), to avoid linking IO methods to specific file formats and attain high IO performance, ADIOS introduces an efficient intermediate file format, termed BP, which can be converted, at small

  3. Study of the Impact of Tissue Density Heterogeneities on 3-Dimensional Abdominal Dosimetry: Comparison Between Dose Kernel Convolution and Direct Monte Carlo Methods

    PubMed Central

    Dieudonné, Arnaud; Hobbs, Robert F.; Lebtahi, Rachida; Maurel, Fabien; Baechler, Sébastien; Wahl, Richard L.; Boubaker, Ariane; Le Guludec, Dominique; Sgouros, Georges; Gardin, Isabelle

    2014-01-01

    Dose kernel convolution (DK) methods have been proposed to speed up absorbed dose calculations in molecular radionuclide therapy. Our aim was to evaluate the impact of tissue density heterogeneities (TDH) on dosimetry when using a DK method and to propose a simple density-correction method. Methods: This study has been conducted on 3 clinical cases: case 1, non-Hodgkin lymphoma treated with 131I-tositumomab; case 2, a neuroendocrine tumor treatment simulated with 177Lu-peptides; and case 3, hepatocellular carcinoma treated with 90Y-microspheres. Absorbed dose calculations were performed using a direct Monte Carlo approach accounting for TDH (3D-RD), and a DK approach (VoxelDose, or VD). For each individual voxel, the VD absorbed dose, DVD, calculated assuming uniform density, was corrected for density, giving DVDd. The average 3D-RD absorbed dose values, D3DRD, were compared with DVD and DVDd, using the relative difference ΔVD/3DRD. At the voxel level, density-binned ΔVD/3DRD and ΔVDd/3DRD were plotted against ρ and fitted with a linear regression. Results: The DVD calculations showed a good agreement with D3DRD. ΔVD/3DRD was less than 3.5%, except for the tumor of case 1 (5.9%) and the renal cortex of case 2 (5.6%). At the voxel level, the ΔVD/3DRD range was 0%–14% for cases 1 and 2, and −3% to 7% for case 3. All 3 cases showed a linear relationship between voxel bin-averaged ΔVD/3DRD and density, ρ: case 1 (Δ = −0.56ρ + 0.62, R² = 0.93), case 2 (Δ = −0.91ρ + 0.96, R² = 0.99), and case 3 (Δ = −0.69ρ + 0.72, R² = 0.91). The density correction improved the agreement of the DK method with the Monte Carlo approach (ΔVDd/3DRD < 1.1%), but to a lesser extent for the tumor of case 1 (3.1%). At the voxel level, the ΔVDd/3DRD range decreased for the 3 clinical cases (case 1, −1% to 4%; case 2, −0.5% to 1.5%; and case 3, −1.5% to 2%). No more linear regression existed for cases 2 and 3, contrary to case 1 (Δ = 0.41ρ − 0.38, R² = 0.88) although

  4. Convolution kernels for multi-wavelength imaging

    NASA Astrophysics Data System (ADS)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However each image is affected by an instrumental response, also known as point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. A software to compute these kernels is available at https://github.com/aboucaud/pypher
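
    The basic Wiener-ratio construction behind such PSF-matching kernels can be sketched in a few lines of Python. This is a simplified stand-in under its own assumptions (a scalar regularisation constant, Gaussian test PSFs), not the pypher implementation referenced above.

      import numpy as np

      def psf_matching_kernel(psf_source, psf_target, reg=1e-4):
          """Build a kernel K such that psf_source convolved with K approximates psf_target.

          Computed in Fourier space with a Wiener-type regularisation `reg` that damps
          frequencies where the source PSF carries little power.
          """
          S = np.fft.fft2(np.fft.ifftshift(psf_source))
          T = np.fft.fft2(np.fft.ifftshift(psf_target))
          K = np.conj(S) * T / (np.abs(S) ** 2 + reg)
          kernel = np.real(np.fft.fftshift(np.fft.ifft2(K)))
          return kernel / kernel.sum()

      def gaussian_psf(n, sigma):
          y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
          p = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
          return p / p.sum()

      # usage: homogenise a narrow PSF to a broader target PSF
      k = psf_matching_kernel(gaussian_psf(64, 2.0), gaussian_psf(64, 4.0))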

  5. Principles and Methods of Adapted Physical Education and Recreation.

    ERIC Educational Resources Information Center

    Arnheim, Daniel D.; And Others

    This text is designed for the elementary and secondary school physical educator and the recreation specialist in adapted physical education and, more specifically, as a text for college courses in adapted and corrective physical education and therapeutic recreation. The text is divided into four major divisions: scope, key teaching and therapy…

  6. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  7. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.

  8. A hybrid method for optimization of the adaptive Goldstein filter

    NASA Astrophysics Data System (ADS)

    Jiang, Mi; Ding, Xiaoli; Tian, Xin; Malhotra, Rakesh; Kong, Weixue

    2014-12-01

    The Goldstein filter is a well-known filter for interferometric filtering in the frequency domain. The main parameter of this filter, alpha, is set as a power of the filtering function. Depending on it, considered areas are strongly or weakly filtered. Several variants have been developed to adaptively determine alpha using different indicators such as the coherence, and phase standard deviation. The common objective of these methods is to prevent areas with low noise from being over filtered while simultaneously allowing stronger filtering over areas with high noise. However, the estimators of these indicators are biased in the real world and the optimal model to accurately determine the functional relationship between the indicators and alpha is also not clear. As a result, the filter always under- or over-filters and is rarely correct. The study presented in this paper aims to achieve accurate alpha estimation by correcting the biased estimator using homogeneous pixel selection and bootstrapping algorithms, and by developing an optimal nonlinear model to determine alpha. In addition, an iteration is also merged into the filtering procedure to suppress the high noise over incoherent areas. The experimental results from synthetic and real data show that the new filter works well under a variety of conditions and offers better and more reliable performance when compared to existing approaches.
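
    For orientation, the following Python sketch shows a Goldstein-style patch filter where alpha is set adaptively from coherence (the simple alpha = 1 − coherence rule that is one of the baselines such studies improve upon); the patch size, the normalisation of the spectral weight and the omission of spectrum smoothing are assumptions of this sketch, and it does not reproduce the bias correction or the optimal nonlinear model of the paper.

      import numpy as np

      def goldstein_filter_patch(ifg_patch, alpha):
          """Goldstein filtering of one complex interferogram patch:
          weight the spectrum by its normalised magnitude raised to the power alpha."""
          Z = np.fft.fft2(ifg_patch)
          weight = np.abs(Z)
          weight /= weight.max() + 1e-12
          return np.fft.ifft2(Z * weight ** alpha)

      def adaptive_goldstein(ifg, coherence, patch=32):
          """Filter patch by patch, choosing alpha = 1 - mean coherence of the patch."""
          out = np.zeros_like(ifg, dtype=complex)
          n, m = ifg.shape
          for i in range(0, n - patch + 1, patch):
              for j in range(0, m - patch + 1, patch):
                  a = 1.0 - float(np.mean(coherence[i:i + patch, j:j + patch]))
                  out[i:i + patch, j:j + patch] = goldstein_filter_patch(
                      ifg[i:i + patch, j:j + patch], max(a, 0.0))
          return out

      # usage on a synthetic noisy interferogram with uniform coherence
      rng = np.random.default_rng(2)
      phase = np.add.outer(np.linspace(0, 6, 128), np.linspace(0, 3, 128))
      ifg = np.exp(1j * (phase + 0.8 * rng.standard_normal((128, 128))))
      filtered = adaptive_goldstein(ifg, np.full((128, 128), 0.4))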

  9. Spectrum-based kernel length estimation for Gaussian process classification.

    PubMed

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectrums are first calculated analytically from the kernel function itself using the autocorrelation theorem as well as being estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum equals to the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and could suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.
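
    A very loose Python sketch of the underlying idea, equating the analytic spectrum of a squared-exponential kernel with an empirical spectrum of the training data, is given below; the 1-D least-squares spectral match used here is an assumption of this sketch and not the estimation procedure of the paper.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def rbf_spectrum(freqs, length):
          """Analytic power spectrum of the squared-exponential kernel exp(-tau^2 / (2 l^2))."""
          return length * np.sqrt(2.0 * np.pi) * np.exp(-2.0 * (np.pi * length * freqs) ** 2)

      def estimate_length_scale(x, dt=1.0):
          """Pick the RBF length scale whose analytic spectrum best matches the
          periodogram of a 1-D, evenly sampled training signal x."""
          x = np.asarray(x, float) - np.mean(x)
          freqs = np.fft.rfftfreq(x.size, d=dt)
          pxx = np.abs(np.fft.rfft(x)) ** 2
          pxx /= pxx.sum()

          def mismatch(log_l):
              s = rbf_spectrum(freqs, np.exp(log_l))
              s /= s.sum()
              return np.sum((s - pxx) ** 2)

          res = minimize_scalar(mismatch, bounds=(np.log(1e-2), np.log(1e3)), method="bounded")
          return float(np.exp(res.x))

      # usage: the recovered length scale tracks the smoothness of the signal
      t = np.arange(0, 200, 1.0)
      signal = np.sin(2 * np.pi * t / 40.0) + 0.1 * np.random.default_rng(3).standard_normal(t.size)
      length = estimate_length_scale(signal)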

  10. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    PubMed

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat and maize data sets. For single-environment analyses of wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects.
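
    As a minimal, hedged sketch of how such a Gaussian genomic kernel is typically built and used, the Python example below constructs K from a marker matrix with a median-distance scaling and predicts phenotypes by kernel ridge regression; the bandwidth choice and the ridge formulation are stand-ins for the Bayesian RKHS machinery (kernel averaging, empirical Bayes bandwidth) described in the paper.

      import numpy as np

      def gaussian_genomic_kernel(X, h=1.0):
          """Gaussian kernel K_ij = exp(-h * d_ij^2 / median(d^2)) from a marker matrix X (n x p)."""
          sq = np.sum(X ** 2, axis=1)
          d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
          scale = np.median(d2[np.triu_indices_from(d2, k=1)])      # median heuristic (assumed)
          return np.exp(-h * d2 / scale)

      def kernel_ridge_predict(K, y, train, test, lam=1.0):
          """Predict phenotypes of `test` lines from `train` lines via kernel ridge regression."""
          alpha = np.linalg.solve(K[np.ix_(train, train)] + lam * np.eye(len(train)), y[train])
          return K[np.ix_(test, train)] @ alpha

      # usage with simulated markers and phenotypes
      rng = np.random.default_rng(4)
      X = rng.integers(0, 3, size=(100, 500)).astype(float)         # 100 lines, 500 biallelic markers
      y = X[:, :10].sum(axis=1) + rng.standard_normal(100)
      K = gaussian_genomic_kernel(X)
      idx = rng.permutation(100); train, test = idx[:80], idx[80:]
      y_hat = kernel_ridge_predict(K, y, train, test)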

  11. Bayesian Kernel Mixtures for Counts.

    PubMed

    Canale, Antonio; Dunson, David B

    2011-12-01

    Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.

  12. MULTIVARIATE KERNEL PARTITION PROCESS MIXTURES

    PubMed Central

    Dunson, David B.

    2013-01-01

    Mixtures provide a useful approach for relaxing parametric assumptions. Discrete mixture models induce clusters, typically with the same cluster allocation for each parameter in multivariate cases. As a more flexible approach that facilitates sparse nonparametric modeling of multivariate random effects distributions, this article proposes a kernel partition process (KPP) in which the cluster allocation varies for different parameters. The KPP is shown to be the driving measure for a multivariate ordered Chinese restaurant process that induces a highly-flexible dependence structure in local clustering. This structure allows the relative locations of the random effects to inform the clustering process, with spatially-proximal random effects likely to be assigned the same cluster index. An exact block Gibbs sampler is developed for posterior computation, avoiding truncation of the infinite measure. The methods are applied to hormone curve data, and a dependent KPP is proposed for classification from functional predictors. PMID:24478563

  13. Prediction of kernel density of corn using single-kernel near infrared spectroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Corn hardness is an important property for dry- and wet-millers, food processors and corn breeders developing hybrids for specific markets. Of the several methods used to measure hardness, kernel density measurement is one of the more repeatable ways to quantify it. Near infrared spec...

  14. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    SciTech Connect

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-29

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are ''hydrodynamically large'', i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an

  15. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    PubMed

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  16. [Novel method of noise power spectrum measurement for computed tomography images with adaptive iterative reconstruction method].

    PubMed

    Nishimaru, Eiji; Ichikawa, Katsuhiro; Hara, Takanori; Terakawa, Shoichi; Yokomachi, Kazushi; Fujioka, Chikako; Kiguchi, Masao; Ishifuro, Minoru

    2012-01-01

    Adaptive iterative reconstruction techniques (IRs) can decrease image noise in computed tomography (CT) and are expected to contribute to reduction of the radiation dose. To evaluate the performance of IRs, the conventional two-dimensional (2D) noise power spectrum (NPS) is widely used. However, when an IR produces an NPS value drop at all spatial frequencies (similar to the NPS change caused by a dose increase), the conventional method cannot evaluate the noise property correctly because it does not account for the volume-data nature of CT images. The purpose of our study was to develop a new method for NPS measurements that can be adapted to IRs. Our method utilized thick multi-planar reconstruction (MPR) images. The thick images are generally made by averaging CT volume data in a direction perpendicular to an MPR plane (e.g. the z-direction for an axial MPR plane). By using this averaging technique as a cutter for the 3D NPS, we can obtain an adequate 2D-extracted NPS (eNPS) from the 3D NPS. We applied this method to IR images generated with adaptive iterative dose reduction 3D (AIDR-3D, Toshiba) to investigate the validity of our method. A water phantom with a diameter of 24 cm was scanned at 120 kV and 200 mAs with a 320-row CT (Acquilion One, Toshiba). From the results of the study, the adequate thickness of MPR images for eNPS was more than 25.0 mm. Our new NPS measurement method utilizing thick MPR images was accurate and effective for evaluating the noise reduction effects of IRs.
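
    The thick-MPR averaging and 2D periodogram steps can be sketched as follows in Python; the slab length, ROI handling and normalisation below are generic assumptions for illustration (using synthetic noise in place of the water-phantom scan), not the calibrated procedure of the paper.

      import numpy as np

      def enps_from_thick_mpr(volume, slab=25, roi=64, voxel=(0.5, 0.5, 0.5)):
          """Extracted 2-D NPS from thick MPR images of a noise-only CT volume (z, y, x)."""
          dz, dy, dx = voxel
          n_slabs = volume.shape[0] // slab
          nps_accum = np.zeros((roi, roi))
          for k in range(n_slabs):
              mpr = volume[k * slab:(k + 1) * slab].mean(axis=0)     # thick MPR image (average along z)
              patch = mpr[:roi, :roi] - mpr[:roi, :roi].mean()        # zero-mean noise ROI
              nps_accum += np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
          # ensemble-averaged 2-D NPS in (signal units)^2 * mm^2
          return dx * dy / (roi * roi) * nps_accum / n_slabs

      # usage with synthetic noise standing in for the scanned phantom
      rng = np.random.default_rng(5)
      vol = rng.standard_normal((200, 128, 128))
      nps2d = enps_from_thick_mpr(vol, slab=50)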

  17. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  18. Determining the Parameters of the Hereditary Kernels of Nonlinear Viscoelastic Isotropic Materials in Torsion

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Ragulina, V. S.; Fernati, P. V.

    2015-03-01

    A method for determining the parameters of the hereditary kernels for nonlinear viscoelastic materials is tested under conditions of pure torsion. A Rabotnov-type model is chosen. The parameters of the hereditary kernels are determined by fitting discrete values of the kernels found using a similarity condition. The discrete values of the kernels in the zone of singularity occurring in short-term tests are found using weight functions. The Abel kernel, a combination of power and exponential functions, and a fractional-exponential function are considered.

  19. Method and system for spatial data input, manipulation and distribution via an adaptive wireless transceiver

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both the short and long distances. The wireless transceiver is automatically adaptive and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real-time via available networks and network services.

  20. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces. PMID:20552013

  1. Adaptive L₁/₂ shooting regularization method for survival analysis using gene expression data.

    PubMed

    Liu, Xiao-Ying; Liang, Yong; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2013-01-01

    A new adaptive L₁/₂ shooting regularization method for variable selection, based on the Cox proportional hazards model, is proposed. This adaptive L₁/₂ shooting algorithm can be easily obtained by optimizing a reweighted iterative series of L₁ penalties together with a shooting strategy for the L₁/₂ penalty. Simulation results based on high dimensional artificial data show that the adaptive L₁/₂ shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L₁/₂ regularization method performs competitively.

  2. An O(N) and parallel approach to integral problems by a kernel-independent fast multipole method: Application to polarization and magnetization of interacting particles

    SciTech Connect

    Jiang, Xikai; Li, Jiyuan; Zhao, Xujun; Qin, Jian; Karpeev, Dmitry; Hernandez-Ortiz, Juan; de Pablo, Juan J.; Heinonen, Olle

    2016-08-10

    Large classes of materials systems in physics and engineering are governed by magnetic and electrostatic interactions. Continuum or mesoscale descriptions of such systems can be cast in terms of integral equations, whose direct computational evaluation requires O(N²) operations, where N is the number of unknowns. Such a scaling, which arises from the many-body nature of the relevant Green's function, has precluded widespread adoption of integral methods for solution of large-scale scientific and engineering problems. In this work, a parallel computational approach is presented that relies on using scalable open source libraries and utilizes a kernel-independent Fast Multipole Method (FMM) to evaluate the integrals in O(N) operations, with O(N) memory cost, thereby substantially improving the scalability and efficiency of computational integral methods. We demonstrate the accuracy, efficiency, and scalability of our approach in the context of two examples. In the first, we solve a boundary value problem for a ferroelectric/ferromagnetic volume in free space. In the second, we solve an electrostatic problem involving polarizable dielectric bodies in an unbounded dielectric medium. Lastly, the results from these test cases show that our proposed parallel approach, which is built on a kernel-independent FMM, can enable highly efficient and accurate simulations and allow for considerable flexibility in a broad range of applications.

  3. Adaptation of a-Stratified Method in Variable Length Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai

    Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…

  4. Systems and Methods for Derivative-Free Adaptive Control

    NASA Technical Reports Server (NTRS)

    Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.

  5. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.

  6. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were subsequently queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  7. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.
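
    A minimal Python sketch of the eigenfunction-expansion form of heat kernel smoothing is given below, using a graph Laplacian of a ring as a discrete stand-in for the Laplace-Beltrami operator on a surface mesh; the toy domain and the fixed number of eigenpairs are assumptions of this sketch, not the mandible-surface implementation.

      import numpy as np

      def heat_kernel_smooth(signal, L, t=1.0, n_eig=50):
          """Heat-kernel smoothing f_t = sum_k exp(-lambda_k t) <f, psi_k> psi_k,
          with (lambda_k, psi_k) the smallest eigenpairs of a (graph) Laplacian L."""
          vals, vecs = np.linalg.eigh(L)
          vals, vecs = vals[:n_eig], vecs[:, :n_eig]
          coeffs = vecs.T @ signal                        # expansion coefficients <f, psi_k>
          return vecs @ (np.exp(-vals * t) * coeffs)      # heat-kernel weighted reconstruction

      # usage: smooth a noisy scalar field on a ring graph (a toy closed 1-D "surface")
      n = 200
      L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      L[0, -1] = L[-1, 0] = -1                            # periodic boundary -> ring Laplacian
      theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
      noisy = np.cos(3 * theta) + 0.3 * np.random.default_rng(6).standard_normal(n)
      smoothed = heat_kernel_smooth(noisy, L, t=2.0)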

  8. Inner string cementing adapter and method of use

    SciTech Connect

    Helms, L.C.

    1991-08-20

    This patent describes an inner string cementing adapter for use on a work string in a well casing having floating equipment therein. It comprises mandrel means for connecting to a lower end of the work string; and sealing means adjacent to the mandrel means for substantially flatly sealing against a surface of the floating equipment without engaging a central opening in the floating equipment.

  9. An adaptive precision gradient method for optimal control.

    NASA Technical Reports Server (NTRS)

    Klessig, R.; Polak, E.

    1973-01-01

    This paper presents a gradient algorithm for unconstrained optimal control problems. The algorithm is stated in terms of numerical integration formulas, the precision of which is controlled adaptively by a test that ensures convergence. Empirical results show that this algorithm is considerably faster than its fixed-precision counterpart.

  10. A New Method to Cancel RFI---The Adaptive Filter

    NASA Astrophysics Data System (ADS)

    Bradley, R.; Barnbaum, C.

    1996-12-01

    An increasing amount of precious radio frequency spectrum in the VHF, UHF, and microwave bands is being utilized each year to support new commercial and military ventures, and all have the potential to interfere with radio astronomy observations. Some radio spectral lines of astronomical interest occur outside the protected radio astronomy bands and are unobservable due to heavy interference. Conventional approaches to deal with RFI include legislation, notch filters, RF shielding, and post-processing techniques. Although these techniques are somewhat successful, each suffers from insufficient interference cancellation. One concept of interference excision that has not been used before in radio astronomy is adaptive interference cancellation. The concept of adaptive interference canceling was first introduced in the mid-1970s as a way to reduce unwanted noise in low frequency (audio) systems. Examples of such systems include the canceling of maternal ECG in fetal electrocardiography and the reduction of engine noise in the passenger compartment of automobiles. Only recently have high-speed digital filter chips made adaptive filtering possible in a bandwidth as large as a few megahertz, finally opening the door to astronomical uses. The system consists of two receivers: the main beam of the radio telescope receives the desired signal corrupted by RFI coming in the sidelobes, and the reference antenna receives only the RFI. The signal from the reference antenna is processed using a digital adaptive filter and then subtracted from the signal in the main beam, thus producing the system output. The weights of the digital filter are adjusted by way of an algorithm that minimizes, in a least-squares sense, the power output of the system. Through an adaptive-iterative process, the interference canceler will lock onto the RFI and the filter will adjust itself to minimize the effect of the RFI at the system output. We are building a prototype 100 MHz receiver and will measure the cancellation
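
    A minimal sketch of the two-receiver canceller idea described above, using a least-mean-squares (LMS) weight update. The tap count, step size, and synthetic signals are illustrative assumptions, not the prototype 100 MHz receiver.

```python
# LMS adaptive noise canceller: the reference channel (RFI only) is filtered and subtracted
# from the primary channel (signal + RFI); the weights adapt to minimize the output power.
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=1e-3):
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]     # most recent reference samples
        y = w @ x                             # filter estimate of the RFI in the primary
        out[n] = primary[n] - y               # canceller output
        w += 2 * mu * out[n] * x              # LMS weight update
    return out, w

# Toy demonstration: weak sinusoidal "astronomical" signal plus narrowband RFI.
t = np.arange(20000)
rfi = np.sin(0.31 * t)
desired = 0.1 * np.sin(0.02 * t)
primary = desired + 0.8 * np.roll(rfi, 3)     # RFI arrives through the sidelobes with a delay
reference = rfi + 0.01 * np.random.randn(t.size)
cleaned, weights = lms_cancel(primary, reference)
```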

  11. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low cost solutions to partial differential equations. The FAC method achieves a fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, the spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy from this hybrid method outside of the subdomain will be investigated.

  12. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  13. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  14. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle...

  15. Adaptive finite element methods for two-dimensional problems in computational fracture mechanics

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Bass, J. M.; Spradley, L. W.

    1994-01-01

    Some recent results obtained using solution-adaptive finite element methods for two-dimensional problems in linear elastic fracture mechanics are presented. The focus is on the basic issues of adaptive finite element methods, and the new methodology is validated by computing demonstration problems and comparing the computed stress intensity factors with analytical results.

  16. Scale Space Graph Representation and Kernel Matching for Non Rigid and Textured 3D Shape Retrieval.

    PubMed

    Garro, Valeria; Giachetti, Andrea

    2016-06-01

    In this paper we introduce a novel framework for 3D object retrieval that relies on tree-based shape representations (TreeSha) derived from the analysis of the scale-space of the Auto Diffusion Function (ADF) and on specialized graph kernels designed for their comparison. By coupling maxima of the Auto Diffusion Function with the related basins of attraction, we can link the information at different scales encoding spatial relationships in a graph description that is isometry invariant and can easily incorporate texture and additional geometrical information as node and edge features. Using custom graph kernels it is then possible to estimate shape dissimilarities adapted to different specific tasks and on different categories of models, making the procedure a powerful and flexible tool for shape recognition and retrieval. Experimental results demonstrate that the method can provide retrieval scores similar to or better than the state of the art on textured and non-textured shape retrieval benchmarks and give interesting insights into the effectiveness of different shape descriptors and graph kernels.

  17. A Gabor-block-based kernel discriminative common vector approach using cosine kernels for human face recognition.

    PubMed

    Kar, Arindam; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2012-01-01

    In this paper a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks from Gabor wavelet transformed images are extracted. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include a cosine kernel function in the discriminating method. The KDCV with the cosine kernels is then applied on the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L(1) and L(2) distance measures and the cosine similarity measure on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and the FERET database demonstrate the effectiveness of this new approach.

  18. Nonlinear stochastic system identification of skin using volterra kernels.

    PubMed

    Chen, Yi; Hunter, Ian W

    2013-04-01

    Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criteria (AICc) indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy.
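
    A minimal sketch of first- plus second-order Volterra kernel identification by linear least squares, assuming a short memory length and a synthetic quadratic system. The fast least-squares and orthogonalization procedures and the frequency-domain filtering used in the paper are not reproduced here.

```python
# Volterra kernel identification via least squares on linear and quadratic regressors.
import numpy as np
from itertools import combinations_with_replacement

def volterra_regressors(x, M):
    """Build the regressor matrix for memory length M (offset + linear + quadratic terms)."""
    rows = []
    for n in range(M, len(x)):
        past = x[n - M:n + 1][::-1]                       # x[n], x[n-1], ..., x[n-M]
        quad = [past[i] * past[j]
                for i, j in combinations_with_replacement(range(M + 1), 2)]
        rows.append(np.concatenate(([1.0], past, quad)))  # h0, h1 terms, h2 terms
    return np.array(rows)

# Synthetic nonlinear system: y[n] = 0.8*u[n] - 0.3*u[n-1] + 0.5*u[n]*u[n-1] + noise
u = np.random.randn(5000)
y = np.zeros_like(u)
y[1:] = 0.8 * u[1:] - 0.3 * u[:-1] + 0.5 * u[1:] * u[:-1]
y += 0.05 * np.random.randn(u.size)

M = 3
Phi = volterra_regressors(u, M)                           # rows correspond to n = M .. len(u)-1
theta, *_ = np.linalg.lstsq(Phi, y[M:], rcond=None)       # stacked kernel estimates
```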

  19. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, only a single kernel is usually insufficient to characterize nonlinear distribution information of a view. To solve the problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that correlation features learned by KAMCCA can have well discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on the datasets have manifested the effectiveness of our proposed method.

  20. Method and apparatus for adaptive force and position control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun (Inventor)

    1989-01-01

    The present invention discloses systematic methods and apparatus for the design of real-time controllers. Real-time adaptive force/position control is achieved by use of feedforward and feedback controllers, with the feedforward controller being the inverse of the linearized model of robot dynamics and containing only proportional-double-derivative terms. The feedback controller, of the proportional-integral-derivative type, ensures that manipulator joints follow reference trajectories and achieves robust tracking of step-plus-exponential trajectories, all in real time. The adaptive controller includes adaptive force and position control within a hybrid control architecture. The adaptive force controller achieves tracking of desired force setpoints, and the adaptive position controller accomplishes tracking of desired position trajectories. Circuits in the adaptive feedback and feedforward controllers are varied by adaptation laws.

  1. A new adaptive time step method for unsteady flow simulations in a human lung.

    PubMed

    Fernández-Tena, Ana; Marcos, Alfonso C; Martínez, Cristina; Keith Walters, D

    2017-04-07

    The innovation presented is a method for adaptive time-stepping that allows clustering of time steps in portions of the cycle for which flow variables are rapidly changing, based on the concept of using a uniform step in a relevant dependent variable rather than a uniform step in the independent variable time. A user-defined function was developed to adapt the magnitude of the time step (adaptive time step) to a defined rate of change in inlet velocity. Quantitative comparison indicates that the new adaptive time stepping method significantly improves accuracy for simulations using an equivalent number of time steps per cycle.
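
    A minimal sketch of the time-step selection rule described above: each step is sized so that the prescribed inlet velocity changes by a roughly constant increment, rather than using a uniform time step. The sinusoidal waveform, target increment, and step limits are illustrative assumptions.

```python
# Adaptive time stepping: uniform step in the dependent variable (inlet velocity).
import numpy as np

def inlet_velocity(t, period=4.0, u_max=1.0):
    return u_max * np.sin(2 * np.pi * t / period)

def adaptive_steps(t_end, du_target=0.02, dt_min=1e-4, dt_max=0.05):
    times = [0.0]
    while times[-1] < t_end:
        t = times[-1]
        # Numerical estimate of |dU/dt|; guard against the flat parts of the cycle.
        dudt = abs(inlet_velocity(t + 1e-6) - inlet_velocity(t)) / 1e-6
        dt = np.clip(du_target / max(dudt, 1e-9), dt_min, dt_max)
        times.append(t + dt)
    return np.array(times)

ts = adaptive_steps(t_end=4.0)
# Steps cluster where the inlet velocity changes rapidly and spread out near the extrema.
```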

  2. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  3. Semisupervised Kernel Marginal Fisher Analysis for Face Recognition

    PubMed Central

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm. PMID:24163638

  4. The adapted augmented Lagrangian method: a new method for the resolution of the mechanical frictional contact problem

    NASA Astrophysics Data System (ADS)

    Bussetta, Philippe; Marceau, Daniel; Ponthot, Jean-Philippe

    2012-02-01

    The aim of this work is to propose a new numerical method for solving the mechanical frictional contact problem in the general case of multi-bodies in a three-dimensional space. This method is called the adapted augmented Lagrangian method (AALM) and can be used in a multi-physical context (such as thermo-electro-mechanical field problems). This paper presents this new method and its advantages over other classical methods such as the penalty method (PM), the adapted penalty method (APM), and the augmented Lagrangian method (ALM). In addition, the efficiency and the reliability of the AALM are proved with some academic problems and an industrial thermo-electro-mechanical problem.

  5. Nonlinear hyperspectral unmixing based on constrained multiple kernel NMF

    NASA Astrophysics Data System (ADS)

    Cui, Jiantao; Li, Xiaorun; Zhao, Liaoying

    2014-05-01

    Nonlinear spectral unmixing constitutes an important field of research for hyperspectral imagery. An unsupervised nonlinear spectral unmixing algorithm, namely multiple kernel constrained nonnegative matrix factorization (MKCNMF), is proposed by coupling multiple-kernel selection with kernel NMF. Additionally, a minimum endmemberwise distance constraint and an abundance smoothness constraint are introduced to alleviate the uniqueness problem of NMF in the algorithm. In MKCNMF, the two problems of optimizing the factor matrices and selecting the proper kernel are jointly solved. The performance of the proposed unmixing algorithm is evaluated via experiments based on synthetic and real hyperspectral data sets. The experimental results demonstrate that the proposed method outperforms some existing unmixing algorithms in terms of spectral angle distance (SAD) and abundance fractions.

  6. Balancing continuous covariates based on Kernel densities.

    PubMed

    Ma, Zhenjun; Hu, Feifang

    2013-03-01

    The balance of important baseline covariates is essential for convincing treatment comparisons. Stratified permuted block design and minimization are the two most commonly used balancing strategies, both of which require the covariates to be discrete. Continuous covariates are typically discretized in order to be included in the randomization scheme. But breakdown of continuous covariates into subcategories often changes the nature of the covariates and makes distributional balance unattainable. In this article, we propose to balance continuous covariates based on Kernel density estimations, which keeps the continuity of the covariates. Simulation studies show that the proposed Kernel-Minimization can achieve distributional balance of both continuous and categorical covariates, while also keeping the group size well balanced. It is also shown that the Kernel-Minimization is less predictable than stratified permuted block design and minimization. Finally, we apply the proposed method to redesign the NINDS trial, which has been a source of controversy due to imbalance of continuous baseline covariates. Simulation shows that imbalances such as those observed in the NINDS trial can be generally avoided through the implementation of the new method.
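
    A minimal sketch of the kernel-density balancing idea for a single continuous covariate: the new subject is tentatively placed in each arm, the distance between the arms' kernel density estimates is compared, and a biased coin favors the assignment that gives less imbalance. The distance measure and probabilities below are illustrative assumptions, not the exact Kernel-Minimization rule.

```python
# Kernel-density-based covariate balancing with a biased-coin assignment.
import numpy as np
from scipy.stats import gaussian_kde

def kde_distance(a, b, grid):
    """L1 distance between the kernel density estimates of two arms."""
    return np.trapz(np.abs(gaussian_kde(a)(grid) - gaussian_kde(b)(grid)), grid)

def assign(new_x, arm1, arm2, grid, p_best=0.8, rng=np.random.default_rng()):
    d1 = kde_distance(np.append(arm1, new_x), arm2, grid)   # imbalance if put in arm 1
    d2 = kde_distance(arm1, np.append(arm2, new_x), grid)   # imbalance if put in arm 2
    best = 1 if d1 < d2 else 2
    return best if rng.random() < p_best else 3 - best      # biased-coin randomization

grid = np.linspace(-4, 4, 200)
arm1, arm2 = list(np.random.randn(6)), list(np.random.randn(6))
for x in np.random.randn(100):
    (arm1 if assign(x, np.array(arm1), np.array(arm2), grid) == 1 else arm2).append(x)
```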

  7. Kernel-based machine learning techniques for infrasound signal classification

    NASA Astrophysics Data System (ADS)

    Tuma, Matthias; Igel, Christian; Mialle, Pierrick

    2014-05-01

    Infrasound monitoring is one of four remote sensing technologies continuously employed by the CTBTO Preparatory Commission. The CTBTO's infrasound network is designed to monitor the Earth for potential evidence of atmospheric or shallow underground nuclear explosions. Upon completion, it will comprise 60 infrasound array stations distributed around the globe, of which 47 were certified in January 2014. Three stages can be identified in CTBTO infrasound data processing: automated processing at the level of single array stations, automated processing at the level of the overall global network, and interactive review by human analysts. At station level, the cross correlation-based PMCC algorithm is used for initial detection of coherent wavefronts. It produces estimates for trace velocity and azimuth of incoming wavefronts, as well as other descriptive features characterizing a signal. Detected arrivals are then categorized into potentially treaty-relevant versus noise-type signals by a rule-based expert system. This corresponds to a binary classification task at the level of station processing. In addition, incoming signals may be grouped according to their travel path in the atmosphere. The present work investigates automatic classification of infrasound arrivals by kernel-based pattern recognition methods. It aims to explore the potential of state-of-the-art machine learning methods vis-a-vis the current rule-based and task-tailored expert system. To this purpose, we first address the compilation of a representative, labeled reference benchmark dataset as a prerequisite for both classifier training and evaluation. Data representation is based on features extracted by the CTBTO's PMCC algorithm. As classifiers, we employ support vector machines (SVMs) in a supervised learning setting. Different SVM kernel functions are used and adapted through different hyperparameter optimization routines. The resulting performance is compared to several baseline classifiers. All
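
    A minimal sketch of the supervised kernel-classification stage described above: an RBF-kernel support vector machine whose kernel hyperparameters are adapted by cross-validated grid search. The synthetic features and labels stand in for the labeled reference benchmark of PMCC-derived detections.

```python
# RBF-kernel SVM with cross-validated hyperparameter adaptation (scikit-learn).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.randn(400, 5)          # stand-ins for trace velocity, azimuth, frequency, amplitude, duration
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # stand-in for signal-vs-noise labels

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1.0]}
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```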

  8. Surface estimation methods with phased-arrays for adaptive ultrasonic imaging in complex components

    NASA Astrophysics Data System (ADS)

    Robert, S.; Calmon, P.; Calvo, M.; Le Jeune, L.; Iakovleva, E.

    2015-03-01

    Immersion ultrasonic testing of structures with complex geometries may be significantly improved by using phased-arrays and specific adaptive algorithms that allow flaws to be imaged under a complex and unknown interface. In this context, this paper presents a comparative study of different Surface Estimation Methods (SEM) available in the CIVA software and used for adaptive imaging. These methods are based either on time-of-flight measurements or on image processing. We also introduce a generalized adaptive method where flaws may be fully imaged with half-skip modes. In this method, both the surface and the back-wall of a complex structure are estimated before imaging flaws.

  9. Spatial-light-modulator-based adaptive optical system for the use of multiple phase retrieval methods.

    PubMed

    Lingel, Christian; Haist, Tobias; Osten, Wolfgang

    2016-12-20

    We propose an adaptive optical setup using a spatial light modulator (SLM), which is suitable for performing different phase retrieval methods with varying optical features and without mechanical movement. By this approach, it is possible to test many different phase retrieval methods and their parameters (optical and algorithmic) using one stable setup and without hardware adaptation. We show exemplary results for the well-known transport of intensity equation (TIE) method and a new iterative adaptive phase retrieval method, where the object phase is canceled by an inverse phase written into part of the SLM. The measurement results are compared to white light interferometric measurements.

  10. Kernel component analysis using an epsilon-insensitive robust loss function.

    PubMed

    Alzate, Carlos; Suykens, Johan A K

    2008-09-01

    Kernel principal component analysis (PCA) is a technique to perform feature extraction in a high-dimensional feature space, which is nonlinearly related to the original input space. The kernel PCA formulation corresponds to an eigendecomposition of the kernel matrix: eigenvectors with large eigenvalues correspond to the principal components in the feature space. Starting from the least squares support vector machine (LS-SVM) formulation of kernel PCA, we extend it to a generalized form of kernel component analysis (KCA) with a general underlying loss function made explicit. For classical kernel PCA, the underlying loss function is L(2). In this generalized form, one can also plug in other loss functions. In the context of robust statistics, it is known that the L(2) loss function is not robust because its influence function is not bounded. Therefore, outliers can skew the solution from the desired one. Another issue with kernel PCA is the lack of sparseness: the principal components are dense expansions in terms of kernel functions. In this paper, we introduce robustness and sparseness into kernel component analysis by using an epsilon-insensitive robust loss function. We propose two different algorithms. The first method solves a set of nonlinear equations with kernel PCA as starting points. The second method uses a simplified iterative weighting procedure that leads to solving a sequence of generalized eigenvalue problems. Simulations with toy and real-life data show improvements in terms of robustness together with a sparse representation.
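
    For reference, a minimal sketch of classical (L2) kernel PCA, the formulation that the epsilon-insensitive robust variant generalizes: center the kernel matrix, take its leading eigenvectors, and project. The RBF kernel and toy data are illustrative assumptions.

```python
# Classical kernel PCA via eigendecomposition of the centered kernel (Gram) matrix.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kernel_pca(X, n_components=2, gamma=0.5):
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one        # double centering in feature space
    evals, evecs = np.linalg.eigh(Kc)
    order = np.argsort(evals)[::-1][:n_components]    # keep the largest eigenvalues
    alphas = evecs[:, order] / np.sqrt(np.maximum(evals[order], 1e-12))
    return Kc @ alphas                                # projections onto principal components

X = np.vstack([np.random.randn(100, 2) * 0.2 + c for c in ([0, 0], [2, 2])])
Z = kernel_pca(X)
```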

  11. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool—nonlinear mode decomposition (NMD)—which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques—which, together with the adaptive choice of their parameters, make it extremely noise robust—and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  12. Nonlinear mode decomposition: a noise-robust, adaptive decomposition method.

    PubMed

    Iatsenko, Dmytro; McClintock, Peter V E; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool-nonlinear mode decomposition (NMD)-which decomposes a given signal into a set of physically meaningful oscillations for any wave form, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques-which, together with the adaptive choice of their parameters, make it extremely noise robust-and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary matlab codes for running NMD are freely available for download.

  13. Comparing classification methods for longitudinal fMRI studies.

    PubMed

    Schmah, Tanya; Yourganov, Grigori; Zemel, Richard S; Hinton, Geoffrey E; Small, Steven L; Strother, Stephen C

    2010-11-01

    We compare 10 methods of classifying fMRI volumes by applying them to data from a longitudinal study of stroke recovery: adaptive Fisher's linear and quadratic discriminant; gaussian naive Bayes; support vector machines with linear, quadratic, and radial basis function (RBF) kernels; logistic regression; two novel methods based on pairs of restricted Boltzmann machines (RBM); and K-nearest neighbors. All methods were tested on three binary classification tasks, and their out-of-sample classification accuracies are compared. The relative performance of the methods varies considerably across subjects and classification tasks. The best overall performers were adaptive quadratic discriminant, support vector machines with RBF kernels, and generatively trained pairs of RBMs.

  14. Investigating Item Exposure Control Methods in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Ozturk, Nagihan Boztunc; Dogan, Nuri

    2015-01-01

    This study aims to investigate the effects of item exposure control methods on measurement precision and on test security under various item selection methods and item pool characteristics. In this study, the Randomesque (with item group sizes of 5 and 10), Sympson-Hetter, and Fade-Away methods were used as item exposure control methods. Moreover,…

  15. An examination of an adapter method for measuring the vibration transmitted to the human arms.

    PubMed

    Xu, Xueyan S; Dong, Ren G; Welcome, Daniel E; Warren, Christopher; McDowell, Thomas W

    2015-09-01

    The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system.

  16. An examination of an adapter method for measuring the vibration transmitted to the human arms

    PubMed Central

    Xu, Xueyan S.; Dong, Ren G.; Welcome, Daniel E.; Warren, Christopher; McDowell, Thomas W.

    2016-01-01

    The objective of this study is to evaluate an adapter method for measuring the vibration on the human arms. Four instrumented adapters with different weights were used to measure the vibration transmitted to the wrist, forearm, and upper arm of each subject. Each adapter was attached at each location on the subjects using an elastic cloth wrap. Two laser vibrometers were also used to measure the transmitted vibration at each location to evaluate the validity of the adapter method. The apparent mass at the palm of the hand along the forearm direction was also measured to enhance the evaluation. This study found that the adapter and laser-measured transmissibility spectra were comparable with some systematic differences. While increasing the adapter mass reduced the resonant frequency at the measurement location, increasing the tightness of the adapter attachment increased the resonant frequency. However, the use of lightweight (≤15 g) adapters under medium attachment tightness did not change the basic trends of the transmissibility spectrum. The resonant features observed in the transmissibility spectra were also correlated with those observed in the apparent mass spectra. Because the local coordinate systems of the adapters may be significantly misaligned relative to the global coordinates of the vibration test systems, large errors were observed for the adapter-measured transmissibility in some individual orthogonal directions. This study, however, also demonstrated that the misalignment issue can be resolved by either using the total vibration transmissibility or by measuring the misalignment angles to correct the errors. Therefore, the adapter method is acceptable for understanding the basic characteristics of the vibration transmission in the human arms, and the adapter-measured data are acceptable for approximately modeling the system. PMID:26834309

  17. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  18. Resummed memory kernels in generalized system-bath master equations.

    PubMed

    Mavros, Michael G; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
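
    A generic illustration of Padé resummation of a truncated series (not the spin-boson memory kernel itself): the [2/2] approximant built from the first few Taylor coefficients of exp(-x) remains finite and close to the exact function where the raw fourth-order polynomial diverges.

```python
# Pade resummation of a truncated Taylor series, as a stand-in for kernel resummation.
import numpy as np
from scipy.interpolate import pade

coeffs = [1, -1, 0.5, -1/6, 1/24]          # Taylor coefficients of exp(-x) through 4th order
p, q = pade(coeffs, 2)                     # [2/2] Pade approximant: p(x)/q(x)

x = np.linspace(0, 6, 7)
taylor = sum(c * x**k for k, c in enumerate(coeffs))
resummed = p(x) / q(x)
exact = np.exp(-x)
print(np.c_[x, taylor, resummed, exact])   # the resummed values stay finite and track exp(-x) far longer
```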

  19. Resummed memory kernels in generalized system-bath master equations

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  20. Resummed memory kernels in generalized system-bath master equations

    SciTech Connect

    Mavros, Michael G.; Van Voorhis, Troy

    2014-08-07

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  1. Multiple Kernel Learning for Visual Object Recognition: A Review.

    PubMed

    Bucak, Serhat S; Rong Jin; Jain, Anil K

    2014-07-01

    Multiple kernel learning (MKL) is a principled approach for selecting and combining kernels for a given recognition task. A number of studies have shown that MKL is a useful tool for object recognition, where each image is represented by multiple sets of features and MKL is applied to combine different feature sets. We review the state-of-the-art for MKL, including different formulations and algorithms for solving the related optimization problems, with the focus on their applications to object recognition. One dilemma faced by practitioners interested in using MKL for object recognition is that different studies often provide conflicting results about the effectiveness and efficiency of MKL. To resolve this, we conduct extensive experiments on standard datasets to evaluate various approaches to MKL for object recognition. We argue that the seemingly contradictory conclusions offered by studies are due to different experimental setups. The conclusions of our study are: (i) given a sufficient number of training examples and feature/kernel types, MKL is more effective for object recognition than simple kernel combination (e.g., choosing the best performing kernel or average of kernels); and (ii) among the various approaches proposed for MKL, the sequential minimal optimization, semi-infinite programming, and level method based ones are computationally most efficient.
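
    A minimal sketch of the simple kernel-combination baselines mentioned above: train an SVM on each precomputed kernel and keep the best, or train on the unweighted average of the kernel matrices. The data and kernel choices are illustrative assumptions.

```python
# Baseline kernel combination: best single kernel vs. unweighted average of kernels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, linear_kernel

X = np.random.randn(300, 10)
y = (X[:, 0] * X[:, 1] > 0).astype(int)

kernels = [linear_kernel(X), polynomial_kernel(X, degree=2), rbf_kernel(X, gamma=0.1)]
svc = SVC(kernel="precomputed")

scores = [cross_val_score(svc, K, y, cv=5).mean() for K in kernels]
avg_score = cross_val_score(svc, np.mean(kernels, axis=0), y, cv=5).mean()
print("best single kernel:", max(scores), " average of kernels:", avg_score)
```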

  2. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters, so suitable parameter selection is an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines the appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we have utilized the differential evolution algorithm (DEA). The new technique overcomes disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. The experiments for target detection applications on hyperspectral images verify the effectiveness of the proposed method.
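
    A minimal sketch of kernel-parameter learning with a differential evolution algorithm. The objective below is cross-validated SVM accuracy rather than the KFKT discrimination measure used in the article, but the optimization pattern is the same; the bounds and data are illustrative assumptions.

```python
# Differential evolution over kernel hyperparameters, scored by cross-validation.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.randn(200, 8)
y = (np.sum(X[:, :2]**2, axis=1) > 1.5).astype(int)

def objective(params):
    log_gamma, log_C = params
    clf = SVC(kernel="rbf", gamma=10**log_gamma, C=10**log_C)
    return -cross_val_score(clf, X, y, cv=3).mean()   # minimize negative accuracy

result = differential_evolution(objective, bounds=[(-3, 2), (-2, 3)], maxiter=20, seed=0)
print("best log10(gamma), log10(C):", result.x, " CV accuracy:", -result.fun)
```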

  3. Coupled kernel embedding for low resolution face image recognition.

    PubMed

    Ren, Chuan-Xian; Dai, Dao-Qing; Yan, Hong

    2012-08-01

    Practical video scene and face recognition systems are sometimes confronted with low-resolution (LR) images. The faces may be very small even if the video is clear, thus it is difficult to directly measure the similarity between the faces and the high-resolution (HR) training samples. Traditional face recognition methods based on super-resolution (SR) usually have limited performance because the target of SR may not be consistent with that of classification, and time-consuming SR algorithms are not suitable for real-time applications. In this paper, a new feature extraction method called Coupled Kernel Embedding (CKE) is proposed for LR face recognition without any SR preprocessing. In this method, the final kernel matrix is constructed by concatenating two individual kernel matrices in the diagonal direction, and the (semi-)positive definite properties are preserved for optimization. CKE addresses the problem of comparing multi-modal data that are difficult for conventional methods in practice due to the lack of an efficient similarity measure. Particularly, different kernel types (e.g., linear, Gaussian, polynomial) can be integrated into a unified optimization objective, which cannot be achieved by simple linear methods. CKE solves this problem by minimizing the dissimilarities captured by their kernel Gram matrices in the low- and high-resolution spaces. In the implementation, the nonlinear objective function is minimized by a generalized eigenvalue decomposition. Experiments on benchmark and real databases show that our CKE method indeed improves the recognition performance.

  4. Classification of corn kernels contaminated with aflatoxins using fluorescence and reflectance hyperspectral images analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Fengle; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert; Bhatnagar, Deepak; Cleveland, Thomas

    2015-05-01

    Aflatoxins are secondary metabolites produced by certain fungal species of the Aspergillus genus. Aflatoxin contamination remains a problem in agricultural products due to its toxic and carcinogenic properties. Conventional chemical methods for aflatoxin detection are time-consuming and destructive. This study employed fluorescence and reflectance visible near-infrared (VNIR) hyperspectral images to classify aflatoxin contaminated corn kernels rapidly and non-destructively. Corn ears were artificially inoculated in the field with toxigenic A. flavus spores at the early dough stage of kernel development. After harvest, a total of 300 kernels were collected from the inoculated ears. Fluorescence hyperspectral imagery with UV excitation and reflectance hyperspectral imagery with halogen illumination were acquired on both endosperm and germ sides of kernels. All kernels were then subjected to chemical analysis individually to determine aflatoxin concentrations. A region of interest (ROI) was created for each kernel to extract averaged spectra. Compared with healthy kernels, fluorescence spectral peaks for contaminated kernels shifted to longer wavelengths with lower intensity, and reflectance values for contaminated kernels were lower with a different spectral shape in 700-800 nm region. Principal component analysis was applied for data compression before classifying kernels into contaminated and healthy based on a 20 ppb threshold utilizing the K-nearest neighbors algorithm. The best overall accuracy achieved was 92.67% for germ side in the fluorescence data analysis. The germ side generally performed better than endosperm side. Fluorescence and reflectance image data achieved similar accuracy.
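
    A minimal sketch of the classification stage described above: spectra are compressed with principal component analysis and classified with K-nearest neighbors against labels defined by the 20 ppb threshold. The synthetic spectra below stand in for the ROI-averaged hyperspectral data.

```python
# PCA compression followed by K-nearest-neighbor classification of kernel spectra.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

wavelengths = np.linspace(400, 1000, 150)
n = 300
aflatoxin_ppb = np.random.lognormal(mean=2.0, sigma=1.5, size=n)
labels = (aflatoxin_ppb > 20).astype(int)                     # 20 ppb contamination threshold

# Synthetic spectra: contaminated kernels get a shifted, lower-intensity response.
spectra = np.exp(-((wavelengths - (550 + 30 * labels[:, None])) / 80)**2)
spectra *= (1.0 - 0.3 * labels[:, None])
spectra += 0.02 * np.random.randn(n, wavelengths.size)

X_train, X_test, y_train, y_test = train_test_split(spectra, labels, test_size=0.3, random_state=0)
model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print("overall accuracy:", model.score(X_test, y_test))
```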

  5. General-form 3-3-3 interpolation kernel and its simplified frequency-response derivation

    NASA Astrophysics Data System (ADS)

    Deng, Tian-Bo

    2016-11-01

    An interpolation kernel is required in a wide variety of signal processing applications such as image interpolation and timing adjustment in digital communications. This article presents a general-form interpolation kernel called the 3-3-3 interpolation kernel and derives its frequency response in closed form by using a simple derivation method. This closed-form formula is preliminary to designing various 3-3-3 interpolation kernels subject to a set of design constraints. The 3-3-3 interpolation kernel is formed through utilising the third-degree piecewise polynomials, and it is an even-symmetric function. Thus, it will suffice to consider only its right-hand side when deriving its frequency response. Since the right-hand side of the interpolation kernel contains three piecewise polynomials of the third degree, i.e. the degrees of the three piecewise polynomials are (3,3,3), we call it the 3-3-3 interpolation kernel. Once the general-form frequency-response formula is derived, we can systematically formulate the design of various 3-3-3 interpolation kernels subject to a set of design constraints, which are targeted for different interpolation applications. Therefore, the closed-form frequency-response expression is preliminary to the optimal design of various 3-3-3 interpolation kernels. We will use an example to show the optimal design of a 3-3-3 interpolation kernel based on the closed-form frequency-response expression.
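
    For illustration, a related and widely used even-symmetric piecewise-cubic kernel, the two-piece Keys cubic (the 3-3-3 kernel of the article has three cubic pieces per side), with its frequency response evaluated numerically; the article instead derives the response in closed form.

```python
# A piecewise-cubic interpolation kernel and its numerically evaluated frequency response.
import numpy as np

def keys_cubic(x, a=-0.5):
    """Even-symmetric piecewise-cubic interpolation kernel with support [-2, 2]."""
    x = np.abs(x)
    out = np.zeros_like(x)
    inner = x <= 1
    outer = (x > 1) & (x < 2)
    out[inner] = (a + 2) * x[inner]**3 - (a + 3) * x[inner]**2 + 1
    out[outer] = a * x[outer]**3 - 5*a * x[outer]**2 + 8*a * x[outer] - 4*a
    return out

# Frequency response via the FFT of a finely sampled kernel.
dt = 1e-3
t = np.arange(-2, 2, dt)
h = keys_cubic(t)
H = np.abs(np.fft.rfft(h)) * dt
f = np.fft.rfftfreq(t.size, d=dt)
# H is close to 1 near f = 0 and falls off well before the first replica at f = 1.
```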

  6. Determining the parameters of the fractional exponential heredity kernels of linear viscoelastic materials

    NASA Astrophysics Data System (ADS)

    Golub, V. P.; Fernati, P. V.; Lyashenko, Ya. G.

    2008-09-01

    The parameters of the fractional exponential creep and relaxation kernels of linear viscoelastic materials are determined. Methods that approximate the kernel by using the Mittag-Leffler function, the Laplace-Carson transform, and direct approximation of the creep function by the original equation are analyzed. The parameters of fractional exponential kernels are determined for aramid fibers, parapolyamide fibers, glass-reinforced plastic, and polymer concrete. It is shown that the kernel parameters calculated through the direct approximation of the creep function provide the best agreement between theory and experiment. The methods are experimentally validated for constant-stress and variable-stress loading in the modes of additional loading and complete unloading.

  7. Minimum classification error-based weighted support vector machine kernels for speaker verification.

    PubMed

    Suh, Youngjoo; Kim, Hoirin

    2013-04-01

    Support vector machines (SVMs) have been proved to be an effective approach to speaker verification. An appropriate selection of the kernel function is a key issue in SVM-based classification. In this letter, a new SVM-based speaker verification method utilizing weighted kernels in the Gaussian mixture model supervector space is proposed. The weighted kernels are derived by using the discriminative training approach, which minimizes speaker verification errors. Experiments performed on the NIST 2008 speaker recognition evaluation task showed that the proposed approach provides substantially improved performance over the baseline kernel-based method.

  8. Identification of Damaged Wheat Kernels and Cracked-Shell Hazelnuts with Impact Acoustics Time-Frequency Patterns

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A new adaptive time-frequency (t-f) analysis and classification procedure is applied to impact acoustic signals for detecting hazelnuts with cracked shells and three types of damaged wheat kernels. Kernels were dropped onto a steel plate, and the resulting impact acoustic signals were recorded with ...

  9. Adaptive classifier for steel strip surface defects

    NASA Astrophysics Data System (ADS)

    Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li

    2017-01-01

    Surface defect detection systems have been receiving increased attention for their precision, speed, and low cost. One of the main challenges is reacting to accuracy deterioration over time caused by aging equipment and changing processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) that updates the model with small samples to make it adaptive to accuracy deterioration. Firstly, abundant features are introduced to cover a wide range of information about the defects. Secondly, we construct a series of SVMs on random subspaces of the features. Then, a Bayes classifier is trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we propose a method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms; the results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.
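
    A minimal sketch of the random-subspace ensemble idea: several SVMs are trained on random feature subsets, and a light fusion layer on top can be refit from a small new sample when the process drifts. Logistic regression stands in here for the Bayes fusion kernel, and the data are synthetic.

```python
# Random-subspace SVM ensemble with a small, refittable fusion layer.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
y = (X[:, 0] + 0.5 * X[:, 5] - X[:, 12] > 0).astype(int)

subspaces = [rng.choice(30, size=10, replace=False) for _ in range(8)]
base = [SVC(probability=True).fit(X[:, s], y) for s in subspaces]

def ensemble_scores(data):
    """Stack each base SVM's probability output into one score matrix."""
    return np.column_stack([clf.predict_proba(data[:, s])[:, 1]
                            for clf, s in zip(base, subspaces)])

fusion = LogisticRegression().fit(ensemble_scores(X), y)

# When the process drifts, only the small fusion model is refit on a handful of new samples.
X_new = rng.normal(size=(40, 30)) + 0.3
y_new = (X_new[:, 0] + 0.5 * X_new[:, 5] - X_new[:, 12] > 0).astype(int)
fusion = LogisticRegression().fit(ensemble_scores(X_new), y_new)
```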

  10. A new and efficient method to obtain benzalkonium chloride adapted cells of Listeria monocytogenes.

    PubMed

    Saá Ibusquiza, Paula; Herrera, Juan J R; Vázquez-Sánchez, Daniel; Parada, Adelaida; Cabo, Marta L

    2012-10-01

    A new method to obtain benzalkonium chloride (BAC) adapted L. monocytogenes cells was developed. A factorial design was used to assess the effects of the inoculum size and BAC concentration on the adaptation (measured in terms of the lethal dose 50, LD50) of 6 strains of Listeria monocytogenes after only one exposure. The proposed method could be applied successfully to the L. monocytogenes strains with higher adaptive capacity to BAC. In those cases, a significant empirical equation was obtained showing a positive effect of the inoculum size and a positive interaction between the effects of BAC and inoculum size on the level of adaptation achieved. However, a slight negative effect of BAC, due to the biocide, was also significant. The proposed method improves the classical method based on successive stationary phase cultures in sublethal BAC concentrations because it is less time-consuming and more effective. For the laboratory strain L. monocytogenes 5873, by applying the new procedure it was possible to increase BAC-adaptation 3.69-fold in only 33 h, whereas with the classical procedure a 2.61-fold increase was reached after 5 days. Moreover, with the new method, the maximum level of adaptation was determined for all the strains, surprisingly reaching almost the same concentration of BAC (mg/l) for 5 out of 6 strains. Thus, a good reference for establishing the effective concentrations of biocides to ensure the maximum level of adaptation was also determined.

  11. Analysis of modified SMI method for adaptive array weight control

    NASA Technical Reports Server (NTRS)

    Dilsavor, R. L.; Moses, R. L.

    1989-01-01

    An adaptive array is applied to the problem of receiving a desired signal in the presence of weak interference signals which need to be suppressed. A modification, suggested by Gupta, of the sample matrix inversion (SMI) algorithm controls the array weights. In the modified SMI algorithm, interference suppression is increased by subtracting a fraction F of the noise power from the diagonal elements of the estimated covariance matrix. Given the true covariance matrix and the desired signal direction, the modified algorithm is shown to maximize a well-defined, intuitive output power ratio criterion. Expressions are derived for the expected value and variance of the array weights and output powers as a function of the fraction F and the number of snapshots used in the covariance matrix estimate. These expressions are compared with computer simulation and good agreement is found. A trade-off is found to exist between the desired level of interference suppression and the number of snapshots required in order to achieve that level with some certainty. The removal of noise eigenvectors from the covariance matrix inverse is also discussed with respect to this application. Finally, the type and severity of errors which occur in the covariance matrix estimate are characterized through simulation.
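
    A minimal sketch of the modified SMI weight computation: estimate the covariance from snapshots, subtract a fraction F of the noise power from its diagonal, and solve for the weights using the desired-signal steering vector. The array geometry, value of F, and snapshot count below are illustrative assumptions.

```python
# Modified SMI: sample covariance with a fraction of the noise power removed from the diagonal.
import numpy as np

def modified_smi_weights(snapshots, steering, noise_power, F):
    """snapshots: (num_elements, K) complex array; steering: desired-signal steering vector."""
    K = snapshots.shape[1]
    R_hat = snapshots @ snapshots.conj().T / K            # sample covariance (SMI estimate)
    R_mod = R_hat - F * noise_power * np.eye(R_hat.shape[0])
    w = np.linalg.solve(R_mod, steering)
    return w / (steering.conj() @ w)                      # normalize so w^H s = 1

# Toy 8-element array: desired signal at broadside plus one weak interferer at 30 degrees.
M, K = 8, 500
n = np.arange(M)
s_desired = np.exp(1j * np.pi * n * np.sin(0.0))
s_interf = np.exp(1j * np.pi * n * np.sin(np.deg2rad(30)))
noise_power = 1.0
X = (np.sqrt(0.1) * s_interf[:, None] * np.exp(1j * 2 * np.pi * np.random.rand(1, K))
     + np.sqrt(noise_power / 2) * (np.random.randn(M, K) + 1j * np.random.randn(M, K)))
w = modified_smi_weights(X, s_desired, noise_power, F=0.5)
```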

  12. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  13. Mixed Methods in Intervention Research: Theory to Adaptation

    ERIC Educational Resources Information Center

    Nastasi, Bonnie K.; Hitchcock, John; Sarkar, Sreeroopa; Burkholder, Gary; Varjas, Kristen; Jayasena, Asoka

    2007-01-01

    The purpose of this article is to demonstrate the application of mixed methods research designs to multiyear programmatic research and development projects whose goals include integration of cultural specificity when generating or translating evidence-based practices. The authors propose a set of five mixed methods designs related to different…

  14. Adaptive Discontinuous Evolution Galerkin Method for Dry Atmospheric Flow

    DTIC Science & Technology

    2013-04-02

    In the adaptive discontinuous evolution Galerkin method for dry atmospheric convection, the flux integration within the discontinuous Galerkin method is realized by an evolution Galerkin operator instead of a standard one-dimensional approximate Riemann solver. Comparisons with the standard one-dimensional approximate Riemann solver used for the flux integration demonstrate better stability, accuracy, and reliability of the method.

  15. RTOS kernel in portable electrocardiograph

    NASA Astrophysics Data System (ADS)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  16. Speckle reduction in optical coherence tomography by adaptive total variation method

    NASA Astrophysics Data System (ADS)

    Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun

    2015-12-01

    An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
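
    The abstract gives no formulas, so the following Python sketch only illustrates the general idea of speckle-adaptive total variation restoration: a gradient-descent TV denoiser whose data-fidelity weight is tied to an estimated noise variance. The weighting rule, step size, and synthetic speckle model are assumptions, not the authors' implementation.

    import numpy as np

    def adaptive_tv_denoise(image, noise_var, n_iter=200, step=0.1, eps=1e-6):
        """Gradient descent on  lambda/2 * ||u - image||^2 + TV(u),
        with lambda chosen inversely proportional to the estimated noise variance."""
        lam = 1.0 / max(noise_var, eps)     # stronger smoothing for noisier images
        u = image.astype(float).copy()
        for _ in range(n_iter):
            # forward differences of the current estimate
            ux = np.diff(u, axis=1, append=u[:, -1:])
            uy = np.diff(u, axis=0, append=u[-1:, :])
            mag = np.sqrt(ux**2 + uy**2 + eps)
            px, py = ux / mag, uy / mag
            # divergence of (px, py) via backward differences (periodic wrap)
            div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
            u -= step * (lam * (u - image) - div)
        return u

    # Usage: denoise a synthetic speckled square and compare interior variances.
    rng = np.random.default_rng(1)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
    restored = adaptive_tv_denoise(speckled, noise_var=speckled.var())
    print(round(float(speckled[16:48, 16:48].var()), 3),
          round(float(restored[16:48, 16:48].var()), 3))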

  17. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
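
    As a toy illustration of predictor-corrector path following with the corrector constrained orthogonal to an approximate tangent, the Python sketch below augments the underdetermined Newton system directly and solves it with dense linear algebra on a one-equation curve; the work described above, imposing the constraint in a way that suits Krylov solvers and preserves conditioning, is not reproduced here. All names are illustrative assumptions.

    import numpy as np

    def follow_path(h, jac, y0, tangent0, step=0.1, n_steps=20, n_corr=5):
        """Track h(y) = 0 with y = (x, lambda), starting near y0."""
        y, t = np.array(y0, float), np.array(tangent0, float)
        path = [y.copy()]
        for _ in range(n_steps):
            z = y + step * t                           # predictor along the tangent
            for _ in range(n_corr):                    # corrector iterations
                a = np.vstack([jac(z), t])             # augment with the tangent row
                rhs = np.concatenate([-h(z), [0.0]])   # orthogonality: t . dz = 0
                dz, *_ = np.linalg.lstsq(a, rhs, rcond=None)
                z = z + dz
            # new tangent = null-space direction of the Jacobian at the new point
            _, _, vt = np.linalg.svd(jac(z))
            t_new = vt[-1]
            t = t_new if t_new @ t > 0 else -t_new     # keep the orientation
            y = z
            path.append(y.copy())
        return np.array(path)

    # Toy solution curve: the unit circle x^2 + lambda^2 = 1.
    h = lambda y: np.array([y[0]**2 + y[1]**2 - 1.0])
    jac = lambda y: np.array([[2.0 * y[0], 2.0 * y[1]]])
    pts = follow_path(h, jac, y0=[1.0, 0.0], tangent0=[0.0, 1.0])
    print(np.round(pts[:5], 3))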

  18. Antioxidant and antimicrobial activities of bitter and sweet apricot (Prunus armeniaca L.) kernels.

    PubMed

    Yiğit, D; Yiğit, N; Mavi, A

    2009-04-01

    The present study describes the in vitro antimicrobial and antioxidant activity of methanol and water extracts of sweet and bitter apricot (Prunus armeniaca L.) kernels. The antioxidant properties of apricot kernels were evaluated by determining radical scavenging power, lipid peroxidation inhibition activity and total phenol content measured with a DPPH test, the thiocyanate method and the Folin method, respectively. In contrast to extracts of the bitter kernels, both the water and methanol extracts of sweet kernels have antioxidant potential. The highest percent inhibition of lipid peroxidation (69%) and total phenolic content (7.9 +/- 0.2 microg/mL) were detected in the methanol extract of sweet kernels (Hasanbey) and in the water extract of the same cultivar, respectively. The antimicrobial activities of the above extracts were also tested against human pathogenic microorganisms using a disc-diffusion method, and the minimal inhibitory concentration (MIC) values of each active extract were determined. The most effective antibacterial activity was observed in the methanol and water extracts of bitter kernels and in the methanol extract of sweet kernels against the Gram-positive bacteria Staphylococcus aureus. Additionally, the methanol extracts of the bitter kernels were very potent against the Gram-negative bacteria Escherichia coli (0.312 mg/mL MIC value). Significant anti-candida activity was also observed with the methanol extract of bitter apricot kernels against Candida albicans, consisting of a 14 mm in diameter of inhibition zone and a 0.625 mg/mL MIC value.

  19. MONITORING METHODS ADAPTABLE TO VAPOR INTRUSION MONITORING - USEPA COMPENDIUM METHODS TO-15, TO-15 SUPPLEMENT (DRAFT), AND TO-17

    EPA Science Inventory

    USEPA ambient air monitoring methods for volatile organic compounds (VOCs) using specially-prepared canisters and solid adsorbents are directly adaptable to monitoring for vapors in the indoor environment. The draft Method TO-15 Supplement, an extension of the USEPA Method TO-15,...

  20. Adapting Western research methods to indigenous ways of knowing.

    PubMed

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  1. Automatic multirate methods for ordinary differential equations. [Adaptive time steps

    SciTech Connect

    Gear, C.W.

    1980-01-01

    A study is made of the application of integration methods in which different step sizes are used for different members of a system of equations. Such methods can result in savings if the cost of derivative evaluation is high or if a system is sparse; however, the estimation and control of errors is very difficult and can lead to high overheads. Three approaches are discussed, and it is shown that the least intuitive is the most promising. 2 figures.
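
    Gear's error-controlled multirate schemes are not spelled out in the abstract, so the Python sketch below only conveys the basic multirate idea with forward Euler: a slow component advanced with a large step and a fast component advanced with several substeps against an interpolated slow variable. The step sizes, the partitioning, and the test system are assumptions.

    import numpy as np

    def multirate_euler(f_slow, f_fast, y_slow0, y_fast0, t_end, h_slow, n_sub):
        t, ys, yf = 0.0, y_slow0, y_fast0
        history = [(t, ys, yf)]
        while t < t_end - 1e-12:
            ys_new = ys + h_slow * f_slow(t, ys, yf)         # one large slow step
            h_fast = h_slow / n_sub
            for k in range(n_sub):                           # several small fast steps
                frac = (k + 1) / n_sub
                ys_interp = (1 - frac) * ys + frac * ys_new  # interpolate the slow variable
                yf = yf + h_fast * f_fast(t + k * h_fast, ys_interp, yf)
            ys, t = ys_new, t + h_slow
            history.append((t, ys, yf))
        return history

    # Usage: a slowly decaying variable coupled to a fast relaxation.
    f_slow = lambda t, ys, yf: -0.1 * ys
    f_fast = lambda t, ys, yf: -50.0 * (yf - ys)
    out = multirate_euler(f_slow, f_fast, 1.0, 0.0, t_end=1.0, h_slow=0.05, n_sub=10)
    print(round(out[-1][1], 4), round(out[-1][2], 4))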

  2. Systems and Methods for Parameter Dependent Riccati Equation Approaches to Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kim, Kilsoo (Inventor); Yucelen, Tansel (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    Systems and methods for adaptive control are disclosed. The systems and methods can control uncertain dynamic systems. The control system can comprise a controller that employs a parameter dependent Riccati equation. The controller can produce a response that causes the state of the system to remain bounded. The control system can control both minimum phase and non-minimum phase systems. The control system can augment an existing, non-adaptive control design without modifying the gains employed in that design. The control system can also avoid the use of high gains in both the observer design and the adaptive control law.
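
    The patent abstract gives no equations, so the Python sketch below shows only the standard building block such controllers rest on: solving a continuous-time algebraic Riccati equation for a stabilizing state-feedback gain. The plant matrices and weights are invented for illustration and are not from the patent.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Double-integrator plant x' = A x + B u (an assumed example system).
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])   # state weighting
    R = np.array([[1.0]])      # control weighting

    # Solve A'P + PA - P B R^{-1} B' P + Q = 0 and form the feedback gain K.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)

    # The closed-loop matrix A - BK should be Hurwitz (eigenvalues in the left half-plane).
    print(np.round(K, 3))
    print(np.round(np.linalg.eigvals(A - B @ K), 3))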

  3. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve☆

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  4. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics that can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive cost of inverting error covariance matrices of products of innovation processes at different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is indeed comparable to that of a method recently proposed by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry-Sauer method on the L-96 example.

  5. Kernels, Degrees of Freedom, and Power Properties of Quadratic Distance Goodness-of-Fit Tests

    PubMed Central

    Lindsay, Bruce G.; Markatou, Marianthi; Ray, Surajit

    2014-01-01

    In this article, we study the power properties of quadratic-distance-based goodness-of-fit tests. First, we introduce the concept of a root kernel and discuss the considerations that enter the selection of this kernel. We derive an easy to use normal approximation to the power of quadratic distance goodness-of-fit tests and base the construction of a noncentrality index, an analogue of the traditional noncentrality parameter, on it. This leads to a method akin to the Neyman-Pearson lemma for constructing optimal kernels for specific alternatives. We then introduce a midpower analysis as a device for choosing optimal degrees of freedom for a family of alternatives of interest. Finally, we introduce a new diffusion kernel, called the Pearson-normal kernel, and study the extent to which the normal approximation to the power of tests based on this kernel is valid. Supplementary materials for this article are available online. PMID:24764609

  6. Model-based online learning with kernels.

    PubMed

    Li, Guoqi; Wen, Changyun; Li, Zheng Guo; Zhang, Aimin; Yang, Feng; Mao, Kezhi

    2013-03-01

    New optimization models and algorithms for online learning with kernels (OLK) in classification, regression, and novelty detection are proposed in a reproducing kernel Hilbert space. Unlike the stochastic gradient descent algorithm, called the naive online Reg minimization algorithm (NORMA), OLK algorithms are obtained by solving a constrained optimization problem based on the proposed models. By exploiting the techniques of the Lagrange dual problem, as in Vapnik's support vector machine (SVM), the solution of the optimization problem can be obtained iteratively, and the iteration process is similar to that of NORMA. This further strengthens the foundation of OLK and enriches the research area of SVM. We also apply the obtained OLK algorithms to problems in classification, regression, and novelty detection, including real-time background subtraction, to show their effectiveness. The experimental results for both classification and regression illustrate that the accuracy of OLK algorithms is comparable with traditional SVM-based algorithms, such as SVM and least squares SVM (LS-SVM), and with state-of-the-art algorithms such as the kernel recursive least squares (KRLS) and projectron methods, while it is slightly higher than that of NORMA. On the other hand, the computational cost of the OLK algorithm is comparable with or slightly lower than that of existing online methods, such as the above-mentioned NORMA, KRLS, and projectron methods, but much lower than that of SVM-based algorithms. In addition, unlike SVM and LS-SVM, OLK algorithms can be applied to non-stationary problems. The applicability of OLK in novelty detection is also illustrated by simulation results.
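
    The OLK algorithms themselves solve a constrained optimization at each step and are not specified in the abstract; as a point of reference, the Python sketch below implements the NORMA-style stochastic-gradient baseline that the paper compares against, for online kernel regression with a Gaussian kernel. All class names and parameter values are assumptions.

    import numpy as np

    class OnlineKernelRegressor:
        def __init__(self, gamma=1.0, eta=0.5, lam=0.01):
            self.gamma, self.eta, self.lam = gamma, eta, lam
            self.centers, self.alphas = [], []

        def _k(self, x, y):
            return np.exp(-self.gamma * np.sum((x - y) ** 2))

        def predict(self, x):
            return sum(a * self._k(c, x) for c, a in zip(self.centers, self.alphas))

        def update(self, x, y):
            err = self.predict(x) - y
            # shrink old coefficients (regularization) and add a new expansion term
            self.alphas = [(1 - self.eta * self.lam) * a for a in self.alphas]
            self.alphas.append(-self.eta * err)   # squared-loss gradient step
            self.centers.append(np.asarray(x, float))

    # Usage: learn y = sin(x) from a stream of noisy samples.
    rng = np.random.default_rng(2)
    model = OnlineKernelRegressor(gamma=2.0)
    for _ in range(300):
        x = rng.uniform(-3, 3, size=1)
        model.update(x, np.sin(x[0]) + 0.05 * rng.standard_normal())
    print(round(model.predict(np.array([1.0])), 3), round(np.sin(1.0), 3))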

  7. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world application. Based on SRC, Yang et al. [10] devised a SRC steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle the data with highly nonlinear distribution. Kernel sparse representation-based classifier (KSRC) is a non-linear extension of SRC and can remedy the drawback of SRC. KSRC requires the use of a predetermined kernel function and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC only considers the within-class reconstruction residual while ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then we use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low dimension subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  8. Lifting kernel-based sprite codec

    NASA Astrophysics Data System (ADS)

    Dasu, Aravind R.; Panchanathan, Sethuraman

    2000-12-01

    The International Standards Organization (ISO) has proposed a family of standards for compression of image and video sequences, including the JPEG, MPEG-1 and MPEG-2. The latest MPEG-4 standard has many new dimensions to coding and manipulation of visual content. A video sequence usually contains a background object and many foreground objects. Portions of this background may not be visible in certain frames due to the occlusion of the foreground objects or camera motion. MPEG-4 introduces the novel concepts of Video Object Planes (VOPs) and Sprites. A VOP is a visual representation of real world objects with shapes that need not be rectangular. Sprite is a large image composed of pixels belonging to a video object visible throughout a video segment. Since a sprite contains all parts of the background that were at least visible once, it can be used for direct reconstruction of the background Video Object Plane (VOP). Sprite reconstruction is dependent on the mode in which it is transmitted. In the Static sprite mode, the entire sprite is decoded as an Intra VOP before decoding the individual VOPs. Since sprites consist of the information needed to display multiple frames of a video sequence, they are typically much larger than a single frame of video. Therefore a static sprite can be considered as a large static image. In this paper, a novel solution to address the problem of spatial scalability has been proposed, where the sprite is encoded in Discrete Wavelet Transform (DWT). A lifting kernel method of DWT implementation has been used for encoding and decoding sprites. Modifying the existing lifting scheme while maintaining it to be shape adaptive results in a reduced complexity. The proposed scheme has the advantages of (1) avoiding the need for any extensions to image or tile border pixels and is hence superior to the DCT based low latency scheme (used in the current MPEG-4 verification model), (2) mapping the in place computed wavelet coefficients into a zero
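
    To make the lifting idea concrete, here is a minimal Python sketch of one lifting level with the LeGall 5/3 wavelet on a 1-D signal; the shape-adaptive and border-handling details that the paper modifies are omitted, and periodic wrapping is used purely for simplicity.

    import numpy as np

    def lift_53_forward(x):
        """One lifting level: split into even/odd samples, predict, then update."""
        x = np.asarray(x, float)
        even, odd = x[0::2].copy(), x[1::2].copy()
        # Predict step: detail = odd sample minus the average of neighboring evens.
        odd -= 0.5 * (even + np.roll(even, -1)[:len(odd)])
        # Update step: approximation = even sample plus a quarter of neighboring details.
        even += 0.25 * (odd + np.roll(odd, 1)[:len(even)])
        return even, odd          # (approximation, detail) coefficients

    def lift_53_inverse(even, odd):
        even = even - 0.25 * (odd + np.roll(odd, 1)[:len(even)])
        odd = odd + 0.5 * (even + np.roll(even, -1)[:len(odd)])
        out = np.empty(len(even) + len(odd))
        out[0::2], out[1::2] = even, odd
        return out

    signal = np.arange(16, dtype=float)
    approx, detail = lift_53_forward(signal)
    print(np.allclose(lift_53_inverse(approx, detail), signal))   # perfect reconstruction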

  9. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that uses the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight of each. Then, we form a weighted-feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function performs well in terms of recognition accuracy. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods. PMID:27807443

  10. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    PubMed

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that uses the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight of each. Then, we form a weighted-feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function performs well in terms of recognition accuracy. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
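
    The two records above describe the same weighted-feature Gaussian kernel; the short Python sketch below shows the general construction with scikit-learn, where per-subregion weights scale the squared distances inside an RBF kernel passed to an SVM. The weights and toy data here are invented, whereas the paper derives the weights from subregion recognition rates.

    import numpy as np
    from sklearn.svm import SVC

    def make_weighted_rbf(weights, gamma=0.5):
        w = np.asarray(weights, float)
        def kernel(X, Y):
            # ||x - y||_w^2 = sum_j w_j (x_j - y_j)^2, computed for all pairs
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2 * w).sum(axis=-1)
            return np.exp(-gamma * d2)
        return kernel

    # Toy data: 2 "subregions" (features); the first is assumed more discriminative.
    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 2))
    y = (X[:, 0] > 0).astype(int)                     # label depends on feature 0 only
    clf = SVC(kernel=make_weighted_rbf([0.9, 0.1]))   # weights need not sum to 1
    clf.fit(X[:150], y[:150])
    print(clf.score(X[150:], y[150:]))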

  11. Adaptive entropy-constrained discontinuous Galerkin method for simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Lv, Yu; Ihme, Matthias

    2015-11-01

    A robust and adaptive computational framework will be presented for high-fidelity simulations of turbulent flows based on the discontinuous Galerkin (DG) scheme. For this, an entropy-residual based adaptation indicator is proposed to enable adaptation in polynomial and physical space. The performance and generality of this entropy-residual indicator are evaluated through direct comparisons with classical indicators. In addition, a dynamic load balancing procedure is developed to improve computational efficiency. The adaptive framework is tested on a series of turbulent cases, including homogeneous isotropic turbulence, channel flow, and flow over a cylinder. The accuracy, performance, and scalability are assessed, and the benefit of this adaptive high-order method is discussed. Funding from an NSF CAREER award is gratefully acknowledged.

  12. A high-throughput multiplex method adapted for GMO detection.

    PubMed

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences") from taxa endogenous reference genes, from GMO constructions, screening targets, construct-specific, and event-specific targets, and finally from donor organisms. This assay avoids certain shortcomings of multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  13. An Adaptive Kalman Filter using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.

  14. An Adaptive Kalman Filter Using a Simple Residual Tuning Method

    NASA Technical Reports Server (NTRS)

    Harman, Richard R.

    1999-01-01

    One difficulty in using Kalman filters in real world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods such as maximum likelihood, subspace, and observer Kalman Identification require extensive offline processing and are not suitable for real time processing. One technique, which is suitable for real time processing, is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
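
    The two records above describe residual tuning only qualitatively, so the Python sketch below shows one common variant of the idea for a scalar filter: the measurement-noise variance R is re-estimated from a running average of squared innovations using E[nu^2] = P_pred + R. The random-walk model, window length, and gains are assumptions, not the flight implementation.

    import numpy as np

    def kalman_with_residual_tuning(zs, q=1e-4, r0=1.0, window=30):
        x, p, r = 0.0, 1.0, r0
        recent_nu2 = []
        estimates, r_history = [], []
        for z in zs:
            p = p + q                      # predict (random-walk state model, H = 1)
            nu = z - x                     # innovation / measurement residual
            recent_nu2.append(nu * nu)
            if len(recent_nu2) > window:
                recent_nu2.pop(0)
                # tune R: E[nu^2] = P_pred + R  =>  R ~ mean(nu^2) - P_pred
                r = max(np.mean(recent_nu2) - p, 1e-6)
            k = p / (p + r)                # update using the current R estimate
            x = x + k * nu
            p = (1.0 - k) * p
            estimates.append(x)
            r_history.append(r)
        return np.array(estimates), np.array(r_history)

    # Usage: constant true state observed with noise variance 0.25; the filter starts at R = 1.
    rng = np.random.default_rng(4)
    z = 2.0 + 0.5 * rng.standard_normal(500)
    xs, rs = kalman_with_residual_tuning(z)
    print(round(xs[-1], 3), round(rs[-1], 3))   # state near 2, R near 0.25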

  15. The Pilates method and cardiorespiratory adaptation to training.

    PubMed

    Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen

    2016-01-01

    Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.

  16. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  17. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  18. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  19. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AND ORDERS; FRUITS, VEGETABLES, NUTS), DEPARTMENT OF AGRICULTURE ALMONDS GROWN IN CALIFORNIA... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... Standards for Shelled Almonds, or which has embedded dirt or other foreign material not easily removed...

  20. Travel-Time and Amplitude Sensitivity Kernels

    DTIC Science & Technology

    2011-09-01

    The amplitude sensitivity kernels concentrate about the corresponding eigenrays, and each 3D kernel exhibits a broad negative region. Amplitude sensitivity kernels in 2 and 3 dimensions have shapes similar to the corresponding travel-time sensitivity kernels (TSKs), centered about the respective eigenrays.

  1. Restrictive Stochastic Item Selection Methods in Cognitive Diagnostic Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Wang, Chun; Chang, Hua-Hua; Huebner, Alan

    2011-01-01

    This paper proposes two new item selection methods for cognitive diagnostic computerized adaptive testing: the restrictive progressive method and the restrictive threshold method. They are built upon the posterior weighted Kullback-Leibler (KL) information index but include additional stochastic components either in the item selection index or in…

  2. Robust kernel collaborative representation for face recognition

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Wang, Xiaohui; Ma, Yanbo; Jiang, Yuzheng; Zhu, Yinghui; Jin, Zhong

    2015-05-01

    One of the greatest challenges of representation-based face recognition is that the training samples are usually insufficient. In other words, the training set usually does not include enough samples to show varieties of high-dimensional face images caused by illuminations, facial expressions, and postures. When the test sample is significantly different from the training samples of the same subject, the recognition performance will be sharply reduced. We propose a robust kernel collaborative representation based on virtual samples for face recognition. We think that the virtual training set conveys some reasonable and possible variations of the original training samples. Hence, we design a new object function to more closely match the representation coefficients generated from the original and virtual training sets. In order to further improve the robustness, we implement the corresponding representation-based face recognition in kernel space. It is noteworthy that any kind of virtual training samples can be used in our method. We use noised face images to obtain virtual face samples. The noise can be approximately viewed as a reflection of the varieties of illuminations, facial expressions, and postures. Our work is a simple and feasible way to obtain virtual face samples to impose Gaussian noise (and other types of noise) specifically to the original training samples to obtain possible variations of the original samples. Experimental results on the FERET, Georgia Tech, and ORL face databases show that the proposed method is more robust than two state-of-the-art face recognition methods, such as CRC and Kernel CRC.

  3. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  4. Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Ming; Chen, Xiao-Fei

    2001-03-01

    Based on the principle of the self-adaptive Simpson integration method, and by incorporating the `fifth-order' Filon's integration algorithm [Bull. Seism. Soc. Am. 73(1983)913], we have proposed a simple and efficient numerical integration method, i.e., the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
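
    SAFIM combines adaptive subdivision with Filon's rule for oscillatory integrands; the Filon weights are not reproduced here, but the self-adaptive Simpson recursion it builds on looks roughly like the following Python sketch.

    import math

    def simpson(f, a, b):
        c = 0.5 * (a + b)
        return (b - a) / 6.0 * (f(a) + 4.0 * f(c) + f(b))

    def adaptive_simpson(f, a, b, tol=1e-8):
        """Recursively bisect until the Richardson error estimate meets tol."""
        c = 0.5 * (a + b)
        whole = simpson(f, a, b)
        left, right = simpson(f, a, c), simpson(f, c, b)
        err = (left + right - whole) / 15.0
        if abs(err) <= tol:
            return left + right + err          # add the error estimate as a correction
        return (adaptive_simpson(f, a, c, tol / 2.0) +
                adaptive_simpson(f, c, b, tol / 2.0))

    # Usage: a mildly oscillatory integrand, compared against the exact value.
    value = adaptive_simpson(lambda x: math.cos(20.0 * x), 0.0, 1.0)
    print(round(value, 10), round(math.sin(20.0) / 20.0, 10))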

  5. Verification of Chare-kernel programs

    SciTech Connect

    Bhansali, S.; Kale, L.V. )

    1989-01-01

    Experience with concurrent programming has shown that concurrent programs can conceal bugs even after extensive testing. Thus, there is a need for practical techniques which can establish the correctness of parallel programs. This paper proposes a method for showing how to prove the partial correctness of programs written in the Chare-kernel language, which is a language designed to support the parallel execution of computation with irregular structures. The proof is based on the lattice proof technique and is divided into two parts. The first part is concerned with the program behavior within a single chare instance, whereas the second part captures the inter-chare interaction.

  6. A one-class kernel fisher criterion for outlier detection.

    PubMed

    Dufrenois, Franck

    2015-05-01

    Recently, Dufrenois and Noyer proposed a one class Fisher's linear discriminant to isolate normal data from outliers. In this paper, a kernelized version of their criterion is presented. Originally on the basis of an iterative optimization process, alternating between subspace selection and clustering, I show here that their criterion has an upper bound making these two problems independent. In particular, the estimation of the label vector is formulated as an unconstrained binary linear problem (UBLP) which can be solved using an iterative perturbation method. Once the label vector is estimated, an optimal projection subspace is obtained by solving a generalized eigenvalue problem. Like many other kernel methods, the performance of the proposed approach depends on the choice of the kernel. Constructed with a Gaussian kernel, I show that the proposed contrast measure is an efficient indicator for selecting an optimal kernel width. This property simplifies the model selection problem which is typically solved by costly (generalized) cross-validation procedures. Initialization, convergence analysis, and computational complexity are also discussed. Lastly, the proposed algorithm is compared with recent novelty detectors on synthetic and real data sets.

  7. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    NASA Astrophysics Data System (ADS)

    Frontiere, Nicholas; Raskin, Cody D.; Owen, J. Michael

    2017-03-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that utilizes a first-order consistent reproducing kernel, a smoothing function that exactly interpolates linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact the RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, linear momentum, and energy are all rigorously conserved without any assumption about kernel symmetries, while additionally maintaining approximate angular momentum conservation. Our approach starts from a rigorously consistent interpolation theory, where we derive the evolution equations to enforce the appropriate conservation properties, at the sacrifice of full consistency in the momentum equation. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods (such as preserving Galilean invariance and manifest conservation of mass, momentum, and energy) while improving on many of the shortcomings of SPH, particularly the overly aggressive artificial viscosity and zeroth-order inaccuracy. We compare CRKSPH to two different modern SPH formulations (pressure based SPH and compatibly differenced SPH), demonstrating the advantages of our new formulation when modeling fluid mixing, strong shock, and adiabatic phenomena.

  8. Face detection based on multiple kernel learning algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun

    2016-09-01

    Face detection is important for face localization in face or facial expression recognition, etc. The basic idea is to determine whether there is a face in an image, and if so, its location and size. It can be seen as a binary classification problem, which can be well solved by a support vector machine (SVM). Though SVM has strong model generalization ability, it has some limitations, which are analyzed in depth in the paper. To address them, we study the principle and characteristics of Multiple Kernel Learning (MKL) and propose an MKL-based face detection algorithm. In the paper, we describe the proposed algorithm from the interdisciplinary perspective of machine learning and image processing. After analyzing the limitation of describing a face with a single feature, we apply several features. To fuse them well, we try different kernel functions on different features. The weight of each single kernel function is determined by the MKL method. Thus, we obtain the face detection model, which is the kernel of the proposed method. Experiments on a public data set and real-life face images are performed. We compare the performance of the proposed algorithm with a single kernel-single feature based algorithm and a multiple kernels-single feature based algorithm. The effectiveness of the proposed algorithm is illustrated. Keywords: face detection, feature fusion, SVM, MKL

  9. Spine labeling in axial magnetic resonance imaging via integral kernels.

    PubMed

    Miles, Brandon; Ben Ayed, Ismail; Hojjat, Seyed-Parsa; Wang, Michael H; Li, Shuo; Fenster, Aaron; Garvin, Gregory J

    2016-12-01

    This study investigates a fast integral-kernel algorithm for classifying (labeling) the vertebra and disc structures in axial magnetic resonance images (MRI). The method is based on a hierarchy of feature levels, where pixel classifications via non-linear probability product kernels (PPKs) are followed by classifications of 2D slices, individual 3D structures and groups of 3D structures. The algorithm further embeds geometric priors based on anatomical measurements of the spine. Our classifier requires evaluations of computationally expensive integrals at each pixel, and direct evaluations of such integrals would be prohibitively time consuming. We propose an efficient computation of kernel density estimates and PPK evaluations for large images and arbitrary local window sizes via integral kernels. Our method requires a single user click for a whole 3D MRI volume, runs nearly in real-time, and does not require an intensive external training. Comprehensive evaluations over T1-weighted axial lumbar spine data sets from 32 patients demonstrate a competitive structure classification accuracy of 99%, along with a 2D slice classification accuracy of 88%. To the best of our knowledge, such a structure classification accuracy has not been reached by the existing spine labeling algorithms. Furthermore, we believe our work is the first to use integral kernels in the context of medical images.
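
    The probability product kernel machinery is beyond a short example, but the integral-image trick that makes per-pixel evaluations over arbitrary local windows cheap can be sketched on plain box sums, as below; the names and the test image are assumptions.

    import numpy as np

    def integral_image(img):
        """Cumulative sums so any rectangular window sum costs four lookups."""
        return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

    def window_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] using the padded integral image ii."""
        return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

    rng = np.random.default_rng(5)
    img = rng.random((128, 128))
    ii = integral_image(img)
    print(np.isclose(window_sum(ii, 10, 20, 40, 60), img[10:40, 20:60].sum()))  # True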

  10. Search Control Algorithm Based on Random Step Size Hill-Climbing Method for Adaptive PMD Compensation

    NASA Astrophysics Data System (ADS)

    Tanizawa, Ken; Hirose, Akira

    Adaptive polarization mode dispersion (PMD) compensation is required for the speed-up and advancement of present optical communications. The combination of a tunable PMD compensator and an adaptive control method achieves adaptive PMD compensation. In this paper, we report an effective search control algorithm for the feedback control of the PMD compensator. The algorithm is based on the hill-climbing method; however, unlike the conventional hill-climbing method, the step size changes randomly to prevent the convergence from being trapped at a local maximum or on a flat region. The randomness follows Gaussian probability density functions. We conducted transmission simulations at 160 Gb/s, and the results show that the proposed method provides compensator control closer to the optimum than the conventional hill-climbing method.
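
    A compact Python sketch of the search rule described above follows: hill climbing in which each trial step is drawn from a Gaussian, so the controller can occasionally jump out of a local maximum or a flat region. The objective function stands in for the compensator's measured feedback signal and is invented for illustration.

    import numpy as np

    def random_step_hill_climb(objective, x0, sigma=1.0, n_iter=500, seed=0):
        rng = np.random.default_rng(seed)
        x, best = np.asarray(x0, float), objective(x0)
        for _ in range(n_iter):
            step = rng.normal(0.0, sigma, size=x.shape)   # Gaussian random step size/direction
            candidate = x + step
            value = objective(candidate)
            if value > best:                              # keep only improving moves
                x, best = candidate, value
        return x, best

    # Stand-in feedback signal with a local maximum at -1 and the global maximum near +1.5.
    obj = lambda v: float(np.exp(-(v[0] - 1.5) ** 2) + 0.5 * np.exp(-(v[0] + 1.0) ** 2))
    x_opt, f_opt = random_step_hill_climb(obj, [-1.0], sigma=1.0)
    print(round(x_opt[0], 2), round(f_opt, 3))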

  11. A Massively Parallel Adaptive Fast Multipole Method on Heterogeneous Architectures

    SciTech Connect

    Lashuk, Ilya; Chandramowlishwaran, Aparna; Langston, Harper; Nguyen, Tuan-Anh; Sampath, Rahul S; Shringarpure, Aashay; Vuduc, Richard; Ying, Lexing; Zorin, Denis; Biros, George

    2012-01-01

    We describe a parallel fast multipole method (FMM) for highly nonuniform distributions of particles. We employ both distributed memory parallelism (via MPI) and shared memory parallelism (via OpenMP and GPU acceleration) to rapidly evaluate two-body nonoscillatory potentials in three dimensions on heterogeneous high performance computing architectures. We have performed scalability tests with up to 30 billion particles on 196,608 cores on the AMD/CRAY-based Jaguar system at ORNL. On a GPU-enabled system (NSF's Keeneland at Georgia Tech/ORNL), we observed 30x speedup over a single core CPU and 7x speedup over a multicore CPU implementation. By combining GPUs with MPI, we achieve less than 10 ns/particle and six digits of accuracy for a run with 48 million nonuniformly distributed particles on 192 GPUs.

  12. Adaptive bit truncation and compensation method for EZW image coding

    NASA Astrophysics Data System (ADS)

    Dai, Sheng-Kui; Zhu, Guangxi; Wang, Yao

    2003-09-01

    The embedded zero-tree wavelet (EZW) algorithm is widely adopted to compress wavelet coefficients of images, with the property that the bit stream can be truncated at any point. The lower bit planes of the wavelet coefficients are verified to be less important than the higher bit planes, and can therefore be truncated and left unencoded. Based on experiments, a generalized function is deduced in this paper that provides a rough guide for the EZW encoder to intelligently decide the number of lower bit planes to truncate. In the EZW decoder, a simple method is presented to compensate for the truncated wavelet coefficients; surprisingly, it can noticeably enhance the quality of the reconstructed image while incurring scarcely any additional cost.

  13. Delimiting Areas of Endemism through Kernel Interpolation

    PubMed Central

    Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes may be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
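
    GIE's actual workflow is more involved, but its core step, interpolating species centroids with kernels whose reach is set by the centroid-to-farthest-occurrence distance, can be imitated with isotropic Gaussian kernels as in the Python sketch below; the data and scaling are invented and this is not the authors' implementation.

    import numpy as np

    def endemism_surface(occurrences_by_species, grid_x, grid_y):
        """Sum one Gaussian kernel per species, centered at its centroid and scaled
        by the centroid-to-farthest-occurrence distance."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        surface = np.zeros_like(gx)
        for pts in occurrences_by_species:
            pts = np.asarray(pts, float)
            centroid = pts.mean(axis=0)
            reach = np.linalg.norm(pts - centroid, axis=1).max() + 1e-9
            d2 = (gx - centroid[0]) ** 2 + (gy - centroid[1]) ** 2
            surface += np.exp(-0.5 * d2 / reach ** 2)
        return surface  # high values = overlapping ranges, candidate areas of endemism

    # Two species with overlapping ranges and one isolated species.
    species = [[(0, 0), (1, 0.5), (0.5, 1)], [(0.8, 0.6), (1.2, 1.1)], [(5, 5), (5.5, 5.2)]]
    z = endemism_surface(species, np.linspace(-1, 7, 80), np.linspace(-1, 7, 80))
    print(z.shape, round(float(z.max()), 2))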

  14. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  15. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    PubMed

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  16. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    PubMed Central

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios. PMID:27247562

  17. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings.

  18. Impedance adaptation methods of the piezoelectric energy harvesting

    NASA Astrophysics Data System (ADS)

    Kim, Hyeoungwoo

    In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred from the environment to the transducer. This was done by reducing mechanical impedance factors such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled as a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, used to show how much mechanical energy was transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an equivalent electrical system, using the analogy between the two, in order to simplify the total mechanical impedance. Secondly, the rate of transduction of mechanical energy into electrical energy was improved by using a PZT material with a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g33 = 40 x 10^-3 Vm/N) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer has been found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10-200 Hz), because its effective strain coefficient is almost 40 times higher than that of PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic to sustain ac loads, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and the high electromechanical coupling

  19. Kernel machine testing for risk prediction with stratified case cohort studies.

    PubMed

    Payne, Rebecca; Neykov, Matey; Jensen, Majken Karoline; Cai, Tianxi

    2016-06-01

    Large assembled cohorts with banked biospecimens offer valuable opportunities to identify novel markers for risk prediction. When the outcome of interest is rare, an effective strategy to conserve limited biological resources while maintaining reasonable statistical power is the case cohort (CCH) sampling design, in which expensive markers are measured on a subset of cases and controls. However, the CCH design introduces significant analytical complexity due to outcome-dependent, finite-population sampling. Current methods for analyzing CCH studies focus primarily on the estimation of simple survival models with linear effects; testing and estimation procedures that can efficiently capture complex non-linear marker effects for CCH data remain elusive. In this article, we propose inverse probability weighted (IPW) variance component type tests for identifying important marker sets through a Cox proportional hazards kernel machine (CoxKM) regression framework previously considered for full cohort studies (Cai et al., 2011). The optimal choice of kernel, while vitally important to attain high power, is typically unknown for a given dataset. Thus, we also develop robust testing procedures that adaptively combine information from multiple kernels. The proposed IPW test statistics have complex null distributions that cannot easily be approximated explicitly. Furthermore, due to the correlation induced by CCH sampling, standard resampling methods such as the bootstrap fail to approximate the distribution correctly. We, therefore, propose a novel perturbation resampling scheme that can effectively recover the induced correlation structure. Results from extensive simulation studies suggest that the proposed IPW CoxKM testing procedures work well in finite samples. The proposed methods are further illustrated by application to a Danish CCH study of Apolipoprotein C-III markers on the risk of coronary heart disease.

  20. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    ERIC Educational Resources Information Center

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  1. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    NASA Astrophysics Data System (ADS)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  2. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
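
    A one-dimensional Python sketch of the spring-analogy redistribution follows: spring stiffness is taken from a weight function, so nodes cluster where the weight is large. The user-specified minimum and maximum spacing constraints and the variational constants described above are left out; all names and values are assumptions.

    import numpy as np

    def spring_redistribute(x, weight, n_sweeps=500, relax=0.5):
        """Move interior nodes toward spring equilibrium; stiffer springs (larger
        weight) contract more, clustering points where the weight is high."""
        x = np.array(x, float)
        a, b = x[0], x[-1]
        for _ in range(n_sweeps):
            k = weight(0.5 * (x[:-1] + x[1:]))           # stiffness of each interval
            for i in range(1, len(x) - 1):
                x_eq = (k[i - 1] * x[i - 1] + k[i] * x[i + 1]) / (k[i - 1] + k[i])
                x[i] += relax * (x_eq - x[i])            # under-relaxed update
            x[0], x[-1] = a, b                           # endpoints stay fixed
        return x

    # Usage: cluster 21 points around a steep feature at x = 0.5.
    grid = np.linspace(0.0, 1.0, 21)
    w = lambda xm: 1.0 + 20.0 * np.exp(-200.0 * (xm - 0.5) ** 2)
    adapted = spring_redistribute(grid, w)
    print(round(float(np.diff(adapted).min()), 4), round(float(np.diff(adapted).max()), 4))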

  3. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    NASA Astrophysics Data System (ADS)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Growing competition between companies in Indonesia means that every company needs proper planning in order to win against its competitors. One way to support such planning is to forecast car sales for the next few periods, so that the number of cars held in inventory is proportional to the number of cars needed. One method that can be used to obtain accurate forecasts is Adaptive Spline Threshold Autoregression (ASTAR). This study therefore focuses on using the ASTAR method to forecast the volume of car sales at PT. Srikandi Diamond Motors from time series data. In this study, the forecasts produced with the ASTAR method were approximately correct.
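
    ASTAR is closely related to MARS applied to lagged values of the series. The toy sketch below fits truncated-spline (hinge) terms of a single lag by least squares; the single-lag restriction and fixed knots are simplifying assumptions, whereas the actual method selects lags and knot locations adaptively.

      import numpy as np

      def hinge(u):
          return np.maximum(u, 0.0)

      def fit_one_lag(y, knots):
          # Regress y_t on an intercept and hinge terms of y_{t-1}.
          x, target = y[:-1], y[1:]
          X = np.column_stack([np.ones_like(x)]
                              + [hinge(x - k) for k in knots]
                              + [hinge(k - x) for k in knots])
          coef, *_ = np.linalg.lstsq(X, target, rcond=None)
          return coef

      def forecast_next(y_last, knots, coef):
          # One-step-ahead forecast from the last observed value.
          x = np.array([1.0] + [max(y_last - k, 0.0) for k in knots]
                       + [max(k - y_last, 0.0) for k in knots])
          return float(x @ coef)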

  4. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  5. Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain

    ERIC Educational Resources Information Center

    Hannagan, Thomas; Grainger, Jonathan

    2012-01-01

    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…
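
    For readers unfamiliar with the technique, a k-spectrum kernel (one of the simplest string kernels used for protein function prediction and spam detection) scores two strings by the overlap of their k-mer count vectors; a minimal sketch:

      from collections import Counter

      def spectrum_kernel(s, t, k=3):
          # Inner product of the k-mer count vectors of s and t.
          cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
          ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
          return sum(cs[g] * ct[g] for g in cs.keys() & ct.keys())

      # spectrum_kernel("TABLE", "CABLE") == 2: the shared 3-mers are "ABL" and "BLE".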

  6. Nondiagonal Values of the Heat Kernel for Scalars in a Constant Electromagnetic Field

    NASA Astrophysics Data System (ADS)

    Kalinichenko, I. S.; Kazinski, P. O.

    2017-03-01

    An original method for finding the nondiagonal values of the heat kernel associated with the wave operator Fourier-transformed in time is proposed for the case of a constant external electromagnetic field. The connection of the trace of such a heat kernel to the one-loop correction to the grand thermodynamic potential is indicated. The structure of its singularities is analyzed.

  7. Resistant-starch Formation in High-amylose Maize Starch During Kernel Development

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objective of this study was to understand the resistant-starch (RS) formation during the kernel development of high-amylose maize, GEMS-0067 line. RS content of the starch, determined using AOAC Method 991.43 for total dietary fiber, increased with kernel maturation and the increase in amylose/...

  8. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    PubMed

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers classification problems where the labels of the variable to predict follow a given order. Labelled data is naturally scarce or difficult to obtain in this type of problem because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended to this setting by considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, and it is combined with the developed classification strategy to optimise the kernel parameters. The experiments compare 6 approaches to semi-supervised learning in the context of ordinal classification on a battery of 30 datasets, showing (1) the good synergy between the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function.
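
    The "neighbourhood information computed in the feature space induced by the kernel" relies on the fact that pairwise distances are available from the Gram matrix alone, without an explicit feature map; a minimal sketch of that step (not the authors' full semi-supervised procedure):

      import numpy as np

      def kernel_squared_distances(K):
          # ||phi(x_i) - phi(x_j)||^2 = K_ii + K_jj - 2 K_ij
          d = np.diag(K)
          return d[:, None] + d[None, :] - 2.0 * K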

  9. Travel-time sensitivity kernels in long-range propagation.

    PubMed

    Skarsoulis, E K; Cornuelle, B D; Dzieciuch, M A

    2009-11-01

    Wave-theoretic travel-time sensitivity kernels (TSKs) are calculated in two-dimensional (2D) and three-dimensional (3D) environments and their behavior with increasing propagation range is studied and compared to that of ray-theoretic TSKs and corresponding Fresnel-volumes. The differences between the 2D and 3D TSKs average out when horizontal or cross-range marginals are considered, which indicates that they are not important in the case of range-independent sound-speed perturbations or perturbations of large scale compared to the lateral TSK extent. With increasing range, the wave-theoretic TSKs expand in the horizontal cross-range direction, their cross-range extent being comparable to that of the corresponding free-space Fresnel zone, whereas they remain bounded in the vertical. Vertical travel-time sensitivity kernels (VTSKs)-one-dimensional kernels describing the effect of horizontally uniform sound-speed changes on travel-times-are calculated analytically using a perturbation approach, and also numerically, as horizontal marginals of the corresponding TSKs. Good agreement between analytical and numerical VTSKs, as well as between 2D and 3D VTSKs, is found. As an alternative method to obtain wave-theoretic sensitivity kernels, the parabolic approximation is used; the resulting TSKs and VTSKs are in good agreement with normal-mode results. With increasing range, the wave-theoretic VTSKs approach the corresponding ray-theoretic sensitivity kernels.

  10. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. By focusing computational resources where they are required through dynamic adaptation, this method facilitates the solution of problems at and beyond the limits of what traditional ALE methods can solve. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  11. Five Methods to Score the Teacher Observation of Classroom Adaptation Checklist and to Examine Group Differences

    ERIC Educational Resources Information Center

    Wang, Ze; Rohrer, David; Chuang, Chi-ching; Fujiki, Mayo; Herman, Keith; Reinke, Wendy

    2015-01-01

    This study compared 5 scoring methods in terms of their statistical assumptions. They were then used to score the Teacher Observation of Classroom Adaptation Checklist, a measure consisting of 3 subscales and 21 Likert-type items. The 5 methods used were (a) sum/average scores of items, (b) latent factor scores with continuous indicators, (c)…

  12. An adaptive, formally second order accurate version of the immersed boundary method

    NASA Astrophysics Data System (ADS)

    Griffith, Boyce E.; Hornung, Richard D.; McQueen, David M.; Peskin, Charles S.

    2007-04-01

    Like many problems in biofluid mechanics, cardiac mechanics can be modeled as the dynamic interaction of a viscous incompressible fluid (the blood) and a (visco-)elastic structure (the muscular walls and the valves of the heart). The immersed boundary method is a mathematical formulation and numerical approach to such problems that was originally introduced to study blood flow through heart valves, and extensions of this work have yielded a three-dimensional model of the heart and great vessels. In the present work, we introduce a new adaptive version of the immersed boundary method. This adaptive scheme employs the same hierarchical structured grid approach (but a different numerical scheme) as the two-dimensional adaptive immersed boundary method of Roma et al. [A multilevel self adaptive version of the immersed boundary method, Ph.D. Thesis, Courant Institute of Mathematical Sciences, New York University, 1996; An adaptive version of the immersed boundary method, J. Comput. Phys. 153 (2) (1999) 509-534] and is based on a formally second order accurate (i.e., second order accurate for problems with sufficiently smooth solutions) version of the immersed boundary method that we have recently described [B.E. Griffith, C.S. Peskin, On the order of accuracy of the immersed boundary method: higher order convergence rates for sufficiently smooth problems, J. Comput. Phys. 208 (1) (2005) 75-105]. Actual second order convergence rates are obtained for both the uniform and adaptive methods by considering the interaction of a viscous incompressible flow and an anisotropic incompressible viscoelastic shell. We also present initial results from the application of this methodology to the three-dimensional simulation of blood flow in the heart and great vessels. The results obtained by the adaptive method show good qualitative agreement with simulation results obtained by earlier non-adaptive versions of the method, but the flow in the vicinity of the model heart valves
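
    The adaptive hierarchical-grid machinery is the paper's contribution; underneath any immersed boundary code, however, the Eulerian-Lagrangian coupling rests on a regularized delta function used to spread Lagrangian forces onto the grid and to interpolate grid velocities back. The sketch below shows that coupling with Peskin's standard 4-point delta on a uniform grid, as background only; it is not the adaptive scheme described above.

      import numpy as np

      def delta4(r):
          # Peskin's 4-point regularized delta function (argument in grid units).
          r = np.abs(np.asarray(r, dtype=float))
          out = np.zeros_like(r)
          m1 = r < 1.0
          m2 = (r >= 1.0) & (r < 2.0)
          out[m1] = (3 - 2*r[m1] + np.sqrt(1 + 4*r[m1] - 4*r[m1]**2)) / 8
          out[m2] = (5 - 2*r[m2] - np.sqrt(-7 + 12*r[m2] - 4*r[m2]**2)) / 8
          return out

      def spread_forces(X, F, nx, ny, h):
          # Spread Lagrangian point forces F (m, 2) at positions X (m, 2)
          # onto a uniform (nx, ny) Eulerian grid with spacing h.
          f = np.zeros((nx, ny, 2))
          gx, gy = np.arange(nx) * h, np.arange(ny) * h
          for (x, y), force in zip(X, F):
              wx = delta4((gx - x) / h) / h
              wy = delta4((gy - y) / h) / h
              f += np.outer(wx, wy)[:, :, None] * force
          return f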

  13. Adaptive methods: when and how should they be used in clinical trials?

    PubMed

    Porcher, Raphaël; Lecocq, Brigitte; Vray, Muriel

    2011-01-01

    Adaptive clinical trial designs are defined as designs that use data accumulated during the trial to modify certain aspects of it without compromising the trial's validity and integrity. Compared to more traditional trials, adaptive designs in theory allow the same information to be generated in a more efficient manner. The advantages and limits of this type of design, together with the weight of the constraints (particularly logistical ones) that their use implies, differ depending on whether the trial is exploratory or confirmatory with a view to registration. One of the key elements ensuring trial integrity is the involvement of an independent committee to determine adaptations to the experimental design during the study. Adaptive methods for clinical trials are appealing and may be accepted by the relevant authorities. However, the constraints that they impose must be determined well in advance.

  14. An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations

    NASA Astrophysics Data System (ADS)

    Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.

    2016-08-01

    In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
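
    The abstract states only that candidate elements are selected from the locally largest values of the density gradient; a hedged sketch of such a flagging step (the threshold fractions below are assumptions, not values from the paper) could look like:

      import numpy as np

      def flag_elements(density_grad_mag, refine_frac=0.1, coarsen_frac=0.01):
          # Flag elements whose density-gradient magnitude is within refine_frac
          # of the current maximum for refinement, and those far below it for
          # coarsening; everything else keeps its current refinement level.
          g = np.asarray(density_grad_mag, dtype=float)
          gmax = g.max()
          refine = g >= refine_frac * gmax
          coarsen = g <= coarsen_frac * gmax
          return refine, coarsen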

  15. Adaptive remeshing method in 2D based on refinement and coarsening techniques

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.

    2007-04-01

    The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons, the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method to remesh a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.

  16. Estimating the Importance of Private Adaptation to Climate Change in Agriculture: A Review of Empirical Methods

    NASA Astrophysics Data System (ADS)

    Moore, F.; Burke, M.

    2015-12-01

    A wide range of studies using a variety of methods strongly suggest that climate change will have a negative impact on agricultural production in many areas. Farmers, though, should be able to learn about a changing climate and to adjust what they grow and how they grow it in order to reduce these negative impacts. However, it remains unclear how effective these private (autonomous) adaptations will be, or how quickly they will be adopted. Constraining the uncertainty in this adaptation is important for understanding the impacts of climate change on agriculture. Here we review a number of empirical methods that have been proposed for understanding the rate and effectiveness of private adaptation to climate change. We compare these methods using data on agricultural yields in the United States and western Europe.

  17. Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation

    NASA Astrophysics Data System (ADS)

    Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk

    2016-09-01

    This work focuses on the conjunction of the fast multipole method (FMM) with the space adaptive multiresolution (MR) technique for grid adaptation. Since both methods provide a priori error estimates, achieve O(N) computational complexity, and operate on the same hierarchical space division, their conjunction represents a natural choice when designing a numerically efficient and robust strategy for time dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed an FMM Poisson solver on a multiresolution adapted grid in 2D. The accuracy and the computational complexity of the solver have been verified for a set of manufactured solutions. We confirmed that the developed solver attains the desired accuracy, which is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has a linear computational complexity O(N).

  18. Weighted adaptive threshold estimating method and its application to Satellite-to-Ground optical communications

    NASA Astrophysics Data System (ADS)

    Ran, Qiwen; Yang, Zhonghua; Ma, Jing; Tan, Liying; Liao, Huixi; Liu, Qingfeng

    2013-02-01

    In this paper, a weighted adaptive threshold estimating method is proposed to deal with long and deep channel fades in Satellite-to-Ground optical communications. During the channel correlation interval, where adjacent signal samples are sufficiently correlated, the correlations in the signal change rates are described by weighted equations in the form of a Toeplitz matrix. As vital inputs to the proposed adaptive threshold estimator, the optimal values of the change rates can be obtained by solving these weighted equation systems. The effect of channel fades and aberrant samples can be mitigated by the joint use of the weighted equation systems and Kalman estimation. Based on channel information data from star observation trails, simulations are made, and the numerical results show that the proposed method has better anti-fade performance than the D-value adaptive threshold estimating method in both weak and strong turbulence conditions.

  19. The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping

    PubMed Central

    Mhaidat, Fatin

    2016-01-01

    This study aimed at identifying the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods that were used to cope with these problems. The sample was composed of 220 Syrian female students (seventh to first secondary grades) enrolled at government schools within the Zarqa Directorate who came to Jordan due to the war conditions in their home country. The study used a scale of adaptive problems that consists of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on behavioral adjustment methods for dealing with the problem of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems, and that they used positive adjustment methods more than negative ones. PMID:27175098

  20. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially on large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method consistently achieves both good time efficiency and good classification performance.
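
    The kernel target alignment criterion at the heart of the method is simple to state even though its global optimization is not; below is a minimal sketch of the criterion itself (the d.c. decomposition and outer-approximation solver described above are not reproduced here).

      import numpy as np

      def gaussian_kernel(X, sigma):
          # Gaussian Gram matrix over samples X of shape (n, p).
          sq = np.sum(X**2, axis=1)
          d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
          return np.exp(-d2 / (2.0 * sigma**2))

      def kernel_target_alignment(K, y):
          # Alignment A(K, yy') = <K, yy'>_F / (||K||_F * ||yy'||_F),
          # with labels y in {-1, +1}; larger means the kernel geometry
          # matches the class structure better.
          Y = np.outer(y, y)
          return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

      # Sweeping sigma and keeping the value with the highest alignment is the
      # naive (local) counterpart of the paper's global optimization.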