Sample records for radial basis kernel

  1. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the complexity and scale of the problem. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Numerous high-throughput experimental procedures have recently been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a representation of a protein that encodes its evolutionary history. In this paper we propose a method for classifying phylogenetic profiles with a supervised machine learning method, support vector machine classification with a radial basis function kernel, to identify functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear and polynomial kernels and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the radial basis kernel performs similarly to the polynomial kernel in some functional classes, both being better than the linear and tree kernels, and that overall the radial basis kernel outperforms the polynomial, linear, and tree kernels. Analyzing these results, we show that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene function from phylogenetic profiles.
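
    The radial basis kernel used above has the standard Gaussian form k(x, y) = exp(-γ‖x − y‖²). A minimal NumPy sketch of the Gram-matrix computation over toy presence/absence phylogenetic profiles (the profile values and γ are illustrative assumptions, not the paper's yeast data or settings):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Radial basis (Gaussian) kernel matrix: K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2)."""
    # Squared distances via ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2
    sq = (np.sum(X**2, axis=1)[:, None]
          - 2.0 * X @ Y.T
          + np.sum(Y**2, axis=1)[None, :])
    return np.exp(-gamma * np.maximum(sq, 0.0))

# Toy phylogenetic profiles: binary presence/absence of each gene across genomes.
profiles = np.array([[1, 0, 1, 1],
                     [1, 0, 1, 0],
                     [0, 1, 0, 0]], dtype=float)
K = rbf_kernel(profiles, profiles)
```

    The resulting symmetric matrix K (with unit diagonal) is what an SVM consumes in place of the raw profile vectors.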

  2. An SVM model with hybrid kernels for hydrological time series

    NASA Astrophysics Data System (ADS)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecasting of climate/weather and of its impact on other environmental variables such as the hydrologic response to climate/weather. When using an SVM, the choice of kernel function plays a key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel. Given that several featured kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give the SVM approach more flexibility and robustness, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for forecasting monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared with either kernel function used alone, demonstrating the feasibility and advantages of such a hybrid-kernel approach for SVM applications.
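
    A linear combination of two positive semidefinite kernels is itself a valid kernel, which is what makes the hybrid approach well posed. A small NumPy sketch (the weight and kernel parameters are illustrative, not the paper's calibrated values):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    sq = np.sum(X**2, 1)[:, None] - 2.0 * X @ Y.T + np.sum(Y**2, 1)[None, :]
    return np.exp(-gamma * np.maximum(sq, 0.0))

def poly_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def hybrid_kernel(X, Y, w=0.7, gamma=1.0, degree=2, c=1.0):
    # A convex combination of two PSD kernels stays PSD, so it is a legal SVM kernel.
    return w * rbf_kernel(X, Y, gamma) + (1.0 - w) * poly_kernel(X, Y, degree, c)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))      # stand-in for lagged monthly flowrate features
K = hybrid_kernel(X, X)
eigs = np.linalg.eigvalsh(K)          # all (numerically) nonnegative
```

    The eigenvalue check confirms the combined Gram matrix remains positive semidefinite, so any standard SVM solver accepts it unchanged.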

  3. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    NASA Astrophysics Data System (ADS)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    A Brain Computer Interface (BCI) can be challenging to develop for robotic, prosthetic, and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of kernel linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher dimension, where the RBF kernel yields better-discriminated data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III data set IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that combining KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness. For all subjects, it is concluded that mapping the CSP features into a higher dimension by the RBF and using the GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.

  4. Vowel Imagery Decoding toward Silent Speech BCI Using Extreme Learning Machine with Electroencephalogram

    PubMed Central

    Kim, Jongin; Park, Hyeong-jun

    2016-01-01

    The purpose of this study is to classify EEG data on imagined speech in a single trial. We recorded EEG data while five subjects imagined different vowels, /a/, /e/, /i/, /o/, and /u/. We divided each single-trial dataset into thirty segments and extracted features (mean, variance, standard deviation, and skewness) from all segments. To reduce the dimension of the feature vector, we applied a feature selection algorithm based on the sparse regression model. These features were classified using a support vector machine with a radial basis function kernel, an extreme learning machine, and two variants of an extreme learning machine with different kernels. Because each single trial consisted of thirty segments, our algorithm decided the label of the single trial by selecting the most frequent output among the outputs of the thirty segments. As a result, we observed that the extreme learning machine and its variants achieved better classification rates than the support vector machine with a radial basis function kernel and linear discriminant analysis. Thus, our results suggest that EEG responses to imagined speech can be successfully classified in a single trial using an extreme learning machine with radial basis function and linear kernels. This study of imagined-speech classification might contribute to the development of silent speech BCI systems. PMID:28097128
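
    The extreme learning machine itself is compact enough to sketch: a random hidden layer followed by a one-shot least-squares solve for the output weights. The toy two-class data below stands in for the per-segment EEG features; the hidden-layer size and data are assumptions, not the study's settings:

```python
import numpy as np

def elm_fit(X, T, n_hidden=40, rng=None):
    """Extreme learning machine: random input weights and biases, a sigmoid
    hidden layer, and output weights solved in one shot by least squares."""
    rng = rng if rng is not None else np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy two-class data standing in for the per-segment EEG feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (40, 4)), rng.normal(2.0, 1.0, (40, 4))])
T = np.vstack([np.tile([1.0, 0.0], (40, 1)), np.tile([0.0, 1.0], (40, 1))])  # one-hot
W, b, beta = elm_fit(X, T)
acc = float(np.mean(elm_predict(X, W, b, beta).argmax(axis=1) == T.argmax(axis=1)))
```

    Because only the output weights are trained, fitting is a single linear-algebra call, which is the speed advantage the abstract alludes to.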

  5. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analyses in the literature, the Sobol indices have attracted much attention since they provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the Sobol indices can be estimated by post-processing the coefficients of the SVR meta-model. The MKF is constructed from an orthogonal-polynomial kernel function and a Gaussian radial basis kernel function, so it possesses both the global-characteristic advantage of the polynomial kernel and the local-characteristic advantage of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and nonlinear problems. Its performance is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
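
    For reference, the first-order Sobol index of input i is S_i = Var(E[f | X_i]) / Var(f). The sketch below estimates it with a plain Saltelli-style pick-freeze Monte Carlo estimator on a function with known indices; this illustrates the quantity being estimated, not the paper's SVR coefficient post-processing:

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """Pick-freeze (Saltelli-style) Monte Carlo estimate of first-order Sobol
    indices for f with independent U(0,1) inputs."""
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # freeze every input except input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive test function with known indices: S1 = 4/5, S2 = 1/5.
f = lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1]
S = sobol_first_order(f, d=2, n=200_000, rng=np.random.default_rng(1))
```

    The paper's contribution is obtaining these same indices analytically from the SVR meta-model coefficients instead of by brute-force sampling.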

  6. An algorithm of improving speech emotional perception for hearing aid

    NASA Astrophysics Data System (ADS)

    Xi, Ji; Liang, Ruiyu; Fei, Xianju

    2017-07-01

    In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm utilizes multiple-kernel technology to overcome a drawback of the SVM: slow training speed. Firstly, to improve the adaptive performance of the Gaussian Radial Basis Function (RBF), the parameter determining the nonlinear mapping was optimized on the basis of kernel-target alignment. The resulting kernel function was then used as the basis kernel of Multiple Kernel Learning (MKL) with a slack variable that mitigates the over-fitting problem. However, the slack variable also introduces error into the result; therefore, a soft-margin MKL is proposed to balance the margin against the error. Moreover, an iterative algorithm is used to solve for the combination coefficients and hyperplane equations. Experimental results show that the proposed algorithm achieves an accuracy of 90% for five emotions: happiness, sadness, anger, fear, and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.

  7. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Although the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pre-given parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross-validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
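
    The truncated distance kernel is commonly written as K(x, y) = max(ρ − ‖x − y‖₁, 0), which vanishes outside an ℓ₁ ball of radius ρ; a minimal sketch (ρ is a pre-given parameter, as in the brief, and the sample points are illustrative):

```python
import numpy as np

def tl1_kernel(X, Y, rho=2.0):
    """Truncated distance (TL1) kernel: K(x, y) = max(rho - ||x - y||_1, 0)."""
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)  # pairwise L1 distances
    return np.maximum(rho - d1, 0.0)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2))
K = tl1_kernel(X, X)   # symmetric, rho on the diagonal, zero beyond radius rho
```

    Because distant pairs contribute exactly zero, the Gram matrix is sparse for small ρ, which underlies the kernel's piecewise-local behavior; note that, unlike the RBF kernel, it need not be positive semidefinite.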

  8. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.

  9. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in a Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in long computation times and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. An airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing image with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. Compared with some traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), as well as an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in a Reproducing Kernel Hilbert Space noticeably improves classification accuracy.

  10. Quantized kernel least mean square algorithm.

    PubMed

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Unlike sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
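
    The core QKLMS loop can be sketched in a few lines: a new input becomes a center only if it lies farther than a quantization threshold ε from every existing center; otherwise its error updates the coefficient of the closest center. The step size, kernel width, and toy sine target below are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

def qklms(xs, ys, gamma=5.0, eta=0.2, eps=0.3):
    """Quantized kernel LMS with a Gaussian kernel: the dictionary grows only
    when the input is farther than eps from every center; otherwise the
    'redundant' input updates the coefficient of the closest center."""
    centers, coeffs, preds = [], [], []
    for x, y in zip(xs, ys):
        if centers:
            C = np.array(centers)
            d2 = np.sum((C - x) ** 2, axis=1)
            y_hat = float(np.dot(coeffs, np.exp(-gamma * d2)))
            j = int(np.argmin(d2))
            dist = float(np.sqrt(d2[j]))
        else:
            y_hat, dist, j = 0.0, np.inf, -1
        preds.append(y_hat)
        e = y - y_hat
        if dist > eps:
            centers.append(np.asarray(x, dtype=float))  # grow the RBF structure
            coeffs.append(eta * e)
        else:
            coeffs[j] += eta * e                         # quantized update
    return np.array(centers), np.array(coeffs), np.array(preds)

rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, (500, 1))
ys = np.sin(3.0 * xs[:, 0])                              # toy static-function target
centers, coeffs, preds = qklms(xs, ys)
```

    Because the centers form an ε-separated set, the dictionary size stays bounded no matter how many samples stream in, which is the growth-curbing effect the abstract describes.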

  11. Integrating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel, derived from a recent technique for numerically evaluating the exact thin wire kernel. It should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel differ from those of the wire kernel itself, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be carried out over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  12. A GSA-SVM Hybrid System for Classification of Binary Problems

    NASA Astrophysics Data System (ADS)

    Sarafrazi, Soroor; Nezamabadi-pour, Hossein; Barahman, Mojgan

    2011-06-01

    This paper hybridizes the gravitational search algorithm (GSA) with the support vector machine (SVM) to create a novel GSA-SVM hybrid system that improves classification accuracy on binary problems. GSA is an optimization heuristic used here to tune the SVM kernel parameter (in this paper, the radial basis function (RBF) is chosen as the kernel function). The experimental results show that this new approach achieves high classification accuracy and is comparable to or better than particle swarm optimization (PSO)-SVM and genetic algorithm (GA)-SVM, two other hybrid classification systems.

  13. GIS-based support vector machine modeling of earthquake-triggered landslide susceptibility in the Jianjiang River watershed, China

    NASA Astrophysics Data System (ADS)

    Xu, Chong; Dai, Fuchu; Xu, Xiwei; Lee, Yuan Hsi

    2012-04-01

    Support vector machine (SVM) modeling is based on statistical learning theory. It involves a training phase with associated input and target output values. In recent years, the method has become increasingly popular. The main purpose of this study is to evaluate the mapping power of SVM modeling in earthquake-triggered landslide-susceptibility mapping for a section of the Jianjiang River watershed using Geographic Information System (GIS) software. The river was affected by the Wenchuan earthquake of May 12, 2008. Visual interpretation of colored aerial photographs of 1-m resolution and extensive field surveys provided a detailed landslide inventory map containing 3147 landslides related to the 2008 Wenchuan earthquake. Elevation, slope angle, slope aspect, distance from seismogenic faults, distance from drainages, and lithology were used as the controlling parameters. For modeling, three groups of positive and negative training samples were used in concert with four different kernel functions. Positive training samples include the centroids of 500 large landslides, those of all 3147 landslides, and 5000 randomly selected points in landslide polygons. Negative training samples include 500, 3147, and 5000 randomly selected points on slopes that remained stable during the Wenchuan earthquake. The four kernel functions are linear, polynomial, radial basis, and sigmoid. In total, 12 cases of landslide susceptibility were mapped. Comparative analyses of landslide-susceptibility probability and area relation curves show that both the polynomial and radial basis functions suitably classified the input data as either landslide-positive or landslide-negative, though the radial basis function was more successful. The 12 generated landslide-susceptibility maps were compared with known landslide centroid locations and landslide polygons to verify the success rate and predictive accuracy of each model. The 12 results were further validated using area-under-curve analysis.
Group 3, with 5000 randomly selected points in the landslide polygons and 5000 randomly selected points along stable slopes, gave the best results, with a success rate of 79.20% and predictive accuracy of 79.13% under the radial basis function. Of all the results, the sigmoid kernel function was the least skillful when used in concert with the centroid data of all 3147 landslides as positive training samples and 3147 randomly selected points in regions of stable slope as negative training samples (success rate = 54.95%; predictive accuracy = 61.85%). This paper also provides suggestions and reference data for selecting appropriate training samples and kernel function types for earthquake-triggered landslide-susceptibility mapping using SVM modeling. Predictive landslide-susceptibility maps could be useful in hazard mitigation by helping planners understand the probability of landslides in different regions.

  14. Earth Structure, Ice Mass Changes, and the Local Dynamic Geoid

    NASA Astrophysics Data System (ADS)

    Harig, C.; Simons, F. J.

    2014-12-01

    Spherical Slepian localization functions are a useful method for studying regional mass changes observed by satellite gravimetry. By projecting data onto a sparse basis set, the local field can be estimated more easily than with the full spherical harmonic basis. We have used this method previously to estimate the ice mass change in Greenland from GRACE data, and it can also be applied to other planetary problems such as global magnetic fields. Earth's static geoid, in contrast to the time-variable field, is in large part related to the internal density and rheological structure of the Earth. Past studies have used dynamic geoid kernels to relate this density structure and the internal deformation it induces to the surface geopotential at large scales. These now classical studies of the eighties and nineties were able to estimate the mantle's radial rheological profile, placing constraints on the ratio between upper and lower mantle viscosity. By combining these two methods, spherical Slepian localization and dynamic geoid kernels, we have created local dynamic geoid kernels which are sensitive only to density variations within an area of interest. With these kernels we can estimate the approximate local radial rheological structure that best explains the locally observed geoid on a regional basis. First-order differences of the regional mantle viscosity structure are accessible to this technique. In this contribution we present our latest, as yet unpublished results on the geographical and temporal pattern of ice mass changes in Antarctica over the past decade, and we introduce a new approach to extract regional information about the internal structure of the Earth from the static global gravity field. Both sets of results are linked in terms of the relevant physics, but also in being developed from the marriage of Slepian functions and geoid kernels. 
We make predictions on the utility of our approach to derive fully three-dimensional rheological Earth models, to be used for corrections for glacio-isostatic adjustment, as necessary for the interpretation of time-variable gravity observations in terms of ice sheet mass-balance studies.

  15. Stable computations with flat radial basis functions using vector-valued rational approximations

    NASA Astrophysics Data System (ADS)

    Wright, Grady B.; Fornberg, Bengt

    2017-02-01

    One commonly finds in applications of smooth radial basis functions (RBFs) that scaling the kernels so they are 'flat' leads to smaller discretization errors. However, the direct numerical approach for computing with flat RBFs (RBF-Direct) is severely ill-conditioned. We present an algorithm for bypassing this ill-conditioning that is based on a new method for rational approximation (RA) of vector-valued analytic functions with the property that all components of the vector share the same singularities. This new algorithm (RBF-RA) is more accurate, robust, and easier to implement than the Contour-Padé method, which is similarly based on vector-valued rational approximation. In contrast to the stable RBF-QR and RBF-GA algorithms, which are based on finding a better-conditioned basis in the same RBF space, the new algorithm can be used with any type of smooth radial kernel, and it is also applicable to a wider range of tasks (including calculating Hermite-type implicit RBF-FD stencils). We present a series of numerical experiments demonstrating the effectiveness of this new method for computing RBF interpolants in the flat regime. We also demonstrate the flexibility of the method by using it to compute implicit RBF-FD formulas in the flat regime and then using these for solving Poisson's equation in a 3-D spherical shell.
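
    The ill-conditioning that motivates RBF-RA is easy to reproduce: the condition number of the Gaussian RBF interpolation matrix grows explosively as the shape parameter ε shrinks toward the flat limit. The node count and ε values in this sketch are illustrative, not taken from the paper:

```python
import numpy as np

# Condition number of the Gaussian RBF interpolation matrix
# A[i, j] = exp(-(eps * |x_i - x_j|)^2) blows up as eps -> 0 (the flat limit),
# which is exactly why RBF-Direct fails and stable algorithms are needed.
x = np.linspace(0.0, 1.0, 8)
conds = []
for eps in [5.0, 1.0, 0.2]:
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    conds.append(np.linalg.cond(A))
```

    Each reduction in ε makes the matrix rows more alike (nearly constant functions), driving the smallest singular value toward zero even though the underlying interpolant stays well behaved.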

  16. Bands selection and classification of hyperspectral images based on hybrid kernels SVM by evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Yan-Yan; Li, Dong-Sheng

    2016-01-01

    Hyperspectral images (HSI) consist of many closely spaced bands carrying most of the object information. Due to their high dimensionality and high volume, however, it is hard to achieve satisfactory classification performance. To reduce the dimensionality of HSI data in preparation for high classification accuracy, we propose combining a band selection method based on artificial immune systems (AIS) with a hybrid-kernel support vector machine (SVM-HK) algorithm. After comparing different kernels for hyperspectral analysis, the approach mixes the radial basis function kernel (RBF-K) with the sigmoid kernel (Sig-K) and applies the optimized hybrid kernels in SVM classifiers. The SVM-HK algorithm is then used to guide the band selection of an improved version of the AIS, which is composed of clonal selection and elite antibody mutation and includes an evaluation process with an optional index factor (OIF). Classification experiments on an HRS dataset over the San Diego Naval Base, acquired by AVIRIS, show that the method efficiently removes band redundancy while outperforming the traditional SVM classifier.

  17. On supervised graph Laplacian embedding CA model & kernel construction and its application

    NASA Astrophysics Data System (ADS)

    Zeng, Junwei; Qian, Yongsheng; Wang, Min; Yang, Yongzhong

    2017-01-01

    There are many methods for constructing a kernel from given data-attribute information; the Gaussian radial basis function (RBF) kernel is one of the most popular. The key observation is that real-world data carry not only attribute information but also label information indicating the data class. To make use of both, we propose a supervised kernel construction method in which supervised information from the training data is integrated into the standard kernel construction process to improve the discriminative property of the resulting kernel. As a further key application, a supervised Laplacian-embedding cellular automaton model is developed for two-lane heterogeneous traffic flow with a safe distance and large-scale trucks. Based on the properties of traffic flow in China, we re-calibrate the cell length, velocity, random slowing mechanism, and lane-change conditions, and use simulation tests to study the relationships among speed, density, and flux. The numerical results show that large-scale trucks have a strong effect on the traffic flow, depending on the proportion of large-scale trucks, the random slowing rate, and the frequency of lane changes.

  18. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, whose behavior is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
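
    An LSSVM with a hybrid local/global kernel reduces to a single linear solve, which the following sketch illustrates on synthetic data standing in for the temperature calibration curve. The kernel weights, regularization, and data are assumptions, not the paper's tuned values:

```python
import numpy as np

def hybrid_kernel(X, Y, w=0.5, gamma=2.0, degree=2, c=1.0):
    # Local RBF part plus global polynomial part: the hybrid-kernel idea.
    sq = np.sum(X**2, 1)[:, None] - 2.0 * X @ Y.T + np.sum(Y**2, 1)[None, :]
    return w * np.exp(-gamma * np.maximum(sq, 0.0)) + (1.0 - w) * (X @ Y.T + c) ** degree

def lssvm_fit(X, y, reg=100.0):
    """LSSVM regression: one symmetric linear solve for bias b and dual weights alpha."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = hybrid_kernel(X, X) + np.eye(n) / reg
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # b, alpha

def lssvm_predict(Xtr, b, alpha, Xte):
    return hybrid_kernel(Xte, Xtr) @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (60, 1))
y = X[:, 0] ** 2 + 0.01 * rng.standard_normal(60)   # stand-in calibration curve
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
```

    Replacing the SVM's inequality constraints with equality constraints is what turns training into this one linear system; the optimizer in the paper then only has to search over the kernel hyper-parameters.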

  19. A multi-label learning based kernel automatic recommendation method for support vector machine.

    PubMed

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the numbers of support vectors and the CPU times of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. To automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. A kernel recommendation model is then constructed on the generated meta-knowledge database with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the new data set's characteristics. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave, and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel, SVM with the kernel function recommended by our proposed method achieves the highest classification performance.
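
    Several of the candidate kernels in the pool have standard closed forms; a small dictionary sketch (textbook formulas with illustrative default parameters, not the paper's settings, and covering only a subset of the eleven kernels):

```python
import numpy as np

def _sqdist(X, Y):
    return np.sum(X**2, 1)[:, None] - 2.0 * X @ Y.T + np.sum(Y**2, 1)[None, :]

# A few of the candidate kernels from the pool, in their textbook forms.
KERNELS = {
    "linear":  lambda X, Y: X @ Y.T,
    "poly":    lambda X, Y, d=2, c=1.0: (X @ Y.T + c) ** d,
    "rbf":     lambda X, Y, g=1.0: np.exp(-g * np.maximum(_sqdist(X, Y), 0.0)),
    "laplace": lambda X, Y, g=1.0: np.exp(-g * np.sqrt(np.maximum(_sqdist(X, Y), 0.0))),
    "sigmoid": lambda X, Y, a=0.1, c=0.0: np.tanh(a * (X @ Y.T) + c),
}

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
grams = {name: k(X, X) for name, k in KERNELS.items()}   # one Gram matrix per kernel
```

    A recommender like the one proposed here would map data-set characteristics to a subset of such a kernel dictionary rather than cross-validating every entry.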

  20. A Multi-Label Learning Based Kernel Automatic Recommendation Method for Support Vector Machine

    PubMed Central

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the numbers of support vectors and the CPU times of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. To automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. A kernel recommendation model is then constructed on the generated meta-knowledge database with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the new data set's characteristics. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave, and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel, SVM with the kernel function recommended by our proposed method achieves the highest classification performance. PMID:25893896

  1. Mesh-free based variational level set evolution for breast region segmentation and abnormality detection using mammograms.

    PubMed

    Kashyap, Kanchan L; Bajpai, Manish K; Khanna, Pritee; Giakos, George

    2018-01-01

    Automatic segmentation of abnormal regions is a crucial task in computer-aided detection systems using mammograms. In this work, an automatic abnormality detection algorithm for mammographic images is proposed. In the preprocessing step, a partial differential equation-based variational level set method is used for breast region extraction. The level set is evolved with a mesh-free radial basis function (RBF) method, which removes the limitations of the mesh-based approach; for comparison, the variational level set function is also evolved with a mesh-based finite difference method. Unsharp masking and median filtering are used for mammogram enhancement. Suspicious abnormal regions are segmented by applying fuzzy c-means clustering. Texture features are extracted from the segmented suspicious regions by computing the local binary pattern and the dominant rotated local binary pattern (DRLBP). Finally, suspicious regions are classified as normal or abnormal by means of a support vector machine with linear, multilayer perceptron, radial basis, and polynomial kernel functions. The algorithm is validated on 322 sample mammograms from the Mammographic Image Analysis Society (MIAS) dataset and 500 mammograms from the Digital Database for Screening Mammography (DDSM). Proficiency is quantified using sensitivity, specificity, and accuracy. The highest sensitivity, specificity, and accuracy of 93.96%, 95.01%, and 94.48%, respectively, are obtained on the MIAS dataset using the DRLBP feature with the RBF kernel function, while the highest values of 92.31% sensitivity, 98.45% specificity, and 96.21% accuracy are achieved on the DDSM dataset, likewise using the DRLBP feature with the RBF kernel function. Copyright © 2017 John Wiley & Sons, Ltd.

  2. Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.

    PubMed

    Lima, Clodoaldo A M; Coelho, André L V

    2011-10-01

    We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values of the kernel parameter (radius) of two well-known kernel functions (namely, the Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. We then provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. The choices of kernel function, parameter value, and feature extractor are critical decisions, although the choice of wavelet family seems less relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.

  3. [Study on application of SVM in prediction of coronary heart disease].

    PubMed

    Zhu, Yue; Wu, Jianghua; Fang, Ying

    2013-12-01

    Based on physical-examination data of blood pressure, plasma lipids, glucose (Glu) and uric acid (UA), a Support Vector Machine (SVM) was applied to distinguish coronary heart disease (CHD) patients from non-CHD individuals in a south China population, to guide further prevention and treatment of the disease. Firstly, SVM classifiers were built using a radial basis kernel function, a linear kernel function and a polynomial kernel function, respectively. Secondly, the SVM penalty factor C and kernel parameter sigma were optimized by particle swarm optimization (PSO) and then employed to diagnose and predict CHD. Compared with an artificial neural network with back-propagation (BP), linear discriminant analysis, logistic regression and a non-optimized SVM, the overall results demonstrated that the classification performance of the optimized RBF-SVM model was superior to the other classifier algorithms, with higher accuracy, sensitivity and specificity of 94.51%, 92.31% and 96.67%, respectively. We conclude that SVM can be used as a valid method for assisting in the diagnosis of CHD.
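The abstract pairs an RBF-SVM with particle swarm optimization of the penalty factor C and kernel width sigma. A minimal PSO sketch in that spirit is below; since the clinical data and the cross-validated SVM error are unavailable, a smooth stand-in objective over (C, sigma) in log space is assumed, and the inertia/acceleration constants are common textbook defaults, not the paper's settings.

```python
import math
import random

def objective(c, sigma):
    # Hypothetical smooth surrogate for the cross-validated SVM error,
    # with its optimum at C = 10, sigma = 0.1.
    return (math.log10(c) - 1.0) ** 2 + (math.log10(sigma) + 1.0) ** 2

def pso(n_particles=20, iters=60, seed=0):
    rng = random.Random(seed)
    # Particle positions are (log10 C, log10 sigma).
    pos = [[rng.uniform(-2, 4), rng.uniform(-4, 2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(10 ** p[0], 10 ** p[1]) for p in pos]
    best = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[best][:], pbest_f[best]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull (own best) + social pull (swarm best).
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(10 ** pos[i][0], 10 ** pos[i][1])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return 10 ** gbest[0], 10 ** gbest[1], gbest_f

c_opt, sigma_opt, err = pso()
```

In a real run, `objective` would train the RBF-SVM at (C, sigma) and return the cross-validation error; searching in log space is the usual choice because both parameters span orders of magnitude.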

  4. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. In detail, common spatial patterns are used to obtain features, from which the within-class scatter is calculated. The scatter is then incorporated into a radial basis function kernel to construct a new kernel function, which is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme over the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
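The KF-SVM construction folds a within-class scatter term into the RBF kernel. A minimal sketch of one plausible reading, for one-dimensional features, is below; the paper's exact kernel construction may differ, and the sample features and labels are invented for illustration.

```python
import math

def within_class_scatter(features, labels):
    # Average squared deviation of each sample from its own class mean.
    scatter = 0.0
    for c in set(labels):
        xs = [f for f, l in zip(features, labels) if l == c]
        mean = sum(xs) / len(xs)
        scatter += sum((x - mean) ** 2 for x in xs)
    return scatter / len(features)

def scatter_rbf(x, y, scatter, gamma=1.0):
    # RBF kernel whose width is scaled by the within-class scatter:
    # larger scatter -> wider, more forgiving kernel.
    return math.exp(-gamma * (x - y) ** 2 / (2 * scatter))

feats = [0.9, 1.1, 1.0, -1.0, -0.8, -1.2]   # hypothetical CSP features
labs = [1, 1, 1, 0, 0, 0]
s = within_class_scatter(feats, labs)
```

The resulting `scatter_rbf` is still a valid RBF kernel (a positive rescaling of the width), which is what lets it drop into a standard SVM unchanged.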

  5. A low cost implementation of multi-parameter patient monitor using intersection kernel support vector machine classifier

    NASA Astrophysics Data System (ADS)

    Mohan, Dhanya; Kumar, C. Santhosh

    2016-03-01

    Predicting the physiological condition (normal/abnormal) of a patient is highly desirable for enhancing the quality of health care. Multi-parameter patient monitors (MPMs) using heart rate, arterial blood pressure, respiration rate and oxygen saturation (SpO2) as input parameters were developed to monitor the condition of patients with minimum human resource utilization. The support vector machine (SVM), an advanced machine learning approach popularly used for classification and regression, is used for the realization of MPMs. To make MPMs cost effective, we experiment with a hardware implementation of the MPM using a support vector machine classifier. The system is trained in the MATLAB environment and the detection of the alarm/no-alarm condition is implemented in hardware. We tried different kernels for SVM classification and note that the best performance was obtained using the intersection kernel SVM (IKSVM). The intersection kernel support vector machine MPM outperformed the best known MPM using a radial basis function kernel by an absolute improvement of 2.74% in accuracy, 1.86% in sensitivity and 3.01% in specificity. The hardware model was developed from the improved-performance system using the Verilog Hardware Description Language and was implemented on an Altera Cyclone-II development board.
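The intersection (histogram-intersection) kernel named above has a particularly simple form for non-negative feature vectors, K(x, y) = Σᵢ min(xᵢ, yᵢ), which is part of what makes it attractive for low-cost hardware: no exponentials or multiplications are needed. A sketch, with hypothetical normalized histograms standing in for the vital-sign features:

```python
def intersection_kernel(x, y):
    # Histogram intersection: sum of elementwise minima.
    # Assumes non-negative inputs (e.g. normalized histograms).
    return sum(min(a, b) for a, b in zip(x, y))

h1 = [0.2, 0.5, 0.3]   # hypothetical normalized histogram of one parameter
h2 = [0.1, 0.6, 0.3]
k = intersection_kernel(h1, h2)   # ~0.9: high overlap between the histograms
```

For histograms that sum to 1, K(x, x) = 1, so the kernel behaves like a bounded similarity score.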

  6. Discrimination of raw and processed Dipsacus asperoides by near infrared spectroscopy combined with least squares-support vector machine and random forests

    NASA Astrophysics Data System (ADS)

    Xin, Ni; Gu, Xiao-Feng; Wu, Hao; Hu, Yu-Zhu; Yang, Zhong-Lin

    2012-04-01

    Most herbal medicines can be processed to fulfill the different requirements of therapy. The purpose of this study was to discriminate between raw and processed Dipsacus asperoides, a common traditional Chinese medicine, based on their near infrared (NIR) spectra. Least squares-support vector machine (LS-SVM) and random forests (RF) were employed for full-spectrum classification. Three types of kernels, namely the linear kernel, polynomial kernel and radial basis function (RBF) kernel, were evaluated for optimization of the LS-SVM model. For comparison, a linear discriminant analysis (LDA) model was built for classification, with the successive projections algorithm (SPA) executed beforehand to choose an appropriate subset of wavelengths. The three methods were applied to a dataset containing 40 raw herbs and 40 corresponding processed herbs, and 50 runs of 10-fold cross-validation were used to evaluate model efficiency. The LS-SVM with the RBF kernel (RBF LS-SVM) performed better than with the other two kernels. RF, RBF LS-SVM and SPA-LDA all successfully classified every test sample. The mean error rates over the 50 runs of 10-fold cross-validation were 1.35% for RBF LS-SVM, 2.87% for RF, and 2.50% for SPA-LDA. The best classification results were obtained with the RBF LS-SVM, while RF was fastest in training and prediction.
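The 10-fold cross-validation protocol above can be sketched as follows; a nearest-centroid classifier on one-dimensional toy data stands in for the LS-SVM and RF models, which are out of scope here, and the data values are invented.

```python
def k_fold_indices(n, k=10):
    # Deal sample indices into k folds round-robin.
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    return folds

def cv_error(X, y, k=10):
    errors, total = 0, 0
    for fold in k_fold_indices(len(X), k):
        test = set(fold)
        train = [i for i in range(len(X)) if i not in test]
        # Fit: one centroid per class from the training folds.
        cents = {}
        for c in set(y[i] for i in train):
            xs = [X[i] for i in train if y[i] == c]
            cents[c] = sum(xs) / len(xs)
        # Score: classify each held-out sample by the nearest centroid.
        for i in fold:
            pred = min(cents, key=lambda c: abs(X[i] - cents[c]))
            errors += pred != y[i]
            total += 1
    return errors / total

X = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95, 0.05, 0.85, 0.12, 0.88]
y = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]
err = cv_error(X, y)   # 0.0 on this cleanly separable toy set
```

Repeating the whole loop 50 times with reshuffled folds, as in the paper, turns the single error estimate into a distribution whose mean is reported.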

  7. Flood susceptibility mapping using a novel ensemble weights-of-evidence and support vector machine models in GIS

    NASA Astrophysics Data System (ADS)

    Tehrany, Mahyat Shafapour; Pradhan, Biswajeet; Jebur, Mustafa Neamah

    2014-05-01

    Floods are among the most devastating natural disasters and occur frequently in Terengganu, Malaysia. Recently, ensemble-based techniques have become extremely popular in flood modeling. In this paper, a weights-of-evidence (WoE) model was utilized first to assess the impact of the classes of each conditioning factor on flooding through bivariate statistical analysis (BSA). These factors were then reclassified using the acquired weights and entered into a support vector machine (SVM) model to evaluate the correlation between flood occurrence and each conditioning factor. Through this integration, the weak point of WoE is addressed and the performance of the SVM is enhanced. The spatial database included flood inventory, slope, stream power index (SPI), topographic wetness index (TWI), altitude, curvature, distance from the river, geology, rainfall, land use/cover (LULC), and soil type. Four SVM kernel types (linear (LN), polynomial (PL), radial basis function (RBF), and sigmoid (SIG)) were used to investigate the performance of each kernel type. The efficiency of the new ensemble WoE-SVM method was tested using the area under the curve (AUC), which measures prediction and success rates. The validation results proved the strength and efficiency of the ensemble method over the individual methods, and the best results were obtained with the RBF kernel. The success and prediction rates for the ensemble WoE and RBF-SVM method were 96.48% and 95.67%, respectively. The proposed ensemble flood susceptibility mapping method could assist researchers and local governments in flood mitigation strategies.
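The weights-of-evidence step assigns each class of a conditioning factor a positive weight W+ and a negative weight W-, computed from the class's frequency inside and outside flood locations. A sketch with assumed toy cell counts (the real study derives these from the spatial database):

```python
import math

def woe(n_class_flood, n_flood, n_class_noflood, n_noflood):
    p1 = n_class_flood / n_flood        # P(class | flood)
    p0 = n_class_noflood / n_noflood    # P(class | no flood)
    w_plus = math.log(p1 / p0)          # evidence for flooding
    w_minus = math.log((1 - p1) / (1 - p0))  # evidence against
    contrast = w_plus - w_minus         # overall strength of the class
    return w_plus, w_minus, contrast

# Toy counts: of 100 flood cells, 80 fall in this slope class;
# of 900 non-flood cells, 200 do.
wp, wm, c = woe(80, 100, 200, 900)
```

A positive contrast means the class is over-represented in flood cells; the reclassified factor maps fed to the SVM carry these weights in place of the raw class labels.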

  8. Evaluating the Gradient of the Thin Wire Kernel

    NASA Technical Reports Server (NTRS)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
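For reference, a standard form of the exact cylindrical thin-wire kernel that such formulations work with is shown below; this expression is supplied for context from the standard literature and is not quoted from the abstract.

```latex
K(z) \;=\; \frac{1}{2\pi}\int_{0}^{2\pi} \frac{e^{-jkR}}{4\pi R}\, d\phi',
\qquad
R \;=\; \sqrt{\,z^{2} + 4a^{2}\sin^{2}(\phi'/2)\,},
```

where a is the wire radius and k the wavenumber. The gradient referred to above then has a component along the wire axis (the derivative with respect to z) and a radial component, matching the parallel/radial decomposition the abstract describes.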

  9. A Prototype SSVEP Based Real Time BCI Gaming System

    PubMed Central

    Martišius, Ignas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel. PMID:27051414

  10. A Prototype SSVEP Based Real Time BCI Gaming System.

    PubMed

    Martišius, Ignas; Damaševičius, Robertas

    2016-01-01

    Although brain-computer interface technology is mainly designed with disabled people in mind, it can also be beneficial to healthy subjects, for example, in gaming or virtual reality systems. In this paper we discuss the typical architecture, paradigms, requirements, and limitations of electroencephalogram-based gaming systems. We have developed a prototype three-class brain-computer interface system, based on the steady state visually evoked potentials paradigm and the Emotiv EPOC headset. An online target shooting game, implemented in the OpenViBE environment, has been used for user feedback. The system utilizes wave atom transform for feature extraction, achieving an average accuracy of 78.2% using linear discriminant analysis classifier, 79.3% using support vector machine classifier with a linear kernel, and 80.5% using a support vector machine classifier with a radial basis function kernel.

  11. Detection of Splice Sites Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Varadwaj, Pritish; Purohit, Neetesh; Arora, Bhumika

    Automatic identification and annotation of the exon and intron regions of genes from DNA sequences has been an important research area in computational biology. Several approaches, viz. Hidden Markov Models (HMM), Artificial Intelligence (AI) based machine learning, and Digital Signal Processing (DSP) techniques, have been used extensively and independently by various researchers to address this challenging task. In this work, we propose a Support Vector Machine based kernel learning approach for the detection of splice sites (the exon-intron boundaries) in a gene. Electron-Ion Interaction Potential (EIIP) values of nucleotides are used to map character sequences to corresponding numeric sequences, and a Radial Basis Function (RBF) SVM kernel is trained on the EIIP numeric sequences. This was then tested on a gene dataset for splice-site detection by shifting a window of 12 residues. The window size and the important parameters of the SVM kernel were optimized for better accuracy. Receiver Operating Characteristic (ROC) curves were used to display the sensitivity of the classifier, and the results showed 94.82% accuracy for splice-site detection on the test dataset.
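The EIIP mapping and 12-residue window shifting described above can be sketched directly. The EIIP values below are the standard published ones for the four nucleotides; the trained RBF-SVM decision step, which would score each window, is omitted.

```python
# Standard EIIP values for the DNA nucleotides.
EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

def to_eiip(seq):
    # Map a character sequence to its numeric EIIP sequence.
    return [EIIP[base] for base in seq]

def windows(seq, size=12):
    # All overlapping windows of `size` residues, shifted one at a time.
    return [seq[i:i + size] for i in range(len(seq) - size + 1)]

dna = "ATGGTAAGTCCA"
numeric = to_eiip(dna)                 # 12 EIIP values for one window
wins = windows("ATGGTAAGTCCAGT", 12)   # 3 overlapping 12-mers
```

Each numeric window would be fed to the trained RBF-SVM, and a positive decision marks a candidate splice site at that position.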

  12. Comparing fixed and variable-width Gaussian networks.

    PubMed

    Kůrková, Věra; Kainen, Paul C

    2014-09-01

    The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
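The two model families compared above differ only in whether the Gaussian widths vary per unit. A one-dimensional sketch with illustrative parameters makes the containment explicit: a fixed-width kernel network is the special case of an RBF network in which all widths are equal.

```python
import math

def rbf_net(x, centers, widths, weights):
    # Gaussian RBF network: each unit has its own center AND width.
    return sum(w * math.exp(-((x - c) / s) ** 2)
               for c, s, w in zip(centers, widths, weights))

def kernel_net(x, centers, weights, width=1.0):
    # Gaussian kernel network: one shared, fixed width for every unit.
    return rbf_net(x, centers, [width] * len(centers), weights)

# With all RBF widths equal, the two models coincide exactly:
v1 = rbf_net(0.5, [0.0, 1.0], [1.0, 1.0], [1.0, -1.0])
v2 = kernel_net(0.5, [0.0, 1.0], [1.0, -1.0])
```

The paper's uniqueness result says the converse direction is rigid: two RBF networks computing the same function must share centers and widths unit for unit.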

  13. Magnetic field of the Earth

    NASA Astrophysics Data System (ADS)

    Popov, Aleksey

    2013-04-01

    The magnetic field of the Earth is of global importance for life on Earth. Mainstream geophysics explains that the Earth's magnetic field arises from the transformation of the kinetic energy of motions of molten iron in the liquid core into magnetic energy, and that the warming of the Earth's core is due to the radioactive decay of elements, which releases thermal energy. It does not, however, explain the westward drift of the magnetic dipole of about 0.2° per year, nor the drift of lithospheric plates and continents. The author offers an alternative to the existing "geodynamo" theory: a theory of "the magnetic field of the Earth" built on physical laws. The Earth's magnetic field is generated by the electric charge located in the liquid core as it moves with the Earth's rotation. The field is calculated with the Biot-Savart law for a ring electric current, dB = (μ0/4π) I (dl × r)/r³; the magnetic induction in the Earth's core comes to B = 2.58 G. By Faraday's law of electromagnetic induction, the rotation of the iron core of the Earth in this magnetic field gives rise to an electromotive force (EMF) that moves electrons from the centre of the core towards the mantle, creating radial electric currents. The magnetic field is amplified by the iron of the mantle and core. As a result of the radial electric field, electrons flow from the centre of the core into a charged layer, so the central part of the core carries a positive electric charge, which creates an inverse magnetic field B_inv and an inverse electromotive force EMF_inv. When EMF_inv = EMF and B_inv = B, the Earth's magnetic field undergoes an inversion. It is a fact that the Earth's magnetic dipole drifts westward by approximately 0.2° of longitude per year; the interaction of the radial electric currents with the Earth's main magnetic field turns the core. This agrees with the laws of electromagnetism: by the left-hand rule, if the magnetic field in the core is directed into the page and the electric currents are directed towards the Earth's rotation axis, the force acts clockwise (to the west). The force causing the drift of the core follows from Ampère's law, F = IBl sin α; this powerful force, about 3.5 × 10^12 N, drives the drift of the central part of the core by 0.2° of longitude per year to the west and is also the engine of the mechanism that moves plates together with continents. The core circles the terrestrial axis once, in the westward direction, in about 2000 years; its linear speed relative to the mantle at the core-mantle boundary, for a core radius of 3.471 × 10^6 m, is V ≈ 3.818 × 10^-4 m/s = 33 m/day ≈ 12 km/year. Given the great viscosity of the mantle, the powerful energy of the rotating core drags the mantle and the lithospheric plates and causes their collisions, producing earthquakes and volcanoes. The continents of North and South America separate from Europe and Africa by several centimetres every year; at this rate the Atlantic Ocean formed from the motion of these plates over 200 million years, which, compared with the age of the Earth of several billion years, is not so long a time. The westward drift of the core is the principal cause of the slowing of the Earth's rotation. By the Joule-Lenz law, the flow of the radial electric currents releases heat, Q = I²Rt = IUt, amounting to 6.92 × 10^17 calories per year; this warms the core and the Earth as a whole. In the valley of the Mid-Atlantic Ridge, with its numerous volcanoes, lava flows constantly warm the waters of the Atlantic Ocean; this underlies the warm Gulf Stream current. The thawing of permafrost, of the ice of the Arctic Ocean, and of the glaciers of Greenland and Antarctica confirms that the Earth's heat governs the melting of glaciers and permafrost: this is global warming. In the author's version, the periods of inversion of the Earth's magnetic field determine the cycles of the ice ages: during an inversion, when B = 0, the radial electric currents are small or absent and the release of thermal energy is minimal or zero; this is the beginning of the cooling of the Earth and the onset of an ice age, with the disappearance of the warm Gulf Stream current that warms northern Europe and Canada. The drift of the Earth's magnetic dipole in the direction opposite to the Earth's rotation confirms the drift of the core opposite to the Earth's rotation, and thereby supports the theory of "the magnetic field of the Earth". The author continues to develop this theory and invites geophysicists to participate in it.

  14. Deep neural mapping support vector machines.

    PubMed

    Li, Yujian; Zhang, Ting

    2017-09-01

    The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect can be reduced by NEUROSVM, an architecture using a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear-kernel NEUROSVM can be theoretically simplified to an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained, together with a virtual ordinary output layer, by backpropagation; the output of its last hidden layer is then taken as input to the SVM classifier for further separate training. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called the deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method in which the kernel mapping is an explicit function expressed as a sub-network, unlike the implicit function traditionally induced by a kernel function. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM to jointly train an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Treating the sub-network and the SVM classifier as a whole, the joint training of DNMSVM uses gradient descent to optimize the objective function, with the sub-network pre-trained layer-wise via contrastive divergence learning of restricted Boltzmann machines. Compared with the separate training of NEUROSVM, this joint training gives DNMSVM advantages over NEUROSVM. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with a radial basis function kernel), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.
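The core DNMSVM idea, an explicit trainable mapping in place of an implicit kernel function, can be sketched in miniature: define K(x, y) as the inner product of mapped features. The one-layer tanh map and its weights below are purely illustrative; the paper uses a deep, pre-trained sub-network.

```python
import math

# Hypothetical "learned" parameters of a one-layer feature map f.
W = [[0.5, -0.3], [0.2, 0.8]]
b = [0.1, -0.1]

def feature_map(x):
    # Explicit mapping into feature space: f(x) = tanh(Wx + b).
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def mapped_kernel(x, y):
    # K(x, y) = <f(x), f(y)> -- a valid kernel by construction,
    # because it is an inner product of explicit feature vectors.
    fx, fy = feature_map(x), feature_map(y)
    return sum(a * c for a, c in zip(fx, fy))

k = mapped_kernel([1.0, 0.0], [0.0, 1.0])
```

Because the mapping is explicit, its parameters W and b can be trained by gradient descent together with the SVM objective, which is exactly the joint training the abstract describes; no kernel trick is needed.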

  15. [Rapid identification of hogwash oil by using synchronous fluorescence spectroscopy].

    PubMed

    Sun, Yan-Hui; An, Hai-Yang; Jia, Xiao-Li; Wang, Juan

    2012-10-01

    To identify hogwash oil quickly, the characteristic Δλ of hogwash oil was determined by three-dimensional fluorescence spectroscopy with parallel factor analysis, and a model was built using synchronous fluorescence spectroscopy with support vector machines (SVM). The results showed that the characteristic Δλ of hogwash oil was 60 nm. Original spectra of the different samples were collected at the characteristic Δλ of 60 nm. The best model was established when 5 principal components were selected from the original spectra and the radial basis function (RBF) was used as the kernel function; the optimal penalty factor C and kernel parameter g, obtained by grid searching with 6-fold cross-validation, were 512 and 0.5, respectively. The discrimination rate of the model was 100% for both training and prediction sets. Thus, synchronous fluorescence spectroscopy is a quick and accurate means of identifying hogwash oil.

  16. Flexible Kernel Memory

    PubMed Central

    Nowicki, Dimitri; Siegelmann, Hava

    2010-01-01

    This paper introduces a new model of associative memory, capable of handling both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, analogous in feature space to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of the input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; the dimensionality can increase and decrease without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, and outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces. PMID:20552013
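A bare-bones kernel recall step in the spirit of this model can be sketched as follows: a noisy probe is attracted to the stored memory with the highest RBF-kernel similarity. The full model's feature-space dynamics, online updates, and consolidation are omitted, and the stored patterns are invented for illustration.

```python
import math

def rbf(x, y, gamma=1.0):
    # Gaussian kernel similarity between a probe and a stored memory.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def recall(probe, memories, gamma=1.0):
    # One retrieval step: jump to the most similar stored attractor.
    return max(memories, key=lambda m: rbf(probe, m, gamma))

memories = [[0.0, 0.0], [1.0, 1.0], [1.0, 0.0]]
out = recall([0.9, 0.85], memories)   # attracted to [1.0, 1.0]
```

Because similarity is computed through a kernel rather than raw coordinates, the same recall rule carries over unchanged when memories live in a feature space of growing dimension, which is the flexibility the abstract emphasizes.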

  17. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize.

    PubMed

    Chen, Jiafa; Zhang, Luyan; Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed.

  18. The Genetic Basis of Natural Variation in Kernel Size and Related Traits Using a Four-Way Cross Population in Maize

    PubMed Central

    Liu, Songtao; Li, Zhimin; Huang, Rongrong; Li, Yongming; Cheng, Hongliang; Li, Xiantang; Zhou, Bo; Wu, Suowei; Chen, Wei; Wu, Jianyu; Ding, Junqiang

    2016-01-01

    Kernel size is an important component of grain yield in maize breeding programs. To extend the understanding of the genetic basis of kernel size traits (i.e., kernel length, kernel width and kernel thickness), we developed a four-way cross mapping population derived from four maize inbred lines with varied kernel sizes. In the present study, we investigated the genetic basis of natural variation in seed size and other components of maize yield (e.g., hundred kernel weight, number of rows per ear, number of kernels per row). In total, ten QTL affecting kernel size were identified, three of which (two for kernel length and one for kernel width) had stable expression in other components of maize yield. The possible genetic mechanism behind the trade-off between kernel size and yield components is discussed. PMID:27070143

  19. Signal detection using support vector machines in the presence of ultrasonic speckle

    NASA Astrophysics Data System (ADS)

    Kotropoulos, Constantine L.; Pitas, Ioannis

    2002-04-01

    Support Vector Machines are a general algorithm based on the guaranteed risk bounds of statistical learning theory. They have found numerous applications, such as the classification of brain PET images, optical character recognition, object detection, face verification, and text categorization. In this paper we propose the use of support vector machines to segment lesions in ultrasound images, and we assess their lesion detection ability thoroughly. We demonstrate that trained support vector machines with a Radial Basis Function kernel satisfactorily segment unseen ultrasound B-mode images as well as clinical ultrasonic images.

  20. 7 CFR 810.1403 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Basis of determination. Each determination of broken kernels and foreign material is made on the basis of the grain when free from dockage. Each determination of class, damaged kernels, heat-damaged kernels, and stones is made on the basis of the grain when free from dockage and that portion of the...

  1. SVM Classifier - a comprehensive java interface for support vector machine classification of microarray data.

    PubMed

    Pirooznia, Mehdi; Deng, Youping

    2006-12-12

    Graphical user interface (GUI) software promotes novelty by allowing users to extend the functionality. SVM Classifier is a cross-platform graphical application that handles very large datasets well. The purpose of this study is to create a GUI application that allows SVM users to perform SVM training, classification and prediction. The GUI provides user-friendly access to state-of-the-art SVM methods embodied in the LIBSVM implementation of the Support Vector Machine. We implemented the Java interface using standard Swing libraries, and used sample data from a breast cancer study to test classification accuracy. We achieved 100% accuracy in classification among the BRCA1-BRCA2 samples with the RBF kernel of the SVM. We have developed a Java GUI application that allows SVM users to perform SVM training, classification and prediction, and we have demonstrated that support vector machines can accurately classify genes into functional categories based on expression data from DNA microarray hybridization experiments. Among the different kernel functions examined, the SVM using a radial basis kernel function provided the best performance. The SVM Classifier is available at http://mfgn.usm.edu/ebl/svm/.

  2. 7 CFR 810.2003 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Basis of determination. Each determination of heat-damaged kernels, damaged kernels, material other than... shrunken and broken kernels. Other determinations not specifically provided for under the general...

  3. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  4. 7 CFR 868.203 - Basis of determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Principles Governing..., heat-damaged kernels, red rice and damaged kernels, chalky kernels, other types, color, and the special grade Parboiled rough rice shall be on the basis of the whole and large broken kernels of milled rice...

  5. SVM-Based Synthetic Fingerprint Discrimination Algorithm and Quantitative Optimization Strategy

    PubMed Central

    Chen, Suhang; Chang, Sheng; Huang, Qijun; He, Jin; Wang, Hao; Huang, Qiangui

    2014-01-01

    Synthetic fingerprints are a potential threat to automatic fingerprint identification systems (AFISs). In this paper, we propose an algorithm to discriminate synthetic fingerprints from real ones. First, four typical characteristic factors—the ridge distance features, global gray features, frequency feature and Harris Corner feature—are extracted. Then, a support vector machine (SVM) is used to distinguish synthetic fingerprints from real fingerprints. The experiments demonstrate that this method can achieve a recognition accuracy rate of over 98% for two discrete synthetic fingerprint databases as well as a mixed database. Furthermore, a performance factor that can evaluate the SVM's accuracy and efficiency is presented, and a quantitative optimization strategy is established for the first time. After the optimization of our synthetic fingerprint discrimination task, the polynomial kernel with a training sample proportion of 5% is the optimized value when the minimum accuracy requirement is 95%. The radial basis function (RBF) kernel with a training sample proportion of 15% is a more suitable choice when the minimum accuracy requirement is 98%. PMID:25347063

  6. Influence of wheat kernel physical properties on the pulverizing process.

    PubMed

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between wheat kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by uniaxial compression testing, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  7. Hyperspectral recognition of processing tomato early blight based on GA and SVM

    NASA Astrophysics Data System (ADS)

    Yin, Xiaojun; Zhao, SiFeng

    2013-03-01

    Processing tomato early blight seriously affects the yield and quality of the crop. We determined the leaf spectra of processing tomato at different severity levels of early blight and took the sensitive bands as the input vector of a support vector machine (SVM). By optimizing the SVM parameters with a genetic algorithm (GA), we could recognize the different disease severity levels of processing tomato early blight. The results show that the sensitive bands for the different disease severity levels are 628-643 nm and 689-692 nm. With these sensitive bands as the GA-SVM input vector, the best penalty parameter is 0.129 and the kernel function parameter is 3.479. Classification training and testing were carried out with polynomial, radial basis function, and sigmoid kernels; the best classification model is the SVM with the radial basis function kernel, with a training accuracy of 84.615% and a testing accuracy of 80.681%. Combining GA and SVM thus achieves multi-class recognition of processing tomato early blight and provides technical support for predicting its occurrence, development, and diffusion over large areas.

  8. Prediction and early detection of delirium in the intensive care unit by using heart rate variability and machine learning.

    PubMed

    Oh, Jooyoung; Cho, Dongrae; Park, Jaesub; Na, Se Hee; Kim, Jongin; Heo, Jaeseok; Shin, Cheung Soo; Kim, Jae-Jin; Park, Jin Young; Lee, Boreom

    2018-03-27

    Delirium is an important syndrome found in patients in the intensive care unit (ICU); however, it is usually under-recognized during treatment. This study was performed to investigate whether delirious patients can be successfully distinguished from non-delirious patients by using heart rate variability (HRV) and machine learning. Electrocardiography data of 140 patients were acquired during daily ICU care, and HRV data were analyzed. Delirium, including its type, severity, and etiologies, was evaluated daily by trained psychiatrists. HRV data and various machine learning algorithms, including linear support vector machine (SVM), SVM with radial basis function (RBF) kernels, linear extreme learning machine (ELM), ELM with RBF kernels, linear discriminant analysis, and quadratic discriminant analysis, were utilized to distinguish delirium patients from non-delirium patients. HRV data of 4797 ECGs were included, and 39 patients had delirium at least once during their ICU stay. The maximum classification accuracy was acquired using SVM with RBF kernels. Our prediction method based on HRV with machine learning was comparable to previous delirium prediction models using massive amounts of clinical information. Our results show that autonomic alterations could be a significant feature of patients with delirium in the ICU, suggesting the potential for the automatic prediction and early detection of delirium based on HRV with machine learning.
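    The gap between the linear and RBF-kernel SVMs reported above is easy to reproduce on toy data that are not linearly separable (scikit-learn here, with synthetic circles standing in for the HRV features):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Concentric circles: no straight line separates the two classes.
X, y = make_circles(n_samples=200, noise=0.1, factor=0.4, random_state=0)

# 5-fold cross-validated accuracy for each kernel.
acc = {k: cross_val_score(SVC(kernel=k), X, y, cv=5).mean()
       for k in ("linear", "rbf")}
# The RBF kernel implicitly lifts the data into a space where a
# separating hyperplane exists; the linear kernel cannot.
```

    The same mechanism explains why the RBF kernel gave the maximum classification accuracy in the delirium study.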

  9. Learning a peptide-protein binding affinity predictor with kernel ridge regression

    PubMed Central

    2013-01-01

    Background The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interaction. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to specific MHC alleles would be of tremendous benefit to improve vaccine based therapy and possibly generate antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results We propose a specialized string kernel for small bio-molecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, including the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low complexity dynamic programming algorithm for the exact computation of the kernel and a linear time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets.
Conclusion On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting peptide-protein binding affinities. The proposed approach is flexible and can be applied to predict any quantitative biological activity. Moreover, generating reliable peptide-protein binding affinities will also improve system biology modelling of interaction pathways. Lastly, the method should be of value to a large segment of the research community with the potential to accelerate the discovery of peptide-based drugs and facilitate vaccine development. The proposed kernel is freely available at http://graal.ift.ulaval.ca/downloads/gs-kernel/. PMID:23497081

  10. Assessing the blood volume and heart rate responses during haemodialysis in fluid overloaded patients using support vector regression.

    PubMed

    Javed, Faizan; Savkin, Andrey V; Chan, Gregory S H; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H

    2009-11-01

    This study aims to assess the blood volume and heart rate (HR) responses during haemodialysis in fluid overloaded patients by a nonparametric nonlinear regression approach based on a support vector machine (SVM). Relative blood volume (RBV) and electrocardiogram (ECG) were recorded from 23 haemodynamically stable renal failure patients during regular haemodialysis. Modelling was performed on 18 fluid overloaded patients (fluid removal of >2 L). SVM-based regression was used to obtain models of the RBV change with time as well as the percentage change in HR with respect to RBV. Mean squared error (MSE) and goodness of fit (R²) were used for comparison among different kernel functions. The design parameters were estimated using a grid search approach and the selected models were validated by a k-fold cross-validation technique. For the model of HR versus RBV change, a radial basis function (RBF) kernel (MSE = 17.37, R² = 0.932) gave the least MSE compared to linear (MSE = 25.97, R² = 0.898) and polynomial (MSE = 18.18, R² = 0.929) kernels. The MSE was significantly lower for the training data set when using the RBF kernel compared to the other kernels (p < 0.01). The RBF kernel also provided a slightly better fit of RBV change with time (MSE = 1.12, R² = 0.91) compared to a linear kernel (MSE = 1.46, R² = 0.88). The modelled HR response was characterized by an initial drop and a subsequent rise during progressive reduction in RBV, which may be interpreted as the reflex response to a transition from central hypervolaemia to hypovolaemia. These modelled curves can be used as references for a controller designed to regulate the haemodynamic variables and ensure the stability of patients undergoing haemodialysis.
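    A hedged sketch of the modelling recipe above (support vector regression, a grid search over kernel hyper-parameters, k-fold cross-validation, MSE scoring), with a synthetic exponential decay standing in for the clinical RBV recordings:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = rng.uniform(0.0, 4.0, 120)[:, None]   # stand-in for time on dialysis (h)
# Hypothetical % change in relative blood volume: exponential fall plus noise.
rbv = -5.0 * (1.0 - np.exp(-t.ravel())) + rng.normal(0.0, 0.2, 120)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": [0.1, 1.0]},  # design parameters to search
    scoring="neg_mean_squared_error",
    cv=5,                                      # k-fold cross-validation
)
grid.fit(t, rbv)
cv_mse = -grid.best_score_                     # cross-validated MSE
```

    Swapping `kernel="rbf"` for `"linear"` or `"poly"` inside the grid reproduces the kernel comparison; the record reports the RBF kernel giving the lowest MSE.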

  11. Development of a Radial Deconsolidation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmreich, Grant W.; Montgomery, Fred C.; Hunn, John D.

    2015-12-01

    A series of experiments have been initiated to determine the retention or mobility of fission products* in AGR fuel compacts [Petti, et al. 2010]. This information is needed to refine fission product transport models. The AGR-3/4 irradiation test involved half-inch-long compacts that each contained twenty designed-to-fail (DTF) particles, with 20-μm thick carbon-coated kernels whose coatings were deliberately fabricated such that they would crack under irradiation, providing a known source of post-irradiation isotopes. The DTF particles in these compacts were axially distributed along the compact centerline so that the diffusion of fission products released from the DTF kernels would be radially symmetric [Hunn, et al. 2012; Hunn et al. 2011; Kercher, et al. 2011; Hunn, et al. 2007]. Compacts containing DTF particles were irradiated at Idaho National Laboratory (INL) at the Advanced Test Reactor (ATR) [Collin, 2015]. Analysis of the diffusion of these various post-irradiation isotopes through the compact requires a method to radially deconsolidate the compacts so that nested-annular volumes may be analyzed for post-irradiation isotope inventory in the compact matrix, TRISO outer pyrolytic carbon (OPyC), and DTF kernels. An effective radial deconsolidation method and apparatus appropriate to this application has been developed and parametrically characterized.

  12. Noninvasive prostate cancer screening based on serum surface-enhanced Raman spectroscopy and support vector machine

    NASA Astrophysics Data System (ADS)

    Li, Shaoxin; Zhang, Yanjiao; Xu, Junfa; Li, Linfang; Zeng, Qiuyao; Lin, Lin; Guo, Zhouyi; Liu, Zhiming; Xiong, Honglian; Liu, Songhao

    2014-09-01

    This study presents a noninvasive prostate cancer screening method using serum surface-enhanced Raman scattering (SERS) and support vector machine (SVM) techniques on peripheral blood samples. SERS measurements were performed on serum samples from 93 prostate cancer patients and 68 healthy volunteers using silver nanoparticles. Three types of kernel functions, including linear, polynomial, and Gaussian radial basis function (RBF), were employed to build SVM diagnostic models for classifying the measured SERS spectra. To comparably evaluate the performance of the SVM classification models, the standard multivariate statistical method of principal component analysis (PCA) was also applied to classify the same datasets. The results show that the RBF-kernel SVM diagnostic model achieves a diagnostic accuracy of 98.1%, superior to the 91.3% obtained with the PCA method. The receiver operating characteristic curves of the diagnostic models further confirm these results. This study demonstrates that label-free serum SERS analysis combined with an SVM diagnostic algorithm has great potential for noninvasive prostate cancer screening.

  13. Predictive analysis of beer quality by correlating sensory evaluation with higher alcohol and ester production using multivariate statistics methods.

    PubMed

    Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru

    2014-10-15

    Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool to analyse various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established using non-linear models: partial least squares (PLS), genetic algorithm back-propagation neural network (GA-BP), and support vector machine (SVM). SVM with a Radial Basis Function (RBF) kernel achieved better prediction accuracy for both the calibration set (94.3%) and the validation set (96.2%) than the other models; relatively lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the kernel function played an essential role in model training: the prediction accuracy of SVM with a polynomial kernel function was only 32.9%. As a powerful multivariate statistical method, SVM holds great potential for assessing beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. The influence of sub-grid scale motions on particle collision in homogeneous isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Li, Jing; Liu, Zhaohui; Zheng, Chuguang

    2018-02-01

    The absence of sub-grid scale (SGS) motions leads to severe errors in particle pair dynamics, which represents a great challenge to the large eddy simulation of particle-laden turbulent flow. In order to address this issue, data from direct numerical simulation (DNS) of homogeneous isotropic turbulence coupled with Lagrangian particle tracking are used as a benchmark to evaluate the corresponding results of filtered DNS (FDNS). It is found that the filtering process in FDNS leads to a non-monotonic variation of the particle collision statistics, including the radial distribution function, the radial relative velocity, and the collision kernel. The peak of the radial distribution function shifts to the large-inertia region due to the lack of SGS motions, and analysis of the local flow-structure characteristic variable at particle positions indicates that the most effective interaction scale between particles and fluid eddies is increased in FDNS. Moreover, this scale shift has an obvious effect on the odd-order moments of the probability density function of the radial relative velocity, i.e. the skewness, which exhibits a strong correlation with the variance of the radial distribution function in FDNS. As a whole, the radial distribution function, together with the radial relative velocity, can compensate for the SGS effects on the collision kernel in FDNS when the Stokes number based on the Kolmogorov time scale, St_k, is greater than 3.0. However, considerable errors remain for St_k < 3.0.

  15. Reduced kernel recursive least squares algorithm for aero-engine degradation prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Haowen; Huang, Jinquan; Lu, Feng

    2017-10-01

    Kernel adaptive filters (KAFs) generate a radial basis function (RBF) network that grows linearly with the number of training samples, and thereby lack sparseness. To deal with this drawback, traditional sparsification techniques select a subset of the original training data based on a certain criterion to train the network and discard the redundant data directly. Although these methods curb the growth of the network effectively, it should be noted that information conveyed by these redundant samples is omitted, which may lead to accuracy degradation. In this paper, we present a novel online sparsification method which requires much less training time without sacrificing accuracy. Specifically, a reduced kernel recursive least squares (RKRLS) algorithm is developed based on the reduction technique and linear independency. Unlike conventional methods, our methodology employs the redundant data to update the coefficients of the existing network. Due to this effective utilization of the redundant data, the novel algorithm achieves better accuracy although the network size is significantly reduced. Experiments on time series prediction and online regression demonstrate that the RKRLS algorithm requires much less computational consumption while maintaining satisfactory accuracy. Finally, we propose an enhanced multi-sensor prognostic model based on RKRLS and a Hidden Markov Model (HMM) for remaining useful life (RUL) estimation. A case study on a turbofan degradation dataset is performed to evaluate the performance of the novel prognostic approach.
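    The growth problem the record addresses is visible in a plain kernel least squares filter: without sparsification the RBF network keeps one centre per training sample. The toy implementation below is a generic textbook sketch, not the authors' reduced algorithm, and uses a batch re-solve for clarity instead of the recursive inverse update:

```python
import numpy as np

def rbf(a, b, gamma=2.0):
    """Gaussian RBF kernel between two 1-D points."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

class NaiveKRLS:
    """Online kernel least squares whose network grows with every sample."""
    def __init__(self, gamma=2.0, reg=1e-3):
        self.gamma, self.reg = gamma, reg
        self.X, self.y, self.alpha = [], [], None

    def update(self, x, y):
        self.X.append(x)
        self.y.append(y)
        # Batch re-solve of the regularized system for clarity; a true KRLS
        # updates the kernel matrix inverse recursively in O(n^2) per step.
        K = np.array([[rbf(a, b, self.gamma) for b in self.X] for a in self.X])
        self.alpha = np.linalg.solve(K + self.reg * np.eye(len(K)), self.y)

    def predict(self, x):
        return sum(a * rbf(x, c, self.gamma)
                   for a, c in zip(self.alpha, self.X))

model = NaiveKRLS()
xs = np.linspace(0.0, 3.0, 30)
for x, target in zip(xs, np.sin(xs)):
    model.update(np.array([x]), target)

network_size = len(model.X)   # one RBF centre per training sample
```

    Sparsification, or the reduced update described in the record, would cap `network_size` while still reusing the information carried by the discarded samples.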

  16. 7 CFR 810.403 - Basis of determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... GRAIN United States Standards for Corn Principles Governing the Application of Standards § 810.403 Basis of determination. Each determination of class, damaged kernels, heat-damaged kernels, waxy corn, flint corn, and flint and dent corn is made on the basis of the grain after the removal of the broken...

  17. 7 CFR 810.403 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... GRAIN United States Standards for Corn Principles Governing the Application of Standards § 810.403 Basis of determination. Each determination of class, damaged kernels, heat-damaged kernels, waxy corn, flint corn, and flint and dent corn is made on the basis of the grain after the removal of the broken...

  18. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    NASA Astrophysics Data System (ADS)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with the RBF kernel for three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.

  19. Investigating light curve modulation via kernel smoothing. II. New additional modes in single-mode OGLE classical Cepheids

    NASA Astrophysics Data System (ADS)

    Süveges, Maria; Anderson, Richard I.

    2018-04-01

    Detailed knowledge of the variability of classical Cepheids, in particular their modulations and mode composition, provides crucial insight into stellar structure and pulsation. However, tiny modulations of the dominant radial-mode pulsation were recently found to be very frequent, possibly ubiquitous in Cepheids, which makes secondary modes difficult to detect and analyse, since these modulations can easily mask the potentially weak secondary modes. The aim of this study is to re-investigate the secondary mode content in the sample of OGLE-III and -IV single-mode classical Cepheids using kernel regression with adaptive kernel width for pre-whitening, instead of using a constant-parameter model. This leads to a more precise removal of the modulated dominant pulsation, and enables a more complete survey of secondary modes with frequencies outside a narrow range around the primary. Our analysis reveals that significant secondary modes occur more frequently among first overtone Cepheids than previously thought. The mode composition appears significantly different in the Large and Small Magellanic Clouds, suggesting a possible dependence on chemical composition. In addition to the formerly identified non-radial mode at P2 ≈ 0.6…0.65P1 (0.62-mode), and a cluster of modes with near-primary frequency, we find two more candidate non-radial modes. One is a numerous group of secondary modes with P2 ≈ 1.25P1, which may represent the fundamental of the 0.62-mode, supposed to be the first harmonic of an l ∈ {7, 8, 9} non-radial mode. The other new mode is at P2 ≈ 1.46P1, possibly analogous to a similar, rare mode recently discovered among first overtone RR Lyrae stars.
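    The pre-whitening step above rests on kernel regression; a minimal Nadaraya-Watson smoother with a Gaussian kernel conveys the idea (fixed bandwidth h here, whereas the paper adapts the kernel width locally, and the sinusoid-plus-noise series is only a stand-in for a Cepheid light curve):

```python
import numpy as np

def nw_smooth(t, y, t_eval, h):
    """Gaussian-kernel Nadaraya-Watson estimate of y at the points t_eval."""
    w = np.exp(-0.5 * ((t_eval[:, None] - t[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Synthetic "light curve": one dominant periodic signal plus noise.
t = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * t) + np.random.default_rng(2).normal(0.0, 0.1, 200)

trend = nw_smooth(t, y, t, h=0.02)   # estimate of the dominant variation
residual = y - trend                 # what a mode search would then analyse
```

    Subtracting the kernel-regression trend rather than a constant-parameter model is what lets the modulated dominant pulsation be removed more precisely before searching the residual for secondary modes.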

  20. Detection of subjects and brain regions related to Alzheimer's disease using 3D MRI scans based on eigenbrain and machine learning

    PubMed Central

    Zhang, Yudong; Dong, Zhengchao; Phillips, Preetha; Wang, Shuihua; Ji, Genlin; Yang, Jiquan; Yuan, Ti-Fei

    2015-01-01

    Purpose: Early diagnosis or detection of Alzheimer's disease (AD) from the normal elder control (NC) is very important. However, the computer-aided diagnosis (CAD) was not widely used, and the classification performance did not reach the standard of practical use. We proposed a novel CAD system for MR brain images based on eigenbrains and machine learning with two goals: accurate detection of both AD subjects and AD-related brain regions. Method: First, we used maximum inter-class variance (ICV) to select key slices from 3D volumetric data. Second, we generated an eigenbrain set for each subject. Third, the most important eigenbrain (MIE) was obtained by Welch's t-test (WTT). Finally, kernel support-vector-machines with different kernels that were trained by particle swarm optimization, were used to make an accurate prediction of AD subjects. Coefficients of MIE with values higher than 0.98 quantile were highlighted to obtain the discriminant regions that distinguish AD from NC. Results: The experiments showed that the proposed method can predict AD subjects with a competitive performance with existing methods, especially the accuracy of the polynomial kernel (92.36 ± 0.94) was better than the linear kernel of 91.47 ± 1.02 and the radial basis function (RBF) kernel of 86.71 ± 1.93. The proposed eigenbrain-based CAD system detected 30 AD-related brain regions (Anterior Cingulate, Caudate Nucleus, Cerebellum, Cingulate Gyrus, Claustrum, Inferior Frontal Gyrus, Inferior Parietal Lobule, Insula, Lateral Ventricle, Lentiform Nucleus, Lingual Gyrus, Medial Frontal Gyrus, Middle Frontal Gyrus, Middle Occipital Gyrus, Middle Temporal Gyrus, Paracentral Lobule, Parahippocampal Gyrus, Postcentral Gyrus, Posterial Cingulate, Precentral Gyrus, Precuneus, Subcallosal Gyrus, Sub-Gyral, Superior Frontal Gyrus, Superior Parietal Lobule, Superior Temporal Gyrus, Supramarginal Gyrus, Thalamus, Transverse Temporal Gyrus, and Uncus). 
The results were consistent with the existing literature. Conclusion: The eigenbrain method was effective in AD-subject prediction and discriminant brain-region detection in MRI scanning. PMID:26082713

  1. 7 CFR 981.61 - Redetermination of kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  2. 7 CFR 868.253 - Basis of determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Brown Rice for Processing Principles... heat, heat-damaged kernels, parboiled kernels in nonparboiled rice, and the special grade Parboiled brown rice for processing shall be on the basis of the brown rice for processing after it has been...

  3. 7 CFR 868.253 - Basis of determination.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Brown Rice for Processing Principles... heat, heat-damaged kernels, parboiled kernels in nonparboiled rice, and the special grade Parboiled brown rice for processing shall be on the basis of the brown rice for processing after it has been...

  4. Multi-environment QTL analysis of grain morphology traits and fine mapping of a kernel-width QTL in Zheng58 × SK maize population.

    PubMed

    Raihan, Mohammad Sharif; Liu, Jie; Huang, Juan; Guo, Huan; Pan, Qingchun; Yan, Jianbing

    2016-08-01

    Sixteen major QTLs regulating maize kernel traits were mapped in multiple environments, and one of them, qKW-9.2, was restricted to 630 Kb, harboring 28 putative gene models. To elucidate the genetic basis of kernel traits, a quantitative trait locus (QTL) analysis was conducted in a maize recombinant inbred line population derived from a cross between two diverse parents, Zheng58 and SK, evaluated across eight environments. Construction of a high-density linkage map was based on 13,703 single-nucleotide polymorphism markers, covering 1860.9 cM of the whole genome. In total, 18, 26, 23, and 19 QTLs for kernel length, width, thickness, and 100-kernel weight, respectively, were detected on the basis of a single-environment analysis, and each QTL explained 3.2-23.7 % of the phenotypic variance. Sixteen major QTLs, each explaining greater than 10 % of the phenotypic variation, were mapped in multiple environments, implying that kernel traits might be controlled by many minor and multiple major QTLs. The major QTL qKW-9.2, with a physical confidence interval of 1.68 Mbp and affecting kernel width, was then selected for fine mapping using heterogeneous inbred families. Finally, the location of the underlying gene was narrowed down to 630 Kb, harboring 28 putative candidate-gene models. This information will enhance molecular breeding for kernel traits and simultaneously assist the cloning of the gene underlying this QTL, helping to reveal the genetic basis of kernel development in maize.

  5. SU-E-T-510: Calculation of High Resolution and Material-Specific Photon Energy Deposition Kernels.

    PubMed

    Huang, J; Childress, N; Kry, S

    2012-06-01

    To calculate photon energy deposition kernels (EDKs) used for convolution/superposition dose calculation at a higher resolution than the original Mackie et al. 1988 kernels and to calculate material-specific kernels that describe how energy is transported and deposited by secondary particles when the incident photon interacts in a material other than water. The high resolution EDKs for various incident photon energies were generated using the EGSnrc user-code EDKnrc, which forces incident photons to interact at the center of a 60 cm radius sphere of water. The simulation geometry is essentially the same as the original Mackie calculation but with a greater number of scoring voxels (48 radial, 144 angular bins). For the material-specific EDKs, incident photons were forced to interact at the center of a 1 mm radius sphere of material (lung, cortical bone, silver, or titanium) surrounded by a 60 cm radius water sphere, using the original scoring voxel geometry implemented by Mackie et al. 1988 (24 radial, 48 angular bins). Our Monte Carlo-calculated high resolution EDKs showed excellent agreement with the Mackie kernels, with our kernels providing more information about energy deposition close to the interaction site. Furthermore, our EDKs resulted in smoother dose deposition functions due to the finer resolution and greater number of simulation histories. The material-specific EDK results show that the angular distribution of energy deposition is different for incident photons interacting in different materials. Calculated from the angular dose distribution for 300 keV incident photons, the expected polar angle for dose deposition is 28.6° for water, 33.3° for lung, 36.0° for cortical bone, 44.6° for titanium, and 58.1° for silver, showing a dependence on the material in which the primary photon interacts.
These high resolution and material-specific EDKs have implications for convolution/superposition dose calculations in heterogeneous patient geometries, especially at material interfaces. © 2012 American Association of Physicists in Medicine.

  6. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can prohibit the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
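    A hedged sketch of the linear ROSM combination (principal component analysis plus radial basis functions), using SciPy's RBFInterpolator and an invented quadratic test field in place of the aerodynamic data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
params = rng.uniform(-1.0, 1.0, (30, 2))     # design-space samples (a, b)
grid = np.linspace(0.0, 1.0, 500)
# Hypothetical high-dimensional outputs: 500-point fields driven by (a, b).
snaps = np.array([a * grid + b * grid ** 2 for a, b in params])

# PCA via SVD of the centred snapshot matrix; two modes capture this field.
mean = snaps.mean(axis=0)
_, _, Vt = np.linalg.svd(snaps - mean, full_matrices=False)
basis = Vt[:2]
coords = (snaps - mean) @ basis.T            # reduced coordinates

surrogate = RBFInterpolator(params, coords)  # RBF model in the reduced space

new = np.array([[0.3, -0.4]])
pred = surrogate(new) @ basis + mean         # lift back to full dimension
truth = 0.3 * grid - 0.4 * grid ** 2
max_err = np.abs(pred[0] - truth).max()
```

    The surrogate queries a 2-D RBF model instead of re-running the 500-dimensional "simulation"; the non-linear variant in the thesis replaces PCA with kernel PCA.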

  7. Topic Detection in Online Chat

    DTIC Science & Technology

    2009-09-01


  8. Characterization of non-diffusive transport in plasma turbulence by means of flux-gradient integro-differential kernels

    NASA Astrophysics Data System (ADS)

    Alcuson, J. A.; Reynolds-Barredo, J. M.; Mier, J. A.; Sanchez, Raul; Del-Castillo-Negrete, Diego; Newman, David E.; Tribaldos, V.

    2015-11-01

    A method to determine fractional transport exponents in systems dominated by fluid or plasma turbulence is proposed. The method is based on the estimation of the integro-differential kernel that relates values of the fluxes and gradients of the transported field, and its comparison with the family of analytical kernels of the linear fractional transport equation. Although use of this type of kernel has been explored before in this context, the methodology proposed here is rather unique since the connection with specific fractional equations is exploited from the start. The procedure has been designed to be particularly well-suited for application in experimental setups, taking advantage of the fact that kernel determination only requires temporal data of the transported field measured on an Eulerian grid. The simplicity and robustness of the method is tested first by using fabricated data from continuous-time random walk models built with prescribed transport characteristics. Its strengths are then illustrated on numerical Eulerian data gathered from simulations of a magnetically confined turbulent plasma in a near-critical regime, that is known to exhibit superdiffusive radial transport.

  9. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.

  10. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
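    The kernelized spatial model can be illustrated independently of PET reconstruction: build a kernel matrix from MR-derived feature patches (Gaussian weights over nearest neighbours, following the general kernel-method recipe) and represent the image as that matrix applied to coefficients. The 1D data and every parameter below are hypothetical, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1D "MR image": each pixel's feature is its intensity patch.
mr = np.concatenate([np.full(20, 1.0), np.full(20, 3.0)]) + 0.01 * rng.normal(size=40)
feats = np.stack([np.roll(mr, -1), mr, np.roll(mr, 1)], axis=1)  # 3-pixel patches

# Kernel matrix: Gaussian weights over the k nearest neighbours in MR feature space.
n, k, sigma = len(mr), 5, 1.0
K = np.zeros((n, n))
for i in range(n):
    d2 = np.sum((feats - feats[i]) ** 2, axis=1)
    nbrs = np.argsort(d2)[:k]
    K[i, nbrs] = np.exp(-d2[nbrs] / (2 * sigma ** 2))
K /= K.sum(axis=1, keepdims=True)  # row-normalise

# The image estimate is modelled as x = K @ alpha; applying K to a noisy
# PET-like image preserves the MR edge while suppressing noise.
pet_true = np.where(mr > 2.0, 10.0, 2.0)
pet_noisy = pet_true + rng.normal(scale=1.0, size=n)
pet_smooth = K @ pet_noisy
print(np.std(pet_smooth - pet_true) < np.std(pet_noisy - pet_true))
```

In the full algorithm these spatial basis functions multiply temporal (spectral) basis functions inside the EM reconstruction rather than being applied post hoc.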

  11. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. 
Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [(11)C]SCH23390 data, showing promising results.

  12. A radial basis function Galerkin method for inhomogeneous nonlocal diffusion

    DOE PAGES

    Lehoucq, Richard B.; Rowe, Stephen T.

    2016-02-01

    We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. As a result, we explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.
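    Not the paper's quadrature scheme, but a minimal sketch of why a localized radial basis yields a sparse, symmetric positive definite Gram matrix, here using a compactly supported Wendland C2 function on a 1D grid (all sizes illustrative):

```python
import numpy as np

# Wendland C^2 compactly supported RBF: phi(r) = (1 - r)_+^4 (4r + 1).
def wendland(r):
    return np.clip(1.0 - r, 0.0, None) ** 4 * (4.0 * r + 1.0)

# Centres on a grid; the support radius delta makes the Gram matrix sparse.
x = np.linspace(0.0, 1.0, 50)
delta = 0.1
R = np.abs(x[:, None] - x[None, :]) / delta
A = wendland(R)

# The Gram matrix is sparse, symmetric and positive definite.
print(np.mean(A > 0))                      # fraction of nonzeros, well below 1
print(np.all(np.linalg.eigvalsh(A) > 0))   # positive definite
```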

  13. Apollo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckingsal, David; Gamblin, Todd

    Modern performance portability frameworks provide application developers with a flexible way to determine how to run application kernels; however, they provide no guidance as to the best configuration for a given kernel. Apollo provides a model-generation framework that, when integrated with the RAJA library, uses lightweight decision tree models to select the fastest execution configuration on a per-kernel basis.
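    Apollo's per-kernel selection can be caricatured as learning a split on kernel features from timing data. The sketch below uses a hand-rolled one-split decision stump and invented timings; the real system trains decision trees integrated with RAJA:

```python
import numpy as np

# Hypothetical timing data: for each kernel invocation we record a feature
# (e.g. iteration count) and which execution policy was fastest
# (0 = serial, 1 = parallel). Values are invented for illustration.
sizes = np.array([64, 128, 256, 512, 1024, 2048, 4096, 8192])
best = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # parallel wins for big kernels

# Learn the split threshold that minimises misclassified invocations.
cands = (sizes[:-1] + sizes[1:]) / 2.0
errs = [np.sum((sizes > t).astype(int) != best) for t in cands]
threshold = cands[int(np.argmin(errs))]

def select_policy(n):
    # Per-kernel policy selection from the learned model.
    return "parallel" if n > threshold else "serial"

print(select_policy(100), select_policy(100000))  # serial parallel
```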

  14. Unconventional Signal Processing Using the Cone Kernel Time-Frequency Representation.

    DTIC Science & Technology

    1992-10-30

    Wigner-Ville distribution (WVD), the Choi-Williams distribution, and the cone kernel distribution were compared with the spectrograms. Results were...ambiguity function. Figures A-18(c) and (d) are the Wigner-Ville Distribution (WVD) and CK-TFR Doppler maps. In this noiseless case all three exhibit...kernel is the basis for the well known Wigner-Ville distribution. In A-9(2), the cone kernel defined by Zhao, Atlas and Marks [2] is described

  15. Predictability of machine learning techniques to forecast the trends of market index prices: Hypothesis testing for the Korean stock markets.

    PubMed

    Pyo, Sujin; Lee, Jaewook; Cha, Mincheol; Jang, Huisu

    2017-01-01

    The prediction of the trends of stocks and index prices is one of the important issues for market participants. Investors set trading or fiscal strategies based on the trends, and considerable research in various academic fields has been conducted to forecast financial markets. This study predicts the trends of the Korea Composite Stock Price Index 200 (KOSPI 200) prices using nonparametric machine learning models: an artificial neural network and support vector machines with polynomial and radial basis function kernels. In addition, this study states controversial issues and tests hypotheses about the issues. Accordingly, our results are inconsistent with those of the preceding research, which is generally considered to have high prediction performance. Moreover, Google Trends proved not to be an effective factor in predicting the KOSPI 200 index prices in our frameworks. Furthermore, the ensemble methods did not improve the accuracy of the prediction.

  16. Customer demand prediction of service-oriented manufacturing using the least square support vector machine optimized by particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Jiang, Zhibin; Wang, Kangzhou

    2017-07-01

    Many nonlinear customer satisfaction-related factors significantly influence the future customer demand for service-oriented manufacturing (SOM). To address this issue and enhance the prediction accuracy, this article develops a novel customer demand prediction approach for SOM. The approach combines the phase space reconstruction (PSR) technique with the optimized least square support vector machine (LSSVM). First, the prediction sample space is reconstructed by the PSR to enrich the time-series dynamics of the limited data sample. Then, the generalization and learning ability of the LSSVM are improved by the hybrid polynomial and radial basis function kernel. Finally, the key parameters of the LSSVM are optimized by the particle swarm optimization algorithm. In a real case study, the customer demand prediction of an air conditioner compressor is implemented. Furthermore, the effectiveness and validity of the proposed approach are demonstrated by comparison with other classical prediction approaches.
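    The hybrid-kernel LSSVM at the core of the approach is compact enough to sketch; PSR and the PSO parameter search are omitted, and the weight, degree, width and regularization values below are illustrative rather than the tuned ones:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression sample standing in for the demand time series.
x = np.linspace(0, 1, 40)[:, None]
y = np.sin(2 * np.pi * x[:, 0]) + 0.05 * rng.normal(size=40)

# Hybrid kernel: weighted sum of a polynomial and an RBF kernel.
w, d, sigma = 0.5, 2, 0.2
def kernel(a, b):
    poly = (a @ b.T + 1.0) ** d
    rbf = np.exp(-((a[:, None, 0] - b[None, :, 0]) ** 2) / (2 * sigma ** 2))
    return w * poly + (1 - w) * rbf

# LS-SVM: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
n, gamma = len(y), 100.0
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = kernel(x, x) + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

y_hat = kernel(x, x) @ alpha + b
print(np.sqrt(np.mean((y_hat - y) ** 2)))  # small training RMSE
```

In the paper, w, d, sigma and gamma are exactly the quantities the particle swarm optimizer searches over.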

  17. Predictability of machine learning techniques to forecast the trends of market index prices: Hypothesis testing for the Korean stock markets

    PubMed Central

    Pyo, Sujin; Lee, Jaewook; Cha, Mincheol

    2017-01-01

    The prediction of the trends of stocks and index prices is one of the important issues for market participants. Investors set trading or fiscal strategies based on the trends, and considerable research in various academic fields has been conducted to forecast financial markets. This study predicts the trends of the Korea Composite Stock Price Index 200 (KOSPI 200) prices using nonparametric machine learning models: an artificial neural network and support vector machines with polynomial and radial basis function kernels. In addition, this study states controversial issues and tests hypotheses about the issues. Accordingly, our results are inconsistent with those of the preceding research, which is generally considered to have high prediction performance. Moreover, Google Trends proved not to be an effective factor in predicting the KOSPI 200 index prices in our frameworks. Furthermore, the ensemble methods did not improve the accuracy of the prediction. PMID:29136004

  18. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known for its strong generalization capability. SVM can solve classification and regression problems with either linear or nonlinear kernels. However, SVM has a weakness: it is difficult to determine the optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable form in a higher-dimensional feature space. The kernel trick uses various kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters that affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as a search algorithm for the optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to find optimal kernel parameters for SVM systematically, rather than relying on randomly selected kernel parameters. The best accuracies were improved from the baselines of linear kernel: 85.12%, polynomial: 81.76%, RBF: 77.22% and sigmoid: 78.70%. However, for larger data sizes this method is not practical because it is very time-consuming.
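    A stripped-down version of the idea: a genetic loop (selection plus Gaussian mutation) searching over log C and log gamma, with holdout accuracy as fitness. To stay dependency-free, the sketch scores a kernel ridge classifier as a stand-in for a true SVM, and the data are synthetic, not the Australian Credit Approval set:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-class data (a stand-in for the credit dataset).
X = rng.normal(size=(60, 2))
y = np.where(np.sum(X ** 2, axis=1) > 2.0, 1.0, -1.0)
tr, te = np.arange(40), np.arange(40, 60)

def rbf(A, B, g):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-g * d2)

# Fitness: holdout accuracy of a kernel ridge classifier, a dependency-free
# stand-in for the SVM whose C and gamma the paper tunes.
def fitness(logC, logg):
    C, g = 10.0 ** logC, 10.0 ** logg
    alpha = np.linalg.solve(rbf(X[tr], X[tr], g) + np.eye(len(tr)) / C, y[tr])
    return np.mean(np.sign(rbf(X[te], X[tr], g) @ alpha) == y[te])

# Minimal genetic loop: keep the best half, mutate it, repeat.
pop = rng.uniform(-2, 2, size=(10, 2))
history = []
for _ in range(10):
    scores = np.array([fitness(*p) for p in pop])
    history.append(scores.max())
    elite = pop[np.argsort(scores)[-5:]]
    pop = np.vstack([elite, elite + 0.3 * rng.normal(size=elite.shape)])
scores = np.array([fitness(*p) for p in pop])
print(history[0], scores.max())  # best fitness never degrades: elites are kept
```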

  19. Evaluation of gravitational curvatures of a tesseroid in spherical integral kernels

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-Le; Shen, Wen-Bin

    2018-04-01

    Proper understanding of how the Earth's mass distributions and redistributions influence the Earth's gravity field-related functionals is crucial for numerous applications in geodesy, geophysics and related geosciences. Calculations of the gravitational curvatures (GC) have been proposed in geodesy in recent years. In view of future satellite missions, the sixth-order developments of the gradients are becoming requisite. In this paper, a set of 3D integral GC formulas of a tesseroid mass body have been provided by spherical integral kernels in the spatial domain. Based on the Taylor series expansion approach, the numerical expressions of the 3D GC formulas are provided up to sixth order. Moreover, numerical experiments demonstrate the correctness of the 3D Taylor series approach for the GC formulas with order as high as sixth order. Analogous to other gravitational effects (e.g., gravitational potential, gravity vector, gravity gradient tensor), numerically it is found that there exist the very-near-area problem and polar singularity problem in the GC east-east-radial, north-north-radial and radial-radial-radial components in the spatial domain, and compared to the other gravitational effects, the relative approximation errors of the GC components are larger due not only to the influence of the geocentric distance but also to the influence of the latitude. This study shows that the magnitude of each term for the nonzero GC functionals at a grid resolution of 15' × 15' at GOCE satellite height can reach about 10^{-16} m^{-1} s^{-2} for zero order, 10^{-24} or 10^{-23} m^{-1} s^{-2} for second order, 10^{-29} m^{-1} s^{-2} for fourth order and 10^{-35} or 10^{-34} m^{-1} s^{-2} for sixth order, respectively.

  20. Determining weight and moisture properties of sound and fusarium-damaged single wheat kernels by near infrared spectroscopy

    USDA-ARS?s Scientific Manuscript database

    Single kernel moisture content (MC) is important in the measurement of other quality traits in single kernels since many traits are expressed on a dry weight basis, and MC affects viability, storage quality, and price. Also, if near-infrared (NIR) spectroscopy is used to measure grain traits, the in...

  1. Modular Affective Reasoning-Based Versatile Introspective Architecture (MARVIN)

    DTIC Science & Technology

    2008-08-14

    monolithic kernels found in most mass market OSs, where these kinds of system processes run within the kernel, and thus need to be highly optimized as well as...without modifying pre-existing process management elements, we expect the process of transitioning this component from MINIX to monolithic kernels to...necessary to incorporate them into a monolithic kernel. To demonstrate how the APMM would work in practice, we used it as the basis for building a simulated

  2. Classifying Lower Extremity Muscle Fatigue during Walking using Machine Learning and Inertial Sensors

    PubMed Central

    Zhang, Jian; Lockhart, Thurmon E.; Soangra, Rahul

    2013-01-01

    Fatigue in lower extremity musculature is associated with declines in postural stability and motor performance, and alters normal walking patterns in human subjects. Automated recognition of lower extremity muscle fatigue may be advantageous for early detection of fall and injury risks. Supervised machine learning methods such as Support Vector Machines (SVM) have previously been used for classifying healthy and pathological gait patterns and for separating old and young gait patterns. In this study we explore the potential of SVM for classifying gait patterns, recorded with an inertial measurement unit, that are associated with lower extremity muscular fatigue. Both kinematic and kinetic gait patterns of 17 participants (29±11 years) were recorded and analyzed in normal and fatigued states of walking. Lower extremities were fatigued by performance of a squatting exercise until the participants reached 60% of their baseline maximal voluntary exertion level. Feature selection methods were used to classify fatigue and no-fatigue conditions based on temporal and frequency information of the signals. Additionally, the influences of three different kernel schemes (i.e., linear, polynomial, and radial basis function) were investigated for SVM classification. The results indicated that the lower extremity muscle fatigue condition influenced gait and loading responses. In terms of the SVM classification results, an accuracy of 96% was reached in distinguishing the two gait patterns (fatigue and no-fatigue) within the same subject using the kinematic, time and frequency domain features. It is also found that the linear and RBF kernels were equally good at identifying intra-individual fatigue characteristics. These results suggest that intra-subject fatigue classification using gait patterns from an inertial sensor holds considerable potential in identifying “at-risk” gait due to muscle fatigue. PMID:24081829

  3. A Cubic Radial Basis Function in the MLPG Method for Beam Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.

    2002-01-01

    A non-compactly supported cubic radial basis function implementation of the MLPG method for beam problems is presented. The evaluation of the derivatives of the shape functions obtained from the radial basis function interpolation is much simpler than the evaluation of the moving least squares shape function derivatives. The radial basis MLPG yields results as accurate or better than those obtained by the conventional MLPG method for problems with discontinuous and other complex loading conditions.

  4. An orthogonal oriented quadrature hexagonal image pyramid

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1987-01-01

    An image pyramid has been developed with basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The pyramid operates on a hexagonal sample lattice. The set of seven basis functions consist of three even high-pass kernels, three odd high-pass kernels, and one low-pass kernel. The three even kernels are identified when rotated by 60 or 120 deg, and likewise for the odd. The seven basis functions occupy a point and a hexagon of six nearest neighbors on a hexagonal sample lattice. At the lowest level of the pyramid, the input lattice is the image sample lattice. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing sq rt 7 larger than the previous level, so that the number of coefficients is reduced by a factor of 7 at each level. The relationship between this image code and the processing architecture of the primate visual cortex is discussed.

  5. CS-AMPPred: An Updated SVM Model for Antimicrobial Activity Prediction in Cysteine-Stabilized Peptides

    PubMed Central

    Porto, William F.; Pires, Állan S.; Franco, Octavio L.

    2012-01-01

    The antimicrobial peptides (AMP) have been proposed as an alternative to control resistant pathogens. However, due to multifunctional properties of several AMP classes, until now there has been no way to perform efficient AMP identification, except through in vitro and in vivo tests. Nevertheless, an indication of activity can be provided by prediction methods. In order to contribute to the AMP prediction field, the CS-AMPPred (Cysteine-Stabilized Antimicrobial Peptides Predictor) is presented here, consisting of an updated version of the Support Vector Machine (SVM) model for antimicrobial activity prediction in cysteine-stabilized peptides. The CS-AMPPred is based on five sequence descriptors: indexes of (i) α-helix and (ii) loop formation; and averages of (iii) net charge, (iv) hydrophobicity and (v) flexibility. CS-AMPPred was based on 310 cysteine-stabilized AMPs and 310 sequences extracted from PDB. The polynomial kernel achieves the best accuracy on 5-fold cross validation (85.81%), while the radial and linear kernels achieve 84.19%. Testing in a blind data set, the polynomial and radial kernels achieve an accuracy of 90.00%, while the linear model achieves 89.33%. The three models reach higher accuracies than previously described methods. A standalone version of CS-AMPPred is available for download at and runs on any Linux machine. PMID:23240023

  6. On one solution of Volterra integral equations of second kind

    NASA Astrophysics Data System (ADS)

    Myrhorod, V.; Hvozdeva, I.

    2016-10-01

    A solution of Volterra integral equations of the second kind with separable and difference kernels based on solutions of corresponding equations linking the kernel and resolvent is suggested. On the basis of a discrete functions class, the equations linking the kernel and resolvent are obtained and the methods of their analytical solutions are proposed. A mathematical model of the gas-turbine engine state modification processes in the form of Volterra integral equation of the second kind with separable kernel is offered.

  7. Super-resolution fusion of complementary panoramic images based on cross-selection kernel regression interpolation.

    PubMed

    Chen, Lidong; Basu, Anup; Zhang, Maojun; Wang, Wei; Liu, Yu

    2014-03-20

    A complementary catadioptric imaging technique was proposed to solve the problem of low and nonuniform resolution in omnidirectional imaging. To enhance this research, our paper focuses on how to generate a high-resolution panoramic image from the captured omnidirectional image. To avoid the interference between the inner and outer images while fusing the two complementary views, a cross-selection kernel regression method is proposed. First, in view of the complementarity of sampling resolution in the tangential and radial directions between the inner and the outer images, respectively, the horizontal gradients in the expected panoramic image are estimated based on the scattered neighboring pixels mapped from the outer, while the vertical gradients are estimated using the inner image. Then, the size and shape of the regression kernel are adaptively steered based on the local gradients. Furthermore, the neighboring pixels in the next interpolation step of kernel regression are also selected based on the comparison between the horizontal and vertical gradients. In simulation and real-image experiments, the proposed method outperforms existing kernel regression methods and our previous wavelet-based fusion method in terms of both visual quality and objective evaluation.
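    The benefit of steering the kernel by local gradients can be seen in a toy anisotropic Nadaraya-Watson estimate: a kernel kept narrow across an edge and wide along it blurs the edge less than an isotropic one. The bandwidths below are fixed by hand rather than steered from estimated gradients as in the paper, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Scattered noisy samples of an image-like function with a vertical edge.
pts = rng.uniform(-1, 1, size=(400, 2))
vals = np.where(pts[:, 0] > 0, 1.0, 0.0) + 0.05 * rng.normal(size=400)

# Anisotropic Nadaraya-Watson estimate at a query point, with separate
# horizontal and vertical bandwidths hx, hy.
def nw(q, hx, hy):
    w = np.exp(-((pts[:, 0] - q[0]) ** 2) / (2 * hx ** 2)
               - ((pts[:, 1] - q[1]) ** 2) / (2 * hy ** 2))
    return np.sum(w * vals) / np.sum(w)

# Near the edge, a kernel narrow across the edge preserves the step better
# than an isotropic wide kernel.
edge_q = (0.05, 0.0)
aniso = nw(edge_q, 0.03, 0.4)   # narrow across the edge, wide along it
iso = nw(edge_q, 0.4, 0.4)      # isotropic, blurs the edge
print(abs(aniso - 1.0) < abs(iso - 1.0))
```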

  8. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
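    The composite-kernel ELM itself reduces to ridge regression in the span of a weighted kernel sum. A sketch with hand-picked weights and parameters; QPSO tuning and the real e-nose data are omitted, and everything below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy features and binary labels standing in for an e-nose dataset.
X = rng.normal(size=(50, 4))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.5)

# Composite kernel: a weighted combination of a Gaussian and a polynomial
# kernel. In QWMK-ELM the weight and kernel parameters are tuned by QPSO;
# here they are simply fixed by hand.
def gauss(A, B, g=0.5):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-g * d2)

def poly(A, B, d=2):
    return (A @ B.T + 1.0) ** d

w = 0.7
def composite(A, B):
    return w * gauss(A, B) + (1 - w) * poly(A, B)

# KELM with the composite kernel: beta = (K + I/C)^{-1} y, f(x) = k(x, X) beta.
C = 10.0
K = composite(X, X)
beta = np.linalg.solve(K + np.eye(len(y)) / C, y)
pred = np.sign(composite(X, X) @ beta)
print(np.mean(pred == y))  # training accuracy
```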

  9. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    PubMed

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
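    The positive semidefiniteness requirement is easy to check numerically for a candidate kernel. A sketch with hypothetical genotype vectors and a Gaussian similarity kernel:

```python
import numpy as np

# A valid kernel must yield a positive semidefinite matrix over any set of
# subjects. Build a Gaussian similarity kernel over invented genotype
# vectors and check PSD via its eigenvalues.
rng = np.random.default_rng(6)
G = rng.integers(0, 3, size=(20, 100)).astype(float)  # 20 subjects, 100 SNPs

d2 = np.sum(G**2, 1)[:, None] + np.sum(G**2, 1)[None, :] - 2 * G @ G.T
K = np.exp(-d2 / d2.mean())

eig = np.linalg.eigvalsh(K)
print(np.all(eig > 0))  # Gaussian kernels are positive definite
```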

  10. Some physical properties of ginkgo nuts and kernels

    NASA Astrophysics Data System (ADS)

    Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.

    2013-12-01

    Some data of the physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (± 2.00) (wet basis) are presented in this paper. It consists of the estimation of the mean length, width, thickness, the geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity measures. The coefficient of static friction for nuts and kernels was determined by using plywood, glass, rubber, and galvanized steel sheet. The data are essential in the field of food engineering especially dealing with design and development of machines, and equipment for processing and handling agriculture products.

  11. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  12. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, which allows accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Machine learning study for the prediction of transdermal peptide

    NASA Astrophysics Data System (ADS)

    Jung, Eunkyoung; Choi, Seung-Hoon; Lee, Nam Kyung; Kang, Sang-Kee; Choi, Yun-Jaie; Shin, Jae-Min; Choi, Kihang; Jung, Dong Hyun

    2011-04-01

    In order to develop a computational method to rapidly evaluate transdermal peptides, we report approaches for predicting the transdermal activity of peptides on the basis of peptide sequence information using an Artificial Neural Network (ANN), Partial Least Squares (PLS), and a Support Vector Machine (SVM). We identified 269 transdermal peptides by the phage display technique and used them as positive controls to develop and test the machine learning models. Combinations of three descriptors with different neural network architectures, numbers of latent variables, and kernel functions were tried in training to make appropriate predictions. The capacity of the models was evaluated by means of statistical indicators including sensitivity, specificity, and the area under the receiver operating characteristic curve (ROC score). In the ROC score-based comparison, all three methods proved capable of providing a reasonable prediction of transdermal peptides. The best result was obtained by the SVM model with a radial basis function and VHSE descriptors. The results indicate that it is possible to discriminate between transdermal peptides and random sequences using our models. We anticipate that our models will be applicable to the prediction of transdermal peptides in large peptide databases, facilitating efficient transdermal drug delivery through intact skin.
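
    A minimal sketch of this evaluation workflow, using scikit-learn's RBF-kernel SVC and ROC AUC. The real study used VHSE descriptors of phage-display peptides; the features below are random stand-ins and purely illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for VHSE-style descriptor vectors (the real study
# used 269 phage-display peptides; these random features are illustrative).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.5, 1.0, (100, 8)), rng.normal(-0.5, 1.0, (100, 8))])
y = np.array([1] * 100 + [0] * 100)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(Xtr, ytr)
# Decision-function scores are sufficient for ROC analysis
auc = roc_auc_score(yte, clf.decision_function(Xte))
```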

  14. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    PubMed

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  15. [Discrimination of varieties of borneol using terahertz spectra based on principal component analysis and support vector machine].

    PubMed

    Li, Wu; Hu, Bing; Wang, Ming-wei

    2014-12-01

    In the present paper, a terahertz time-domain spectroscopy (THz-TDS) identification model for borneol based on principal component analysis (PCA) and support vector machine (SVM) was established. As a common agent in Chinese medicine, borneol comes from different sources and is easily confused in pharmaceutical and trade links, so a rapid, simple, and accurate detection and identification method is needed. Quickly, efficiently, and correctly identifying borneol is important for assuring the quality of borneol products, protecting consumers' rights, and supporting the production and trade of borneol. Terahertz time-domain spectroscopy is a new spectroscopic approach that characterizes materials using terahertz pulses. The terahertz absorption spectra of blumea camphor, borneol camphor, and synthetic borneol were measured in the range of 0.2 to 2 THz with transmission THz-TDS. PCA score plots in 2D (PC1 × PC2) and 3D (PC1 × PC2 × PC3) were obtained for the three kinds of borneol samples, and both showed good clustering of the three kinds of borneol. The score matrix of the first 10 principal components (PCs) was used in place of the original spectral data; 60 samples of the three kinds of borneol were used for training, and the remaining 60 unknown samples were then identified. Four SVM models with different kernel functions were built in this way. Results show that the identification and classification accuracy of the SVM with the RBF kernel is 100% for the three kinds of borneol, so the SVM with the radial basis kernel function was selected to establish the borneol identification model. In addition, in the noisy case, the classification accuracy of all four SVM kernel functions remained above 85%, indicating that SVM has strong generalization ability. This study shows that PCA combined with SVM on borneol terahertz spectra gives good classification and identification results, and provides a new method for species identification of borneol in Chinese medicine.
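
    The PCA-then-SVM workflow (the first 10 PCs replacing the raw spectrum, then an RBF-kernel SVM on the scores) can be sketched with scikit-learn; the synthetic "spectra" below are Gaussian absorption bumps standing in for the real THz measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for THz absorption spectra of three borneol varieties:
# each class is a Gaussian absorption bump at a different frequency plus noise.
rng = np.random.default_rng(3)
freqs = np.linspace(0.2, 2.0, 200)

def spectra(center, n):
    base = np.exp(-((freqs - center) ** 2) / 0.02)
    return base + 0.05 * rng.normal(size=(n, 200))

X = np.vstack([spectra(c, 20) for c in (0.6, 1.1, 1.6)])
y = np.repeat([0, 1, 2], 20)

# First 10 principal components replace the raw spectrum, then RBF-SVM
model = make_pipeline(PCA(n_components=10),
                      SVC(kernel="rbf", gamma="scale", C=1.0))
model.fit(X, y)
train_acc = model.score(X, y)
```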

  16. Kernel-aligned multi-view canonical correlation analysis for image recognition

    NASA Astrophysics Data System (ADS)

    Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao

    2016-09-01

    Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve the problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of our proposed method.

  17. Quantitative comparison of noise texture across CT scanners from different manufacturers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, Justin B.; Christianson, Olav; Samei, Ehsan

    2012-10-15

    Purpose: To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). Methods: The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI) and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625/0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared and kernel pairs that minimized the RMSD and |PFD| were identified. Results: The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm² (0.002 mm⁻¹) to 0.29 mm² (0.74 mm⁻¹). The GE kernels "Soft," "Standard," "Chest," and "Lung" closely matched the Siemens kernels "B35f," "B43f," "B41f," and "B80f" (RMSD < 0.05 mm², |PFD| < 0.02 mm⁻¹, respectively). The GE "Bone," "Bone+," and "Edge" kernels all matched most closely with the Siemens "B75f" kernel, but with sizeable RMSD and |PFD| values up to 0.18 mm² and 0.41 mm⁻¹, respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. Conclusions: It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners could be achieved varies and is limited by the kernels available on each scanner.
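
    A numpy sketch of the comparison metrics: normalize each 2D NPS by its integral, radially average it, then compute the RMSD between the profiles and the peak frequency difference. The grid, bin count, and synthetic NPS shapes below are illustrative choices (not the paper's), and the human-visual-response filtering step is omitted:

```python
import numpy as np

def radial_average(nps2d, pixel_mm=0.5):
    # Radially average a 2D NPS over radial-frequency bins
    n = nps2d.shape[0]
    f = np.fft.fftfreq(n, d=pixel_mm)
    FX, FY = np.meshgrid(f, f)
    fr = np.hypot(FX, FY).ravel()
    vals = nps2d.ravel()
    bins = np.linspace(0, fr.max(), 30)
    idx = np.digitize(fr, bins)
    freq, prof = [], []
    for i in range(1, len(bins) + 1):
        m = idx == i
        if m.any():
            freq.append(fr[m].mean())
            prof.append(vals[m].mean())
    return np.array(freq), np.array(prof)

def compare_kernels(nps_a, nps_b):
    # Normalize by integral value, radially average, then compute RMSD / |PFD|
    fa, pa = radial_average(nps_a / nps_a.sum())
    fb, pb = radial_average(nps_b / nps_b.sum())
    rmsd = np.sqrt(np.mean((pa - pb) ** 2))
    pfd = abs(fa[np.argmax(pa)] - fb[np.argmax(pb)])
    return rmsd, pfd

# Two synthetic NPS with different peak frequencies ("sharp" vs "soft" kernel)
n = 64
f = np.fft.fftfreq(n, d=0.5)
FX, FY = np.meshgrid(f, f)
F = np.hypot(FX, FY)
nps_sharp = F ** 2 * np.exp(-((F / 0.6) ** 2))
nps_soft = F ** 2 * np.exp(-((F / 0.3) ** 2))
rmsd, pfd = compare_kernels(nps_sharp, nps_soft)
rmsd_same, pfd_same = compare_kernels(nps_sharp, nps_sharp)
```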

  18. Quantitative comparison of noise texture across CT scanners from different manufacturers.

    PubMed

    Solomon, Justin B; Christianson, Olav; Samei, Ehsan

    2012-10-01

    To quantitatively compare noise texture across computed tomography (CT) scanners from different manufacturers using the noise power spectrum (NPS). The American College of Radiology CT accreditation phantom (Gammex 464, Gammex, Inc., Middleton, WI) was imaged on two scanners: Discovery CT 750HD (GE Healthcare, Waukesha, WI), and SOMATOM Definition Flash (Siemens Healthcare, Germany), using a consistent acquisition protocol (120 kVp, 0.625∕0.6 mm slice thickness, 250 mAs, and 22 cm field of view). Images were reconstructed using filtered backprojection and a wide selection of reconstruction kernels. For each image set, the 2D NPS were estimated from the uniform section of the phantom. The 2D spectra were normalized by their integral value, radially averaged, and filtered by the human visual response function. A systematic kernel-by-kernel comparison across manufacturers was performed by computing the root mean square difference (RMSD) and the peak frequency difference (PFD) between the NPS from different kernels. GE and Siemens kernels were compared and kernel pairs that minimized the RMSD and |PFD| were identified. The RMSD (|PFD|) values between the NPS of GE and Siemens kernels varied from 0.01 mm(2) (0.002 mm(-1)) to 0.29 mm(2) (0.74 mm(-1)). The GE kernels "Soft," "Standard," "Chest," and "Lung" closely matched the Siemens kernels "B35f," "B43f," "B41f," and "B80f" (RMSD < 0.05 mm(2), |PFD| < 0.02 mm(-1), respectively). The GE "Bone," "Bone+," and "Edge" kernels all matched most closely with Siemens "B75f" kernel but with sizeable RMSD and |PFD| values up to 0.18 mm(2) and 0.41 mm(-1), respectively. These sizeable RMSD and |PFD| values corresponded to visually perceivable differences in the noise texture of the images. It is possible to use the NPS to quantitatively compare noise texture across CT systems. The degree to which similar texture across scanners could be achieved varies and is limited by the kernels available on each scanner.

  19. Local gravity field modeling using spherical radial basis functions and a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Mahbuby, Hany; Safari, Abdolreza; Foroughi, Ismael

    2017-05-01

    Spherical Radial Basis Functions (SRBFs) can express the local gravity field model of the Earth if they are parameterized optimally on or below the Bjerhammar sphere. This parameterization is generally defined as the shape of the base functions, their number, center locations, bandwidths, and scale coefficients. The number/location and bandwidths of the base functions are the most important parameters for accurately representing the gravity field; once they are determined, the scale coefficients can then be computed accordingly. In this study, the point-mass kernel, as the simplest shape of SRBFs, is chosen to evaluate the synthesized free-air gravity anomalies over the rough area of Auvergne, and GNSS/leveling points (synthetic height anomalies) are used to validate the results. A two-step automatic approach is proposed to determine the optimum distribution of the base functions. First, the locations of the base functions and their bandwidths are found using the genetic algorithm; second, the conjugate gradient least squares method is employed to estimate the scale coefficients. The proposed methodology shows promising results. On the one hand, when using the genetic algorithm, the base functions do not need to be set on a regular grid and can move according to the roughness of the topography. In this way, the models meet the desired accuracy with a low number of base functions. On the other hand, the conjugate gradient method removes the bias between the quasigeoid heights derived from the model and those from the GNSS/leveling points, which means there is no need for a corrector surface. The numerical test on the area of interest revealed an RMS of 0.48 mGal for the differences between predicted and observed gravity anomalies, and a corresponding RMS of 9 cm for the differences at the GNSS/leveling points.
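
    The second step (estimating the scale coefficients by conjugate-gradient least squares) can be sketched in isolation. Below, the GA-chosen centers are replaced by fixed synthetic ones in a flat toy geometry, and the point-mass kernel is taken as the reciprocal distance 1/ℓ between observation point and center:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def design_matrix(P, Q):
    # Point-mass kernel 1/distance between observation points P and centers Q
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return 1.0 / d

rng = np.random.default_rng(4)
# Observations on a flat surface; centers buried at depth 2 (toy stand-in
# for base functions below the Bjerhammar sphere)
obs = np.column_stack([rng.uniform(0, 10, 200),
                       rng.uniform(0, 10, 200),
                       np.zeros(200)])
centers = np.column_stack([rng.uniform(0, 10, 25),
                           rng.uniform(0, 10, 25),
                           -2.0 * np.ones(25)])

A = design_matrix(obs, centers)
true_coef = rng.normal(size=25)
g = A @ true_coef + 0.01 * rng.normal(size=200)  # synthetic "gravity anomalies"

# Conjugate-gradient-type least squares for the scale coefficients
coef = lsqr(A, g, atol=1e-10, btol=1e-10)[0]
rms_resid = np.sqrt(np.mean((A @ coef - g) ** 2))
```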

  20. Sepsis mortality prediction with the Quotient Basis Kernel.

    PubMed

    Ribas Ripoll, Vicent J; Vellido, Alfredo; Romero, Enrique; Ruiz-Rodríguez, Juan Carlos

    2014-05-01

    This paper presents an algorithm to assess the risk of death in patients with sepsis. Sepsis is a common clinical syndrome in the intensive care unit (ICU) that can lead to severe sepsis, a severe state of septic shock or multi-organ failure. The proposed algorithm may be implemented as part of a clinical decision support system that can be used in combination with the scores deployed in the ICU to improve the accuracy, sensitivity and specificity of mortality prediction for patients with sepsis. In this paper, we used the Simplified Acute Physiology Score (SAPS) for ICU patients and the Sequential Organ Failure Assessment (SOFA) to build our kernels and algorithms. In the proposed method, we embed the available data in a suitable feature space and use algorithms based on linear algebra, geometry and statistics for inference. We present a simplified version of the Fisher kernel (practical Fisher kernel for multinomial distributions), as well as a novel kernel that we named the Quotient Basis Kernel (QBK). These kernels are used as the basis for mortality prediction using soft-margin support vector machines. The two new kernels presented are compared against other generative kernels based on the Jensen-Shannon metric (centred, exponential and inverse) and other widely used kernels (linear, polynomial and Gaussian). Clinical relevance is also evaluated by comparing these results with logistic regression and the standard clinical prediction method based on the initial SAPS score. As described in this paper, we tested the new methods via cross-validation with a cohort of 400 test patients. The results obtained using our methods compare favourably with those obtained using alternative kernels (80.18% accuracy for the QBK) and the standard clinical prediction method, which are based on the basal SAPS score or logistic regression (71.32% and 71.55%, respectively). 
    The QBK presented a sensitivity and specificity of 79.34% and 83.24%, outperforming the other kernels analysed, logistic regression, and the standard clinical prediction method based on the basal SAPS score. Several scoring systems for patients with sepsis have been introduced and developed over the last 30 years. They allow for the assessment of the severity of disease and provide an estimate of in-hospital mortality. Physiology-based scoring systems are applied to critically ill patients and have a number of advantages over diagnosis-based systems. Severity score systems are often used to stratify critically ill patients for possible inclusion in clinical trials. In this paper, we present an effective algorithm that combines both scoring methodologies for assessing the risk of death in patients with sepsis and that can be used to improve the sensitivity and specificity of the currently available methods. Copyright © 2014 Elsevier B.V. All rights reserved.
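
    A sketch of the kernel-comparison protocol (cross-validated accuracy, sensitivity, and specificity for several SVM kernels). The QBK and Fisher kernels are specific to the paper and not reproduced here; the features below are synthetic stand-ins for SAPS/SOFA-derived variables:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for severity-score-derived features
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (150, 6)), rng.normal(1, 1, (150, 6))])
y = np.array([0] * 150 + [1] * 150)   # 0 = survivor, 1 = non-survivor (toy)

results = {}
for kernel in ("linear", "poly", "rbf"):
    pred = cross_val_predict(SVC(kernel=kernel), X, y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    results[kernel] = {
        "accuracy": (tp + tn) / len(y),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```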

  1. Limits on radial differential rotation in Sun-like stars from parametric fits to oscillation power spectra

    NASA Astrophysics Data System (ADS)

    Nielsen, M. B.; Schunker, H.; Gizon, L.; Schou, J.; Ball, W. H.

    2017-06-01

    Context. Rotational shear in Sun-like stars is thought to be an important ingredient in models of stellar dynamos. Thanks to helioseismology, rotation in the Sun is characterized well, but the interior rotation profiles of other Sun-like stars are not so well constrained. Until recently, measurements of rotation in Sun-like stars have focused on the mean rotation, but little progress has been made on measuring or even placing limits on differential rotation. Aims: Using asteroseismic measurements of rotation we aim to constrain the radial shear in five Sun-like stars observed by the NASA Kepler mission: KIC 004914923, KIC 005184732, KIC 006116048, KIC 006933899, and KIC 010963065. Methods: We used stellar structure models for these five stars from previous works. These models provide the mass density, mode eigenfunctions, and the convection zone depth, which we used to compute the sensitivity kernels for the rotational frequency splitting of the modes. We used these kernels as weights in a parametric model of the stellar rotation profile of each star, where we allowed different rotation rates for the radiative interior and the convective envelope. This parametric model was incorporated into a fit to the oscillation power spectrum of each of the five Kepler stars. This fit included a prior on the rotation of the envelope, estimated from the rotation of surface magnetic activity measured from the photometric variability. Results: The asteroseismic measurements without the application of priors are unable to place meaningful limits on the radial shear. Using a prior on the envelope rotation enables us to constrain the interior rotation rate and thus the radial shear. In the five cases that we studied, the interior rotation rate does not differ from the envelope by more than approximately ± 30%. Uncertainties in the rotational splittings are too large to unambiguously determine the sign of the radial shear.
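
    The parametric forward model can be illustrated numerically: with a sensitivity kernel K(r) normalized to unit integral, the rotational splitting is the kernel-weighted average of the interior and envelope rotation rates, split at the convection-zone base. The kernel shape and rates below are illustrative only; real kernels come from the stellar structure models:

```python
import numpy as np

# Toy sensitivity kernel K(r), normalized to unit integral
r = np.linspace(0, 1, 1000)
dr = r[1] - r[0]
K = r ** 2 * (1 + 3 * r)        # illustrative shape only
K /= K.sum() * dr

r_cz = 0.71                     # convection-zone base (solar-like value)
omega_int, omega_env = 400e-9, 380e-9   # interior / envelope rates in Hz

interior = r <= r_cz
w_int = K[interior].sum() * dr          # kernel weight of the interior
w_env = K[~interior].sum() * dr         # kernel weight of the envelope
splitting = omega_int * w_int + omega_env * w_env
```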

  2. Effect of finite sample size on feature selection and classification: a simulation study.

    PubMed

    Way, Ted W; Sahiner, Berkman; Hadjiiski, Lubomir M; Chan, Heang-Ping

    2010-02-01

    The small number of samples available for training and testing is often the limiting factor in finding the most effective features and designing an optimal computer-aided diagnosis (CAD) system. Training on a limited set of samples introduces bias and variance in the performance of a CAD system relative to that trained with an infinite sample size. In this work, the authors conducted a simulation study to evaluate the performances of various combinations of classifiers and feature selection techniques and their dependence on the class distribution, dimensionality, and the training sample size. The understanding of these relationships will facilitate development of effective CAD systems under the constraint of limited available samples. Three feature selection techniques, the stepwise feature selection (SFS), sequential floating forward search (SFFS), and principal component analysis (PCA), and two commonly used classifiers, Fisher's linear discriminant analysis (LDA) and support vector machine (SVM), were investigated. Samples were drawn from multidimensional feature spaces of multivariate Gaussian distributions with equal or unequal covariance matrices and unequal means, and with equal covariance matrices and unequal means estimated from a clinical data set. Classifier performance was quantified by the area under the receiver operating characteristic curve Az. The mean Az values obtained by resubstitution and hold-out methods were evaluated for training sample sizes ranging from 15 to 100 per class. The number of simulated features available for selection was chosen to be 50, 100, and 200. It was found that the relative performance of the different combinations of classifier and feature selection method depends on the feature space distributions, the dimensionality, and the available training sample sizes. 
    The LDA and SVM with radial kernel performed similarly for most of the conditions evaluated in this study, although the SVM classifier showed a slightly higher hold-out performance than LDA for some conditions and vice versa for others. PCA was comparable to or better than SFS and SFFS for LDA at small sample sizes, but inferior for SVM with polynomial kernel. For the class distributions simulated from clinical data, PCA did not show advantages over the other two feature selection methods. Under this condition, the SVM with radial kernel performed better than the LDA when few training samples were available, while LDA performed better when a large number of training samples were available. None of the investigated feature selection-classifier combinations provided consistently superior performance under the studied conditions for different sample sizes and feature space distributions. In general, the SFFS method was comparable to the SFS method, while PCA may have an advantage for Gaussian feature spaces with unequal covariance matrices. The performance of the SVM with radial kernel was better than, or comparable to, that of the SVM with polynomial kernel under most conditions studied.
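
    The central finite-sample effect (optimistic resubstitution performance versus pessimistic hold-out performance) can be reproduced in miniature with an RBF-kernel SVM on multivariate Gaussian classes; the sample size, dimensionality, and class separation below are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

def one_trial(n_per_class, n_features=50):
    # Two Gaussian classes with a modest mean shift in every feature
    X = np.vstack([rng.normal(0.0, 1.0, (n_per_class, n_features)),
                   rng.normal(0.3, 1.0, (n_per_class, n_features))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)
    clf = SVC(kernel="rbf").fit(Xtr, ytr)
    resub = roc_auc_score(ytr, clf.decision_function(Xtr))    # optimistic
    holdout = roc_auc_score(yte, clf.decision_function(Xte))  # pessimistic
    return resub, holdout

resub, holdout = np.mean([one_trial(30) for _ in range(5)], axis=0)
```

    With many features and few training samples, the resubstitution AUC sits above the hold-out AUC, which is the bias/variance gap the simulation study quantifies.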

  3. Accurate Traffic Flow Prediction in Heterogeneous Vehicular Networks in an Intelligent Transport System Using a Supervised Non-Parametric Classifier.

    PubMed

    El-Sayed, Hesham; Sankar, Sharmi; Daraghmi, Yousef-Awwad; Tiwari, Prayag; Rattagan, Ekarat; Mohanty, Manoranjan; Puthal, Deepak; Prasad, Mukesh

    2018-05-24

    Heterogeneous vehicular networks (HETVNETs) evolve from vehicular ad hoc networks (VANETs), which allow vehicles to always be connected so as to obtain safety services within intelligent transportation systems (ITSs). The services and data provided by HETVNETs should be neither interrupted nor delayed. Therefore, Quality of Service (QoS) improvement of HETVNETs is one of the topics attracting the attention of researchers and the manufacturing community. Several methodologies and frameworks have been devised by researchers to address QoS-prediction service issues. In this paper, to improve QoS, we evaluate various traffic characteristics of HETVNETs and propose a new supervised learning model to capture knowledge on all possible traffic patterns. This model is a refinement of support vector machine (SVM) kernels with a radial basis function (RBF). The proposed model produces better results than SVMs, and outperforms other prediction methods used in a traffic context, as it has lower computational complexity and higher prediction accuracy.
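
    One standard way to refine an RBF-kernel SVM, used here as a stand-in for the paper's tuning procedure, is a cross-validated grid search over C and gamma; the "traffic" features below are hypothetical placeholders for real HETVNET measurements:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Hypothetical traffic-pattern features (e.g. density, speed, flow);
# real HETVNET data would replace this synthetic set.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(m, 0.5, (60, 3)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 60)   # e.g. low / medium / high congestion

grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=5,
)
grid.fit(X, y)
best_score = grid.best_score_
```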

  4. Application of support vector machine for the separation of mineralised zones in the Takht-e-Gonbad porphyry deposit, SE Iran

    NASA Astrophysics Data System (ADS)

    Mahvash Mohammadi, Neda; Hezarkhani, Ardeshir

    2018-07-01

    Classification of mineralised zones is an important factor for the analysis of economic deposits. In this paper, the support vector machine (SVM), a supervised learning algorithm, based on subsurface data is proposed for classification of mineralised zones in the Takht-e-Gonbad porphyry Cu-deposit (SE Iran). The effects of the input features are evaluated via calculating the accuracy rates on the SVM performance. Ultimately, the SVM model, is developed based on input features namely lithology, alteration, mineralisation, the level and, radial basis function (RBF) as a kernel function. Moreover, the optimal amount of parameters λ and C, using n-fold cross-validation method, are calculated at level 0.001 and 0.01 respectively. The accuracy of this model is 0.931 for classification of mineralised zones in the Takht-e-Gonbad porphyry deposit. The results of the study confirm the efficiency of SVM method for classification the mineralised zones.

  5. Detection of Life Threatening Ventricular Arrhythmia Using Digital Taylor Fourier Transform.

    PubMed

    Tripathy, Rajesh K; Zamora-Mendez, Alejandro; de la O Serna, José A; Paternina, Mario R Arrieta; Arrieta, Juan G; Naik, Ganesh R

    2018-01-01

    Accurate detection and classification of life-threatening ventricular arrhythmia episodes such as ventricular fibrillation (VF) and rapid ventricular tachycardia (VT) from the electrocardiogram (ECG) is a challenging problem for patient monitoring and defibrillation therapy. This paper introduces a novel method for detection and classification of life-threatening ventricular arrhythmia episodes. The ECG signal is decomposed into various oscillatory modes using the digital Taylor-Fourier transform (DTFT). The magnitude feature and a novel phase feature, namely the phase difference (PD), are evaluated from the mode Taylor-Fourier coefficients of the ECG signal. The least-squares support vector machine (LS-SVM) classifier with linear and radial basis function (RBF) kernels is employed for detection and classification of VT vs. VF, non-shock vs. shock, and VF vs. non-VF arrhythmia episodes. The accuracy, sensitivity, and specificity values obtained using the proposed method are 89.81, 86.38, and 93.97%, respectively, for the classification of non-VF and VF episodes. Comparison with the performance of state-of-the-art features demonstrates the advantages of the proposed method.
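
    The LS-SVM classifier reduces training to a single linear system. The numpy sketch below uses the regression-style LS-SVM formulation (sign of the output as the class label) with an RBF kernel, on toy two-class data standing in for the DTFT-derived features:

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, gamma_reg=10.0):
    # LS-SVM dual: solve [[0, 1^T], [1, K + I/gamma_reg]] [b; alpha] = [0; y]
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X) + np.eye(n) / gamma_reg
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(Xtrain, Xtest, b, alpha):
    return np.sign(rbf(Xtest, Xtrain) @ alpha + b)

# Toy binary problem standing in for VF vs. non-VF feature vectors
rng = np.random.default_rng(8)
X = np.vstack([rng.normal(-1, 0.4, (25, 2)), rng.normal(1, 0.4, (25, 2))])
y = np.array([-1.0] * 25 + [1.0] * 25)
b, alpha = lssvm_train(X, y)
train_acc = (lssvm_predict(X, X, b, alpha) == y).mean()
```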

  6. Detection of Life Threatening Ventricular Arrhythmia Using Digital Taylor Fourier Transform

    PubMed Central

    Tripathy, Rajesh K.; Zamora-Mendez, Alejandro; de la O Serna, José A.; Paternina, Mario R. Arrieta; Arrieta, Juan G.; Naik, Ganesh R.

    2018-01-01

    Accurate detection and classification of life-threatening ventricular arrhythmia episodes such as ventricular fibrillation (VF) and rapid ventricular tachycardia (VT) from the electrocardiogram (ECG) is a challenging problem for patient monitoring and defibrillation therapy. This paper introduces a novel method for detection and classification of life-threatening ventricular arrhythmia episodes. The ECG signal is decomposed into various oscillatory modes using the digital Taylor-Fourier transform (DTFT). The magnitude feature and a novel phase feature, namely the phase difference (PD), are evaluated from the mode Taylor-Fourier coefficients of the ECG signal. The least-squares support vector machine (LS-SVM) classifier with linear and radial basis function (RBF) kernels is employed for detection and classification of VT vs. VF, non-shock vs. shock, and VF vs. non-VF arrhythmia episodes. The accuracy, sensitivity, and specificity values obtained using the proposed method are 89.81, 86.38, and 93.97%, respectively, for the classification of non-VF and VF episodes. Comparison with the performance of state-of-the-art features demonstrates the advantages of the proposed method.

  7. Automatic classification of sleep stages based on the time-frequency image of EEG signals.

    PubMed

    Bajaj, Varun; Pachori, Ram Bilas

    2013-12-01

    In this paper, a new method for automatic sleep stage classification based on time-frequency image (TFI) of electroencephalogram (EEG) signals is proposed. Automatic classification of sleep stages is an important part for diagnosis and treatment of sleep disorders. The smoothed pseudo Wigner-Ville distribution (SPWVD) based time-frequency representation (TFR) of EEG signal has been used to obtain the time-frequency image (TFI). The segmentation of TFI has been performed based on the frequency-bands of the rhythms of EEG signals. The features derived from the histogram of segmented TFI have been used as an input feature set to multiclass least squares support vector machines (MC-LS-SVM) together with the radial basis function (RBF), Mexican hat wavelet, and Morlet wavelet kernel functions for automatic classification of sleep stages from EEG signals. The experimental results are presented to show the effectiveness of the proposed method for classification of sleep stages from EEG signals. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Primary Stability Recognition of the Newly Designed Cementless Femoral Stem Using Digital Signal Processing

    PubMed Central

    Salleh, Sh-Hussain; Hamedi, Mahyar; Zulkifly, Ahmad Hafiz; Lee, Muhammad Hisyam; Mohd Noor, Alias; Harris, Arief Ruhullah A.; Abdul Majid, Norazman

    2014-01-01

    Stress shielding and micromotion are two major issues which determine the success of newly designed cementless femoral stems. The correlation of experimental validation with finite element analysis (FEA) is commonly used to evaluate the stress distribution and fixation stability of the stem within the femoral canal. This paper focused on the applications of feature extraction and pattern recognition using a support vector machine (SVM) to determine the primary stability of the implant. We measured strain with a triaxial rosette at the metaphyseal region and micromotion with linear variable direct transducers proximally and distally using composite femora. The root mean squares technique was used to feed the classifier, which provides maximum likelihood estimation of amplitude, and a radial basis function was used as the kernel, mapping the datasets into separable hyperplanes. The results showed 100% pattern recognition accuracy using SVM for both strain and micromotion. This indicates that DSP could be applied in determining the primary stability of the femoral stem with high pattern recognition accuracy in biomechanical testing. PMID:24800230

  9. Primary stability recognition of the newly designed cementless femoral stem using digital signal processing.

    PubMed

    Baharuddin, Mohd Yusof; Salleh, Sh-Hussain; Hamedi, Mahyar; Zulkifly, Ahmad Hafiz; Lee, Muhammad Hisyam; Mohd Noor, Alias; Harris, Arief Ruhullah A; Abdul Majid, Norazman

    2014-01-01

    Stress shielding and micromotion are two major issues which determine the success of newly designed cementless femoral stems. The correlation of experimental validation with finite element analysis (FEA) is commonly used to evaluate the stress distribution and fixation stability of the stem within the femoral canal. This paper focused on the applications of feature extraction and pattern recognition using a support vector machine (SVM) to determine the primary stability of the implant. We measured strain with a triaxial rosette at the metaphyseal region and micromotion with linear variable direct transducers proximally and distally using composite femora. The root mean squares technique was used to feed the classifier, which provides maximum likelihood estimation of amplitude, and a radial basis function was used as the kernel, mapping the datasets into separable hyperplanes. The results showed 100% pattern recognition accuracy using SVM for both strain and micromotion. This indicates that DSP could be applied in determining the primary stability of the femoral stem with high pattern recognition accuracy in biomechanical testing.

  10. Pathological brain detection based on wavelet entropy and Hu moment invariants.

    PubMed

    Zhang, Yudong; Wang, Shuihua; Sun, Ping; Phillips, Preetha

    2015-01-01

    With the aim of developing an accurate pathological brain detection system, we proposed a novel automatic computer-aided diagnosis (CAD) to detect pathological brains from normal brains obtained by magnetic resonance imaging (MRI) scanning. The problem still remained a challenge for technicians and clinicians, since MR imaging generated an exceptionally large information dataset. A new two-step approach was proposed in this study. We used wavelet entropy (WE) and Hu moment invariants (HMI) for feature extraction, and the generalized eigenvalue proximal support vector machine (GEPSVM) for classification. To further enhance classification accuracy, the popular radial basis function (RBF) kernel was employed. The 10 runs of k-fold stratified cross validation result showed that the proposed "WE + HMI + GEPSVM + RBF" method was superior to existing methods w.r.t. classification accuracy. It obtained the average classification accuracies of 100%, 100%, and 99.45% over Dataset-66, Dataset-160, and Dataset-255, respectively. The proposed method is effective and can be applied to realistic use.
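Several records in this list pair a classifier with the Gaussian radial basis function (RBF) kernel. As a minimal generic sketch (not any of the cited authors' implementations, and with an illustrative `gamma`), the Gram matrix such a kernel induces can be computed as:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix: K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq_dists = ((X ** 2).sum(axis=1)[:, None]
                + (Y ** 2).sum(axis=1)[None, :]
                - 2.0 * X @ Y.T)
    # Clip tiny negative values caused by floating-point cancellation.
    return np.exp(-gamma * np.clip(sq_dists, 0.0, None))

X = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 2.0]])
K = rbf_kernel(X, X)
```

The resulting matrix is symmetric with unit diagonal, and entries decay with squared Euclidean distance, which is what lets the SVM separate classes in the implicit feature space.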

  11. Comparison Between Linear and Non-parametric Regression Models for Genome-Enabled Prediction in Wheat

    PubMed Central

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-01-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models. PMID:23275882

  12. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    PubMed

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.
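A radial basis function neural network of the kind compared above places Gaussian bumps at fixed centres and fits the output weights linearly. A toy sketch of that idea (the synthetic data, centre grid, and `gamma` are my own choices, not the study's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 1))   # 40 one-dimensional inputs
y = np.sin(3.0 * X[:, 0])                  # smooth target to approximate

centers = np.linspace(-1.0, 1.0, 10)[:, None]  # fixed RBF centres
gamma = 10.0

def hidden(X):
    """Hidden-layer activations: one Gaussian bump per centre."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Output weights by linear least squares on the hidden activations.
w, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)
pred = hidden(X) @ w
```

Because the only trained parameters are the linear output weights, fitting reduces to one least-squares solve, which is part of what makes RBFNNs attractive for genome-enabled prediction.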

  13. Virtual Sensors: Using Data Mining Techniques to Efficiently Estimate Remote Sensing Spectra

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj; Stroeve, Julienne

    2004-01-01

    Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. These instruments are sometimes built in a phased approach, with some measurement capabilities being added in later phases. In other cases, there may not be a planned increase in measurement capability, but technology may mature to the point that it offers new measurement capabilities that were not available before. In still other cases, detailed spectral measurements may be too costly to perform on a large sample. Thus, lower resolution instruments with lower associated cost may be used to take the majority of measurements. Higher resolution instruments, with a higher associated cost may be used to take only a small fraction of the measurements in a given area. Many applied science questions that are relevant to the remote sensing community need to be addressed by analyzing enormous amounts of data that were generated from instruments with disparate measurement capability. This paper addresses this problem by demonstrating methods to produce high accuracy estimates of spectra with an associated measure of uncertainty from data that is perhaps nonlinearly correlated with the spectra. In particular, we demonstrate multi-layer perceptrons (MLPs), Support Vector Machines (SVMs) with Radial Basis Function (RBF) kernels, and SVMs with Mixture Density Mercer Kernels (MDMK). We call this type of an estimator a Virtual Sensor because it predicts, with a measure of uncertainty, unmeasured spectral phenomena.

  14. Integrating support vector machines and random forests to classify crops in time series of Worldview-2 images

    NASA Astrophysics Data System (ADS)

    Zafari, A.; Zurita-Milla, R.; Izquierdo-Verdiguier, E.

    2017-10-01

    Crop maps are essential inputs for the agricultural planning done at various governmental and agribusiness agencies. Remote sensing offers timely and cost-efficient technologies to identify and map crop types over large areas. Among the plethora of classification methods, Support Vector Machine (SVM) and Random Forest (RF) are widely used because of their proven performance. In this work, we study the synergic use of both methods by introducing a random forest kernel (RFK) in an SVM classifier. A time series of multispectral WorldView-2 images acquired over Mali (West Africa) in 2014 was used to develop our case study. Ground truth data containing five common crop classes (cotton, maize, millet, peanut, and sorghum) were collected at 45 farms and used to train and test the classifiers. An SVM with the standard Radial Basis Function (RBF) kernel, an RF, and an SVM-RFK were trained and tested over 10 random training and test subsets generated from the ground data. Results show that the newly proposed SVM-RFK classifier can compete with both RF and SVM-RBF. The overall accuracies based on the spectral bands only are 83, 82, and 83%, respectively. Adding vegetation indices to the analysis results in classification accuracies of 82, 81, and 84% for SVM-RFK, RF, and SVM-RBF, respectively. Overall, it can be observed that the newly tested RFK can compete with the SVM-RBF and RF classifiers in terms of classification accuracy.
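The random forest kernel idea above defines similarity as agreement across many random partitions of the feature space. The following is a deliberately simplified, hypothetical stand-in using random axis-aligned splits rather than the leaf co-occurrence of trained trees that an actual RFK uses:

```python
import numpy as np

def random_split_kernel(X, n_splits=500, seed=0):
    """Toy random-partition kernel: similarity between two samples is the
    fraction of random axis-aligned splits that place them on the same side.
    A real random forest kernel instead counts shared leaves of fitted trees."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    K = np.zeros((n, n))
    for _ in range(n_splits):
        j = rng.integers(d)                       # pick a feature at random
        t = rng.uniform(X[:, j].min(), X[:, j].max())  # pick a threshold
        side = X[:, j] <= t
        K += (side[:, None] == side[None, :])     # 1 when on the same side
    return K / n_splits

X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0]])
K = random_split_kernel(X)
```

The resulting Gram matrix can be fed to any kernel classifier via a precomputed-kernel interface; nearby points end up with similarity near 1, distant points near 0.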

  15. Decision support system for diabetic retinopathy using discrete wavelet transform.

    PubMed

    Noronha, K; Acharya, U R; Nayak, K P; Kamath, S; Bhandary, S V

    2013-03-01

    Prolonged duration of the diabetes may affect the tiny blood vessels of the retina causing diabetic retinopathy. Routine eye screening of patients with diabetes helps to detect diabetic retinopathy at the early stage. It is very laborious and time-consuming for the doctors to go through many fundus images continuously. Therefore, decision support system for diabetic retinopathy detection can reduce the burden of the ophthalmologists. In this work, we have used discrete wavelet transform and support vector machine classifier for automated detection of normal and diabetic retinopathy classes. The wavelet-based decomposition was performed up to the second level, and eight energy features were extracted. Two energy features from the approximation coefficients of two levels and six energy values from the details in three orientations (horizontal, vertical and diagonal) were evaluated. These features were fed to the support vector machine classifier with various kernel functions (linear, radial basis function, polynomial of orders 2 and 3) to evaluate the highest classification accuracy. We obtained the highest average classification accuracy, sensitivity and specificity of more than 99% with support vector machine classifier (polynomial kernel of order 3) using three discrete wavelet transform features. We have also proposed an integrated index called Diabetic Retinopathy Risk Index using clinically significant wavelet energy features to identify normal and diabetic retinopathy classes using just one number. We believe that this (Diabetic Retinopathy Risk Index) can be used as an adjunct tool by the doctors during the eye screening to cross-check their diagnosis.

  16. Feature extraction via KPCA for classification of gait patterns.

    PubMed

    Wu, Jianning; Wang, Jue; Liu, Li

    2007-06-01

    Automated recognition of gait pattern change is important in medical diagnostics as well as in the early identification of at-risk gait in the elderly. We evaluated the use of Kernel-based Principal Component Analysis (KPCA) to extract more gait features (i.e., to obtain more significant amounts of information about human movement) and thus to improve the classification of gait patterns. 3D gait data of 24 young and 24 elderly participants were acquired using an OPTOTRAK 3020 motion analysis system during normal walking, and a total of 36 gait spatio-temporal and kinematic variables were extracted from the recorded data. KPCA was used first for nonlinear feature extraction to then evaluate its effect on a subsequent classification in combination with learning algorithms such as support vector machines (SVMs). Cross-validation test results indicated that the proposed technique could allow spreading the information about the gait's kinematic structure into more nonlinear principal components, thus providing additional discriminatory information for the improvement of gait classification performance. The feature extraction ability of KPCA was only slightly affected by the choice of kernel function (polynomial or radial basis function). The combination of KPCA and SVM could identify young-elderly gait patterns with 91% accuracy, resulting in a markedly improved performance compared to the combination of PCA and SVM. These results suggest that nonlinear feature extraction by KPCA improves the classification of young-elderly gait patterns, and holds considerable potential for future applications in direct dimensionality reduction and interpretation of multiple gait signals.
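KPCA as used above reduces to an eigendecomposition of the doubly centred Gram matrix, with projections given by scaled eigenvectors. A compact sketch (RBF kernel and random toy data are assumptions for illustration, not the gait dataset):

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Kernel PCA on a precomputed (uncentred) Gram matrix K: centre it in
    feature space, then project onto the leading eigenvectors."""
    n = K.shape[0]
    J = np.full((n, n), 1.0 / n)
    Kc = K - J @ K - K @ J + J @ K @ J          # double centring
    vals, vecs = np.linalg.eigh(Kc)             # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Score of training sample i on component k is sqrt(lambda_k) * v_ik.
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-0.1 * sq)                           # RBF Gram matrix
Z = kernel_pca(K, n_components=2)
```

The projected scores `Z` can then be handed to an SVM, mirroring the KPCA-plus-SVM pipeline the study evaluates.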

  17. An intelligent classifier for prognosis of cardiac resynchronization therapy based on speckle-tracking echocardiograms.

    PubMed

    Chao, Pei-Kuang; Wang, Chun-Li; Chan, Hsiao-Lung

    2012-03-01

    Predicting response after cardiac resynchronization therapy (CRT) has been a challenge for cardiologists. About 30% of patients selected on the basis of the standard selection criteria for CRT do not show a response after receiving the treatment. This study aimed to build an intelligent classifier to assist in identifying potential CRT responders by speckle-tracking radial strain based on echocardiograms. The echocardiograms analyzed were acquired before CRT from 26 patients who had received CRT. Sequential forward selection was performed on the parameters obtained by peak-strain timing and phase space reconstruction on speckle-tracking radial strain to find an optimal set of features for creating intelligent classifiers. Support vector machines (SVMs) with linear, quadratic, and polynomial kernels were tested to build classifiers that identify potential responders and non-responders for CRT from the selected features. Based on random sub-sampling validation, the best classification performance was a correct rate of about 95%, with 96-97% sensitivity and 93-94% specificity, achieved by applying an SVM with a quadratic kernel on a set of 3 parameters. The selected 3 parameters contain indexes extracted by both peak-strain timing and phase space reconstruction. An intelligent classifier with an averaged correct rate, sensitivity and specificity above 90% for assisting in identifying CRT responders is built by speckle-tracking radial strain. The classifier can be applied to provide objective suggestions for patient selection for CRT. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Fission Product Release and Survivability of UN-Kernel LWR TRISO Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besmann, Theodore M; Ferber, Mattison K; Lin, Hua-Tay

    2014-01-01

    A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from range calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle as a function of fluence. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers as a function of fluence. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.

  19. Fission product release and survivability of UN-kernel LWR TRISO fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    T. M. Besmann; M. K. Ferber; H.-T. Lin

    2014-05-01

    A thermomechanical assessment of the LWR application of TRISO fuel with UN kernels was performed. Fission product release under operational and transient temperature conditions was determined by extrapolation from fission product recoil calculations and limited data from irradiated UN pellets. Both fission recoil and diffusive release were considered, and internal particle pressures were computed for both 650 and 800 μm diameter kernels as a function of buffer layer thickness. These pressures were used in conjunction with a finite element program to compute the radial and tangential stresses generated within a TRISO particle undergoing burnup. Creep and swelling of the inner and outer pyrolytic carbon layers were included in the analyses. A measure of reliability of the TRISO particle was obtained by computing the probability of survival of the SiC barrier layer and the maximum tensile stress generated in the pyrolytic carbon layers from internal pressure and thermomechanics of the layers. These reliability estimates were obtained as functions of the kernel diameter, buffer layer thickness, and pyrolytic carbon layer thickness. The value of the probability of survival at the end of irradiation was inversely proportional to the maximum pressure.

  20. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    PubMed Central

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
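The composite-kernel strategy described above can be sketched as a non-negative combination of candidate Gram matrices, which is itself a valid kernel. The toy candidate kernels and weights below are illustrative only, not the paper's construction:

```python
import numpy as np

def composite_kernel(kernels, weights):
    """Weighted sum of candidate Gram matrices. Any non-negative combination
    of positive semi-definite kernels is again positive semi-definite."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0.0)
    return sum(w * K for w, K in zip(weights, kernels))

X = np.array([[0.0], [1.0], [3.0]])
K_lin = X @ X.T                                   # linear candidate kernel
sq = (X[:, None, 0] - X[None, :, 0]) ** 2
K_rbf = np.exp(-0.5 * sq)                         # Gaussian candidate kernel
K = composite_kernel([K_lin, K_rbf], [0.3, 0.7])
```

Because the combination stays positive semi-definite, it can be plugged straight into the kernel machine test statistic without knowing in advance which candidate matches the trait architecture.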

  1. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

    Background Support vector machines (SVMs) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for the non-linear (polynomial kernel) SVM over the linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with the linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and time consumption. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) Voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) Considering classification accuracy and time consumption holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) could achieve better accuracy and cost less time. Conclusions/Significance The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice. PMID:21359184

  2. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    PubMed

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

    Support vector machines (SVMs) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for the non-linear (polynomial kernel) SVM over the linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with the linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and time consumption. Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) Voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) Considering classification accuracy and time consumption holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) could achieve better accuracy and cost less time. The present work provides the first empirical result on linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice.

  3. Surface electromyography based muscle fatigue detection using high-resolution time-frequency methods and machine learning algorithms.

    PubMed

    Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S

    2018-02-01

    Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary, with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely the Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD), are proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely naïve Bayes, support vector machines (SVMs) with polynomial and radial basis kernels, random forest and rotation forests, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to show the nonstationary variations of sEMG signals. Most of the features exhibit a statistically significant difference between the muscle fatigue and nonfatigue conditions. The maximum number of features (66%) is reduced by GA and BPSO for the EMBD and BD-TFD, respectively. The combination of the EMBD and a polynomial-kernel SVM is found to be most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. In particular, the combination of the EMBD and a polynomial-kernel SVM could be used to detect dynamic muscle fatigue conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Influence of host diversity on development of epidemics: an evaluation and elaboration of mixture theory.

    PubMed

    Skelsey, P; Rossing, W A H; Kessel, G J T; Powell, J; van der Werf, W

    2005-04-01

    A spatiotemporal/integro-difference equation model was developed and utilized to study the progress of epidemics in spatially heterogeneous mixtures of susceptible and resistant host plants. The effects of different scales and patterns of host genotypes on the development of focal and general epidemics were investigated using potato late blight as a case study. Two different radial Laplace kernels and a two-dimensional Gaussian kernel were used for modeling the dispersal of spores. An analytical expression for the apparent infection rate, r, in general epidemics was tested by comparison with dynamic simulations. A genotype connectivity parameter, q, was introduced into the formula for r. This parameter quantifies the probability of pathogen inoculum produced on a certain host genotype unit reaching the same or another unit of the same genotype. The analytical expression for the apparent infection rate provided accurate predictions of realized r in the simulations of general epidemics. The relationship between r and the radial velocity of focus expansion, c, in focal epidemics, was linear in accordance with theory for homogeneous genotype mixtures. The findings suggest that genotype mixtures that are effective in reducing general epidemics of Phytophthora infestans will likewise curtail focal epidemics and vice versa.

  5. Radial basis function neural networks applied to NASA SSME data

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Dhawan, Atam P.

    1993-01-01

    This paper presents a brief report on the application of Radial Basis Function Neural Networks (RBFNN) to the prediction of sensor values for fault detection and diagnosis of the Space Shuttle's Main Engines (SSME). The locations of the Radial Basis Function (RBF) node centers were determined with a K-means clustering algorithm. A neighborhood operation about these center points was used to determine the variances of the individual processing nodes.
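The centre-selection step described above can be sketched with a plain k-means pass, with each node's width then set from the spread of its assigned points. This is a generic sketch of the idea on synthetic two-cluster data, not the paper's exact neighborhood operation:

```python
import numpy as np

def kmeans(X, k, n_iter=20):
    """Plain k-means for locating RBF node centres (a sketch of the idea)."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)   # deterministic init
    centers = X[idx].copy()
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)                    # nearest-centre labels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # cluster near the origin
               rng.normal(3.0, 0.1, (20, 2))])  # cluster near (3, 3)
centers, labels = kmeans(X, k=2)
# Width (variance proxy) of each RBF node from its neighbourhood of points.
widths = np.array([X[labels == j].std() for j in range(2)])
```

Each resulting `(center, width)` pair parameterizes one Gaussian processing node of the network.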

  6. Semantic Classical Conditioning and Brain-Computer Interface Control: Encoding of Affirmative and Negative Thinking

    PubMed Central

    Ruf, Carolin A.; De Massari, Daniele; Furdea, Adrian; Matuz, Tamara; Fioravanti, Chiara; van der Heiden, Linda; Halder, Sebastian; Birbaumer, Niels

    2013-01-01

    The aim of the study was to investigate conditioned electroencephalography (EEG) responses to factually correct and incorrect statements in order to enable binary communication by means of a brain-computer interface (BCI). In two experiments with healthy participants true and false statements (serving as conditioned stimuli, CSs) were paired with two different tones which served as unconditioned stimuli (USs). The features of the USs were varied and tested for their effectiveness to elicit differentiable conditioned reactions (CRs). After acquisition of the CRs, these CRs to true and false statements were classified offline using a radial basis function kernel support vector machine. A mean single-trial classification accuracy of 50.5% was achieved for differentiating conditioned “yes” versus “no” thinking and mean accuracies of 65.4% for classification of “yes” and 68.8% for “no” thinking (both relative to baseline) were found using the best US. Analysis of the area under the curve of the conditioned EEG responses revealed significant differences between conditioned “yes” and “no” answers. Even though improvements are necessary, these first results indicate that the semantic conditioning paradigm could be a useful basis for further research regarding BCI communication in patients in the complete locked-in state. PMID:23471568

  7. Scramjet Nozzles

    DTIC Science & Technology

    2010-09-01

    and y, the axial and radial coordinates respectively. Point c lies somewhere within the mesh generated by the initial expansion (the kernel). All that...and the surface will be subjected to high heat loads restricting the choice of suitable materials. Material choice has direct implications for...Some legacy trajectory codes might not be able to deal with anything other than axial forces from engines, reflecting the class of problem they were

  8. Free vibrations and buckling analysis of laminated plates by oscillatory radial basis functions

    NASA Astrophysics Data System (ADS)

    Neves, A. M. A.; Ferreira, A. J. M.

    2015-12-01

    In this paper the free vibration and buckling analysis of laminated plates is performed using a global meshless method. A refined version of Kant's theory, which accounts for transverse normal stress and through-the-thickness deformation, is used. The innovation is the use of oscillatory radial basis functions. Numerical examples are performed, and the results are presented and compared to available references. Such functions proved to be an alternative to the traditional nonoscillatory radial basis functions.

  9. Pricing and simulation for real estate index options: Radial basis point interpolation

    NASA Astrophysics Data System (ADS)

    Gong, Pu; Zou, Dong; Wang, Jiayue

    2018-06-01

    This study employs the meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on real estate index. This method combines radial and polynomial basis functions, which can guarantee the interpolation scheme with Kronecker property and effectively improve accuracy. An exponential change of variables, a mesh refinement algorithm and the Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.
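The Kronecker property mentioned above means the combined radial-plus-polynomial interpolant passes exactly through the nodes. A one-dimensional sketch with a multiquadric radial basis and a linear polynomial term (the basis choice, shape parameter, and test function are illustrative, not the paper's setup):

```python
import numpy as np

def rbpi_interpolate(x, y, x_new, c=0.2):
    """Radial basis point interpolation: multiquadric radial terms plus a
    linear polynomial, with side constraints making the fit interpolatory."""
    n = len(x)
    R = np.sqrt((x[:, None] - x[None, :]) ** 2 + c ** 2)   # radial block
    P = np.stack([np.ones_like(x), x], axis=1)             # polynomial block
    m = P.shape[1]
    A = np.block([[R, P], [P.T, np.zeros((m, m))]])
    coeffs = np.linalg.solve(A, np.concatenate([y, np.zeros(m)]))
    a, b = coeffs[:n], coeffs[n:]
    Rn = np.sqrt((x_new[:, None] - x[None, :]) ** 2 + c ** 2)
    Pn = np.stack([np.ones_like(x_new), x_new], axis=1)
    return Rn @ a + Pn @ b

x = np.linspace(0.0, 1.0, 8)
y = np.cos(2.0 * x)
y_nodes = rbpi_interpolate(x, y, x)      # evaluate back at the nodes
```

Evaluating the interpolant back at the nodes reproduces the data, which is the Kronecker property that lets boundary conditions be imposed directly in pricing problems.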

  10. Automation of peanut drying with a sensor network including an in-shell kernel moisture sensor

    USDA-ARS?s Scientific Manuscript database

    Peanut drying is an essential task in the processing and handling of peanuts. Peanuts leave the fields with kernel moisture contents > 20% wet basis and need to be dried to < 10.5% w.b. for grading and storage purposes. Current peanut drying processes utilize decision support software based on model...

  11. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    PubMed Central

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
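A diffusion kernel is the matrix exponential of the negative graph Laplacian, where the graph connects neighbouring discrete inputs. A small sketch via eigendecomposition (the path graph and the diffusion parameter β are illustrative, not the study's marker graph):

```python
import numpy as np

def diffusion_kernel(A, beta=0.5):
    """K = expm(-beta * L) with L = D - A the graph Laplacian, computed
    through the eigendecomposition of the symmetric Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)
    # Exponentiate the eigenvalues, then reassemble the matrix.
    return (vecs * np.exp(-beta * vals)) @ vecs.T

# Path graph 0 - 1 - 2 - 3, e.g. an ordered chain of genotype configurations.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
K = diffusion_kernel(A)
```

On a connected graph every entry of the kernel is strictly positive, and similarity decays with graph distance, which is the sense in which it discretizes the Gaussian kernel.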

  12. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.
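    The exponentially weighted recursive least squares recursion at the core of the scheme above can be sketched for a plain linear-in-parameters model; the kernelized TARMA machinery and sliding window are omitted, and the data, drift, and forgetting factor are invented for illustration:

```python
import numpy as np

def ewrls_update(theta, P, phi, y, lam=0.98):
    """One exponentially weighted RLS step.
    theta: parameter estimate, P: inverse-covariance-like matrix,
    phi: regressor vector, y: new observation,
    lam: forgetting factor (< 1 discounts old data)."""
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector
    err = y - phi @ theta                 # prediction error
    theta = theta + k * err
    P = (P - np.outer(k, phi @ P)) / lam
    return theta, P

# Track a slowly drifting 2-parameter system (synthetic).
rng = np.random.default_rng(3)
theta_hat = np.zeros(2)
P = 1e3 * np.eye(2)
true = np.array([1.0, -0.5])
for t in range(300):
    true = true + 0.001                   # slow parameter drift
    phi = rng.normal(size=2)
    y = phi @ true + 0.01 * rng.normal()
    theta_hat, P = ewrls_update(theta_hat, P, phi, y)
```

    The forgetting factor trades tracking speed against noise sensitivity, which is the role it plays in the identification scheme of the paper as well.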

  13. Meshless Solution of the Problem on the Static Behavior of Thin and Thick Laminated Composite Beams

    NASA Astrophysics Data System (ADS)

    Xiang, S.; Kang, G. W.

    2018-03-01

    For the first time, the static behavior of laminated composite beams is analyzed using the meshless collocation method based on a thin-plate spline radial basis function. In approximating a partial differential equation with a radial basis function, the shape parameter plays an important role in ensuring numerical accuracy. The choice of a shape parameter in the thin-plate spline radial basis function is easier than in other radial basis functions. The governing differential equations are derived based on Reddy's third-order shear deformation theory. Numerical results are obtained for symmetric cross-ply laminated composite beams with simple-simple (simply supported) and cantilever boundary conditions under a uniform load. The results are compared with available published results and demonstrate the accuracy of the present method.
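    A minimal sketch of thin-plate spline radial basis interpolation in 2-D, the basis underlying the collocation method above; the beam equations themselves are omitted, and the scattered points and values are synthetic:

```python
import numpy as np

def tps_phi(r):
    """Thin-plate spline basis r^2 log r, with phi(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r ** 2 * np.log(r)
    return np.nan_to_num(out)

def tps_fit(pts, vals):
    """Fit f(x) = sum_i w_i phi(|x - x_i|) + a + b.x  (2-D TPS).
    The linear polynomial term is the standard augmentation that
    makes the system well posed."""
    n = len(pts)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = tps_phi(r)
    P = np.hstack([np.ones((n, 1)), pts])        # columns: 1, x, y
    M = np.vstack([np.hstack([A, P]),
                   np.hstack([P.T, np.zeros((3, 3))])])
    rhs = np.concatenate([vals, np.zeros(3)])
    coef = np.linalg.solve(M, rhs)
    return coef[:n], coef[n:]

def tps_eval(pts, w, poly, x):
    """Evaluate the fitted TPS at a single 2-D point x."""
    r = np.linalg.norm(x[None, :] - pts, axis=1)
    return tps_phi(r) @ w + poly[0] + poly[1:] @ x

rng = np.random.default_rng(1)
pts = rng.random((25, 2))
vals = np.sin(2 * np.pi * pts[:, 0]) * pts[:, 1]
w, poly = tps_fit(pts, vals)
```

    Note that the classical thin-plate spline has no free shape parameter in its basis, which is one reason the paper describes parameter selection as easier than for, e.g., Gaussian or multiquadric bases.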

  14. Ensemble support vector machine classification of dementia using structural MRI and mini-mental state examination.

    PubMed

    Sørensen, Lauge; Nielsen, Mads

    2018-05-15

    The International Challenge for Automated Prediction of MCI from MRI data offered independent, standardized comparison of machine learning algorithms for multi-class classification of normal control (NC), mild cognitive impairment (MCI), converting MCI (cMCI), and Alzheimer's disease (AD) using brain imaging and general cognition. We proposed to use an ensemble of support vector machines (SVMs) that combined bagging without replacement and feature selection. SVM is the most commonly used algorithm in multivariate classification of dementia, and it was therefore valuable to evaluate the potential benefit of ensembling this type of classifier. The ensemble SVM, using either a linear or a radial basis function (RBF) kernel, achieved multi-class classification accuracies of 55.6% and 55.0% in the challenge test set (60 NC, 60 MCI, 60 cMCI, 60 AD), resulting in a third place in the challenge. Similar feature subset sizes were obtained for both kernels, and the most frequently selected MRI features were the volumes of the two hippocampal subregions left presubiculum and right subiculum. Post-challenge analysis revealed that enforcing a minimum number of selected features and increasing the number of ensemble classifiers improved classification accuracy up to 59.1%. The ensemble SVM outperformed single SVM classifications consistently in the challenge test set. Ensemble methods using bagging and feature selection can improve the performance of the commonly applied SVM classifier in dementia classification. This resulted in competitive classification accuracies in the International Challenge for Automated Prediction of MCI from MRI data. Copyright © 2018 Elsevier B.V. All rights reserved.
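    Bagging without replacement combined with per-model feature selection, as used in this record, can be sketched as follows. A simple nearest-centroid base learner stands in for the SVMs used in the paper, and all data and sizes are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def nearest_centroid_fit(X, y):
    """Store one centroid per class."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, cents = model
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

def bagged_ensemble(X, y, n_models=15, sample_frac=0.8, n_feats=5):
    """Bagging WITHOUT replacement plus random feature selection."""
    models = []
    n = len(X)
    for _ in range(n_models):
        rows = rng.choice(n, size=int(sample_frac * n), replace=False)
        cols = rng.choice(X.shape[1], size=n_feats, replace=False)
        models.append((cols, nearest_centroid_fit(X[np.ix_(rows, cols)], y[rows])))
    return models

def ensemble_predict(models, X):
    votes = np.stack([nearest_centroid_predict(m, X[:, cols])
                      for cols, m in models])
    # majority vote across the ensemble members
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy two-class data (a hypothetical stand-in for MRI features).
X0 = rng.normal(0.0, 1.0, (50, 10))
X1 = rng.normal(1.5, 1.0, (50, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
models = bagged_ensemble(X, y)
acc = (ensemble_predict(models, X) == y).mean()
```

    In the paper the per-model feature subsets come from a feature-selection step rather than random draws, and the base learners are linear- or RBF-kernel SVMs; the voting structure is the same.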

  15. Comparative investigation of vibration and current monitoring for prediction of mechanical and electrical faults in induction motor based on multiclass-support vector machine algorithms

    NASA Astrophysics Data System (ADS)

    Gangsar, Purushottam; Tiwari, Rajiv

    2017-09-01

    This paper presents an investigation of vibration and current monitoring for effective fault prediction in induction motors (IM) using multiclass support vector machine (MSVM) algorithms. Failures of an IM may occur due to the propagation of a mechanical or electrical fault. Hence, for timely detection of these faults, vibration as well as current signals were acquired over multiple experiments at varying speeds and external torques on an experimental test rig. A total of ten different fault conditions frequently encountered in IMs (four mechanical faults, five electrical faults and one no-defect condition) have been considered. For the stator winding fault, and the phase unbalance and single phasing fault, different levels of severity were also considered for the prediction. In this study, the mechanical and electrical faults were identified individually and collectively. Fault predictions were performed using the vibration signal alone, the current signal alone, and the vibration and current signals concurrently. The one-versus-one MSVM was trained at various operating conditions of the IM using the radial basis function (RBF) kernel and tested for the same conditions, giving results in the form of percentage fault prediction. The prediction performance was investigated over a wide range of the RBF kernel parameter, i.e. gamma, and the best result was selected at one optimal value of gamma for each case. Fault predictions were also performed and investigated over a wide range of operational speeds of the IM as well as external torques on it.
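    The RBF kernel parameter gamma swept in this study controls how quickly similarity decays with distance between feature vectors; a tiny illustration with made-up points:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma):
    """RBF kernel value exp(-gamma * ||x1 - x2||^2)."""
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

x_a = np.array([1.0, 0.0])
x_b = np.array([0.0, 1.0])
for gamma in (0.01, 0.1, 1.0, 10.0):
    # larger gamma -> similarity falls off faster with distance
    print(gamma, rbf_kernel(x_a, x_b, gamma))
```

    Small gamma makes distant points look similar (smooth, nearly linear decision boundaries); large gamma makes the kernel highly local, which is why the paper tunes gamma per case.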

  16. Classification of brain tumours using short echo time 1H MR spectra

    NASA Astrophysics Data System (ADS)

    Devos, A.; Lukas, L.; Suykens, J. A. K.; Vanhamme, L.; Tate, A. R.; Howe, F. A.; Majós, C.; Moreno-Torres, A.; van der Graaf, M.; Arús, C.; Van Huffel, S.

    2004-09-01

    The purpose was to objectively compare the application of several techniques and the use of several input features for brain tumour classification using Magnetic Resonance Spectroscopy (MRS). Short echo time 1H MRS signals from patients with glioblastomas ( n = 87), meningiomas ( n = 57), metastases ( n = 39), and astrocytomas grade II ( n = 22) were provided by six centres in the European Union funded INTERPRET project. Linear discriminant analysis, least squares support vector machines (LS-SVM) with a linear kernel and LS-SVM with radial basis function kernel were applied and evaluated over 100 stratified random splittings of the dataset into training and test sets. The area under the receiver operating characteristic curve (AUC) was used to measure the performance of binary classifiers, while the percentage of correct classifications was used to evaluate the multiclass classifiers. The influence of several factors on the classification performance has been tested: L2- vs. water normalization, magnitude vs. real spectra and baseline correction. The effect of input feature reduction was also investigated by using only the selected frequency regions containing the most discriminatory information, and peak integrated values. Using L2-normalized complete spectra the automated binary classifiers reached a mean test AUC of more than 0.95, except for glioblastomas vs. metastases. Similar results were obtained for all classification techniques and input features except for water normalized spectra, where classification performance was lower. This indicates that data acquisition and processing can be simplified for classification purposes, excluding the need for separate water signal acquisition, baseline correction or phasing.

  17. Satisfiability of logic programming based on radial basis function neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged

    2014-07-10

    In this paper, we propose a new technique to test the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic with exactly three variables in each clause. We used the prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm was used to determine the hidden parameters (the centers and the widths). The mean of the sum squared error function is used to measure the activity of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.
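    A minimal RBF network in the spirit of this record, with K-means-placed centres; ordinary least squares stands in for the prey-predator algorithm used for the output weights in the paper, and the logic-programming encoding is replaced by a toy XOR target:

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, iters=50):
    """Plain k-means to place the RBF centres."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None] - centres[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        for j in range(k):
            if (lab == j).any():
                centres[j] = X[lab == j].mean(0)
    return centres

def rbf_design(X, centres, width):
    """Hidden-layer activations: Gaussian bumps around the centres."""
    d2 = ((X[:, None] - centres[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Learn an XOR-like Boolean function (a toy stand-in for the
# clause representation used in the paper).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])
centres = kmeans(X, k=4)
H = rbf_design(X, centres, width=0.5)
w, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
out = H @ w
```

    XOR is not linearly separable in the input space, but becomes linearly solvable in the RBF hidden-layer space, which is the basic mechanism the network exploits.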

  18. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  19. Estimating top-of-atmosphere thermal infrared radiance using MERRA-2 atmospheric data

    NASA Astrophysics Data System (ADS)

    Kleynhans, Tania; Montanaro, Matthew; Gerace, Aaron; Kanan, Christopher

    2017-05-01

    Thermal infrared satellite images have been widely used in environmental studies. However, satellites have limited temporal resolution, e.g., 16-day Landsat or 1- to 2-day Terra MODIS. This paper investigates the use of the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product, produced by NASA's Global Modeling and Assimilation Office (GMAO), to predict global top-of-atmosphere (TOA) thermal infrared radiance. The high temporal resolution of the MERRA-2 data product presents opportunities for novel research and applications. Various methods were applied to estimate TOA radiance from MERRA-2 variables, namely (1) a parameterized physics-based method, (2) linear regression models and (3) non-linear support vector regression. Model prediction accuracy was evaluated using temporally and spatially coincident Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared data as reference data. This research found that support vector regression with a radial basis function kernel produced the lowest error rates. Sources of errors are discussed and defined. Further research is currently being conducted to train deep learning models to predict TOA thermal radiance.

  20. Automated detection of heart ailments from 12-lead ECG using complex wavelet sub-band bi-spectrum features.

    PubMed

    Tripathy, Rajesh Kumar; Dandapat, Samarendra

    2017-04-01

    The complex wavelet sub-band bi-spectrum (CWSB) features are proposed for detection and classification of myocardial infarction (MI), heart muscle disease (HMD) and bundle branch block (BBB) from 12-lead ECG. The dual-tree CW transform of 12-lead ECG produces CW coefficients at different sub-bands. The higher-order CW analysis is used for evaluation of the CWSB. The mean of the absolute value of the CWSB, together with the numbers of negative and positive phase angles from the phase of the CWSB of 12-lead ECG, are evaluated as features. Extreme learning machine and support vector machine (SVM) classifiers are used to evaluate the performance of the CWSB features. Experimental results show that the proposed CWSB features of 12-lead ECG and the SVM classifier are successful for classification of various heart pathologies. The individual accuracy values for the MI, HMD and BBB classes are obtained as 98.37, 97.39 and 96.40%, respectively, using the SVM classifier with a radial basis function kernel. A comparison has also been made with existing 12-lead ECG-based cardiac disease detection techniques.

  1. Gait recognition based on Gabor wavelets and modified gait energy image for human identification

    NASA Astrophysics Data System (ADS)

    Huang, Deng-Yuan; Lin, Ta-Wei; Hu, Wu-Chih; Cheng, Chih-Hsiang

    2013-10-01

    This paper proposes a method for recognizing human identity using gait features based on Gabor wavelets and modified gait energy images (GEIs). Identity recognition by gait generally involves gait representation, extraction, and classification. In this work, a modified GEI convolved with an ensemble of Gabor wavelets is proposed as a gait feature. Principal component analysis is then used to project the Gabor-wavelet-based gait features into a lower-dimension feature space for subsequent classification. Finally, support vector machine classifiers based on a radial basis function kernel are trained and utilized to recognize human identity. The major contributions of this paper are as follows: (1) the consideration of the shadow effect to yield a more complete segmentation of gait silhouettes; (2) the utilization of motion estimation to track people when walkers overlap; and (3) the derivation of modified GEIs to extract more useful gait information. Extensive performance evaluation shows a great improvement of recognition accuracy due to the use of shadow removal, motion estimation, and gait representation using the modified GEIs and Gabor wavelets.

  2. Automated Diagnosis of Glaucoma Using Empirical Wavelet Transform and Correntropy Features Extracted From Fundus Images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Acharya, U Rajendra

    2017-05-01

    Glaucoma is an ocular disorder caused by increased fluid pressure in the optic nerve. It damages the optic nerve and subsequently causes loss of vision. The available scanning methods are Heidelberg retinal tomography, scanning laser polarimetry, and optical coherence tomography. These methods are expensive and require experienced clinicians to use them. So, there is a need to diagnose glaucoma accurately at low cost. Hence, in this paper, we present a new methodology for the automated diagnosis of glaucoma using digital fundus images based on the empirical wavelet transform (EWT). The EWT is used to decompose the image, and correntropy features are obtained from the decomposed EWT components. These extracted features are ranked using a t-value-based feature selection algorithm. Then, these features are used for the classification of normal and glaucoma images using a least-squares support vector machine (LS-SVM) classifier. The LS-SVM is employed for classification with radial basis function, Morlet wavelet, and Mexican-hat wavelet kernels. The classification accuracy of the proposed method is 98.33% and 96.67% using threefold and tenfold cross validation, respectively.

  3. Statistical classification of drug incidents due to look-alike sound-alike mix-ups.

    PubMed

    Wong, Zoie Shui Yee

    2016-06-01

    It has been recognised that medication names that look or sound similar are a cause of medication errors. This study builds statistical classifiers for identifying medication incidents due to look-alike sound-alike mix-ups. A total of 227 patient safety incident advisories related to medication were obtained from the Canadian Patient Safety Institute's Global Patient Safety Alerts system. Eight feature selection strategies based on frequent terms, frequent drug terms and constituent terms were applied. Statistical text classifiers based on logistic regression, support vector machines with linear, polynomial, radial basis and sigmoid kernels, and decision trees were trained and tested. The models developed achieved an average accuracy of above 0.8 across all the model settings. The receiver operating characteristic curves indicated that the classifiers performed reasonably well. The results obtained in this study suggest that statistical text classification can be a feasible method for identifying medication incidents due to look-alike sound-alike mix-ups based on a database of advisories from Global Patient Safety Alerts. © The Author(s) 2014.
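    A bag-of-words logistic regression classifier, one of the model families listed in this record, can be sketched on a tiny made-up incident corpus; all texts, labels, and hyperparameters here are invented for illustration and are not from the study's data:

```python
import numpy as np

def featurize(texts, vocab):
    """Bag-of-words count matrix over a fixed vocabulary."""
    X = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for w in t.lower().split():
            if w in vocab:
                X[i, vocab[w]] += 1
    return X

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain batch gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Hypothetical mini-corpus of incident descriptions (made up).
texts = ["dispensed celebrex instead of celexa lookalike name",
         "hydromorphone confused with morphine sound alike",
         "dose exceeded maximum infusion rate",
         "pump programming error wrong rate entered"]
labels = np.array([1, 1, 0, 0])   # 1 = look-alike/sound-alike mix-up
vocab = {w: i for i, w in
         enumerate(sorted({w for t in texts for w in t.lower().split()}))}
X = featurize(texts, vocab)
w, b = train_logreg(X, labels)
preds = (X @ w + b > 0).astype(int)
```

    Swapping the logistic model for an SVM with one of the kernels named in the abstract would change only the training step; the feature extraction stays the same.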

  4. Fractional Snow Cover Mapping by Artificial Neural Networks and Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Çiftçi, B. B.; Kuter, S.; Akyürek, Z.; Weber, G.-W.

    2017-11-01

    Snow is an important land cover whose distribution over space and time plays a significant role in various environmental processes. Hence, snow cover mapping with high accuracy is necessary to have a real understanding for present and future climate, water cycle, and ecological changes. This study aims to investigate and compare the design and use of artificial neural networks (ANNs) and support vector machines (SVMs) algorithms for fractional snow cover (FSC) mapping from satellite data. ANN and SVM models with different model building settings are trained by using Moderate Resolution Imaging Spectroradiometer surface reflectance values of bands 1-7, normalized difference snow index and normalized difference vegetation index as predictor variables. Reference FSC maps are generated from higher spatial resolution Landsat ETM+ binary snow cover maps. Results on the independent test data set indicate that the developed ANN model with hyperbolic tangent transfer function in the output layer and the SVM model with radial basis function kernel produce high FSC mapping accuracies with the corresponding values of R = 0.93 and R = 0.92, respectively.

  5. Improving medical diagnosis reliability using Boosted C5.0 decision tree empowered by Particle Swarm Optimization.

    PubMed

    Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin

    2015-08-01

    Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of the Particle Swarm Optimization (PSO) combined with C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of our proposed method, it is implemented on one microarray dataset and five different medical data sets obtained from the UCI machine learning databases. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine with a radial basis function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes and Weighted K-Nearest Neighbor). A repeated five-fold cross-validation method was used to assess the performance of the classifiers. Experimental results show that our proposed method not only improves the performance of PSO+C4.5 but also obtains higher classification accuracy compared to the other classification methods.

  6. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    PubMed

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near-optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for near-optimal RBN weights is created such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using the Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.

  7. Evaluation of modulation transfer function of optical lens system by support vector regression methodologies - A comparative study

    NASA Astrophysics Data System (ADS)

    Petković, Dalibor; Shamshirband, Shahaboddin; Saboohi, Hadi; Ang, Tan Fong; Anuar, Nor Badrul; Rahman, Zulkanain Abdul; Pavlović, Nenad T.

    2014-07-01

    The quantitative assessment of image quality is an important consideration in any type of imaging system. The modulation transfer function (MTF) is a graphical description of the sharpness and contrast of an imaging system or of its individual components. The MTF is also known as the spatial frequency response. The MTF curve has different meanings according to the corresponding frequency. The MTF of an optical system specifies the contrast transmitted by the system as a function of image size, and is determined by the inherent optical properties of the system. In this study, the polynomial and radial basis function (RBF) are applied as the kernel function of Support Vector Regression (SVR) to estimate and predict the MTF value of the actual optical system according to experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf attempt to minimize the generalization error bound so as to achieve generalized performance. The experimental results show that an improvement in predictive accuracy and capability of generalization can be achieved by the SVR_rbf approach compared to the SVR_poly soft computing methodology.

  8. Developing a radiomics framework for classifying non-small cell lung carcinoma subtypes

    NASA Astrophysics Data System (ADS)

    Yu, Dongdong; Zang, Yali; Dong, Di; Zhou, Mu; Gevaert, Olivier; Fang, Mengjie; Shi, Jingyun; Tian, Jie

    2017-03-01

    Patient-targeted treatment of non-small cell lung carcinoma (NSCLC) has been well documented according to the histologic subtypes over the past decade. In parallel, the development of quantitative image biomarkers has recently been highlighted as an important diagnostic tool to facilitate histological subtype classification. In this study, we present a radiomics analysis that classifies adenocarcinoma (ADC) and squamous cell carcinoma (SqCC). We extract 52-dimensional, CT-based features (7 statistical features and 45 image texture features) to represent each nodule. We evaluate our approach on a clinical dataset including 324 ADC and 110 SqCC patients with CT image scans. Classification of these features is performed with four different machine-learning classifiers, including Support Vector Machines with Radial Basis Function kernel (RBF-SVM), Random forest (RF), K-nearest neighbor (KNN), and RUSBoost algorithms. To improve the classifiers' performance, an optimal feature subset is selected from the original feature set by using an iterative forward-inclusion and backward-elimination algorithm. Extensive experimental results demonstrate that radiomics features achieve encouraging classification results on both the complete feature set (AUC=0.89) and the optimal feature subset (AUC=0.91).

  9. Automated diagnosis of epilepsy using CWT, HOS and texture parameters.

    PubMed

    Acharya, U Rajendra; Yanti, Ratna; Zheng, Jia Wei; Krishnan, M Muthu Rama; Tan, Jen Hong; Martis, Roshan Joy; Lim, Choo Min

    2013-06-01

    Epilepsy is a chronic brain disorder which manifests as recurrent seizures. Electroencephalogram (EEG) signals are generally analyzed to study the characteristics of epileptic seizures. In this work, we propose a method for the automated classification of EEG signals into normal, interictal and ictal classes using Continuous Wavelet Transform (CWT), Higher Order Spectra (HOS) and textures. First the CWT plot was obtained for the EEG signals and then the HOS and texture features were extracted from these plots. Then the statistically significant features were fed to four classifiers namely Decision Tree (DT), K-Nearest Neighbor (KNN), Probabilistic Neural Network (PNN) and Support Vector Machine (SVM) to select the best classifier. We observed that the SVM classifier with Radial Basis Function (RBF) kernel function yielded the best results with an average accuracy of 96%, average sensitivity of 96.9% and average specificity of 97% for 23.6 s duration of EEG data. Our proposed technique can be used as an automatic seizure monitoring software. It can also assist the doctors to cross check the efficacy of their prescribed drugs.

  10. Regional teleseismic body-wave tomography with component-differential finite-frequency sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Yu, Y.; Shen, Y.; Chen, Y. J.

    2015-12-01

    By using ray theory in conjunction with the Born approximation, Dahlen et al. [2000] computed 3-D sensitivity kernels for finite-frequency seismic traveltimes. A series of studies have been conducted based on this theory to model the mantle velocity structure [e.g., Hung et al., 2004; Montelli et al., 2004; Ren and Shen, 2008; Yang et al., 2009; Liang et al., 2011; Tang et al., 2014]. One of the simplifications in the calculation of the kernels is the paraxial assumption, which may not be strictly valid near the receiver, the region of interest in regional teleseismic tomography. In this study, we improve the accuracy of traveltime sensitivity kernels of the first P arrival by eliminating the paraxial approximation. For calculation efficiency, the traveltime table built by the Fast Marching Method (FMM) is used to calculate both the wave vector and the geometrical spreading at every grid point in the whole volume. The improved kernels maintain the sign, but with different amplitudes at different locations. We also find that when the directivity of the scattered wave is taken into consideration, the differential sensitivity kernel of traveltimes measured at the vertical and radial components of the same receiver concentrates beneath the receiver, which can be used to invert for the structure inside the Earth. Compared with conventional teleseismic tomography, which uses the differential traveltimes between two stations in an array, this method is not affected by instrument response and timing errors, and reduces the uncertainty caused by the finite dimension of the model in regional tomography. In addition, the cross-dependence of P traveltimes on S-wave velocity anomalies is significant and sensitive to the structure beneath the receiver. So, with the component-differential finite-frequency sensitivity kernel, the anomalies of both P-wave and S-wave velocity, as well as the Vp/Vs ratio, can be recovered at the same time.

  11. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations.

    PubMed

    Liu, Zhengbin; Garcia, Arturo; McMullen, Michael D; Flint-Garcia, Sherry A

    2016-08-09

    Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits. Copyright © 2016 Liu et al.

  12. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    PubMed Central

    Liu, Zhengbin; Garcia, Arturo; McMullen, Michael D.; Flint-Garcia, Sherry A.

    2016-01-01

    Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits. PMID:27317774

  13. Towards Seismic Tomography Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.

    2006-12-01

    We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of the event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function.
Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies a much greater number of phases per seismogram can be used compared to classical tomography in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000~windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.
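
    The gradient construction and conjugate-gradient model update described above can be sketched on a toy least-squares misfit (all quantities below are invented; a real implementation would obtain the gradient from the adjoint simulation rather than from an explicit matrix `G`):

```python
import numpy as np

# Toy sketch of the model-update step: minimize a least-squares misfit
# chi(m) = 0.5 * ||G m - d||^2 with conjugate gradients. G, d and the
# model size are invented stand-ins, not the SEM-derived quantities.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 5))     # hypothetical forward operator
m_true = rng.standard_normal(5)      # "true" model
d = G @ m_true                       # noise-free synthetic data

def misfit_and_gradient(m):
    r = G @ m - d                    # data residual
    return 0.5 * r @ r, G.T @ r      # misfit chi and its gradient

m = np.zeros(5)
chi, g = misfit_and_gradient(m)
p = -g                               # initial search direction
for _ in range(50):
    if g @ g < 1e-20:                # converged
        break
    alpha = (g @ g) / (p @ (G.T @ (G @ p)))        # exact step for a quadratic misfit
    m = m + alpha * p
    chi_new, g_new = misfit_and_gradient(m)
    p = -g_new + ((g_new @ g_new) / (g @ g)) * p   # Fletcher-Reeves direction
    chi, g = chi_new, g_new
```

    For a noise-free quadratic misfit the iteration recovers the model exactly; with real data and an adjoint-derived gradient, the same loop structure applies but each gradient evaluation costs one forward plus one adjoint simulation.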

  14. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    PubMed

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified at multiple kernel developmental stages and in multiple environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  15. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    PubMed

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric, positive, always provides 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernels, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed: three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license; it can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics online.
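
    The code-word idea can be illustrated in a few lines of Python. This is a loose sketch, not the published kernel: it collects the LZW dictionary in one pass and scores overlap with a Jaccard ratio, which is symmetric and yields 1.0 for self-similarity as described, but the actual LZW-Kernel similarity is defined differently:

```python
# Illustrative sketch only: one-pass LZW code-word extraction plus a
# Jaccard overlap score. The published LZW-Kernel computes its
# similarity differently; this merely shows what "variable length code
# blocks" look like for a sequence.
def lzw_codewords(s):
    """Return the set of variable-length code words LZW builds for s."""
    dictionary = {c for c in s}      # start from the single characters
    w = ""
    for c in s:
        if w + c in dictionary:
            w += c                   # extend the current phrase
        else:
            dictionary.add(w + c)    # new code word discovered
            w = c
    return dictionary

def lzw_similarity(a, b):
    """Jaccard overlap of LZW code-word sets; 1.0 for self-similarity."""
    A, B = lzw_codewords(a), lzw_codewords(b)
    return len(A & B) / len(A | B)
```

    Because the dictionary is built in a single pass over the sequence, the cost is linear in sequence length, which is the property the abstract highlights for big data settings.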

  16. Individual detection of genetically modified maize varieties in non-identity-preserved maize samples.

    PubMed

    Akiyama, Hiroshi; Sakata, Kozue; Kondo, Kazunari; Tanaka, Asako; Liu, Ming S; Oguchi, Taichi; Furui, Satoshi; Kitta, Kazumi; Hino, Akihiro; Teshima, Reiko

    2008-03-26

    In many countries, the labeling of grains and feed- and foodstuffs is mandatory if the genetically modified organism (GMO) content exceeds a certain level of approved GM varieties. The GMO content in a maize sample containing combined-trait (stacked) GM maize, as determined by the currently available methodology, is likely to be overestimated. However, there has been little information in the literature on the mixing level and varieties of stacked GM maize in real sample grains. For the first time, the GMO content of non-identity-preserved (non-IP) maize samples imported from the United States has been successfully determined by using a previously developed individual kernel detection system coupled to a multiplex qualitative PCR method, followed by multichannel capillary gel electrophoresis analysis. To clarify the GMO content in maize samples imported from the United States, determine how many stacked GM traits are contained therein, and identify which GM trait varieties appeared frequently in 2005, the GMO content (percent) on a kernel basis and the varieties of the GM kernels in the non-IP maize samples were investigated using the individual kernel analysis system. The average (+/-standard deviation) of the GMO content on a kernel basis in five non-IP sample lots was determined to be 51.0+/-21.6%, the percentage of single GM trait grains was 39%, and the percentage of stacked GM trait grains was 12%. The MON810 and NK603 grains were the most frequent varieties among the single GM traits. The most frequent stacked GM traits were MON810xNK603 grains. In addition, the present study provides information relevant to the quantification of GM maize content on a kernel basis for labeling regulation.

  17. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    PubMed

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions have not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two different locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variations, respectively. Increase of KW and KT and reduction of the KL/KT and KW/KT ratios always resulted in significantly higher grain weight. Lines combining the Nanda2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs will definitely help realize these breeding goals.

  18. Retrobiosynthetic nuclear magnetic resonance analysis of amino acid biosynthesis and intermediary metabolism. Metabolic flux in developing maize kernels.

    PubMed

    Glawischnig, E; Gierl, A; Tomas, A; Bacher, A; Eisenreich, W

    2001-03-01

    Information on metabolic networks could provide the basis for the design of targets for metabolic engineering. To study metabolic flux in cereals, developing maize (Zea mays) kernels were grown in sterile culture on medium containing [U-(13)C(6)]glucose or [1,2-(13)C(2)]acetate. After growth, amino acids, lipids, and sitosterol were isolated from kernels as well as from the cobs, and their (13)C isotopomer compositions were determined by quantitative nuclear magnetic resonance spectroscopy. The highly specific labeling patterns were used to analyze the metabolic pathways leading to amino acids and the triterpene on a quantitative basis. The data show that serine is generated from phosphoglycerate, as well as from glycine. Lysine is formed entirely via the diaminopimelate pathway and sitosterol is synthesized entirely via the mevalonate route. The labeling data of amino acids and sitosterol were used to reconstruct the labeling patterns of key metabolic intermediates (e.g. acetyl-coenzyme A, pyruvate, phosphoenolpyruvate, erythrose 4-phosphate, and Rib 5-phosphate) that revealed quantitative information about carbon flux in the intermediary metabolism of developing maize kernels. Exogenous acetate served as an efficient precursor of sitosterol, as well as of amino acids of the aspartate and glutamate family; in comparison, metabolites formed in the plastidic compartments showed low acetate incorporation.

  19. New models to predict depth of infiltration in endometrial carcinoma based on transvaginal sonography.

    PubMed

    De Smet, F; De Brabanter, J; Van den Bosch, T; Pochet, N; Amant, F; Van Holsbeke, C; Moerman, P; De Moor, B; Vergote, I; Timmerman, D

    2006-06-01

    Preoperative knowledge of the depth of myometrial infiltration is important in patients with endometrial carcinoma. This study aimed at assessing the value of histopathological parameters obtained from an endometrial biopsy (Pipelle de Cornier; results available preoperatively) and ultrasound measurements obtained after transvaginal sonography with color Doppler imaging in the preoperative prediction of the depth of myometrial invasion, as determined by the final histopathological examination of the hysterectomy specimen (the gold standard). We first collected ultrasound and histopathological data from 97 consecutive women with endometrial carcinoma and divided them into two groups according to surgical stage (Stages Ia and Ib vs. Stages Ic and higher). The areas (AUC) under the receiver-operating characteristics curves of the subjective assessment of depth of invasion by an experienced gynecologist and of the individual ultrasound parameters were calculated. Subsequently, we used these variables to train a logistic regression model and least squares support vector machines (LS-SVM) with linear and RBF (radial basis function) kernels. Finally, these models were validated prospectively on data from 76 new patients in order to make a preoperative prediction of the depth of invasion. Of all ultrasound parameters, the ratio of the endometrial and uterine volumes had the largest AUC (78%), while that of the subjective assessment was 79%. The AUCs of the blood flow indices were low (range, 51-64%). Stepwise logistic regression selected the degree of differentiation, the number of fibroids, the endometrial thickness and the volume of the tumor. Compared with the AUC of the subjective assessment (72%), prospective evaluation of the mathematical models resulted in a higher AUC for the LS-SVM model with an RBF kernel (77%), but this difference was not significant. 
Single morphological parameters do not improve the predictive power when compared with the subjective assessment of depth of myometrial invasion of endometrial cancer, and blood flow indices do not contribute to the prediction of stage. In this study an LS-SVM model with an RBF kernel gave the best prediction; while this might be more reliable than subjective assessment, confirmation by larger prospective studies is required. Copyright 2006 ISUOG. Published by John Wiley & Sons, Ltd.
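
    The receiver-operating-characteristics machinery used throughout this record can be sketched with the rank-based AUC identity (the scores and labels below are invented, not the study's data):

```python
import numpy as np

# AUC via the rank-sum (Mann-Whitney) identity: the probability that a
# randomly chosen positive case scores above a randomly chosen negative
# case. Ties in the scores are ignored for simplicity.
def roc_auc(scores, labels):
    scores, labels = np.asarray(scores), np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # rank 1 = lowest score
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

    An AUC of 78% as quoted for the volume ratio means that in 78% of positive/negative pairs the positive case received the higher score.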

  20. Gaussian process regression for tool wear prediction

    NASA Astrophysics Data System (ADS)

    Kong, Dongdong; Chen, Yongjie; Li, Ning

    2018-05-01

    To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by utilizing the GPR model. Besides, GPR performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, the existence of noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects so as to greatly compress and smooth the confidence interval, which is conducive to monitoring the tool wear accurately. Moreover, the selection of the kernel parameter in KPCA_IRBF can be carried out easily in a much larger selectable region than with the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests are conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by utilizing the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
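
    The kernel-PCA step can be sketched with a plain RBF kernel (illustrative only; the paper's KPCA_IRBF variant builds an "integrated" RBF construction that is not reproduced here, and the data below are random stand-ins for cutting-force features):

```python
import numpy as np

# Minimal kernel PCA with a plain RBF kernel: form the Gram matrix,
# double-center it, and project onto its leading eigenvectors.
def rbf_kernel(X, gamma=0.5):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components=2, gamma=0.5):
    K = rbf_kernel(X, gamma)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    Kc = J @ K @ J                                 # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(Kc)                # eigenvalues ascending
    top = np.argsort(vals)[::-1][:n_components]    # pick the largest ones
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

rng = np.random.default_rng(0)
Z = kernel_pca(rng.standard_normal((40, 6)))       # toy feature matrix
```

    The projected coordinates are mutually orthogonal and ordered by explained variance, which is the property any KPCA-style feature-fusion front end relies on.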

  1. An introductory analysis of digital infrared thermal imaging guided oral cancer detection using multiresolution rotation invariant texture features

    NASA Astrophysics Data System (ADS)

    Chakraborty, M.; Das Gupta, R.; Mukhopadhyay, S.; Anjum, N.; Patsa, S.; Ray, J. G.

    2017-03-01

    This manuscript presents an analytical treatment of the feasibility of multi-scale Gabor filter bank responses for non-invasive oral cancer pre-screening and detection in the long infrared spectrum. The inability of present healthcare technology to detect oral cancer at an early stage manifests in a high mortality rate. The paper contributes a step towards automation in non-invasive computer-aided oral cancer detection using an amalgamation of image processing and machine intelligence paradigms. Previous works have shown the discriminative difference of facial temperature distribution between a normal subject and a patient. The proposed work, for the first time, exploits this difference further by representing the facial Region of Interest (ROI) using multiscale rotation invariant Gabor filter bank responses, followed by classification using a Radial Basis Function (RBF) kernelized Support Vector Machine (SVM). The proposed study reveals an initial increase in classification accuracy with incrementing image scales followed by degradation of performance; an indication that adding ever finer scales tends to embed noisy information instead of discriminative texture patterns. Moreover, the performance is consistently better for filter responses from profile faces compared to frontal faces. This is primarily attributed to the ineptness of Gabor kernels to analyze low spatial frequency components over a small facial surface area. On our dataset comprising 81 malignant, 59 pre-cancerous, and 63 normal subjects, we achieve state-of-the-art accuracy of 85.16% for normal v/s precancerous and 84.72% for normal v/s malignant classification. This sets a benchmark for further investigation of multiscale feature extraction paradigms in the IR spectrum for oral cancer detection.
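
    A minimal Gabor filter bank of the kind referred to above can be built directly in Python; the kernel size, wavelengths and orientations below are invented examples, and the real pipeline adds multiresolution analysis and rotation-invariant pooling on top:

```python
import numpy as np

# Real-valued Gabor kernel: a plane wave modulated by a Gaussian
# envelope. A "bank" is just this kernel evaluated over a grid of
# wavelengths and orientations; the parameter values are toy choices.
def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

bank = [gabor_kernel(15, w, t, 3.0)
        for w in (4, 8)                                # two toy wavelengths
        for t in np.linspace(0, np.pi, 4, endpoint=False)]  # four orientations
```

    Convolving an image with each kernel in the bank and pooling the responses across orientations is what makes the resulting texture features approximately rotation invariant.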

  2. Spectral Entropy Can Predict Changes of Working Memory Performance Reduced by Short-Time Training in the Delayed-Match-to-Sample Task

    PubMed Central

    Tian, Yin; Zhang, Huiling; Xu, Wei; Zhang, Haiyong; Yang, Li; Zheng, Shuxing; Shi, Yupan

    2017-01-01

    Spectral entropy, which was generated by applying the Shannon entropy concept to the power distribution of the Fourier-transformed electroencephalograph (EEG), was utilized to measure the uniformity of the power spectral density underlying the EEG when subjects performed working memory tasks twice, i.e., before and after training. According to Signed Residual Time (SRT) scores based on the response speed and accuracy trade-off, 20 subjects were divided into two groups, namely high-performance and low-performance groups, to undertake working memory (WM) tasks. We found that spectral entropy derived from the retention period of WM on channel FC4 exhibited a high correlation with SRT scores. To this end, spectral entropy was used in a support vector machine classifier with a linear kernel to differentiate these two groups. Receiver operating characteristics analysis and leave-one-out cross-validation (LOOCV) demonstrated that the averaged classification accuracy (CA) was 90.0 and 92.5% for intra-session and inter-session, respectively, indicating that spectral entropy could be used to distinguish these two different WM performance groups successfully. Furthermore, the support vector regression prediction model with a radial basis function kernel and the root-mean-square error of prediction revealed that spectral entropy could be utilized to predict SRT scores on individual WM performance. After testing the changes in SRT scores and spectral entropy for each subject following short-time training, we found that 16 of 20 subjects' SRT scores were clearly promoted after training, and 15 of 20 subjects' SRT scores showed changes consistent with spectral entropy before and after training. The findings revealed that spectral entropy could be a promising indicator to predict an individual's WM changes with training, and further provide a novel application of WM for brain–computer interfaces. PMID:28912701
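
    The spectral entropy measure itself is compact enough to sketch: Shannon entropy of the normalized FFT power spectrum, scaled to [0, 1] (the signals below are toy stand-ins for EEG segments):

```python
import numpy as np

# Spectral entropy: Shannon entropy of the normalized FFT power
# spectrum, scaled to [0, 1]. A pure tone concentrates power in one bin
# (entropy near 0); white noise spreads it out (entropy near 1).
def spectral_entropy(x):
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()                    # power distribution as probabilities
    p = p[p > 0]                           # guard the logarithm
    return float(-(p * np.log2(p)).sum() / np.log2(len(psd)))

t = np.arange(256)
se_tone = spectral_entropy(np.sin(2 * np.pi * 10 * t / 256))        # near 0
se_noise = spectral_entropy(np.random.default_rng(0).standard_normal(256))
```

    The normalization by log2 of the number of bins makes values comparable across window lengths, which matters when entropies from different recording sessions are fed to one classifier.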

  3. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    PubMed

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With the aim of enhancing the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) and thereby modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and the simple sugar profile was estimated by using partial least squares regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from the control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.
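
    The PCA used to discriminate the oils can be sketched with a plain SVD on a toy matrix standing in for the volatile-compound profiles (samples in rows, compounds in columns):

```python
import numpy as np

# PCA scores via SVD of the centered data matrix; the random matrix is a
# stand-in for a samples-by-volatile-compounds table.
def pca_scores(X, n_components=2):
    Xc = X - X.mean(axis=0)                    # center each variable
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_components] * S[:n_components]   # scores on leading PCs

rng = np.random.default_rng(0)
scores = pca_scores(rng.standard_normal((12, 30)))
```

    Plotting the first two score columns is the usual way such treated-versus-control discrimination is visualized.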

  4. Radial Basis Function Based Quadrature over Smooth Surfaces

    DTIC Science & Technology

    2016-03-24

    [Table fragment from the report] Radial basis functions φ(r) considered include piecewise smooth (conditionally positive definite) kernels, such as the monomial MN |r|^(2m+1) and the thin plate spline TPS |r|^(2m) ln|r|, as well as infinitely smooth kernels. The surrounding text discusses quadrature over smooth surfaces using polynomial interpolants, while [27] couples thin plate spline interpolation (see table 1) with Green's integral formula [29].

  5. QTL Mapping of Kernel Number-Related Traits and Validation of One Major QTL for Ear Length in Maize.

    PubMed

    Huo, Dongao; Ning, Qiang; Shen, Xiaomeng; Liu, Lei; Zhang, Zuxin

    2016-01-01

    The kernel number is a grain yield component and an important maize breeding goal. Ear length, kernel number per row and ear row number are highly correlated with the kernel number per ear, which eventually determines the ear weight and grain yield. In this study, two sets of F2:3 families developed from two bi-parental crosses sharing one inbred line were used to identify quantitative trait loci (QTL) for four kernel number-related traits: ear length, kernel number per row, ear row number and ear weight. A total of 39 QTLs for the four traits were identified in the two populations. The phenotypic variance explained by a single QTL ranged from 0.4% to 29.5%. Additionally, 14 overlapping QTLs formed 5 QTL clusters on chromosomes 1, 4, 5, 7, and 10. Intriguingly, six QTLs for ear length and kernel number per row overlapped in a region on chromosome 1. This region was designated qEL1.10 and was validated as being simultaneously responsible for ear length, kernel number per row and ear weight in a near isogenic line-derived population, suggesting that qEL1.10 was a pleiotropic QTL with large effects. Furthermore, the performance of hybrids generated by crossing 6 elite inbred lines with two near isogenic lines at qEL1.10 showed the breeding value of qEL1.10 for the improvement of the kernel number and grain yield of maize hybrids. This study provides a basis for further fine mapping, molecular marker-aided breeding and functional studies of kernel number-related traits in maize.

  6. Modeling multivariate time series on manifolds with skew radial basis functions.

    PubMed

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters--in particular, the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.
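
    A much-reduced sketch of the iterative idea, with invented toy data: one plain Gaussian is added at the point of largest residual and all weights are refit by least squares. The paper's algorithm instead drives placement with a statistical hypothesis test and uses skew and compactly supported basis functions, none of which is shown here:

```python
import numpy as np

# Greedy RBF fitting (simplified): at each step place a Gaussian at the
# worst-fit point, then least-squares refit every weight. Centers and
# scale are toy choices, not the paper's statistically selected ones.
def design(x, centers, scale):
    return np.stack([np.exp(-((x - c) / scale) ** 2) for c in centers], axis=1)

def fit_greedy_rbf(x, y, n_centers=8, scale=1.0):
    centers, w = [], None
    for _ in range(n_centers):
        resid = y if w is None else y - design(x, centers, scale) @ w
        centers.append(x[np.argmax(np.abs(resid))])   # worst-fit location
        w, *_ = np.linalg.lstsq(design(x, centers, scale), y, rcond=None)
    return np.array(centers), w

x = np.linspace(0, 2 * np.pi, 80)
y = np.sin(x)                                  # toy target curve
centers, w = fit_greedy_rbf(x, y)
rms = np.sqrt(np.mean((y - design(x, centers, scale=1.0) @ w) ** 2))
```

    The point of the residual-driven placement is that basis functions accumulate where the current model fits worst, so model order grows only as the data demand it.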

  7. Stochastic subset selection for learning with kernel machines.

    PubMed

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique to select a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  8. Identification of Fourier transform infrared photoacoustic spectral features for detection of Aspergillus flavus infection in corn.

    PubMed

    Gordon, S H; Schudy, R B; Wheeler, B C; Wicklow, D T; Greene, R V

    1997-04-01

    Aspergillus flavus and other pathogenic fungi display typical infrared spectra which differ significantly from spectra of substrate materials such as corn. On this basis, specific spectral features have been identified which permit detection of fungal infection on the surface of corn kernels by photoacoustic infrared spectroscopy. In a blind study, ten corn kernels showing bright greenish yellow fluorescence (BGYF) in the germ or endosperm and ten BGYF-negative kernels were correctly classified as infected or not infected by Fourier transform infrared photoacoustic spectroscopy. Earlier studies have shown that BGYF-positive kernels contain the bulk of the aflatoxin contaminating grain at harvest. Ten major spectral features, identified by visual inspection of the photoacoustic spectra of A. flavus mycelium grown in culture versus uninfected corn, were interpreted and assigned by theoretical comparisons of the relative chemical compositions of fungi and corn. The spectral features can be built into either empirical or knowledge-based computer models (expert systems) for automatic infrared detection and segregation of grains or kernels containing aflatoxin from the food and feed supply.

  9. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    PubMed Central

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating non-adaptive data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordination sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
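
    The random-projection step can be sketched as follows; the dimensions are arbitrary toy values, and the distance-preservation check illustrates the Johnson-Lindenstrauss property that justifies computing policies in the lower-dimensional subspace:

```python
import numpy as np

# Project d-dimensional feature vectors onto a random k-dimensional
# subspace with a scaled Gaussian matrix. Pairwise distances are
# approximately preserved, which is what lets downstream policy
# iteration run in the lower dimension. Sizes are toy values.
rng = np.random.default_rng(1)
d, k, n = 500, 60, 30
X = rng.standard_normal((n, d))                 # toy high-dimensional features
R = rng.standard_normal((d, k)) / np.sqrt(k)    # E[||x R||^2] = ||x||^2
Z = X @ R                                       # projected features

orig = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
proj = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
iu = np.triu_indices(n, 1)                      # distinct pairs only
ratios = proj[iu] / orig[iu]                    # distortion per pair
```

    The projection is data independent, so it can be drawn once, up front, and applied to every new feature vector at negligible cost.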

  10. On the integral inversion of satellite-to-satellite velocity differences for local gravity field recovery: a theoretical study

    NASA Astrophysics Data System (ADS)

    Eshagh, Mehdi; Šprlák, Michal

    2016-02-01

    The gravity field can be recovered locally from the satellite-to-satellite velocity differences (VDs) between twin satellites moving in the same orbit. To do so, three different integral formulae are derived in this paper to recover the geoid height, the radial component of the gravity anomaly and the gravity disturbance at sea level. Their kernel functions contain the product of two Legendre polynomials with different arguments. Such kernels are relatively complicated and it may be impossible to find their closed forms. However, we could find the one related to recovering the geoid height from the VD data. The use of spectral forms of the kernels is possible and one does not have to generate them to very high degrees. The kernel functions are well-behaved, meaning that they reduce the contribution of far-zone data; for example, a cap margin of 7° is enough for recovering gravity anomalies. This means that the inversion area should be larger by 7° in all directions than the desired area to reduce the effect of the spatial truncation error of the integral formula. Numerical studies using simulated data over Fennoscandia showed that when the distance between the twin satellites is small, higher frequencies of the anomalies can be recovered from the VD data. In the ideal case of a short distance between the satellites flying at the 250 km level, recovering the radial component of the gravity anomaly with an accuracy of 7 mGal is possible over Fennoscandia, if the VD data are contaminated only with the spatial truncation error, which is an ideal assumption. However, the problem is that the power of the VD signal is very low when the satellites are close, and it is very difficult to recognise the signal amongst the noise of the VD data. We also show that for a successful determination of gravity anomalies at sea level from an altitude of 250 km, mean VDs with an accuracy better than 0.01 mm/s are required. 
When coloured noise at this level is used for the VDs at 250 km with a separation of 300 km, the accuracy of the recovery will be about 11 mGal over Fennoscandia. In the case of using the real velocities of the satellites, the main problems are the downward/upward continuation of the VDs on the mean orbital sphere and performing their azimuthal integration.

  11. Automated detection of heart ailments from 12-lead ECG using complex wavelet sub-band bi-spectrum features

    PubMed Central

    Dandapat, Samarendra

    2017-01-01

    The complex wavelet sub-band bi-spectrum (CWSB) features are proposed for the detection and classification of myocardial infarction (MI), heart muscle disease (HMD) and bundle branch block (BBB) from 12-lead ECG. The dual-tree CW transform of 12-lead ECG produces CW coefficients at different sub-bands. Higher-order CW analysis is used for evaluation of the CWSB. The mean of the absolute value of the CWSB, and the numbers of negative and positive phase angles from the phase of the CWSB of 12-lead ECG, are evaluated as features. Extreme learning machine and support vector machine (SVM) classifiers are used to evaluate the performance of the CWSB features. Experimental results show that the proposed CWSB features of 12-lead ECG and the SVM classifier are successful for the classification of various heart pathologies. The individual accuracy values for the MI, HMD and BBB classes are 98.37, 97.39 and 96.40%, respectively, using the SVM classifier with a radial basis function kernel. A comparison has also been made with existing 12-lead ECG-based cardiac disease detection techniques. PMID:28894589

  12. Prediction of Drug-Plasma Protein Binding Using Artificial Intelligence Based Algorithms.

    PubMed

    Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar

    2018-01-01

    Plasma protein binding (PPB) has vital importance in the characterization of drug distribution in the systemic circulation. Unfavorable PPB can pose a negative effect on the clinical development of promising drug candidates. Drug distribution properties should be considered at the initial phases of drug design and development; therefore, PPB prediction models are receiving increased attention. In the current study, we present a systematic approach using Support vector machine, Artificial neural network, k-nearest neighbor, Probabilistic neural network, Partial least squares and Linear discriminant analysis to relate various in vitro and in silico molecular descriptors to a diverse dataset of 736 drugs/drug-like compounds. The overall accuracy of the Support vector machine with a Radial basis function kernel came out to be comparatively better than the rest of the applied algorithms. The training set accuracy, validation set accuracy, precision, sensitivity, specificity and F1 score for the Support vector machine were found to be 89.73%, 89.97%, 92.56%, 87.26%, 91.97% and 0.898, respectively. This model can potentially be useful in the screening of relevant drug candidates at the preliminary stages of drug design and development. Copyright© Bentham Science Publishers.
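
    The evaluation protocol of the abstract above — an RBF-kernel SVM scored on accuracy, precision, sensitivity, specificity and F1 — can be sketched as follows. This is a minimal illustration on synthetic data standing in for the 736-compound descriptor table, not the authors' implementation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Synthetic stand-in for the descriptor matrix and binary PPB classes.
X, y = make_classification(n_samples=736, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
acc = accuracy_score(y_te, y_pred)
prec = precision_score(y_te, y_pred)
sens = recall_score(y_te, y_pred)        # sensitivity = tp / (tp + fn)
spec = tn / (tn + fp)                    # specificity = tn / (tn + fp)
f1 = f1_score(y_te, y_pred)
print(f"acc={acc:.3f} prec={prec:.3f} sens={sens:.3f} spec={spec:.3f} F1={f1:.3f}")
```

    The metric values here depend entirely on the synthetic data; only the protocol mirrors the abstract.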

  13. In silico prediction of Tetrahymena pyriformis toxicity for diverse industrial chemicals with substructure pattern recognition and machine learning methods.

    PubMed

    Cheng, Feixiong; Shen, Jie; Yu, Yue; Li, Weihua; Liu, Guixia; Lee, Philip W; Tang, Yun

    2011-03-01

    There is an increasing need for the rapid safety assessment of chemicals by both industries and regulatory agencies throughout the world. In silico techniques are practical alternatives in environmental hazard assessment, especially for addressing the persistence, bioaccumulation and toxicity potentials of organic chemicals. Tetrahymena pyriformis toxicity is often used as a toxic endpoint. In this study, 1571 diverse unique chemicals were collected from the literature, composing the largest diverse data set for T. pyriformis toxicity. Classification predictive models of T. pyriformis toxicity were developed by substructure pattern recognition and different machine learning methods, including support vector machine (SVM), C4.5 decision tree, k-nearest neighbors and random forest. The results of a 5-fold cross-validation showed that the SVM method performed better than the other algorithms. The overall predictive accuracy of the SVM classification model with a radial basis function kernel was 92.2% for the 5-fold cross-validation and 92.6% for the external validation set. Furthermore, several representative substructure patterns for characterizing T. pyriformis toxicity were also identified via information gain analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
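
    The 5-fold cross-validated comparison of classifiers described above can be sketched like this. The fingerprint features and toxicity labels are simulated, and scikit-learn's CART-style tree stands in for C4.5:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the substructure-fingerprint matrix and labels.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=1)

models = {
    "SVM (RBF)": SVC(kernel="rbf", gamma="scale"),
    "decision tree": DecisionTreeClassifier(random_state=1),  # CART stand-in for C4.5
    "k-NN": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=1),
}
results = {}
for name, model in models.items():
    results[name] = cross_val_score(model, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"{name}: {results[name]:.3f}")
```

    On the real fingerprint data the relative ranking would of course follow the study, not this toy set.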

  14. A Machine Learning Approach to the Detection of Pilot's Reaction to Unexpected Events Based on EEG Signals

    PubMed Central

    Cyran, Krzysztof A.

    2018-01-01

    This work considers the problem of utilizing electroencephalographic signals in systems designed for monitoring and enhancing the performance of aircraft pilots. Systems with such capabilities are generally referred to as cognitive cockpits. This article describes the potential carried by such systems, especially in terms of increasing flight safety, and presents the neuropsychological background of the problem. The research focused mainly on the problem of discriminating between states of brain activity related to idle but focused anticipation of a visual cue and the reaction to it. In particular, the problem of selecting a proper classification algorithm for such tasks is examined. For that purpose, an experiment involving 10 subjects was planned and conducted. Experimental electroencephalographic data were acquired using an Emotiv EPOC+ headset. The proposed methodology involved the Common Spatial Pattern method, popular in biomedical signal processing, the extraction of bandpower features, and an extensive test of different classification algorithms, such as Linear Discriminant Analysis, k-nearest neighbors, Support Vector Machines with linear and radial basis function kernels, Random Forests, and Artificial Neural Networks. PMID:29849544
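
    The bandpower-feature step mentioned above can be sketched with a Welch power spectral density. The signal here is synthetic (a 10 Hz "alpha" tone plus noise), the 128 Hz rate is an assumption about the EPOC+ headset, and in the actual pipeline CSP filtering would precede this step:

```python
import numpy as np
from scipy.signal import welch

fs = 128                                   # assumed EPOC+ sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # 10 Hz + noise

f, psd = welch(eeg, fs=fs, nperseg=256)    # Welch PSD, 0.5 Hz resolution
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {name: psd[(f >= lo) & (f < hi)].sum()  # bandpower feature per band
         for name, (lo, hi) in bands.items()}
print(power)                               # the alpha band should dominate here
```

    These per-band powers, stacked over channels, are the kind of feature vector fed to the classifiers listed in the abstract.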

  15. Sitting Posture Monitoring System Based on a Low-Cost Load Cell Using Machine Learning

    PubMed Central

    Roh, Jongryun; Park, Hyeong-jun; Lee, Kwang Jin; Hyeong, Joonho; Kim, Sayup

    2018-01-01

    Sitting posture monitoring systems (SPMSs) help assess the posture of a seated person in real time and improve sitting posture. To date, reported SPMS studies have required many sensors mounted on the backrest plate and seat plate of a chair. The present study therefore developed a system that measures a total of six sitting postures, including the posture that applies a load to the backrest plate, with four load cells mounted only on the seat plate. Various machine learning algorithms were applied to the body weight ratios measured by the developed SPMS to identify the method that most accurately classified the actual sitting posture of the seated person. After classifying the sitting postures using several classifiers, average and maximum classification rates of 97.20% and 97.94%, respectively, were obtained from nine subjects with a support vector machine using the radial basis function kernel; the results obtained by this classifier showed a statistically significant difference from the results of multiple classifications using other classifiers. The proposed SPMS was able to classify six sitting postures, including the posture with loading on the backrest, and showed the possibility of classifying sitting posture even with a reduced number of sensors. PMID:29329261

  16. Potential of cancer screening with serum surface-enhanced Raman spectroscopy and a support vector machine

    NASA Astrophysics Data System (ADS)

    Li, S. X.; Zhang, Y. J.; Zeng, Q. Y.; Li, L. F.; Guo, Z. Y.; Liu, Z. M.; Xiong, H. L.; Liu, S. H.

    2014-06-01

    Cancer is the most common disease to threaten human health. The ability to screen individuals with malignant tumours using only a blood sample would be greatly advantageous to early diagnosis and intervention. This study explores the possibility of discriminating between cancer patients and normal subjects with serum surface-enhanced Raman spectroscopy (SERS) and a support vector machine (SVM) through a peripheral blood sample. A total of 130 blood samples were obtained from patients with liver cancer, colonic cancer, esophageal cancer, nasopharyngeal cancer and gastric cancer, along with 113 blood samples from normal volunteers. Several diagnostic models were built from the serum SERS spectra using SVM and principal component analysis (PCA) techniques. The results show that a diagnostic accuracy of 85.5% is acquired with a PCA algorithm, while a diagnostic accuracy of 95.8% is obtained using the radial basis function (RBF) kernel PCA-SVM method. The results prove that an RBF kernel PCA-SVM technique is superior to PCA and conventional SVM (C-SVM) algorithms in classifying serum SERS spectra. The study demonstrates that serum SERS, in combination with SVM techniques, has great potential for screening patients with any solid malignant tumour through a peripheral blood sample.
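
    A PCA-SVM pipeline of the kind described above can be sketched in a few lines: project each spectrum onto principal components, then classify with an RBF-kernel SVM. The "spectra" below are random stand-ins for serum SERS measurements, with a small synthetic class offset:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(243, 1024))      # 243 "spectra", 1024 wavenumber bins
y = rng.integers(0, 2, size=243)      # 0 = normal, 1 = cancer (synthetic labels)
X[y == 1] += 0.1                      # small offset standing in for spectral differences

model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
score = model.fit(X, y).score(X, y)   # training accuracy on the synthetic data
print(score)
```

    In practice the model would be evaluated on held-out patients; the fit-and-score on one set here is only to keep the sketch short.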

  17. Using radial NMR profiles to characterize pore size distributions

    NASA Astrophysics Data System (ADS)

    Deriche, Rachid; Treilhard, John

    2012-02-01

    Extracting information about axon diameter distributions in the brain is a challenging task which provides useful information for medical purposes; for example, the ability to characterize and monitor axon diameters would be useful in diagnosing and investigating diseases like amyotrophic lateral sclerosis (ALS) [1] or autism [2]. Three families of operators are defined by Ozarslan [3], whose action upon an NMR attenuation signal extracts the moments of the pore size distribution of the ensemble under consideration; a numerical method is also proposed to continuously reconstruct a discretely sampled attenuation profile using the eigenfunctions of the simple harmonic oscillator Hamiltonian: the SHORE basis. The work presented here extends Ozarslan's method to other bases that can offer a better description of attenuation signal behaviour; in particular, we propose the use of the radial Spherical Polar Fourier (SPF) basis. Testing is performed to contrast the efficacy of the radial SPF basis and the SHORE basis in practical attenuation signal reconstruction. The robustness of the method to additive noise is tested and analysed. We demonstrate that a low-order attenuation signal reconstruction outperforms a higher-order reconstruction in subsequent moment estimation under noisy conditions. We propose the simulated annealing algorithm for basis function scale parameter estimation. Finally, analytic expressions are derived and presented for the action of the operators on the radial SPF basis (obviating the need for numerical integration, thus avoiding a spectrum of possible sources of error).

  18. Kinetic Rate Kernels via Hierarchical Liouville-Space Projection Operator Approach.

    PubMed

    Zhang, Hou-Dao; Yan, YiJing

    2016-05-19

    Kinetic rate kernels in general multisite systems are formulated on the basis of a nonperturbative quantum dissipation theory, the hierarchical equations of motion (HEOM) formalism, together with the Nakajima-Zwanzig projection operator technique. The present approach exploits the HEOM-space linear algebra. The quantum non-Markovian site-to-site transfer rate can be faithfully evaluated via projected HEOM dynamics. The developed method is exact, as evident by the comparison to the direct HEOM evaluation results on the population evolution.

  19. Characterization and fine mapping of qkc7.03: a major locus for kernel cracking in maize.

    PubMed

    Yang, Mingtao; Chen, Lin; Wu, Xun; Gao, Xing; Li, Chunhui; Song, Yanchun; Zhang, Dengfeng; Shi, Yunsu; Li, Yu; Li, Yong-Xiang; Wang, Tianyu

    2018-02-01

    A major locus conferring kernel cracking in maize was characterized and fine mapped to an interval of 416.27 kb. By combining these results with a transcriptomic analysis, the candidate gene was inferred. Seed development requires a proper structural and physiological balance between the maternal tissues and the internal structures of the seeds. In maize, kernel cracking is a disorder in this balance that seriously limits quality and yield and is characterized by a cracked pericarp at the kernel top and an everted endosperm. This study elucidated the genetic basis and characterization of kernel cracking. Primarily, a near isogenic line (NIL) with a B73 background exhibited steady kernel cracking across environments. Therefore, derived mapping populations were developed from this NIL and its recurrent parent B73. A major locus on chromosome 7, qkc7.03, was identified to be associated with the cracking performance. According to a progeny test of recombination events, qkc7.03 was fine mapped to a physical interval of 416.27 kb. In addition, obvious differences were observed in embryo development and starch granule arrangement within the endosperm between the NIL and its recurrent parent upon the occurrence of kernel cracking. Moreover, compared to its recurrent parent, the transcriptome of the NIL showed significantly down-regulated expression of genes related to zeins, carbohydrate synthesis and MADS-domain transcription factors. The transcriptomic analysis revealed ten annotated genes within the target region of qkc7.03, and only GRMZM5G899476 was differentially expressed between the NIL and its recurrent parent, indicating that this gene might be a candidate gene for kernel cracking. The results of this study facilitate the understanding of the potential mechanism underlying kernel cracking in maize.

  20. Numerical Technique for Analyzing Rotating Rake Mode Measurements in a Duct With Passive Treatment and Shear Flow

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Sutliff, Daniel L.

    2007-01-01

    A technique is presented for the analysis of measured data obtained from a rotating microphone rake system. The system is designed to measure the interaction modes of ducted fans. A Fourier analysis of the data from the rotating system results in a set of circumferential mode levels at each radial location of a microphone inside the duct. Radial basis functions are then least-squares fit to this data to obtain the radial mode amplitudes. For ducts with soft walls and mean flow, the radial basis functions must be numerically computed. The linear companion matrix method is used to obtain both the eigenvalues of interest, without an initial guess, and the radial basis functions. The governing equations allow for the mean flow to have a boundary layer at the wall. In addition, a nonlinear least-squares method is used to adjust the wall impedance to best fit the data in an attempt to use the rotating system as an in-duct wall impedance measurement tool. Simulated and measured data are used to show the effects of wall impedance and mean flow on the computed results.
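
    The least-squares step described above — fitting radial basis functions to circumferential-mode levels measured at several radii — can be sketched for the simplest case. For a hard-walled duct the radial basis functions are Bessel functions J_m(k_mn r); the eigenvalues here come from the hard-wall condition rather than the companion-matrix eigensolve the paper uses for soft walls and shear flow:

```python
import numpy as np
from scipy.special import jv, jnp_zeros

m = 2                                       # circumferential mode order
k = jnp_zeros(m, 3)                         # hard-wall radial eigenvalues (unit radius)
r = np.linspace(0.2, 1.0, 16)               # microphone radii
A_true = np.array([1.0, 0.4, 0.1])          # radial mode amplitudes to recover

B = jv(m, np.outer(r, k))                   # radial basis functions J_m(k_mn r)
rng = np.random.default_rng(0)
p = B @ A_true + 0.01 * rng.normal(size=r.size)   # simulated mode-level data

A_fit, *_ = np.linalg.lstsq(B, p, rcond=None)     # least-squares amplitude fit
print(A_fit)                                # close to [1.0, 0.4, 0.1]
```

    The soft-wall, sheared-flow case replaces the J_m columns of B with numerically computed eigenfunctions, but the least-squares structure is the same.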

  1. A well-balanced meshless tsunami propagation and inundation model

    NASA Astrophysics Data System (ADS)

    Brecht, Rüdiger; Bihlo, Alexander; MacLachlan, Scott; Behrens, Jörn

    2018-05-01

    We present a novel meshless tsunami propagation and inundation model. We discretize the nonlinear shallow-water equations using a well-balanced scheme relying on radial basis function based finite differences. For the inundation model, radial basis functions are used to extrapolate the dry region from nearby wet points. Numerical results against standard one- and two-dimensional benchmarks are presented.
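
    The wetting/drying idea above — extrapolating the dry region from nearby wet points with radial basis functions — can be illustrated with SciPy's `RBFInterpolator`, which here stands in for the paper's RBF-FD machinery:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
wet = rng.uniform(0.0, 1.0, size=(40, 2))   # scattered "wet" nodes
eta = 0.5 + 0.3 * wet[:, 0]                 # water surface height, linear in x

rbf = RBFInterpolator(wet, eta, kernel="thin_plate_spline")
dry = np.array([[1.1, 0.5], [1.2, 0.3]])    # nodes just outside the wet region
print(rbf(dry))                             # linear surface extended: ~[0.83, 0.86]
```

    The thin-plate-spline kernel includes a linear polynomial tail, so a planar water surface extrapolates exactly; a real scheme would also enforce well-balancing and positivity of the water depth.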

  2. RBF kernel based support vector regression to estimate the blood volume and heart rate responses during hemodialysis.

    PubMed

    Javed, Faizan; Chan, Gregory S H; Savkin, Andrey V; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H

    2009-01-01

    This paper uses non-linear support vector regression (SVR) to model the blood volume and heart rate (HR) responses in 9 hemodynamically stable kidney failure patients during hemodialysis. Using radial basis function (RBF) kernels, non-parametric models of the relative blood volume (RBV) change with time, as well as the percentage change in HR with respect to RBV, were obtained. The ε-insensitive loss function was used for SVR modeling. Selection of the design parameters, which include the capacity (C), the insensitivity region (ε) and the RBF kernel parameter (sigma), was made based on a grid search approach, and the selected models were cross-validated using the average mean square error (AMSE) calculated from testing data based on a k-fold cross-validation technique. Linear regression was also applied to fit the curves and the AMSE was calculated for comparison with SVR. For the model of RBV with time, SVR gave a lower AMSE for both training (AMSE=1.5) and testing data (AMSE=1.4) compared to linear regression (AMSE=1.8 and 1.5). SVR also provided a better fit of HR with RBV for both training and testing data (AMSE=15.8 and 16.4) compared to linear regression (AMSE=25.2 and 20.1).
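
    The model-selection step described above — a grid search over C, ε and the RBF width, scored by k-fold mean squared error — can be sketched as follows. The RBV-vs-time curve is synthetic, and scikit-learn parameterizes the RBF width as gamma = 1/(2·sigma²) rather than sigma:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, KFold

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 80)[:, None]          # hours into dialysis
rbv = -3.0 * (1 - np.exp(-t.ravel())) + 0.1 * rng.normal(size=80)  # % RBV change

grid = {"C": [1, 10, 100], "epsilon": [0.01, 0.1], "gamma": [0.1, 1.0]}
search = GridSearchCV(SVR(kernel="rbf"), grid,
                      cv=KFold(5, shuffle=True, random_state=0),
                      scoring="neg_mean_squared_error")
search.fit(t, rbv)
print(search.best_params_, -search.best_score_)   # cross-validated MSE
```

    The cross-validated MSE plays the role of the paper's AMSE; a linear-regression baseline would be fitted to the same folds for comparison.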

  3. Genetic variability of the phloem sap metabolite content of maize (Zea mays L.) during the kernel-filling period.

    PubMed

    Yesbergenova-Cuny, Zhazira; Dinant, Sylvie; Martin-Magniette, Marie-Laure; Quilleré, Isabelle; Armengaud, Patrick; Monfalet, Priscilla; Lea, Peter J; Hirel, Bertrand

    2016-11-01

    Using a metabolomic approach, we have quantified the metabolite composition of the phloem sap exudate of seventeen European and American lines of maize that had been previously classified into five main groups on the basis of molecular marker polymorphisms. In addition to sucrose, glutamate and aspartate, which are abundant in the phloem sap of many plant species, large quantities of aconitate and alanine were also found in the phloem sap exudates of maize. Genetic variability of the phloem sap composition was observed in the different maize lines, although there was no obvious relationship between the phloem sap composition and the five previously classified groups. However, following hierarchical clustering analysis there was a clear relationship between two of the subclusters of lines defined on the basis of the composition of the phloem sap exudate and the earliness of silking date. A comparison between the metabolite contents of the ear leaves and the phloem sap exudates of each genotype revealed that the relative content of most of the carbon- and nitrogen-containing metabolites was similar. Correlation studies performed between the metabolite content of the phloem sap exudates and yield-related traits also revealed that some carbohydrates, such as arabitol and sucrose, correlated negatively with kernel yield and positively with kernel weight, respectively. A positive correlation was also found between kernel number and soluble histidine. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Prediction of gas chromatographic retention indices by the use of radial basis function neural networks.

    PubMed

    Yao, Xiaojun; Zhang, Xiaoyun; Zhang, Ruisheng; Liu, Mancang; Hu, Zhide; Fan, Botao

    2002-05-16

    A new method for the prediction of retention indices for a diverse set of compounds from their physicochemical parameters has been proposed. The two input parameters used to represent molecular properties are the boiling point and the molar volume. Models relating the physicochemical parameters to the retention indices of the compounds are constructed by means of radial basis function neural networks (RBFNNs). To get the best prediction results, some strategies are also employed to optimize the topology and learning parameters of the RBFNNs. For the test set, a predictive correlation coefficient R=0.9910 and a root mean squared error of 14.1 are obtained. The results show that radial basis function networks can give satisfactory prediction ability, and their optimization is less time-consuming and easy to implement.
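
    A minimal RBF neural network of the kind used above can be written in a few lines: Gaussian hidden units on fixed centres, with the output weights fitted by linear least squares. The two inputs and the target are simulated stand-ins for (boiling point, molar volume) and the retention index; the real work lies in tuning the centres and widths:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 2))       # scaled (boiling point, molar volume) stand-ins
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # smooth stand-in for the retention index

centres = X[rng.choice(100, size=20, replace=False)]  # 20 training points as centres
width = 0.3                                # common Gaussian width (untuned here)

def hidden(Z):
    d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))  # Gaussian hidden-unit activations

w, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)  # linear output layer
pred = hidden(X) @ w
print(np.corrcoef(y, pred)[0, 1])          # training correlation R
```

    The linearity of the output layer is what makes RBFNN training fast compared with backpropagation networks, which is the "less time-consuming" point the abstract makes.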

  5. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫_V K_m δln m d³x + ∫_Σ K_d δln d d²x + ∫_{Σ_FS} K_∇d · ∇_Σ δln d d²x, where δln m = δm/m denotes relative model perturbations in the volume V, δln d denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇_Σ δln d denotes surface gradients in relative topographic variations on fluid-solid boundaries Σ_FS. The 3-D Fréchet kernel K_m determines the sensitivity to model perturbations δln m, and the 2-D kernels K_d and K_∇d determine the sensitivity to topographic variations δln d. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  6. Observation of a 3D Magnetic Null Point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, P.; Falco, M.; Guglielmino, S. L.

    2017-03-10

    We describe high-resolution observations of a GOES B-class flare characterized by a circular ribbon at the chromospheric level, corresponding to the network at the photospheric level. We interpret the flare as a consequence of a magnetic reconnection event that occurred at a three-dimensional (3D) coronal null point located above the supergranular cell. The potential field extrapolation of the photospheric magnetic field indicates that the circular chromospheric ribbon is cospatial with the fan footpoints, while the ribbons of the inner and outer spines look like compact kernels. We found new interesting observational aspects that need to be explained by models: (1) a loop corresponding to the outer spine became brighter a few minutes before the onset of the flare; (2) the circular ribbon was formed by several adjacent compact kernels characterized by a size of 1″–2″; (3) the kernels with a stronger intensity emission were located at the outer footpoint of the darker filaments, departing radially from the center of the supergranular cell; (4) these kernels started to brighten sequentially in clockwise direction; and (5) the site of the 3D null point and the shape of the outer spine were detected by RHESSI in the low-energy channel between 6.0 and 12.0 keV. Taking into account all these features and the length scales of the magnetic systems involved in the event, we argue that the low intensity of the flare may be ascribed to the low amount of magnetic flux and to its symmetric configuration.

  7. Heavy and Heavy-Light Mesons in the Covariant Spectator Theory

    NASA Astrophysics Data System (ADS)

    Stadler, Alfred; Leitão, Sofia; Peña, M. T.; Biernat, Elmar P.

    2018-05-01

    The masses and vertex functions of heavy and heavy-light mesons, described as quark-antiquark bound states, are calculated with the Covariant Spectator Theory (CST). We use a kernel with an adjustable mixture of Lorentz scalar, pseudoscalar, and vector linear confining interaction, together with a one-gluon-exchange kernel. A series of fits to the heavy and heavy-light meson spectrum were calculated, and we discuss what conclusions can be drawn from it, especially about the Lorentz structure of the kernel. We also apply the Brodsky-Huang-Lepage prescription to express the CST wave functions for heavy quarkonia in terms of light-front variables. They agree remarkably well with light-front wave functions obtained in the Hamiltonian basis light-front quantization approach, even in excited states.

  8. An Intelligent Approach to Educational Data: Performance Comparison of the Multilayer Perceptron and the Radial Basis Function Artificial Neural Networks

    ERIC Educational Resources Information Center

    Kayri, Murat

    2015-01-01

    The objective of this study is twofold: (1) to investigate the factors that affect the success of university students by employing two artificial neural network methods (i.e., multilayer perceptron [MLP] and radial basis function [RBF]); and (2) to compare the effects of these methods on educational data in terms of predictive ability. The…

  9. A Radial Basis Function Approach to Financial Time Series Analysis

    DTIC Science & Technology

    1993-12-01

    including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data... collection of practical techniques to address these issues for a modeling methodology, Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes

  10. Singularity Preserving Numerical Methods for Boundary Integral Equations

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki (Principal Investigator)

    1996-01-01

    In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.

  11. A 3D Ginibre Point Field

    NASA Astrophysics Data System (ADS)

    Kargin, Vladislav

    2018-06-01

    We introduce a family of three-dimensional random point fields using the concept of the quaternion determinant. The kernel of each field is an n-dimensional orthogonal projection on a linear space of quaternionic polynomials. We find explicit formulas for the basis of the orthogonal quaternion polynomials and for the kernel of the projection. As the number of particles n → ∞, we calculate the scaling limits of the point field in the bulk and at the center of coordinates. We compare our construction with the previously introduced Fermi-sphere point field process.

  12. A Meshless Method Using Radial Basis Functions for Beam Bending Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2004-01-01

    A meshless local Petrov-Galerkin (MLPG) method that uses radial basis functions (RBFs) as trial functions in the study of Euler-Bernoulli beam problems is presented. RBFs, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as they are in the conventional MLPG method. Compactly and noncompactly supported RBFs are considered. Noncompactly supported cubic RBFs are found to be preferable. Patch tests, mixed boundary value problems, and problems with complex loading conditions are considered. Results obtained from the radial basis MLPG method are either of comparable or better accuracy than those obtained when using the conventional MLPG method.

  13. Hi-Res scan mode in clinical MDCT systems: Experimental assessment of spatial resolution performance

    PubMed Central

    Cruz-Bastida, Juan P.; Gomez-Cardona, Daniel; Li, Ke; Sun, Heyi; Hsieh, Jiang; Szczykutowicz, Timothy P.; Chen, Guang-Hong

    2016-01-01

    Purpose: The introduction of a High-Resolution (Hi-Res) scan mode and another associated option that combines Hi-Res mode with the so-called High Definition (HD) reconstruction kernels (referred to as a Hi-Res/HD mode in this paper) in some multi-detector CT (MDCT) systems offers new opportunities to increase spatial resolution for some clinical applications that demand high spatial resolution. The purpose of this work was to quantify the in-plane spatial resolution along both the radial direction and tangential direction for the Hi-Res and Hi-Res/HD scan modes at different off-center positions. Methods: A technique was introduced and validated to address the signal saturation problem encountered in the attempt to quantify spatial resolution for the Hi-Res and Hi-Res/HD scan modes. Using the proposed method, the modulation transfer functions (MTFs) of a 64-slice MDCT system (Discovery CT750 HD, GE Healthcare) equipped with both Hi-Res and Hi-Res/HD modes were measured using a metal bead at nine different off-centered positions (0–16 cm with a step size of 2 cm); at each position, both conventional scans and Hi-Res scans were performed. For each type of scan and position, 80 repeated acquisitions were performed to reduce noise induced uncertainties in the MTF measurements. A total of 15 reconstruction kernels, including eight conventional kernels and seven HD kernels, were used to reconstruct CT images of the bead. An ex vivo animal study consisting of a bone fracture model was performed to corroborate the MTF results, as the detection of this high-contrast and high frequency task is predominantly determined by spatial resolution. Images of this animal model generated by different scan modes and reconstruction kernels were qualitatively compared with the MTF results. Results: At the centered position, the use of Hi-Res mode resulted in a slight improvement in the MTF; each HD kernel generated higher spatial resolution than its counterpart conventional kernel. 
However, the MTF along the tangential direction of the scan field of view (SFOV) was significantly degraded at off-centered positions, yet the combined Hi-Res/HD mode reduced this azimuthal MTF degradation. Images of the animal bone fracture model confirmed the improved spatial resolution at the off-centered positions through the use of the Hi-Res mode and HD kernels. Conclusions: The Hi-Res and Hi-Res/HD scan modes improve the spatial resolution of MDCT systems at both centered and off-centered positions. PMID:27147351
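
    The core of the bead-based MTF measurement above can be sketched in a few lines: treat the bead image as a point spread function and take the normalized magnitude of its 2-D Fourier transform as the MTF. A Gaussian blur stands in for a real scanner PSF, and the saturation-correction and repeated-acquisition averaging of the paper are omitted:

```python
import numpy as np

x = np.linspace(-1, 1, 256)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx**2 + yy**2) / (2 * 0.05**2))   # synthetic bead image (Gaussian PSF)

otf = np.fft.fftshift(np.fft.fft2(psf))          # optical transfer function
mtf = np.abs(otf) / np.abs(otf).max()            # MTF, normalized to 1 at DC
print(mtf[128, 128])                             # → 1.0 at zero spatial frequency
```

    Radial and tangential MTF profiles at off-center positions would then be read off along the corresponding directions through the DC bin.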

  14. Hi-Res scan mode in clinical MDCT systems: Experimental assessment of spatial resolution performance.

    PubMed

    Cruz-Bastida, Juan P; Gomez-Cardona, Daniel; Li, Ke; Sun, Heyi; Hsieh, Jiang; Szczykutowicz, Timothy P; Chen, Guang-Hong

    2016-05-01

    The introduction of a High-Resolution (Hi-Res) scan mode and another associated option that combines Hi-Res mode with the so-called High Definition (HD) reconstruction kernels (referred to as a Hi-Res/HD mode in this paper) in some multi-detector CT (MDCT) systems offers new opportunities to increase spatial resolution for some clinical applications that demand high spatial resolution. The purpose of this work was to quantify the in-plane spatial resolution along both the radial direction and tangential direction for the Hi-Res and Hi-Res/HD scan modes at different off-center positions. A technique was introduced and validated to address the signal saturation problem encountered in the attempt to quantify spatial resolution for the Hi-Res and Hi-Res/HD scan modes. Using the proposed method, the modulation transfer functions (MTFs) of a 64-slice MDCT system (Discovery CT750 HD, GE Healthcare) equipped with both Hi-Res and Hi-Res/HD modes were measured using a metal bead at nine different off-centered positions (0-16 cm with a step size of 2 cm); at each position, both conventional scans and Hi-Res scans were performed. For each type of scan and position, 80 repeated acquisitions were performed to reduce noise induced uncertainties in the MTF measurements. A total of 15 reconstruction kernels, including eight conventional kernels and seven HD kernels, were used to reconstruct CT images of the bead. An ex vivo animal study consisting of a bone fracture model was performed to corroborate the MTF results, as the detection of this high-contrast and high frequency task is predominantly determined by spatial resolution. Images of this animal model generated by different scan modes and reconstruction kernels were qualitatively compared with the MTF results. At the centered position, the use of Hi-Res mode resulted in a slight improvement in the MTF; each HD kernel generated higher spatial resolution than its counterpart conventional kernel. 
However, the MTF along the tangential direction of the scan field of view (SFOV) was significantly degraded at off-centered positions, yet the combined Hi-Res/HD mode reduced this azimuthal MTF degradation. Images of the animal bone fracture model confirmed the improved spatial resolution at the off-centered positions through the use of the Hi-Res mode and HD kernels. The Hi-Res/HD scan mode improves the spatial resolution of MDCT systems at both centered and off-centered positions.

  15. Numerical study of the shape parameter dependence of the local radial point interpolation method in linear elasticity.

    PubMed

    Moussaoui, Ahmed; Bouziane, Touria

    2016-01-01

    The local radial point interpolation method (LRPIM) is a meshless method that allows simple imposition of the essential boundary conditions and is less costly than moving least squares (MLS) methods. It avoids the singularity associated with a polynomial basis by using radial basis functions. In this paper, we present a study of a 2D elastic homogeneous rectangular plate using LRPIM with the thin-plate spline (TPS) radial basis function. Our numerical investigations concern the influence of different shape parameters on the convergence domain and on accuracy. We also compare numerical results for different materials and characterize the convergence domain by specifying its maximum and minimum values as a function of the number of distributed nodes. The analytical solution for the deflection confirms the numerical results. The essential points of the method are: •LRPIM is derived from the local weak form of the equilibrium equations for solving a thin elastic plate.•The convergence of LRPIM depends on parameters derived from the local weak form and the sub-domains.•The number of distributed nodes affects the results for different materials and for the TPS radial basis function.
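
    The thin-plate spline (TPS) radial basis function discussed above can be sketched as a scattered-data interpolant. The sketch below is not the paper's local weak-form elasticity solver; the node layout, the sampled field, and the linear polynomial augmentation (added to keep the TPS system well posed) are illustrative assumptions:

```python
import numpy as np

def tps(r):
    # Thin-plate spline kernel phi(r) = r^2 log(r), with phi(0) = 0 by limit
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

rng = np.random.default_rng(0)
nodes = rng.uniform(0.0, 1.0, size=(25, 2))      # scattered nodes in the plate
f = np.sin(np.pi * nodes[:, 0]) * nodes[:, 1]    # sampled field values

# Interpolation matrix A_ij = phi(||x_i - x_j||), augmented with a linear
# polynomial [1, x, y] so the TPS system is non-singular.
r = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
A = tps(r)
P = np.hstack([np.ones((25, 1)), nodes])
M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
coef = np.linalg.solve(M, np.concatenate([f, np.zeros(3)]))
w, c = coef[:25], coef[25:]

def interpolate(x):
    # Evaluate the TPS interpolant at a 2-D point x
    d = np.linalg.norm(nodes - np.asarray(x), axis=-1)
    return tps(d) @ w + c[0] + c[1] * x[0] + c[2] * x[1]
```

    By construction the interpolant reproduces the nodal values exactly; shape-parameter studies such as the one above then examine how accuracy between nodes varies with the node distribution.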

  16. 7 CFR 810.1403 - Basis of determination.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... broken kernels, and foreign material that will pass through a 1.98 mm (5/64 inches) triangular-hole sieve... (5/64 inches) triangular-hole sieve. [57 FR 58971, Dec. 14, 1992] Grades and Grade Requirements ...

  17. 7 CFR 810.1403 - Basis of determination.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... broken kernels, and foreign material that will pass through a 1.98 mm (5/64 inches) triangular-hole sieve... (5/64 inches) triangular-hole sieve. [57 FR 58971, Dec. 14, 1992] Grades and Grade Requirements ...

  18. 7 CFR 810.1403 - Basis of determination.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... broken kernels, and foreign material that will pass through a 1.98 mm (5/64 inches) triangular-hole sieve... (5/64 inches) triangular-hole sieve. [57 FR 58971, Dec. 14, 1992] Grades and Grade Requirements ...

  19. 7 CFR 810.1403 - Basis of determination.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... broken kernels, and foreign material that will pass through a 1.98 mm (5/64 inches) triangular-hole sieve... (5/64 inches) triangular-hole sieve. [57 FR 58971, Dec. 14, 1992] Grades and Grade Requirements ...

  20. Absorbed dose kernel and self-shielding calculations for a novel radiopaque glass microsphere for transarterial radioembolization.

    PubMed

    Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair

    2018-02-01

    Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher-resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres, including contributions of both short- and long-lived contaminant radionuclides, while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source, in glass and in water. The ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time postremoval from the reactor. At early time points, it is less pronounced due to the higher energies of the emissions; it is on the order of 0.4-2.8% at a radial distance of 5.43 mm as the microsphere diameter increases from 10 to 50 μm, during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. 
Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the microspheres based on weighted activities. The shapes of the absorbed dose kernels are dominated at short times postactivation by the contributions of 70Ga and 72Ga. Following decay of the short-lived contaminants, the absorbed dose kernel is effectively that of 90Y. After approximately 1000 h postactivation, the contributions of 85Sr and 89Sr become increasingly dominant, though the absorbed dose rate around the beads drops by roughly four orders of magnitude. The introduction of high-atomic-number elements for the purpose of increasing radiopacity necessarily leads to the production of radionuclides other than 90Y in the microspheres. Most of the radionuclides in this study are short-lived and are likely not of any significant concern for this therapeutic agent. The presence of small quantities of longer-lived radionuclides will change the shape of the absorbed dose kernel around a microsphere at long time points postadministration, when activity levels are significantly reduced. © 2017 American Association of Physicists in Medicine.
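
    The activity-weighted combination of per-radionuclide kernels described above can be sketched numerically. All half-lives, relative activities, and kernel shapes below are illustrative placeholders, not the measured values from the study:

```python
import numpy as np

r = np.linspace(0.1, 12.0, 120)            # radial distance (mm)

# Toy per-radionuclide kernels (dose per decay vs. radius) and decay data;
# purely illustrative stand-ins for the Monte Carlo kernels in the paper.
kernels = {
    "Y90":  np.exp(-r / 4.0),
    "Ga72": np.exp(-r / 6.0),
    "Sr89": np.exp(-r / 3.0),
}
half_lives_h = {"Y90": 64.1, "Ga72": 14.1, "Sr89": 1212.0}
A0 = {"Y90": 1.0, "Ga72": 0.3, "Sr89": 0.01}   # relative initial activities

def combined_kernel(t_hours):
    # Activity-weighted sum of per-radionuclide kernels at time t
    weights = {n: A0[n] * 0.5 ** (t_hours / half_lives_h[n]) for n in kernels}
    total = sum(weights.values())
    return sum(weights[n] * kernels[n] for n in kernels) / total

early, late = combined_kernel(0.0), combined_kernel(500.0)
```

    As the short-lived component decays away, the normalized kernel shape shifts toward the longer-lived species, which is the qualitative behaviour the abstract describes for the Ga and Sr contaminants.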

  1. A Decision-Making Method with Grey Multi-Source Heterogeneous Data and Its Application in Green Supplier Selection

    PubMed Central

    Dang, Yaoguo; Mao, Wenxin

    2018-01-01

    In view of multi-attribute decision-making problems in which the attribute values are grey multi-source heterogeneous data, a decision-making method based on kernel and greyness degree is proposed. The definitions of the kernel and greyness degree of an extended grey number in a grey multi-source heterogeneous data sequence are given. On this basis, we construct the kernel vector and greyness degree vector of the sequence to whiten the multi-source heterogeneous information, and then a grey relational bi-directional projection ranking method is presented. Considering the multi-attribute, multi-level decision structure and the causalities between attributes in the decision-making problem, the HG-DEMATEL method is proposed to determine the hierarchical attribute weights. A green supplier selection example is provided to demonstrate the rationality and validity of the proposed method. PMID:29510521

  2. A Decision-Making Method with Grey Multi-Source Heterogeneous Data and Its Application in Green Supplier Selection.

    PubMed

    Sun, Huifang; Dang, Yaoguo; Mao, Wenxin

    2018-03-03

    In view of multi-attribute decision-making problems in which the attribute values are grey multi-source heterogeneous data, a decision-making method based on kernel and greyness degree is proposed. The definitions of the kernel and greyness degree of an extended grey number in a grey multi-source heterogeneous data sequence are given. On this basis, we construct the kernel vector and greyness degree vector of the sequence to whiten the multi-source heterogeneous information, and then a grey relational bi-directional projection ranking method is presented. Considering the multi-attribute, multi-level decision structure and the causalities between attributes in the decision-making problem, the HG-DEMATEL method is proposed to determine the hierarchical attribute weights. A green supplier selection example is provided to demonstrate the rationality and validity of the proposed method.

  3. Efficient High-Fidelity, Geometrically Exact, Multiphysics Structural Models

    DTIC Science & Technology

    2011-10-14

    functionally graded core. International Journal for Numerical Methods in Engineering, 68:940–966, 2006. 7F. Shang, Z. Wang, and Z. Li. Analysis of...normal deformable plate theory and MLPG method with radial basis functions. Composite Structures, 80:539–552, 2007. 17W. Zhen and W. Chen. A higher-order...functionally graded plates by using higher-order shear and normal deformable plate theory and MLPG method with radial basis functions. Composite Structures, 80

  4. Validating the Kinematic Wave Approach for Rapid Soil Erosion Assessment and Improved BMP Site Selection to Enhance Training Land Sustainability

    DTIC Science & Technology

    2014-02-01

    installation based on a Euclidean distance allocation and assigned that installation’s threshold values. The second approach used a thin-plate spline ...installation critical nLS+ thresholds involved spatial interpolation. A thin-plate spline radial basis function (RBF) was selected as the...the interpolation of installation results using a thin-plate spline radial basis function technique. 6.5 OBJECTIVE #5: DEVELOP AND

  5. Reconfigurable Flight Control Design using a Robust Servo LQR and Radial Basis Function Neural Networks

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    2005-01-01

    This viewgraph presentation reviews the use of a Robust Servo Linear Quadratic Regulator (LQR) and a Radial Basis Function (RBF) Neural Network in reconfigurable flight control designs in adaptation to an aircraft part failure. The method uses a robust LQR servomechanism design with model reference adaptive control and RBF neural networks. During the failure, the LQR servomechanism behaved well, and using the neural networks improved the tracking.

  6. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.

  7. Morphing of spatial objects in real time with interpolation by functions of radial and orthogonal basis

    NASA Astrophysics Data System (ADS)

    Kosnikov, Yu N.; Kuzmin, A. V.; Ho, Hoang Thai

    2018-05-01

    The article is devoted to visualization of the morphing of spatial objects described by sets of unordered reference points. A two-stage model construction is proposed to change an object’s form in real time. The first (preliminary) stage is interpolation of the object’s surface by radial basis functions; the initial reference points are replaced by new, spatially ordered ones, and patterns of change in the reference points’ coordinates during morphing are assigned. The second (real-time) stage is surface reconstruction by blending functions of an orthogonal basis. Finite-difference formulas are applied to increase the productivity of the calculations.

  8. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    PubMed

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to safely navigate in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and l1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.

  9. An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.

    PubMed

    Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan

    2015-01-01

    A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify a human brain magnetic resonance image (MRI) as normal or abnormal, in order to reduce human error in identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of the brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. At last, an advanced classification technique based on LS-SVM is applied to brain MR image classification using the reduced features. To improve the efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applies k-fold stratified cross validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of the experimental results and performance comparisons, it is observed that the proposed medical decision support system outperformed all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than recent well-known methods, and it improves efficiency by 71%, 3%, and 4% in the feature extraction stage, feature reduction stage, and classification stage, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from individual subjects; therefore, it can be used as a significant tool in clinical practice.
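
    The role of the non-linear RBF kernel in such classifiers can be illustrated with a small stand-in. The sketch below uses kernel ridge classification (a close relative of LS-SVM) on XOR-style data that no linear decision function can separate; the gamma and regularization values are illustrative, not the tuned hyper-parameters from the study:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel matrix K_ij = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# XOR-style data: not linearly separable
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

# Kernel ridge "classifier": solve (K + lam*I) alpha = y, predict by sign
K = rbf_kernel(X, X, gamma=2.0)
lam = 1e-3
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xnew):
    return np.sign(rbf_kernel(Xnew, X, gamma=2.0) @ alpha)
```

    A linear kernel (the raw inner product) cannot fit this labeling, which is why abstracts in this collection repeatedly find the RBF kernel outperforming the linear one.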

  10. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    PubMed Central

    Gabere, Musa Nur; Hussein, Mohamed Aly; Aziz, Mohammad Azhar

    2016-01-01

    Purpose There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods In this study, we built a model that uses support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy–maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validation in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1, MMP7, and TGFB1 were predicted to be CRC biomarkers. Conclusion This model could be used to further develop a diagnostic tool for predicting CRC based on gene expression data from patient samples. PMID:27330311
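
    The minimum-redundancy-maximum-relevance idea used above can be sketched as a greedy filter: at each step, pick the feature with the highest relevance to the label minus its mean redundancy with already-selected features. In this self-contained sketch, absolute Pearson correlation stands in for the mutual-information scores commonly used by mRMR, and the synthetic data are illustrative:

```python
import numpy as np

def abs_corr(a, b):
    # Absolute Pearson correlation between two 1-D arrays
    a = a - a.mean()
    b = b - b.mean()
    return abs((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def mrmr_select(X, y, k):
    # Greedy mRMR-style selection: relevance minus mean redundancy
    n_feat = X.shape[1]
    relevance = np.array([abs_corr(X[:, j], y) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs_corr(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
y = rng.normal(size=200)
X = rng.normal(size=(200, 6))
X[:, 2] = y + 0.1 * rng.normal(size=200)          # strongly relevant feature
X[:, 4] = X[:, 2] + 0.05 * rng.normal(size=200)   # redundant near-copy
```

    The redundancy term is what distinguishes mRMR from a plain relevance ranking: the near-copy of an already-selected feature is penalized even though its own relevance is high.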

  11. Prostate cancer detection using machine learning techniques by employing combination of features extracting strategies.

    PubMed

    Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed

    2018-02-06

    Prostate cancer is the second leading cause of cancer death among men. Early detection can effectively reduce the mortality rate caused by prostate cancer. The high-resolution, multiresolution MRIs of prostate cancer require proper diagnostic systems and tools. In the past, researchers developed computer-aided diagnosis (CAD) systems that help the radiologist to detect abnormalities. In this research paper, we have employed machine learning techniques such as the Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF), and Gaussian), and decision trees for detecting prostate cancer. Moreover, different feature-extraction strategies are proposed to improve the detection performance. The feature-extraction strategies are based on texture, morphological, scale-invariant feature transform (SIFT), and elliptic Fourier descriptor (EFD) features. The performance was evaluated using single features as well as combinations of features with machine learning classification techniques. Cross validation (jackknife k-fold) was performed, and performance was evaluated in terms of the receiver operating characteristic (ROC) curve, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and false positive rate (FPR). Based on single feature-extraction strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999. Using combinations of feature-extraction strategies, the SVM Gaussian kernel with texture + morphological and EFDs + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.

  12. QTL Analysis of Kernel-Related Traits in Maize Using an Immortalized F2 Population

    PubMed Central

    Hu, Yanmin; Li, Weihua; Fu, Zhiyuan; Ding, Dong; Li, Haochuan; Qiao, Mengmeng; Tang, Jihua

    2014-01-01

    Kernel size and weight are important determinants of grain yield in maize. In this study, multivariate conditional and unconditional quantitative trait loci (QTL), and digenic epistatic analyses were utilized in order to elucidate the genetic basis for these kernel-related traits. Five kernel-related traits, including kernel weight (KW), volume (KV), length (KL), thickness (KT), and width (KWI), were collected from an immortalized F2 (IF2) maize population comprising of 243 crosses performed at two separate locations over a span of two years. A total of 54 unconditional main QTL for these five kernel-related traits were identified, many of which were clustered in chromosomal bins 6.04–6.06, 7.02–7.03, and 10.06–10.07. In addition, qKL3, qKWI6, qKV10a, qKV10b, qKW10a, and qKW7a were detected across multiple environments. Sixteen main QTL were identified for KW conditioned on the other four kernel traits (KL, KWI, KT, and KV). Thirteen main QTL were identified for KV conditioned on three kernel-shape traits. Conditional mapping analysis revealed that KWI and KV had the strongest influence on KW at the individual QTL level, followed by KT, and then KL; KV was mostly strongly influenced by KT, followed by KWI, and was least impacted by KL. Digenic epistatic analysis identified 18 digenic interactions involving 34 loci over the entire genome. However, only a small proportion of them were identical to the main QTL we detected. Additionally, conditional digenic epistatic analysis revealed that the digenic epistasis for KW and KV were entirely determined by their constituent traits. The main QTL identified in this study for determining kernel-related traits with high broad-sense heritability may play important roles during kernel development. Furthermore, digenic interactions were shown to exert relatively large effects on KL (the highest AA and DD effects were 4.6% and 6.7%, respectively) and KT (the highest AA effects were 4.3%). PMID:24586932

  13. Prediction of pH of cola beverage using Vis/NIR spectroscopy and least squares-support vector machine

    NASA Astrophysics Data System (ADS)

    Liu, Fei; He, Yong

    2008-02-01

    Visible and near infrared (Vis/NIR) transmission spectroscopy and chemometric methods were utilized to predict the pH values of cola beverages. Five varieties of cola were prepared; 225 samples (45 for each variety) were selected for the calibration set, while 75 samples (15 for each variety) were used for the validation set. Savitzky-Golay smoothing and standard normal variate (SNV) followed by first-derivative were used as the pre-processing methods. Partial least squares (PLS) analysis was employed to extract the principal components (PCs), which were used as the inputs of the least squares-support vector machine (LS-SVM) model according to their accumulative reliabilities. Then LS-SVM with a radial basis function (RBF) kernel and a two-step grid search technique were applied to build the regression model, with PLS regression as a comparison. The correlation coefficient (r), root mean square error of prediction (RMSEP) and bias were 0.961, 0.040 and 0.012 for PLS, and 0.975, 0.031 and 4.697×10⁻³ for LS-SVM, respectively. Both methods obtained a satisfying precision. The results indicated that Vis/NIR spectroscopy combined with chemometric methods could be applied as an alternative way for the prediction of the pH of cola beverages.
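
    The two-step grid search mentioned above (a coarse scan over orders of magnitude, then a finer scan around the coarse optimum) can be sketched as follows. Kernel ridge regression stands in for LS-SVM, and the data, grids, and hyperparameter values are illustrative assumptions:

```python
import numpy as np

def rbf(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(Xtr, ytr, Xte, gamma, lam):
    # Kernel ridge regression: solve (K + lam*I) alpha = y, then predict
    alpha = np.linalg.solve(rbf(Xtr, Xtr, gamma) + lam * np.eye(len(Xtr)), ytr)
    return rbf(Xte, Xtr, gamma) @ alpha

def grid_search(Xtr, ytr, Xval, yval, gammas, lams):
    # Return the (gamma, lam) pair minimizing validation MSE
    best = (None, None, np.inf)
    for g in gammas:
        for l in lams:
            err = np.mean((fit_predict(Xtr, ytr, Xval, g, l) - yval) ** 2)
            if err < best[2]:
                best = (g, l, err)
    return best

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(120, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=120)
Xtr, ytr, Xval, yval = X[:80], y[:80], X[80:], y[80:]

# Step 1: coarse grid over orders of magnitude
g, l, coarse_err = grid_search(Xtr, ytr, Xval, yval,
                               [0.1, 1.0, 10.0], [1e-4, 1e-2, 1.0])
# Step 2: finer grid around the coarse optimum
g2, l2, err = grid_search(Xtr, ytr, Xval, yval,
                          [g / 2, g, 2 * g], [l / 2, l, 2 * l])
```

    Because the fine grid contains the coarse optimum, the second step can only match or improve the validation error.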

  14. Tuning to optimize SVM approach for assisting ovarian cancer diagnosis with photoacoustic imaging.

    PubMed

    Wang, Rui; Li, Rui; Lei, Yanyan; Zhu, Quing

    2015-01-01

    Support vector machine (SVM) is one of the most effective classification methods for cancer detection. The efficiency and quality of an SVM classifier depend strongly on several important features and a set of proper parameters. Here, a series of classification analyses, with one set of photoacoustic data from ovarian tissues ex vivo and a widely used breast cancer dataset, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset, revealed the different accuracies of an SVM classification in terms of the number of features used and the parameters selected. A pattern recognition system is proposed by means of SVM-Recursive Feature Elimination (RFE) with the Radial Basis Function (RBF) kernel. To improve the effectiveness and robustness of the system, an optimized tuning ensemble algorithm called SVM-RFE(C), with a correlation filter, was implemented to quantify feature and parameter information based on cross validation. The proposed algorithm is first demonstrated to outperform SVM-RFE on WDBC. Then the best accuracy of 94.643% and sensitivity of 94.595% were achieved when using SVM-RFE(C) to test 57 new PAT data from 19 patients. The experimental results show that the classifier constructed with the SVM-RFE(C) algorithm is able to learn additional information from new data and has significant potential in ovarian cancer diagnosis.
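
    Recursive feature elimination itself is simple to sketch: repeatedly fit a model and drop the feature with the smallest absolute weight. Below, an ordinary least-squares fit stands in for the SVM used in SVM-RFE, an assumption made to keep the example dependency-free; the synthetic data plant two informative features:

```python
import numpy as np

def rfe_rank(X, y, n_keep):
    # Recursive feature elimination with a linear least-squares stand-in:
    # drop the feature whose fitted coefficient has the smallest magnitude.
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        drop = remaining[int(np.argmin(np.abs(w)))]
        remaining.remove(drop)
    return remaining

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)

kept = rfe_rank(X, y, 2)   # should retain the two informative features
```

    The SVM-RFE(C) variant above layers a correlation filter and cross-validated parameter tuning on top of this basic elimination loop.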

  15. New support vector machine-based method for microRNA target prediction.

    PubMed

    Li, L; Gao, Q; Mao, X; Cao, Y

    2014-06-09

    MicroRNA (miRNA) plays important roles in cell differentiation, proliferation, growth, mobility, and apoptosis. An accurate list of precise target genes is necessary in order to fully understand the importance of miRNAs in animal development and disease. Several computational methods have been proposed for miRNA target-gene identification. However, these methods still have limitations with respect to their sensitivity and accuracy. Thus, we developed a new miRNA target-prediction method based on the support vector machine (SVM) model. The model supplies information on two binding sites (primary and secondary) to a radial basis function kernel as a similarity measure for SVM features. The information is categorized based on structural, thermodynamic, and sequence conservation. Using high-confidence datasets selected from public miRNA target databases, we obtained a human miRNA target SVM classifier model with high performance and provided an efficient tool for human miRNA target-gene identification. Experiments have shown that our method is a reliable tool for miRNA target-gene prediction and a successful application of an SVM classifier. Compared with other methods, the method proposed here improves the sensitivity and accuracy of miRNA prediction. Its performance can be further improved by providing more training examples.

  16. Characterizing cartilage microarchitecture on phase-contrast x-ray computed tomography using deep learning with convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Deng, Botao; Abidin, Anas Z.; D'Souza, Adora M.; Nagarajan, Mahesh B.; Coan, Paola; Wismüller, Axel

    2017-03-01

    The effectiveness of phase contrast X-ray computed tomography (PCI-CT) in visualizing human patellar cartilage matrix has been demonstrated due to its ability to capture soft tissue contrast on a micrometer resolution scale. Recent studies have shown that off-the-shelf Convolutional Neural Network (CNN) features learned from a nonmedical data set can be used for medical image classification. In this paper, we investigate the ability of features extracted from two different CNNs for characterizing chondrocyte patterns in the cartilage matrix. We obtained features from 842 regions of interest annotated on PCI-CT images of human patellar cartilage using CaffeNet and Inception-v3 Network, which were then used in a machine learning task involving support vector machines with radial basis function kernel to classify the ROIs as healthy or osteoarthritic. Classification performance was evaluated using the area (AUC) under the Receiver Operating Characteristic (ROC) curve. The best classification performance was observed with features from Inception-v3 network (AUC = 0.95), which outperforms features extracted from CaffeNet (AUC = 0.91). These results suggest that such characterization of chondrocyte patterns using features from internal layers of CNNs can be used to distinguish between healthy and osteoarthritic tissue with high accuracy.

  17. Visual modifications on the P300 speller BCI paradigm

    NASA Astrophysics Data System (ADS)

    Salvaris, M.; Sepulveda, F.

    2009-08-01

    The best known P300 speller brain-computer interface (BCI) paradigm is the Farwell and Donchin paradigm. In this paper, various changes to the visual aspects of this protocol are explored as well as their effects on classification. Changes to the dimensions of the symbols, the distance between the symbols and the colours used were tested. The purpose of the present work was not to achieve the highest possible accuracy results, but to ascertain whether these simple modifications to the visual protocol will provide classification differences between them and what these differences will be. Eight subjects were used, with each subject carrying out a total of six different experiments. In each experiment, the user spelt a total of 39 characters. Two types of classifiers were trained and tested to determine whether the results were classifier dependant. These were a support vector machine (SVM) with a radial basis function (RBF) kernel and Fisher's linear discriminant (FLD). The single-trial classification results and multiple-trial classification results were recorded and compared. Although no visual protocol was the best for all subjects, the best performances, across both classifiers, were obtained with the white background (WB) visual protocol. The worst performance was obtained with the small symbol size (SSS) visual protocol.

  18. Prediction of Human Intestinal Absorption of Compounds Using Artificial Intelligence Techniques.

    PubMed

    Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar

    2017-01-01

    Information about the pharmacokinetics of compounds is an essential component of drug design and development. Modeling pharmacokinetic properties requires identification of the factors affecting the absorption, distribution, metabolism and excretion of compounds. There have been continuous attempts at predicting the intestinal absorption of compounds using various artificial intelligence methods, in an effort to reduce the attrition rate of drug candidates entering preclinical and clinical trials. Currently, a large number of individual predictive models for absorption are available using machine learning approaches. Six artificial intelligence methods, namely support vector machine, k-nearest neighbor, probabilistic neural network, artificial neural network, partial least squares and linear discriminant analysis, were used for prediction of the absorption of compounds. Their prediction accuracies for intestinal absorption were found to be 91.54%, 88.33%, 84.30%, 86.51%, 79.07% and 80.08%, respectively. Comparative analysis of all six prediction models suggested that the support vector machine with a radial basis function kernel is comparatively better for binary classification of compounds by human intestinal absorption and may be useful at preliminary stages of drug design and development. Copyright© Bentham Science Publishers.
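
    A minimal sketch of the kind of model comparison this record reports, using three of the six methods (RBF-kernel SVM, k-nearest neighbour, linear discriminant analysis) with cross-validated accuracy. The data set is synthetic; the paper's absorption data, descriptors, and the other three methods are not reproduced.

```python
# Compare cross-validated accuracy of three classifiers on a synthetic
# binary "absorbed / not absorbed" data set (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=400, n_features=20, random_state=1)
models = {
    "SVM (RBF)": SVC(kernel="rbf", gamma="scale"),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "LDA": LinearDiscriminantAnalysis(),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

The ranking on real absorption data depends entirely on the descriptors used; the abstract's percentages come from its own data set.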

  19. Moisture Sorption Isotherms and Properties of Sorbed Water of Neem (Azadirachta indica A. Juss) Kernels

    NASA Astrophysics Data System (ADS)

    Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.

    2017-01-01

    The neem tree, which grows abundantly in India as well as in some regions of Asia and Africa, gives fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic qualities and has recently been projected to be an important raw material for the production of biodiesel. The seed is harvested at high moisture contents, which leads to high post-harvest losses. In this paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60°C to establish a database useful in defining drying and storage conditions of neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of sorbed water, such as the monolayer moisture content, surface area of adsorbent, number of adsorbed monolayers, and the percentage of bound water, are also determined. The critical moisture content necessary for the safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity of less than 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is the first, to the best of our knowledge, to give the important parameters necessary for the drying and storage of neem kernels, a potential raw material for the production of oil to be used in pharmaceutics, cosmetics, and biodiesel manufacturing.

  20. Moisture Adsorption Isotherm and Storability of Hazelnut Inshells and Kernels Produced in Oregon, USA.

    PubMed

    Jung, Jooyeoun; Wang, Wenjie; McGorrin, Robert J; Zhao, Yanyun

    2018-02-01

    Moisture adsorption isotherms and storability of dried hazelnut inshells and kernels produced in Oregon were evaluated and compared among cultivars, including Barcelona, Yamhill, and Jefferson. Experimental moisture adsorption data were fitted to the Guggenheim-Anderson-de Boer (GAB) model; Yamhill inshells and kernels were less hygroscopic than those of the other cultivars due to lower carbohydrate and protein content but higher fat content. The safe levels of moisture content (MC, dry basis) of dried inshells and kernels for reaching kernel water activity (aw) ≤0.65 were estimated using the GAB model as 11.3% and 5.0% for Barcelona, 9.4% and 4.2% for Yamhill, and 10.7% and 4.9% for Jefferson, respectively. Storage conditions (2 °C at 85% to 95% relative humidity [RH], 10 °C at 65% to 75% RH, and 27 °C at 35% to 45% RH), times (0, 4, 8, or 12 mo), and packaging methods (atmospheric vs. vacuum) affected the MC, aw, bioactive compounds, lipid oxidation, and enzyme activity of dried hazelnut inshells and kernels. For inshells packaged in woven polypropylene bags, the MC and aw of inshells and kernels (inside shells) increased at 2 and 10 °C, but decreased at 27 °C during storage. For kernels, lipid oxidation and polyphenol oxidase activity also increased with extended storage time (P < 0.05), and the MC and aw of vacuum-packaged samples were more stable during storage than those of atmospherically packaged ones. Principal component analysis showed correlation of kernel qualities with storage condition, time, and packaging method. This study demonstrated that the ideal storage condition or packaging method varied among cultivars due to their different moisture adsorption and physicochemical and enzymatic stability during storage. The moisture adsorption isotherms of hazelnut inshells and kernels are useful for predicting the storability of nuts.
This study found that water adsorption and storability varied among the cultivars, with Yamhill less hygroscopic than Barcelona and Jefferson and thus more stable during storage. To ensure the food safety and quality of nuts during storage, each cultivar of kernels should be dried to the appropriate level of MC. Lipid oxidation and enzyme activity of kernels can increase with extended storage time. Vacuum packaging is recommended for kernels to reduce moisture adsorption during storage. © 2018 Institute of Food Technologists®.
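
    The Guggenheim-Anderson-de Boer (GAB) isotherm used in the two nut studies above relates equilibrium moisture content M to water activity aw through the monolayer moisture M0 and constants C and K: M = M0·C·K·aw / [(1 − K·aw)(1 − K·aw + C·K·aw)]. A sketch of fitting it with SciPy; the data points and parameter values below are illustrative, not from either paper.

```python
# Fit the GAB isotherm to synthetic (aw, M) "measurements" generated from
# known parameters, then recover those parameters with nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, M0, C, K):
    return (M0 * C * K * aw) / ((1 - K * aw) * (1 - K * aw + C * K * aw))

aw = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
M = gab(aw, 5.0, 10.0, 0.8)                 # synthetic data, M0=5, C=10, K=0.8

popt, _ = curve_fit(gab, aw, M, p0=[4.0, 8.0, 0.7])
M0_fit, C_fit, K_fit = popt
print(M0_fit, C_fit, K_fit)
```

With real sorption data one would also report a goodness-of-fit statistic, as the papers do when comparing candidate isotherm equations.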

  1. SVM-based classification of LV wall motion in cardiac MRI with the assessment of STE

    NASA Astrophysics Data System (ADS)

    Mantilla, Juan; Garreau, Mireille; Bellanger, Jean-Jacques; Paredes, José Luis

    2015-01-01

    In this paper, we propose an automated method to classify normal/abnormal wall motion in Left Ventricle (LV) function in cardiac cine-Magnetic Resonance Imaging (MRI), taking as reference strain information obtained from 2D Speckle Tracking Echocardiography (STE). Without the need for pre-processing, and by exploiting all the images acquired during a cardiac cycle, spatio-temporal profiles are extracted from a subset of radial lines from the ventricle centroid to points outside the epicardial border. Classical Support Vector Machines (SVM) are used to classify features extracted from gray levels of the spatio-temporal profiles as well as their representations in the wavelet domain, under the assumption that the data may be sparse in that domain. Based on information obtained from radial strain curves in 2D-STE studies, we label all the spatio-temporal profiles that belong to a particular segment as normal if the peak systolic radial strain curve of this segment presents normal kinesis, or abnormal if the peak systolic radial strain curve presents hypokinesis or akinesis. For this study, short-axis cine-MR images are collected from 9 patients with cardiac dyssynchrony for whom we have the radial strain tracings at the mid-papillary muscle obtained by 2D STE, and from a control group of 9 healthy subjects. The best classification performance is obtained with the gray-level information of the spatio-temporal profiles using an RBF kernel, with 91.88% accuracy, 92.75% sensitivity and 91.52% specificity.

  2. Optimal Space Station solar array gimbal angle determination via radial basis function neural networks

    NASA Technical Reports Server (NTRS)

    Clancy, Daniel J.; Oezguener, Uemit; Graham, Ronald E.

    1994-01-01

    The potential for excessive plume impingement loads on Space Station Freedom solar arrays, caused by jet firings from an approaching Space Shuttle, is addressed. An artificial neural network is designed to determine the commanded solar array beta gimbal angle for minimum plume loads. The commanded angle would be determined dynamically. The proposed network design uses radial basis functions as activation functions. The design, development, and simulation of this network are discussed.
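
    A radial basis function network of the general kind described above passes an input through Gaussian activations centred on a set of points and combines them with linear output weights: y = Σᵢ wᵢ exp(−‖x − cᵢ‖² / 2σ²). A minimal sketch; the target function, centres, and width below are illustrative assumptions, not the Space Station application.

```python
# Fit a 1-D radial basis function network by linear least squares on the
# output weights (centres and width fixed), approximating sin(3x).
import numpy as np

def rbf_design(X, centers, sigma):
    # Gaussian RBF activations, shape (n_samples, n_centers)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])                      # target function to approximate

centers = np.linspace(-1, 1, 10)[:, None]    # 10 evenly spaced centres
Phi = rbf_design(X, centers, sigma=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output weights

y_hat = rbf_design(X, centers, 0.3) @ w
print(float(np.abs(y_hat - y).max()))
```

In the Space Station application, the inputs would be approach-geometry parameters and the output the commanded beta gimbal angle; the training scheme here (fixed centres, least squares) is one common choice among several.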

  3. Finite-frequency tomography using adjoint methods-Methodology and examples using membrane surface waves

    NASA Astrophysics Data System (ADS)

    Tape, Carl; Liu, Qinya; Tromp, Jeroen

    2007-03-01

    We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.

  4. Product demand forecasts using wavelet kernel support vector machine and particle swarm optimization in manufacture system

    NASA Astrophysics Data System (ADS)

    Wu, Qi

    2010-03-01

    Demand forecasts play a crucial role in supply chain management. The future demand for a certain product is the basis for the respective replenishment systems. For demand series with small samples, seasonality, nonlinearity, randomness and fuzziness, existing support vector kernels do not closely approximate the random curve of the sales time series in L2 space (the space of square-integrable functions). In this paper, we present a hybrid intelligent system combining a wavelet kernel support vector machine and particle swarm optimization for demand forecasting. The results of an application to car sales series forecasting show that the forecasting approach based on the hybrid PSOWv-SVM model is effective and feasible; a comparison between the method proposed in this paper and others is also given, which shows that this method is, for the discussed example, better than hybrid PSOv-SVM and other traditional methods.
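
    A sketch of a wavelet-kernel support vector machine of the kind combined with PSO above. It uses the common translation-invariant wavelet kernel K(x, z) = Πᵢ h((xᵢ − zᵢ)/a) with mother wavelet h(u) = cos(1.75u)·exp(−u²/2), passed to scikit-learn's SVR as a custom Gram-matrix callable. The PSO tuning of the dilation a and the SVR hyperparameters is omitted; a = 1.0, C = 10, and the sine-shaped stand-in "demand" series are assumptions.

```python
# Support vector regression with a custom wavelet kernel (Gram-matrix callable).
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Z, a=1.0):
    # K(x, z) = prod_i cos(1.75 u_i) * exp(-u_i^2 / 2), u = (x - z) / a
    U = (X[:, None, :] - Z[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-U**2 / 2), axis=-1)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
y = np.sin(2 * np.pi * X[:, 0])              # stand-in seasonal "demand" shape

model = SVR(kernel=wavelet_kernel, C=10.0).fit(X, y)
err = np.abs(model.predict(X) - y).mean()
print(round(err, 3))
```

In the hybrid system, a particle swarm would search over (a, C, ε) instead of fixing them.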

  5. The Vortex of Burgers in Protoplanetary Disc

    NASA Astrophysics Data System (ADS)

    Abrahamyan, M. G.

    2017-07-01

    The effect of a Burgers vortex on the formation of planetesimals in a protoplanetary disc is considered in a local approach. It is shown that there is no circular orbit in centrifugal balance for rigid particles; the only stable position in a Burgers vortex under the influence of centrifugal, Coriolis, pressure-gradient and Stokes drag forces is the center of the vortex. The two-dimensional anticyclonic Burgers vortex with a homogeneously rotating kernel and a converging radial stream of substance can effectively accumulate in its nuclear area meter-sized rigid particles of total mass ~10^28 g over a characteristic time ~10^6 yr.

  6. An Efficient Method Coupling Kernel Principal Component Analysis with Adjoint-Based Optimal Control and Its Goal-Oriented Extensions

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.

    2016-12-01

    The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on methods of machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, then, one may accelerate the convergence of the method. To this end, we present a formulation of Weighted PCA combined with a gradient-based method using automatic differentiation to iteratively re-weight observations concurrently with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.

  7. Scaling a Human Body Finite Element Model with Radial Basis Function Interpolation

    DTIC Science & Technology

    Human body models are currently used to evaluate the body’s response to a variety of threats to the Soldier. The ability to adjust the size of human...body models is currently limited because of the complex shape changes that are required. Here, a radial basis function interpolation method is used to...morph the shape on an existing finite element mesh. Tools are developed and integrated into the Blender computer graphics software to assist with

  8. Radial basis function and its application in tourism management

    NASA Astrophysics Data System (ADS)

    Hu, Shan-Feng; Zhu, Hong-Bin; Zhao, Lei

    2018-05-01

    In this work, several applications and the performance of the radial basis function (RBF) are briefly reviewed first. After that, the binomial function combined with three different RBFs, including the multiquadric (MQ), inverse quadric (IQ) and inverse multiquadric (IMQ) distributions, is adopted to model the tourism data of Huangshan in China. Simulation results show that all the models match the sample data very well. It is found that among the three models, the IMQ-RBF model is the most suitable for forecasting the tourist flow.
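
    The three RBFs compared above (multiquadric, inverse quadratic, inverse multiquadric) are available directly in SciPy's `RBFInterpolator`. A sketch fitting a small synthetic monthly "tourist flow" series with each kernel; the data, shape parameter, and seasonal shape are illustrative, not the Huangshan data.

```python
# Interpolate a synthetic monthly series with three RBF kernels and check
# that each interpolant reproduces the sample data (as the abstract reports).
import numpy as np
from scipy.interpolate import RBFInterpolator

months = np.arange(1, 13, dtype=float)[:, None]
flow = 50 + 30 * np.sin(2 * np.pi * months[:, 0] / 12)   # synthetic visitors

for kernel in ("multiquadric", "inverse_quadratic", "inverse_multiquadric"):
    f = RBFInterpolator(months, flow, kernel=kernel, epsilon=1.0)
    resid = np.abs(f(months) - flow).max()
    print(kernel, float(resid))
```

All three interpolants pass through the sample points; distinguishing them, as the paper does, requires evaluating forecasts on held-out data.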

  9. Effects of turbulence on the collision rate of cloud droplets

    NASA Astrophysics Data System (ADS)

    Ayala, Orlando

    This dissertation concerns the effects of air turbulence on the collision rate of atmospheric cloud droplets. This research was motivated by the speculation that air turbulence could enhance the collision rate and thereby help transform cloud droplets to rain droplets in a short time, as observed in nature. The air turbulence within clouds is assumed to be homogeneous and isotropic, and its small-scale motion (1 mm to 10 cm scales) is computationally generated by direct numerical integration of the full Navier-Stokes equations. Typical droplet and turbulence parameters of convective warm clouds are used to determine the Stokes numbers (St) and the nondimensional terminal velocities (Sv), which characterize droplet relative inertia and gravitational settling, respectively. A novel and efficient methodology for conducting direct numerical simulations (DNS) of hydrodynamically-interacting droplets in the context of cloud microphysics has been developed. This numerical approach solves the turbulent flow by the pseudo-spectral method with a large-scale forcing, and utilizes an improved superposition method to embed analytically the local, small-scale (10 μm to 1 mm) disturbance flows induced by the droplets. This hybrid representation of background turbulent air motion and the induced disturbance flows is then used to study the combined effects of hydrodynamic interactions and airflow turbulence on the motion and collisions of cloud droplets. Hybrid DNS results show that turbulence can increase the geometric collision kernel relative to the gravitational geometric kernel by as much as 42% due to enhanced radial relative motion and preferential concentration of droplets. The exact level of enhancement depends on the Taylor-microscale Reynolds number, turbulent dissipation rate, and droplet pair size ratio. 
One important finding is that turbulence has a relatively dominant effect on the collision process between droplets close in size as the gravitational collision mechanism diminishes. A theory was developed to predict the radial relative velocity between droplets at contact. The theory agrees with our DNS results to within 5% for cloud droplets with strong settling. In addition, an empirical model is developed to quantify the radial distribution function. (Abstract shortened by UMI.)

  10. Reduced multiple empirical kernel learning machine.

    PubMed

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. 
The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3) this paper adopts Gauss elimination, an off-the-shelf technique, to generate a basis of the original feature space, which is stable and efficient.

  11. Acquisition of STEM Images by Adaptive Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash

    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise Ratio (PSNR) using these three different active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. 
However, they all beat the original CS as more of the “most informative” pixels are sampled. One can also argue that CS equipped with active learning requires fewer sampled pixels to achieve the same PSNR than CS with randomly sampled pixels, since all three PSNR curves with active learning grow at a faster pace than the one without active learning. For this particular STEM image, by observing the reconstructed images and the sensing masks, we find that while the method based on the RBF kernel acquires samples more uniformly, the one based on entropy samples areas of significant change more heavily, and thus less uniformly. The KL-divergence method performs the best in terms of reconstruction error (PSNR) for this example [8].

  12. Computing single step operators of logic programming in radial basis function neural networks

    NASA Astrophysics Data System (ADS)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    2014-07-01

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (Tp: I→I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programs in radial basis function neural networks. To do so, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.

  13. Enriched reproducing kernel particle method for fractional advection-diffusion equation

    NASA Astrophysics Data System (ADS)

    Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam

    2018-06-01

    The reproducing kernel particle method (RKPM) has been efficiently applied to problems with large deformations, high gradients and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate this method based on a moving least-squares approach. Via the enrichment of fractional-order power functions to the traditional integer-order basis for RKPM, leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.

  14. Evaluating and interpreting the chemical relevance of the linear response kernel for atoms II: open shell.

    PubMed

    Boisdenghien, Zino; Fias, Stijn; Van Alsenoy, Christian; De Proft, Frank; Geerlings, Paul

    2014-07-28

    Most of the work done on the linear response kernel χ(r,r') has focussed on its atom-atom condensed form χAB. Our previous work [Boisdenghien et al., J. Chem. Theory Comput., 2013, 9, 1007] was the first effort to truly focus on the non-condensed form of this function for closed (sub)shell atoms in a systematic fashion. In this work, we extend our method to the open shell case. To simplify the plotting of our results, we average our results to a symmetrical quantity χ(r,r'). This allows us to plot the linear response kernel for all elements up to and including argon and to investigate the periodicity throughout the first three rows in the periodic table and in the different representations of χ(r,r'). Within the context of Spin Polarized Conceptual Density Functional Theory, the first two-dimensional plots of spin polarized linear response functions are presented and commented on for some selected cases on the basis of the atomic ground state electronic configurations. Using the relation between the linear response kernel and the polarizability we compare the values of the polarizability tensor calculated using our method to high-level values.

  15. Doubly stochastic radial basis function methods

    NASA Astrophysics Data System (ADS)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recovery. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distribution is determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our method. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method outperforms not only the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
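
    The LOOCV estimation referenced above can be computed in closed form for RBF interpolation via Rippa's identity, which gives the leave-one-out residuals as the interpolation coefficients divided by the diagonal of the inverse Gram matrix, without refitting n times. A sketch for selecting a single Gaussian shape parameter; the test function and candidate shapes are illustrative, and the paper's doubly stochastic treatment (a distribution over shapes) is not reproduced.

```python
# Select a Gaussian RBF shape parameter by leave-one-out cross validation,
# using Rippa's closed-form identity for the LOO residuals.
import numpy as np

def gaussian_gram(x, c):
    # Gram matrix A_ij = exp(-(c * |x_i - x_j|)^2)
    return np.exp(-(c * (x[:, None] - x[None, :])) ** 2)

def loocv_error(x, y, c):
    A = gaussian_gram(x, c)
    coeffs = np.linalg.solve(A, y)
    inv_diag = np.diag(np.linalg.inv(A))
    return np.abs(coeffs / inv_diag).max()   # max leave-one-out residual

x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x)
shapes = [6.0, 10.0, 16.0, 24.0]
errs = {c: loocv_error(x, y, c) for c in shapes}
best = min(errs, key=errs.get)
print(best, errs[best])
```

Flatter shapes tend to be more accurate but worse conditioned, which is why the candidate range must be chosen with the operation/conditioning trade-off in mind.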

  16. [Classification of Children with Attention-Deficit/Hyperactivity Disorder and Typically Developing Children Based on Electroencephalogram Principal Component Analysis and k-Nearest Neighbor].

    PubMed

    Yang, Jiaojiao; Guo, Qian; Li, Wenjie; Wang, Suhong; Zou, Ling

    2016-04-01

    This paper aims to assist the individual clinical diagnosis of children with attention-deficit/hyperactivity disorder using an electroencephalogram signal detection method. Firstly, in our experiments, we obtained and studied the electroencephalogram signals from fourteen attention-deficit/hyperactivity disorder children and sixteen typically developing children during the classic interference control task of Simon-spatial Stroop, and we completed electroencephalogram data preprocessing including filtering, segmentation, removal of artifacts and so on. Secondly, we selected the subset of electroencephalogram electrodes using the principal component analysis (PCA) method, and we collected the common channels of the optimal electrodes whose occurrence rates were more than 90% for each kind of stimulation. We then extracted the latency (200~450 ms) mean amplitude features of the common electrodes. Finally, we used the k-nearest neighbor (KNN) classifier based on Euclidean distance and the support vector machine (SVM) classifier based on a radial basis kernel function for classification. In the experiment, on the same kind of interference control task, the attention-deficit/hyperactivity disorder children showed lower correct response rates and longer reaction times. The N2 emerged in the prefrontal cortex while the P2 presented in the inferior parietal area for all kinds of stimuli. Meanwhile, the children with attention-deficit/hyperactivity disorder exhibited markedly reduced N2 and P2 amplitudes compared to typically developing children. KNN resulted in better classification accuracy than the SVM classifier, and the best classification rate was 89.29% in the StI task. The results showed that the electroencephalogram signals differed in the prefrontal cortex and inferior parietal cortex between attention-deficit/hyperactivity disorder and typically developing children during the interference control task, which provides a scientific basis for the clinical diagnosis of attention-deficit/hyperactivity disorder individuals.

  17. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adapted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.

  18. Genome-Wide Association Study Identifies Candidate Genes for Starch Content Regulation in Maize Kernels

    PubMed Central

    Liu, Na; Xue, Yadong; Guo, Zhanyong; Li, Weihua; Tang, Jihua

    2016-01-01

    Kernel starch content is an important trait in maize (Zea mays L.) as it accounts for 65–75% of the dry kernel weight and positively correlates with seed yield. A number of starch synthesis-related genes have been identified in maize in recent years. However, many loci underlying variation in starch content among maize inbred lines still remain to be identified. The current study is a genome-wide association study that used a set of 263 maize inbred lines. In this panel, the average kernel starch content was 66.99%, ranging from 60.60 to 71.58% over the three study years. These inbred lines were genotyped with the SNP50 BeadChip maize array, which is comprised of 56,110 evenly spaced, random SNPs. Population structure was controlled by a mixed linear model (MLM) as implemented in the software package TASSEL. After the statistical analyses, four SNPs were identified as significantly associated with starch content (P ≤ 0.0001), among which one each are located on chromosomes 1 and 5 and two are on chromosome 2. Furthermore, 77 candidate genes associated with starch synthesis were found within the 100-kb intervals containing these four QTLs, and four highly associated genes were within 20-kb intervals of the associated SNPs. Among the four genes, Glucose-1-phosphate adenylyltransferase (APS1; Gene ID GRMZM2G163437) is known as an important regulator of kernel starch content. The identified SNPs, QTLs, and candidate genes may not only be readily used for germplasm improvement by marker-assisted selection in breeding, but can also elucidate the genetic basis of starch content. Further studies on these identified candidate genes may help determine the molecular mechanisms regulating kernel starch content in maize and other important cereal crops. PMID:27512395

  19. Patient-specific quantification of image quality: An automated method for measuring spatial resolution in clinical CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanders, Jeremiah, E-mail: jeremiah.sanders@duke.e

Purpose: To develop and validate an automated technique for evaluating the spatial resolution characteristics of clinical computed tomography (CT) images. Methods: Twenty-one chest and abdominopelvic clinical CT datasets were examined in this study. An algorithm was developed to extract a CT resolution index (RI) analogous to the modulation transfer function from clinical CT images by measuring the edge-spread function (ESF) across the patient’s skin. A polygon mesh of the air-skin boundary was created. The faces of the mesh were then used to measure the ESF across the air-skin interface. The ESF was differentiated to obtain the line-spread function (LSF), and the LSF was Fourier transformed to obtain the RI. The algorithm’s ability to detect the radial dependence of the RI was investigated. RIs measured with the proposed method were compared with a conventional phantom-based method across two reconstruction algorithms (FBP and iterative) using the spatial frequency at 50% RI, f50, as the metric for comparison. Three reconstruction kernels were investigated for each reconstruction algorithm. Finally, an observer study was conducted to determine if observers could visually perceive the differences in the measured blurriness of images reconstructed with a given reconstruction method. Results: RI measurements performed with the proposed technique exhibited the expected dependencies on the image reconstruction. The measured f50 values increased with harder kernels for both FBP and iterative reconstruction. Furthermore, the proposed algorithm was able to detect the radial dependence of the RI. Patient-specific measurements of the RI were comparable to the phantom-based technique, but the patient data exhibited a large spread in the measured f50, indicating that some datasets were blurrier than others even when the projection data were reconstructed with the same reconstruction algorithm and kernel. Results from the observer study substantiated this finding. Conclusions: Clinically informed, patient-specific spatial resolution can be measured from clinical datasets. The method is sufficiently sensitive to reflect changes in spatial resolution due to different reconstruction parameters. The method can be applied to automatically assess the spatial resolution of patient images and quantify dependencies that may not be captured in phantom data.
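The ESF → LSF → RI chain described above can be sketched numerically. Below is a minimal sketch (not the authors' implementation) using a synthetic Gaussian-blurred edge, for which the 50% frequency has a closed form; the names `resolution_index` and `f50` are illustrative.

```python
import numpy as np

def resolution_index(esf, dx):
    """Differentiate the ESF to get the LSF, then Fourier transform
    the LSF to get a resolution index normalized to 1 at f = 0."""
    lsf = np.gradient(esf, dx)
    ri = np.abs(np.fft.rfft(lsf))
    ri /= ri[0]
    freqs = np.fft.rfftfreq(len(esf), dx)   # cycles per mm
    return freqs, ri

def f50(freqs, ri):
    """Spatial frequency at which the resolution index drops to 50%."""
    i = int(np.argmax(ri < 0.5))            # first bin below 0.5
    # linear interpolation between the two bracketing bins
    return freqs[i - 1] + (0.5 - ri[i - 1]) * (freqs[i] - freqs[i - 1]) \
        / (ri[i] - ri[i - 1])

# Synthetic air-skin edge: ESF = integral of a Gaussian LSF (sigma in mm).
sigma, dx = 1.0, 0.05
x = np.arange(-10.0, 10.0, dx)
esf = np.cumsum(np.exp(-x**2 / (2 * sigma**2)))
esf /= esf[-1]

measured = f50(*resolution_index(esf, dx))
# For a Gaussian LSF the analytic 50% frequency is sqrt(ln 2 / (2 pi^2 sigma^2)).
expected = np.sqrt(np.log(2) / (2 * np.pi**2 * sigma**2))
```

On real images the ESF samples would come from the mesh faces crossing the air-skin interface rather than a synthetic edge, and would need resampling onto a uniform grid first.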

  20. Neutron dose rate analysis on HTGR-10 reactor using Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Suwoto; Adrial, H.; Hamzah, A.; Zuhair; Bakhri, S.; Sunaryo, G. R.

    2018-02-01

The HTGR-10 reactor has a cylinder-shaped core fuelled with TRISO coated fuel particle kernels in spherical pebbles and a helium cooling system. The helium coolant outlet temperature from the reactor core is designed to be 700 °C. One advantage of the HTGR type reactor is its co-generation capability: in addition to generating electricity, the reactor is designed to produce high-temperature heat that can be used for other processes. Each spherical fuel pebble contains 8335 TRISO UO2 kernel coated particles, with enrichments of 10% and 17%, dispersed in a graphite matrix. The main purpose of this study was to analyse the distribution of neutron dose rates generated by the HTGR-10 reactor. The calculation and analysis of the neutron dose rate in the HTGR-10 reactor core were performed using the Monte Carlo MCNP5v1.6 code. The double heterogeneity of the TRISO coated fuel particle kernels and of the spherical fuel pebbles in the HTGR-10 core is modelled well with the MCNP5v1.6 code. The neutron flux to dose conversion factors taken from the International Commission on Radiological Protection (ICRP-74) were used to determine the dose rate passing through the active core, reflectors, core barrel, reactor pressure vessel (RPV) and biological shield. The neutron dose rates calculated with the MCNP5v1.6 code using the ICRP-74 (2009) conversion factors for radiation workers in the radial direction outside the RPV (radial position = 220 cm from the center of the HTGR-10 core) are 9.22E-4 μSv/h and 9.58E-4 μSv/h for 10% and 17% enrichment, respectively. The calculated neutron dose rates comply with BAPETEN Chairman’s Regulation Number 4 Year 2013 on Radiation Protection and Safety in Nuclear Energy Utilization, which sets the limit for the average effective dose for radiation workers at 20 mSv/year or 10 μSv/h. Thus protection and safety of radiation workers from the radiation source are fulfilled. From this analysis, it can be concluded that the calculated neutron dose rates for the HTGR-10 core meet the required radiation safety standards.

  1. Practicable group testing method to evaluate weight/weight GMO content in maize grains.

    PubMed

    Mano, Junichi; Yanaka, Yuka; Ikezu, Yoko; Onishi, Mari; Futo, Satoshi; Minegishi, Yasutaka; Ninomiya, Kenji; Yotsuyanagi, Yuichi; Spiegelhalter, Frank; Akiyama, Hiroshi; Teshima, Reiko; Hino, Akihiro; Naito, Shigehiro; Koiwa, Tomohiro; Takabatake, Reona; Furui, Satoshi; Kitta, Kazumi

    2011-07-13

    Because of the increasing use of maize hybrids with genetically modified (GM) stacked events, the established and commonly used bulk sample methods for PCR quantification of GM maize in non-GM maize are prone to overestimate the GM organism (GMO) content, compared to the actual weight/weight percentage of GM maize in the grain sample. As an alternative method, we designed and assessed a group testing strategy in which the GMO content is statistically evaluated based on qualitative analyses of multiple small pools, consisting of 20 maize kernels each. This approach enables the GMO content evaluation on a weight/weight basis, irrespective of the presence of stacked-event kernels. To enhance the method's user-friendliness in routine application, we devised an easy-to-use PCR-based qualitative analytical method comprising a sample preparation step in which 20 maize kernels are ground in a lysis buffer and a subsequent PCR assay in which the lysate is directly used as a DNA template. This method was validated in a multilaboratory collaborative trial.
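The pooled qualitative scheme above admits a simple point estimate of the per-kernel GM rate: a pool of 20 kernels tests negative only if every kernel in it is non-GM. A minimal sketch of that estimator follows (standard group-testing arithmetic as an illustration, not the paper's validated statistical procedure; it also assumes kernel counts approximate the weight/weight fraction, i.e. roughly uniform kernel weight).

```python
import math

def gmo_content_estimate(n_positive_pools, n_pools, pool_size=20):
    """Maximum-likelihood estimate of the per-kernel GM rate from
    qualitative (positive/negative) results on equal-size kernel pools.
    A pool tests negative only if all of its kernels are non-GM, so the
    negative-pool fraction estimates (1 - p)^pool_size."""
    neg_fraction = 1 - n_positive_pools / n_pools
    if neg_fraction <= 0:
        raise ValueError("all pools positive: rate not identifiable")
    return 1 - neg_fraction ** (1 / pool_size)

# e.g. 4 positive pools out of 10 pools of 20 kernels each
rate = gmo_content_estimate(4, 10)
```

With 4 of 10 pools positive the estimate is about 2.5% of kernels GM; a confidence interval (which the group-testing literature provides) would be needed for a regulatory decision.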

  2. Cytokinin signaling regulates cambial development in poplar

    PubMed Central

    Nieminen, Kaisa; Immanen, Juha; Laxell, Marjukka; Kauppinen, Leila; Tarkowski, Petr; Dolezal, Karel; Tähtiharju, Sari; Elo, Annakaisa; Decourteix, Mélanie; Ljung, Karin; Bhalerao, Rishikesh; Keinonen, Kaija; Albert, Victor A.; Helariutta, Ykä

    2008-01-01

Although a substantial proportion of plant biomass originates from the activity of vascular cambium, the molecular basis of radial plant growth is still largely unknown. To address whether cytokinins are required for cambial activity, we studied cytokinin signaling across the cambial zones of 2 tree species, poplar (Populus trichocarpa) and birch (Betula pendula). We observed an expression peak for genes encoding cytokinin receptors in the dividing cambial cells. We reduced endogenous cytokinin levels by engineering transgenic poplar trees (P. tremula × tremuloides) to express a cytokinin catabolic gene, Arabidopsis CYTOKININ OXIDASE 2, under the promoter of a birch CYTOKININ RECEPTOR 1 gene. Transgenic trees showed a reduced concentration of a biologically active cytokinin, correlating with impaired cytokinin responsiveness. In these trees, both apical and radial growth was compromised. However, radial growth was more affected, as illustrated by a thinner stem diameter than in WT at the same height. To dissect radial from apical growth inhibition, we performed a reciprocal grafting experiment. The WT scion outgrew the diameter of the transgenic stock, implicating cytokinin activity as a direct determinant of radial growth. The reduced radial growth correlated with a reduced number of cambial cell layers. Moreover, expression of a cytokinin primary response gene was dramatically reduced in the thin-stemmed transgenic trees. Thus, a reduced level of cytokinin signaling is the primary basis for the impaired cambial growth observed. Together, our results show that cytokinins are major hormonal regulators required for cambial development. PMID:19064928

  3. Radial Coherence of Diffusion Tractography in the Cerebral White Matter of the Human Fetus: Neuroanatomic Insights

    PubMed Central

    Xu, Gang; Takahashi, Emi; Folkerth, Rebecca D.; Haynes, Robin L.; Volpe, Joseph J.; Grant, P. Ellen; Kinney, Hannah C.

    2014-01-01

    High angular resolution diffusion imaging (HARDI) demonstrates transient radial coherence of telencephalic white matter in the human fetus. Our objective was to define the neuroanatomic basis of this radial coherence through correlative HARDI- and postmortem tissue analyses. Applying immunomarkers to radial glial fibers (RGFs), axons, and blood vessels in 18 cases (19 gestational weeks to 3 postnatal years), we compared their developmental profiles to HARDI tractography in brains of comparable ages (n = 11). At midgestation, radial coherence corresponded with the presence of RGFs. At 30–31 weeks, the transition from HARDI-defined radial coherence to corticocortical coherence began simultaneously with the transformation of RGFs to astrocytes. By term, both radial coherence and RGFs had disappeared. White matter axons were radial, tangential, and oblique over the second half of gestation, whereas penetrating blood vessels were consistently radial. Thus, radial coherence in the fetal white matter likely reflects a composite of RGFs, penetrating blood vessels, and radial axons of which its transient expression most closely matches that of RGFs. This study provides baseline information for interpreting radial coherence in tractography studies of the preterm brain in the assessment of the encephalopathy of prematurity. PMID:23131806

  4. Optimization of Turbine Blade Design for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1998-01-01

To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations of flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained, and the performances of two different design functions for radial-basis networks are compared: one based on an accuracy requirement, the other based on a limit on the network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate for large data sets (up to 765 simulations have been used) than the polynomial-based response surface method. For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of two different network types, a radial-basis network and a back-propagation network, depends on the number of input data, the number of iterations required for the radial-basis network is less than that for the back-propagation network.
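The trade-off between a radial-basis surrogate and a polynomial response surface can be illustrated on toy data. A minimal sketch follows (not the report's networks; a linear-least-squares Gaussian RBF fit with one basis per sample stands in for a trained radial-basis network, and the target function is an invented stand-in for an aerodynamic response):

```python
import numpy as np

# Toy 1D stand-in for an aerodynamic response sampled by simulation.
x = np.linspace(0.0, 3.0, 40)
y = np.sin(3.0 * x)

# Cubic polynomial response surface (least squares).
poly_err = np.max(np.abs(np.polyval(np.polyfit(x, y, 3), x) - y))

# Gaussian radial-basis surface: one basis function per sample point,
# output weights found by linear least squares.
Phi = np.exp(-((x[:, None] - x[None, :]) / 0.5) ** 2)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rbf_err = np.max(np.abs(Phi @ w - y))
```

On a wavy response the low-order polynomial underfits while the RBF surface nearly interpolates the samples, mirroring the report's finding that the RBF network is more accurate on larger data sets.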

  5. Device-Free Passive Identity Identification via WiFi Signals.

    PubMed

    Lv, Jiguang; Yang, Wu; Man, Dapeng

    2017-11-02

Device-free passive identity identification has attracted much attention in recent years, and it is a representative application in sensorless sensing. It can be used in many applications such as intrusion detection and smart building. Previous studies show the sensing potential of WiFi signals in a device-free passive manner. It has been confirmed that each person's gait is unique, similar to fingerprints and irises. However, the identification accuracy of existing approaches is not satisfactory in practice. In this paper, we present Wii, a device-free WiFi-based Identity Identification approach utilizing human gait based on Channel State Information (CSI) of WiFi signals. Principal Component Analysis (PCA) and a low-pass filter are applied to remove the noise in the signals. We then extract several gait features from both the time and frequency domains, and select the most effective features according to information gain. Based on these features, Wii realizes stranger recognition through a Gaussian Mixture Model (GMM) and identity identification through a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. It is implemented using commercial WiFi devices and evaluated on a dataset with more than 1500 gait instances collected from eight subjects walking in a room. The results indicate that Wii can effectively recognize strangers and achieves high identification accuracy with low computational cost. As a result, Wii has the potential to work in typical home security systems.
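The RBF kernel named above is the Gaussian similarity an SVM evaluates between pairs of feature vectors. A minimal numpy sketch of the kernel (Gram) matrix computation; the `gamma` value and toy vectors are illustrative, not the paper's gait features:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """K[i, j] = exp(-gamma * ||x_i - z_j||^2) for rows of X and Z."""
    sq = (np.sum(X**2, axis=1)[:, None] + np.sum(Z**2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * np.maximum(sq, 0.0))  # clip tiny negatives

# Three toy gait-feature vectors.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_kernel(X, X, gamma=0.5)
```

The Gram matrix is symmetric with a unit diagonal; an SVM solver receives exactly this matrix when trained with a precomputed RBF kernel.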

  6. Classification of high-resolution multispectral satellite remote sensing images using extended morphological attribute profiles and independent component analysis

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Zheng, Lijuan; Xie, Donghai; Zhong, Ruofei

    2017-07-01

In this study, extended morphological attribute profiles (EAPs) and independent component analysis (ICA) were combined for feature extraction from high-resolution multispectral satellite remote sensing images, and the regularized least squares (RLS) approach with the radial basis function (RBF) kernel was further applied for the classification. Based on the two major independent components, geometrical features were extracted using the EAPs method. Three morphological attributes were calculated for each independent component: area, standard deviation, and moment of inertia. The extracted geometrical features were classified using the RLS approach and the commonly used LIB-SVM support vector machine library. Worldview-3 and Chinese GF-2 multispectral images were tested, and the results showed that the features extracted by EAPs and ICA can effectively improve the accuracy of high-resolution multispectral image classification, 2% higher than the EAPs with principal component analysis (PCA) method, and 6% higher than APs on the original high-resolution multispectral data. Moreover, the results suggest that both the GURLS and LIB-SVM libraries are well suited for multispectral remote sensing image classification. The GURLS library is easy to use, with automatic parameter selection, but its computation time may be longer than that of the LIB-SVM library. This study would be helpful for the classification of high-resolution multispectral satellite remote sensing images.
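RLS with an RBF kernel, as used above, amounts to kernel ridge regression: solve (K + λI)c = y and classify by the sign of the kernel expansion. A minimal sketch under that reading (toy data; γ and λ are illustrative, and this is not the GURLS implementation):

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    sq = (np.sum(X**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * X @ Z.T)
    return np.exp(-gamma * np.maximum(sq, 0))

def rls_fit(X, y, gamma, lam):
    """Kernel regularized least squares: solve (K + lam*I) c = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def rls_predict(X_train, c, X_test, gamma):
    return rbf_kernel(X_test, X_train, gamma) @ c

# Two toy "classes" labelled +1 / -1; the sign of the prediction is the class.
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.8, 3.1]])
y = np.array([1.0, 1.0, -1.0, -1.0])
c = rls_fit(X, y, gamma=0.5, lam=1e-3)
pred = np.sign(rls_predict(X, c, X, gamma=0.5))
```

Multi-class pixel labelling is then handled by one such binary machine per class (one-vs-all), which is how RLS classifiers are typically extended.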

  7. Device-Free Passive Identity Identification via WiFi Signals

    PubMed Central

    Yang, Wu; Man, Dapeng

    2017-01-01

Device-free passive identity identification has attracted much attention in recent years, and it is a representative application in sensorless sensing. It can be used in many applications such as intrusion detection and smart building. Previous studies show the sensing potential of WiFi signals in a device-free passive manner. It has been confirmed that each person’s gait is unique, similar to fingerprints and irises. However, the identification accuracy of existing approaches is not satisfactory in practice. In this paper, we present Wii, a device-free WiFi-based Identity Identification approach utilizing human gait based on Channel State Information (CSI) of WiFi signals. Principal Component Analysis (PCA) and a low-pass filter are applied to remove the noise in the signals. We then extract several gait features from both the time and frequency domains, and select the most effective features according to information gain. Based on these features, Wii realizes stranger recognition through a Gaussian Mixture Model (GMM) and identity identification through a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. It is implemented using commercial WiFi devices and evaluated on a dataset with more than 1500 gait instances collected from eight subjects walking in a room. The results indicate that Wii can effectively recognize strangers and achieves high identification accuracy with low computational cost. As a result, Wii has the potential to work in typical home security systems. PMID:29099091

  8. Feature extraction through parallel Probabilistic Principal Component Analysis for heart disease diagnosis

    NASA Astrophysics Data System (ADS)

    Shah, Syed Muhammad Saqlain; Batool, Safeera; Khan, Imran; Ashraf, Muhammad Usman; Abbas, Syed Hussnain; Hussain, Syed Adnan

    2017-09-01

Automatic diagnosis of human diseases is mostly achieved through decision support systems. The performance of these systems is mainly dependent on the selection of the most relevant features. This becomes harder when the dataset contains missing values for some features. Probabilistic Principal Component Analysis (PPCA) has a reputation for dealing with the problem of missing attribute values. This research presents a methodology which uses the results of medical tests as input, extracts a reduced-dimensional feature subset and provides diagnosis of heart disease. The proposed methodology extracts high-impact features in a new projection by using Probabilistic Principal Component Analysis (PPCA). PPCA extracts projection vectors which capture the highest covariance, and these projection vectors are used to reduce the feature dimension. The selection of projection vectors is done through Parallel Analysis (PA). The feature subset with the reduced dimension is provided to radial basis function (RBF) kernel based Support Vector Machines (SVM). The RBF-based SVM serves the purpose of classification into two categories, i.e., Heart Patient (HP) and Normal Subject (NS). The proposed methodology is evaluated through accuracy, specificity and sensitivity over three UCI datasets, i.e., Cleveland, Switzerland and Hungarian. The statistical results achieved through the proposed technique are presented in comparison to the existing research, showing its impact. The proposed technique achieved an accuracy of 82.18%, 85.82% and 91.30% for the Cleveland, Hungarian and Switzerland datasets, respectively.
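The projection step can be illustrated with plain PCA via the SVD; note that the paper's PPCA additionally fits a probabilistic noise model and tolerates missing values, which this sketch does not. The function name and synthetic data are illustrative:

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the k leading principal directions.
    (Plain PCA via SVD, standing in for the paper's PPCA.)"""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(1)
# 100 samples in 5-D that actually vary along only 2 directions.
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 5))

Z, components = pca_reduce(X, 2)
Xc = X - X.mean(axis=0)
recon_err = np.linalg.norm(Xc - Z @ components) / np.linalg.norm(Xc)
```

The reduced matrix `Z` is what would be handed to the RBF-kernel SVM; the small reconstruction error confirms the two retained directions carry nearly all the covariance.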

  9. Epileptic seizure detection in EEG signal using machine learning techniques.

    PubMed

    Jaiswal, Abeg Kumar; Banka, Haider

    2018-03-01

Epilepsy is a well-known nervous system disorder characterized by seizures. Electroencephalograms (EEGs), which capture brain neural activity, can detect epilepsy. Traditional methods for analyzing an EEG signal for epileptic seizure detection are time-consuming. Recently, several automated seizure detection frameworks using machine learning techniques have been proposed to replace these traditional methods. The two basic steps involved in machine learning are feature extraction and classification. Feature extraction reduces the input pattern space by keeping informative features, and the classifier assigns the appropriate class label. In this paper, we propose two effective approaches involving subpattern-based PCA (SpPCA) and cross-subpattern correlation-based PCA (SubXPCA) with Support Vector Machine (SVM) for automated seizure detection in EEG signals. Feature extraction was performed using SpPCA and SubXPCA. Both techniques explore the subpattern correlation of EEG signals, which helps in the decision-making process. SVM is used for classification of seizure and non-seizure EEG signals. The SVM was trained with a radial basis kernel. All the experiments were carried out on the benchmark epilepsy EEG dataset. The entire dataset consists of 500 EEG signals recorded under different scenarios. Seven different experimental cases for classification were conducted. The classification accuracy was evaluated using tenfold cross-validation. The classification results of the proposed approaches have been compared with the results of some of the existing techniques proposed in the literature to establish the claim.
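Tenfold cross-validation, used above to evaluate accuracy, partitions the 500 signals into 10 disjoint test folds so every signal is tested exactly once. A minimal index-generation sketch (illustrative, not the authors' code; the seed is arbitrary):

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Shuffle indices 0..n-1 and split them into k near-equal test folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return list(np.array_split(idx, k))

folds = kfold_indices(500, 10)          # the dataset has 500 EEG signals
test_sizes = [len(f) for f in folds]
all_covered = np.sort(np.concatenate(folds))
```

Each iteration trains the SVM on the nine folds outside the current test fold; the reported accuracy is the mean over the ten test folds.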

  10. A practical radial basis function equalizer.

    PubMed

    Lee, J; Beach, C; Tepedelenlioglu, N

    1999-01-01

A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially fewer than conventionally required. The reduction of centers is accomplished in two steps. First, an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than a 2000-to-1 reduction in centers performed as well as the RBF equalizer without center reduction, and better than a conventional linear equalizer.

  11. Prediction of forced expiratory volume in pulmonary function test using radial basis neural networks and k-means clustering.

    PubMed

    Manoharan, Sujatha C; Ramakrishnan, Swaminathan

    2009-10-01

In this work, prediction of forced expiratory volume in pulmonary function testing, carried out using spirometry and neural networks, is presented. The pulmonary function data were recorded from volunteers using a commercially available flow-volume spirometer under a standard acquisition protocol. Radial Basis Function neural networks were used to predict forced expiratory volume in 1 s (FEV1) from the recorded flow-volume curves. The optimal centres of the hidden layer of the radial basis function network were determined by the k-means clustering algorithm. The performance of the neural network model was evaluated by computing prediction error statistics (average value, standard deviation, root mean square) and the correlation with the true data for normal, restrictive and obstructive cases. Results show that the adopted neural networks are capable of predicting FEV1 in both normal and abnormal cases. Prediction accuracy was higher for obstructive abnormalities than for restrictive cases. It appears that this method of assessment is useful in diagnosing pulmonary abnormalities with incomplete data and data with poor recording.

  12. Combustion monitoring of a water tube boiler using a discriminant radial basis network.

    PubMed

    Sujatha, K; Pappa, N

    2011-01-01

This research work combines Fisher's linear discriminant (FLD) analysis and a radial basis network (RBN) for monitoring the combustion conditions of a coal-fired boiler so as to allow control of the air/fuel ratio. For this, two-dimensional flame images are required, which were captured with a CCD camera; the features of the images (average intensity, area, brightness, orientation, etc. of the flame) are extracted after preprocessing the images. The FLD is applied to reduce the n-dimensional feature size to a two-dimensional feature size for faster learning of the RBN. Also, three classes of images corresponding to different burning conditions of the flames have been extracted from continuous video processing. The corresponding temperatures and the emissions of carbon monoxide (CO) and other flue gases were obtained through measurement. Further, the training and testing of the Fisher's linear discriminant radial basis network (FLDRBN), with the data collected, have been carried out, and the performance of the algorithms is presented. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
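The FLD step projects the n-dimensional flame features onto directions that maximize class separation; for two classes the direction is w = S_W⁻¹(μ₁ − μ₀). A minimal two-class sketch with synthetic features (illustrative only; the paper reduces three flame classes to a 2-D feature space, which requires the multi-class form):

```python
import numpy as np

def fld_direction(X0, X1):
    """Two-class Fisher discriminant direction w = Sw^-1 (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter = sum of per-class scatter matrices.
    Sw = (np.cov(X0.T, bias=True) * len(X0)
          + np.cov(X1.T, bias=True) * len(X1))
    return np.linalg.solve(Sw, mu1 - mu0)

rng = np.random.default_rng(2)
# Two flame-condition "classes" in a 4-D feature space,
# separated along the first feature only.
X0 = rng.normal(0.0, 1.0, (200, 4))
X1 = rng.normal(0.0, 1.0, (200, 4)) + np.array([3.0, 0.0, 0.0, 0.0])

w = fld_direction(X0, X1)
p0, p1 = X0 @ w, X1 @ w
threshold = (p0.mean() + p1.mean()) / 2
accuracy = (np.sum(p0 < threshold) + np.sum(p1 > threshold)) / 400
```

The 1-D projections `p0`, `p1` (2-D in the multi-class case) are what the radial basis network is then trained on, which is why FLD speeds up its learning.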

  13. SU-F-SPS-06: Implementation of a Back-Projection Algorithm for 2D in Vivo Dosimetry with An EPID System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez Reyes, B; Rodriguez Perez, E; Sosa Aquino, M

Purpose: To implement a back-projection algorithm for 2D dose reconstruction for in vivo dosimetry in radiation therapy using an amorphous-silicon Electronic Portal Imaging Device (EPID). Methods: An EPID system was used to determine the dose-response function, pixel sensitivity map, exponential scatter kernels and beam hardening correction for the back-projection algorithm. All measurements were done with a 6 MV beam. A 2D dose reconstruction for an irradiated water phantom (30×30×30 cm³) was done to verify the algorithm implementation. Gamma index evaluation between the 2D reconstructed dose and the dose calculated with a treatment planning system (TPS) was done. Results: A linear fit was found for the dose-response function. The pixel sensitivity map has radial symmetry and was calculated from a profile of the pixel sensitivity variation. The parameters for the scatter kernels were determined only for a 6 MV beam. The primary dose was estimated applying the scatter kernel within the EPID and the scatter kernel within the patient. The beam hardening coefficient is σBH = 3.788×10⁻⁴ cm² and the effective linear attenuation coefficient is µAC = 0.06084 cm⁻¹. 95% of the evaluated points had γ values no larger than unity, with gamma criteria of ΔD = 3% and Δd = 3 mm, within the 50% isodose surface. Conclusion: The use of EPID systems proved to be a fast tool for in vivo dosimetry, but the implementation is more complex than that for pre-treatment dose verification; therefore, a simpler method should be investigated. The accuracy of this method should be improved by modifying the algorithm in order to compare lower isodose curves.

  14. A robust multi-kernel change detection framework for detecting leaf beetle defoliation using Landsat 7 ETM+ data

    NASA Astrophysics Data System (ADS)

    Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim

    2016-12-01

A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. A widely used framework is to compute change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g. the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers that assume parametric models, e.g. the Gaussian function, for the class conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e. its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with Support Vector Machine (SVM) and NB for detection of defoliation caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations in two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. Due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions. Its performance was comparable to SVM, with the added advantages of being probabilistic and of handling multi-class problems naturally in its original formulation.

  15. Discussion of production logging as an integral part of horizontal-well transient-pressure test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babu, D.K.; Odeh, A.S.

    1994-09-01

Ahmed and Badry discussed the identification of flow regimes for a horizontal well. The well produces from an infinitely extending slab-like reservoir of finite thickness. The system allows a top and bottom boundary. Reference 1 indicates the possible existence of two early radial-flow periods and illustrates them in figures. Kuchuk et al. and Daviau give the theoretical basis for the existence of such flow regimes. The flow is essentially 2D and in vertical planes. The authors agree that a second early radial-flow period could exist from a strictly theoretical viewpoint. However, certain important physical constraints, which were not explicitly mentioned in the above works, must be met before it can occur and for a reliable and valid analysis of the pressure data. The authors show that the second early radial-flow regime could exist only if the well were extremely close to a no-flow boundary, and they quantify "extremely close." Hence, an engineer must use extreme caution in conducting pressure analysis on the basis of a second early radial-flow regime.

  16. Optimization of global model composed of radial basis functions using the term-ranking approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun

    2014-03-15

    A term-ranking method is put forward to optimize the global model composed of radial basis functions to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to real voice signal shows that the optimized global model can capture more predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.

  17. Development of Artificial Neural Network Model for Diesel Fuel Properties Prediction using Vibrational Spectroscopy.

    PubMed

    Bolanča, Tomislav; Marinović, Slavica; Ukić, Sime; Jukić, Ante; Rukavina, Vinko

    2012-06-01

This paper describes the development of artificial neural network models that can be used to correlate and predict diesel fuel properties from several FTIR-ATR absorbances and Raman intensities as input variables. Multilayer feed-forward and radial basis function neural networks have been used for rapid and simultaneous prediction of cetane number, cetane index, density, viscosity, distillation temperatures at 10% (T10), 50% (T50) and 90% (T90) recovery, and contents of total aromatics and polycyclic aromatic hydrocarbons of commercial diesel fuels. In this study, two-phase training procedures for multilayer feed-forward networks were applied. While the first-phase training algorithm was always back propagation, two second-phase training algorithms were varied and compared, namely conjugate gradient and quasi-Newton. In the case of the radial basis function network, the radial layer was trained using the K-means radial assignment algorithm and three different radial spread algorithms: explicit, isotropic and K-nearest neighbour. The number of hidden layer neurons and the number of experimental data points used for the training set were optimized for both neural networks in order to ensure good predictive ability while reducing unnecessary experimental work. This work shows that the developed artificial neural network models can determine the main properties of diesel fuels simultaneously, based on a single and fast IR or Raman measurement.

  18. Deformation Mechanisms in Tube Billets from Zr-1%Nb Alloy under Radial Forging

    NASA Astrophysics Data System (ADS)

    Perlovich, Yuriy; Isaenkova, Margarita; Fesenko, Vladimir; Krymskaya, Olga; Zavodchikov, Alexander

    2011-05-01

    Features of the deformation process during cold radial forging of tube billets from Zr-1%Nb alloy were reconstructed on the basis of X-ray data concerning their structure and texture. Cold radial forging intensifies grain fragmentation in the bulk of the billet and significantly increases the latent hardening of potentially active slip systems, so that operation of only a single slip system becomes possible. As a result, unusual deformation and recrystallization textures arise in radially forged billets. These textures differ from the usual textures of α-Zr by the mutual inversion of the crystallographic axes aligned along the tube axis.

  19. Zernike Basis to Cartesian Transformations

    NASA Astrophysics Data System (ADS)

    Mathar, R. J.

    2009-12-01

    The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
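    The tabulation as powers of the radial distance follows the standard closed form for the circular Zernike radial polynomials, which can be evaluated directly; a minimal sketch of that formula, not a reproduction of the paper's tables:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) of the 2D (circular) Zernike functions,
    written out as powers of the radial distance (standard formula)."""
    m = abs(m)
    if (n - m) % 2:          # R_n^m vanishes when n - m is odd
        return 0.0
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))
```

    For example, R_2^0(rho) = 2 rho^2 - 1 and R_4^0(rho) = 6 rho^4 - 6 rho^2 + 1, matching the low-order tabulated polynomials.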

  20. On removing interpolation and resampling artifacts in rigid image registration.

    PubMed

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  1. On Removing Interpolation and Resampling Artifacts in Rigid Image Registration

    PubMed Central

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R.; Fischl, Bruce

    2013-01-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration. PMID:23076044

  2. Gnuastro: GNU Astronomy Utilities

    NASA Astrophysics Data System (ADS)

    Akhlaghi, Mohammad

    2018-01-01

    Gnuastro (GNU Astronomy Utilities) manipulates and analyzes astronomical data. It is an official GNU package comprising a large collection of programs and C/C++ library functions. Command-line programs perform arithmetic operations on images, convert FITS images to common types like JPG or PDF, convolve an image with a given kernel or match kernels, perform cosmological calculations, crop parts of large images (possibly in multiple files), manipulate FITS extensions and keywords, and perform statistical operations. In addition, it contains programs to make catalogs from detection maps, add noise, make mock profiles with a variety of radial functions using Monte Carlo integration for their centers, match catalogs, and detect objects in an image, among many other operations. The command-line programs share the same basic command-line user interface for the comfort of both users and developers. Gnuastro is written to comply fully with the GNU coding standards and integrates well with all Unix-like operating systems. This enables astronomers to expect a fully familiar experience in the source code, building, installation and command-line interaction that they have seen in all the other GNU software that they use. Gnuastro's extensive library is included for users who want to build their own unique programs.

  3. An integrated method for atherosclerotic carotid plaque segmentation in ultrasound image.

    PubMed

    Qian, Chunjun; Yang, Xiaoping

    2018-01-01

    Carotid artery atherosclerosis is an important cause of stroke, and ultrasound imaging has been widely used in the diagnosis of atherosclerosis. Segmenting atherosclerotic carotid plaque in ultrasound images is therefore an important task, and accurate plaque segmentation is helpful for the measurement of carotid plaque burden. In this paper, we propose and evaluate a novel learning-based integrated framework for plaque segmentation. In our study, four different classification algorithms, along with the auto-context iterative algorithm, were employed to integrate features from ultrasound images together with iteratively estimated and refined probability maps for pixel-wise classification. The four classification algorithms were support vector machine with a linear kernel, support vector machine with a radial basis function kernel, AdaBoost, and random forest. The plaque segmentation was carried out on the generated probability map. The performance of the four learning-based plaque segmentation methods was tested on 29 B-mode ultrasound images. The evaluation indices for the proposed methods consisted of sensitivity, specificity, Dice similarity coefficient, overlap index, error of area, absolute error of area, point-to-point distance, and Hausdorff point-to-point distance, along with the area under the ROC curve. The segmentation method integrating random forest with an auto-context model obtained the best results (sensitivity 80.4 ± 8.4%, specificity 96.5 ± 2.0%, Dice similarity coefficient 81.0 ± 4.1%, overlap index 68.3 ± 5.8%, error of area -1.02 ± 18.3%, absolute error of area 14.7 ± 10.9%, point-to-point distance 0.34 ± 0.10 mm, Hausdorff point-to-point distance 1.75 ± 1.02 mm, and area under the ROC curve 0.897), which compare favourably with those of existing methods. The proposed learning-based integrated framework could be useful for atherosclerotic carotid plaque segmentation and hence for the measurement of carotid plaque burden. Copyright © 2017 Elsevier B.V. All rights reserved.
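    For reference, the radial basis function kernel named in this and several of the surrounding records is the Gaussian kernel K(x, y) = exp(-gamma * ||x - y||^2). A minimal generic sketch of the kernel and its Gram matrix (not the paper's pipeline):

```python
from math import exp

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian radial basis function kernel K(x, y) = exp(-gamma * ||x - y||^2).
    x and y are feature vectors of equal length."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return exp(-gamma * sq_dist)

def gram_matrix(points, gamma=1.0):
    """Symmetric kernel (Gram) matrix over a set of feature vectors;
    this is what a kernel SVM actually operates on."""
    return [[rbf_kernel(p, q, gamma) for q in points] for p in points]
```

    Note that K(x, x) = 1 for every point and that gamma controls how quickly similarity decays with distance, which is the main hyper-parameter tuned alongside the SVM cost parameter.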

  4. Deep-learning: investigating deep neural networks hyper-parameters and comparison of performance to shallow methods for modeling bioactivity data.

    PubMed

    Koutsoukas, Alexios; Monaghan, Keith J; Li, Xiaoli; Huan, Jun

    2017-06-28

    In recent years, research in artificial neural networks has resurged, now under the deep-learning umbrella, and grown extremely popular. Recently reported success of DL techniques in crowd-sourced QSAR and predictive toxicology competitions has showcased these methods as powerful tools in drug-discovery and toxicology research. The aim of this work was twofold: first, a large number of hyper-parameter configurations was explored to investigate how they affect the performance of DNNs and to provide starting points for tuning DNNs; second, their performance was compared to popular methods widely employed in cheminformatics, namely Naïve Bayes, k-nearest neighbour, random forest and support vector machines. Moreover, the robustness of the machine learning methods to different levels of artificially introduced noise was assessed. The open-source Caffe deep-learning framework and modern NVidia GPU units were utilized to carry out this study, allowing a large number of DNN configurations to be explored. We show that feed-forward deep neural networks are capable of achieving strong classification performance and, when optimized, outperform shallow methods across diverse activity classes. Hyper-parameters found to play a critical role are the activation function, dropout regularization, number of hidden layers and number of neurons. Compared to the other methods, tuned DNNs were found to statistically outperform, with p value <0.01 based on the Wilcoxon test. DNNs achieved an MCC on average 0.149 units higher than NB, 0.092 higher than kNN, 0.052 higher than SVM with a linear kernel, 0.021 higher than RF, and 0.009 higher than SVM with a radial basis function kernel. When exploring robustness to noise, non-linear methods performed well at low noise levels (up to 20%); at higher levels (above 30%), the Naïve Bayes method performed well and, at the highest noise level (50%), even outperformed the more sophisticated methods across several datasets.

  5. The Chimera II Real-Time Operating System for advanced sensor-based control applications

    NASA Technical Reports Server (NTRS)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1992-01-01

    Attention is given to the Chimera II Real-Time Operating System, which has been developed for advanced sensor-based control applications. Chimera II provides a high-performance real-time kernel and a variety of IPC features. The hardware platform required to run Chimera II consists of commercially available hardware and allows custom hardware to be easily integrated; the design allows it to be used with almost any type of VMEbus-based processor and device. It allows radically differing hardware to be programmed using a common system, thus providing a first and necessary step towards the standardization of reconfigurable systems, which results in a reduction of development time and cost.

  6. Parallel Fixed Point Implementation of a Radial Basis Function Network in an FPGA

    PubMed Central

    de Souza, Alisson C. D.; Fernandes, Marcelo A. C.

    2014-01-01

    This paper proposes a parallel fixed point radial basis function (RBF) artificial neural network (ANN), implemented in a field programmable gate array (FPGA) trained online with a least mean square (LMS) algorithm. The processing time and occupied area were analyzed for various fixed point formats. The problems of precision of the ANN response for nonlinear classification using the XOR gate and interpolation using the sine function were also analyzed in a hardware implementation. The entire project was developed using the System Generator platform (Xilinx), with a Virtex-6 xc6vcx240t-1ff1156 as the target FPGA. PMID:25268918
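    The combination described above, a Gaussian RBF layer with output weights trained online by the LMS rule, applied to the XOR problem, can be sketched in software. This is an illustrative floating-point sketch with invented parameter values (centers on the four inputs, sigma = 0.5, learning rate 0.3), not the fixed-point FPGA design:

```python
from math import exp

def phi(x, c, sigma=0.5):
    """Gaussian radial basis activation centered at c."""
    return exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))

# XOR classification task from the abstract; centers are placed on the
# four input patterns (a common textbook choice, assumed here).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 1, 1, 0]
centers = X
w = [0.0] * len(centers)

# Online LMS: after each presented sample, move the output weights
# along the instantaneous error gradient.
lr = 0.3
for _ in range(500):
    for x, t in zip(X, T):
        h = [phi(x, c) for c in centers]
        y = sum(wi * hi for wi, hi in zip(w, h))
        w = [wi + lr * (t - y) * hi for wi, hi in zip(w, h)]

predictions = [round(sum(wi * phi(x, c) for wi, c in zip(w, centers))) for x in X]
```

    Because the hidden activations for the four patterns are nearly orthogonal at this sigma, cyclic LMS converges to weights that reproduce the XOR targets.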

  7. A meshless method using radial basis functions for numerical solution of the two-dimensional KdV-Burgers equation

    NASA Astrophysics Data System (ADS)

    Zabihi, F.; Saffarian, M.

    2016-07-01

    The aim of this article is to obtain the numerical solution of the two-dimensional KdV-Burgers equation. We construct the solution using a collocation approach based on thin plate spline radial basis functions, which builds an approximate solution by discretizing time and space into small steps. We use a predictor-corrector scheme to avoid solving the nonlinear system. The results of numerical experiments are compared with analytical solutions to confirm the accuracy and efficiency of the presented scheme.
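    The core building block of such a scheme is thin plate spline interpolation over scattered collocation points. Below is a minimal 2D sketch with made-up nodes and values; the full method adds a predictor-corrector time-stepping loop on top of this step, and practical TPS interpolants usually also append a low-order polynomial term:

```python
from math import log, hypot

def tps(r):
    """Thin plate spline radial basis, phi(r) = r^2 * log(r), with phi(0) = 0."""
    return 0.0 if r == 0.0 else r * r * log(r)

def solve(A, b):
    """Plain Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Collocation nodes and sample values (invented data, not from the paper).
nodes = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]
values = [0.0, 1.0, 1.0, 2.0, 1.0]
A = [[tps(hypot(px - qx, py - qy)) for qx, qy in nodes] for px, py in nodes]
coeffs = solve(A, values)

def interpolant(x, y):
    """RBF expansion u(x, y) = sum_j c_j * phi(||(x, y) - node_j||)."""
    return sum(c * tps(hypot(x - px, y - py)) for c, (px, py) in zip(coeffs, nodes))
```

    The solved coefficients make the expansion reproduce the prescribed values exactly at the collocation nodes, which is the defining property the collocation method relies on.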

  8. Stock market index prediction using neural networks

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok

    1994-03-01

    A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index was used as a benchmark in our experiments, where Radial Basis Function based neural networks were designed to model these indices over the period from January 1988 to December 1992. A notable success was achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for stock market index prediction.

  9. Novel two-way artificial boundary condition for 2D vertical water wave propagation modelled with Radial-Basis-Function Collocation Method

    NASA Astrophysics Data System (ADS)

    Mueller, A.

    2018-04-01

    A new transparent artificial boundary condition is derived for two-dimensional (vertical) (2DV) free-surface water wave propagation modelled with the meshless Radial-Basis-Function Collocation Method (RBFCM) as a boundary-only solution. The two-way artificial boundary condition (2wABC) works as a pure incidence, pure radiation, or combined incidence/radiation BC. In this work the 2wABC is applied to harmonic linear water waves; its performance is tested against analytical solutions for wave propagation over a horizontal sea bottom, standing and partially standing waves, and the interference of waves with different periods.

  10. Defense Responses to Mycotoxin-Producing Fungi Fusarium proliferatum, F. subglutinans, and Aspergillus flavus in Kernels of Susceptible and Resistant Maize Genotypes.

    PubMed

    Lanubile, Alessandra; Maschietto, Valentina; De Leonardis, Silvana; Battilani, Paola; Paciolla, Costantino; Marocco, Adriano

    2015-05-01

    Developing kernels of resistant and susceptible maize genotypes were inoculated with Fusarium proliferatum, F. subglutinans, and Aspergillus flavus. Selected defense systems were investigated using real-time reverse transcription-polymerase chain reaction to monitor the expression of pathogenesis-related (PR) genes (PR1, PR5, PRm3, PRm6) and genes protective against oxidative stress (peroxidase, catalase, superoxide dismutase and ascorbate peroxidase) at 72 h postinoculation. The study was also extended to the analysis of the ascorbate-glutathione cycle and of the catalase, superoxide dismutase, and cytosolic and wall peroxidase enzymes. Furthermore, the hydrogen peroxide and malondialdehyde contents were studied to evaluate the oxidation level. Higher gene expression and enzymatic activities were observed in uninoculated kernels of the resistant line, conferring greater readiness against pathogen attack. Moreover, expression values of PR genes remained higher in the resistant line after inoculation, demonstrating a potentiated response to pathogen invasion. In contrast, reactive oxygen species-scavenging genes were strongly induced in the susceptible line only after pathogen inoculation, although their enzymatic activity was higher in the resistant line. Our data provide an important basis for further investigation of defense gene functions in developing kernels in order to improve resistance to fungal pathogens. Maize genotypes with overexpressed resistance traits could be profitably utilized in breeding programs focused on resistance to pathogens and grain safety.

  11. Optical properties of alkali halide crystals from all-electron hybrid TD-DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webster, R., E-mail: ross.webster07@imperial.ac.uk; Harrison, N. M.; Bernasconi, L.

    2015-06-07

    We present a study of the electronic and optical properties of a series of alkali halide crystals AX, with A = Li, Na, K, Rb and X = F, Cl, Br, based on a recent implementation of hybrid-exchange time-dependent density functional theory (TD-DFT) (TD-B3LYP) in the all-electron Gaussian basis set code CRYSTAL. We examine, in particular, the impact of basis set size and quality on the prediction of the optical gap and exciton binding energy. The formation of bound excitons by photoexcitation is observed in all the studied systems and this is shown to be correlated to specific features of the Hartree-Fock exchange component of the TD-DFT response kernel. All computed optical gaps and exciton binding energies are however markedly below estimated experimental and, where available, 2-particle Green's function (GW-Bethe-Salpeter equation, GW-BSE) values. We attribute this reduced exciton binding to the incorrect asymptotics of the B3LYP exchange-correlation ground state functional and of the TD-B3LYP response kernel, which lead to a large underestimation of the Coulomb interaction between the excited electron and hole wavefunctions. Considering LiF as an example, we correlate the asymptotic behaviour of the TD-B3LYP kernel to the fraction of Fock exchange c_HF admixed in the ground state functional and show that there exists one value of c_HF (∼0.32) that reproduces at least semi-quantitatively the optical gap of this material.

  12. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  13. Wavelet decomposition and radial basis function networks for system monitoring

    NASA Astrophysics Data System (ADS)

    Ikonomopoulos, A.; Endou, A.

    1998-10-01

    Two approaches are coupled to develop a novel collection of black box models for monitoring operational parameters in a complex system. The idea springs from the intention of obtaining multiple predictions for each system variable and fusing them before they are used to validate the actual measurement. The proposed architecture pairs the analytical abilities of the discrete wavelet decomposition with the computational power of radial basis function networks. Members of a wavelet family are constructed in a systematic way and chosen through a statistical selection criterion that optimizes the structure of the network. Network parameters are further optimized through a quasi-Newton algorithm. The methodology is demonstrated utilizing data obtained during two transients of the Monju fast breeder reactor. The models developed are benchmarked with respect to similar regressors based on Gaussian basis functions.

  14. Computed tomography arthrography using a radial plane view for the detection of triangular fibrocartilage complex foveal tears.

    PubMed

    Moritomo, Hisao; Arimitsu, Sayuri; Kubo, Nobuyuki; Masatomi, Takashi; Yukioka, Masao

    2015-02-01

    To classify triangular fibrocartilage complex (TFCC) foveal lesions on the basis of computed tomography (CT) arthrography using a radial plane view, and to correlate the CT arthrography results with surgical findings. We also tested the interobserver and intra-observer reliability of the radial plane view. A total of 33 patients with a suspected TFCC foveal tear who had undergone wrist CT arthrography and subsequent surgical exploration were enrolled. We classified the configurations of TFCC foveal lesions into 5 types on the basis of CT arthrography with the radial plane view, in which the image slices rotate clockwise centered on the ulnar styloid process. Sensitivity, specificity, and positive predictive values were calculated for each type of foveal lesion on CT arthrography for the detection of foveal tears. We determined interobserver and intra-observer agreement using kappa statistics. We also compared accuracies of the radial plane views with those of the coronal plane views. Among the tear types on CT arthrography, type 3, a roundish defect at the fovea, and type 4, a large defect at the overall ulnar insertion, had high specificity and positive predictive value for the detection of foveal tears. Specificity and positive predictive values were 90% and 89% for type 3 and 100% and 100% for type 4, respectively, whereas sensitivity was 35% for type 3 and 22% for type 4. Interobserver and intra-observer agreement was substantial and almost perfect, respectively. The radial plane view identified foveal lesions of the palmar and dorsal radioulnar ligaments separately, but accuracy results with the radial plane views were not statistically different from those with the coronal plane views. Computed tomography arthrography with a radial plane view exhibited enhanced specificity and positive predictive value when a type 3 or 4 lesion was identified in the detection of a TFCC foveal tear compared with historical controls. Level of evidence: Diagnostic II. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  15. Task-driven imaging in cone-beam computed tomography.

    PubMed

    Gang, G J; Stayman, J W; Ouadah, S; Ehtiati, T; Siewerdsen, J H

    Conventional workflow in interventional imaging often ignores a wealth of prior information about the patient anatomy and the imaging task. This work introduces a task-driven imaging framework that utilizes such information to prospectively design acquisition and reconstruction techniques for cone-beam CT (CBCT) in a manner that maximizes task-based performance in subsequent imaging procedures. The framework is employed to jointly optimize tube current modulation, orbital tilt, and reconstruction parameters in filtered backprojection reconstruction for interventional imaging. Theoretical predictors of noise and resolution relate acquisition and reconstruction parameters to task-based detectability. Given a patient-specific prior image and a specification of the imaging task, an optimization algorithm prospectively identifies the combination of imaging parameters that maximizes task-based detectability. Initial investigations were performed for a variety of imaging tasks in an elliptical phantom and an anthropomorphic head phantom. Optimization of tube current modulation and view-dependent reconstruction kernel was shown to have the greatest benefit for directional tasks (e.g., identification of device or tissue orientation). The task-driven approach yielded techniques in which the dose and sharp kernels were concentrated in the views contributing most to the signal power associated with the imaging task; for example, detectability in a line-pair detection task was improved at least threefold compared with conventional approaches. For radially symmetric tasks, the task-driven strategy yielded results similar to a minimum-variance strategy in the absence of kernel modulation. Optimization of the orbital tilt successfully avoided highly attenuating structures that can confound the imaging task by introducing noise correlations masquerading at spatial frequencies of interest. This work demonstrated the potential of a task-driven imaging framework to improve image quality and reduce dose beyond what is achievable with conventional imaging approaches.

  16. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction.

    PubMed

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2015-01-07

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes the resolution degrading factors into account in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from LOR-PDF was verified statistically by replicate reconstructions. In addition, [(11)C]AFM rats imaged on the HRRT and [(11)C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstructions than with other methods.

  17. Applications of the line-of-response probability density function resolution model in PET list mode reconstruction

    PubMed Central

    Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E

    2016-01-01

    Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes the resolution degrading factors into account in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners, the HRRT and the Focus-220. A more uniform resolution distribution was observed in point source reconstructions by replacing the spatially-invariant kernels with the spatially-variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than the image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were utilized to demonstrate the advantage of the new model. Higher contrast between high-uptake regions of only a few millimeters in diameter and the background was observed in LOR-PDF reconstructions than with other methods. PMID:25490063

  18. Conformable derivative approach to anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Yang, S.; Zhang, S. Q.

    2018-02-01

    By using a new derivative of fractional order, referred to as the conformable derivative, an alternative representation of the diffusion equation is proposed to improve the modeling of anomalous diffusion. Analytical solutions of the conformable derivative model in terms of the Gauss kernel and the error function are presented. The power law of the mean square displacement for the conformable diffusion model is studied by invoking the time-dependent Gauss kernel. The parameters of the conformable derivative model are determined by the Levenberg-Marquardt method on the basis of experimental data on chloride ion transport in reinforced concrete. The data fitting results showed that the conformable derivative model agrees better with the experimental data than the normal diffusion equation. Furthermore, the potential application of the proposed conformable derivative model to water flow in low-permeability media is discussed.
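    Assuming the usual definition of the conformable derivative, T_a f(t) = t^(1-a) f'(t), the substitution s = t^a / a turns the conformable diffusion equation t^(1-a) du/dt = D d2u/dx2 into ordinary diffusion, so the Gauss-kernel solution is an ordinary Gaussian in the rescaled time s. A small numerical check of this (a sketch based on that assumed definition, not the authors' code; parameter values invented):

```python
from math import exp, pi, sqrt

ALPHA, D = 0.8, 1.0   # assumed fractional order and diffusivity

def u(x, t):
    """Candidate Gauss-kernel solution: an ordinary diffusion Gaussian
    evaluated at the rescaled time s = t**ALPHA / ALPHA."""
    s = t ** ALPHA / ALPHA
    return exp(-x * x / (4 * D * s)) / sqrt(4 * pi * D * s)

# Finite-difference check that u satisfies t^(1-a) u_t = D u_xx at one point.
x, t, ht, hx = 0.3, 2.0, 1e-6, 1e-3
lhs = t ** (1 - ALPHA) * (u(x, t + ht) - u(x, t - ht)) / (2 * ht)
rhs = D * (u(x + hx, t) - 2 * u(x, t) + u(x - hx, t)) / hx ** 2
residual = abs(lhs - rhs) / abs(rhs)
```

    The same substitution gives the abstract's power law directly: the kernel variance grows as 2 D t^a / a, so the mean square displacement scales as t^a.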

  19. Structured functional additive regression in reproducing kernel Hilbert spaces.

    PubMed

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2014-06-01

    Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting nonlinear additive components has been less studied. In this work, we propose a new regularization framework for the structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application.

  20. Data filtering with support vector machines in geometric camera calibration.

    PubMed

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Therefore, accurate camera calibration and orientation procedures have become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined by its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for an Olympus E10 camera system with an aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for the correction of image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.

  1. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis.

    PubMed

    Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra

    2014-01-01

    Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two natural language processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually), among them description time; type of scene described (simple or complex); presentation order (which type of picture was described first); and age. In this study, the descriptions by 144 of the subjects studied in Toledo 18, which included 200 healthy Brazilians of both genders, were used. A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods.

  2. Use of Vis/NIRS for the determination of sugar content of cola soft drinks based on chemometric methods

    NASA Astrophysics Data System (ADS)

    Liu, Fei; He, Yong

    2008-03-01

    Three different chemometric methods were evaluated for the determination of the sugar content of cola soft drinks using visible and near-infrared spectroscopy (Vis/NIRS). Four varieties of cola were prepared, and 180 samples (45 for each variety) were selected for the calibration set, while 60 samples (15 for each variety) were selected for the validation set. Savitzky-Golay smoothing, standard normal variate (SNV) transformation and Savitzky-Golay first-derivative transformation were applied in the pre-processing of the spectral data. The first eleven principal components (PCs) extracted by partial least squares (PLS) analysis were employed as the inputs of a BP neural network (BPNN) and a least squares-support vector machine (LS-SVM) model. The BPNN model with optimal structural parameters and the LS-SVM model with a radial basis function (RBF) kernel were then used to build regression models, with PLS regression as a comparison. The correlation coefficient (r), root mean square error of prediction (RMSEP) and bias for prediction were 0.971, 1.259 and -0.335 for PLS; 0.986, 0.763 and -0.042 for BPNN; and 0.978, 0.995 and -0.227 for LS-SVM, respectively. All three methods achieved high precision. The results indicated that Vis/NIR spectroscopy combined with chemometric methods can be used as a high-precision approach for the determination of the sugar content of cola soft drinks.
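    The quoted figures of merit (r, RMSEP, bias) can be computed as below; the reference and predicted values are made-up placeholders, not data from the study.

    ```python
    # Minimal sketch of the evaluation metrics used above: correlation
    # coefficient r, root mean square error of prediction (RMSEP), and bias.
    # The numbers are invented placeholders for a hypothetical validation set.
    import numpy as np

    y_true = np.array([10.2, 10.8, 9.9, 11.1, 10.5, 9.7])   # reference sugar content
    y_pred = np.array([10.0, 10.9, 10.1, 10.8, 10.6, 9.9])  # model predictions

    r = np.corrcoef(y_true, y_pred)[0, 1]             # correlation coefficient
    rmsep = np.sqrt(np.mean((y_pred - y_true) ** 2))  # RMS error of prediction
    bias = np.mean(y_pred - y_true)                   # mean signed error

    print(f"r={r:.3f}, RMSEP={rmsep:.3f}, bias={bias:.3f}")
    ```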

  3. Machine learning classifier using abnormal brain network topological metrics in major depressive disorder.

    PubMed

    Guo, Hao; Cao, Xiaohua; Liu, Zhifen; Li, Haifang; Chen, Junjie; Zhang, Kerang

    2012-12-05

    Resting-state functional brain networks have been widely studied in brain disease research. However, it is currently unclear whether abnormal resting-state functional brain network metrics can be used with machine learning for the classification of brain diseases. Resting-state functional brain networks were constructed for 28 healthy controls and 38 major depressive disorder patients by thresholding partial correlation matrices of 90 regions. Three nodal metrics were calculated using graph theory-based approaches. Nonparametric permutation tests were then used for group comparisons of topological metrics, which were used as classification features in six different algorithms. We used statistical significance as the threshold for selecting features and measured the accuracies of the six classifiers with different numbers of features. A sensitivity analysis method was used to evaluate the importance of different features. The results indicated that some regions exhibited significantly abnormal nodal centralities, including the limbic system, basal ganglia, medial temporal, and prefrontal regions. The support vector machine with radial basis function kernel and the neural network algorithm exhibited the highest average accuracies (79.27% and 78.22%, respectively) with 28 features (P<0.05). Correlation analysis between feature importance and the statistical significance of the metrics revealed a strong positive correlation between them. Overall, the current study demonstrated that major depressive disorder is associated with abnormal functional brain network topological metrics, and that statistically significant nodal metrics can be successfully used for feature selection in classification algorithms.

  4. Automated image segmentation using support vector machines

    NASA Astrophysics Data System (ADS)

    Powell, Stephanie; Magnotta, Vincent A.; Andreasen, Nancy C.

    2007-03-01

    Neurodegenerative and neurodevelopmental diseases demonstrate problems associated with brain maturation and aging. Automated methods to delineate brain structures of interest are required to analyze the large amounts of imaging data being collected in several ongoing multi-center studies. We have previously reported on using artificial neural networks (ANN) to define subcortical brain structures, with relative overlaps of 0.88 for the thalamus, 0.85 for the caudate and 0.81 for the putamen. In this work, a priori probability information was generated using Thirion's demons registration algorithm. The input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. We have applied the support vector machine (SVM) machine learning algorithm to automatically segment subcortical and cerebellar regions using the same input vector information. The SVM architecture was derived from the ANN framework. Training was completed using a radial basis function kernel with gamma equal to 5.5, and was performed using 15,000 vectors collected from 15 training images in approximately 10 minutes. The resulting support vectors were applied to delineate 10 images not part of the training set. The relative overlap calculated for the subcortical structures was 0.87 for the thalamus, 0.84 for the caudate, 0.84 for the putamen, and 0.72 for the hippocampus. Relative overlap for the cerebellar lobes ranged from 0.76 to 0.86. The reliability of the SVM-based algorithm was similar to the inter-rater reliability between manual raters and can be achieved without rater intervention.

  5. The identification of high potential archers based on relative psychological coping skills variables: A Support Vector Machine approach

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Muazu Musa, Rabiu; Majeed, A. P. P. Abdul; Razali Abdullah, Mohamad; Aizzat Zakaria, Muhammad; Muaz Alim, Muhammad; Arif Mat Jizat, Jessnor; Fauzi Ibrahim, Mohamad

    2018-03-01

    Support Vector Machines (SVMs) have been shown to be powerful learning algorithms for classification and prediction. However, the use of SVM for prediction and classification in sport is at its inception. The present study classified and predicted high and low potential archers from a collection of psychological coping skills variables trained on different SVMs. Fifty youth archers with a mean age and standard deviation of 17.0 ± .056, gathered from various archery programmes, completed a one-end shooting score test. A psychological coping skills inventory, which evaluates the archers' level of related coping skills, was filled out by the archers prior to their shooting tests. k-means cluster analysis was applied to cluster the archers based on their scores on the variables assessed. SVM models with linear and fine radial basis function (RBF) kernel functions were trained on the psychological variables. The k-means analysis clustered the archers into high psychologically prepared archers (HPPA) and low psychologically prepared archers (LPPA). The linear SVM exhibited good accuracy and precision throughout the exercise, with an accuracy of 92% and a considerably lower error rate for the prediction of the HPPA and the LPPA compared to the fine RBF SVM. The findings of this investigation can be valuable to coaches and sports managers in recognising high potential athletes from the selected psychological coping skills variables examined, which would consequently save time and energy during talent identification and development programmes.
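    The cluster-then-classify workflow above can be sketched as follows; the data are synthetic stand-ins for the psychological scores, and the group means, spreads and sample sizes are assumptions, not the study's measurements.

    ```python
    # Sketch of the workflow described above: k-means assigns group labels
    # (e.g. HPPA vs LPPA), then linear and RBF SVMs are trained on those
    # labels. All data below are simulated placeholders.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    # Two loose groups of hypothetical coping-skill scores (4 variables).
    X = np.vstack([rng.normal(60, 8, (25, 4)), rng.normal(40, 8, (25, 4))])

    # Step 1: unsupervised grouping of the archers.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Step 2: supervised classifiers trained on the cluster labels.
    accs = {}
    for name, clf in [("linear", SVC(kernel="linear")),
                      ("rbf", SVC(kernel="rbf", gamma="scale"))]:
        accs[name] = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name} SVM cross-validated accuracy: {accs[name]:.2f}")
    ```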

  6. Combination of mass spectrometry-based targeted lipidomics and supervised machine learning algorithms in detecting adulterated admixtures of white rice.

    PubMed

    Lim, Dong Kyu; Long, Nguyen Phuoc; Mo, Changyeun; Dong, Ziyuan; Cui, Lingmei; Kim, Giyoung; Kwon, Sung Won

    2017-10-01

    The mixing of extraneous ingredients with original products is a common adulteration practice in food and herbal medicines. In particular, the authenticity of white rice and its corresponding blended products has become a key issue in the food industry. Accordingly, our current study aimed to develop and evaluate a novel discrimination method by combining targeted lipidomics with powerful supervised learning methods, and eventually introduce a platform to verify the authenticity of white rice. A total of 30 cultivars were collected, and 330 representative samples of white rice from Korea and China as well as seven mixing ratios were examined. Random forests (RF), support vector machines (SVM) with a radial basis function kernel, C5.0, model averaged neural network, and k-nearest neighbor classifiers were used for the classification. The classifiers effectively distinguished white rice from Korea from blended samples, with high prediction accuracy for contamination ratios as low as five percent. In addition, the RF and SVM classifiers were generally superior to and more robust than the other techniques. Our approach demonstrated that the relative differences in lysoGPLs can be successfully utilized to detect the adulterated mixing of white rice originating from different countries. In conclusion, the present study introduces a novel and high-throughput platform that can be applied to authenticate adulterated admixtures from original white rice samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Metabolomic prediction of yield in hybrid rice.

    PubMed

    Xu, Shizhong; Xu, Yang; Gong, Liang; Zhang, Qifa

    2016-10-01

    Rice (Oryza sativa) provides a staple food source for more than 50% of the world's population. An increase in yield can significantly contribute to global food security. Hybrid breeding can potentially help to meet this goal because hybrid rice often shows a considerable increase in yield when compared with pure-bred cultivars. We recently developed a marker-guided prediction method for hybrid yield and showed a substantial increase in yield through genomic hybrid breeding. We now have transcriptomic and metabolomic data as potential resources for prediction. Using six prediction methods, including least absolute shrinkage and selection operator (LASSO), best linear unbiased prediction (BLUP), stochastic search variable selection, partial least squares, and support vector machines using the radial basis function and polynomial kernel function, we found that the predictability of hybrid yield can be further increased using these omic data. LASSO and BLUP are the most efficient methods for yield prediction. For high heritability traits, genomic data remain the most efficient predictors. When metabolomic data are used, the predictability of hybrid yield is almost doubled compared with genomic prediction. Of the 21 945 potential hybrids derived from 210 recombinant inbred lines, selection of the top 10 hybrids predicted from metabolites would lead to a ~30% increase in yield. We hypothesize that each metabolite represents a biologically built-in genetic network for yield; thus, using metabolites for prediction is equivalent to using information integrated from these hidden genetic networks for yield prediction. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
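    To make the prediction setting concrete, here is a hedged sketch of LASSO-based trait prediction on simulated predictors, one of the methods compared above; the dimensions, effect sizes and noise level are assumptions, and nothing here reproduces the study's data.

    ```python
    # Sketch of omic-data-based trait prediction with LASSO on simulated
    # predictors. Sample and predictor counts are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n, p = 210, 500                       # lines x markers/metabolites (assumed)
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:20] = rng.normal(0, 1, 20)      # sparse true effects
    y = X @ beta + rng.normal(0, 1, n)    # simulated trait values

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LassoCV(cv=5).fit(Xtr, ytr)

    # "Predictability" as the correlation between predicted and observed traits.
    predictability = np.corrcoef(model.predict(Xte), yte)[0, 1]
    print(f"out-of-sample predictive correlation: {predictability:.2f}")
    ```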

  8. Pathologic bone tissues in a Turkey vulture and a nonavian dinosaur: implications for interpreting endosteal bone and radial fibrolamellar bone in fossil dinosaurs.

    PubMed

    Chinsamy, Anusuya; Tumarkin-Deratzian, Allison

    2009-09-01

    We report on similar pathological bone microstructure in an extant turkey vulture (Cathartes aura) and a nonavian dinosaur from Transylvania. Both these individuals exhibit distinctive periosteal reactive bone deposition accompanied by endosteal bone deposits in the medullary cavity. Our findings have direct implications on the two novel bone tissues recently described among nonavian dinosaurs, radial fibrolamellar bone tissue and medullary bone tissue. On the basis of the observed morphology of the periosteal reactive bone in the turkey vulture and the Transylvanian dinosaur, we propose that the radial fibrolamellar bone tissues observed in mature dinosaurs may have had a pathological origin. Our analysis also shows that on the basis of origin, location, and morphology, pathologically derived endosteal bone tissue can be similar to medullary bone tissues described in nonavian dinosaurs. As such, we caution the interpretation of all endosteally derived bone tissue as homologous to avian medullary bone. (c) 2009 Wiley-Liss, Inc.

  9. Radial Basis Function Neural Network Application to Power System Restoration Studies

    PubMed Central

    Sadeghkhani, Iman; Ketabi, Abbas; Feuillet, Rene

    2012-01-01

    One of the most important issues in power system restoration is overvoltages caused by transformer switching. These overvoltages might damage equipment and delay power system restoration. This paper presents a radial basis function neural network (RBFNN) to study transformer switching overvoltages. To achieve good generalization capability for the developed RBFNN, equivalent parameters of the network are added to the RBFNN inputs. The developed RBFNN is trained with the worst-case scenario of switching angle and remanent flux and tested for typical cases. The simulated results for a portion of the 39-bus New England test system show that the proposed technique can estimate the peak values and duration of switching overvoltages with good accuracy. PMID:22792093
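    A minimal RBF network of the kind used in such studies can be sketched as below; this is a generic illustration (k-means centres, a common Gaussian width, and a least-squares output layer fit to a toy signal), not the paper's RBFNN or its power-system inputs.

    ```python
    # Generic RBF network sketch: Gaussian centres chosen by k-means,
    # output weights solved by linear least squares. The target function,
    # centre count and width are illustrative assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    x = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
    y = np.sin(x).ravel() + rng.normal(0, 0.05, x.shape[0])

    centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(x).cluster_centers_
    width = 0.5  # assumed common Gaussian width

    def design(xs):
        # Gaussian RBF features, one column per centre (via broadcasting).
        d = xs - centres.ravel()
        return np.exp(-(d ** 2) / (2 * width ** 2))

    Phi = design(x)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output layer

    rms = np.sqrt(np.mean((Phi @ w - y) ** 2))
    print(f"training RMS error: {rms:.3f}")
    ```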

  10. Radial rescaling approach for the eigenvalue problem of a particle in an arbitrarily shaped box.

    PubMed

    Lijnen, Erwin; Chibotaru, Liviu F; Ceulemans, Arnout

    2008-01-01

    In the present work we introduce a methodology for solving a quantum billiard with Dirichlet boundary conditions. The procedure starts from the exactly known solutions for the particle in a circular disk, which are subsequently radially rescaled in such a way that they obey the new boundary conditions. In this way one constructs a complete basis set which can be used to obtain the eigenstates and eigenenergies of the corresponding quantum billiard to a high level of precision. Test calculations for several regular polygons show the efficiency of the method which often requires one or two basis functions to describe the lowest eigenstates with high accuracy.

  11. Short Term Single Station GNSS TEC Prediction Using Radial Basis Function Neural Network

    NASA Astrophysics Data System (ADS)

    Muslim, Buldan; Husin, Asnawi; Efendy, Joni

    2018-04-01

    TEC prediction models for 24 hours ahead have been developed from JOG2 GPS TEC data during 2016. Eleven months of TEC data were used for training the radial basis function neural network (RBFNN) model, and the last month of data (December 2016) was used for testing it. The RBFNN inputs are the previous 24 hours of TEC data and the minimum of the Dst index during the previous 24 hours. The outputs of the model are the TEC predictions 24 hours ahead. Comparisons show that the RBFNN model predicts TEC for the next 24 hours more accurately than the GIM TEC model.

  12. Sensitivity kernels for viscoelastic loading based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional $J$ defined in terms of the surface deformation fields, we show that its first-order perturbation can be written $\delta J = \int_{M_S} K_{\eta}\,\delta\ln\eta\,\mathrm{d}V + \int_{t_0}^{t_1}\!\int_{\partial M} K_{\dot{\sigma}}\,\delta\dot{\sigma}\,\mathrm{d}S\,\mathrm{d}t$, where $\delta\ln\eta = \delta\eta/\eta$ denotes relative viscosity variations in solid regions $M_S$, $\mathrm{d}V$ is the volume element, $\delta\dot{\sigma}$ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface $\partial M$ and for times $[t_0, t_1]$, and $\mathrm{d}S$ is the surface element on $\partial M$. The `viscosity kernel' $K_{\eta}$ determines the linearized sensitivity of $J$ to viscosity perturbations defined with respect to a laterally heterogeneous reference earth model, while the `rate-of-loading kernel' $K_{\dot{\sigma}}$ determines the sensitivity to variations in the time derivative of the surface load. By restricting attention to spherically symmetric viscosity perturbations, we also obtain a `radial viscosity kernel' $\overline{K}_{\eta}$ such that the associated contribution to $\delta J$ can be written $\int_{I_S} \overline{K}_{\eta}\,\delta\ln\eta\,\mathrm{d}r$, where $I_S$ denotes the subset of radii lying in solid regions. In order to illustrate this theory, we describe its numerical implementation in the case of a spherically symmetric earth model using a 1-D spectral element method, and calculate sensitivity kernels for a range of realistic observables.
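    The computational advantage of the adjoint method described above can be illustrated on a toy linear forward problem: the full gradient of a misfit functional follows from one forward and one adjoint solve, and agrees with finite differences, which need one extra forward solve per parameter. The operator and misfit below are invented for illustration only.

    ```python
    # Toy adjoint-gradient demo for a parametrized linear forward problem
    # A(m) u = f with misfit J = 0.5 * ||u - d_obs||^2. Everything here is
    # an illustrative stand-in, not the GIA problem above.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 6
    f = rng.normal(size=n)
    d_obs = rng.normal(size=n)

    def A(m):
        # Diagonally dominant SPD-like operator whose diagonal depends on m.
        return np.diag(2.0 + m) + 0.1 * np.ones((n, n))

    def J(m):
        u = np.linalg.solve(A(m), f)
        return 0.5 * np.sum((u - d_obs) ** 2)

    m0 = rng.uniform(0.5, 1.5, n)
    u0 = np.linalg.solve(A(m0), f)

    # Adjoint solve: A^T lam = -(u - d_obs). Since dA/dm_i = e_i e_i^T,
    # the full gradient is simply dJ/dm_i = lam_i * u_i.
    lam = np.linalg.solve(A(m0).T, -(u0 - d_obs))
    grad_adj = lam * u0

    # Finite-difference check: one extra forward solve per parameter.
    eps = 1e-6
    grad_fd = np.array([(J(m0 + eps * np.eye(n)[i]) - J(m0)) / eps
                        for i in range(n)])
    print("max |adjoint - FD| =", np.max(np.abs(grad_adj - grad_fd)))
    ```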

  13. Construction of non-Markovian coarse-grained models employing the Mori-Zwanzig formalism and iterative Boltzmann inversion

    NASA Astrophysics Data System (ADS)

    Yoshimoto, Yuta; Li, Zhen; Kinefuchi, Ikuya; Karniadakis, George Em

    2017-12-01

    We propose a new coarse-grained (CG) molecular simulation technique based on the Mori-Zwanzig (MZ) formalism along with the iterative Boltzmann inversion (IBI). Non-Markovian dissipative particle dynamics (NMDPD) taking into account memory effects is derived in a pairwise interaction form from the MZ-guided generalized Langevin equation. It is based on the introduction of auxiliary variables that allow for the replacement of a non-Markovian equation with a Markovian one in a higher dimensional space. We demonstrate that the NMDPD model exploiting MZ-guided memory kernels can successfully reproduce the dynamic properties such as the mean square displacement and velocity autocorrelation function of a Lennard-Jones system, as long as the memory kernels are appropriately evaluated based on the Volterra integral equation using the force-velocity and velocity-velocity correlations. Furthermore, we find that the IBI correction of a pair CG potential significantly improves the representation of static properties characterized by a radial distribution function and pressure, while it has little influence on the dynamic processes. Our findings suggest that combining the advantages of both the MZ formalism and IBI leads to an accurate representation of both the static and dynamic properties of microscopic systems that exhibit non-Markovian behavior.

  14. Full Three-Dimensional Tomography Experiments in the Western Pacific Region

    NASA Astrophysics Data System (ADS)

    Zhao, L.; Chen, L.; Jordan, T. H.

    2001-12-01

    Two decades of seismic tomography studies have yielded earth models with three-dimensional (3-D) velocity heterogeneities in the mantle on both global and regional scales. With the continuing improvements in inversion techniques, station coverage and computational facilities, seismic tomography has reached a stage at which higher resolution of the structure can only be achieved reliably by employing accurate descriptions of the relationships between observables and structural parameters, especially in the upper mantle. With this in mind, we have conducted a tomography experiment for the mantle structure beneath the Western Pacific with a full 3-D approach: imaging the 3-D structure using true 3-D Fréchet kernels. In our experiment, we use nearly 20,000 delay times measured at eight discrete frequencies between 10 mHz and 45 mHz from three-component regional S waves, including their multiple reflections from the surface and the CMB. The 3-D Fréchet kernels for these delay times are computed by a normal-mode approach (Zhao, Jordan & Chapman 2000) in which coupling between each pair of modes is accounted for, with the exception of cross coupling between spheroidal and toroidal modes. The algorithm is implemented with MPI on the 192-node (and expanding) dual-processor Linux-PC cluster at the University of Southern California. The 3-D radially anisotropic shear-speed model is obtained through a Gaussian-Bayesian inversion. A full description of the features in our model will be given in a separate presentation (Chen, Zhao & Jordan, this meeting). Here we discuss in detail the issues related to the calculation of a large number of coupled-mode 3-D kernels for the frequency-dependent delay times and their inversion. 
We also examine the efficacy of this full 3-D approach in regional high-resolution tomography studies by comparing the results with those in our previous work in which the 3-D structure was obtained by inverting the same delay-time measurements but using computationally more efficient 2-D Fréchet kernels approximated from 3-D by an asymptotic stationary-phase integration across the great-circle plane.

  15. Non-linear 3-D Born shear waveform tomography in Southeast Asia

    NASA Astrophysics Data System (ADS)

    Panning, Mark P.; Cao, Aimin; Kim, Ahyi; Romanowicz, Barbara A.

    2012-07-01

    Southeast (SE) Asia is a tectonically complex region surrounded by many active source regions, and thus an ideal test bed for developments in seismic tomography. Much recent development in tomography has been based on 3-D sensitivity kernels derived from the first-order Born approximation, but there are potential problems with this approach when applied to waveform data. In this study, we develop a radially anisotropic model of SE Asia using long-period multimode waveforms. We use a theoretical 'cascade' approach, starting with a large-scale Eurasian model developed using 2-D Non-linear Asymptotic Coupling Theory (NACT) sensitivity kernels, and then using a modified Born approximation (nBorn), shown to be more accurate at modelling waveforms, to invert a subset of the data for structure in a subregion (longitude 75°-150° and latitude 0°-45°). In this subregion, the model is parametrized at a spherical spline level 6 (˜200 km). The data set is also inverted using NACT and purely linear 3-D Born kernels. All three final models fit the data well, with just under 80 per cent variance reduction as calculated using the corresponding theory, but the nBorn model shows more detailed structure than the NACT model throughout and has much better resolution at depths greater than 250 km. Based on variance analysis, the purely linear Born kernels do not provide as good a fit to the data due to deviations from linearity for the waveform data set used in this modelling. The nBorn isotropic model shows a stronger fast velocity anomaly beneath the Tibetan Plateau in the depth range of 150-250 km, which disappears at greater depth, consistent with other studies. It also indicates moderate thinning of the high-velocity plate in the middle of Tibet, consistent with a model where Tibet is underplated by Indian lithosphere from the south and Eurasian lithosphere from the north, in contrast to a model with continuous underplating by Indian lithosphere across the entire plateau. 
The nBorn anisotropic model detects negative ξ anomalies suggestive of vertical deformation associated with subducted slabs and convergent zones at the Himalayan front and Tien Shan at depths near 150 km.

  16. Optimization of waste combinations during in-vessel composting of agricultural waste.

    PubMed

    Varma, V Sudharsan; Kalamdhad, Ajay S; Kumar, Bimlesh

    2017-01-01

    In-vessel composting of agricultural waste is a well-described approach for stabilization of compost within a short time period. Although composting studies have shown different combinations of waste materials producing good quality compost, studies of the particular ratios of the waste materials in the mix are still limited. In the present study, composting was conducted with a combination of vegetable waste, cow dung, sawdust and dry leaves using a 550 L rotary drum composter. A radial basis function neural network was applied to simulate the composting process. The model uses physico-chemical parameters of the different waste materials as input variables and three output variables: volatile solids, soluble biochemical oxygen demand and carbon dioxide evolution. For the selected model, the coefficient of determination reached the high value of 0.997. The complicated interaction of agricultural waste components during composting makes it a nonlinear problem, so it is difficult to find the optimal waste combinations for producing quality compost. Optimization of the trained radial basis function model yielded optimal proportions of 62 kg, 17 kg and 9 kg for vegetable waste, cow dung and sawdust, respectively. The results showed that the predictive radial basis function model described for drum composting of agricultural waste was well suited for modelling organic matter degradation and can be successfully applied.

  17. Adaptive prognosis of lithium-ion batteries based on the combination of particle filters and radial basis function neural networks

    NASA Astrophysics Data System (ADS)

    Sbarufatti, Claudio; Corbetta, Matteo; Giglio, Marco; Cadini, Francesco

    2017-03-01

    Lithium-Ion rechargeable batteries are widespread power sources with applications to consumer electronics, electrical vehicles, unmanned aerial and spatial vehicles, etc. The failure to supply the required power levels may lead to severe safety and economical consequences. Thus, in view of the implementation of adequate maintenance strategies, the development of diagnostic and prognostic tools for monitoring the state of health of the batteries and predicting their remaining useful life is becoming a crucial task. Here, we propose a method for predicting the end of discharge of Li-Ion batteries, which stems from the combination of particle filters with radial basis function neural networks. The major innovation lies in the fact that the radial basis function model is adaptively trained on-line, i.e., its parameters are identified in real time by the particle filter as new observations of the battery terminal voltage become available. By doing so, the prognostic algorithm achieves the flexibility needed to provide sound end-of-discharge time predictions as the charge-discharge cycles progress, even in presence of anomalous behaviors due to failures or unforeseen operating conditions. The method is demonstrated with reference to actual Li-Ion battery discharge data contained in the prognostics data repository of the NASA Ames Research Center database.

  18. The solitary wave solution of coupled Klein-Gordon-Zakharov equations via two different numerical methods

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Nikpour, Ahmad

    2013-09-01

    In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Globally Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative of a function at a point is directly approximated by a linear combination of all functional values in the global domain. The principal work in this method is the determination of the weight coefficients. We use two ways of obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method and the latter belongs to the set of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the expression of the function approximation by RBFs into the partial differential equation. The main problem in the GRBFs method is the ill-conditioning of the interpolation matrix. To avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In the numerical examples, we concentrate on the Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. Variable shape parameter strategies (exponential and random) are applied to the IMQ function and the results are compared with those of a constant shape parameter.
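    Global RBF interpolation with the IMQ basis, as used in the GRBF method above, can be sketched as follows; the test function, node count and shape parameter are illustrative choices, not values from the paper.

    ```python
    # Sketch of global RBF interpolation with the inverse multiquadric (IMQ)
    # basis. The test function tanh(3t), the 15 nodes and the shape
    # parameter c are illustrative assumptions.
    import numpy as np

    def imq(r, c=0.5):
        # Inverse multiquadric: phi(r) = 1 / sqrt(r^2 + c^2)
        return 1.0 / np.sqrt(r ** 2 + c ** 2)

    def f(t):
        return np.tanh(3 * t)  # smooth test function

    nodes = np.linspace(-1, 1, 15)

    # Interpolation matrix A_ij = phi(|x_i - x_j|); solve A w = f(nodes).
    A = imq(np.abs(nodes[:, None] - nodes[None, :]))
    w = np.linalg.solve(A, f(nodes))

    # Evaluate the interpolant on a fine grid and check the error.
    t = np.linspace(-1, 1, 400)
    s = imq(np.abs(t[:, None] - nodes[None, :])) @ w
    err = np.max(np.abs(s - f(t)))
    print(f"max interpolation error: {err:.2e}")
    ```

    With larger shape parameters the basis flattens and the interpolation matrix becomes ill-conditioned, which is the motivation for the alternative bases studied in the paper.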

  19. MO-FG-CAMPUS-TeP1-05: Rapid and Efficient 3D Dosimetry for End-To-End Patient-Specific QA of Rotational SBRT Deliveries Using a High-Resolution EPID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Han, B; Xing, L

    2016-06-15

    Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs weremore » decomposed into spatially-invariant basis PSFs with the symmetric central-axis kernel as the primary basis kernel and off-axis representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10{sub 6} voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods. 
Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory-based deliveries. This research was partially supported by Varian.
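
The modified Richardson-Lucy deconvolution at the heart of this approach can be illustrated with a plain (unmodified) Richardson-Lucy iteration in one dimension. This is a generic numpy sketch with a hypothetical Gaussian response kernel, not the authors' spatially-variant implementation:

```python
import numpy as np

def richardson_lucy(measured, psf, iterations=200):
    """Iteratively deconvolve `measured` with `psf` (both 1-D, non-negative)."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(measured, measured.mean())
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = measured / np.maximum(reblurred, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy entrance fluence: two narrow peaks blurred by a Gaussian response kernel.
x = np.arange(101)
fluence = np.zeros(101)
fluence[30] = 1.0
fluence[70] = 0.5
psf = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2)
psf /= psf.sum()
image = np.convolve(fluence, psf, mode="same")   # simulated blurred EPID signal
recovered = richardson_lucy(image, psf)
```

The iteration sharpens the blurred signal back toward the original peak positions; the paper's modification additionally folds in the spatially-invariant basis PSFs, which this sketch omits.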

  20. On the efficiency of treating singularities in triatomic variational vibrational computations. The vibrational states of H3(+) up to dissociation.

    PubMed

    Szidarovszky, Tamás; Császár, Attila G; Czakó, Gábor

    2010-08-01

    Several techniques of varying efficiency are investigated, which treat all singularities present in the triatomic vibrational kinetic energy operator given in orthogonal internal coordinates of the two distances-one angle type. The strategies are based on the use of a direct-product basis built from one-dimensional discrete variable representation (DVR) bases corresponding to the two distances and orthogonal Legendre polynomials, or the corresponding Legendre-DVR basis, corresponding to the angle. The use of Legendre functions ensures the efficient treatment of the angular singularity. Matrix elements of the singular radial operators are calculated employing DVRs using the quadrature approximation as well as special DVRs satisfying the boundary conditions and thus allowing for the use of exact DVR expressions. Potential optimized (PO) radial DVRs, based on one-dimensional Hamiltonians with potentials obtained by fixing or relaxing the two non-active coordinates, are also studied. The numerical calculations employed Hermite-DVR, spherical-oscillator-DVR, and Bessel-DVR bases as the primitive radial functions. A new analytical formula is given for the determination of the matrix elements of the singular radial operator using the Bessel-DVR basis. The usually claimed failure of the quadrature approximation in certain singular integrals is revisited in one and three dimensions. It is shown that as long as no potential optimization is carried out the quadrature approximation works almost as well as the exact DVR expressions. If wave functions with finite amplitude at the boundary are to be computed, the basis sets need to meet the required boundary conditions. The present numerical results also confirm that PO-DVRs should be constructed employing relaxed potentials and PO-DVRs can be useful for optimizing quadrature points for calculations applying large coordinate intervals and describing large-amplitude motions. 
The utility and efficiency of the different algorithms are demonstrated by the computation of converged near-dissociation vibrational energy levels for the H3(+) molecular ion.
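
The claim that the quadrature approximation handles singular integrands whose basis functions vanish at the boundary can be checked on a toy case: for associated Legendre functions with m = 1, matrix elements of the singular angular operator 1/(1 - x^2) have the closed form (l+m)!/(m(l-m)!), and Gauss-Legendre quadrature reproduces them exactly because the singularity cancels. A small numpy sketch, not the authors' code:

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1] (all nodes are interior points).
nodes, weights = np.polynomial.legendre.leggauss(20)

# Associated Legendre functions with m = 1 vanish at x = +/-1, so
# [P_l^1(x)]^2 / (1 - x^2) is a polynomial and the quadrature is exact.
def P11(x):
    return -np.sqrt(1 - x**2)            # P_1^1

def P21(x):
    return -3.0 * x * np.sqrt(1 - x**2)  # P_2^1

# <P_l^1 | 1/(1-x^2) | P_l^1>; closed form (l+m)!/(m(l-m)!) gives 2 and 6.
I1 = np.sum(weights * P11(nodes) ** 2 / (1 - nodes**2))
I2 = np.sum(weights * P21(nodes) ** 2 / (1 - nodes**2))
```

For basis functions with finite amplitude at the boundary the integrand stays singular and this cancellation fails, which is why the abstract stresses bases satisfying the boundary conditions.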

  1. MPACT Subgroup Self-Shielding Efficiency Improvements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane; Liu, Yuxuan; Collins, Benjamin S.

    Recent developments to improve the efficiency of the MOC solvers in MPACT have yielded effective kernels that loop over several energy groups at once, rather than looping over one group at a time. These kernels have produced roughly a 2x speedup on the MOC sweeping time during eigenvalue calculation. However, the self-shielding subgroup calculation, which typically requires substantial solve time, had not been reevaluated to take advantage of these new kernels. The improvements covered in this report start by integrating the multigroup kernel concepts into the subgroup calculation, which are then used as the basis for further extensions. The next improvement that is covered is what is currently being termed “Lumped Parameter MOC”. Because the subgroup calculation is a purely fixed-source problem and multiple sweeps are performed only to update the boundary angular fluxes, the sweep procedure can be condensed to allow for the instantaneous propagation of the flux across a spatial domain, without the need to sweep along all segments in a ray. Once the boundary angular fluxes are considered to be converged, an additional sweep that tallies the scalar flux is completed. The last improvement that is investigated is the possible reduction of the number of azimuthal angles per octant in the shielding sweep. Typically 16 azimuthal angles per octant are used for self-shielding and eigenvalue calculations, but it is possible that the self-shielding sweeps are less sensitive to the number of angles than the full eigenvalue calculation.

  2. Biochemical and molecular characterization of Avena indolines and their role in kernel texture.

    PubMed

    Gazza, Laura; Taddei, Federica; Conti, Salvatore; Gazzelloni, Gloria; Muccilli, Vera; Janni, Michela; D'Ovidio, Renato; Alfieri, Michela; Redaelli, Rita; Pogna, Norberto E

    2015-02-01

    Among cereals, Avena sativa is characterized by an extremely soft endosperm texture, which leads to some negative agronomic and technological traits. On the basis of the well-known softening effect of puroindolines in wheat kernel texture, in this study, indolines and their encoding genes are investigated in Avena species at different ploidy levels. Three novel 14 kDa proteins, showing a central hydrophobic domain with four tryptophan residues and here named vromindoline (VIN)-1, 2 and 3, were identified. Each VIN protein in diploid oat species was found to be synthesized by a single Vin gene whereas, in hexaploid A. sativa, three Vin-1, three Vin-2 and two Vin-3 genes coding for VIN-1, VIN-2 and VIN-3, respectively, were described and assigned to the A, C or D genomes based on similarity to their counterparts in diploid species. Expression of oat vromindoline transgenes in the extra-hard durum wheat led to accumulation of vromindolines in the endosperm and caused an approximately 50% reduction in grain hardness, suggesting a central role for vromindolines in causing the extra-soft texture of oat grain. Further, hexaploid oats showed three orthologous genes coding for avenoindolines A and B, with five or three tryptophan residues, respectively, but very low amounts of avenoindolines were found in mature kernels. The present results identify a novel protein family affecting cereal kernel texture and will help further elucidate the phylogenetic evolution of the Avena genus.

  3. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-08-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the user to select an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
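
The kernel-based susceptibility estimate described above can be sketched in a few lines of numpy: a Gaussian kernel evaluated at each past vent location, averaged into a PDF, with two datasets combined by a weighted summation. Vent coordinates, bandwidths, and weights below are hypothetical, not taken from QVAST:

```python
import numpy as np

def gaussian_kde_2d(vents, grid_xy, bandwidth):
    """Spatial PDF of future vent opening from past vent locations (n x 2)."""
    d2 = ((grid_xy[:, None, :] - vents[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-0.5 * d2 / bandwidth**2) / (2.0 * np.pi * bandwidth**2)
    return k.mean(axis=1)   # average of per-vent kernels -> integrates to 1

rng = np.random.default_rng(0)
vents = rng.normal(0.0, 1.0, size=(40, 2))   # hypothetical vent catalogue (km)
xs = np.linspace(-5.0, 5.0, 81)
gx, gy = np.meshgrid(xs, xs)
grid = np.column_stack([gx.ravel(), gy.ravel()])

pdf = gaussian_kde_2d(vents, grid, bandwidth=0.8)
# A second dataset (e.g. a different structural layer) enters via a weighted sum.
pdf_total = 0.7 * pdf + 0.3 * gaussian_kde_2d(vents[:10], grid, bandwidth=1.2)
```

Because each weighted component integrates to one, the combined map remains a valid PDF, which is what allows it to feed a non-homogeneous Poisson model.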

  4. Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.

    PubMed

    Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung

    2010-11-01

    Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: They can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transfer the continuous surface reconstruction problem into high-dimensional space, where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which is able to produce results better than traditional techniques. In general, an important advantage of our kernel-based method is that the method does not suffer discretization and finite approximation, both of which lead to surface distortion, which is typical of Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstruction that preserves salient and sharp features. The source code and executable of the system are available for downloading.
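
A minimal sketch of the kernel idea, assuming a Gaussian kernel centred on the data points and a small regularizer for numerical stability (the paper's exact formulation and parameters may differ):

```python
import numpy as np

def fit_gaussian_kernel_surface(pts, z, sigma=0.3, reg=1e-6):
    """Represent the surface as a sum of Gaussian kernels centred on the
    data points and solve the closed-form (regularized) linear system."""
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-0.5 * d2 / sigma**2)
    return np.linalg.solve(K + reg * np.eye(len(z)), z)

def eval_surface(query, pts, w, sigma=0.3):
    d2 = ((query[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * d2 / sigma**2) @ w

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(60, 2))              # sparse sample positions
z = np.sin(2 * pts[:, 0]) * np.cos(2 * pts[:, 1])   # heights at those positions
w = fit_gaussian_kernel_surface(pts, z)
z_hat = eval_surface(pts, pts, w)                   # reconstruction at the samples
```

The closed-form solve is the point: no discrete integrability constraint is imposed, and the continuous surface can then be evaluated at any query location.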

  5. New measurements of radial velocities in clusters of galaxies. II

    NASA Astrophysics Data System (ADS)

    Proust, D.; Mazure, A.; Sodre, L.; Capelato, H.; Lund, G.

    1988-03-01

    Heliocentric radial velocities are determined for 100 galaxies in five clusters, on the basis of 380-518-nm observations obtained using a CCD detector coupled by optical fibers to the OCTOPUS multiobject spectrograph at the Cassegrain focus of the 3.6-m telescope at ESO La Silla. The data-reduction procedures and error estimates are discussed, and the results are presented in tables and graphs and briefly characterized.

  6. Computational Modeling Basis in the Photostress Recovery Model (PREMO)

    DTIC Science & Technology

    2014-09-01

    classes of filters, for radial frequency selectivity and for orientation selectivity. Our current implementation accounts for the radial frequency...glare function and its attribution to the components of ocular scatter. Chairman’s Report CIE TC 1-18, Commission de l’Eclairage. 14. Watson, A...radiometric to photometric units to account for the differential spectral sensitivity of the eye. The spectral luminosity function for photopic vision is

  7. Proximate analysis of five wild fruits of Mozambique.

    PubMed

    Magaia, Telma; Uamusse, Amália; Sjöholm, Ingegerd; Skog, Kerstin

    2013-01-01

    Mozambique is rich in wild fruit trees, most of which produce fleshy fruits commonly consumed in rural communities, especially during dry seasons. However, information on their content of macronutrients is scarce. Five wild fruit species (Adansonia digitata, Landolphia kirkii, Sclerocarya birrea, Salacia kraussii, and Vangueria infausta) from different districts in Mozambique were selected for the study. The contents of dry matter, fat, protein, ash, sugars, pH, and titratable acidity were determined in the fruit pulps. Also kernels of A. digitata and S. birrea were included in the study. The protein content in the pulp was below 5 g/100 g of dry matter, but a daily intake of 100 g fresh wild fruits would provide up to 11% of the recommended daily intake for children from 4 to 8 years old. The sugar content varied between 2.3% and 14.4% fresh weight. The pH was below 3, except for Salacia kraussii, for which it was slightly below 7. Kernels of A. digitata contained, on average, 39.2% protein and 38.0% fat, and S. birrea kernels 32.6% protein and 60.7% fat. The collection of nutritional information may serve as a basis for increased consumption and utilization.

  9. Body-wave traveltime and amplitude shifts from asymptotic travelling wave coupling

    USGS Publications Warehouse

    Pollitz, F.

    2006-01-01

    We explore the sensitivity of finite-frequency body-wave traveltimes and amplitudes to perturbations in 3-D seismic velocity structure relative to a spherically symmetric model. Using the approach of coupled travelling wave theory, we consider the effect of a structural perturbation on an isolated portion of the seismogram. By convolving the spectrum of the differential seismogram with the spectrum of a narrow window taper, and using a Taylor's series expansion for wavenumber as a function of frequency on a mode dispersion branch, we derive semi-analytic expressions for the sensitivity kernels. Far-field effects of wave interactions with the free surface or internal discontinuities are implicitly included, as are wave conversions upon scattering. The kernels may be computed rapidly for the purpose of structural inversions. We give examples of traveltime sensitivity kernels for regional wave propagation at 1 Hz. For the direct SV wave in a simple crustal velocity model, they are generally complicated because of interfering waves generated by interactions with the free surface and the Mohorovičić discontinuity. A large part of the interference effects may be eliminated by restricting the travelling wave basis set to those waves within a certain range of horizontal phase velocity. © Journal compilation © 2006 RAS.

  10. Georeferencing CAMS data: Polynomial rectification and beyond

    NASA Astrophysics Data System (ADS)

    Yang, Xinghe

    The Calibrated Airborne Multispectral Scanner (CAMS) is a sensor used in the commercial remote sensing program at NASA Stennis Space Center. In geographic applications of the CAMS data, accurate geometric rectification is essential for the analysis of the remotely sensed data and for the integration of the data into Geographic Information Systems (GIS). The commonly used rectification techniques such as the polynomial transformation and ortho rectification have been very successful in the field of remote sensing and GIS for most remote sensing data such as Landsat imagery, SPOT imagery and aerial photos. However, due to the geometric nature of the airborne line scanner which has high spatial frequency distortions, the polynomial model and the ortho rectification technique in current commercial software packages such as Erdas Imagine are not adequate for obtaining sufficient geometric accuracy. In this research, the geometric nature, especially the major distortions, of the CAMS data has been described. An analytical step-by-step geometric preprocessing has been utilized to deal with the potential high frequency distortions of the CAMS data. A generic sensor-independent photogrammetric model has been developed for the ortho-rectification of the CAMS data. Three generalized kernel classes and directional elliptical basis have been formulated into a rectification model of summation of multisurface functions, which is a significant extension to the traditional radial basis functions. The preprocessing mechanism has been fully incorporated into the polynomial, the triangle-based finite element analysis as well as the summation of multisurface functions. While the multisurface functions and the finite element analysis have the characteristics of localization, piecewise logic has been applied to the polynomial and photogrammetric methods, which can produce significant accuracy improvement over the global approach. 
A software module has been implemented with full integration of data preprocessing and rectification techniques under the Erdas Imagine development environment. The final root mean square (RMS) errors for the test CAMS data are about two pixels, which is compatible with the random RMS errors present in the reference map coordinates.
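
The "summation of multisurface functions" extends traditional radial basis functions for coordinate rectification. A minimal sketch with multiquadric basis functions plus an affine part, fitted by least squares to hypothetical control points (not the dissertation's full directional-elliptical model):

```python
import numpy as np

def fit_rbf_mapping(src, dst, c=0.1):
    """Rectification as an affine part plus a summation of multiquadric
    surface functions centred on the control points, fitted by least squares."""
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(axis=2)
    Phi = np.sqrt(d2 + c**2)                             # multiquadric basis
    A = np.hstack([Phi, np.ones((len(src), 1)), src])    # RBF + affine terms
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coef

def apply_mapping(query, ctrl, coef, c=0.1):
    d2 = ((query[:, None, :] - ctrl[None, :, :]) ** 2).sum(axis=2)
    A = np.hstack([np.sqrt(d2 + c**2), np.ones((len(query), 1)), query])
    return A @ coef

rng = np.random.default_rng(2)
img = rng.uniform(0, 1, size=(30, 2))   # hypothetical image-space control points
map_xy = img @ np.array([[1.2, 0.1], [-0.1, 0.9]]) + np.array([10.0, 20.0])
coef = fit_rbf_mapping(img, map_xy)
pred = apply_mapping(img, img, coef)    # should reproduce the map coordinates
```

With high-frequency distortions the localized basis terms absorb residual warping that a global polynomial cannot represent.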

  11. Structured functional additive regression in reproducing kernel Hilbert spaces

    PubMed Central

    Zhu, Hongxiao; Yao, Fang; Zhang, Hao Helen

    2013-01-01

    Summary Functional additive models (FAMs) provide a flexible yet simple framework for regressions involving functional predictors. The utilization of a data-driven basis in an additive rather than linear structure naturally extends the classical functional linear model. However, the critical issue of selecting the nonlinear additive components has been less studied. In this work, we propose a new regularization framework for structure estimation in the context of Reproducing Kernel Hilbert Spaces. The proposed approach takes advantage of the functional principal components, which greatly facilitates the implementation and the theoretical analysis. The selection and estimation are achieved by penalized least squares using a penalty which encourages the sparse structure of the additive components. Theoretical properties such as the rate of convergence are investigated. The empirical performance is demonstrated through simulation studies and a real data application. PMID:25013362

  12. Molecular genetic basis of pod corn (Tunicate maize)

    PubMed Central

    Wingen, Luzie U.; Münster, Thomas; Faigl, Wolfram; Deleu, Wim; Sommer, Hans; Saedler, Heinz; Theißen, Günter

    2012-01-01

    Pod corn is a classic morphological mutant of maize in which the mature kernels of the cob are covered by glumes, in contrast to generally grown maize varieties in which kernels are naked. Pod corn, known since pre-Columbian times, is the result of a dominant gain-of-function mutation at the Tunicate (Tu) locus. Some classic articles of 20th century maize genetics reported that the mutant Tu locus is complex, but molecular details remained elusive. Here, we show that pod corn is caused by a cis-regulatory mutation and duplication of the ZMM19 MADS-box gene. Although the WT locus contains a single-copy gene that is expressed in vegetative organs only, mutation and duplication of ZMM19 in Tu lead to ectopic expression of the gene in the inflorescences, thus conferring vegetative traits to reproductive organs. PMID:22517751

  13. A complex valued radial basis function network for equalization of fast time varying channels.

    PubMed

    Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R

    1999-01-01

    This paper presents a complex valued radial basis function (RBF) network for equalization of fast time varying channels. A new method for calculating the centers of the RBF network is given. The method allows fixing the number of RBF centers even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time varying channels using a Rayleigh fading channel model to compare the performance of our RBF equalizer with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.
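
A much-simplified sketch of the idea, assuming a fixed complex channel tap and noise-free BPSK symbols (the paper treats fast time-varying Rayleigh fading and adapts its centers): the distinct noise-free channel states serve as RBF centers and the complex output weights are solved by least squares rather than adaptively.

```python
import numpy as np

rng = np.random.default_rng(3)
symbols = rng.choice(np.array([-1.0 + 0j, 1.0 + 0j]), size=500)  # BPSK source
c = 0.4 + 0.3j                                 # hypothetical complex channel tap
received = symbols + c * np.concatenate([[0.0 + 0j], symbols[:-1]])

# Equalizer input: pairs of consecutive received samples; target: current symbol.
X = np.column_stack([received[1:], received[:-1]])
y = symbols[1:]

# Complex RBF layer: Gaussian of the distance in the complex plane to the
# noise-free channel states, followed by complex weights via least squares.
centers = np.array([[a + c * b, b + c * d]
                    for a in (-1, 1) for b in (-1, 1) for d in (-1, 1)])
d2 = (np.abs(X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
H = np.exp(-d2 / 2.0).astype(complex)
w, *_ = np.linalg.lstsq(H, y, rcond=None)
decisions = np.sign((H @ w).real)
accuracy = np.mean(decisions == y.real)
```

Fixing the centers at the channel states is what lets the equalizer keep a small, constant number of centers independent of the equalizer order, echoing the paper's design goal.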

  14. A two-layered classifier based on the radial basis function for the screening of thalassaemia.

    PubMed

    Masala, G L; Golosio, B; Cutzu, R; Pola, R

    2013-11-01

    The thalassaemias are blood disorders with hereditary transmission. Their distribution is global, with particular incidence in areas affected by malaria. Their diagnosis is mainly based on haematologic and genetic analyses. The aim of this study was to differentiate between persons with the thalassaemia trait and normal subjects by inspecting characteristics of haemochromocytometric data. The paper proposes an original method that is useful in screening activity for thalassaemia classification. A complete working system with a friendly graphical user interface is presented. A unique feature of the presented work is the adoption of a two-layered classification system based on radial basis functions, which improves the performance of the system.

  15. Maize w3 disrupts homogentisate solanesyl transferase (ZmHst) and reveals a plastoquinone-9 independent path for phytoene desaturation and tocopherol accumulation in kernels

    USDA-ARS?s Scientific Manuscript database

    Maize white seedling 3 (w3) has been used to study carotenoid deficiency for almost 100 years, although its genetic basis remained unknown. We show here that the w3 phenotype is caused by disruption of homogentisate solanesyl transferase (HST), which catalyzes the first committed step in plastoquinone-9...

  16. Designing an artificial neural network using radial basis function to model exergetic efficiency of nanofluids in mini double pipe heat exchanger

    NASA Astrophysics Data System (ADS)

    Ghasemi, Nahid; Aghayari, Reza; Maddah, Heydar

    2018-06-01

    The present study aims at predicting and optimizing the exergetic efficiency of TiO2-Al2O3/water nanofluid at different Reynolds numbers, volume fractions and twisted ratios using Artificial Neural Networks (ANN) and experimental data. Central Composite Design (CCD) and cascade Radial Basis Function (RBF) were used to display the significant levels of the analyzed factors on the exergetic efficiency. The size of the TiO2-Al2O3/water nanocomposite was 20-70 nm. The parameters of the ANN model were adapted by a radial basis function (RBF) training algorithm with a wide range of experimental data. Total mean square error and correlation coefficient were used to evaluate the results; the best result was obtained from a double-layer perceptron neural network with 30 neurons, for which the total Mean Square Error (MSE) and correlation coefficient (R2) were equal to 0.002 and 0.999, respectively. This indicated successful prediction by the network. Moreover, the proposed equation for predicting exergetic efficiency was extremely successful. According to the optimal curves, the optimum design parameters of a double pipe heat exchanger with inner twisted tape and nanofluid under the constraint of an exergetic efficiency of 0.937 are found to be a Reynolds number of 2500, a twisted ratio of 2.5 and a volume fraction (v/v%) of 0.05.

  17. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in cladding, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code as well as with a computational fluid dynamics (CFD) tool have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  18. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best an accuracy similar to that of linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. 
This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
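
The contrast between a linear (GBLUP-like) kernel and a non-linear RBF kernel can be sketched with kernel ridge regression on simulated genotypes. All sizes, effects, and variances below are hypothetical, not the layer-line data:

```python
import numpy as np

def kernel_ridge_predict(K_tr, K_cross, y_tr, lam=1.0):
    # alpha = (K + lam*I)^(-1) (y - mean); prediction adds the mean back.
    alpha = np.linalg.solve(K_tr + lam * np.eye(len(y_tr)), y_tr - y_tr.mean())
    return K_cross @ alpha + y_tr.mean()

rng = np.random.default_rng(4)
M = rng.integers(0, 3, size=(150, 100)).astype(float)  # toy SNP genotypes 0/1/2
beta = rng.normal(0.0, 0.1, size=100)                  # hypothetical marker effects
y = M @ beta + rng.normal(0.0, 0.5, size=150)          # phenotype = signal + noise

train, test = np.arange(120), np.arange(120, 150)
Z = M - M.mean(axis=0)                                 # centred genotype matrix

K_lin = Z @ Z.T / Z.shape[1]                           # linear (GBLUP-like) kernel
d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
K_rbf = np.exp(-d2 / d2.mean())                        # non-linear RBF kernel

accuracies = []
for K in (K_lin, K_rbf):
    yhat = kernel_ridge_predict(K[np.ix_(train, train)],
                                K[np.ix_(test, train)], y[train])
    accuracies.append(np.corrcoef(yhat, y[test])[0, 1])
```

On purely additive simulated data like this, the two kernels give similar accuracies, mirroring the study's finding that linear and RBF models performed very similarly.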

  19. Prediction of heterotrimeric protein complexes by two-phase learning using neighboring kernels

    PubMed Central

    2014-01-01

    Background Protein complexes play important roles in biological systems such as gene regulatory networks and metabolic pathways. Most methods for predicting protein complexes try to find protein complexes with more than three subunits. However, it is known that protein complexes with smaller sizes account for a large part of all complexes in several species. In our previous work, we developed a method with several feature space mappings and the domain composition kernel for the prediction of heterodimeric protein complexes, which outperforms existing methods. Results We propose methods for the prediction of heterotrimeric protein complexes by extending techniques from the previous work, on the basis of the idea that most heterotrimeric protein complexes are not likely to share the same protein with each other. We make use of the discriminant function in support vector machines (SVMs), and design novel feature space mappings for the second phase. As the second classifier, we examine SVMs and relevance vector machines (RVMs). We perform 10-fold cross-validation computational experiments. The results suggest that our proposed two-phase methods and SVM with the extended features outperform the existing method NWE, which was reported to outperform other existing methods such as MCL, MCODE, DPClus, CMC, COACH, RRW, and PPSampler for prediction of heterotrimeric protein complexes. Conclusions We propose two-phase prediction methods with the extended features, the domain composition kernel, SVMs and RVMs. The two-phase method with the extended features and the domain composition kernel using SVM as the second classifier is particularly useful for prediction of heterotrimeric protein complexes. PMID:24564744

  20. QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility

    NASA Astrophysics Data System (ADS)

    Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.

    2013-11-01

    One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).

  1. Studies of fatty acid composition, physicochemical and thermal properties, and crystallization behavior of mango kernel fats from various Thai varieties.

    PubMed

    Sonwai, Sopark; Ponprachanuvut, Punnee

    2014-01-01

    Mango kernel fat (MKF) has received attention in recent years due to the resemblance between its characteristics and those of cocoa butter (CB). In this work, fatty acid (FA) composition, physicochemical and thermal properties and crystallization behavior of MKFs obtained from four varieties of Thai mangoes: Keaw-Morakot (KM), Keaw-Sawoey (KS), Nam-Dokmai (ND) and Aok-Rong (AR), were characterized. The fat content of the mango kernels was 6.40, 5.78, 5.73 and 7.74% (dry basis) for KM, KS, ND and AR, respectively. The analysis of FA composition revealed that all four cultivars had oleic and stearic acids as the main FA components, with ND and AR exhibiting the highest and lowest stearic acid content, respectively. ND had the highest slip melting point and solid fat content (SFC) followed by KS, KM and AR. All fat samples exhibited high SFC at 20°C and below. They melted slowly as the temperature increased and became completely liquid as the temperature approached 35°C. During static isothermal crystallization at 20°C, ND displayed the highest Avrami rate constant k followed by KS, KM and AR, indicating that the crystallization was fastest for ND and slowest for AR. The Avrami exponent n of all samples ranged from 0.89 to 1.73. The X-ray diffraction analysis showed that all MKFs crystallized into a mixture of pseudo-β', β', sub-β and β structures with β' being the predominant polymorph. Finally, the crystals of the kernel fats from all mango varieties exhibited spherulitic morphology.
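
The Avrami parameters reported above (rate constant k and exponent n) are conventionally obtained by linearizing X(t) = 1 - exp(-k t^n). A small numpy sketch with synthetic, noise-free crystallinity data; the k and n values are hypothetical, not the measured ones:

```python
import numpy as np

def fit_avrami(t, X):
    """Fit the Avrami model X(t) = 1 - exp(-k * t**n) by linearizing:
    ln(-ln(1 - X)) = ln k + n * ln t, then ordinary least squares."""
    A = np.column_stack([np.ones_like(t), np.log(t)])
    b = np.log(-np.log(1.0 - X))
    (ln_k, n), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.exp(ln_k), n

t = np.linspace(0.5, 30.0, 40)            # time points (e.g. minutes)
k_true, n_true = 0.05, 1.2                # hypothetical rate constant and exponent
X = 1.0 - np.exp(-k_true * t**n_true)     # relative crystallinity, noise-free
k_fit, n_fit = fit_avrami(t, X)
```

With real DSC or NMR crystallinity data the fit is noisy, and exponents near 1 (as reported, 0.89 to 1.73) indicate rod-like or plate-like growth with sporadic nucleation.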

  2. Big geo data surface approximation using radial basis functions: A comparative study

    NASA Astrophysics Data System (ADS)

    Majdisova, Zuzana; Skala, Vaclav

    2017-12-01

    Approximation of scattered data arises in many engineering problems. The Radial Basis Function (RBF) approximation is appropriate for big scattered datasets in n-dimensional space. It is a non-separable approximation, as it is based on the distance between two points. This method leads to the solution of an overdetermined linear system of equations. In this paper the RBF approximation methods are briefly described, a new approach to the RBF approximation of big datasets is presented, and different Compactly Supported RBFs (CS-RBFs) are compared with respect to the accuracy of the computation. The proposed approach exploits the symmetry of the matrix, partitions the matrix into blocks, and uses data structures for storage of the sparse matrix. The experiments are performed for synthetic and real datasets.
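    A least-squares RBF approximation of the kind described can be sketched in a few lines. The example below uses the Wendland C2 compactly supported RBF in 1-D with more data points than centers, which yields the overdetermined system mentioned above; the center layout and support radius are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def wendland_c2(r, support):
    """Wendland C2 CS-RBF: (1 - q)^4 (4q + 1) for q = r/support < 1, else 0."""
    q = r / support
    return np.where(q < 1.0, (1.0 - q) ** 4 * (4.0 * q + 1.0), 0.0)

def rbf_lsq_approx(points, values, centers, support):
    """Fewer centers than data points -> overdetermined system A c = f,
    solved here in the least-squares sense."""
    A = wendland_c2(np.abs(points[:, None] - centers[None, :]), support)
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return lambda x: wendland_c2(
        np.abs(x[:, None] - centers[None, :]), support) @ coef
```

    With 200 samples and 25 centers the collocation matrix is 200 × 25; a full implementation would add the block partitioning and sparse storage the abstract describes, since the CS-RBF makes the matrix sparse.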

  3. Neural network approach for the calculation of potential coefficients in quantum mechanics

    NASA Astrophysics Data System (ADS)

    Ossandón, Sebastián; Reyes, Camilo; Cumsille, Patricio; Reyes, Carlos M.

    2017-05-01

    A numerical method based on artificial neural networks is used to solve the inverse Schrödinger equation for a multi-parameter class of potentials. First, the finite element method was used to solve the direct problem repeatedly for different parametrizations of the chosen potential function. Then, using the computed eigenvalues as a training set for a direct radial basis neural network, a map to new eigenvalues was obtained. This relationship was later inverted and refined by training an inverse radial basis neural network, allowing the calculation of the unknown parameters and therefore an estimate of the potential function. Three numerical examples are presented in order to prove the effectiveness of the method. The results show that the proposed method has the advantage of using fewer computational resources without a significant loss of accuracy.
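    The direct and inverse maps in this approach rely on radial basis networks fitted to sampled data. Below is a minimal sketch of such a network, with Gaussian hidden units at fixed centers and output weights obtained by least squares; the class name and training scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class RBFNetwork:
    """Minimal Gaussian-RBF network: a hidden layer of fixed centers and
    linear output weights fitted by least squares."""

    def __init__(self, centers, width):
        self.centers = centers  # (m, d_in) hidden-unit centers
        self.width = width      # shared Gaussian width

    def _phi(self, X):
        # Hidden-layer activations: Gaussian of distance to each center
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, Y):
        self.W, *_ = np.linalg.lstsq(self._phi(X), Y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.W
```

    In the paper's setting the direct network would map potential parameters to eigenvalues and the inverse network eigenvalues back to parameters.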

  4. Variable Neural Adaptive Robust Control: A Switched System Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Jianming; Hu, Jianghai; Zak, Stanislaw H.

    2015-05-01

    Variable neural adaptive robust control strategies are proposed for the output tracking control of a class of multi-input multi-output uncertain systems. The controllers incorporate a variable-structure radial basis function (RBF) network as the self-organizing approximator for unknown system dynamics. The variable-structure RBF network solves the problem of structure determination associated with fixed-structure RBF networks. It can determine the network structure on-line dynamically by adding or removing radial basis functions according to the tracking performance. The structure variation is taken into account in the stability analysis of the closed-loop system using a switched system approach with the aid of the piecewise quadratic Lyapunov function. The performance of the proposed variable neural adaptive robust controllers is illustrated with simulations.

  5. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  6. Computer Security Models

    DTIC Science & Technology

    1984-09-01

    "Verification Technique for a Class of Security Kernels," International Symposium on Programming, Lecture Notes in Computer Science 137, Springer-Verlag, New York. September 1984, MTR-9531, J. K. Millen, C. M. Cerniglia, Computer Security Models. Contract sponsor: OUSDRE/C3I & ESD/ALEE. ABSTRACT: The purpose of this report is to provide a basis for evaluating security models in the context of secure computer system development.

  7. A support vector machine-based method to identify mild cognitive impairment with multi-level characteristics of magnetic resonance imaging.

    PubMed

    Long, Zhuqing; Jing, Bin; Yan, Huagang; Dong, Jianxin; Liu, Han; Mo, Xiao; Han, Ying; Li, Haiyun

    2016-09-07

    Mild cognitive impairment (MCI) represents a transitional state between normal aging and Alzheimer's disease (AD). Non-invasive diagnostic methods are desirable to identify MCI for early therapeutic interventions. In this study, we proposed a support vector machine (SVM)-based method to discriminate between MCI patients and normal controls (NCs) using multi-level characteristics of magnetic resonance imaging (MRI). This method adopted a radial basis function (RBF) as the kernel function, and a grid search method to optimize the two parameters of the SVM. The calculated characteristics, i.e., the Hurst exponent (HE), amplitude of low-frequency fluctuations (ALFF), regional homogeneity (ReHo) and gray matter density (GMD), were adopted as the classification features. A leave-one-out cross-validation (LOOCV) was used to evaluate the classification performance of the method. Applying the proposed method to the experimental data from 29 MCI patients and 33 healthy subjects, we achieved a classification accuracy of up to 96.77%, with a sensitivity of 93.10% and a specificity of 100%, and the area under the curve (AUC) reached 0.97. Furthermore, the most discriminative features for classification were found to predominantly involve default-mode regions, such as the hippocampus (HIP), parahippocampal gyrus (PHG), posterior cingulate gyrus (PCG) and middle frontal gyrus (MFG), and subcortical regions such as the lentiform nucleus (LN) and amygdala (AMYG). Therefore, our method is promising in distinguishing MCI patients from NCs and may be useful for the diagnosis of MCI.
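    The parameter search described above (an RBF-kernel SVM with a grid search over its two parameters, C and gamma, scored by leave-one-out cross-validation) can be sketched with scikit-learn. The synthetic features and grid values below are placeholders for illustration; the real study used HE, ALFF, ReHo and GMD values, and its grid is not given in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, LeaveOneOut

rng = np.random.default_rng(0)
# Placeholder 4-feature matrix for two well-separated groups
X = np.vstack([rng.normal(0.0, 1.0, (30, 4)),
               rng.normal(2.0, 1.0, (30, 4))])
y = np.array([0] * 30 + [1] * 30)

# Grid over the two SVM parameters, scored by leave-one-out CV
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=LeaveOneOut())
search.fit(X, y)
best = search.best_params_
```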

  8. Estimating local scaling properties for the classification of interstitial lung disease patterns

    NASA Astrophysics Data System (ADS)

    Huber, Markus B.; Nagarajan, Mahesh B.; Leinsinger, Gerda; Ray, Lawrence A.; Wismueller, Axel

    2011-03-01

    Local scaling properties of texture regions were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative of the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung kernel reconstructed images was acquired from HRCT chest exams. 241 regions of interest of both healthy and pathological (89) lung tissue were identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and the estimation of local scaling properties with the Scaling Index Method (SIM). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test with Bonferroni correction was used to compare the accuracy distributions. The best classification results were obtained by the set of SIM features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers, with the highest accuracies (94.1% and 93.7% for the k-NN and RBFN classifiers, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced texture features using local scaling properties can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.

  9. Automatic classification of written descriptions by healthy adults: An overview of the application of natural language processing and machine learning techniques to clinical discourse analysis

    PubMed Central

    Toledo, Cíntia Matsuda; Cunha, Andre; Scarton, Carolina; Aluísio, Sandra

    2014-01-01

    Discourse production is an important aspect in the evaluation of brain-injured individuals. We believe that studies comparing the performance of brain-injured subjects with that of healthy controls must use groups with compatible education. A pioneering application of machine learning methods using Brazilian Portuguese for clinical purposes is described, highlighting education as an important variable in the Brazilian scenario. Objective The aims were to describe how to: (i) develop machine learning classifiers using features generated by natural language processing tools to distinguish descriptions produced by healthy individuals into classes based on their years of education; and (ii) automatically identify the features that best distinguish the groups. Methods The approach proposed here extracts linguistic features automatically from the written descriptions with the aid of two Natural Language Processing tools: Coh-Metrix-Port and AIC. It also includes nine task-specific features (three new ones, two extracted manually), besides description time; type of scene described (simple or complex); presentation order (which type of picture was described first); and age. In this study, the descriptions by 144 of the subjects studied in Toledo18, a study that included 200 healthy Brazilians of both genders, were used. Results and Conclusion A Support Vector Machine (SVM) with a radial basis function (RBF) kernel is the most recommended approach for the binary classification of our data, classifying three of the four initial classes. CfsSubsetEval (CFS) is a strong candidate to replace manual feature selection methods. PMID:29213908

  10. Image classification of human carcinoma cells using complex wavelet-based covariance descriptors.

    PubMed

    Keskin, Furkan; Suhre, Alexander; Kose, Kivanc; Ersahin, Tulin; Cetin, A Enis; Cetin-Atalay, Rengul

    2013-01-01

    Cancer cell lines are widely used for research purposes in laboratories all over the world. Computer-assisted classification of cancer cells can alleviate the burden of manual labeling and help cancer research. In this paper, we present a novel computerized method for cancer cell line image classification. The aim is to automatically classify 14 different classes of cell lines including 7 classes of breast and 7 classes of liver cancer cells. Microscopic images containing irregular carcinoma cell patterns are represented by subwindows which correspond to foreground pixels. For each subwindow, a covariance descriptor utilizing the dual-tree complex wavelet transform (DT-CWT) coefficients and several morphological attributes are computed. Directionally selective DT-CWT feature parameters are preferred primarily because of their ability to characterize edges at multiple orientations which is the characteristic feature of carcinoma cell line images. A Support Vector Machine (SVM) classifier with radial basis function (RBF) kernel is employed for final classification. Over a dataset of 840 images, we achieve an accuracy above 98%, which outperforms the classical covariance-based methods. The proposed system can be used as a reliable decision maker for laboratory studies. Our tool provides an automated, time- and cost-efficient analysis of cancer cell morphology to classify different cancer cell lines using image-processing techniques, which can be used as an alternative to the costly short tandem repeat (STR) analysis. The data set used in this manuscript is available as supplementary material through http://signal.ee.bilkent.edu.tr/cancerCellLineClassificationSampleImages.html.
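    The core idea of a region covariance descriptor is to stack a feature vector per pixel of a subwindow and summarize the region by the covariance matrix of those vectors. The sketch below uses simple stand-in features (intensity, gradients, gradient magnitude) rather than the DT-CWT coefficients and morphological attributes of the paper.

```python
import numpy as np

def covariance_descriptor(window):
    """Region covariance of per-pixel features. The features here
    (intensity, x/y gradients, gradient magnitude) are illustrative
    stand-ins for DT-CWT coefficients."""
    gy, gx = np.gradient(window.astype(float))
    feats = np.stack([window.ravel(),
                      gx.ravel(),
                      gy.ravel(),
                      np.hypot(gx, gy).ravel()], axis=0)
    return np.cov(feats)  # symmetric (4, 4) descriptor per subwindow

rng = np.random.default_rng(1)
win = rng.random((16, 16))  # placeholder subwindow
C = covariance_descriptor(win)
```

    Such descriptors live on the manifold of symmetric positive semi-definite matrices, which is why covariance-based pipelines typically compare them with dedicated metrics before feeding a classifier.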

  11. Image Classification of Human Carcinoma Cells Using Complex Wavelet-Based Covariance Descriptors

    PubMed Central

    Keskin, Furkan; Suhre, Alexander; Kose, Kivanc; Ersahin, Tulin; Cetin, A. Enis; Cetin-Atalay, Rengul

    2013-01-01

    Cancer cell lines are widely used for research purposes in laboratories all over the world. Computer-assisted classification of cancer cells can alleviate the burden of manual labeling and help cancer research. In this paper, we present a novel computerized method for cancer cell line image classification. The aim is to automatically classify 14 different classes of cell lines including 7 classes of breast and 7 classes of liver cancer cells. Microscopic images containing irregular carcinoma cell patterns are represented by subwindows which correspond to foreground pixels. For each subwindow, a covariance descriptor utilizing the dual-tree complex wavelet transform (DT-WT) coefficients and several morphological attributes are computed. Directionally selective DT-WT feature parameters are preferred primarily because of their ability to characterize edges at multiple orientations which is the characteristic feature of carcinoma cell line images. A Support Vector Machine (SVM) classifier with radial basis function (RBF) kernel is employed for final classification. Over a dataset of 840 images, we achieve an accuracy above 98%, which outperforms the classical covariance-based methods. The proposed system can be used as a reliable decision maker for laboratory studies. Our tool provides an automated, time- and cost-efficient analysis of cancer cell morphology to classify different cancer cell lines using image-processing techniques, which can be used as an alternative to the costly short tandem repeat (STR) analysis. The data set used in this manuscript is available as supplementary material through http://signal.ee.bilkent.edu.tr/cancerCellLineClassificationSampleImages.html. PMID:23341908

  12. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, Qiang, E-mail: qd2125@columbia.edu; Yang, Jiang, E-mail: jyanghkbu@gmail.com

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space, so that the main computational challenge is the accurate and fast evaluation of their eigenvalues or Fourier symbols, consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and as solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.

  13. Inhibition of Fusarium Growth and Mycotoxin Production in Culture Medium and in Maize Kernels by Natural Phenolic Acids.

    PubMed

    Ferruz, Elena; Loran, Susana; Herrera, Marta; Gimenez, Isabel; Bervis, Noemi; Barcena, Carmen; Carramiñana, Juan Jose; Juan, Teresa; Herrera, Antonio; Ariño, Agustin

    2016-10-01

    The possible role of natural phenolic compounds in inhibiting fungal growth and toxin production has been of recent interest as an alternative strategy to the use of chemical fungicides for the maintenance of food safety. Fusarium is a worldwide fungal genus mainly associated with cereal crops. The most important Fusarium mycotoxins are trichothecenes, zearalenone, and fumonisins. This study was conducted to evaluate the potential of four natural phenolic acids (caffeic, ferulic, p-coumaric, and chlorogenic) for the control of mycelial growth and mycotoxin production by six toxigenic species of Fusarium. The addition of phenolic acids to corn meal agar had a marked inhibitory effect on the radial growth of all Fusarium species at levels of 2.5 to 10 mM in a dose-response pattern, causing total inhibition (100%) in all species except F. sporotrichioides and F. langsethiae. However, the effects of phenolic acids on mycotoxin production in maize kernels were less evident than the effects on growth. The fungal species differed in their responses to the phenolic acid treatments, and significant reductions in toxin concentrations were observed only for T-2 and HT-2 (90% reduction) and zearalenone (48 to 77% reduction). These results provide data that could be used for developing pre- and postharvest strategies for controlling Fusarium infection and subsequent toxin production in cereal grains.

  14. Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas

    In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.

  15. Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

    DOE PAGES

    Agarwal, Sapan; Quach, Tu -Thach; Parekh, Ojas; ...

    2016-01-06

    In this study, the exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
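    The two crossbar kernels described above have a simple mathematical form: a parallel read computes a vector-matrix product of the input voltages with the conductance matrix (Ohm's law plus current summation along each column), and a parallel write applies a rank-1 outer-product update to the conductances. The idealized numpy sketch below ignores read noise and device nonlinearity.

```python
import numpy as np

def crossbar_read(G, v):
    """Parallel read: each column current is sum_i v[i] * G[i, j],
    i.e. a vector-matrix multiplication in one step."""
    return v @ G

def crossbar_write(G, row_v, col_v, lr=1.0):
    """Parallel write: a rank-1 outer-product update of the conductances."""
    return G + lr * np.outer(row_v, col_v)

G = np.full((3, 3), 0.5)                      # uniform conductances
i_out = crossbar_read(G, np.array([1.0, 0.0, 1.0]))
G2 = crossbar_write(G, np.array([1.0, 0.0, 1.0]),
                    np.array([0.1, 0.2, 0.3]))
```

    In a digital memory architecture each of the N² multiply-accumulates would cost a separate memory access, which is the source of the O(N) energy advantage claimed above.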

  16. Reliability analysis of C-130 turboprop engine components using artificial neural network

    NASA Astrophysics Data System (ADS)

    Qattan, Nizar A.

    In this study, we predict the failure rate of the Lockheed C-130 engine turbine. More than thirty years of local operational field data were used for failure rate prediction and validation. The Weibull regression model and artificial neural network models (feed-forward back-propagation, radial basis function, and multilayer perceptron) are utilized to perform this study. For this purpose, the thesis is divided into five major parts. The first part deals with the Weibull regression model to predict the turbine's general failure rate and the rate of failures that require overhaul maintenance. The second part covers the artificial neural network (ANN) model utilizing the feed-forward back-propagation algorithm as a learning rule. The MATLAB package is used to build and design a code to simulate the given data; the inputs to the neural network are the independent variables, and the outputs are the general failure rate of the turbine and the failures that required overhaul maintenance. In the third part we predict the general failure rate of the turbine and the failures that require overhaul maintenance, using a radial basis neural network model in the MATLAB toolbox. In the fourth part we compare the predictions of the feed-forward back-propagation model with those of the Weibull regression model and the radial basis neural network model. The results show that the failure rate predicted by the feed-forward back-propagation artificial neural network model is in closer agreement with the radial basis neural network model and the actual field data than the failure rate predicted by the Weibull model. By the end of the study, we forecast the general failure rate of the Lockheed C-130 engine turbine, the failures requiring overhaul maintenance, and six categorical failures using a multilayer perceptron (MLP) neural network model in the DTREG commercial software.
The results also give an insight into the reliability of the engine turbine under actual operating conditions, which can be used by aircraft operators for assessing system and component failures and customizing the maintenance programs recommended by the manufacturer.
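    The Weibull part of the comparison above rests on the Weibull hazard (failure rate) h(t) = (β/η)(t/η)^(β−1), where β is the shape and η the scale parameter. A hedged sketch of fitting these parameters to failure times by maximum likelihood via scipy, on synthetic data rather than the C-130 field data:

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_hazard(t, beta, eta):
    """Weibull failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# Synthetic failure times from a known Weibull (beta = 2 -> wear-out regime)
times = weibull_min.rvs(2.0, scale=1000.0, size=500, random_state=0)

# Maximum-likelihood fit with the location fixed at zero
beta_hat, _, eta_hat = weibull_min.fit(times, floc=0)
```

    β > 1 indicates an increasing failure rate (wear-out), β < 1 infant mortality, and β = 1 a constant rate, which is what makes the fitted shape parameter directly interpretable for maintenance planning.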

  17. A generalized geologic map of Mars.

    NASA Technical Reports Server (NTRS)

    Carr, M. H.; Masursky, H.; Saunders, R. S.

    1973-01-01

    A geologic map of Mars has been constructed largely on the basis of photographic evidence. Four classes of units are recognized: (1) primitive cratered terrain, (2) sparsely cratered volcanic eolian plains, (3) circular radially symmetric volcanic constructs such as shield volcanoes, domes, and craters, and (4) tectonic erosional units such as chaotic and channel deposits. Grabens are the main structural features; compressional and strike slip features are almost completely absent. Most grabens are part of a set radial to the main volcanic area, Tharsis.

  18. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    PubMed

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  19. Analysis and Comparison with DNS of a Stochastic Model for the Relative Motion of High-Stokes-Number Particles in Isotropic Turbulence

    NASA Astrophysics Data System (ADS)

    Dhariwal, Rohit; Rani, Sarma; Koch, Donald

    2015-11-01

    In an earlier work, Rani, Dhariwal, and Koch (JFM, Vol. 756, 2014) developed an analytical closure for the diffusion current in the PDF transport equation describing the relative motion of high-Stokes-number particle pairs in isotropic turbulence. In this study, an improved closure was developed for the diffusion coefficient, such that the motion of the particle-pair center of mass is taken into account. Using the earlier and the new analytical closures, Langevin simulations of pair relative motion were performed for four particle Stokes numbers, Stη = 10, 20, 40, 80, and at two Taylor-microscale Reynolds numbers, Reλ = 76, 131. Detailed comparisons of the analytical model predictions with those of DNS were undertaken. It is seen that the pair relative motion statistics obtained from the improved theory show excellent agreement with the DNS statistics. The radial distribution functions (RDFs) and relative velocity PDFs obtained from the improved-closure-based Langevin simulations are found to be in very good agreement with those from DNS. It was found that the RDFs and the relative velocity RMS increased with Reλ for all Stη. The collision kernel also increased strongly with Reλ, since it depends on the RDF and the radial relative velocities.

  20. 7 CFR 810.602 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of flaxseed kernels that are badly ground-damaged, badly weather... instructions. Also, underdeveloped, shriveled, and small pieces of flaxseed kernels removed in properly... recleaning. (c) Heat-damaged kernels. Kernels and pieces of flaxseed kernels that are materially discolored...

  1. Kernel Abortion in Maize 1

    PubMed Central

    Hanft, Jonathan M.; Jones, Robert J.

    1986-01-01

    Kernels cultured in vitro were induced to abort by high temperature (35°C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35°C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth. PMID:16664846

  2. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    PubMed

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.
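    A common concrete instance of the out-of-sample problem is extending a spectral embedding derived from a training kernel matrix to new points. The Nyström-style sketch below is one standard scheme (not the hyper-RKHS regression the paper proposes): eigenvectors of the training kernel matrix define the in-sample embedding, and evaluating the kernel function against the training set extends it to out-of-sample points.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between point sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def nystrom_embed(X_train, X_new, gamma=1.0, dim=2):
    """Top-`dim` spectral embedding of the training kernel matrix, plus its
    Nystrom out-of-sample extension k(x_new, X_train) @ U / sqrt(lam)."""
    K = rbf_kernel(X_train, X_train, gamma)
    lam, U = np.linalg.eigh(K)
    lam, U = lam[::-1][:dim], U[:, ::-1][:, :dim]  # top eigenpairs
    train_emb = U * np.sqrt(lam)
    new_emb = rbf_kernel(X_new, X_train, gamma) @ U / np.sqrt(lam)
    return train_emb, new_emb
```

    Applied back to the training points, the extension reproduces the in-sample embedding exactly, which is the consistency property any out-of-sample scheme should satisfy.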

  3. 7 CFR 810.1202 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kernels. Kernels, pieces of rye kernels, and other grains that are badly ground-damaged, badly weather.... Also, underdeveloped, shriveled, and small pieces of rye kernels removed in properly separating the...-damaged kernels. Kernels, pieces of rye kernels, and other grains that are materially discolored and...

  4. Free Vibration Study of Anti-Symmetric Angle-Ply Laminated Plates under Clamped Boundary Conditions

    NASA Astrophysics Data System (ADS)

    Viswanathan, K. K.; Karthik, K.; Sanyasiraju, Y. V. S. S.; Aziz, Z. A.

    2016-11-01

Two types of numerical approach, namely Radial Basis Function and Spline approximation, are used to analyse the free vibration of anti-symmetric angle-ply laminated plates under clamped boundary conditions. The equations of motion are derived using YNS theory under first-order shear deformation. By assuming the solution in separable form, coupled differential equations are obtained in terms of mid-plane displacement and rotational functions. The coupled differential equations are then approximated using Spline functions and radial basis functions to obtain a generalized eigenvalue problem, and parametric studies are made to investigate the effect of aspect ratio, length-to-thickness ratio, number of layers, fibre orientation, and material properties on the frequency parameter. Some results are compared with the existing literature and other new results are given in tables and graphs.

  5. Decoupling control of a five-phase fault-tolerant permanent magnet motor by radial basis function neural network inverse

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Liu, Guohai; Xu, Dezhi; Xu, Liang; Xu, Gaohong; Aamir, Nazir

    2018-05-01

This paper proposes a new decoupled control for a five-phase in-wheel fault-tolerant permanent magnet (IW-FTPM) motor drive, in which radial basis function neural network inverse (RBF-NNI) and internal model control (IMC) are combined. The RBF-NNI system is introduced into the original system to construct a pseudo-linear system, and IMC is used as a robust controller. Hence, the newly proposed control system incorporates the merits of both the IMC and RBF-NNI methods. In order to verify the proposed strategy, an IW-FTPM motor drive is designed based on a dSPACE real-time control platform. The experimental results verify that the d-axis current and the rotor speed are successfully decoupled. Besides, the proposed motor drive exhibits strong robustness even under load torque disturbance.

  6. A Computationally Inexpensive Optimal Guidance via Radial-Basis-Function Neural Network for Autonomous Soft Landing on Asteroids

    PubMed Central

    Zhang, Peng; Liu, Keping; Zhao, Bo; Li, Yuanchun

    2015-01-01

Optimal guidance is essential for the soft landing task. However, due to its high computational complexity, it is hardly applicable to autonomous guidance. In this paper, a computationally inexpensive optimal guidance algorithm based on the radial basis function neural network (RBFNN) is proposed. The optimization problem of the trajectory for soft landing on asteroids is formulated and transformed into a two-point boundary value problem (TPBVP). Combining the database of initial states with the corresponding initial co-states, an RBFNN is trained offline. The optimal trajectory of the soft landing is determined rapidly by applying the trained network in the online guidance. Monte Carlo simulations of soft landing on the asteroid 433 Eros are performed to demonstrate the effectiveness of the proposed guidance algorithm. PMID:26367382

  7. Observer-Based Adaptive Neural Network Control for Nonlinear Systems in Nonstrict-Feedback Form.

    PubMed

    Chen, Bing; Zhang, Huaguang; Lin, Chong

    2016-01-01

    This paper focuses on the problem of adaptive neural network (NN) control for a class of nonlinear nonstrict-feedback systems via output feedback. A novel adaptive NN backstepping output-feedback control approach is first proposed for nonlinear nonstrict-feedback systems. The monotonicity of system bounding functions and the structure character of radial basis function (RBF) NNs are used to overcome the difficulties that arise from nonstrict-feedback structure. A state observer is constructed to estimate the immeasurable state variables. By combining adaptive backstepping technique with approximation capability of radial basis function NNs, an output-feedback adaptive NN controller is designed through backstepping approach. It is shown that the proposed controller guarantees semiglobal boundedness of all the signals in the closed-loop systems. Two examples are used to illustrate the effectiveness of the proposed approach.

  8. Radial basis function network learns ceramic processing and predicts related strength and density

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.

    1993-01-01

    Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
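As a rough illustration of the "nodes-at-data-points" scheme described above, the sketch below places one Gaussian center on each training sample and fits only the output layer. The inputs, target function, and width are made-up stand-ins (not the Si3N4 data), and a closed-form least-squares solve replaces the gradient descent training used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical processing inputs: milling time, sintering time, gas
# pressure (normalized); the target is a made-up smooth stand-in, not
# the NASA strength/density data
X = rng.uniform(0.0, 1.0, size=(40, 3))
y = np.sin(X.sum(axis=1)) + 0.05 * rng.normal(size=40)

centers = X.copy()      # "nodes at data points": one center per sample
width = 0.5

def design(Xq):
    # Gaussian hidden-layer activations for query points Xq
    d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# output-layer weights; a least-squares solve is used here in place of
# the gradient descent described in the abstract
H = design(X)
w, *_ = np.linalg.lstsq(H, y, rcond=None)

train_err = np.abs(design(X) @ w - y).mean()
```

Because there is one center per sample, the network can nearly interpolate the training data; the width controls how much it smooths between samples at query points.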

  9. Formal System Verification - Extension 2

    DTIC Science & Technology

    2012-08-08

    vision of truly trustworthy systems has been to provide a formally verified microkernel basis. We have previously developed the seL4 microkernel...together with a formal proof (in the theorem prover Isabelle/HOL) of its functional correctness [6]. This means that all the behaviours of the seL4 C...source code are included in the high-level, formal specification of the kernel. This work enabled us to provide further formal guarantees about seL4 , in

  10. 7 CFR 810.802 - Definition of other terms.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) Damaged kernels. Kernels and pieces of grain kernels for which standards have been established under the.... (d) Heat-damaged kernels. Kernels and pieces of grain kernels for which standards have been...

  11. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  12. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  13. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  14. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as... purposes of determining inedible kernels, pieces, or particles of almond kernels. [59 FR 39419, Aug. 3...

  15. GPU-accelerated atmospheric chemical kinetics in the ECHAM/MESSy (EMAC) Earth system model (version 2.52)

    NASA Astrophysics Data System (ADS)

    Alvanos, Michail; Christoudias, Theodoros

    2017-10-01

This paper presents an application of GPU accelerators in Earth system modeling. We focus on atmospheric chemical kinetics, one of the most computationally intensive tasks in climate-chemistry model simulations. We developed a software package that automatically generates CUDA kernels to numerically integrate atmospheric chemical kinetics in the global climate model ECHAM/MESSy Atmospheric Chemistry (EMAC), used to study climate change and air quality scenarios. A source-to-source compiler outputs a CUDA-compatible kernel by parsing the FORTRAN code generated by the Kinetic PreProcessor (KPP) general analysis tool. All Rosenbrock methods that are available in the KPP numerical library are supported. Performance evaluation, using Fermi and Pascal CUDA-enabled GPU accelerators, shows achieved speed-ups of 4.5× and 20.4×, respectively, of the kernel execution time. A node-to-node real-world production performance comparison shows a 1.75× speed-up over the non-accelerated application using the KPP three-stage Rosenbrock solver. We provide a detailed description of the code optimizations used to improve the performance, including memory optimizations, control code simplification, and reduction of idle time. The accuracy and correctness of the accelerated implementation are evaluated by comparing the output of the accelerated kernel to the CPU-only code of the application; the median relative difference is found to be less than 0.000000001%. The approach followed, including the computational workload division, and the developed GPU solver code can potentially be used as the basis for hardware acceleration of numerous geoscientific models that rely on KPP for atmospheric chemical kinetics applications.

  16. Relationship between QTL for grain shape, grain weight, test weight, milling yield, and plant height in the spring wheat cross RL4452/'AC Domain'.

    PubMed

    Cabral, Adrian L; Jordan, Mark C; Larson, Gary; Somers, Daryl J; Humphreys, D Gavin; McCartney, Curt A

    2018-01-01

    Kernel morphology characteristics of wheat are complex and quantitatively inherited. A doubled haploid (DH) population of the cross RL4452/'AC Domain' was used to study the genetic basis of seed shape. Quantitative trait loci (QTL) analyses were conducted on a total of 18 traits: 14 grain shape traits, flour yield (Fyd), and three agronomic traits (Plant height [Plht], 1000 Grain weight [Gwt], Test weight [Twt]), using data from trial locations at Glenlea, Brandon, and Morden in Manitoba, Canada, between 1999 and 2004. Kernel shape was studied through digital image analysis with an Acurum® grain analyzer. Plht, Gwt, Twt, Fyd, and grain shape QTL were correlated with each other and QTL analysis revealed that QTL for these traits often mapped to the same genetic locations. The most significant QTL for the grain shape traits were located on chromosomes 4B and 4D, each accounting for up to 24.4% and 53.3% of the total phenotypic variation, respectively. In addition, the most significant QTL for Plht, Gwt, and Twt were all detected on chromosome 4D at the Rht-D1 locus. Rht-D1b decreased Plht, Gwt, Twt, and kernel width relative to the Rht-D1a allele. A narrow genetic interval on chromosome 4B contained significant QTL for grain shape, Gwt, and Plht. The 'AC Domain' allele reduced Plht, Gwt, kernel length and width traits, but had no detectable effect on Twt. The data indicated that this variation was inconsistent with segregation at Rht-B1. Numerous QTL were identified that control these traits in this population.

  17. Relationship between QTL for grain shape, grain weight, test weight, milling yield, and plant height in the spring wheat cross RL4452/‘AC Domain’

    PubMed Central

    Cabral, Adrian L.; Jordan, Mark C.; Larson, Gary; Somers, Daryl J.; Humphreys, D. Gavin

    2018-01-01

    Kernel morphology characteristics of wheat are complex and quantitatively inherited. A doubled haploid (DH) population of the cross RL4452/‘AC Domain’ was used to study the genetic basis of seed shape. Quantitative trait loci (QTL) analyses were conducted on a total of 18 traits: 14 grain shape traits, flour yield (Fyd), and three agronomic traits (Plant height [Plht], 1000 Grain weight [Gwt], Test weight [Twt]), using data from trial locations at Glenlea, Brandon, and Morden in Manitoba, Canada, between 1999 and 2004. Kernel shape was studied through digital image analysis with an Acurum® grain analyzer. Plht, Gwt, Twt, Fyd, and grain shape QTL were correlated with each other and QTL analysis revealed that QTL for these traits often mapped to the same genetic locations. The most significant QTL for the grain shape traits were located on chromosomes 4B and 4D, each accounting for up to 24.4% and 53.3% of the total phenotypic variation, respectively. In addition, the most significant QTL for Plht, Gwt, and Twt were all detected on chromosome 4D at the Rht-D1 locus. Rht-D1b decreased Plht, Gwt, Twt, and kernel width relative to the Rht-D1a allele. A narrow genetic interval on chromosome 4B contained significant QTL for grain shape, Gwt, and Plht. The ‘AC Domain’ allele reduced Plht, Gwt, kernel length and width traits, but had no detectable effect on Twt. The data indicated that this variation was inconsistent with segregation at Rht-B1. Numerous QTL were identified that control these traits in this population. PMID:29357369

  18. Ulnar Rotation Osteotomy for Congenital Radial Head Dislocation.

    PubMed

    Liu, Ruiyu; Miao, Wusheng; Mu, Mingchao; Wu, Ge; Qu, Jining; Wu, Yongtao

    2015-09-01

    To evaluate an ulnar rotation osteotomy for congenital anterior dislocation of the radial head. Nine patients (5 boys and 4 girls aged 6 to 13 years) with congenital anterior dislocation of the radial head were treated with ulnar rotation osteotomy. Magnetic resonance imaging of the elbow showed the proximal radioulnar joint on the anterior-lateral side of the ulna rather than on the lateral side in patients with congenital anterior dislocation of the radial head. On the basis of this finding, we performed an osteotomy on the ulna and laterally rotated the proximal radioulnar joint achieving radial head reduction and restoring the anatomical relationship between the radial head and the capitellum. Clinical and radiographical evaluation of the elbow was performed before surgery and at postoperative follow-up. All patients were followed for 13 to 45 months after surgery. Elbow radiography showed that the radiocapitellar joint was reduced in all patients at the last follow-up visit and that the carrying angle was decreased relative to that in the preoperative condition. Elbow stability and the range of elbow flexion motion were improved at the last follow-up. We did not observe ulnar osteotomy site nonunion or elbow osteoarthritis in these patients. Furthermore, radial head dislocation did not recur. At early follow-up, ulnar rotation osteotomy was a safe and effective method for the treatment of congenital anterior dislocation of the radial head. Therapeutic IV. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.

  19. Determination of glycaemic index; some methodological aspects related to the analysis of carbohydrate load and characteristics of the previous evening meal.

    PubMed

    Granfeldt, Y; Wu, X; Björck, I

    2006-01-01

To determine the possible differences in glycaemic index (GI) depending on (1) the analytical method used to calculate the 'available carbohydrate' load, that is, using carbohydrates by difference (total carbohydrate by difference, minus dietary fibre (DF)) as available carbohydrates vs an available starch basis (total starch minus resistant starch (RS)) of a food rich in intrinsic RS, and (2) the effect of the GI characteristics and/or the content of indigestible carbohydrates (RS and DF) of the evening meal prior to GI testing the following morning. Blood glucose and serum insulin responses were studied after subjects consumed (1) two levels of barley kernels rich in intrinsic RS (15.2%, total starch basis) and (2) a standard breakfast following three different evening meals varying in GI and/or indigestible carbohydrates: pasta, barley kernels, and white wheat bread, respectively. Healthy adults with normal body mass index. (1) Increasing the portion size of barley kernels from 79.6 g (50 g 'available carbohydrates') to 93.9 g (50 g available starch) to adjust for its RS content did not significantly affect the GI or insulin index (II). (2) The low GI barley evening meal, as opposed to the white wheat bread and pasta evening meals, reduced the postprandial glycaemic and insulinaemic areas under the curve (by 23 and 29%, respectively, P < 0.05) at a standardized white bread breakfast fed the following morning. (1) Increasing portion size to compensate for the considerable portion of RS in a low GI barley product had no significant impact on GI or II. However, for GI testing, it is recommended to base the carbohydrate load on specific analyses of the available carbohydrate content. (2) A low GI barley evening meal containing high levels of indigestible carbohydrates (RS and DF) substantially reduced the GI and II of white wheat bread determined at a subsequent breakfast meal.

  20. Quantitative analysis and comparative study of four cities green pattern in API system on the background of big data

    NASA Astrophysics Data System (ADS)

    Xin, YANG; Si-qi, WU; Qi, ZHANG

    2018-05-01

Beijing, London, Paris, and New York are typical world cities, so a comparative study of the green patterns of these four cities is important for identifying gaps and advantages and for mutual learning. The paper provides a basis and new ideas for the development of metropolises in China. On the background of big data, API (Application Programming Interface) systems can provide extensive and accurate basic data for studying urban green patterns in different geographical environments, both domestic and foreign. On this basis, the Average Nearest Neighbor tool, Kernel Density tool, and Standard Ellipse tool on the ArcGIS platform can process and summarize the data and realize quantitative analysis of green patterns. The paper summarizes the uniqueness of the four cities' green patterns and the reasons for their formation on the basis of numerical comparison.

  1. Extruded Bread Classification on the Basis of Acoustic Emission Signal With Application of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Świetlicka, Izabela; Muszyński, Siemowit; Marzec, Agata

    2015-04-01

The presented work covers the problem of developing a method of extruded bread classification with the application of artificial neural networks. Extruded flat graham, corn, and rye breads differing in water activity were used. The breads were subjected to a compression test with simultaneous registration of the acoustic signal. The amplitude-time records were analyzed in both the time and frequency domains. Acoustic emission signal parameters (single energy, counts, amplitude, and event duration) were determined for the breads at four water activities: initial (0.362 for rye, 0.377 for corn, and 0.371 for graham bread), 0.432, 0.529, and 0.648. For classification and clustering, radial basis function networks and self-organizing maps (Kohonen networks) were used. The artificial neural networks were examined with respect to their ability to classify or cluster samples according to bread type, water activity value, or both. The best results were achieved by the radial basis function network in classification according to water activity (88%), while the self-organizing map network yielded 81% in bread type clustering.

  2. Sensitivity of coronal loop sausage mode frequencies and decay rates to radial and longitudinal density inhomogeneities: a spectral approach

    NASA Astrophysics Data System (ADS)

    Cally, Paul S.; Xiong, Ming

    2018-01-01

    Fast sausage modes in solar magnetic coronal loops are only fully contained in unrealistically short dense loops. Otherwise they are leaky, losing energy to their surrounds as outgoing waves. This causes any oscillation to decay exponentially in time. Simultaneous observations of both period and decay rate therefore reveal the eigenfrequency of the observed mode, and potentially insight into the tubes’ nonuniform internal structure. In this article, a global spectral description of the oscillations is presented that results in an implicit matrix eigenvalue equation where the eigenvalues are associated predominantly with the diagonal terms of the matrix. The off-diagonal terms vanish identically if the tube is uniform. A linearized perturbation approach, applied with respect to a uniform reference model, is developed that makes the eigenvalues explicit. The implicit eigenvalue problem is easily solved numerically though, and it is shown that knowledge of the real and imaginary parts of the eigenfrequency is sufficient to determine the width and density contrast of a boundary layer over which the tubes’ enhanced internal densities drop to ambient values. Linearized density kernels are developed that show sensitivity only to the extreme outside of the loops for radial fundamental modes, especially for small density enhancements, with no sensitivity to the core. Higher radial harmonics do show some internal sensitivity, but these will be more difficult to observe. Only kink modes are sensitive to the tube centres. Variation in internal and external Alfvén speed along the loop is shown to have little effect on the fundamental dimensionless eigenfrequency, though the associated eigenfunction becomes more compact at the loop apex as stratification increases, or may even displace from the apex.

  3. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    PubMed

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

    To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to higher rank and corrupted calibration information which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal to noise ratio level as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.

  4. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
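A minimal sketch of the fractional power polynomial idea follows: the kernel is not guaranteed positive semidefinite, so, as the paper prescribes, only eigenvectors with positive eigenvalues are kept when forming kernel PCA features. The data and exponent below are arbitrary assumptions, not the FERET/PIE setup:

```python
import numpy as np

def frac_poly_kernel(X, Y, d=0.8):
    # fractional power polynomial of the dot product; as noted in the
    # abstract, this need not yield a positive semidefinite Gram matrix
    s = X @ Y.T
    return np.sign(s) * np.abs(s) ** d

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 5))        # assumed data, not face images

K = frac_poly_kernel(X, X)
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                      # center the Gram matrix in feature space

w, V = np.linalg.eigh(Kc)
pos = w > 1e-10                     # keep only positive eigenvalues
A = V[:, pos] / np.sqrt(w[pos])

features = Kc @ A                   # kernel PCA projections of the data
```

The columns of `features` come out mutually orthogonal with squared norms equal to the retained eigenvalues, which is the usual kernel PCA normalization.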

  5. 7 CFR 981.7 - Edible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  6. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of the data points in kernel space plus a constant. Thus, the k-means centers of the data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
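The sampling idea can be sketched as follows, comparing the Nyström reconstruction error for random landmarks versus k-means landmarks. For brevity, ordinary input-space k-means stands in for kernel k-means (with a Gaussian kernel the two are related but not identical), and the data and kernel width are assumptions:

```python
import numpy as np

def rbf(X, Y, gamma=0.1):
    # Gaussian kernel matrix between row sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kmeans(X, k, iters=50, seed=0):
    # plain Lloyd's algorithm; its centers stand in here for the
    # kernel k-means centers used in the paper
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lbl = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lbl == j].mean(0) if (lbl == j).any() else C[j]
                      for j in range(k)])
    return C

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
K = rbf(X, X)                       # exact kernel matrix (for comparison)

def nystrom_error(landmarks):
    # Nystrom reconstruction K_hat = C W^+ C^T from m landmark points
    C = rbf(X, landmarks)           # n x m cross-kernel
    W = rbf(landmarks, landmarks)   # m x m landmark kernel
    return np.linalg.norm(K - C @ np.linalg.pinv(W) @ C.T, 'fro')

err_random = nystrom_error(X[rng.choice(200, 20, replace=False)])
err_kmeans = nystrom_error(kmeans(X, 20))
baseline = np.linalg.norm(K, 'fro')
```

In practice, k-means landmarks typically give a noticeably lower Frobenius error than uniform random sampling at the same landmark count, consistent with the bound described in the abstract.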

  7. Formal System Verification for Trustworthy Embedded Systems

    DTIC Science & Technology

    2011-04-19

    microkernel basis. We had previously achieved code- level formal verification of the seL4 microkernel [3]. In the present project, over 12 months with 0.6 FTE...project, we designed and implemented a secure network access device (SAC) on top of the verified seL4 microkernel. The device allows a trusted front...Engelhardt, Rafal Kolan- ski, Michael Norrish, Thomas Sewell, Harvey Tuch, and Simon Winwood. seL4 : Formal verification of an OS kernel. CACM, 53(6):107

  8. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    PubMed

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome its drawback of long computation time, graphics-processing-unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.

  9. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
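A toy version of the second (normal-mode-only) approach might look like this: fit an RBF model of one process observable from the others using normal-condition data, then flag samples whose residual exceeds a threshold derived from normal-mode statistics. The process model, noise level, and threshold rule below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical process: one observable modeled from two others under
# normal operating conditions
X_norm = rng.uniform(-1, 1, size=(100, 2))
y_norm = X_norm[:, 0] ** 2 + 0.5 * X_norm[:, 1] + 0.05 * rng.normal(size=100)

centers = X_norm[::5]        # subsample of normal data as RBF centers
width = 0.4

def phi(X):
    # Gaussian RBF features for query points X
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(phi(X_norm), y_norm, rcond=None)

# residual threshold estimated from normal-mode statistics
resid = y_norm - phi(X_norm) @ w
thresh = 5 * resid.std()

def is_fault(x, y):
    # flag a sample whose observable deviates from the normal-mode model
    return abs(y - (phi(x[None, :]) @ w)[0]) > thresh

x = np.array([0.2, -0.3])
y_ok = x[0] ** 2 + 0.5 * x[1]   # consistent with the normal model
y_bad = y_ok + 1.0              # simulated sensor or process fault
```

The same residual generator can feed the sensor-compensation idea mentioned above: when one observable is flagged, its value can be re-estimated from the model of the remaining sensors.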

  10. Fault detection for hydraulic pump based on chaotic parallel RBF network

    NASA Astrophysics Data System (ADS)

    Lu, Chen; Ma, Ning; Wang, Zhipeng

    2011-12-01

    In this article, a parallel radial basis function network in conjunction with chaos theory (CPRBF network) is presented, and applied to practical fault detection for hydraulic pump, which is a critical component in aircraft. The CPRBF network consists of a number of radial basis function (RBF) subnets connected in parallel. The number of input nodes for each RBF subnet is determined by different embedding dimension based on chaotic phase-space reconstruction. The output of CPRBF is a weighted sum of all RBF subnets. It was first trained using the dataset from normal state without fault, and then a residual error generator was designed to detect failures based on the trained CPRBF network. Then, failure detection can be achieved by the analysis of the residual error. Finally, two case studies are introduced to compare the proposed CPRBF network with traditional RBF networks, in terms of prediction and detection accuracy.

  11. A comparison of neural network architectures for the prediction of MRR in EDM

    NASA Astrophysics Data System (ADS)

    Jena, A. R.; Das, Raja

    2017-11-01

    The aim of the research work is to predict the material removal rate of a work-piece in electrical discharge machining (EDM). Here, an effort has been made to predict the material removal rate through a back-propagation neural network (BPN) and a radial basis function neural network (RBFN) for a work-piece of AISI D2 steel. The input parameters for the architecture are discharge-current (Ip), pulse-duration (Ton), and duty-cycle (τ), taken into consideration to obtain the material removal rate of the work-piece. It has been observed that the radial basis function neural network trains comparatively faster than the back-propagation neural network, but the back-propagation neural network yields values closer to the experimental results. Therefore, BPN may be considered the better process in this architecture for consistent prediction, saving the time and money of conducting experiments.

  12. An adaptive trajectory tracking control of four rotor hover vehicle using extended normalized radial basis function network

    NASA Astrophysics Data System (ADS)

    ul Amin, Rooh; Aijun, Li; Khan, Muhammad Umer; Shamshirband, Shahaboddin; Kamsin, Amirrudin

    2017-01-01

    In this paper, an adaptive trajectory tracking controller based on an extended normalized radial basis function network (ENRBFN) is proposed for a 3-degree-of-freedom four-rotor hover vehicle subjected to external disturbance, i.e., wind turbulence. A mathematical model of the four-rotor hover system is developed using the equations of motion, and a new computational intelligence based technique, the ENRBFN, is introduced to approximate the unmodeled dynamics of the hover vehicle. An adaptive controller based on the Lyapunov stability approach is designed to achieve tracking of the desired attitude angles of the four-rotor hover vehicle in the presence of wind turbulence. An adaptive weight update based on the Levenberg-Marquardt algorithm is used to avoid weight drift in case the system is exposed to external disturbances. The closed-loop system stability is also analyzed using Lyapunov stability theory. Simulation and experimental results are included to validate the effectiveness of the proposed control scheme.

  13. Estimation of Energy Expenditure Using a Patch-Type Sensor Module with an Incremental Radial Basis Function Neural Network

    PubMed Central

    Li, Meina; Kwak, Keun-Chang; Kim, Youn Tae

    2016-01-01

    Conventionally, indirect calorimetry has been used to estimate oxygen consumption in an effort to accurately measure human body energy expenditure. However, calorimetry requires the subject to wear a mask that is neither convenient nor comfortable. The purpose of our study is to develop a patch-type sensor module with an embedded incremental radial basis function neural network (RBFNN) for estimating the energy expenditure. The sensor module contains one ECG electrode and a three-axis accelerometer, and can perform real-time heart rate (HR) and movement index (MI) monitoring. The embedded incremental network includes linear regression (LR) and RBFNN based on context-based fuzzy c-means (CFCM) clustering. This incremental network is constructed by building a collection of information granules through CFCM clustering that is guided by the distribution of error of the linear part of the LR model. PMID:27669249
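The two-stage structure described above (a linear part plus an RBF part driven by the error of the linear model) can be sketched as follows; random center selection stands in for the CFCM clustering, and all data are synthetic stand-ins for HR/MI inputs and energy-expenditure targets:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic inputs (e.g., heart rate, movement index) and targets
# with a nonlinear component the linear model cannot capture.
X = rng.uniform(0, 1, size=(120, 2))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + np.sin(4 * X[:, 0])

# Stage 1: linear regression captures the global trend.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ w

# Stage 2: an RBF layer models what the linear part missed
# (random centers here stand in for CFCM clustering).
centers = X[rng.choice(len(X), 10, replace=False)]

def rbf_features(X, centers, width=0.3):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

Phi = rbf_features(X, centers)
c, *_ = np.linalg.lstsq(Phi, residual, rcond=None)

pred = A @ w + Phi @ c
assert np.mean((y - pred) ** 2) < np.mean(residual ** 2)
```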

  14. Radial basis functions in mathematical modelling of flow boiling in minichannels

    NASA Astrophysics Data System (ADS)

    Hożejowska, Sylwia; Hożejowski, Leszek; Piasecka, Magdalena

    The paper addresses heat transfer processes in flow boiling in a vertical minichannel of 1.7 mm depth with a smooth heated surface contacting the fluid. The heated element for FC-72 flowing in the minichannel was a 0.45 mm thick plate made of Haynes-230 alloy. An infrared camera positioned opposite the central, axially symmetric part of the channel measured the plate temperature. K-type thermocouples and pressure converters were installed at the inlet and outlet of the minichannel. In the study, radial basis functions were used to solve a heat transfer problem in a heated plate supplied with controlled direct current. According to the model assumptions, the problem is treated as two-dimensional and governed by the Poisson equation. The aim of the study is to determine the temperature field and the heat transfer coefficient. The results were verified by comparing them with those obtained by the Trefftz method.

  15. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  16. Refining Linear Fuzzy Rules by Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil

    1996-01-01

    Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in closing the gap between the application of reinforcement learning methods in the domains where only some limited input-output data is available.

  17. Classification of endoscopic capsule images by using color wavelet features, higher order statistics and radial basis functions.

    PubMed

    Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L

    2008-01-01

    This paper presents a system to support medical diagnosis and the detection of abnormal lesions by processing capsule endoscopic images. Endoscopic images possess rich information expressed by texture, which can be efficiently extracted from the medium scales of the wavelet transform. The set of features proposed in this paper to code textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second order textural measures, and an optimum subset of them is proposed. Third and fourth order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a classifier based on radial basis functions for the characterization of the image regions along the video frames. The whole methodology has been applied to real data containing six full endoscopic exams and reached 95% specificity and 93% sensitivity.
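The role of the third and fourth order moments can be illustrated with a small feature extractor: skewness and kurtosis distinguish heavy-tailed (non-Gaussian) coefficient distributions from Gaussian ones. The sub-band data here are synthetic stand-ins for wavelet coefficients:

```python
import numpy as np

def moment_features(coeffs):
    """Mean, variance, skewness, and kurtosis of a coefficient sub-band;
    the higher-order terms capture non-Gaussian behaviour."""
    c = np.asarray(coeffs, float)
    mu, sigma = c.mean(), c.std()
    z = (c - mu) / sigma
    return np.array([mu, sigma ** 2, (z ** 3).mean(), (z ** 4).mean()])

rng = np.random.default_rng(2)
gauss = moment_features(rng.normal(size=50_000))
heavy = moment_features(rng.laplace(size=50_000))
assert abs(gauss[3] - 3.0) < 0.2   # Gaussian kurtosis is ~3
assert heavy[3] > gauss[3]         # heavier tails -> larger kurtosis
```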

  18. PIG (partially ionized globule) anatomy - Density and temperature structure of the bright-rimmed globule IC 1396E

    NASA Technical Reports Server (NTRS)

    Serabyn, E.; Guesten, R.; Mundy, L.

    1993-01-01

    The density and temperature structure of the bright-rimmed cometary globule IC 1396E is estimated, and the possibility that recent internal star formation was triggered by the ionization front in its southern surface is assessed. On the basis of NH3 data, gas temperatures in the globule are found to increase outward from the center, from a minimum of 17 K in its tail to a maximum of 26 K on the surface most directly facing the stars ionizing IC 1396. On the basis of a microturbulent radiative transfer code to model the radial dependence of the CS line intensities, and also the intensities of the optically thin 2-1 and 5-4 lines toward the cloud center, a radial density dependence of r exp -1.55 to r exp -1.75 is found.

  19. The effect of texture on the shaft surface on the sealing performance of radial lip seals

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Jia, XiaoHong; Gao, Zhi; Wang, YuMing

    2014-07-01

    On the basis of an elastohydrodynamic model, the present study numerically analyzes the effect of various microdimple texture shapes, namely circular, square, and oriented isosceles triangular, on the pumping rate and the friction torque of radial lip seals, and determines which microdimple texture shape can produce a positive pumping rate. The area ratio, depth, and shape dimension of a single texture are the most important geometric parameters influencing tribological performance. For the selected texture shape, a parameter analysis is conducted to determine the optimal combination of these three parameters. The simulated performance of a radial lip seal with texture on the shaft surface is also compared with that of a conventional lip seal without any texture on the shaft surface.

  20. Exploiting graph kernels for high performance biomedical relation extraction.

    PubMed

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, another important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance, and a method to apply graph kernels to extract relations expressed across multiple sentences. Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant.
    In our evaluation of ASM for the PPI task, ASM performed better than the APG kernel for the BioInfer dataset in the Area Under Curve (AUC) measure (74% vs 69%). However, for all the other PPI datasets, namely AIMed, HPRD50, IEPA and LLL, ASM is substantially outperformed by the APG kernel in both F-score and AUC. We demonstrate high-performance Chemical-Induced Disease relation extraction without employing external knowledge sources or task-specific heuristics. Our work shows that graph kernels are effective in extracting relations that are expressed across multiple sentences, and that the graph kernels, namely the ASM and APG kernels, substantially outperform the tree kernels. Among the graph kernels, the ASM kernel is effective for biomedical relation extraction, with performance comparable to the APG kernel on datasets such as CID sentence-level relation extraction and BioInfer in PPI. Overall, the APG kernel is shown to be significantly more accurate than the ASM kernel, achieving better performance on most datasets.
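As a toy illustration of the graph-kernel idea (measuring similarity by counting shared substructures via a product graph), the sketch below implements a simple common-walk kernel; it is a simplified stand-in for the APG and ASM kernels discussed above, not their actual algorithms:

```python
import numpy as np

def walk_kernel(A1, A2, lam=0.1, max_len=5):
    """Weighted count of common walks up to max_len, computed through
    powers of the direct-product graph's adjacency matrix."""
    Ax = np.kron(A1, A2)               # adjacency of the product graph
    total, P = 0.0, np.eye(len(Ax))
    for k in range(1, max_len + 1):
        P = P @ Ax
        total += (lam ** k) * P.sum()  # lam down-weights longer walks
    return total

# Two tiny dependency-graph stand-ins.
path3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # 3-node path
tri   = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # triangle

# The denser graph shares more walks with itself than the path does.
assert walk_kernel(tri, tri) > walk_kernel(path3, path3)
```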

  1. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080

  2. Target oriented dimensionality reduction of hyperspectral data by Kernel Fukunaga-Koontz Transform

    NASA Astrophysics Data System (ADS)

    Binol, Hamidullah; Ochilov, Shuhrat; Alam, Mohammad S.; Bal, Abdullah

    2017-02-01

    Principal component analysis (PCA) is a popular technique in remote sensing for dimensionality reduction. While PCA is suitable for data compression, it is not necessarily an optimal technique for feature extraction, particularly when the features are exploited in supervised learning applications (Cheriyadat and Bruce, 2003) [1]. Preserving features belonging to the target is crucial to the performance of target detection/recognition techniques. The Fukunaga-Koontz Transform (FKT) based supervised band reduction technique can be used to meet this requirement. FKT achieves feature selection by transforming the data into a new space in which the feature classes have complementary eigenvectors. An analysis of these eigenvectors under two classes, target and background clutter, can be utilized for target oriented band reduction, since the basis functions that best represent the target class carry the least information about the background class. By selecting the few eigenvectors that are most relevant to the target class, the dimension of hyperspectral data can be reduced, which presents significant advantages for near real time target detection applications. The nonlinear properties of the data can be extracted by a kernel approach, which provides better target features. Thus, we propose constructing a kernel FKT (KFKT) for target oriented band reduction. The performance of the proposed KFKT based target oriented dimensionality reduction algorithm has been tested on two real-world hyperspectral datasets and the results are reported.

  3. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems.
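The KELM model that the CSA/simplex scheme tunes has a closed-form training step. A minimal sketch, assuming an RBF kernel and synthetic calibration data (the parameter search itself is omitted, and the regularization constant C and kernel parameter gamma are simply fixed):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=1.0):
    """Closed-form KELM training: solve (I/C + K) beta = y."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, y)

def kelm_predict(Xq, X, beta, gamma=1.0):
    return rbf_kernel(Xq, X, gamma) @ beta

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(80, 2))    # stand-ins for temperature / static pressure
y = np.sin(2 * X[:, 0]) + 0.3 * X[:, 1] # hypothetical sensor error surface
beta = kelm_fit(X, y)
fit_err = np.mean(np.abs(kelm_predict(X, X, beta) - y))
assert fit_err < 0.05
```

In a full compensation pipeline, the CSA algorithm followed by the Nelder-Mead simplex would search over (C, gamma) to minimize cross-validated error rather than fixing them as here.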

  4. 7 CFR 810.2202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kernels, foreign material, and shrunken and broken kernels. The sum of these three factors may not exceed... the removal of dockage and shrunken and broken kernels. (g) Heat-damaged kernels. Kernels, pieces of... sample after the removal of dockage and shrunken and broken kernels. (h) Other grains. Barley, corn...

  5. 7 CFR 981.8 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  6. 7 CFR 51.1415 - Inedible kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or otherwise...

  7. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  8. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    PubMed

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Unconventional protein sources: apricot seed kernels.

    PubMed

    Gabrial, G N; El-Nahry, F I; Awadalla, M Z; Girgis, S M

    1981-09-01

    Hamawy apricot seed kernels (sweet), Amar apricot seed kernels (bitter) and treated Amar apricot kernels (bitterness removed) were evaluated biochemically. All kernels were found to be high in fat (42.2-50.91%), protein (23.74-25.70%) and fiber (15.08-18.02%). Phosphorus, calcium, and iron were determined in all experimental samples. The three different apricot seed kernels were used for an extensive study including the qualitative determination of the amino acid constituents by acid hydrolysis, quantitative determination of some amino acids, and biological evaluation of the kernel proteins in order to use them as new protein sources. Weanling albino rats failed to grow on diets containing the Amar apricot seed kernels owing to low food consumption caused by the bitterness, although there was no loss in weight. The Protein Efficiency Ratio data and blood analysis results showed the Hamawy apricot seed kernels to be higher in biological value than the treated apricot seed kernels. The Net Protein Ratio data, which account for both weight maintenance and growth, showed the treated apricot seed kernels to be higher in biological value than both the Hamawy and Amar kernels. The Net Protein Ratios for the last two kernels were nearly equal.

  10. An introduction to kernel-based learning algorithms.

    PubMed

    Müller, K R; Mika, S; Rätsch, G; Tsuda, K; Schölkopf, B

    2001-01-01

    This paper provides an introduction to support vector machines, kernel Fisher discriminant analysis, and kernel principal component analysis, as examples for successful kernel-based learning methods. We first give a short background about Vapnik-Chervonenkis theory and kernel feature spaces and then proceed to kernel based learning in supervised and unsupervised scenarios including practical and algorithmic considerations. We illustrate the usefulness of kernel algorithms by discussing applications such as optical character recognition and DNA analysis.
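The kernel feature-space idea underlying all of these methods can be illustrated in a few lines: a kernel evaluates an inner product in a feature space without constructing the features explicitly. The degree-2 polynomial map below is a standard textbook example, not taken from the paper:

```python
import numpy as np

def poly_features(x):
    """Explicit feature map for the homogeneous degree-2 polynomial
    kernel on 2-D input (with sqrt(2)-scaled cross term)."""
    x1, x2 = x
    return np.array([x1 ** 2, x2 ** 2, np.sqrt(2) * x1 * x2])

a, b = np.array([1.0, 2.0]), np.array([3.0, 4.0])

explicit = poly_features(a) @ poly_features(b)  # inner product in feature space
via_kernel = (a @ b) ** 2                       # same value, no features built
assert np.isclose(explicit, via_kernel)
```

Kernel-based methods such as SVMs, kernel Fisher discriminant analysis, and kernel PCA all exploit this identity: every computation is phrased in terms of inner products, so the kernel can stand in for an arbitrarily high-dimensional feature map.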

  11. The complex variable reproducing kernel particle method for bending problems of thin plates on elastic foundations

    NASA Astrophysics Data System (ADS)

    Chen, L.; Cheng, Y. M.

    2018-07-01

    In this paper, the complex variable reproducing kernel particle method (CVRKPM) for solving the bending problems of isotropic thin plates on elastic foundations is presented. In the CVRKPM, a one-dimensional basis function is used to obtain the shape function of a two-dimensional problem. The CVRKPM is used to form the approximation function of the deflection of a thin plate resting on an elastic foundation; the Galerkin weak form of thin plates on elastic foundations is employed to obtain the discretized system equations; the penalty method is used to apply the essential boundary conditions; and the Winkler and Pasternak foundation models are used to describe the interface pressure between the plate and the foundation. The corresponding formulae of the CVRKPM for thin plates on elastic foundations are then presented in detail. Several numerical examples are given to discuss the efficiency and accuracy of the CVRKPM, and the corresponding advantages of the present method are shown.

  12. 7 CFR 981.408 - Inedible kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  13. Design of CT reconstruction kernel specifically for clinical lung imaging

    NASA Astrophysics Data System (ADS)

    Cody, Dianna D.; Hsieh, Jiang; Gladish, Gregory W.

    2005-04-01

    In this study we developed a new reconstruction kernel specifically for chest CT imaging. An experimental flat-panel CT scanner was used on large dogs to produce 'ground-truth' reference chest CT images. These dogs were also examined using a clinical 16-slice CT scanner. We concluded from the dog images acquired on the clinical scanner that the loss of subtle lung structures was due mostly to the presence of the background noise texture when using currently available reconstruction kernels. This qualitative evaluation of the dog CT images prompted the design of a new reconstruction kernel, consisting of the combination of a low-pass and a high-pass kernel, called the 'Hybrid' kernel. The performance of this Hybrid kernel fell between the two kernels on which it was based, as expected. The Hybrid kernel was also applied to a set of 50 patient data sets; the analysis of these clinical images is underway. We are hopeful that this Hybrid kernel will produce clinical images with an acceptable tradeoff of lung detail, reliable HU, and image noise.
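The idea of blending a low-pass and a high-pass kernel can be sketched in one dimension; the filter taps and blend weight below are hypothetical, not the study's actual CT kernels. Because convolution is linear, the hybrid kernel's response is exactly the same blend of the smooth and sharp responses:

```python
import numpy as np

# Hypothetical 1-D stand-ins for a smooth and a sharp reconstruction kernel.
low_pass  = np.array([1, 4, 6, 4, 1], float) / 16.0   # binomial smoothing
high_pass = np.array([0, -1, 4, -1, 0], float) / 2.0  # edge-enhancing

alpha = 0.5                                # blend weight (a design choice)
hybrid = alpha * low_pass + (1 - alpha) * high_pass

signal = np.r_[np.zeros(10), np.ones(10)]  # a step edge, like a lung boundary
smooth = np.convolve(signal, low_pass, mode="same")
sharp  = np.convolve(signal, high_pass, mode="same")
mixed  = np.convolve(signal, hybrid, mode="same")

# The hybrid response sits exactly between the smooth and sharp responses.
assert np.allclose(mixed, alpha * smooth + (1 - alpha) * sharp)
```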

  14. Quality changes in macadamia kernel between harvest and farm-gate.

    PubMed

    Walton, David A; Wallace, Helen M

    2011-02-01

    Macadamia integrifolia, Macadamia tetraphylla and their hybrids are cultivated for their edible kernels. After harvest, nuts-in-shell are partially dried on-farm and sorted to eliminate poor-quality kernels before consignment to a processor. During these operations, kernel quality may be lost. In this study, macadamia nuts-in-shell were sampled at five points of an on-farm postharvest handling chain from dehusking to the final storage silo to assess quality loss prior to consignment. Shoulder damage, weight of pieces and unsound kernel were assessed for raw kernels, and colour, mottled colour and surface damage for roasted kernels. Shoulder damage, weight of pieces and unsound kernel for raw kernels increased significantly between the dehusker and the final silo. Roasted kernels displayed a significant increase in dark colour, mottled colour and surface damage during on-farm handling. Significant loss of macadamia kernel quality occurred on a commercial farm during sorting and storage of nuts-in-shell before nuts were consigned to a processor. Nuts-in-shell should be dried as quickly as possible and on-farm handling minimised to maintain optimum kernel quality. 2010 Society of Chemical Industry.

  15. Vibration control of uncertain multiple launch rocket system using radial basis function neural network

    NASA Astrophysics Data System (ADS)

    Li, Bo; Rui, Xiaoting

    2018-01-01

    Poor dispersion characteristics of rockets due to the vibration of the Multiple Launch Rocket System (MLRS) have restricted MLRS development for several decades. Vibration control is a key technique for improving the dispersion characteristics of rockets. For a mechanical system such as the MLRS, the major difficulty in designing a control strategy that achieves the desired vibration control performance is to guarantee the robustness and stability of the control system in the presence of uncertainties and nonlinearities. To approach this problem, a computed torque controller integrated with a radial basis function neural network is proposed to achieve high-precision vibration control for the MLRS. In this paper, the vibration response of a computed torque controlled MLRS is described. The azimuth and elevation mechanisms of the MLRS are driven by permanent magnet synchronous motors and assumed to be rigid. First, the dynamic model of the motor-mechanism coupling system is established using the Lagrange method and field-oriented control theory. Then, in order to deal with the nonlinearities, a computed torque controller is designed to control the vibration of the MLRS when it is firing a salvo of rockets. Furthermore, to compensate for the lumped uncertainty due to parametric variations and un-modeled dynamics in the design of the computed torque controller, a radial basis function neural network estimator is developed to adapt to the uncertainty based on Lyapunov stability theory. Finally, the simulation results demonstrate the effectiveness of the proposed control system and show that the proposed controller is robust with regard to the uncertainty.
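The RBF-network uncertainty estimator with an online weight-update law can be sketched in scalar form. Everything below is illustrative: the disturbance function, centers, and adaptation gain are made up, and a plain gradient-style (LMS) update stands in for the paper's Lyapunov-derived law:

```python
import numpy as np

rng = np.random.default_rng(5)
centers = np.linspace(-1, 1, 9)   # fixed RBF centers over the state range
width = 0.3

def phi(x):
    """Gaussian RBF feature vector for a scalar state x."""
    return np.exp(-(x - centers) ** 2 / (2 * width ** 2))

def unknown(x):
    """Lumped uncertainty the network must learn (hypothetical)."""
    return 0.5 * np.sin(3 * x)

w = np.zeros_like(centers)
eta = 0.1                         # adaptation gain
for _ in range(3000):             # online adaptation loop
    x = rng.uniform(-1, 1)
    err = unknown(x) - w @ phi(x)
    w += eta * err * phi(x)       # gradient-style weight-update law

xs = np.linspace(-0.9, 0.9, 50)
approx = np.array([w @ phi(x) for x in xs])
assert np.mean(np.abs(approx - 0.5 * np.sin(3 * xs))) < 0.05
```

In the controller above, the learned term w @ phi(x) would be added to the computed torque to cancel the estimated uncertainty.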

  16. Norm overlap between many-body states: Uncorrelated overlap between arbitrary Bogoliubov product states

    NASA Astrophysics Data System (ADS)

    Bally, B.; Duguet, T.

    2018-02-01

    Background: State-of-the-art multi-reference energy density functional calculations require the computation of norm overlaps between different Bogoliubov quasiparticle many-body states. It is only recently that the efficient and unambiguous calculation of such norm kernels has become available under the form of Pfaffians [L. M. Robledo, Phys. Rev. C 79, 021302 (2009), 10.1103/PhysRevC.79.021302]. Recently developed particle-number-restored Bogoliubov coupled-cluster (PNR-BCC) and particle-number-restored Bogoliubov many-body perturbation (PNR-BMBPT) ab initio theories [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] make use of generalized norm kernels incorporating explicit many-body correlations. In PNR-BCC and PNR-BMBPT, the Bogoliubov states involved in the norm kernels differ specifically via a global gauge rotation. Purpose: The goal of this work is threefold. We wish (i) to propose and implement an alternative to the Pfaffian method to compute unambiguously the norm overlap between arbitrary Bogoliubov quasiparticle states, (ii) to extend the first point to explicitly correlated norm kernels, and (iii) to scrutinize the analytical content of the correlated norm kernels employed in PNR-BMBPT. Point (i) constitutes the purpose of the present paper while points (ii) and (iii) are addressed in a forthcoming paper. Methods: We generalize the method used in another work [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] in such a way that it is applicable to kernels involving arbitrary pairs of Bogoliubov states. The formalism is presently explicated in detail in the case of the uncorrelated overlap between arbitrary Bogoliubov states. The power of the method is numerically illustrated and benchmarked against known results on the basis of toy models of increasing complexity. 
Results: The norm overlap between arbitrary Bogoliubov product states is obtained under a closed-form expression allowing its computation without any phase ambiguity. The formula is physically intuitive, accurate, and versatile. It equally applies to norm overlaps between Bogoliubov states of even or odd number parity. Numerical applications illustrate these features and provide a transparent representation of the content of the norm overlaps. Conclusions: The complex norm overlap between arbitrary Bogoliubov states is computed, without any phase ambiguity, via elementary linear algebra operations. The method can be used in any configuration mixing of orthogonal and non-orthogonal product states. Furthermore, the closed-form expression extends naturally to correlated overlaps at play in PNR-BCC and PNR-BMBPT. As such, the straight overlap between Bogoliubov states is the zero-order reduction of more involved norm kernels to be studied in a forthcoming paper.
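For orientation, the phase ambiguity being resolved is the one left by the classic Onishi formula for the overlap of two Bogoliubov vacua (standard textbook form, quoted here as background rather than taken from the paper itself):

```latex
% Bogoliubov vacua |\Phi_a\rangle and |\Phi_b\rangle with Bogoliubov
% matrices (U_a, V_a) and (U_b, V_b); the Onishi formula reads
\langle \Phi_a | \Phi_b \rangle
  = \pm \sqrt{ \det\left( U_a^{\dagger} U_b + V_a^{\dagger} V_b \right) }
% The square root fixes the overlap only up to a global sign; Pfaffian
% formulas and the linear-algebra method of this paper both remove that
% ambiguity.
```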

  17. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of the posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
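A toy sketch of the Fisher-kernel construction that the TOP kernel departs from (the model and numbers are illustrative, not from the paper): for a product-of-Bernoullis model p(x|θ) = Π θᵢ^xᵢ (1−θᵢ)^(1−xᵢ), the Fisher kernel is an inner product of gradients of the marginal log-likelihood, whereas the TOP kernel would use tangent vectors of posterior log-odds instead.

```python
# Toy Fisher kernel for a product-of-Bernoullis model (illustrative only).

def fisher_score(x, theta):
    # d/dtheta_i log p(x|theta) = x_i/theta_i - (1 - x_i)/(1 - theta_i)
    return [xi / t - (1 - xi) / (1 - t) for xi, t in zip(x, theta)]

def fisher_kernel(x, y, theta):
    u, v = fisher_score(x, theta), fisher_score(y, theta)
    return sum(a * b for a, b in zip(u, v))

theta = [0.5, 0.5]
print(fisher_kernel([1, 0], [1, 0], theta))  # -> 8.0
```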

  18. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    PubMed

    Kwak, Nojun

    2016-05-20

Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
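The core idea behind NPT can be sketched as follows (a simplified stand-in, not the author's code): given a positive-definite kernel matrix K, any factorization Y Yᵀ = K — here a Cholesky factorization — yields explicit sample coordinates on which linear algorithms can run directly.

```python
import math

# Recover explicit coordinates Y with Y Y^T = K from a positive-definite
# kernel matrix K via Cholesky factorization (illustrative sketch).

def cholesky(K):
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(K[i][i] - s)
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return L

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

X = [[0.0], [1.0], [2.0]]
K = [[rbf(a, b) for b in X] for a in X]
Y = cholesky(K)  # row i = coordinates of sample i in the RKHS subspace
# Inner products of the recovered coordinates reproduce K.
dot = sum(Y[0][k] * Y[1][k] for k in range(3))
print(abs(dot - K[0][1]) < 1e-9)  # True
```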

  19. Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.

    2014-12-01

Data assimilation is one of the ubiquitous and computationally hard problems in the Earth Sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QAC) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offer a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to a spherical Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: (1) emulating the nonlinear model and/or observation function using radial basis functions (RBF) or Chebyshev polynomials; (2) truncating a Taylor series around each RBF kernel; (3) reducing the Taylor polynomial to a quadratic using ancilla gadgets; (4) mapping the real-valued quadratic to a fixed-precision binary quadratic; and (5) mapping the fully coupled binary quadratic to a partially coupled spherical Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qubits (with 1024- and 2048-qubit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QACs solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation as the size of these computers grows in the coming years.
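The mapping of a real-valued quadratic to a fixed-precision binary quadratic can be sketched as follows (coefficients and bit depth are illustrative, not the authors' code): a real variable x in [0, 1) is expanded in binary fractions, x ≈ Σₖ bₖ 2^−(k+1), so a quadratic a·x² + c·x becomes a QUBO over the bits bₖ.

```python
# Fixed-precision binarization of a real-valued quadratic into a QUBO
# (illustrative sketch; uses b_i^2 = b_i for binary variables).

def binarize_quadratic(a, c, n_bits):
    # Returns a QUBO dict Q with f(b) = sum Q[(i, j)] * b_i * b_j.
    w = [2.0 ** -(k + 1) for k in range(n_bits)]
    Q = {}
    for i in range(n_bits):
        Q[(i, i)] = a * w[i] * w[i] + c * w[i]
        for j in range(i + 1, n_bits):
            Q[(i, j)] = 2 * a * w[i] * w[j]
    return Q

def qubo_value(Q, bits):
    return sum(coef * bits[i] * bits[j] for (i, j), coef in Q.items())

Q = binarize_quadratic(a=1.0, c=-1.0, n_bits=3)
# x = 0.5 -> bits (1, 0, 0); f(0.5) = 0.25 - 0.5 = -0.25
print(qubo_value(Q, [1, 0, 0]))  # -> -0.25
```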

  20. Development of Antimicrobial Peptide Prediction Tool for Aquaculture Industries.

    PubMed

    Gautam, Aditi; Sharma, Asuda; Jaiswal, Sarika; Fatma, Samar; Arora, Vasu; Iquebal, M A; Nandi, S; Sundaray, J K; Jayasankar, P; Rai, Anil; Kumar, Dinesh

    2016-09-01

Microbial diseases in fish, plants, animals and humans are rising constantly; thus, discovery of their antidotes is imperative. The use of antibiotics in aquaculture further compounds the problem through development of resistance and consequent consumer health risk by bio-magnification. Antimicrobial peptides (AMPs) have been highly promising as a natural alternative to chemical antibiotics. Though AMPs are molecules of the innate immune defense of all advanced eukaryotic organisms, fish, being heavily dependent on their innate immune defense, are a good source of AMPs with much wider applicability. A machine learning-based prediction method using wet-laboratory-validated fish AMPs can accelerate AMP discovery using available fish genomic and proteomic data. Earlier AMP prediction servers are based on multi-phyla/species data, and we report here the world's first AMP prediction server for fishes. It is freely accessible at http://webapp.cabgrid.res.in/fishamp/. A total of 151 AMPs related to fish collected from various databases and published literature were taken for this study. For model development and prediction, N-terminus residues, C-terminus residues and full sequences were considered. The best models were those with polynomial-2, linear and radial basis function kernels, with accuracies of 97, 99 and 97%, respectively. We found that the performance of the support vector machine-based models is superior to that of artificial neural networks. This in silico approach can drastically reduce the time and cost of AMP discovery. Such accelerated discovery of lead AMP molecules, with potential applications in diverse areas such as fish and human health (as a substitute for antibiotics, an immunomodulator, an antitumor agent, and a vaccine adjuvant and inactivator) and packaged food, can be of much importance for industry.
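The three kernel types compared in the study can be sketched as follows (feature extraction from peptides is simplified here to amino-acid composition, and the sequences are made up for illustration):

```python
import math

# Linear, degree-2 polynomial, and RBF kernels on simple composition features.

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly2(x, y, coef0=1.0):
    return (linear(x, y) + coef0) ** 2

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def composition(seq, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    # Fraction of each amino acid in the peptide.
    return [seq.count(a) / len(seq) for a in alphabet]

x = composition("GLFDIVKKVVGALG")  # hypothetical AMP-like peptide
y = composition("ACDEFGHIK")
print(linear(x, y), poly2(x, y), rbf(x, x))  # rbf(x, x) is always 1.0
```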

  1. Preliminary research on the identification system for anthracnose and powdery mildew of sandalwood leaf based on image processing

    PubMed Central

    Wang, Xuefeng

    2017-01-01

This paper presents a survey of a system that uses digital image processing techniques to identify anthracnose and powdery mildew diseases of sandalwood from digital images. Our main objective is to identify the most suitable identification technology for anthracnose and powdery mildew diseases of the sandalwood leaf, which provides algorithmic support for real-time machine judgment of the health status and disease level of sandalwood. We conducted real-time monitoring of Hainan sandalwood leaves with varying severity levels of anthracnose and powdery mildew beginning in March 2014. We used image segmentation, feature extraction and digital image classification and recognition technology to carry out a comparative experimental study of the image analysis of powdery mildew, anthracnose and healthy leaves in the field. Testing on a large number of diseased leaves pointed to three conclusions: (1) Among the classical methods, the BP (Back Propagation) neural network distinguishes sandalwood leaf anthracnose and powdery mildew relatively well; the sizes of the detected lesion areas were closest to the actual ones. (2) The differences between the two diseases are shown well by the shape, color and texture features of the disease image. (3) Identification and diagnosis of the diseased leaves by SVM with a radial basis kernel function gave ideal results. The identification rate for both anthracnose and healthy leaves was 92%, and that for powdery mildew was 84%. This disease identification technology lays the foundation for remote-monitoring disease diagnosis, in preparation for remote transmission of disease images, and is a good guide and reference for further research on disease identification and diagnosis systems for sandalwood and other tree species. PMID:28749977

  2. Supervised classification of brain tissues through local multi-scale texture analysis by coupling DIR and FLAIR MR sequences

    NASA Astrophysics Data System (ADS)

    Poletti, Enea; Veronese, Elisa; Calabrese, Massimiliano; Bertoldo, Alessandra; Grisan, Enrico

    2012-02-01

The automatic segmentation of brain tissues in magnetic resonance (MR) is usually performed on T1-weighted images, due to their high spatial resolution. T1w sequence, however, has some major downsides when brain lesions are present: the altered appearance of diseased tissues causes errors in tissues classification. In order to overcome these drawbacks, we employed two different MR sequences: fluid attenuated inversion recovery (FLAIR) and double inversion recovery (DIR). The former highlights both gray matter (GM) and white matter (WM), the latter highlights GM alone. We propose here a supervised classification scheme that does not require any anatomical a priori information to identify the 3 classes, "GM", "WM", and "background". Features are extracted by means of a local multi-scale texture analysis, computed for each pixel of the DIR and FLAIR sequences. The 9 textures considered are average, standard deviation, kurtosis, entropy, contrast, correlation, energy, homogeneity, and skewness, evaluated on a neighborhood of 3×3, 5×5, and 7×7 pixels. Hence, the total number of features associated to a pixel is 56 (9 textures × 3 scales × 2 sequences + 2 original pixel values). The classifier employed is a Support Vector Machine with Radial Basis Function as kernel. From each of the 4 brain volumes evaluated, a DIR and a FLAIR slice have been selected and manually segmented by 2 expert neurologists, providing 1st and 2nd human reference observations which agree with an average accuracy of 99.03%. SVM performances have been assessed with a 4-fold cross-validation, yielding an average classification accuracy of 98.79%.
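The local multi-scale texture extraction can be sketched as follows (simplified to mean and standard deviation over one square neighborhood; the paper computes nine statistics at 3×3, 5×5 and 7×7 on both DIR and FLAIR):

```python
import math

# Per-pixel texture statistics over a square neighborhood (sketch).

def window(img, r, c, size):
    h = size // 2
    return [img[i][j]
            for i in range(max(0, r - h), min(len(img), r + h + 1))
            for j in range(max(0, c - h), min(len(img[0]), c + h + 1))]

def mean_std(vals):
    m = sum(vals) / len(vals)
    return m, math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
feats = []
for size in (3,):  # extend to (3, 5, 7) for multi-scale features
    feats.extend(mean_std(window(img, 1, 1, size)))
print(feats[0])  # mean of the full 3x3 window -> 5.0
```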

  3. Preliminary research on the identification system for anthracnose and powdery mildew of sandalwood leaf based on image processing.

    PubMed

    Wu, Chunyan; Wang, Xuefeng

    2017-01-01

This paper presents a survey of a system that uses digital image processing techniques to identify anthracnose and powdery mildew diseases of sandalwood from digital images. Our main objective is to identify the most suitable identification technology for anthracnose and powdery mildew diseases of the sandalwood leaf, which provides algorithmic support for real-time machine judgment of the health status and disease level of sandalwood. We conducted real-time monitoring of Hainan sandalwood leaves with varying severity levels of anthracnose and powdery mildew beginning in March 2014. We used image segmentation, feature extraction and digital image classification and recognition technology to carry out a comparative experimental study of the image analysis of powdery mildew, anthracnose and healthy leaves in the field. Testing on a large number of diseased leaves pointed to three conclusions: (1) Among the classical methods, the BP (Back Propagation) neural network distinguishes sandalwood leaf anthracnose and powdery mildew relatively well; the sizes of the detected lesion areas were closest to the actual ones. (2) The differences between the two diseases are shown well by the shape, color and texture features of the disease image. (3) Identification and diagnosis of the diseased leaves by SVM with a radial basis kernel function gave ideal results. The identification rate for both anthracnose and healthy leaves was 92%, and that for powdery mildew was 84%. This disease identification technology lays the foundation for remote-monitoring disease diagnosis, in preparation for remote transmission of disease images, and is a good guide and reference for further research on disease identification and diagnosis systems for sandalwood and other tree species.
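The decision rule of a trained RBF-kernel SVM of the kind used here can be sketched as follows (support vectors, dual coefficients and bias are made-up numbers, not fitted to the paper's data):

```python
import math

# Decision function of an RBF-kernel SVM (illustrative parameters).

def rbf(x, y, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def svm_decision(x, support_vectors, dual_coefs, bias, gamma=1.0):
    return sum(c * rbf(x, sv, gamma)
               for sv, c in zip(support_vectors, dual_coefs)) + bias

# Two made-up support vectors for "anthracnose" (+1) vs "healthy" (-1).
svs = [[0.8, 0.2], [0.1, 0.9]]
coefs = [1.0, -1.0]
label = 1 if svm_decision([0.7, 0.3], svs, coefs, bias=0.0) > 0 else -1
print(label)  # -> 1 (closer to the positive support vector)
```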

  4. Predicting radiologists' true and false positive decisions in reading mammograms by using gaze parameters and image-based features

    NASA Astrophysics Data System (ADS)

    Gandomkar, Ziba; Tay, Kevin; Ryder, Will; Brennan, Patrick C.; Mello-Thoms, Claudia

    2016-03-01

Radiologists' gaze-related parameters combined with image-based features were utilized to classify suspicious mammographic areas ultimately scored as True Positives (TP) and False Positives (FP). Eight breast radiologists read 120 two-view digital mammograms of which 59 had biopsy proven cancer. Eye tracking data was collected and nearby fixations were clustered together. Suspicious areas on mammograms were independently identified based on thresholding an intensity saliency map followed by automatic segmentation and pruning steps. For each radiologist reported area, radiologist's fixation clusters in the area, as well as neighboring suspicious areas within 2.5° of the center of fixation, were found. A 45-dimensional feature vector containing gaze parameters of the corresponding cluster along with image-based characteristics was constructed. Gaze parameters included total number of fixations in the cluster, dwell time, time to hit the cluster for the first time, maximum number of consecutive fixations, and saccade magnitude of the first fixation in the cluster. Image-based features consisted of intensity, shape, and texture descriptors extracted from the region around the suspicious area, its surrounding tissue, and the entire breast. For each radiologist, a user-specific Support Vector Machine (SVM) model was built to classify the reported areas as TPs or FPs. Leave-one-out cross validation was utilized to avoid over-fitting. A feature selection step was embedded in the SVM training procedure by allowing radial basis function kernels to have 45 scaling factors. The proposed method was compared with the radiologists' performance using the jackknife alternative free-response receiver operating characteristic (JAFROC). The JAFROC figure of merit increased significantly for six radiologists.
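The embedded feature selection relies on giving the RBF kernel one scaling factor per feature, so that a feature whose scale is driven to zero drops out of the kernel entirely. A minimal sketch (values illustrative):

```python
import math

# RBF kernel with per-feature scaling factors (automatic-relevance style).

def scaled_rbf(x, y, scales):
    return math.exp(-sum(s * (a - b) ** 2 for s, a, b in zip(scales, x, y)))

x, y = [1.0, 5.0, 2.0], [1.0, -5.0, 2.5]
# With the second feature's scale set to 0 it no longer affects similarity.
print(scaled_rbf(x, y, [1.0, 0.0, 1.0]))  # exp(-0.25)
```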

  5. Abnormal hubs of white matter networks in the frontal-parieto circuit contribute to depression discrimination via pattern classification.

    PubMed

    Qin, Jiaolong; Wei, Maobin; Liu, Haiyan; Chen, Jianhuai; Yan, Rui; Hua, Lingling; Zhao, Ke; Yao, Zhijian; Lu, Qing

    2014-12-01

Previous studies have explored the diagnostic and prognostic value of structural neuroimaging data in MDD, treating the whole-brain voxels, the fractional anisotropy and the structural connectivity as classification features. To the best of our knowledge, no study has examined the potential diagnostic value of the hubs of anatomical brain networks in MDD. The purpose of the current study was to provide an exploratory examination of the potential diagnostic and prognostic value of hubs of white matter brain networks in MDD discrimination, and of the corresponding impaired hub pattern, via a multi-pattern analysis. We constructed white matter brain networks from 29 patients with depression and 30 healthy controls based on diffusion tensor imaging data, calculated nodal measures and identified hubs. Using these measures as features, two feature architectures were established: one included only hubs (HUB) and the other contained both hubs and non-hubs. Support vector machine classifiers with a Gaussian radial basis kernel were used after feature selection. Moreover, the relative contribution of the features was estimated by means of the consensus features. Our results show that the hubs (including the bilateral dorsolateral part of the superior frontal gyrus, the left middle frontal gyrus, the bilateral middle temporal gyrus, and the bilateral inferior temporal gyrus) played an important role in distinguishing the patients with depression from healthy controls, with a best accuracy of 83.05%. Moreover, most of the HUB consensus features were located in the frontal-parieto circuit. These findings provide evidence that the hubs could serve as a valuable potential diagnostic measure for MDD, and that the hub-concentrated lesion distribution of MDD is primarily anchored within the frontal-parieto circuit. Copyright © 2014 Elsevier Inc. All rights reserved.
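Hub identification from nodal degree can be sketched as follows (the "degree more than one standard deviation above the mean" criterion is a common convention, assumed here rather than taken from the paper):

```python
# Degree-based hub identification in a binary brain network (sketch).

def hubs(adjacency):
    deg = [sum(row) for row in adjacency]
    n = len(deg)
    mean = sum(deg) / n
    sd = (sum((d - mean) ** 2 for d in deg) / n) ** 0.5
    return [i for i, d in enumerate(deg) if d > mean + sd]

A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
print(hubs(A))  # node 0 has the highest degree -> [0]
```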

  6. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.
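The ensemble-surrogate idea can be sketched as follows (the stand-in models, weights and error values are made up; the paper's actual combination rule may differ): the two most accurate surrogates are combined by inverse-error weighted averaging, and the optimizer then queries the cheap ensemble instead of the simulation model.

```python
# Inverse-error weighted ensemble of surrogate models (illustrative sketch).

def ensemble_predict(x, models, errors):
    inv = [1.0 / e for e in errors]          # weight by inverse validation error
    weights = [v / sum(inv) for v in inv]
    return sum(w * m(x) for w, m in zip(weights, models))

def kelm(x):   # stand-in for a trained kernel extreme learning machine
    return 0.90 * x

def krg(x):    # stand-in for a trained kriging model
    return 0.80 * x

pred = ensemble_predict(10.0, [kelm, krg], errors=[0.01, 0.03])
print(round(pred, 6))  # 0.75 * 9.0 + 0.25 * 8.0 -> 8.75
```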

  7. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  8. A support vector machine for predicting defibrillation outcomes from waveform metrics.

    PubMed

    Howe, Andrew; Escalona, Omar J; Di Maio, Rebecca; Massot, Bertrand; Cromie, Nick A; Darragh, Karen M; Adgey, Jennifer; McEneaney, David J

    2014-03-01

Algorithms to predict shock success based on VF waveform metrics could significantly enhance resuscitation by optimising the timing of defibrillation. To investigate robust methods of predicting defibrillation success in VF cardiac arrest patients, a support vector machine (SVM) optimisation approach was used. Frequency-domain (AMSA, dominant frequency and median frequency) and time-domain (slope and RMS amplitude) VF waveform metrics were calculated in a 4.1-s window prior to defibrillation. Conventional prediction test validity of each waveform parameter was conducted, using AUC>0.6 as the criterion for inclusion as a corroborative attribute processed by the SVM classification model. The latter used a Gaussian radial-basis-function (RBF) kernel, and the error penalty factor C was fixed to 1. A two-fold cross-validation resampling technique was employed. A total of 41 patients had 115 defibrillation instances. The AMSA, slope and RMS waveform metrics passed test validation with AUC>0.6 for predicting termination of VF and return to organised rhythm. The predictive accuracy of the optimised SVM design for termination of VF was 81.9% (± 1.24 SD); positive and negative predictivity were respectively 84.3% (± 1.98 SD) and 77.4% (± 1.24 SD); sensitivity and specificity were 87.6% (± 2.69 SD) and 71.6% (± 9.38 SD) respectively. AMSA, slope and RMS were the best VF waveform frequency- and time-domain predictors of termination of VF according to the test validity assessment. This a priori information can be used for a simplified SVM optimised design that combines the predictive attributes of these VF waveform metrics for improved prediction accuracy and generalisation performance, without requiring the definition of any threshold value on waveform metrics. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
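AMSA (amplitude spectrum area), one of the frequency-domain metrics used here, weights the amplitude spectrum by frequency and sums it over a band. A self-contained sketch (the band limits and sampling rate are illustrative, not taken from the paper):

```python
import math

# AMSA sketch: frequency-weighted sum of the amplitude spectrum over a band.

def amsa(signal, fs, f_lo=4.0, f_hi=48.0):
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(signal))
            amp = 2 * math.sqrt(re * re + im * im) / n
            total += f * amp
    return total

fs = 100.0
sig = [math.sin(2 * math.pi * 10.0 * t / fs) for t in range(100)]  # 10 Hz tone
print(round(amsa(sig, fs), 1))  # unit amplitude at 10 Hz -> 10.0
```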

  9. Effect of normalization methods on the performance of supervised learning algorithms applied to HTSeq-FPKM-UQ data sets: 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma.

    PubMed

    Shahriyari, Leili

    2017-11-03

One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on analyzing FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on it. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 different normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing accuracy as well as minimizing fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%. However, it led to the discovery of 7SK RNA expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
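The three normalization methods compared can be sketched for a single vector (applied per file or per gene, depending on the strategy): min-max scaling, z-score standardization, and unit-length (vector) normalization.

```python
import math

# The three normalization methods, each applied to one vector (sketch).

def min_max(v):
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) for x in v]

def z_score(v):
    m = sum(v) / len(v)
    s = math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return [(x - m) / s for x in v]

def unit_length(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

v = [2.0, 4.0, 6.0]
print(min_max(v))                                    # [0.0, 0.5, 1.0]
print(round(sum(x * x for x in unit_length(v)), 6))  # squared norm -> 1.0
```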

  10. Improving Classification of Cancer and Mining Biomarkers from Gene Expression Profiles Using Hybrid Optimization Algorithms and Fuzzy Support Vector Machine

    PubMed Central

    Moteghaed, Niloofar Yousefi; Maghooli, Keivan; Garshasbi, Masoud

    2018-01-01

    Background: Gene expression data are characteristically high dimensional with a small sample size in contrast to the feature size and variability inherent in biological processes that contribute to difficulties in analysis. Selection of highly discriminative features decreases the computational cost and complexity of the classifier and improves its reliability for prediction of a new class of samples. Methods: The present study used hybrid particle swarm optimization and genetic algorithms for gene selection and a fuzzy support vector machine (SVM) as the classifier. Fuzzy logic is used to infer the importance of each sample in the training phase and decrease the outlier sensitivity of the system to increase the ability to generalize the classifier. A decision-tree algorithm was applied to the most frequent genes to develop a set of rules for each type of cancer. This improved the abilities of the algorithm by finding the best parameters for the classifier during the training phase without the need for trial-and-error by the user. The proposed approach was tested on four benchmark gene expression profiles. Results: Good results have been demonstrated for the proposed algorithm. The classification accuracy for leukemia data is 100%, for colon cancer is 96.67% and for breast cancer is 98%. The results show that the best kernel used in training the SVM classifier is the radial basis function. Conclusions: The experimental results show that the proposed algorithm can decrease the dimensionality of the dataset, determine the most informative gene subset, and improve classification accuracy using the optimal parameters of the classifier with no user interface. PMID:29535919

  11. Insights into multimodal imaging classification of ADHD

    PubMed Central

    Colby, John B.; Rudie, Jeffrey D.; Brown, Jesse A.; Douglas, Pamela K.; Cohen, Mark S.; Shehzad, Zarrar

    2012-01-01

    Attention deficit hyperactivity disorder (ADHD) currently is diagnosed in children by clinicians via subjective ADHD-specific behavioral instruments and by reports from the parents and teachers. Considering its high prevalence and large economic and societal costs, a quantitative tool that aids in diagnosis by characterizing underlying neurobiology would be extremely valuable. This provided motivation for the ADHD-200 machine learning (ML) competition, a multisite collaborative effort to investigate imaging classifiers for ADHD. Here we present our ML approach, which used structural and functional magnetic resonance imaging data, combined with demographic information, to predict diagnostic status of individuals with ADHD from typically developing (TD) children across eight different research sites. Structural features included quantitative metrics from 113 cortical and non-cortical regions. Functional features included Pearson correlation functional connectivity matrices, nodal and global graph theoretical measures, nodal power spectra, voxelwise global connectivity, and voxelwise regional homogeneity. We performed feature ranking for each site and modality using the multiple support vector machine recursive feature elimination (SVM-RFE) algorithm, and feature subset selection by optimizing the expected generalization performance of a radial basis function kernel SVM (RBF-SVM) trained across a range of the top features. Site-specific RBF-SVMs using these optimal feature sets from each imaging modality were used to predict the class labels of an independent hold-out test set. A voting approach was used to combine these multiple predictions and assign final class labels. With this methodology we were able to predict diagnosis of ADHD with 55% accuracy (versus a 39% chance level in this sample), 33% sensitivity, and 80% specificity. 
This approach also allowed us to evaluate predictive structural and functional features giving insight into abnormal brain circuitry in ADHD. PMID:22912605
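The final combination step, where each site- and modality-specific RBF-SVM casts a vote and the majority label is assigned, can be sketched as:

```python
from collections import Counter

# Majority voting over per-site, per-modality classifier predictions.

def majority_vote(predictions):
    return Counter(predictions).most_common(1)[0][0]

votes = ["ADHD", "TD", "ADHD", "ADHD", "TD"]  # made-up classifier outputs
print(majority_vote(votes))  # -> ADHD
```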

  12. A Transcriptional Signature of Fatigue Derived from Patients with Primary Sjögren's Syndrome.

    PubMed

    James, Katherine; Al-Ali, Shereen; Tarn, Jessica; Cockell, Simon J; Gillespie, Colin S; Hindmarsh, Victoria; Locke, James; Mitchell, Sheryl; Lendrem, Dennis; Bowman, Simon; Price, Elizabeth; Pease, Colin T; Emery, Paul; Lanyon, Peter; Hunter, John A; Gupta, Monica; Bombardieri, Michele; Sutcliffe, Nurhan; Pitzalis, Costantino; McLaren, John; Cooper, Annie; Regan, Marian; Giles, Ian; Isenberg, David; Saravanan, Vadivelu; Coady, David; Dasgupta, Bhaskar; McHugh, Neil; Young-Min, Steven; Moots, Robert; Gendi, Nagui; Akil, Mohammed; Griffiths, Bridget; Wipat, Anil; Newton, Julia; Jones, David E; Isaacs, John; Hallinan, Jennifer; Ng, Wan-Fai

    2015-01-01

Fatigue is a debilitating condition with a significant impact on patients' quality of life. Fatigue is frequently reported by patients suffering from primary Sjögren's Syndrome (pSS), a chronic autoimmune condition characterised by dryness of the eyes and the mouth. However, although fatigue is common in pSS, it does not manifest in all sufferers, providing an excellent model with which to explore the potential underpinning biological mechanisms. Whole blood samples from 133 fully-phenotyped pSS patients stratified for the presence of fatigue, collected by the UK primary Sjögren's Syndrome Registry, were used for whole genome microarray. The resulting data were analysed both on a gene-by-gene basis and using pre-defined groups of genes. Finally, gene set enrichment analysis (GSEA) was used as a feature selection technique for input into a support vector machine (SVM) classifier. Classification was assessed using the area under the receiver operating characteristic curve (AUC) and the standard error of the Wilcoxon statistic, SE(W). Although no genes were individually found to be associated with fatigue, 19 metabolic pathways were enriched in the high-fatigue patient group using GSEA. Analysis revealed that these enrichments arose from the presence of a subset of 55 genes. A radial kernel SVM classifier with this subset of genes as input displayed significantly improved performance over classifiers using all pathway genes as input. The classifiers had AUCs of 0.866 (SE(W) 0.002) and 0.525 (SE(W) 0.006), respectively. Systematic analysis of gene expression data from pSS patients discordant for fatigue identified 55 genes which are predictive of fatigue level using SVM classification. This list represents the first step in understanding the underlying pathophysiological mechanisms of fatigue in patients with pSS.
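The AUC statistic used to assess the classifiers can be computed as the Wilcoxon-Mann-Whitney estimate, i.e. the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the scores below are made up):

```python
# Wilcoxon-Mann-Whitney estimate of the AUC (ties count half).

def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3]))  # 5 of 6 pairs ordered correctly
```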

  13. Evaluation of extreme learning machine for classification of individual and combined finger movements using electromyography on amputees and non-amputees.

    PubMed

    Anam, Khairul; Al-Jumaily, Adel

    2017-01-01

    The success of myoelectric pattern recognition (M-PR) mostly relies on the features extracted and the classifier employed. This paper proposes and evaluates a fast classifier, the extreme learning machine (ELM), to classify individual and combined finger movements in amputees and non-amputees. ELM is a single hidden layer feed-forward network (SLFN) that avoids iterative learning by determining input weights randomly and output weights analytically. Therefore, it can accelerate the training time of SLFNs. In addition to the classifier evaluation, this paper evaluates various feature combinations to improve the performance of M-PR and investigates some feature projections to improve the class separability of the features. Different from other studies on the implementation of ELM in myoelectric controllers, this paper presents a complete and thorough investigation of various types of ELMs, including node-based and kernel-based ELM. Furthermore, this paper provides comparisons of ELMs and other well-known classifiers such as linear discriminant analysis (LDA), k-nearest neighbour (kNN), support vector machine (SVM) and least-square SVM (LS-SVM). The experimental results show the most accurate ELM classifier is the radial basis function ELM (RBF-ELM). The comparison of RBF-ELM and other well-known classifiers shows that RBF-ELM is as accurate as SVM and LS-SVM but faster than the SVM family; it is superior to LDA and kNN. The experimental results also indicate that the accuracy gap of the M-PR between amputees and non-amputees is small, with accuracies of 98.55% on amputees and 99.5% on non-amputees using six electromyography (EMG) channels. Copyright © 2016 Elsevier Ltd. All rights reserved.
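    A minimal sketch of a kernel-based ELM of the kind evaluated above: the output weights are obtained analytically in one step rather than by iterative training. The data, RBF width and regularisation constant below are illustrative assumptions, not the paper's EMG features or settings.

```python
# Hedged sketch of a kernel ELM (RBF-ELM): output weights beta are
# solved analytically from (I/C + K) beta = T, with K an RBF Gram matrix.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))                 # stand-in feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in class labels
T = np.eye(2)[y]                             # one-hot targets

C = 10.0                                     # regularisation constant
K = rbf_kernel(X, X)
beta = np.linalg.solve(np.eye(len(X)) / C + K, T)   # analytic solution

pred = rbf_kernel(X, X) @ beta               # training-set predictions
print("train accuracy:", (pred.argmax(1) == y).mean())
```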

  14. Classification of interstitial lung disease patterns with topological texture features

    NASA Astrophysics Data System (ADS)

    Huber, Markus B.; Nagarajan, Mahesh; Leinsinger, Gerda; Ray, Lawrence A.; Wismüller, Axel

    2010-03-01

    Topological texture features were compared in their ability to classify morphological patterns known as 'honeycombing' that are considered indicative for the presence of fibrotic interstitial lung diseases in high-resolution computed tomography (HRCT) images. For 14 patients with known occurrence of honeycombing, a stack of 70 axial, lung-kernel-reconstructed images was acquired from HRCT chest exams. A set of 241 regions of interest of healthy and pathological (89) lung tissue was identified by an experienced radiologist. Texture features were extracted using six properties calculated from gray-level co-occurrence matrices (GLCM), Minkowski Dimensions (MDs), and three Minkowski Functionals (MFs, e.g. MF.euler). A k-nearest-neighbor (k-NN) classifier and a Multilayer Radial Basis Functions Network (RBFN) were optimized in a 10-fold cross-validation for each texture vector, and the classification accuracy was calculated on independent test sets as a quantitative measure of automated tissue characterization. A Wilcoxon signed-rank test was used to compare two accuracy distributions and the significance thresholds were adjusted for multiple comparisons by the Bonferroni correction. The best classification results were obtained by the MF features, which performed significantly better than all the standard GLCM and MD features (p < 0.005) for both classifiers. The highest accuracy was found for MF.euler (97.5%, 96.6%; for the k-NN and RBFN classifier, respectively). The best standard texture features were the GLCM features 'homogeneity' (91.8%, 87.2%) and 'absolute value' (90.2%, 88.5%). The results indicate that advanced topological texture features can provide superior classification performance in computer-assisted diagnosis of interstitial lung diseases when compared to standard texture analysis methods.
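    As a toy illustration of one of the standard GLCM features named above, the snippet below computes 'homogeneity' from a gray-level co-occurrence matrix for a horizontal pixel offset. It is a minimal sketch, not the study's implementation.

```python
# Hedged sketch: build a normalised GLCM for offset (dx, dy) and
# compute the homogeneity feature sum_{i,j} p(i,j) / (1 + |i - j|).
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[img[i, j], img[i + dy, j + dx]] += 1
    return g / g.sum()                       # normalise to probabilities

def homogeneity(g):
    i, j = np.indices(g.shape)
    return (g / (1.0 + np.abs(i - j))).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print("homogeneity:", homogeneity(glcm(img, levels=4)))
```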

  15. Identification of type 2 diabetes-associated combination of SNPs using support vector machine.

    PubMed

    Ban, Hyo-Jeong; Heo, Jee Yeon; Oh, Kyung-Soo; Park, Keun-Joon

    2010-04-23

    Type 2 diabetes mellitus (T2D), a metabolic disorder characterized by insulin resistance and relative insulin deficiency, is a complex disease of major public health importance. Its incidence is rapidly increasing in the developed countries. Complex diseases are caused by interactions between multiple genes and environmental factors. Most association studies aim to identify individual susceptibility markers using a simple disease model. Recent studies attempt to estimate the effects of multiple genes and multiple loci in genome-wide association studies. However, estimating the effects of association is very difficult. We aim to assess the rules for classifying diseased and normal subjects by evaluating potential gene-gene interactions in the same or distinct biological pathways. We analyzed the importance of gene-gene interactions in T2D susceptibility by investigating 408 single nucleotide polymorphisms (SNPs) in 87 genes involved in major T2D-related pathways in 462 T2D patients and 456 healthy controls from the Korean cohort studies. We evaluated the support vector machine (SVM) method to differentiate between cases and controls using SNP information in a 10-fold cross-validation test. We achieved a 65.3% prediction rate with a combination of 14 SNPs in 12 genes by using the radial basis function (RBF)-kernel SVM. Similarly, we investigated subpopulation data sets of men and women and identified different SNP combinations with prediction rates of 70.9% and 70.6%, respectively. As the high-throughput technology for genome-wide SNPs improves, it is likely that a much higher prediction rate with biologically more interesting combinations of SNPs can be acquired by using this method. The SVM-based feature selection method used in this research found novel associations between combinations of SNPs and T2D in a Korean population.

  16. Development and Testing of a Radial Halbach Magnetic Bearing

    NASA Technical Reports Server (NTRS)

    Eichenberg, Dennis J.; Gallo, Christopher A.; Thompson, William K.

    2006-01-01

    The NASA John H. Glenn Research Center has developed and tested a revolutionary Radial Halbach Magnetic Bearing. The objective of this work is to develop a viable non-contact magnetic bearing utilizing Halbach arrays for all-electric flight, and many other applications. This concept will help reduce harmful emissions, reduce the Nation's dependence on fossil fuels and mitigate many of the concerns and limitations encountered in conventional axial bearings such as bearing wear, leaks, seals and friction loss. The Radial Halbach Magnetic Bearing is inherently stable and requires no active feedback control system or superconductivity as required in many magnetic bearing designs. The Radial Halbach Magnetic Bearing is useful for very high speed applications including turbines, instrumentation, medical applications, manufacturing equipment, and space power systems such as flywheels. Magnetic fields suspend and support a rotor assembly within a stator. Advanced technologies developed for particle accelerators, and currently under development for maglev trains and rocket launchers, served as the basis for this application. Experimental hardware was successfully designed and developed to validate the basic principles and analyses. The report concludes that the implementation of Radial Halbach Magnetic Bearings can provide significant improvements in rotational system performance and reliability.

  17. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
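    The two discretisation schemes compared above can be sketched for a 1-D Gaussian dispersal kernel: sampling the density at each cell centre versus integrating the density over each cell (via the error function). This is an illustrative reconstruction, not the published code, and the σ values are chosen only to mirror the small-kernel regime discussed above.

```python
# Hedged sketch: cell-center vs cell-integration discretisation of a
# 1-D Gaussian dispersal kernel on a unit grid.
import numpy as np
from math import erf, sqrt, exp

def cell_center(sigma, half_width):
    xs = np.arange(-half_width, half_width + 1)
    k = np.array([exp(-x * x / (2 * sigma ** 2)) for x in xs])
    return k / k.sum()

def cell_integrate(sigma, half_width):
    def cdf(x):
        return 0.5 * (1 + erf(x / (sigma * sqrt(2))))
    xs = np.arange(-half_width, half_width + 1)
    k = np.array([cdf(x + 0.5) - cdf(x - 0.5) for x in xs])
    return k / k.sum()

# For sigma small relative to the grid the two schemes diverge,
# mirroring the large errors reported above for sub-cell kernels.
for sigma in (0.2, 2.0):
    cc = cell_center(sigma, 5)
    ci = cell_integrate(sigma, 5)
    print(sigma, np.abs(cc - ci).max())
```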

  18. Anthraquinones isolated from the browned Chinese chestnut kernels (Castanea mollissima blume)

    NASA Astrophysics Data System (ADS)

    Zhang, Y. L.; Qi, J. H.; Qin, L.; Wang, F.; Pang, M. X.

    2016-08-01

    Anthraquinones (AQS) represent a group of secondary metabolic products in plants. AQS occur naturally in plants and microorganisms. In a previous study, we found that AQS were produced by an enzymatic browning reaction in Chinese chestnut kernels. To determine whether a non-enzymatic browning reaction in the kernels could also produce AQS, AQS were extracted from three groups of chestnut kernels: fresh kernels, non-enzymatically browned kernels, and browned kernels, and the contents of AQS were determined. High performance liquid chromatography (HPLC) and nuclear magnetic resonance (NMR) methods were used to identify two AQS compounds, rhein (1) and emodin (2). AQS were barely present in the fresh kernels, while both browned kernel groups contained high amounts of AQS. Thus, we confirmed that AQS could be produced during both enzymatic and non-enzymatic browning processes. Rhein and emodin were the main components of AQS in the browned kernels.

  19. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    PubMed

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments with broken kernel ratios of 0, 40, 150, 350 or 1000 g kg(-1). Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) but the unbroken kernels became significantly harder. Moisture content and moisture uptake rate were positively correlated, and cooked rice hardness was negatively correlated with the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  20. Graph Theory Roots of Spatial Operators for Kinematics and Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan

    2011-01-01

    Spatial operators have been used to analyze the dynamics of robotic multibody systems and to develop novel computational dynamics algorithms. Mass matrix factorization, inversion, diagonalization, and linearization are among several new insights obtained using such operators. While initially developed for serial rigid body manipulators, the spatial operators and the related mathematical analysis have been shown to extend very broadly, including to tree and closed topology systems, and to systems with flexible joints, links, etc. This work uses concepts from graph theory to explore the mathematical foundations of spatial operators. The goal is to study and characterize the properties of the spatial operators at an abstract level so that they can be applied to a broader range of dynamics problems. The rich mathematical properties of the kinematics and dynamics of robotic multibody systems have been an area of strong research interest for several decades. These properties are important for understanding the inherent physical behavior of systems, for stability and control analysis, for the development of computational algorithms, and for the development of faithful models. Recurring patterns in spatial operators lead one to ask the more abstract question about the properties and characteristics of spatial operators that make them so broadly applicable. The idea is to step back from the specific application systems, and understand more deeply the generic requirements and properties of spatial operators, so that the insights and techniques are readily available across different kinematics and dynamics problems. In this work, techniques from graph theory were used to explore the abstract basis for the spatial operators. The close relationship between the mathematical properties of adjacency matrices for graphs and those of spatial operators and their kernels was established. 
The connections hold across very basic requirements on the system topology, the nature of the component bodies, the indexing schemes, etc. The relationship of the underlying structure is intimately connected with efficient, recursive computational algorithms. The results provide the foundational groundwork for a much broader look at the key problems in kinematics and dynamics. The properties of general graphs and trees of nodes and edges were examined, as well as the properties of adjacency matrices that are used to describe graph connectivity. The nilpotency property of such matrices for directed trees was reviewed, and the adjacency matrices were generalized to the notion of block weighted adjacency matrices that support block matrix elements. This leads to the development of the notion of Spatial Kernel Operator (SKO) kernels. These kernels provide the basis for the development of SKO resolvent operators.
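    The nilpotency property reviewed above is easy to check numerically: the adjacency matrix of a directed tree with n nodes satisfies A^n = 0. The snippet below is a toy illustration with a hypothetical 5-body tree, not an example from the paper.

```python
# Hedged sketch: the adjacency matrix of a directed tree is nilpotent.
import numpy as np

# A hypothetical 5-body tree: map each child body to its parent
parent = {1: 0, 2: 1, 3: 1, 4: 3}
n = 5
A = np.zeros((n, n))
for child, par in parent.items():
    A[par, child] = 1          # entry (parent, child) marks the edge

# Powers of A eventually vanish: A^n == 0 for an n-node directed tree
An = np.linalg.matrix_power(A, n)
print("nilpotent:", np.all(An == 0))
```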

  1. Surrogate models for sheet metal stamping problem based on the combination of proper orthogonal decomposition and radial basis function

    NASA Astrophysics Data System (ADS)

    Dang, Van Tuan; Lafon, Pascal; Labergere, Carl

    2017-10-01

    In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) is proposed to build a surrogate model based on the Benchmark Springback 3D bending problem from the Numisheet2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. The classical Design of Experiments (DoE) uses a full factorial design to cover the parameter space with sample points as input data for finite element method (FEM) numerical simulation of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resultant high-fidelity model is reduced through the use of the POD method, which performs model space reduction and yields the basis functions of the low order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviations of the matrix of final displacement fields of the FEM numerical simulation. The obtained basis functions are then used to determine the POD coefficients, and RBF is used for the interpolation of these POD coefficients over the parameter space. Finally, the presented POD-RBF approach can be used for shape optimization with high accuracy.
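    The POD-RBF idea above can be sketched on synthetic snapshot data: an SVD extracts a low-order basis, and a Gaussian RBF interpolates the POD coefficients over a (here 1-D) parameter space. The snapshot fields, basis size and RBF shape parameter are illustrative assumptions, not the stamping simulation's data.

```python
# Hedged sketch of POD + RBF interpolation on synthetic snapshots.
import numpy as np

params = np.linspace(0.0, 1.0, 9)            # stand-in design parameter
x = np.linspace(0, 1, 50)
# Hypothetical "displacement fields", one snapshot per parameter value
snapshots = np.array([np.sin(2 * np.pi * (x + p)) for p in params]).T

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                        # truncated POD basis size
basis = U[:, :r]
coeffs = basis.T @ snapshots                 # POD coefficients (r x 9)

def rbf_interp(p_new, eps=4.0):
    phi = lambda a, b: np.exp(-(eps * (a - b)) ** 2)
    G = phi(params[:, None], params[None, :])
    w = np.linalg.solve(G, coeffs.T)          # RBF weights per mode
    return basis @ (phi(p_new, params) @ w)

# Reconstruct the field at an unseen parameter value
approx = rbf_interp(0.35)
exact = np.sin(2 * np.pi * (x + 0.35))
print("max error:", np.abs(approx - exact).max())
```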

  2. Nonlinear Deep Kernel Learning for Image Annotation.

    PubMed

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each involving a combination of several elementary or intermediate kernels, resulting in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show a clear gain over several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.
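    A schematic two-layer "deep kernel" in the spirit of the networks described above (not the authors' exact construction): layer one forms a weighted combination of elementary kernels, and layer two applies an element-wise exponential, which preserves positive semi-definiteness by the Schur product theorem. The weights and data here are illustrative; in the paper they would be learned.

```python
# Hedged sketch: a two-layer combination of elementary kernels with a
# PSD-preserving element-wise activation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))

def linear_k(X):
    return X @ X.T

def rbf_k(X, gamma=0.5):
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

w1, w2 = 0.3, 0.7                      # learned in practice; fixed here
K1 = w1 * linear_k(X) + w2 * rbf_k(X)  # layer 1: convex combination
K_deep = np.exp(K1)                    # layer 2: element-wise exp is PSD-preserving

# PSD check: eigenvalues are (numerically) non-negative
print("min eigenvalue:", np.linalg.eigvalsh(K_deep).min())
```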

  3. Multineuron spike train analysis with R-convolution linear combination kernel.

    PubMed

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
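    The linear-combination idea above can be illustrated by extending a single-neuron spike train kernel to multineuron data as a weighted sum over neurons. The base kernel below (a Gaussian overlap between spike times) is a simple stand-in, not the paper's kernel, and the spike trains and weights are invented.

```python
# Hedged sketch: multineuron kernel as a weighted sum of per-neuron
# single-neuron spike train kernels.
import numpy as np

def single_neuron_kernel(s, t, tau=0.05):
    s, t = np.asarray(s, dtype=float), np.asarray(t, dtype=float)
    if s.size == 0 or t.size == 0:
        return 0.0
    d2 = (s[:, None] - t[None, :]) ** 2
    return np.exp(-d2 / (2 * tau ** 2)).sum()

def multineuron_kernel(S, T, weights):
    # S, T: lists of spike-time arrays, one per neuron
    return sum(w * single_neuron_kernel(s, t)
               for w, s, t in zip(weights, S, T))

S = [[0.10, 0.32], [0.20]]             # hypothetical 2-neuron trains
T = [[0.11, 0.30], [0.55]]
print(multineuron_kernel(S, T, weights=[0.6, 0.4]))
```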

  4. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    NASA Astrophysics Data System (ADS)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

    The purpose of this study was to determine the performance, productivity and feasibility of the operation of a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of energy output, including by-products, to energy input. A palm kernel plant processes palm kernels into palm kernel oil. The procedure started with collecting the data needed as energy input, such as palm kernel prices, energy demand and depreciation of the factory. The energy output and its by-products comprise the whole production value, such as the palm kernel oil price and the prices of the remaining products, shells and pulp. The energy equivalence of palm kernel oil was calculated to analyze the value of the Energy Productivity Ratio (EPR) based on processing capacity per year. The investigation was carried out at the Kernel Oil Processing Plant PT-X at the Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicated that processing palm kernels into palm kernel oil is feasible to operate on the basis of energy productivity.
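    The EPR computation described above reduces to a ratio of energy-equivalent output (product plus by-products) to input. The sketch below uses invented placeholder figures, not the study's data.

```python
# Hedged sketch of the EPR ratio; all numbers are hypothetical.
def energy_productivity_ratio(output_value, byproduct_value, input_value):
    return (output_value + byproduct_value) / input_value

epr = energy_productivity_ratio(900.0,    # kernel oil output (made up)
                                250.0,    # shell and pulp by-products (made up)
                                800.0)    # kernels, energy, depreciation (made up)
print("EPR:", epr, "-> feasible" if epr > 1 else "-> not feasible")
```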

  5. Numerical simulation of intelligent compaction technology for construction quality control.

    DOT National Transportation Integrated Search

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...

  6. Prediction of cis/trans isomerization in proteins using PSI-BLAST profiles and secondary structure information.

    PubMed

    Song, Jiangning; Burrage, Kevin; Yuan, Zheng; Huber, Thomas

    2006-03-09

    The majority of peptide bonds in proteins are found to occur in the trans conformation. However, for proline residues, a considerable fraction of prolyl peptide bonds adopt the cis form. Proline cis/trans isomerization is known to play a critical role in protein folding, splicing, cell signaling and transmembrane active transport. Accurate prediction of proline cis/trans isomerization in proteins would have many important applications towards the understanding of protein structure and function. In this paper, we propose a new approach to predict proline cis/trans isomerization in proteins using a support vector machine (SVM). The preliminary results indicated that using Radial Basis Function (RBF) kernels could lead to better prediction performance than that of polynomial and linear kernel functions. We used single sequence information of different local window sizes, amino acid compositions of different local sequences, multiple sequence alignments obtained from PSI-BLAST and the secondary structure information predicted by PSIPRED. We explored these different sequence encoding schemes in order to investigate their effects on the prediction performance. The training and testing of this approach were performed on a newly enlarged dataset of 2424 non-homologous proteins determined by the X-ray diffraction method using 5-fold cross-validation. Selecting a window size of 11 provided the best performance for determining proline cis/trans isomerization based on the single amino acid sequence. It was found that using multiple sequence alignments in the form of PSI-BLAST profiles could significantly improve the prediction performance: the prediction accuracy increased from 62.8% with the single sequence to 69.8%, and the Matthews Correlation Coefficient (MCC) improved from 0.26 with the single local sequence to 0.40. 
Furthermore, when coupled with the secondary structure information predicted by PSIPRED, our method yielded a prediction accuracy of 71.5% and an MCC of 0.43, 9% and 0.17 higher, respectively, than those achieved with single-sequence information. A new method has been developed to predict proline cis/trans isomerization in proteins based on support vector machines, which used the single amino acid sequence with different local window sizes, the amino acid compositions of local sequences flanking centered proline residues, the position-specific scoring matrices (PSSMs) extracted by PSI-BLAST and the predicted secondary structures generated by PSIPRED. The successful application of the SVM approach in this study reinforced that SVM is a powerful tool in predicting proline cis/trans isomerization in proteins and in biological sequence analysis.
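    The local-window encoding used above as SVM input can be sketched as a one-hot encoding of the residues in a window centred on each proline site (a window size of 11 performed best in the study). The sequence and position below are hypothetical, and this is only one common way to realise such an encoding.

```python
# Hedged sketch: one-hot encoding of an 11-residue window around a
# proline site, yielding an 11 x 20 = 220-dimensional feature vector.
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def encode_window(seq, center, w=11):
    half = w // 2
    vec = np.zeros((w, len(AMINO)))
    for k, pos in enumerate(range(center - half, center + half + 1)):
        if 0 <= pos < len(seq) and seq[pos] in AMINO:
            vec[k, AMINO.index(seq[pos])] = 1.0
    return vec.ravel()

# Hypothetical sequence; position 13 is a proline
x = encode_window("MKTAYIAKQRQISPVK", center=13)
print(x.shape, int(x.sum()))   # positions past the end stay all-zero
```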

  7. Textural analysis of early-phase spatiotemporal changes in contrast enhancement of breast lesions imaged with an ultrafast DCE-MRI protocol.

    PubMed

    Milenković, Jana; Dalmış, Mehmet Ufuk; Žgajnar, Janez; Platel, Bram

    2017-09-01

    New ultrafast view-sharing sequences have enabled breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to be performed at high spatial and temporal resolution. The aim of this study is to evaluate the diagnostic potential of textural features that quantify the spatiotemporal changes of the contrast-agent uptake in computer-aided diagnosis of malignant and benign breast lesions imaged with high spatial and temporal resolution DCE-MRI. The proposed approach is based on the textural analysis quantifying the spatial variation of six dynamic features of the early-phase contrast-agent uptake of a lesion's largest cross-sectional area. The textural analysis is performed by means of the second-order gray-level co-occurrence matrix, gray-level run-length matrix and gray-level difference matrix. This yields 35 textural features to quantify the spatial variation of each of the six dynamic features, providing a feature set of 210 features in total. The proposed feature set is evaluated based on receiver operating characteristic (ROC) curve analysis in a cross-validation scheme for random forests (RF) and two support vector machine classifiers, with linear and radial basis function (RBF) kernel. Evaluation is done on a dataset with 154 breast lesions (83 malignant and 71 benign) and compared to a previous approach based on 3D morphological features and the average and standard deviation of the same dynamic features over the entire lesion volume as well as their average for the smaller region of the strongest uptake rate. The area under the ROC curve (AUC) obtained by the proposed approach with the RF classifier was 0.8997, which was significantly higher (P = 0.0198) than the performance achieved by the previous approach (AUC = 0.8704) on the same dataset. 
Similarly, the proposed approach obtained significantly higher results for both SVM classifiers with RBF (P = 0.0096) and linear kernels (P = 0.0417), obtaining AUCs of 0.8876 and 0.8548, respectively, compared with AUCs of 0.8562 and 0.8311 for the previous approach. The proposed approach based on 2D textural features quantifying spatiotemporal changes of the contrast-agent uptake significantly outperforms the previous approach based on 3D morphology and dynamic analysis in differentiating malignant and benign breast lesions, showing its potential to aid clinical decision making. © 2017 American Association of Physicists in Medicine.

  8. Cooling characteristics of air cooled radial turbine blades

    NASA Astrophysics Data System (ADS)

    Sato, T.; Takeishi, K.; Matsuura, M.; Miyauchi, J.

    The cooling design and the cooling characteristics of air cooled radial turbine wheels, which are designed for use with the gas generator turbine of the 400-horsepower truck gas turbine engine, are presented. A high temperature and high speed test was performed under aerodynamically similar conditions to those of the prototype engine in order to confirm the metal temperature of the newly developed integrally cast wheels constructed of the superalloy INCO 713C. The test results, compared with the analytical values established on the basis of the results of the heat transfer test and the water flow test, are discussed.

  9. 7 CFR 981.9 - Kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  10. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to compute and keep in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
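    The subspace-via-sampling idea above can be sketched conceptually: approximate the kernel feature space from a small set of sampled landmarks (a Nyström-style map), then run simple winner-take-all competitive learning on the approximate features instead of the full kernel matrix. This is an illustrative stand-in, not the paper's AKCL algorithm.

```python
# Hedged sketch: landmark-sampled kernel features + winner-take-all
# competitive learning, avoiding the full n x n kernel matrix.
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic, well-separated clusters
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None] - B[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

m = 20                                         # landmarks << n samples
landmarks = X[rng.choice(len(X), m, replace=False)]
Kmm = rbf(landmarks, landmarks)
evals, evecs = np.linalg.eigh(Kmm)
W = evecs / np.sqrt(np.maximum(evals, 1e-10))  # acts as Kmm^{-1/2}
Z = rbf(X, landmarks) @ W                      # approximate feature map

# Winner-take-all competitive learning in the approximate space;
# prototypes seeded from one point of each cluster to avoid dead units
prototypes = np.vstack([Z[0], Z[-1]]).copy()
for _ in range(20):
    for z in Z:
        win = np.argmin(((prototypes - z) ** 2).sum(1))
        prototypes[win] += 0.1 * (z - prototypes[win])

labels = ((Z[:, None] - prototypes[None]) ** 2).sum(-1).argmin(1)
print("cluster sizes:", np.bincount(labels))
```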

  11. Multiple kernels learning-based biological entity relationship extraction method.

    PubMed

    Dongliang, Xu; Jingchang, Pan; Bailing, Wang

    2017-09-20

    Automatically extracting protein interaction information from the biomedical literature can help to build protein relation networks and design new drugs. There are more than 20 million literature abstracts included in MEDLINE, the most authoritative textual database in the field of biomedicine, and the collection follows exponential growth over time. This rapid expansion of the biomedical literature is difficult to absorb or analyze manually. Thus efficient and automated search engines are necessary to explore the biomedical literature using text mining techniques. The P, R, and F values of the tag graph method on the AIMed corpus are 50.82, 69.76, and 58.61%, respectively. The P, R, and F values of the tag graph kernel method on the other four evaluation corpora are 2-5% higher than those of the all-paths graph kernel. The P, R, and F values of the two methods fusing the feature kernel and the tag graph kernel are 53.43, 71.62, and 61.30% and 55.47, 70.29, and 60.37%, respectively. This indicates that the performance of the two kernel fusion methods is better than that of a simple kernel. In comparison with the all-paths graph kernel method, the tag graph kernel method is superior in terms of overall performance. Experiments show that the performance of the multi-kernel method is better than that of the three separate single-kernel methods and the dual-mutually fused kernel methods used hereof on five corpus sets.

  12. 7 CFR 51.2295 - Half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  13. 7 CFR 810.206 - Grades and grade requirements for barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... weight per bushel (pounds) Sound barley (percent) Maximum Limits of— Damaged kernels 1 (percent) Heat damaged kernels (percent) Foreign material (percent) Broken kernels (percent) Thin barley (percent) U.S... or otherwise of distinctly low quality. 1 Includes heat-damaged kernels. Injured-by-frost kernels and...

  14. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  15. 7 CFR 51.1449 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Kernel which is “dark amber” or darker color; (e) Kernel having more than one dark kernel spot, or one dark kernel spot more than one-eighth inch in greatest dimension; (f) Shriveling when the surface of the kernel is very conspicuously wrinkled; (g) Internal flesh discoloration of a medium shade of gray...

  16. 7 CFR 51.2125 - Split or broken kernels.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Split or broken kernels. 51.2125 Section 51.2125 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... kernels. Split or broken kernels means seven-eighths or less of complete whole kernels but which will not...

  17. 7 CFR 51.2296 - Three-fourths half kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  18. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm; unlike common k-means, it uses kernel learning and is therefore able to handle data that are not linearly separable. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well and is considerably better than SOM.
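A minimal sketch of kernel k-means, assuming a precomputed RBF Gram matrix and a simple deterministic initialization (neither is specified by the abstract):

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    # Precomputed RBF Gram matrix for the rows of X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_kmeans(K, k, n_iter=100):
    """Kernel k-means on an n x n Gram matrix K.

    Squared distances to cluster means are computed implicitly:
    ||phi(x_i) - mu_c||^2 = K_ii - 2/|C| sum_{j in C} K_ij
                            + 1/|C|^2 sum_{j,l in C} K_jl
    """
    n = K.shape[0]
    labels = np.arange(n) % k          # simple deterministic initialization
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue               # empty cluster stays at +inf
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Two tight, well-separated blobs: trivial here, but the same code handles
# non-linearly separable data through the choice of kernel.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [0.05, 0.05],
              [10.0, 10.0], [10.1, 10.0], [10.0, 10.1], [10.1, 10.1],
              [10.05, 10.05]])
labels = kernel_kmeans(rbf_gram(X), 2)
```

Because only the Gram matrix is touched, swapping in a different kernel changes the implicit feature space without changing the clustering loop.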

  19. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    Having an understanding of UNICOS Kernel Internals is valuable. However, that knowledge is only half the value; the other half comes from knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that draw on kernel information. In addition, algorithms, logic, and code for accessing kernel information are discussed. Code segments are provided that demonstrate how to locate and read kernel structures. Types of applications that can make use of kernel information are also discussed.

  20. Detection of maize kernels breakage rate based on K-means clustering

    NASA Astrophysics Data System (ADS)

    Yang, Liang; Wang, Zhuo; Gao, Lei; Bai, Xiaoping

    2017-04-01

    In order to optimize the recognition accuracy of maize kernel breakage detection and improve detection efficiency, this paper uses computer vision technology to detect maize kernel breakage based on the K-means clustering algorithm. First, the collected RGB images are converted into Lab images; then the clarity of the original images is evaluated by the energy function of the Sobel 8-direction gradient. Finally, maize kernel breakage is detected using different pixel acquisition equipment and different shooting angles. In this paper, broken maize kernels are identified by the color difference between intact kernels and broken kernels. The clarity evaluation of the original images and the different shooting angles are used to verify that the clarity and shooting angle of the images have a direct influence on feature extraction. The results show that the K-means clustering algorithm can distinguish broken maize kernels effectively.
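Two steps of this pipeline can be sketched with NumPy alone (hypothetical helper names; the paper's exact Sobel-8 energy function and clustering setup are not given in the abstract): a Sobel gradient energy as a clarity score, and plain k-means on per-pixel color vectors:

```python
import numpy as np

def sobel_energy(gray):
    # Clarity score: sum of squared Sobel gradient magnitudes (valid region).
    gx_k = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    gy_k = gx_k.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += gx_k[i, j] * patch
            gy += gy_k[i, j] * patch
    return float((gx ** 2 + gy ** 2).sum())

def kmeans_pixels(pixels, k=2, n_iter=20):
    # Plain k-means on per-pixel color vectors (e.g. Lab channels),
    # separating intact-kernel from broken-kernel colors.
    pixels = np.asarray(pixels, dtype=float)
    centers = pixels[np.linspace(0, len(pixels) - 1, k, dtype=int)].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(n_iter):
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers
```

A sharper image concentrates more energy in its gradients, so the Sobel energy can rank captures from different acquisition setups before clustering.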

  1. Modeling adaptive kernels from probabilistic phylogenetic trees.

    PubMed

    Nicotra, Luca; Micheli, Alessio

    2009-01-01

    Modeling phylogenetic interactions is an open issue in many computational biology problems. In the context of gene function prediction, we introduce a class of kernels for structured data that leverages a hierarchical probabilistic modeling of phylogeny among species. We derive three kernels belonging to this setting: a sufficient statistics kernel, a Fisher kernel, and a probability product kernel. The new kernels are used in the context of support vector machine learning. The kernels' adaptivity is obtained through the estimation of the parameters of a tree-structured model of evolution, using as observed data phylogenetic profiles that encode the presence or absence of specific genes in a set of fully sequenced genomes. We report results obtained in the prediction of the functional class of the proteins of the budding yeast Saccharomyces cerevisiae, which compare favorably to a standard vector-based kernel and to a non-adaptive tree kernel function. A further comparative analysis is performed in order to assess the impact of the different components of the proposed approach. We show that the key features of the proposed kernels are their adaptivity to the input domain and their ability to deal with structured data interpreted through a graphical model representation.
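Of the three kernels listed, the probability product kernel admits a compact closed form when the tree-structured model is simplified to independent Bernoulli variables, one per genome in the phylogenetic profile. A hedged sketch of that simplified, non-adaptive variant (not the paper's tree-structured kernel):

```python
import numpy as np

def bernoulli_ppk(theta_a, theta_b, rho=0.5):
    """Probability product kernel between two products of independent
    Bernoulli distributions (one presence probability per genome):

        K = prod_g sum_{x in {0,1}} p_g(x)^rho * q_g(x)^rho

    With rho = 0.5 this is the Bhattacharyya kernel.
    """
    a = np.asarray(theta_a, dtype=float)
    b = np.asarray(theta_b, dtype=float)
    terms = (a * b) ** rho + ((1.0 - a) * (1.0 - b)) ** rho
    return float(np.prod(terms))
```

For rho = 0.5 the kernel of a distribution with itself is exactly 1, and the value decreases as the two profiles' presence probabilities diverge.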

  2. Aflatoxin and nutrient contents of peanut collected from local market and their processed foods

    NASA Astrophysics Data System (ADS)

    Ginting, E.; Rahmianna, A. A.; Yusnawan, E.

    2018-01-01

    Peanut is susceptible to aflatoxin contamination, and both the source of the peanuts and the processing method considerably affect the aflatoxin content of the products. Therefore, a study of the aflatoxin and nutrient contents of peanuts collected from a local market and of their processed foods was performed. Good kernels were prepared into fried peanut, pressed-fried peanut, peanut sauce, peanut press cake, fermented peanut press cake (tempe), and fried tempe; blended kernels (good and poor kernels) were processed into peanut sauce and tempe; and poor kernels were processed into tempe only. The results showed that good and blended kernels, which had high proportions of sound/intact kernels (82.46% and 62.09%), contained 9.8-9.9 ppb of aflatoxin B1, while a slightly higher level was seen in poor kernels (12.1 ppb). However, the moisture, ash, protein, and fat contents of the kernels were similar, as were those of the products. Peanut tempe and fried tempe showed the highest increase in protein content, while decreased fat contents were seen in all products. The increase in aflatoxin B1 in peanut tempe was greatest for poor kernels, followed by blended kernels and then good kernels; however, it decreased by 61.2% on average after deep-frying. Excluding peanut tempe and fried tempe, aflatoxin B1 levels in all products derived from good kernels were below the permitted level (15 ppb). This suggests that sorting peanut kernels used as ingredients, followed by heat processing, would decrease the aflatoxin content of the products.

  3. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
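The idea of trusting only the reliable Fourier entries of an estimated kernel can be illustrated with a simplified Wiener-style stand-in (this is not the paper's partial deconvolution model; the binary mask, the fallback to the observed image, and eps are illustrative choices):

```python
import numpy as np

def masked_wiener_deconv(blurred, kernel, mask, eps=1e-3):
    """Wiener-style deconvolution restricted to the Fourier entries of the
    blur kernel flagged reliable by `mask` (1 = reliable, 0 = unreliable).
    Unreliable frequencies simply fall back to the observed image.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    wiener = np.conj(H) / (np.abs(H) ** 2 + eps)   # regularized inverse filter
    X = np.where(mask.astype(bool), wiener * B, B)
    return np.real(np.fft.ifft2(X))
```

Masking out frequencies where the kernel estimate is unreliable prevents the inverse filter from amplifying the estimation error at exactly those frequencies, which is the source of ringing artifacts.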

  4. Highly efficient model updating for structural condition assessment of large-scale bridges.

    DOT National Transportation Integrated Search

    2015-02-01

    For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
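The RBF response surface idea can be sketched as Gaussian-RBF interpolation of input-output data (a minimal sketch; the shape parameter, kernel choice, and toy data are assumptions, not taken from the report):

```python
import numpy as np

def fit_rbf_surface(X, y, eps=1.0):
    # Solve A w = y where A_ij = exp(-eps * ||x_i - x_j||^2); the Gaussian
    # interpolation matrix is positive definite for distinct centers.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.linalg.solve(np.exp(-eps * d2), y)

def eval_rbf_surface(Xq, X, w, eps=1.0):
    # Evaluate the fitted response surface at query points Xq.
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ w

# Toy input-output data standing in for structural responses.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
y = X[:, 0] ** 2 + X[:, 1]
w = fit_rbf_surface(X, y)
```

Once fitted, the cheap surrogate replaces expensive finite-element evaluations inside the model-updating optimization loop.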

  5. The Interpolation Theory of Radial Basis Functions

    NASA Astrophysics Data System (ADS)

    Baxter, Brad

    2010-06-01

    In this dissertation, it is first shown that, when the radial basis function is a p-norm and 1 < p < 2, interpolation is always possible when the points are all different and there are at least two of them. We then show that interpolation is not always possible when p > 2. Specifically, for every p > 2, we construct a set of different points in some Rd for which the interpolation matrix is singular. The greater part of this work investigates the sensitivity of radial basis function interpolants to changes in the function values at the interpolation points. Our early results show that it is possible to recast the work of Ball, Narcowich and Ward in the language of distributional Fourier transforms in an elegant way. We then use this language to study the interpolation matrices generated by subsets of regular grids. In particular, we are able to extend the classical theory of Toeplitz operators to calculate sharp bounds on the spectra of such matrices. Applying our understanding of these spectra, we construct preconditioners for the conjugate gradient solution of the interpolation equations. Our main result is that the number of steps required to achieve solution of the linear system to within a required tolerance can be independent of the number of interpolation points. The Toeplitz structure allows us to use fast Fourier transform techniques, which implies that the total number of operations is a multiple of n log n, where n is the number of interpolation points. Finally, we use some of our methods to study the behaviour of the multiquadric when its shape parameter increases to infinity. We find a surprising link with the sinus cardinalis or sinc function of Whittaker. Consequently, it can be highly useful to use a large shape parameter when approximating band-limited functions.
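The first result quoted above, nonsingularity of the p-norm interpolation matrix for 1 < p < 2 and distinct points, can be checked numerically on a small point set (an illustrative sketch, not part of the dissertation):

```python
import numpy as np

def pnorm_interp_matrix(X, p):
    # Interpolation matrix A_ij = ||x_i - x_j||_p for the p-norm basis function.
    diff = np.abs(X[:, None, :] - X[None, :, :])
    return (diff ** p).sum(-1) ** (1.0 / p)

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))          # eight distinct points in R^3
A = pnorm_interp_matrix(X, 1.5)      # 1 < p < 2: nonsingular by the theorem
w = np.linalg.solve(A, np.ones(8))   # so interpolation weights always exist
```

For p > 2 the dissertation shows this solve can fail: specific point configurations make A singular, so a numerical experiment like this one is no longer guaranteed to succeed.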

  6. Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.

    PubMed

    Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant

    2013-01-01

    Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.
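The KPCA embedding that VINK operates on can be sketched as follows (a generic kernel PCA on a precomputed Gram matrix; VINK's variable-importance approximation itself is not reproduced here):

```python
import numpy as np

def kernel_pca(K, n_components=2):
    # Kernel PCA on a precomputed Gram matrix: double-center, then project
    # onto the leading eigenvectors scaled by sqrt(eigenvalue).
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # centering in feature space
    vals, vecs = np.linalg.eigh(Kc)         # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.maximum(vals, 0.0))

X = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 3.0],
              [1.0, 1.0, 1.0], [2.0, 0.0, 1.0]])
Y = kernel_pca(X @ X.T)                     # linear kernel -> ordinary PCA
```

With a nonlinear kernel in place of `X @ X.T`, the same routine produces the NLDR embeddings whose coordinates VINK then maps back to the original features.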

  7. Triterpene glycosides and other polar constituents of shea (Vitellaria paradoxa) kernels and their bioactivities.

    PubMed

    Zhang, Jie; Kurita, Masahiro; Shinozaki, Takuro; Ukiya, Motohiko; Yasukawa, Ken; Shimizu, Naoto; Tokuda, Harukuni; Masters, Eliot T; Akihisa, Momoko; Akihisa, Toshihiro

    2014-12-01

    The MeOH extract of defatted shea (Vitellaria paradoxa; Sapotaceae) kernels was investigated for its constituents, and fifteen oleanane-type triterpene acids and glycosides, two steroid glucosides, two pentane-2,4-diol glucosides, seven phenolic compounds, and three sugars were isolated. The structures of five triterpene glycosides were elucidated on the basis of spectroscopic and chemical methods. Upon evaluation of the bioactivity of the isolated compounds, it was found that some or most of the compounds have potent or moderate inhibitory activities against the following: melanogenesis in B16 melanoma cells induced by α-melanocyte-stimulating hormone (α-MSH); generation of 1,1-diphenyl-2-picrylhydrazyl (DPPH) radicals; Epstein-Barr virus early antigen (EBV-EA) activation induced by 12-O-tetradecanoylphorbol 13-acetate (TPA) in Raji cells; TPA-induced inflammation in mice; and proliferation of one or more of the HL-60, A549, AZ521, and SK-BR-3 human cancer cell lines. Western blot analysis established that paradoxoside E inhibits melanogenesis by regulating the expression of microphthalmia-associated transcription factor (MITF), tyrosinase, and tyrosinase-related proteins 1 and 2 (TRP-1 and TRP-2). In addition, tieghemelin A was demonstrated by flow cytometry to exhibit cytotoxic activity against A549 cells (IC50 13.5 μM), mainly due to induction of apoptosis. The extract of defatted shea kernels and its constituents may therefore be valuable as potential antioxidant, anti-inflammatory, skin-whitening, chemopreventive, and anticancer agents. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Limits on transverse momentum dependent evolution from semi-inclusive deep inelastic scattering at moderate Q

    NASA Astrophysics Data System (ADS)

    Aidala, C. A.; Field, B.; Gamberg, L. P.; Rogers, T. C.

    2014-05-01

    In the QCD evolution of transverse momentum dependent parton distribution and fragmentation functions, the Collins-Soper evolution kernel includes both a perturbative short-distance contribution and a large-distance nonperturbative, but strongly universal, contribution. In the past, global fits, based mainly on larger Q Drell-Yan-like processes, have found substantial contributions from nonperturbative regions in the Collins-Soper evolution kernel. In this article, we investigate semi-inclusive deep inelastic scattering measurements in the region of relatively small Q, of the order of a few GeV, where sensitivity to nonperturbative transverse momentum dependence may become more important or even dominate the evolution. Using recently available deep inelastic scattering data from the COMPASS experiment, we provide estimates of the regions of coordinate space that dominate in transverse momentum dependent (TMD) processes when the hard scale is of the order of only a few GeV. We find that distance scales that are much larger than those commonly probed in large Q measurements become important, suggesting that the details of nonperturbative effects in TMD evolution are especially significant in the region of intermediate Q. We highlight the strongly universal nature of the nonperturbative component of evolution and its potential to be tightly constrained by fits from a wide variety of observables that include both large and moderate Q. On this basis, we recommend detailed treatments of the nonperturbative component of the Collins-Soper evolution kernel for future TMD studies.

  9. Absolute proper motions to B approximately 22.5: Evidence for kinematical substructure in halo field stars

    NASA Technical Reports Server (NTRS)

    Majewski, Steven R.; Munn, Jeffrey A.; Hawley, Suzanne L.

    1994-01-01

    Radial velocities have been obtained for six of nine stars identified on the basis of similar distances and common, extreme transverse velocities in the proper motion survey of Majewski (1992) as a candidate halo moving group at the north Galactic pole. These radial velocities correspond to velocities perpendicular to the Galactic plane which span the range -48 +/- 21 to -128 +/- 9 km/sec (but a smaller range, -48 +/- 21 to -86 +/- 19 km/sec, when only our own measurements are considered), significantly different than the expected distribution, with mean 0 km/sec, for a random sample of either halo or thick disk stars. The probability of picking such a set of radial velocities at random is less than 1%. Thus the radial velocity data support the hypothesis that these stars constitute part of a halo moving group or star stream at a distance of approximately 4-5 kpc above the Galactic plane. If real, this moving group is evidence for halo phase space substructure which may be the fossil remains of a destroyed globular cluster, Galactic satellite, or Searle & Zinn (1978) 'fragment.'

  10. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  11. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  12. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... based on the analysis of a 1,000 gram sample taken from a lot of almonds weighing 10,000 pounds with less than 95 percent kernels, and a 1,000 gram sample taken from a lot of almonds weighing 10,000... percent kernels containing the following: Edible kernels, 530 grams; inedible kernels, 120 grams; foreign...

  15. 7 CFR 51.1441 - Half-kernel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  16. 7 CFR 51.1403 - Kernel color classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  17. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  18. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  19. 7 CFR 51.1450 - Serious damage.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...; (c) Decay affecting any portion of the kernel; (d) Insects, web, or frass or any distinct evidence of insect feeding on the kernel; (e) Internal discoloration which is dark gray, dark brown, or black and...) Dark kernel spots when more than three are on the kernel, or when any dark kernel spot or the aggregate...

  20. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    PubMed

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, so it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found through trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image, and sound datasets, in supervised, unsupervised, and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
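The trace ratio maximization at the core of MKL-TR can be sketched with the standard iterative eigen-decomposition scheme (a generic sketch; MKL-TR's kernel learning and regularization are not reproduced):

```python
import numpy as np

def trace_ratio(A, B, d, n_iter=100, tol=1e-10):
    # Maximize tr(V^T A V) / tr(V^T B V) over orthonormal V (n x d) by the
    # standard iteration: solve the eigenproblem of A - lam*B, take the
    # top-d eigenvectors, and update lam with the resulting trace ratio.
    lam = 0.0
    for _ in range(n_iter):
        vals, vecs = np.linalg.eigh(A - lam * B)
        V = vecs[:, -d:]                       # top-d eigenvectors
        new_lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return V, new_lam
```

Each iteration can only increase the ratio, and the fixed point is the global maximum, which is why this scheme is preferred over the ratio-trace relaxation solved by a single generalized eigenproblem.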

  1. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature

    PubMed Central

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees with syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems. PMID:29099838

  2. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    PubMed

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women, so it is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome prediction. Kernel SVM has attracted a lot of attention for its discriminative power in small-sample pattern recognition problems, but how to select or construct an appropriate kernel for a specific problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with support vector machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) on a number of real-world data sets used to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  3. Distributed smoothed tree kernel for protein-protein interaction extraction from the biomedical literature.

    PubMed

    Murugesan, Gurusamy; Abdulkadhar, Sabenabanu; Natarajan, Jeyakumar

    2017-01-01

    Automatic extraction of protein-protein interaction (PPI) pairs from the biomedical literature is a widely examined task in biological information extraction. Currently, many kernel-based approaches, such as the linear kernel, tree kernel, graph kernel, and combinations of multiple kernels, have achieved promising results in the PPI task. However, most of these kernel methods fail to capture the semantic relation information between two entities. In this paper, we present a special type of tree kernel for PPI extraction which exploits both syntactic (structural) and semantic vector information, known as the Distributed Smoothed Tree Kernel (DSTK). DSTK comprises distributed trees with syntactic information along with distributional semantic vectors representing the semantic information of the sentences or phrases. To generate a robust machine learning model, a feature-based kernel and DSTK were combined using an ensemble support vector machine (SVM). Five different corpora (AIMed, BioInfer, HPRD50, IEPA, and LLL) were used to evaluate the performance of our system. Experimental results show that our system achieves a better F-score on all five corpora compared to other state-of-the-art systems.

  4. Effects of various factors on Doppler ultrasonographic measurements of radial and coccygeal arterial blood pressure in privately owned, conscious cats.

    PubMed

    Whittemore, Jacqueline C; Nystrom, Michael R; Mawby, Dianne I

    2017-04-01

    OBJECTIVE To assess the effects of age, body condition score (BCS), and muscle condition score (MCS) on radial and coccygeal systolic arterial blood pressure (SAP) in cats. DESIGN Prospective randomized trial. ANIMALS 66 privately owned cats enrolled between May and December 2010. PROCEDURES BCS and MCS of cats were assessed by 2 investigators; SAP was measured via Doppler ultrasonic flow detector, with cats positioned in right lateral or sternal recumbency for measurements at the radial or coccygeal artery, respectively, with order of site randomized. Associations among variables were assessed through correlation coefficients, partial correlation coefficients, and ANCOVA. RESULTS Interrater reliability for BCS and MCS assessment was high (correlation coefficients, 0.95 and 0.83, respectively). No significant effect was identified for order of SAP measurement sites. Coccygeal and radial SAP were positively correlated (ρ = 0.45). The median difference in coccygeal versus radial SAP was 19 mm Hg, but differences were not consistently positive or negative. Radial SAP was positively correlated with age (ρ = 0.48) and negatively correlated with MCS (ρ = -0.30). On the basis of the correlation analysis, the association between radial SAP and MCS reflected the confounding influence of age. Coccygeal SAP was not significantly correlated with age, BCS, or MCS. CONCLUSIONS AND CLINICAL RELEVANCE Use of the coccygeal artery is recommended to reduce the confounding effects of age and sarcopenia on Doppler ultrasonographic SAP measurements in cats. Additionally, monitoring for changes in MCS is recommended for cats undergoing serial SAP measurement.

  5. Cosmic Ray Diffusion Tensor throughout the Heliosphere on the basis of Nearly Incompressible Magnetohydrodynamic Turbulence Model

    NASA Astrophysics Data System (ADS)

    Zhao, L.; Zank, G. P.; Adhikari, L.

    2017-12-01

    The radial and rigidity dependence of the cosmic ray (CR) diffusion tensor is investigated on the basis of the recently developed 2D and slab turbulence transport model using nearly incompressible (NI) theory (Zank et al. 2017; Adhikari et al. 2017). We use the energy in forward propagating modes from 0.29 to 1 AU and in backward propagating modes from 1 to 75 AU. We employ quasi-linear theory (QLT) and nonlinear guiding center (NLGC) theory, respectively, to determine the parallel and perpendicular elements of the CR diffusion tensor. We also present the effect of both weak and moderately strong turbulence on the drift element of the CR diffusion tensor. We find that (1) from 0.29 to 1 AU the radial mean free path (mfp) is dominated by the parallel component, and both increase slowly after 0.4 AU; (2) from 1 to 75 AU the radial mfp starts with a rapid increase and then decreases after a peak at about 3.5 AU, mainly caused by the pick-up ion sources in the turbulence model; (3) after 20 AU the perpendicular mfp is nearly constant and begins to dominate the radial mfp; (4) the rigidity dependence of the parallel mfp is proportional to at 1 AU from 0.1 to 10 GV, and the perpendicular mfp is only weakly influenced by rigidity; (5) turbulence not only suppresses the traditional drift element but also introduces a new component normal to the magnetic field. This study shows that a proper two-component turbulence model is necessary to reproduce the complexity of the diffusion coefficient for CR modulation throughout the heliosphere.

  6. A framework for optimal kernel-based manifold embedding of medical image data.

    PubMed

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel function in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
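    The core mechanics of swapping kernel functions in a manifold-embedding pipeline can be sketched with a minimal kernel PCA in NumPy. The kernels and function names below are illustrative assumptions, not the authors' framework:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def polynomial_kernel(X, Y, degree=3, c=1.0):
    # k(x, y) = (<x, y> + c)^degree
    return (X @ Y.T + c) ** degree

def kernel_pca(X, kernel, n_components=2):
    """Embed X via the top eigenvectors of the double-centered Gram matrix."""
    K = kernel(X, X)
    n = K.shape[0]
    H = np.eye(n) - np.full((n, n), 1.0 / n)   # centering matrix
    Kc = H @ K @ H
    vals, vecs = np.linalg.eigh(Kc)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.clip(vals, 0.0, None))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
emb_gauss = kernel_pca(X, gaussian_kernel)
emb_poly = kernel_pca(X, lambda A, B: polynomial_kernel(A, B, degree=2))
```

    Swapping the `kernel` argument is the only change between embeddings, which is exactly the degree of freedom a kernel-selection method of this kind optimizes over.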

  7. High-definition multidetector computed tomography for evaluation of coronary artery stents: comparison to standard-definition 64-detector row computed tomography.

    PubMed

    Min, James K; Swaminathan, Rajesh V; Vass, Melissa; Gallagher, Scott; Weinsaft, Jonathan W

    2009-01-01

    The assessment of coronary stents with present-generation 64-detector row computed tomography scanners that use filtered backprojection and operate at standard definition (0.5-0.75 mm; SDCT) is limited by imaging artifacts and noise. We evaluated the performance of a novel, high-definition 64-slice CT scanner (HDCT) with improved spatial resolution (0.23 mm) and adaptive statistical iterative reconstruction (ASIR) for evaluation of coronary artery stents. HDCT and SDCT stent imaging was performed with the use of an ex vivo phantom. HDCT was compared with SDCT, with both smooth and sharp kernels, for stent intraluminal diameter, intraluminal area, and image noise. Intrastent visualization was assessed with an ASIR algorithm on HDCT scans, compared with the filtered backprojection algorithms of SDCT. Six coronary stents (2.5, 2.5, 2.75, 3.0, 3.5, 4.0 mm) were analyzed by 2 independent readers. Interobserver correlation was high for both HDCT and SDCT. HDCT yielded substantially larger luminal area visualization compared with SDCT, for both smooth (29.4+/-14.5 versus 20.1+/-13.0; P<0.001) and sharp (32.0+/-15.2 versus 25.5+/-12.0; P<0.001) kernels. Stent diameter was higher with HDCT than with SDCT, for both smooth (1.54+/-0.59 versus 1.00+/-0.50; P<0.0001) and detailed (1.47+/-0.65 versus 1.08+/-0.54; P<0.0001) kernels. With detailed kernels, HDCT scans that used ASIR algorithms showed a trend toward decreased image noise compared with SDCT filtered backprojection algorithms. On the basis of this ex vivo study, HDCT provides superior detection of intrastent luminal area and diameter visualization compared with SDCT. ASIR image reconstruction techniques for HDCT scans enhance in-stent assessment while decreasing image noise.

  8. Time reversal seismic imaging using laterally reflected surface waves in southern California

    NASA Astrophysics Data System (ADS)

    Tape, C.; Liu, Q.; Tromp, J.; Plesch, A.; Shaw, J. H.

    2010-12-01

    We use observed post-surface-wave seismic waveforms to image shallow (upper 10 km) lateral reflectors in southern California. Our imaging technique employs the 3D crustal model m16 of Tape et al. (2009), which is accurate for most local earthquakes over the period range 2-30 s. Model m16 captures the resonance of the major sedimentary basins in southern California, as well as some lateral surface wave reflections associated with these basins. We apply a 3D Gaussian smoothing function (12 km horizontal, 2 km vertical) to model m16. This smoothing has the effect of suppressing synthetic waveforms within the period range of interest (3-10 s) that are associated with reflections (single and multiple) from these basins. The smoothed 3D model serves as the background model within which we propagate an "adjoint wavefield" composed of time-reversed windows of post-surface-wave coda waveforms that are initiated at the respective station locations. This adjoint wavefield constructively interferes with the regular wavefield at the locations of potential reflectors. The potential reflectors are revealed in an "event kernel," which is the time-integrated volumetric field for each earthquake. By summing (or "stacking") the event kernels from 28 well-recorded earthquakes, we identify several reflectors using this imaging procedure. The most prominent lateral reflectors occur in proximity to the southernmost San Joaquin basin, the Los Angeles basin, the San Pedro basin, the Ventura basin, the Manix basin, the San Clemente-Santa Cruz-Santa Barbara ridge, and isolated segments of the San Jacinto and San Andreas faults. The correspondence between observed coherent coda waveforms and the imaged reflectors provides a solid basis for interpreting the kernel features as material contrasts. The 3D spatial extent and amplitude of the kernel features provide constraints on the geometry and material contrast of the imaged reflectors.

  9. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enable diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both brain and bone images were composed of the CT values of the bone images, and other pixels were composed of the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images were identical when displayed with brain window settings. All three radiologists agreed that the improved multi-kernel images on bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and fewer stored images can be expected.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong

    Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
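    A minimal feedforward sketch of the underlying idea, an RBF network fitted to a truth-table-style target, can be written in NumPy. Note this uses least squares rather than the particle swarm training described above, and the toy operator is invented for illustration:

```python
import numpy as np

class RBFNet:
    """Minimal RBF network: fixed Gaussian centers, linear output weights
    fitted by least squares (a stand-in for the PSO training in the study)."""

    def __init__(self, centers, width=0.5):
        self.centers = np.asarray(centers, dtype=float)
        self.width = width
        self.w = None

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        self.w, *_ = np.linalg.lstsq(self._phi(np.asarray(X, float)),
                                     np.asarray(y, float), rcond=None)
        return self

    def predict(self, X):
        return self._phi(np.asarray(X, float)) @ self.w

# Toy target: a single-step-style mapping on 2-atom truth assignments,
# encoded as 0/1 vectors (XOR here; invented for illustration).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
net = RBFNet(centers=X).fit(X, y)
```

    With one Gaussian center per training point, the design matrix is square and positive definite, so the toy mapping is reproduced exactly.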

  11. Conjunction of radial basis function interpolator and artificial intelligence models for time-space modeling of contaminant transport in porous media

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin

    2017-05-01

    As an innovation, both black-box and physics-based models were combined to simulate groundwater flow and contaminant transport. Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of the study plain were first de-noised by a wavelet-based de-noising approach. The effect of the de-noised data on the performance of artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models was evaluated. Wavelet transform coherence was employed for spatial clustering of the piezometers. Then, for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, considering the predicted water heads of the piezometers as interior conditions, the radial basis function, as a meshless method that solves the partial differential equations of groundwater flow and contaminant transport (GFCT), was used to estimate GL and CC values at any point within the plain where there is no piezometer. Results indicated that the ANFIS-based spatiotemporal model outperformed the ANN-based model by up to 13%.
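    The meshless RBF step can be illustrated with a small NumPy sketch that interpolates scattered "piezometer" readings to arbitrary points. The coordinates, values, and multiquadric basis below are hypothetical choices, not the study's data:

```python
import numpy as np

def rbf_interpolate(sites, values, targets, eps=1.0):
    """Meshless RBF interpolation: solve Phi c = f at the data sites,
    then evaluate the expansion at arbitrary target points."""
    def phi(r):
        return np.sqrt(1.0 + (eps * r) ** 2)    # multiquadric basis
    d_ss = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    d_ts = np.linalg.norm(targets[:, None, :] - sites[None, :, :], axis=-1)
    coeffs = np.linalg.solve(phi(d_ss), values)
    return phi(d_ts) @ coeffs

# Hypothetical piezometer locations (km) and groundwater levels (m)
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
levels = np.array([10.0, 9.5, 10.2, 9.8, 9.9])
grid = np.array([[0.25, 0.25], [0.75, 0.75]])
estimates = rbf_interpolate(sites, levels, grid)
```

    Because the interpolant passes exactly through the data sites, the observed (here, ANN/ANFIS-predicted) heads act as interior conditions, and estimates anywhere in the plain follow from a single linear solve.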

  12. Seismic modeling with radial basis function-generated finite differences (RBF-FD) – a simplified treatment of interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  13. Design of cognitive engine for cognitive radio based on the rough sets and radial basis function neural network

    NASA Astrophysics Data System (ADS)

    Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli

    2013-03-01

    Cognitive radio (CR) is an intelligent wireless communication system that can dynamically adjust its parameters to improve system performance in response to environmental changes and quality-of-service demands. The core technology for CR is the design of the cognitive engine, which introduces reasoning and learning methods from the field of artificial intelligence to achieve perception, adaptation, and learning capability. Considering the dynamic wireless environment and its demands, this paper proposes a design of a cognitive engine based on rough sets (RS) and a radial basis function neural network (RBF_NN). The method uses experiential knowledge and environment information processed by the RS module to train the RBF_NN, and the learned model is then used to reconfigure communication parameters, allocating resources rationally and improving system performance. After training the learning model, performance is evaluated on two benchmark functions. The simulation results demonstrate the effectiveness of the model, and the proposed cognitive engine can effectively achieve the goal of learning and reconfiguration in cognitive radio.

  14. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    PubMed

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, with the RBF neural network representing the transformed system output. Initially, a fixed and moderately sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced that uses the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to exploit the special structure of the proposed RBF neural network for computational efficiency by utilizing the matrix inversion lemma for block decompositions. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
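    The Box-Cox transformation itself, together with a simple maximum-likelihood search for its exponent, can be sketched as follows. A grid search stands in here for the Gauss-Newton iteration used in the letter:

```python
import numpy as np

def boxcox(y, lam):
    # Box-Cox transform: (y^lam - 1)/lam, with the log limit as lam -> 0.
    y = np.asarray(y, dtype=float)
    if abs(lam) < 1e-8:
        return np.log(y)
    return (y ** lam - 1.0) / lam

def boxcox_loglik(y, lam):
    # Profile log-likelihood under a normality assumption, including
    # the Jacobian term of the transformation.
    z = boxcox(y, lam)
    n = len(y)
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(y).sum()

def best_lambda(y, grid=np.linspace(-2.0, 2.0, 81)):
    # Maximum-likelihood exponent over a coarse grid of candidates.
    return max(grid, key=lambda lam: boxcox_loglik(y, lam))

rng = np.random.default_rng(1)
y = np.exp(rng.normal(size=500))   # log-normal data: ideal lambda is near 0
lam_hat = best_lambda(y)
```

    For log-normally distributed data the estimated exponent lands near zero, recovering the log transform as the special case.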

  15. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.

  16. Radial basis function neural networks in non-destructive determination of compound aspirin tablets on NIR spectroscopy.

    PubMed

    Dou, Ying; Mi, Hong; Zhao, Lingzhi; Ren, Yuqiu; Ren, Yulin

    2006-09-01

    The application of the second most popular artificial neural network (ANN) architecture, the radial basis function (RBF) network, to quantitative drug analysis has developed over the last decade. In this paper, two components (aspirin and phenacetin) were simultaneously determined in compound aspirin tablets using near-infrared (NIR) spectroscopy and RBF networks. The total database was randomly divided into a training set (50) and a testing set (17). Different preprocessing methods (standard normal variate (SNV), multiplicative scatter correction (MSC), first derivative, and second derivative) were applied to the two sets of NIR spectra of compound aspirin tablets with different concentrations of the two active components and compared with each other. The RBF learning algorithm adopted the nearest neighbor clustering algorithm (NNCA), and model selection used a cross-validation technique. Results show that using RBF networks to quantitatively analyze tablets is reliable, and the best RBF model was obtained with first-derivative spectra.
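    The SNV and first-derivative preprocessing steps mentioned above are straightforward to sketch in NumPy; the toy spectra here are random stand-ins for NIR data:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)."""
    spectra = np.asarray(spectra, dtype=float)
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def first_derivative(spectra):
    """First-difference approximation of the spectral derivative."""
    return np.diff(np.asarray(spectra, dtype=float), axis=1)

rng = np.random.default_rng(0)
raw = rng.normal(loc=1.0, scale=0.2, size=(5, 100))   # stand-in spectra
corrected = snv(raw)
deriv = first_derivative(raw)
```

    SNV removes per-sample baseline and scale effects (each row ends up with mean 0 and standard deviation 1), while differentiation suppresses slowly varying baseline drift.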

  17. Energy levels of a hydrogenic impurity in a parabolic quantum well with a magnetic field

    NASA Astrophysics Data System (ADS)

    Zang, J. X.; Rustgi, M. L.

    1993-07-01

    In this paper, we present a calculation of the energy levels of a hydrogenic impurity (or a hydrogenic atom) at the bottom of a one-dimensional parabolic quantum well with a magnetic field normal to the plane of the well. The finite-basis-set variational method is used to calculate the ground state and the excited states with principal quantum number less than or equal to 3. The limits of small and large radial distance are considered in choosing a proper set of basis functions. The results in the limit where the parabolic parameter α = 0 are compared with the data of Rösner et al. [J. Phys. B 17, 29 (1984)]. The comparison shows that the present calculation is quite accurate. It is found that the energy levels increase with increasing parabolic parameter α and with increasing normalized magnetic-field strength γ, except for levels with magnetic quantum number m < 0 at small γ.

  18. Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model

    DOE PAGES

    Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.

    2008-01-01

    This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and then an AR model was applied to transform data to generate AR model complex coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: backward propagating neural network (BPNN), radial basis function-principal component analysis (RBF-PCA) approach, and radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This cognition approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
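    Fitting AR coefficients to a signal by least squares, so that the coefficient vector can serve as a compressed feature set, can be sketched as follows (real coefficients only; the complex-coefficient variant in the paper is not reproduced here):

```python
import numpy as np

def ar_coefficients(x, order):
    """Fit x[t] = sum_{k=1..p} a_k x[t-k] by least squares and return
    (a_1, ..., a_p); the coefficient vector is a compressed feature set."""
    x = np.asarray(x, dtype=float)
    # Column j holds the lag-j values aligned with the targets x[order:].
    X = np.column_stack([x[order - j : len(x) - j] for j in range(1, order + 1)])
    coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return coeffs

sig = 0.8 ** np.arange(100)        # noiseless AR(1) signal with a_1 = 0.8
feat = ar_coefficients(sig, 1)     # recovers [0.8]
```

    On real chromatograms the fitted coefficient vectors, rather than the raw signals, would be passed to the classifier, which is what makes the representation insensitive to signal length.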

  19. An Efficient Radial Basis Function Mesh Deformation Scheme within an Adjoint-Based Aerodynamic Optimization Framework

    NASA Astrophysics Data System (ADS)

    Poirier, Vincent

    Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method, but at the cost of introducing errors into the parameterization, since the exact displacement of all surface points is not recovered. A secondary mesh movement is implemented within an adjoint-based optimization framework to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body configured Boeing-747 and an Onera-M6 wing. In addition, an inverse pressure design is executed on the Onera-M6 wing, and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
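    The basic RBF mesh-movement step, propagating known boundary displacements to interior nodes, can be sketched in NumPy. The Wendland C2 basis and the toy geometry are illustrative choices, not the scheme's actual configuration:

```python
import numpy as np

def rbf_deform(surface_pts, displacements, volume_pts, radius=2.0):
    """Propagate prescribed surface displacements to interior mesh nodes
    using a Wendland C2 compactly supported RBF (an illustrative choice)."""
    def phi(r):
        q = np.clip(r / radius, 0.0, 1.0)
        return (1.0 - q) ** 4 * (4.0 * q + 1.0)
    d_ss = np.linalg.norm(surface_pts[:, None] - surface_pts[None, :], axis=-1)
    d_vs = np.linalg.norm(volume_pts[:, None] - surface_pts[None, :], axis=-1)
    coeffs = np.linalg.solve(phi(d_ss), displacements)  # one column per axis
    return phi(d_vs) @ coeffs

# Toy 2-D "mesh": four boundary points, one of which moves.
surf = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.1, 0.05], [0.0, 0.0]])
vol = np.array([[0.5, 0.5], [0.25, 0.75]])
moved = vol + rbf_deform(surf, disp, vol)
```

    Using only a subset of surface points as RBF centers, as the reduced parameterization above does, is exactly what introduces the surface errors that the secondary mesh movement corrects.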

  20. Seismic modeling with radial basis function-generated finite differences (RBF-FD) - a simplified treatment of interfaces

    NASA Astrophysics Data System (ADS)

    Martin, Bradley; Fornberg, Bengt

    2017-04-01

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  1. A radial velocity survey of the open cluster IC 4665

    NASA Technical Reports Server (NTRS)

    Prosser, Charles F.; Giampapa, Mark S.

    1994-01-01

    A radial velocity survey of the open cluster IC 4665 is reported for a group of candidate members previously identified on the basis of proper motion and photometry. Of the candidates observed, 20 out of 42 have radial velocities consistent with membership; these cluster members populate the F5-K0 dwarf region and represent the first relatively conclusive membership determinations for such solar-type stars in IC 4665. Three new spectroscopic binary members of the cluster have been identified. Rotational velocities have also been derived; the v sin i distribution reveals that most apparent G dwarf members of IC 4665 exhibit substantial rotation (v sin i greater than 10 km/s). When compared to evolutionary isochrones, the current list of intermediate-mass members appears to support earlier suggestions that IC 4665 has an age comparable to the Pleiades.

  2. The three-dimensional steady radial expansion of a viscous gas from a sonic source into a vacuum.

    NASA Technical Reports Server (NTRS)

    Bush, W. B.; Rosen, R.

    1971-01-01

    The three-dimensional steady radial expansion of a viscous, heat-conducting, compressible fluid from a spherical sonic source into a vacuum is analyzed using the Navier-Stokes equations as a basis. It is assumed that the model fluid is a perfect gas having constant specific heats, a constant Prandtl number of order unity, and viscosity coefficients varying as a power of the absolute temperature. Limiting forms of the flow-variable solutions are studied as the Reynolds number based on the sonic source conditions goes to infinity, with the Newtonian parameter both fixed and going to zero. For the case of a viscosity-temperature exponent between 0.5 and 1, it is shown that the velocity as well as the pressure approaches zero as the radial distance goes to infinity. Formulations of the distinct regions that span the domain extending from the sonic source to the vacuum are presented.

  3. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  4. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  5. 7 CFR 810.202 - Definition of other terms.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... barley kernels, other grains, and wild oats that are badly shrunken and distinctly discolored black or... kernels. Kernels and pieces of barley kernels that are distinctly indented, immature or shrunken in...

  6. graphkernels: R and Python packages for graph comparison

    PubMed Central

    Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-01-01

    Summary: Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. Availability and implementation: The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:29028902

  7. Aflatoxin variability in pistachios.

    PubMed Central

    Mahoney, N E; Rodriguez, S B

    1996-01-01

    Pistachio fruit components, including hulls (mesocarps and epicarps), seed coats (testas), and kernels (seeds), all contribute to variable aflatoxin content in pistachios. Fresh pistachio kernels were individually inoculated with Aspergillus flavus and incubated 7 or 10 days. Hulled, shelled kernels were either left intact or wounded prior to inoculation. Wounded kernels, with or without the seed coat, were readily colonized by A. flavus and after 10 days of incubation contained 37 times more aflatoxin than similarly treated unwounded kernels. The aflatoxin levels in the individual wounded pistachios were highly variable. Neither fungal colonization nor aflatoxin was detected in intact kernels without seed coats. Intact kernels with seed coats had limited fungal colonization and low aflatoxin concentrations compared with their wounded counterparts. Despite substantial fungal colonization of wounded hulls, aflatoxin was not detected in hulls. Aflatoxin levels were significantly lower in wounded kernels with hulls than in kernels of hulled pistachios. Both the seed coat and a water-soluble extract of hulls suppressed aflatoxin production by A. flavus. PMID:8919781

  8. graphkernels: R and Python packages for graph comparison.

    PubMed

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
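    As an independent illustration of the simplest kernel family mentioned, a label histogram kernel can be written in a few lines of NumPy (this is not the package's implementation, whose core is in C++):

```python
import numpy as np

def label_histogram(node_labels, n_labels):
    # Count occurrences of each discrete node label in one graph.
    h = np.zeros(n_labels)
    for lab in node_labels:
        h[lab] += 1.0
    return h

def label_histogram_kernel(graphs, n_labels):
    """Gram matrix of dot products between per-graph label histograms."""
    H = np.array([label_histogram(g, n_labels) for g in graphs])
    return H @ H.T

# Three toy graphs represented only by their node-label lists (labels 0..2)
graphs = [[0, 0, 1], [0, 1, 2], [2, 2, 2]]
K = label_histogram_kernel(graphs, n_labels=3)
```

    The resulting Gram matrix can be fed directly to any kernel method (SVM classification, kernel regression, spectral clustering), which is the workflow the package supports for its richer kernels as well.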

  9. Investigation of various energy deposition kernel refinements for the convolution/superposition method

    PubMed Central

    Huang, Jessie Y.; Eklund, David; Childress, Nathan L.; Howell, Rebecca M.; Mirkovic, Dragan; Followill, David S.; Kry, Stephen F.

    2013-01-01

    Purpose: Several simplifications used in clinical implementations of the convolution/superposition (C/S) method, specifically, density scaling of water kernels for heterogeneous media and use of a single polyenergetic kernel, lead to dose calculation inaccuracies. Although these weaknesses of the C/S method are known, it is not well known which of these simplifications has the largest effect on dose calculation accuracy in clinical situations. The purpose of this study was to generate and characterize high-resolution, polyenergetic, and material-specific energy deposition kernels (EDKs), as well as to investigate the dosimetric impact of implementing spatially variant polyenergetic and material-specific kernels in a collapsed cone C/S algorithm. Methods: High-resolution, monoenergetic water EDKs and various material-specific EDKs were simulated using the EGSnrc Monte Carlo code. Polyenergetic kernels, reflecting the primary spectrum of a clinical 6 MV photon beam at different locations in a water phantom, were calculated for different depths, field sizes, and off-axis distances. To investigate the dosimetric impact of implementing spatially variant polyenergetic kernels, depth dose curves in water were calculated using two different implementations of the collapsed cone C/S method. The first method uses a single polyenergetic kernel, while the second method fully takes into account spectral changes in the convolution calculation. To investigate the dosimetric impact of implementing material-specific kernels, depth dose curves were calculated for a simplified titanium implant geometry using both a traditional C/S implementation that performs density scaling of water kernels and a novel implementation using material-specific kernels. Results: For our high-resolution kernels, we found good agreement with the Mackie et al. kernels, with some differences near the interaction site for low photon energies (<500 keV). 
For our spatially variant polyenergetic kernels, we found that depth was the most dominant factor affecting the pattern of energy deposition; however, the effects of field size and off-axis distance were not negligible. For the material-specific kernels, we found that as the density of the material increased, more energy was deposited laterally by charged particles, as opposed to in the forward direction. Thus, density scaling of water kernels becomes a worse approximation as the density and the effective atomic number of the material differ more from water. Implementation of spatially variant, polyenergetic kernels increased the percent depth dose value at 25 cm depth by 2.1%–5.8% depending on the field size, while implementation of titanium kernels gave 4.9% higher dose upstream of the metal cavity (i.e., higher backscatter dose) and 8.2% lower dose downstream of the cavity. Conclusions: Of the various kernel refinements investigated, inclusion of depth-dependent and metal-specific kernels into the C/S method has the greatest potential to improve dose calculation accuracy. Implementation of spatially variant polyenergetic kernels resulted in a harder depth dose curve and thus has the potential to affect beam modeling parameters obtained in the commissioning process. For metal implants, the C/S algorithms generally underestimate the dose upstream and overestimate the dose downstream of the implant. Implementation of a metal-specific kernel mitigated both of these errors. PMID:24320507
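As a rough illustration of the convolution step the abstract refines, here is a 1-D sketch of dose as TERMA convolved with a single polyenergetic kernel. All values and shapes are hypothetical; a clinical C/S algorithm works in 3-D with collapsed-cone ray tracing and density scaling of the kernel.

```python
import numpy as np

# 1-D sketch of convolution/superposition (illustrative only):
# dose(z) = sum_z' TERMA(z') * kernel(z - z'), using a single polyenergetic
# kernel -- exactly the simplification the study above refines.

depth = np.arange(0, 300)           # depth in mm (hypothetical grid)
mu = 0.005                          # hypothetical effective attenuation, 1/mm
terma = np.exp(-mu * depth)         # total energy released per unit mass

# Hypothetical forward-peaked energy deposition kernel: most energy is
# carried downstream by secondary electrons, a little is backscattered.
offsets = np.arange(-20, 81)
kernel = np.where(offsets >= 0, np.exp(-offsets / 30.0), np.exp(offsets / 5.0))
kernel /= kernel.sum()              # normalize so energy is conserved

dose = np.convolve(terma, kernel, mode="same")
```

Because the kernel is normalized and non-negative, each dose value is a convex combination of TERMA values, so the sketch conserves energy but cannot reproduce the spectral and material effects the study adds.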

  10. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    PubMed

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
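The weighted eigenfunction expansion described above can be sketched numerically. Here a 1-D chain Laplacian stands in for the Laplace-Beltrami operator of a surface mesh, and the bandwidth sigma is a hypothetical choice, not a value from the paper.

```python
import numpy as np

# Heat kernel smoothing as a weighted eigenfunction expansion:
# smoothed = sum_k exp(-lambda_k * sigma) * <f, psi_k> * psi_k
n = 100
L = np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
L[0, 0] = L[-1, -1] = 1.0                 # Neumann-type boundary for the chain
lam, psi = np.linalg.eigh(L)              # eigenvalues / eigenfunctions

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.3 * rng.standard_normal(n)

sigma = 5.0                               # hypothetical diffusion bandwidth
coef = psi.T @ signal                     # expansion coefficients <f, psi_k>
smoothed = psi @ (np.exp(-lam * sigma) * coef)  # heat kernel weights
```

High-frequency eigenfunctions (large eigenvalues) are damped exponentially while the constant mode passes through unchanged, which is why the result is mathematically equivalent to isotropic heat diffusion.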

  11. [Anatomy study on micro transverse flap pedicled with superficial palmar branch of radial artery from palmar wrist].

    PubMed

    Zhao, Min; Tian, Dehu; Shao, Xinzhong; Li, Dacun; Li, Jianfeng; Liu, Jingda; Zhao, Liang; Li, Hailei; Wang, Xiaolei; Zhang, Wentong; Wu, Jinying; Yuan, Zuoxiong

    2013-07-01

    To study the anatomical basis of a micro transverse flap pedicled with the superficial palmar branch of the radial artery from the palmar wrist, for use as a free flap to repair soft tissue defects of the finger. Thirty-eight fresh upper limb specimens (22 males and 16 females; aged 26-72 years with an average of 36 years; 19 left and 19 right limbs) were dissected and observed under an operating microscope. Two specimens were made into arterial casting moulds with bones, and 2 specimens were injected with red emulsion into the radial artery. Thirty-four specimens were injected with 1% gentian violet solution into the superficial palmar branch of the radial artery. A transverse oval flap in the palmar wrist was designed, with the distal palmar crease as the axis of the flap. The origin, distribution, and anastomoses of the superficial palmar branch of the radial artery were observed. The superficial palmar branch of the radial artery was constantly present; it usually arose from the main trunk of the radial artery, 1.09-3.60 cm proximal to the styloid process of the radius. There were 2-5 branches between the origin and the tubercle of the scaphoid bone. The diameter at the origin was 1.00-3.00 mm, and the distal diameter at the styloid process of the radius was 1.00-2.90 mm. Venous return of the flap passed through 2 routes, and the innervation of the flap came mainly from the palmar cutaneous branch of the median nerve. The area of the flap was 4 cm x 2 cm to 6 cm x 2 cm. The origin and course of the superficial palmar branch of the radial artery are constant, and its diameter is similar to that of the digital artery. A transverse oval flap pedicled with the superficial palmar branch of the radial artery in the palmar wrist can be designed to repair defects of the finger.

  12. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    ERIC Educational Resources Information Center

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
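The Gaussian continuization step can be illustrated with a simplified sketch that represents the continuized CDF as a mixture of normal CDFs centered at the discrete scores. This omits the mean- and variance-preserving linear adjustment of the full von Davier-Holland-Thayer method, and the score distribution and bandwidth h are hypothetical.

```python
import math

# Hypothetical discrete score distribution
scores = [0, 1, 2, 3, 4]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]

def phi_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def continuized_cdf(x, h=0.6):
    # Gaussian-kernel continuization: a mixture of normal CDFs
    # centered at each discrete score, with bandwidth h
    return sum(p * phi_cdf((x - s) / h) for p, s in zip(probs, scores))
```

The continuized CDF is smooth and strictly increasing, so equipercentile equating can invert it exactly; by contrast, the percentile-rank method's piecewise-linear CDF has kinks at each score point.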

  13. 7 CFR 810.204 - Grades and grade requirements for Six-rowed Malting barley and Six-rowed Blue Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...— Damaged kernels 1 (percent) Foreign material (percent) Other grains (percent) Skinned and broken kernels....0 10.0 15.0 1 Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered against sound barley. Notes: Malting barley shall not be infested in accordance with...

  14. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  15. 7 CFR 51.1413 - Damage.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... well cured; (e) Poorly developed kernels; (f) Kernels which are dark amber in color; (g) Kernel spots when more than one dark spot is present on either half of the kernel, or when any such spot is more...

  16. 7 CFR 810.205 - Grades and grade requirements for Two-rowed Malting barley.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (percent) Maximum limits of— Wild oats (percent) Foreign material (percent) Skinned and broken kernels... Injured-by-frost kernels and injured-by-mold kernels are not considered damaged kernels or considered...

  17. Characterizing the radial content of orbital-angular-momentum photonic states impaired by weak-to-strong atmospheric turbulence.

    PubMed

    Chen, Chunyi; Yang, Huamin

    2016-08-22

    The changes in the radial content of orbital-angular-momentum (OAM) photonic states described by Laguerre-Gaussian (LG) modes with a radial index of zero, suffering from turbulence-induced distortions, are explored by numerical simulations. For a single-photon field with a given LG mode propagating through weak-to-strong atmospheric turbulence, both the average LG and OAM mode densities depend on only two nondimensional parameters, i.e., the Fresnel ratio and the coherence-width-to-beam-radius (CWBR) ratio. It is found that atmospheric turbulence causes radially-adjacent-mode mixing, in addition to azimuthally-adjacent-mode mixing, in the propagated photonic states; the former is weaker than the latter. For the same Fresnel ratio, the probabilities that a photon can be found in the zero-index radial mode of the intended OAM states behave very similarly as functions of the relative turbulence strength; a smaller Fresnel ratio leads to a slower decrease in the probabilities as the relative turbulence strength increases. A photon can be found in various radial modes with approximately equal probability when the relative turbulence strength becomes great enough. The use of a single-mode fiber in OAM measurements can result in photon loss and hence alter the observed transition probability between various OAM states. The bit error probability in OAM-based free-space optical communication systems that transmit photonic modes belonging to the same orthogonal LG basis may depend on which digit is sent.

  18. Detection of ochratoxin A contamination in stored wheat using near-infrared hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Senthilkumar, T.; Jayas, D. S.; White, N. D. G.; Fields, P. G.; Gräfenhan, T.

    2017-03-01

    A near-infrared (NIR) hyperspectral imaging system was used to detect five concentration levels of ochratoxin A (OTA) in contaminated wheat kernels. Wheat kernels artificially inoculated with two OTA-producing Penicillium verrucosum strains or two non-toxigenic P. verrucosum strains, together with sterile control wheat kernels, were subjected to NIR hyperspectral imaging. The acquired three-dimensional data were reshaped into readable two-dimensional data. Principal Component Analysis (PCA) was applied to the two-dimensional data to identify the key wavelengths most significant for detecting OTA contamination in wheat. Statistical and histogram features extracted at the key wavelengths were used in linear, quadratic and Mahalanobis statistical discriminant models to differentiate between the sterile controls, the five concentration levels of OTA contamination in wheat kernels, and the five infection levels of wheat kernels inoculated with non-OTA-producing P. verrucosum. The classification models differentiated sterile control samples from OTA-contaminated wheat kernels and from non-OTA-producing P. verrucosum-inoculated wheat kernels with 100% accuracy. The models also differentiated between the five concentration levels of OTA contamination and between the five infection levels of non-OTA-producing P. verrucosum inoculation with more than 98% correct classification. The non-OTA-producing P. verrucosum-inoculated wheat kernels and the OTA-contaminated wheat kernels showed different spectral patterns under hyperspectral imaging.
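The wavelength-selection step (PCA on the reshaped two-dimensional data) can be sketched with synthetic data; the band count, pixel count, and the injected high-variance band below are assumptions for illustration, not the study's values.

```python
import numpy as np

# Reshape a hyperspectral cube into (pixels x bands) and use PCA loadings
# to flag a key wavelength, mimicking the wavelength-selection step.
rng = np.random.default_rng(1)
n_pixels, n_bands = 500, 60
cube = rng.standard_normal((n_pixels, n_bands))      # synthetic spectra
cube[:, 25] += 3.0 * rng.standard_normal(n_pixels)   # band with extra variance

X = cube - cube.mean(axis=0)             # mean-center each band
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1_loadings = np.abs(Vt[0])             # first principal component loadings
key_band = int(np.argmax(pc1_loadings))  # wavelength index driving PC1
```

In a real pipeline the features extracted at such key bands would then feed the linear, quadratic, or Mahalanobis discriminant classifiers mentioned in the record.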

  19. Application of kernel method in fluorescence molecular tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Baikejiang, Reheman; Li, Changqing

    2017-02-01

    Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem, and anatomical guidance can make FMT reconstruction more efficient. We have developed a kernel method to introduce anatomical guidance into FMT robustly and easily. The kernel method, borrowed from machine-learning pattern analysis, is an efficient way to represent anatomical features. For finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. The fluorophore concentration at each node is then represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, we obtain a new system matrix by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, converting the FMT reconstruction problem into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can then be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and included in the forward modeling; one advantage is that we do not need to segment the anatomical image into targets and background.
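The kernel coefficient reconstruction described above can be sketched as a toy linear-algebra problem: the unknown image x is represented as x = K·alpha, so the forward model A·x = b becomes (A·K)·alpha = b and alpha is reconstructed instead of x. The sensitivity matrix, kernel matrix, and phantom below are synthetic stand-ins, and the diagonal loading on K is an added numerical-stability assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_meas = 50, 30

A = rng.standard_normal((n_meas, n_nodes))            # sensitivity matrix
pos = np.linspace(0, 1, n_nodes)                      # 1-D node positions
# Anatomy-derived kernel matrix (Gaussian similarity between nodes),
# plus a small hypothetical diagonal loading for numerical stability
K = np.exp(-(pos[:, None] - pos[None, :])**2 / 0.01) + 1e-3 * np.eye(n_nodes)

x_true = np.exp(-(pos - 0.5)**2 / 0.02)               # smooth fluorophore target
b = A @ x_true                                        # simulated measurements

alpha, *_ = np.linalg.lstsq(A @ K, b, rcond=None)     # kernel coefficients
x_rec = K @ alpha                                     # reconstructed concentration
```

Because the kernel matrix encodes smoothness borrowed from the anatomical prior, the reconstructed coefficients produce an image that fits the measurements while remaining smooth, which is the mechanism behind the resolution improvement reported in the abstract.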

  20. Colonisation of winter wheat grain by Fusarium spp. and mycotoxin content as dependent on a wheat variety, crop rotation, a crop management system and weather conditions.

    PubMed

    Czaban, Janusz; Wróblewska, Barbara; Sułek, Alicja; Mikos, Marzena; Boguszewska, Edyta; Podolska, Grażyna; Nieróbca, Anna

    2015-01-01

    Field experiments were conducted during three consecutive growing seasons (2007/08, 2008/09 and 2009/10) with four winter wheat (Triticum aestivum L.) cultivars - 'Bogatka', 'Kris', 'Satyna' and 'Tonacja' - grown on fields with a three-field crop rotation (winter triticale, spring barley, winter wheat) and in a four-field crop rotation experiment (spring wheat, spring cereals, winter rapeseed, winter wheat). After the harvest, kernels were surface-disinfected with 2% NaOCl and then analysed for internal infection by different species of Fusarium. Fusaria were isolated on Czapek-Dox iprodione dichloran agar medium and identified on the basis of macro- and micro-morphology on potato dextrose agar and synthetic nutrient agar media. Total wheat grain infection by Fusarium depended mainly on relative humidity (RH) and rainfall during the flowering stage. Intensive rainfall and high RH during that period in 2009 and 2010 meant that the proportions of kernels infected by the fungi were much higher than in 2008, when there was no precipitation during anthesis. Weather conditions during the post-anthesis period changed the species composition of the Fusarium communities internally colonising winter wheat grain. The cultivars varied significantly in the proportion of kernels infected by Fusarium spp. The growing season and the type of crop rotation had a distinct effect on the species composition of the Fusarium communities colonising the grain internally. Grain from the systems using more fertilisers and pesticides, as well as buried straw, showed a trend towards a higher percentage of kernels colonised by the fungi. The most frequent species in the grain were F. avenaceum, F. tricinctum and F. poae in 2008, and F. avenaceum, F. graminearum, F. tricinctum and F. poae in 2009 and 2010. The contents of deoxynivalenol and zearalenone in the grain were correlated with the percentage of kernels colonised by F. graminearum and were highest in 2009 in the grain from the four-field crop rotation. The content of T-2/HT-2 toxins was highest in 2010 in grain from the three-field crop rotation and was correlated with the isolation frequency of F. langsethiae.
