Science.gov

Sample records for acquisition feature extraction

  1. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept of a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear the less blurred prints. The de-smearing algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology that uses a 2D STAR mother wavelet to efficiently locate the fork feature anywhere on the fingerprint in parallel, independent of its scale, shift, and rotation. Such a combined system can achieve high data compression, allowing prints to be sent through a binary facsimile machine that, when combined with a tabletop computer, can provide automatic fingerprint identification system (AFIS) capability using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given on how to reduce the crime rate by upgrading today's police office technology in light of military expertise in automatic target recognition (ATR).

  2. Confidence-Based Feature Acquisition

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James

    2010-01-01

    Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
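
    A minimal sketch of the greedy acquisition loop described above, in Python. The function names (`cfa_predict`, `acquire`) and the use of a NaN-tolerant base classifier are illustrative assumptions, not the actual CFA-predict implementation; the real method weighs expected benefit against cost, while this sketch simply buys the cheapest missing feature.

    ```python
    import numpy as np

    def cfa_predict(x, classifier, costs, acquire, threshold=0.9):
        """Greedy confidence-based acquisition (sketch, not the NTRS code).

        x: 1-D array with np.nan marking not-yet-acquired features;
        classifier: fitted model whose predict_proba tolerates NaN inputs
        (e.g. sklearn.ensemble.HistGradientBoostingClassifier);
        costs: per-feature acquisition costs (known in advance);
        acquire(j): hypothetical callback that runs the costly test for
        feature j and returns its value.
        """
        x = x.astype(float).copy()
        spent = 0.0
        while True:
            proba = classifier.predict_proba(x.reshape(1, -1))[0]
            missing = np.flatnonzero(np.isnan(x))
            # Stop when confident enough, or when nothing is left to buy.
            if proba.max() >= threshold or missing.size == 0:
                return int(proba.argmax()), spent
            j = missing[np.argmin(costs[missing])]  # cheapest missing feature
            x[j] = acquire(j)
            spent += costs[j]
    ```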

  3. Adaptive feature extraction expert

    SciTech Connect

    Yuschik, M.

    1983-01-01

    The identification of discriminatory features places an upper bound on the recognition rate of any automatic speech recognition (ASR) system. One way to structure the extraction of features is to construct an expert system which applies a set of rules to identify particular properties of the speech patterns. However, these patterns vary for an individual speaker and from speaker to speaker so that another expert is actually needed to learn the new variations. The author investigates the problem by using sets of discriminatory features that are suggested by a feature generation expert, improves the selectivity of these features with a training expert, and finally develops a minimally spanning feature set with a statistical selection expert. 12 references.

  4. Recursive Feature Extraction in Graphs

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
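
    The recursion can be sketched in a few lines with networkx: base features are the node degree and the egonet edge count, and each pass appends the mean and sum of the neighbors' current features. This is a simplification of the published ReFeX method (feature pruning and binning are omitted), and all names here are illustrative.

    ```python
    import numpy as np
    import networkx as nx

    def refex_features(G, n_iters=2):
        # Base features: degree and egonet edge count for every node.
        feats = {v: [float(G.degree(v)),
                     float(nx.ego_graph(G, v).number_of_edges())]
                 for v in G.nodes()}
        for _ in range(n_iters):
            new = {}
            for v in G.nodes():
                nbrs = [feats[u] for u in G.neighbors(v)]
                arr = (np.array(nbrs) if nbrs
                       else np.zeros((1, len(feats[v]))))
                # Recursive summaries: mean and sum of neighbor features.
                new[v] = feats[v] + list(arr.mean(axis=0)) + list(arr.sum(axis=0))
            feats = new
        return feats
    ```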

  5. Feature Acquisition with Imbalanced Training Data

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.; Jones, Dayton L.

    2011-01-01

    This work considers cost-sensitive feature acquisition that attempts to classify a candidate datapoint from incomplete information. In this task, an agent acquires features of the datapoint using one or more costly diagnostic tests, and eventually ascribes a classification label. A cost function describes both the penalties for feature acquisition, as well as misclassification errors. A common solution is a Cost Sensitive Decision Tree (CSDT), a branching sequence of tests with features acquired at interior decision points and class assignment at the leaves. CSDT's can incorporate a wide range of diagnostic tests and can reflect arbitrary cost structures. They are particularly useful for online applications due to their low computational overhead. In this innovation, CSDT's are applied to cost-sensitive feature acquisition where the goal is to recognize very rare or unique phenomena in real time. Example applications from this domain include four areas. In stream processing, one seeks unique events in a real time data stream that is too large to store. In fault protection, a system must adapt quickly to react to anticipated errors by triggering repair activities or follow-up diagnostics. With real-time sensor networks, one seeks to classify unique, new events as they occur. With observational sciences, a new generation of instrumentation seeks unique events through online analysis of large observational datasets. This work presents a solution based on transfer learning principles that permits principled CSDT learning while exploiting any prior knowledge of the designer to correct both between-class and within-class imbalance. Training examples are adaptively reweighted based on a decomposition of the data attributes. The result is a new, nonparametric representation that matches the anticipated attribute distribution for the target events.
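
    The class-imbalance correction can be illustrated with a cost-weighted decision tree. The sketch below shows only the reweighting idea; the acquisition-cost-aware split selection of a true CSDT and the paper's attribute-decomposition reweighting are not reproduced, and all parameter choices are assumptions.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    y = (rng.random(1000) < 0.02).astype(int)    # ~2% rare "events"

    # Upweight the rare class in inverse proportion to its frequency,
    # a crude stand-in for a cost matrix over misclassification errors.
    w = np.where(y == 1, (y == 0).sum() / max((y == 1).sum(), 1), 1.0)
    tree = DecisionTreeClassifier(max_depth=4).fit(X, y, sample_weight=w)
    ```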

  6. Information based universal feature extraction

    NASA Astrophysics Data System (ADS)

    Amiri, Mohammad; Brause, Rüdiger

    2015-02-01

    In many real world image based pattern recognition tasks, the extraction and usage of task-relevant features are the most crucial part of the diagnosis. In the standard approach, they mostly remain task-specific, although humans who perform such a task always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they are not yet systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for classification of visual objects and hand-written digits. This gives a good start and a performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features that are valid in all three kinds of tasks.

  7. Galaxy Classification without Feature Extraction

    NASA Astrophysics Data System (ADS)

    Polsterer, K. L.; Gieseke, F.; Kramer, O.

    2012-09-01

    The automatic classification of galaxies according to the different Hubble types is a widely studied problem in the field of astronomy. The complexity of this task led to projects like Galaxy Zoo which try to obtain labeled data based on visual inspection by humans. Many automatic classification frameworks are based on artificial neural networks (ANN) in combination with a feature extraction step in the pre-processing phase. These approaches rely on labeled catalogs for training the models. The small size of the typically used training sets, however, limits the generalization performance of the resulting models. In this work, we present a straightforward application of support vector machines (SVM) for this type of classification task. The conducted experiments indicate that using a sufficient number of labeled objects provided by the EFIGI catalog leads to high-quality models. In contrast to standard approaches, no additional feature extraction is required.

  8. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts, and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  9. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts, and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  10. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

    This routine, written in Interactive Data Language (IDL) as a callable procedure, receives a 16-bit feature classification flag value as an argument and extracts the feature classification flag. The Flag Extraction routine (5 KB) is available for download; IDL is available from Exelis Visual Information Solutions.

  11. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
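
    The watershed-on-gradient step can be sketched with scikit-image. Here the Sobel gradient magnitude stands in for the patent's Canny gradient, the test image is a stand-in for Lunar imagery, and the marker thresholds and size cutoff are illustrative assumptions.

    ```python
    import numpy as np
    from skimage import data, filters, measure, segmentation

    image = data.camera().astype(float) / 255.0   # stand-in for a Lunar image
    gradient = filters.sobel(image)               # stand-in for Canny gradient

    # Markers: low-gradient interiors (1) vs. strong edges (2); the
    # watershed then closes contours around candidate regions.
    markers = np.zeros_like(image, dtype=int)
    markers[gradient < 0.02] = 1
    markers[gradient > 0.20] = 2
    labels = segmentation.watershed(gradient, markers)
    regions = measure.regionprops(measure.label(labels == 1))
    small_rocks = [r for r in regions if r.area < 200]   # assumed size cutoff
    ```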

  12. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to have been used recently in biometric and multimedia information retrieval systems. This technology builds on successive research in audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method that is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
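
    A minimal version of the per-frame PDF feature, assuming signals normalized to [-1, 1]; the frame length and bin count are illustrative, and a normalized histogram serves as the PDF estimate.

    ```python
    import numpy as np

    def frame_pdf_features(signal, frame_len=1024, n_bins=32):
        """One PDF estimate (normalized histogram) per frame; rows can be
        plotted and compared across speakers as in the paper."""
        n_frames = len(signal) // frame_len
        rows = []
        for i in range(n_frames):
            frame = signal[i * frame_len:(i + 1) * frame_len]
            pdf, _ = np.histogram(frame, bins=n_bins,
                                  range=(-1.0, 1.0), density=True)
            rows.append(pdf)
        return np.asarray(rows)
    ```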

  13. Guidance in feature extraction to resolve uncertainty

    NASA Astrophysics Data System (ADS)

    Kovalerchuk, Boris; Kovalerchuk, Michael; Streltsov, Simon; Best, Matthew

    2013-05-01

    Automated Feature Extraction (AFE) plays a critical role in image understanding. Imagery analysts often extract features better than AFE algorithms do because analysts use additional information. The extraction and processing of this information can be more complex than the original AFE task, which leads to a "complexity trap". This can happen, for example, when shadows from buildings guide the extraction of buildings and roads. This work proposes an AFE algorithm that extracts roads and trails by using GMTI/GPS tracking information and older, inaccurate maps of roads and trails as AFE guides.

  14. Electronic Nose Feature Extraction Methods: A Review

    PubMed Central

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-01-01

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power spectral density (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology. PMID:26540056
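
    For the first family of methods (features from the original response curves), here is a sketch of typical curve descriptors; the particular feature set (peak response, time-to-peak, maximum slope, area) is illustrative rather than drawn from any one surveyed paper.

    ```python
    import numpy as np

    def response_curve_features(t, r, n_baseline=10):
        """Features from one gas-sensor response curve r(t) (sketch)."""
        dr = r - r[:n_baseline].mean()               # baseline-corrected response
        area = np.sum((dr[:-1] + dr[1:]) / 2 * np.diff(t))  # trapezoid integral
        return np.array([
            dr.max(),                                # peak response
            t[dr.argmax()],                          # time to peak
            np.gradient(dr, t).max(),                # maximum rising slope
            area,                                    # area under the response
        ])
    ```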

  15. ECG Feature Extraction using Time Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Nair, Mahesh A.

    The proposed algorithm is a novel method for the feature extraction of ECG beats based on wavelet transforms. Combining two well-accepted methods, the Pan-Tompkins algorithm and wavelet decomposition, this system is implemented with the help of MATLAB. The focus of this work is to implement an algorithm that can extract the features of ECG beats with high accuracy. The performance of the system is evaluated in a pilot study using the MIT-BIH Arrhythmia database.

  16. Extraction of edge feature in cardiovascular image

    NASA Astrophysics Data System (ADS)

    Lu, Jianrong; Chen, Dongqing; Yu, Daoyin; Liu, Xiaojun

    2001-09-01

    Extraction of edge features and accurate measurement of vascular diameter in cardiovascular images are the bases for labeling the coronary hierarchy, refined 3D reconstruction of the coronary arterial tree, and accurate fusion between the computed 3D vascular trees and other views. In order to extract vessels from the image, grayscale minimization with a circle template and differential edge detection are put forward. Edge pixels of the coronary artery are located by maximizing the differential value. The edge lines are determined after the edge pixels are smoothed by a B-spline function. The quality of the feature extraction is demonstrated by excellent performance in computer simulation and actual application.

  17. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.

  18. Large datasets: Segmentation, feature extraction, and compression

    SciTech Connect

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  19. Generalized feature extraction using expansion matching.

    PubMed

    Nandy, D; Ben-Arie, J

    1999-01-01

    A novel generalized feature extraction method based on the expansion matching (EXM) method and on the Karhunen-Loeve transform (KLT) is presented. The method provides an efficient way to locate complex features of interest, like corners and junctions, with a reduced number of filtering operations. The EXM method is used to design optimal detectors for a set of model elementary features. The KL representation of these model EXM detectors is used to filter the image and detect candidate interest points from the energy peaks of the eigen coefficients. The KL coefficients at these candidate points are then used to efficiently reconstruct the response and differentiate real junctions and corners from arbitrary features in the image. The method is robust to additive noise and is able to successfully extract, classify, and find the myriad compositions of corner and junction features formed by combinations of two or more edges or lines. This method differs from previous works in several aspects. First, it treats the features not as distinct entities, but as combinations of elementary features. Second, it employs an optimal set of elementary feature detectors based on the EXM approach. Third, the method incorporates a significant reduction in computational complexity by representing a large set of EXM filters by a relatively small number of eigen filters derived by the KL transform of the basic EXM filter set. This is a novel application of the KL transform, which is usually employed to represent signals and not impulse responses as in our present work. PMID:18262862

  20. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

    Linear features are usually extracted from SAR imagery by edge detectors derived from the contrast-ratio edge detector with a constant probability of false alarm. On the other hand, the Hough Transform (HT) is an elegant way of extracting global features like curve segments from binary edge images. The Randomized Hough Transform can reduce the computation time and memory usage of the HT drastically, but it invalidates a great number of accumulator cells during random sampling. In this paper, we propose a new approach to extract linear features from SAR imagery, an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The improved method makes full use of the directional information of each candidate edge point so as to avoid invalid accumulation. Applied results are in good agreement with the theoretical study, and the main linear features in the SAR imagery were extracted automatically. The method saves storage space and computation time, which shows its effectiveness and applicability.
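
    The edge-detection-plus-Hough pipeline can be sketched with scikit-image, whose progressive probabilistic Hough transform is a readily available relative of the Randomized Hough Transform; it does not implement the paper's direction-aware accumulation, and the test image and parameters are stand-ins for a SAR scene.

    ```python
    from skimage import data, feature, transform

    image = data.camera()                       # stand-in for SAR imagery
    edges = feature.canny(image, sigma=2.0)     # binary edge image
    segments = transform.probabilistic_hough_line(
        edges, threshold=10, line_length=50, line_gap=3)
    for (x0, y0), (x1, y1) in segments[:5]:
        print(f"line segment ({x0},{y0}) -> ({x1},{y1})")
    ```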

  1. Adaptive unsupervised slow feature analysis for feature extraction

    NASA Astrophysics Data System (ADS)

    Gu, Xingjian; Liu, Chuancai; Wang, Sheng

    2015-03-01

    Slow feature analysis (SFA) extracts slowly varying features out of the input data and has been successfully applied to pattern recognition. However, SFA heavily relies on the constructed time series when applied to databases that neither have obvious temporal structure nor have label information. Traditional SFA constructs time series based on a k-nearest neighborhood (k-NN) criterion. Specifically, the time series set constructed by the k-NN criterion is likely to include noisy time series or lose suitable time series because the parameter k is difficult to determine. To overcome these problems, a method called adaptive unsupervised slow feature analysis (AUSFA) is proposed. First, AUSFA designs an adaptive criterion to generate time series for characterizing the submanifold. The constructed time series have two properties: (1) two points of a time series lie on the same submanifold and (2) the submanifold of the time series is smooth. Second, AUSFA seeks projections that simultaneously minimize the slowness scatter and maximize the fastness scatter to extract slow discriminant features. Extensive experimental results on three benchmark face databases demonstrate the effectiveness of our proposed method.
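
    For reference, a minimal linear SFA on an already-constructed time series: whiten the data, then take the directions in which the temporal derivative has the smallest variance. AUSFA's adaptive time-series construction and its slowness/fastness scatter trade-off are not reproduced here.

    ```python
    import numpy as np

    def linear_sfa(X, n_components=2):
        """X: (T, d) time series, rows ordered in time; assumes a
        full-rank covariance. Returns the n slowest output signals."""
        Xc = X - X.mean(axis=0)
        w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
        Z = Xc @ (V / np.sqrt(w))              # whitened signals
        dZ = np.diff(Z, axis=0)                # temporal differences
        w2, V2 = np.linalg.eigh(np.cov(dZ, rowvar=False))
        return Z @ V2[:, :n_components]        # smallest derivative variance
    ```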

  2. Report of subpanel on feature extraction

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The state of knowledge in feature extraction for Earth resource observation systems is reviewed and research tasks are proposed. Issues in the subpixel feature estimation problem are defined as: (1) the identification of image models which adequately describe the data and the sensor it is using; (2) the construction of local feature models based on those image models; and (3) the problem of trying to understand these effects of preprocessing on the entire process. The development of ground control point (GCP) libraries for automated selection presents two concerns. One is the organization of these GCP libraries for rectification problems, i.e., the problems of automatically selecting by computer the specific GCP's for particular registration tasks. Second is the importance of integrating ground control patterns in a data base management system, allowing interface to a large number of sensor image types with an automatic selection system. The development of data validation criteria for the comparison of different extraction techniques is also discussed.

  3. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must consider not only the sensitivity of the features being used, but also the correlation of the parameters being compared.

  4. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts, and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) shocks, (2) vortex cores, (3) regions of recirculation, (4) boundary layers, (5) wakes. Three papers and an initial specification for the FX (Fluid eXtraction toolkit) Programmer's Guide are included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.

  5. Modified kernel-based nonlinear feature extraction.

    SciTech Connect

    Ma, J.; Perkins, S. J.; Theiler, J. P.; Ahalt, S.

    2002-01-01

    Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation inherent in these algorithms -- the maximal number of features extracted by them is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of those KFE algorithms (MKFE). This algorithm is developed from a special form of scatter matrix, whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation of those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.

  6. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary images has already been acquired and much more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.

  7. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of features such as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for secondary flow. This paper will present a definition of secondary flow and one approach for automatically detecting and visualizing it.

  8. Extracting features to recognize partially occluded objects

    NASA Astrophysics Data System (ADS)

    Koch, Mark W.; Ramamurthy, Arjun

    1991-12-01

    Noisy objects, partially occluded objects, and objects in random positions and orientations cause significant problems for current robotic vision systems. In the past, an association graph has formed the basis for many model based matching methods. However, the association graph has many false nodes due to local and noisy features. Objects having similar local structures create many false arcs in the association graph. The maximal clique recursive and tree search procedures for finding sets of structurally compatible matches have exponential time complexity, due to these false arcs and nodes. This represents a real problem as the number of objects appearing in the scene and the model set size increase. Our approach is similar to randomized string matching algorithms. Points on edges represent the model features. A fingerprint defines a subset of model features that uniquely identify the model. These fingerprints remove the ambiguities and inconsistencies present in the association graph and eliminate the problems of Turney's connected salient features. The vision system chooses the fingerprints at random, preventing a knowledgeable adversary from constructing examples that destroy the advantages of fingerprinting. Fingerprints consist of local model features called point vectors. We have developed a heuristic approach for extracting fingerprints from a set of model objects. A list of connected and unconnected scene edges represent the scene. A Hough transform type approach matches model fingerprints to scene features. Results are given for scenes containing varying amounts of occlusion.

  9. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics. PMID:24818244
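
    The intra/inter mode choice can be illustrated on a single descriptor. Uniform quantization and a nonzero-symbol count as the rate proxy are simplifications of the paper's rate-distortion-optimized mode decision, and the step size is an assumption.

    ```python
    import numpy as np

    def encode_descriptor(d, prev=None, step=0.05):
        """Intra mode quantizes the descriptor; inter mode quantizes the
        residual against the matched descriptor from the previous frame."""
        intra = np.round(d / step).astype(int)
        if prev is None:
            return "intra", intra
        inter = np.round((d - prev) / step).astype(int)
        # Crude rate proxy: fewer nonzero symbols -> cheaper to code.
        if np.count_nonzero(inter) < np.count_nonzero(intra):
            return "inter", inter
        return "intra", intra
    ```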

  10. Extraction of geographic features using multioperator fusion

    NASA Astrophysics Data System (ADS)

    Dherete, Pierre; Desachy, Jacky

    1998-12-01

    Automatic analysis of remote sensing images faces different problems: context diversity and complexity of information. To simplify identification and to limit the search space, we use extra data and knowledge to aid scene understanding. The diversity and imprecision of information sources generate new problems; fuzzy logic theory is used to deal with imprecision. Many extraction algorithms are used to provide a more reliable result. Extraction may be performed either globally on the whole image or locally using information from databases. Each extractor produces a map of certainty factors for a given type of geographic feature according to its characteristics: radiometry, color, linearity, etc. The maps contain wrong detections due to imperfections of the detectors or incompleteness of the generic models, so we fuse them into a new map with the best credibility, on which a dynamic-programming step finds an optimal path even if the linear feature is partially occluded. Because the path is generally erratic due to noise, a snake-like technique then smooths it to remove the erratic parts and to tune the level of detail required to represent the geographic features on a map of a given scale. The result is used to update the databases.

  11. Hyperspectral image feature extraction accelerated by GPU

    NASA Astrophysics Data System (ADS)

    Qu, HaiCheng; Zhang, Ye; Lin, Zhouhan; Chen, Hao

    2012-10-01

    PCA (principal components analysis) is the most basic method of dimension reduction for high-dimensional data, and it plays a significant role in hyperspectral data compression, decorrelation, denoising, and feature extraction. With the development of imaging technology, the number of spectral bands in a hyperspectral image has grown larger and larger, and the data cube has become bigger in recent years. As a consequence, dimension reduction is more and more time-consuming. Fortunately, GPU-based high-performance computing has opened up a novel approach for hyperspectral data processing. This paper concerns the two main processes in hyperspectral image feature extraction: (1) calculation of the transformation matrix and (2) transformation in the spectral dimension. These two processes are computationally intensive and data-intensive, respectively. Through the introduction of GPU parallel computing technology, an algorithm containing PCA transformation based on eigenvalue decomposition (EVD) and feature matching identification is implemented, with the aim of exploring the characteristics of GPU parallel computing and the prospects of GPU application in hyperspectral image processing by analysing thread invocation and the speedup of the algorithm. The experimental results show that the algorithm reaches a 12x speedup in total, with certain steps reaching speedups of up to 270 times.
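
    Both stages named in the abstract appear in this CPU sketch; a GPU port would swap numpy for a drop-in replacement such as cupy. The cube shape and component count are assumptions.

    ```python
    import numpy as np

    def pca_evd(cube, n_components=10):
        """cube: (rows, cols, bands) hyperspectral image."""
        X = cube.reshape(-1, cube.shape[-1]).astype(float)
        Xc = X - X.mean(axis=0)
        # Stage 1: transformation matrix from the band covariance (EVD).
        w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
        T = V[:, np.argsort(w)[::-1][:n_components]]
        # Stage 2: transformation in the spectral dimension (per pixel).
        return (Xc @ T).reshape(cube.shape[0], cube.shape[1], n_components)
    ```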

  12. ECG feature extraction and disease diagnosis.

    PubMed

    Bhyri, Channappa; Hamde, S T; Waghmare, L M

    2011-01-01

    An important factor to consider when using electrocardiogram findings for clinical decision making is that the waveforms are influenced by normal physiological and technical factors as well as by pathophysiological factors. In this paper, we propose a method for feature extraction and heart disease diagnosis using the wavelet transform (WT) technique and LabVIEW (Laboratory Virtual Instrument Engineering Workbench). LabVIEW signal processing tools are used to denoise the signal before applying the developed algorithm for feature extraction. First, we developed an algorithm for R-peak detection using the Haar wavelet. After 4th-level decomposition of the ECG signal, the detail coefficient is squared, and the standard deviation of the squared detail coefficient is used as the threshold for detection of R-peaks. Second, we used the Daubechies (db6) wavelet for the low-resolution signals; after cross-checking the R-peak locations in the 4th-level low-resolution Daubechies signal, P waves and T waves are detected. Other features of diagnostic importance, mainly heart rate, R-wave width, Q-wave width, T-wave amplitude and duration, ST segment, and frontal plane axis, are also extracted, and a scoring pattern is applied for the purpose of heart disease diagnosis. In this study, detection of tachycardia, bradycardia, left ventricular hypertrophy, right ventricular hypertrophy, and myocardial infarction has been considered. The CSE ECG database, which contains 5000 samples recorded at a sampling frequency of 500 Hz, and the ECG database created by the S.G.G.S. Institute of Engineering and Technology, Nanded (Maharashtra), have been used. PMID:21770825
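
    The R-peak step translates almost directly into code with PyWavelets; the refractory-period spacing and the mapping of level-4 coefficient indices back to the original sampling grid (factor 2^4) are assumptions of this sketch.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def detect_r_peaks(ecg, fs=500):
        coeffs = pywt.wavedec(ecg, "haar", level=4)   # [cA4, cD4, cD3, cD2, cD1]
        energy = coeffs[1] ** 2                       # squared level-4 detail
        # Threshold: standard deviation of the squared detail coefficient.
        peaks, _ = find_peaks(energy, height=energy.std(),
                              distance=max(1, int(0.25 * fs / 16)))
        return peaks * 16                             # approx. original-rate indices
    ```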

  13. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    NASA Astrophysics Data System (ADS)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-time concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project uses remote sensing technologies to determine the population in probable danger by mapping and attributing building features using LiDAR datasets and satellite imagery. A free mapping software package named Google Earth Pro (GEP) is used to load these satellite images as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time, and budget. Accuracy-wise, geotagging with GEP depends on either the satellite imagery or the half-meter-resolution orthophotographs obtained during LiDAR acquisition, not on GPS with three-meter accuracy. The attributed building features are overlaid on the Phil-LiDAR 1 flood hazard map in order to determine the exposed population. Building features obtained from satellite imagery may be used not only in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imagery.

  14. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  15. Concrete Slump Classification using GLCM Feature Extraction

    NASA Astrophysics Data System (ADS)

    Andayani, Relly; Madenda, Syarifudin

    2016-05-01

    Digital image processing technologies have been widely applied in analyzing concrete structures because of their accuracy and real-time results. The aim of this study is to classify concrete slump by using image processing techniques. For this purpose, concrete mixes of 30 MPa design compression strength with slumps of 0-10 mm, 10-30 mm, 30-60 mm, and 60-180 mm were analysed. Images were acquired by a Nikon D-7000 camera set up at high resolution. In the first step, the RGB image is converted to grayscale and then cropped to 1024 x 1024 pixels. With an open-source program, the cropped images are analysed to extract GLCM features. The results show that for higher slump, contrast gets lower, while correlation, energy, and homogeneity get higher.
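
    The four reported GLCM properties map directly onto scikit-image calls (spelled greycomatrix/greycoprops before version 0.19); the distance and angle choices below are assumptions.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_img):
        """gray_img: 8-bit grayscale array (e.g. a 1024 x 1024 crop)."""
        glcm = graycomatrix(gray_img, distances=[1],
                            angles=[0, np.pi / 2], levels=256,
                            symmetric=True, normed=True)
        return {p: float(graycoprops(glcm, p).mean())
                for p in ("contrast", "correlation", "energy", "homogeneity")}
    ```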

  16. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts, and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: shocks; vortex cores; regions of recirculation; boundary layers; wakes.

  17. Feature extraction for change analysis in SAR time series

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2015-10-01

    In remote sensing, change detection represents a broad field of research. If time series data are available, change detection can be used for monitoring applications. These applications require regular image acquisitions at an identical time of day along a defined period. Among remote sensing sensors, radar is especially well suited to applications requiring regularity, since it is independent of most weather and atmospheric influences; furthermore, regarding image acquisitions, the time of day plays no role due to independence from daylight. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high-resolution radar images suitable for the analysis of dense built-up areas. In a former study, we presented the change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of detected changes in the time series. This categorization is motivated by the fact that it is a poor statement only to describe where and when a specific area has changed; at least as important is a statement about what has caused the change. The focus is set on the analysis of so-called high activity areas (HAA), representing areas changing at least four times along the investigated period. As a first step in categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating at this object-based blob level, several features are extracted, comprising shape-based, radiometric, statistical, and morphological values and one context feature based on a segmentation of the HAAs. This segmentation builds on morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water, and unclassified. A specific HA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments. In combination, also surrounding GIS information

  18. Analysis of MABEL data for feature extraction

    NASA Astrophysics Data System (ADS)

    Magruder, L.; Neuenschwander, A. L.; Wharton, M.

    2011-12-01

    MABEL (Multiple Altimeter Beam Experimental Lidar) is a test-bed representation of ICESat-2, with a high repetition rate, low laser pulse energy, and photon-counting detection on an airborne platform. MABEL data can be scaled to simulate ICESat-2 data products and serve as a demonstration that is proving critical for model validation and algorithm development. The recent MABEL flights over White Sands Missile Range (WSMR) in New Mexico have provided especially useful insight into potential processing schemes for this type of data, as well as how to extract specific geophysical or passive optical features. Although the MABEL data have not been precisely geolocated to date, approximate geolocations were derived using interpolated GPS data and aircraft attitude. In addition to indicating the expected signal response over specific types of terrain/targets, the availability of MABEL data has also facilitated preliminary development of new types of noise filtering for photon-counting data products that will contribute to future capabilities for ICESat-2 data extraction. One particularly useful methodology uses a combination of cluster weighting and neighbor-count weighting. For weighting clustered points, each individual point is tagged with an average distance to its neighbors within an established threshold. Histograms of the mean values are created for both a pure-noise section and a signal-noise mixture section, and a deconvolution of these histograms gives a normal distribution for the signal. A fitted Gaussian is used to calculate a threshold for the average distances. This removes locally sparse points, and then a regular neighborhood-count filter is used for a larger search radius. The approach works well in high-noise cases and allows for improved signal recovery without being computationally expensive. One specific MABEL nadir channel ground track provided returns from several distinct ground markers that included multiple mounds, an elevated
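
    The neighborhood-count stage of the filtering can be sketched with a k-d tree over (along-track, elevation) photon coordinates; the radius and count threshold below are assumptions, and the histogram-deconvolution step that sets the threshold adaptively is omitted.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def neighbor_count_filter(along_track, elevation, radius=5.0, min_nbrs=4):
        """Keep photons with >= min_nbrs other photons within `radius`."""
        pts = np.column_stack([along_track, elevation])
        tree = cKDTree(pts)
        counts = np.array([len(tree.query_ball_point(p, radius)) - 1
                           for p in pts])         # -1 excludes the point itself
        return counts >= min_nbrs                 # boolean keep-mask
    ```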

  19. Xenbase: Core features, data acquisition, and data processing.

    PubMed

    James-Zorn, Christina; Ponferrada, Virgillio G; Burns, Kevin A; Fortriede, Joshua D; Lotay, Vaneet S; Liu, Yu; Brad Karpinka, J; Karimi, Kamran; Zorn, Aaron M; Vize, Peter D

    2015-08-01

    Xenbase, the Xenopus model organism database (www.xenbase.org), is a cloud-based, web-accessible resource that integrates the diverse genomic and biological data from Xenopus research. Xenopus frogs are one of the major vertebrate animal models used for biomedical research, and Xenbase is the central repository for the enormous amount of data generated using this model tetrapod. The goal of Xenbase is to accelerate discovery by enabling investigators to make novel connections between molecular pathways in Xenopus and human disease. Our relational database and user-friendly interface make these data easy to query and allow investigators to quickly interrogate and link different data types in ways that would otherwise be difficult, time consuming, or impossible. Xenbase also enhances the value of these data through high-quality gene expression curation and data integration, by providing bioinformatics tools optimized for Xenopus experiments, and by linking Xenopus data to other model organisms and to human data. Xenbase draws in data via pipelines that download data, parse the content, and save them into appropriate files and database tables. Furthermore, Xenbase makes these data accessible to the broader biomedical community by continually providing annotated data updates to organizations such as NCBI, UniProtKB, and Ensembl. Here, we describe our bioinformatics, genome-browsing tools, data acquisition and sharing, our community submitted and literature curation pipelines, text-mining support, gene page features, and the curation of gene nomenclature and gene models. PMID:26150211

  20. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
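
    A loose sketch of the alternation, with plain PCA standing in for the constrained subspace learning and a linear SVM's weight magnitudes standing in for the paper's SVM-based selection; the bootstrapped ROC-AUC parameter optimization is omitted, and all parameter choices are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import LinearSVC

    def iterate_extract_select(X, y, n_keep=50, n_iters=3):
        selected = np.arange(X.shape[1])
        for _ in range(n_iters):
            svm = LinearSVC(C=1.0, max_iter=5000).fit(X[:, selected], y)
            rank = np.abs(svm.coef_).max(axis=0)      # per-feature relevance
            selected = selected[np.argsort(rank)[::-1][:n_keep]]
        feats = PCA(n_components=min(10, n_keep)).fit_transform(X[:, selected])
        return selected, feats
    ```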

  1. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.

  2. Munitions related feature extraction from LIDAR data.

    SciTech Connect

    Roberts, Barry L.

    2010-06-01

    The characterization of former military munitions ranges is critical in the identification of areas likely to contain residual unexploded ordnance (UXO). Although these ranges are large, often covering tens-of-thousands of acres, the actual target areas represent only a small fraction of the sites. The challenge is that many of these sites do not have records indicating locations of former target areas. The identification of target areas is critical in the characterization and remediation of these sites. The Strategic Environmental Research and Development Program (SERDP) and Environmental Security Technology Certification Program (ESTCP) of the DoD have been developing and implementing techniques for the efficient characterization of large munitions ranges. As part of this process, high-resolution LIDAR terrain data sets have been collected over several former ranges. These data sets have been shown to contain information relating to former munitions usage at these ranges, specifically terrain cratering due to high-explosives detonations. The location and relative intensity of crater features can provide information critical in reconstructing the usage history of a range, and indicate areas most likely to contain UXO. We have developed an automated procedure using an adaptation of the Circular Hough Transform for the identification of crater features in LIDAR terrain data. The Circular Hough Transform is highly adept at finding circular features (craters) in noisy terrain data sets. This technique has the ability to find features of a specific radius providing a means of filtering features based on expected scale and providing additional spatial characterization of the identified feature. This method of automated crater identification has been applied to several former munitions ranges with positive results.
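
    The crater search can be sketched with scikit-image's circular Hough transform; the radius range, edge detector settings, and peak count are assumptions, and `terrain` stands in for a LIDAR-derived elevation grid.

    ```python
    import numpy as np
    from skimage import feature, transform

    def find_crater_candidates(terrain, radii=np.arange(5, 30, 2)):
        edges = feature.canny(terrain.astype(float), sigma=2.0)
        hough = transform.hough_circle(edges, radii)   # one plane per radius
        strength, cx, cy, r = transform.hough_circle_peaks(
            hough, radii, total_num_peaks=20)
        return list(zip(cx, cy, r, strength))          # (x, y, radius, score)
    ```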

  3. Noise, Edge Extraction and Visibility of Features

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.

    2005-01-01

    Noise, whether due to the image-gathering device or some other source, reduces the visibility of fine features in an image. Several techniques attempt to mitigate the impact of noise by performing a low-pass filtering operation on the acquired data. This is based on the assumption that the uncorrelated noise has high-frequency content and thus will be suppressed by low-pass filtering. A consequence of this operation is that edges in a noisy image also tend to get blurred and, in some cases, may be lost entirely. In this paper, we quantitatively assess the impact of noise on fine feature visibility by using computer-generated targets of known spatial detail. Additionally, we develop a new scheme for noise reduction based on the connectivity of edge features. The overall effect of this scheme is to reduce overall noise, yet retain the high-frequency content that makes edge features sharp.

  4. Extracting textural features from tactile sensors.

    PubMed

    Edwards, J; Lawry, J; Rossiter, J; Melhuish, C

    2008-09-01

    This paper describes an experiment to quantify texture using an artificial finger equipped with a microphone to detect frictional sound. Using a microphone to record tribological data is a biologically inspired approach that emulates the Pacinian corpuscle. Artificial surfaces were created to constrain the subsequent analysis to specific textures. Recordings of the artificial surfaces were made to create a library of frictional sounds for data analysis. These recordings were mapped to the frequency domain using fast Fourier transforms for direct comparison, manipulation and quantifiable analysis. Numerical features such as modal frequency and average value were calculated to analyze the data and compared with attributes generated from principal component analysis (PCA). It was found that numerical features work well for highly constrained data but cannot classify multiple textural elements. PCA groups textures according to a natural similarity. Classification of the recordings using k nearest neighbors shows a high accuracy for PCA data. Clustering of the PCA data shows that similar discs are grouped together with few classification errors. In contrast, clustering of numerical features produces erroneous classification by splitting discs between clusters. The temperature of the finger is shown to have a direct relation to some of the features and subsequent data in PCA. PMID:18583731
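
    A minimal sketch of the pipeline described above: frictional-sound recordings mapped to FFT magnitude spectra, reduced by PCA, and classified with k nearest neighbors. The recordings array and labels are stand-ins for the paper's library of artificial surfaces.

    ```python
    # Hedged sketch: FFT spectra -> PCA -> kNN texture classification.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.RandomState(1)
    recordings = rng.randn(60, 4096)      # 60 hypothetical clips, 4096 samples each
    labels = np.repeat(np.arange(6), 10)  # 6 artificial surface textures

    spectra = np.abs(np.fft.rfft(recordings, axis=1))  # map to frequency domain
    components = PCA(n_components=10).fit_transform(spectra)

    knn = KNeighborsClassifier(n_neighbors=3)
    print("CV accuracy:", cross_val_score(knn, components, labels, cv=5).mean())
    ```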

  5. Feature extraction from Doppler ultrasound signals for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2005-11-01

    This paper presented the assessment of feature extraction methods used in automated diagnosis of arterial diseases. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Different feature extraction methods were used to obtain feature vectors from ophthalmic and internal carotid arterial Doppler signals. In addition, the problem of selecting relevant features among those available for the classification of Doppler signals was addressed. Multilayer perceptron neural networks (MLPNNs) with different inputs (feature vectors) were used for diagnosis of ophthalmic and internal carotid arterial diseases. The feature extraction methods were assessed by taking into consideration the performances of the MLPNNs, which were evaluated by convergence rate (number of training epochs) and total classification accuracy. Finally, some conclusions were drawn concerning the efficiency of the discrete wavelet transform as a feature extraction method for the diagnosis of ophthalmic and internal carotid arterial diseases. PMID:16278106
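
    As a hedged illustration of wavelet-based feature extraction of the kind assessed above, the sketch below summarizes each discrete wavelet transform subband of a 1-D signal by simple statistics. The wavelet, decomposition level, statistics, and placeholder waveform are all illustrative choices.

    ```python
    # Hedged sketch: DWT subband statistics as a feature vector for a classifier.
    import numpy as np
    import pywt

    def dwt_features(signal, wavelet="db4", level=4):
        """Summarize each DWT subband by simple statistics."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        feats = []
        for c in coeffs:                  # approximation + detail subbands
            feats += [np.mean(np.abs(c)), np.std(c)]
        return np.array(feats)

    doppler = np.sin(np.linspace(0, 40 * np.pi, 2048))  # placeholder waveform
    print(dwt_features(doppler))          # would feed an MLP classifier
    ```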

  6. EFFICIENT FEATURE-BASED CONTOUR EXTRACTION.

    SciTech Connect

    Gattiker, J. R.

    2001-01-01

    Extraction of contours in binary images is an important element of object recognition. This paper discusses a more efficient approach to contour representation and generation. The approach defines a bounding polygon by its vertices rather than by all enclosing pixels, which is in itself a more compact representation. These corners can be identified by convolution of the image with a 3 x 3 filter. When the corners are organized by their connecting orientation (identified by the convolution) and type (inside or outside), connectivity characteristics can be articulated that highly constrain the task of sorting the vertices into ordered boundary lists. The search for the next bounding-polygon vertex is reduced to a one-dimensional minimum-distance search rather than the standard, more intensive two-dimensional nearest-Euclidean-neighbor search.
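
    As a hedged sketch of convolution-style corner detection on a binary image, the snippet below uses the classic 2x2 "bit-quad" patterns rather than the paper's 3 x 3 filter: windows with exactly one foreground pixel are convex corners, windows with exactly three are concave. The encoding is equivalent to correlating with the weight mask [[1, 2], [4, 8]].

    ```python
    # Hedged sketch: bit-quad counting of convex/concave corners (2x2 windows,
    # not the paper's 3x3 filter).
    import numpy as np

    img = np.zeros((8, 8), dtype=int)
    img[2:6, 2:6] = 1                     # a square: 4 convex corners expected

    b = np.pad(img, 1)
    # Encode each 2x2 window as a 4-bit code with weights 1, 2, 4, 8.
    codes = b[:-1, :-1] + 2 * b[:-1, 1:] + 4 * b[1:, :-1] + 8 * b[1:, 1:]

    convex = np.isin(codes, [1, 2, 4, 8]).sum()      # exactly one pixel set
    concave = np.isin(codes, [7, 11, 13, 14]).sum()  # exactly three pixels set
    print("convex corners:", convex, "concave corners:", concave)
    ```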

  7. Renyi information for extracting features from TFDs

    NASA Astrophysics Data System (ADS)

    Williams, William J.

    2001-11-01

    Renyi information was introduced to time-frequency analysis in 1991 by Williams et al. at SPIE. The Renyi measure provides a single objective indication of the complexity of a signal as reflected in its time-frequency representation. The Gabor logon is the minimum-complexity signal, and its informational value is zero bits; all other signals exhibit increased Renyi information. Certain time-frequency distributions are information invariant, meaning that their Renyi information does not change under time shift, frequency shift, and scale changes. The Reduced Interference Distributions are information invariant, so a given signal within that class will always have the same Renyi result. This can be used to survey large data sequences in order to isolate certain types of signals. One application is to extract instances of such a signal from a streaming RID representation. Examples for temporomandibular joint clicks are provided.
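
    A minimal sketch of the third-order Renyi information of a time-frequency surface, here computed from an ordinary spectrogram standing in for a Reduced Interference Distribution; the signal and window parameters are illustrative. The measure is R_3 = (1 / (1 - 3)) * log2(sum(P^3)) for the surface P normalized to unit sum.

    ```python
    # Hedged sketch: Renyi information (order 3) of a time-frequency surface.
    import numpy as np
    from scipy.signal import spectrogram

    fs = 1024.0
    t = np.arange(0, 1, 1 / fs)
    # A Gabor-like logon: Gaussian-windowed tone burst.
    x = np.exp(-((t - 0.5) ** 2) / 0.005) * np.cos(2 * np.pi * 128 * t)

    _, _, S = spectrogram(x, fs=fs, nperseg=128)
    P = S / S.sum()                        # normalize to unit "energy"
    renyi3 = np.log2(np.sum(P ** 3)) / (1 - 3)
    print(f"Renyi information: {renyi3:.2f} bits")  # lower for more concentrated signals
    ```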

  8. The Effect of Visual Word Features on the Acquisition of Orthographic Knowledge

    ERIC Educational Resources Information Center

    Martens, Vanessa E. G.; de Jong, Peter F.

    2006-01-01

    Research with adults has shown that the distortion of visual word features, and in particular of the multiletter features within words, hampers word recognition. In this study, "CaSe MiXiNg" was employed to examine the effect of disrupting visual word features on the acquisition of orthographic knowledge in children. During the training, 18…

  9. Direct extraction of topographic features from gray scale character images

    SciTech Connect

    Seong-Whan Lee; Young Joon Kim

    1994-12-31

    Optical character recognition (OCR) has traditionally been applied to binary-valued imagery, although text is always scanned and stored in gray scale. Binarization of a multivalued image, however, may remove important topological information from characters and introduce noise into the character background. To avoid this problem, it is essential to develop a method that minimizes the information loss due to binarization by extracting features directly from gray scale character images. In this paper, we propose a new method for the direct extraction of topographic features from gray scale character images. Comparing the proposed method with Wang and Pavlidis's method, we found that the proposed method enhances the performance of topographic feature extraction by computing the directions of principal curvature efficiently and prevents the extraction of unnecessary features. We also show that the proposed method is very effective for gray scale skeletonization compared to Levi and Montanari's method.

  10. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  11. EEG signal features extraction based on fractal dimension.

    PubMed

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-08-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance. PMID:26737209
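
    As a hedged example of a standard fractal-dimension feature of the kind used above, the sketch below implements Higuchi's estimator on a 1-D signal; the paper's two novel indices are not reproduced, and the white-noise input is only a placeholder for EEG data.

    ```python
    # Hedged sketch: Higuchi fractal dimension of a 1-D signal.
    import numpy as np

    def higuchi_fd(x, kmax=8):
        """Estimate the fractal dimension of a 1-D signal by Higuchi's algorithm."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        lengths = []
        for k in range(1, kmax + 1):
            Lk = []
            for m in range(k):
                n = (N - 1 - m) // k                 # number of steps of size k
                if n < 1:
                    continue
                dist = np.abs(np.diff(x[m::k])).sum()
                Lk.append(dist * (N - 1) / (n * k * k))  # normalized curve length
            lengths.append(np.mean(Lk))
        k_vals = np.arange(1, kmax + 1)
        # FD is the slope of log(L(k)) against log(1/k).
        slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
        return slope

    eeg = np.random.randn(1024)                      # white noise: FD close to 2
    print(f"Higuchi FD: {higuchi_fd(eeg):.2f}")
    ```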

  12. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
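
    A minimal sketch of one of the strategies named above, PCA feature extraction followed by SVM classification of ROI chips. The chip shapes, labels, and parameters are placeholders, not values from the JPL system.

    ```python
    # Hedged sketch: PCA features from ROI chips scored by an SVM.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.RandomState(2)
    rois = rng.randn(200, 32 * 32)        # 200 hypothetical 32x32 ROI chips
    labels = rng.randint(0, 2, 200)       # target vs clutter (placeholder labels)

    Xtr, Xte, ytr, yte = train_test_split(rois, labels, random_state=0)
    pca = PCA(n_components=20).fit(Xtr)   # learn components on training chips only
    clf = SVC(kernel="rbf").fit(pca.transform(Xtr), ytr)
    print("accuracy:", accuracy_score(yte, clf.predict(pca.transform(Xte))))
    ```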

  13. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    U.S. Geological Survey

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  14. Invariant facial feature extraction using biologically inspired strategies

    NASA Astrophysics Data System (ADS)

    Du, Xing; Gong, Weiguo

    2011-12-01

    In this paper, a feature extraction model for face recognition is proposed. This model is constructed by implementing three biologically inspired strategies, namely a hierarchical network, a learning mechanism of the V1 simple cells, and a data-driven attention mechanism. The hierarchical network emulates the functions of the V1 cortex to progressively extract facial features invariant to illumination, expression, slight pose change, and variations caused by local transformation of facial parts. In the network, filters that account for the local structures of the face are derived through the learning mechanism and used for the invariant feature extraction. The attention mechanism computes a saliency map for the face, and enhances the salient regions of the invariant features to further improve the performance. Experiments on the FERET and AR face databases show that the proposed model boosts the recognition accuracy effectively.

  15. Feature extraction from multiple data sources using genetic programming

    NASA Astrophysics Data System (ADS)

    Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.

    2002-08-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burn scars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  16. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    Limited awareness of deaf and hard-of-hearing people in India widens the communication gap between the deaf community and the hearing population. Sign languages are developed so that deaf and hard-of-hearing people can convey their messages by generating distinct sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.
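
    A minimal sketch of SIFT keypoint extraction from a gesture image with OpenCV, standing in for the ISL pipeline described above; the image path is a placeholder.

    ```python
    # Hedged sketch: SIFT keypoints and descriptors from a gesture image.
    import cv2

    img = cv2.imread("isl_gesture.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    print(len(keypoints), "keypoints;", descriptors.shape, "descriptor matrix")
    ```

    Matching a gesture against a reference image would typically pair these 128-dimensional descriptors with a brute-force matcher and a Lowe-style ratio test.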

  17. Fast SIFT design for real-time visual feature extraction.

    PubMed

    Chiu, Liang-Chi; Chang, Tian-Sheuan; Chen, Jiun-Yen; Chang, Nelson Yen-Chung

    2013-08-01

    Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ∼ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz. PMID:23743775

  18. The Second Language Acquisition of Number and Gender in Swahili: A Feature Reassembly Approach

    ERIC Educational Resources Information Center

    Spinner, Patti

    2013-01-01

    Much of the recent discussion surrounding the second language acquisition of morphology has centered on the question of whether learners can acquire new formal features. Lardiere's (2008, 2009) Feature Reassembly approach offers a new direction for research in this area by emphasizing the challenges presented by crosslinguistic differences in…

  19. Some Thoughts on the Contrastive Analysis of Features in Second Language Acquisition

    ERIC Educational Resources Information Center

    Lardiere, Donna

    2009-01-01

    In this article I discuss the selection and assembly of formal features in second language acquisition. Assembling the particular lexical items of a second language (L2) requires that the learner reconfigure features from the way these are represented in the first language (L1) into new formal configurations on possibly quite different types of…

  20. Re-Assembling Formal Features in Second Language Acquisition: Beyond Minimalism

    ERIC Educational Resources Information Center

    Carroll, Susanne E.

    2009-01-01

    In this commentary, Lardiere's discussion of features is compared with the use of features in constraint-based theories, and it is argued that constraint-based theories might offer a more elegant account of second language acquisition (SLA). Further evidence is reported to question the accuracy of Chierchia's (1998) Nominal Mapping Parameter.…

  1. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration; multi-temporal imagery; mathematical morphology; robust feature matching.

  2. Volumetric feature extraction and visualization of tomographic molecular imaging.

    PubMed

    Bajaj, Chandrajit; Yu, Zeyun; Auer, Manfred

    2003-01-01

    Electron tomography is useful for studying large macromolecular complexes within their cellular context. The associated problems include crowding and complexity. Data exploration and 3D visualization of complexes require rendering of tomograms as well as extraction of all features of interest. We present algorithms for fully automatic boundary segmentation and skeletonization, and demonstrate their applications in feature extraction and visualization of cellular and molecular tomographic imaging. We also introduce an interactive volumetric exploration and visualization tool (Volume Rover), which encapsulates implementations of the above volumetric image processing algorithms and additionally uses efficient multi-resolution interactive geometry and volume rendering techniques for interactive visualization. PMID:14643216

  3. Investigation of image feature extraction by a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.; Theiler, James P.; Perkins, Simon J.; Harvey, Neal R.; Szymanski, John J.; Bloch, Jeffrey J.; Mitchell, Melanie

    1999-11-01

    We describe the implementation and performance of a genetic algorithm which generates image feature extraction algorithms for remote sensing applications. We describe our basis set of primitive image operators and present our chromosomal representation of a complete algorithm. Our initial application has been geospatial feature extraction using publicly available multi-spectral aerial-photography data sets. We present the preliminary results of our analysis of the efficiency of the classic genetic operations of crossover and mutation for our application, and discuss our choice of evolutionary control parameters. We exhibit some of our evolved algorithms, and discuss possible avenues for future progress.

  4. Coevolving feature extraction agents for target recognition in SAR images

    NASA Astrophysics Data System (ADS)

    Bhanu, Bir; Krawiec, Krzysztof

    2003-09-01

    This paper describes a novel evolutionary method for automatic induction of target recognition procedures from examples. The learning process starts with training data containing SAR images with labeled targets and consists in coevolving a population of feature extraction agents that cooperate to build an appropriate representation of the input image. Features extracted by a team of cooperating agents are used to induce a machine learning classifier that is responsible for making the final decision of recognizing a target in a SAR image. Each agent (individual) contains a feature extraction procedure encoded according to the principles of linear genetic programming (LGP). As in 'plain' genetic programming, in LGP an agent's genome encodes a program that is executed and tested on the set of training images during the fitness calculation. The program is a sequence of calls to a library of parameterized operations, including, but not limited to, global and local image processing operations, elementary feature extraction, and logic and arithmetic operations. Particular calls operate on working variables that enable the program to store intermediate results and therefore design complex features. This paper contains a detailed description of the learning and recognition methodology outlined here. In the experimental part, we report and analyze the results obtained when testing the proposed approach for SAR target recognition using the MSTAR database.

  5. Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images

    PubMed Central

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-01-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step; yet depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of shape adaptive wavelet transform and shape adaptive Gabor-wavelet feature extraction on iris recognition performance. In addition, an effective noise-removing approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape adaptive Gabor-wavelet technique improves the recognition accuracy. PMID:24696801

  6. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step; yet depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of shape adaptive wavelet transform and shape adaptive Gabor-wavelet feature extraction on iris recognition performance. In addition, an effective noise-removing approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape adaptive Gabor-wavelet technique improves the recognition accuracy. PMID:24696801

  7. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.

  8. Feature extraction from multiple data sources using genetic programming.

    SciTech Connect

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.; Eads, D. R.; Galassi, M. C.; Harvey, N. R.; Perkins, S. J.; Porter, R. B.; Theiler, J. P.; Young, A. C.; Bloch, J. J.; David, N. A.; Esch-Mosher, D. M.

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  9. Image feature extraction based multiple ant colonies cooperation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on the cooperation of multiple ant colonies. First, a low resolution version of the input image is created using the Gaussian pyramid algorithm, and two ant colonies are spread on the source image and the low resolution image, respectively. The ant colony on the low resolution image uses phase congruency as its inspiration information, while the ant colony on the source image uses gradient magnitude. These two ant colonies cooperate to extract salient image features by sharing the same pheromone matrix. After the optimization process, image features are detected by thresholding the pheromone matrix. Since gradient magnitude and phase congruency of the input image are used as inspiration information for the ant colonies, our algorithm shows higher intelligence and is capable of acquiring more complete and meaningful image features than simpler edge detectors.

  10. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach exploits the algorithm's data-level concurrency and optimizes the computing flow. We first defined a three-dimensional grid in which each thread calculates a sub-block of data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. In experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data-parallelizable and arithmetically intensive algorithm parts, such as noise estimation. To further evaluate the effectiveness of G-OMNF, we used two different applications, spectral unmixing and classification, for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation meets the requirements of on-board real-time feature extraction.
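
    As a hedged, CPU-side sketch of a maximum noise fraction transform (the GPU optimizations above are not reproduced), the snippet below estimates noise from differences between adjacent pixels and solves the generalized eigenproblem that maximizes signal variance relative to noise. The cube dimensions and band count are placeholders.

    ```python
    # Hedged sketch: NumPy maximum noise fraction (MNF) feature extraction.
    import numpy as np
    from scipy.linalg import eigh

    cube = np.random.rand(64, 64, 32)     # rows x cols x bands, hypothetical
    X = cube.reshape(-1, cube.shape[2])
    X = X - X.mean(axis=0)

    # Estimate noise as differences between horizontally adjacent pixels.
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, cube.shape[2])
    cov_noise = np.cov(noise, rowvar=False) / 2.0
    cov_signal = np.cov(X, rowvar=False)

    # Generalized eigenproblem: maximize signal variance relative to noise.
    vals, vecs = eigh(cov_signal, cov_noise)
    order = np.argsort(vals)[::-1]        # largest SNR components first
    mnf = X @ vecs[:, order[:10]]         # 10 MNF feature bands
    print(mnf.shape)
    ```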

  11. Bilinear analysis for kernel selection and nonlinear feature extraction.

    PubMed

    Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou

    2007-09-01

    This paper presents a unified criterion, the Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. The FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases. PMID:18220192

  12. Genetic programming approach to extracting features from remotely sensed imagery

    SciTech Connect

    Theiler, J. P.; Perkins, S. J.; Harvey, N. R.; Szymanski, J. J.; Brumby, Steven P.

    2001-01-01

    Multi-instrument data sets present an interesting challenge to feature extraction algorithm developers. Beyond the immediate problems of spatial co-registration, the remote sensing scientist must explore a complex algorithm space in which both spatial and spectral signatures may be required to identify a feature of interest. We describe a genetic programming/supervised classifier software system, called Genie, which evolves and combines spatio-spectral image processing tools for remotely sensed imagery. We describe our representation of candidate image processing pipelines, and discuss our set of primitive image operators. Our primary application has been in the field of geospatial feature extraction, including wildfire scars and general land-cover classes, using publicly available multi-spectral imagery (MSI) and hyper-spectral imagery (HSI). Here, we demonstrate our system on Landsat 7 Enhanced Thematic Mapper (ETM+) MSI. We exhibit an evolved pipeline, and discuss its operation and performance.

  13. Feature Extraction and Selection From the Perspective of Explosive Detection

    SciTech Connect

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image that summarize the information content of the image and, in the process, provide an essential tool in image understanding. In particular, they are useful for classifying images into pre-defined classes or grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, they can be the temperature measurement (using an infra-red camera) of the area representing the pixel, the X-ray attenuation in a given volume element of a 3-d image, or even the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered features of the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components and the Euler number (the number of connected components less the number of 'holes'). Occupying an intermediate level in the feature hierarchy are texture features, which are typically derived from a group of pixels, often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection (typically a piece of luggage or a briefcase) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-ray device with provisions for computed tomography (CT) that generates one or more (depending on the number of energy levels used) digitized 3-d images.

  14. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  15. Feature extraction via KPCA for classification of gait patterns.

    PubMed

    Wu, Jianning; Wang, Jue; Liu, Li

    2007-06-01

    Automated recognition of gait pattern change is important in medical diagnostics as well as in the early identification of at-risk gait in the elderly. We evaluated the use of Kernel-based Principal Component Analysis (KPCA) to extract more gait features (i.e., to obtain more significant amounts of information about human movement) and thus to improve the classification of gait patterns. 3D gait data of 24 young and 24 elderly participants were acquired using an OPTOTRAK 3020 motion analysis system during normal walking, and a total of 36 gait spatio-temporal and kinematic variables were extracted from the recorded data. KPCA was used first for nonlinear feature extraction to then evaluate its effect on a subsequent classification in combination with learning algorithms such as support vector machines (SVMs). Cross-validation test results indicated that the proposed technique could allow spreading the information about the gait's kinematic structure into more nonlinear principal components, thus providing additional discriminatory information for the improvement of gait classification performance. The feature extraction ability of KPCA was affected slightly with different kernel functions as polynomial and radial basis function. The combination of KPCA and SVM could identify young-elderly gait patterns with 91% accuracy, resulting in a markedly improved performance compared to the combination of PCA and SVM. These results suggest that nonlinear feature extraction by KPCA improves the classification of young-elderly gait patterns, and holds considerable potential for future applications in direct dimensionality reduction and interpretation of multiple gait signals. PMID:17509708
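
    A minimal sketch of the KPCA + SVM combination evaluated above, on stand-in gait feature vectors (the paper used 36 spatio-temporal and kinematic variables from 24 young and 24 elderly participants). Kernel parameters and component counts are illustrative.

    ```python
    # Hedged sketch: kernel PCA feature extraction followed by SVM classification.
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.RandomState(3)
    gait = rng.randn(48, 36)              # 24 young + 24 elderly, 36 variables
    age_group = np.repeat([0, 1], 24)

    model = make_pipeline(
        KernelPCA(n_components=10, kernel="rbf", gamma=0.05),  # nonlinear extraction
        SVC(kernel="linear"),
    )
    print("CV accuracy:", cross_val_score(model, gait, age_group, cv=4).mean())
    ```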

  16. The Acquisition of Interpretable Features in L2 Spanish: Personal "a"

    ERIC Educational Resources Information Center

    Guijarro-Fuentes, Pedro

    2012-01-01

    This paper examines the acquisition of interpretable features in English second language (L2) learners of Spanish by investigating the personal preposition a in Spanish. The distribution of a in direct object NPs relates to the animacy/specificity of the NP, the animacy/agentivity of the subject, and the semantics of the predicate (Torrego, 1998;…

  17. Features or Parameters: Which One Makes Second Language Acquisition Easier, and More Interesting to Study?

    ERIC Educational Resources Information Center

    Slabakova, Roumyana

    2009-01-01

    While agreeing with Lardiere that the "parameter-resetting" approach to understanding second language acquisition (SLA) needs rethinking, it is suggested that a more construction-based perspective runs the risk of losing deductive and explanatory power. An alternative is to investigate the constraints on feature assembly/re-assembly in second…

  18. A Spiking Neural Network in sEMG Feature Extraction.

    PubMed

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons, with a spiking neuron layer with mutual inhibition assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated; results showed nearly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  19. A Spiking Neural Network in sEMG Feature Extraction

    PubMed Central

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons, with a spiking neuron layer with mutual inhibition assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated; results showed nearly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  20. Event extraction with complex event classification using rich features.

    PubMed

    Miwa, Makoto; Saetre, Rune; Kim, Jin-Dong; Tsujii, Jun'ichi

    2010-02-01

    Biomedical Natural Language Processing (BioNLP) attempts to capture biomedical phenomena from texts by extracting relations between biomedical entities (i.e. proteins and genes). Traditionally, only binary relations have been extracted from large numbers of published papers. Recently, more complex relations (biomolecular events) have also been extracted. Such events may include several entities or other relations. To evaluate the performance of text mining systems, several shared task challenges have been arranged for the BioNLP community. With a common and consistent task setting, the BioNLP'09 shared task evaluated complex biomolecular events such as binding and regulation. Finding these events automatically is important in order to improve biomedical event extraction systems. In the present paper, we propose an automatic event extraction system, which contains a model for complex events, by solving a classification problem with rich features. The main contributions of the present paper are: (1) the proposal of an effective bio-event detection method using machine learning, (2) provision of a high-performance event extraction system, and (3) the execution of a quantitative error analysis. The proposed complex (binding and regulation) event detector outperforms the best system from the BioNLP'09 shared task challenge. PMID:20183879

  1. Feature extraction on local jet space for texture classification

    NASA Astrophysics Data System (ADS)

    Oliveira, Marcos William da Silva; da Silva, Núbia Rosa; Manzanera, Antoine; Bruno, Odemir Martinez

    2015-12-01

    This study analyzes texture pattern recognition in the local jet space with the aim of improving texture characterization. Local jets decompose the image based on partial derivatives, allowing texture feature extraction to be exploited at different levels of geometrical structure. Each local jet component evidences a different local pattern, such as flat regions, directional variations, and concavity or convexity. Subsequently, a texture descriptor is used to extract features from the 0th-, 1st- and 2nd-derivative components. Four well-known databases (Brodatz, Vistex, Usptex and Outex) and four texture descriptors (Fourier descriptors, Gabor filters, Local Binary Pattern and Local Binary Pattern Variance) were used to validate the idea, showing in most cases an increase in success rates.
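
    As a hedged sketch of computing local jet components, the snippet below uses Gaussian derivative filters up to order 2; each resulting component could then be fed to any of the texture descriptors named above. The image, scale, and the final statistic are placeholders.

    ```python
    # Hedged sketch: local jet (derivatives up to order 2) via Gaussian filters.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    img = np.random.rand(128, 128)        # placeholder texture image
    sigma = 1.5

    jet = {
        "L":   gaussian_filter(img, sigma, order=(0, 0)),  # 0th order: smoothed image
        "Ly":  gaussian_filter(img, sigma, order=(1, 0)),  # 1st-order derivatives
        "Lx":  gaussian_filter(img, sigma, order=(0, 1)),
        "Lyy": gaussian_filter(img, sigma, order=(2, 0)),  # 2nd-order derivatives
        "Lxy": gaussian_filter(img, sigma, order=(1, 1)),
        "Lxx": gaussian_filter(img, sigma, order=(0, 2)),
    }
    for name, component in jet.items():
        print(name, component.std())      # a texture descriptor would replace this
    ```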

  2. Optimal feature extraction for segmentation of Diesel spray images.

    PubMed

    Payri, Francisco; Pastor, José V; Palomares, Alberto; Juliá, J Enrique

    2004-04-01

    A one-dimensional simplification, based on optimal feature extraction, of the algorithm based on the likelihood-ratio test method (LRT) for segmentation in colored Diesel spray images is presented. If the pixel values of the Diesel spray and the combustion images are represented in RGB space, in most cases they are distributed in an area with a given so-called privileged direction. It is demonstrated that this direction permits optimal feature extraction for one-dimensional segmentation in the Diesel spray images, and some of its advantages compared with more-conventional one-dimensional simplification methods, including considerably reduced computational cost while accuracy is maintained within more than reasonable limits, are presented. The method has been successfully applied to images of Diesel sprays injected at room temperature as well as to images of sprays with evaporation and combustion. It has proved to be valid for several cameras and experimental arrangements. PMID:15074419
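
    A hedged sketch of the one-dimensional reduction described above: project RGB pixels onto their dominant ("privileged") direction and threshold the projection. Otsu's threshold stands in for the paper's LRT-derived decision rule, and the image is a placeholder.

    ```python
    # Hedged sketch: privileged-direction projection for 1-D spray segmentation.
    import numpy as np
    from skimage.filters import threshold_otsu

    rgb = np.random.rand(240, 320, 3)     # placeholder spray image
    pixels = rgb.reshape(-1, 3)
    pixels = pixels - pixels.mean(axis=0)

    # Privileged direction = principal eigenvector of the RGB covariance matrix.
    _, vecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
    direction = vecs[:, -1]               # eigenvector of the largest eigenvalue

    projection = (pixels @ direction).reshape(rgb.shape[:2])
    mask = projection > threshold_otsu(projection)   # 1-D segmentation
    print("spray pixels:", mask.sum())
    ```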

  3. A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data

    PubMed Central

    Hira, Zena M.; Gillies, Duncan F.

    2015-01-01

    We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and they are being widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition the complicated relations among the different genes make analysis more difficult and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources. PMID:26170834

  4. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.

  5. Extracting BI-RADS Features from Portuguese Clinical Texts

    PubMed Central

    Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês

    2013-01-01

    In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BIRADS lexicon and on iterative transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser’s performance is comparable to the manual method. PMID:23797461

  6. Eddy current pulsed phase thermography and feature extraction

    NASA Astrophysics Data System (ADS)

    He, Yunze; Tian, GuiYun; Pan, Mengchun; Chen, Dixiang

    2013-08-01

    This letter proposes an eddy current pulsed phase thermography technique combining eddy current excitation, infrared imaging, and phase analysis. A steel sample containing subsurface defects at different depths is selected as the material under test to avoid the influence of skin depth. The experimental results show that the proposed method can eliminate non-uniform heating and improve defect detectability. Several features are extracted from the differential phase spectra, and preliminary linear relationships are established to measure the depths of these subsurface defects.

  7. Dual-pass feature extraction on human vessel images.

    PubMed

    Hernandez, W; Grimm, S; Andriantsimiavona, R

    2014-06-01

    We present a novel algorithm for the extraction of cavity features in images of human vessels. Fat deposits in the inner wall of such structures introduce artifacts into the captured images, invalidating the usual assumption of an elliptical model and making the extraction of the central passage more difficult. Our approach was designed to cope with these challenges and extract the required image features in a fully automated, accurate, and efficient way using two stages: the first determines a bounding segmentation mask that prevents major leakages from pixels of the cavity area, using a circular region fill that operates as a paint brush followed by Principal Component Analysis with auto-correction; the second extracts a precise cavity enclosure using a micro-dilation filter and an edge-walking scheme. The accuracy of the algorithm has been tested using 30 computed tomography angiography scans of the lower part of the body containing different degrees of inner wall distortion. The results were compared to manual annotations from a specialist, yielding sensitivity around 98 %, false positive rate around 8 %, and positive predictive value around 93 %. The average execution time was 24 and 18 ms on two types of commodity hardware over sections of 15 cm length (approx. 1 ms per contour), which makes it more than suitable for use in interactive software applications. Reproducibility tests were also carried out with synthetic images, showing no variation in the computed diameters against the theoretical measure. PMID:24197278

  8. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618

  9. Chemical-induced disease relation extraction with various linguistic features.

    PubMed

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask. PMID:27052618

  10. A flexible data-driven comorbidity feature extraction framework.

    PubMed

    Sideris, Costas; Pourhomayoun, Mohammad; Kalantarian, Haik; Sarrafzadeh, Majid

    2016-06-01

    Disease and symptom diagnostic codes are a valuable resource for classifying and predicting patient outcomes. In this paper, we propose a novel methodology for utilizing disease diagnostic information in a predictive machine learning framework. Our methodology relies on a novel, clustering-based feature extraction framework using disease diagnostic information. To reduce the data dimensionality, we identify disease clusters using co-occurrence statistics. We optimize the number of generated clusters in the training set and then utilize these clusters as features to predict patient severity of condition and patient readmission risk. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP), which contains 7 million hospital discharge records with ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 Congestive Heart Failure (CHF) patients and the UCI 130-US-hospitals diabetes dataset, which includes admissions from 69,980 diabetic patients. We compare our cluster-based feature set with commonly used comorbidity frameworks including Charlson's index, Elixhauser's comorbidities and their variations. The proposed approach showed significant gains of 10.7-22.1% in predictive accuracy for CHF severity of condition prediction and 4.65-5.75% in diabetes readmission prediction. PMID:27127895
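
    The following is a minimal sketch of the co-occurrence-based clustering idea described above, assuming scikit-learn (note that the metric= keyword is spelled affinity= in releases before 1.2); the toy code matrix, cluster count, and cosine-style similarity are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# toy binary matrix: rows = patient records, columns = diagnosis codes
records = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 1],
                    [0, 1, 1, 1]])

cooc = records.T @ records                       # code-by-code co-occurrence
diag = np.sqrt(np.outer(cooc.diagonal(), cooc.diagonal()))
distance = 1.0 - cooc / np.maximum(diag, 1)      # cosine-style dissimilarity

labels = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                 linkage="average").fit_predict(distance)

# patient-level feature: does the record contain any code from each cluster?
features = np.stack([records[:, labels == c].any(axis=1).astype(int)
                     for c in range(2)], axis=1)
print("code clusters:", labels)
print("patient features:\n", features)
```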

  11. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. People perceive these pieces of information and use them as important clues in social interaction. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research on image-based race recognition, but most of it focuses on major racial groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within racially closely related groups. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned, that eyes and eyebrows attract the most attention, and that the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. The extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is an indispensable fundamental step toward the establishment of a human-like race recognition system.

  12. Magnetic Field Feature Extraction and Selection for Indoor Location Estimation

    PubMed Central

    Galván-Tejada, Carlos E.; García-Vázquez, Juan Pablo; Brena, Ramon F.

    2014-01-01

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructures made for everyday use, such as WiFi, but also natural infrastructure, as is the case with the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to detect false positives (specificity) in both scenarios. PMID:24955944

  13. Magnetic field feature extraction and selection for indoor location estimation.

    PubMed

    Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F

    2014-01-01

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructures made for everyday use, such as WiFi, but also natural infrastructure, as is the case with the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to detect false positives (specificity) in both scenarios. PMID:24955944
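
    A minimal sketch of GA-based feature selection in the spirit of these two records, assuming scikit-learn and NumPy; the synthetic 46-feature dataset, the k-NN fitness proxy, and the GA settings are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=46, n_informative=5,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy minus a small per-feature penalty."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum()

pop = rng.integers(0, 2, size=(20, 46)).astype(bool)   # random feature masks
for _ in range(15):                                    # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the 10 fittest
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 46)
        child = np.concatenate([a[:cut], b[cut:]])     # one-point crossover
        children.append(child ^ (rng.random(46) < 0.02))  # bit-flip mutation
    pop = np.array(children)

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best))
```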

  14. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    To extend the life of satellites and reduce launch and operating costs, on-orbit satellite servicing, including repairs, upgrades and refueling, is becoming much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to the estimation of relative pose between spacecraft, and feature extraction algorithms are the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method computes a gray-level image distribution of fractal dimension using the Differential Box-Counting (DBC) approach of fractal theory to restrain noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details. Meanwhile, edge extraction is performed only in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods, and indicate that the presented algorithm is a valid method for solving relative pose estimation problems for spacecraft.
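
    Below is a minimal sketch of the Differential Box-Counting estimate of fractal dimension that the abstract builds on; the box sizes and the test image are illustrative choices, and the morphological edge-linking stage is not reproduced.

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Estimate fractal dimension with differential box counting (DBC)."""
    M = min(img.shape)
    G = 256                                    # number of gray levels
    counts = []
    for s in sizes:
        h = s * G / M                          # box height in gray levels
        n = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = img[i:i + s, j:j + s]
                # boxes needed to cover the gray-level range of this block
                n += int(np.ceil(block.max() / h) - np.ceil(block.min() / h)) + 1
        counts.append(n)
    r = np.array(sizes, dtype=float) / M
    slope, _ = np.polyfit(np.log(1.0 / r), np.log(counts), 1)  # FD = slope
    return slope

rng = np.random.default_rng(0)
noise = rng.integers(0, 256, (64, 64)).astype(float)
print("estimated fractal dimension:", round(dbc_fractal_dimension(noise), 2))
```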

  15. The optimal extraction of feature algorithm based on KAZE

    NASA Astrophysics Data System (ADS)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    KAZE is a novel 2D feature extraction algorithm that operates in a nonlinear scale space. However, the computation of the nonlinear scale space and the construction of KAZE feature vectors are significantly more expensive than SIFT and SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable-conductance diffusion. Adjusting the parameters improves the construction of the nonlinear scale space and simplifies the image conductivities for each dimension, reducing computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the computation of feature vectors is optimized with a wavelet transform, which avoids the second Gaussian smoothing used in KAZE features and distinctly cuts down the complexity of the vector-building and description steps. The dominant orientation is obtained, as in SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in a circular area of radius 6σ, with a sampling step of size σ. Finally, description over a multidimensional patch at the given scale, centered on the point of interest and rotated to align its dominant orientation to a canonical direction, simplifies the feature description by reducing its dimensionality, as in the PCA-SIFT method. Even though the features are somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, the contrast experiments show a step forward in detection, description and application performance compared to SURF.

  16. On the Contrastive Analysis of Features in Second Language Acquisition: Uninterpretable Gender on Past Participles in English-French Processing

    ERIC Educational Resources Information Center

    Dekydtspotter, Laurent; Renaud, Claire

    2009-01-01

    Lardiere's discussion raises important questions about the use of features in second language (L2) acquisition. This response examines predictions for processing of a feature-valuing model vs. a frequency-sensitive, associative model in explaining the acquisition of French past participle agreement. Results from a reading-time experiment support…

  17. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray-level images, without the use of domain knowledge, by modeling preattentive and perceptual-organization functions of the human visual system. First, edge points are identified by standard low-level processing based on the Canny edge operator. Edge points are then linked into single-pixel-thick straight-line segments and circular arcs: this operation serves both to filter out isolated and highly irregular segments and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).

  18. Feature extraction from mammographic images using fast marching methods

    NASA Astrophysics Data System (ADS)

    Bottigli, U.; Golosio, B.

    2002-07-01

    Feature extraction from medical images represents a fundamental step for shape recognition and diagnostic support. The present work addresses the problem of detecting large features, such as massive lesions and organ contours, in mammographic images. The regions of interest are often characterized by an average grayness intensity that differs from the surroundings. In most cases, however, the desired features cannot be extracted by simple gray-level thresholding, because of image noise and the non-uniform density of the surrounding tissue. In this work, edge detection is achieved through the fast marching method (Level Set Methods and Fast Marching Methods, Cambridge University Press, Cambridge, 1999), which is based on the theory of interface evolution. Starting from a seed point in the shape of interest, a front is generated that evolves according to an appropriate speed function. This function is expressed in terms of geometric properties of the evolving interface and of image properties, and should become zero when the front reaches the desired boundary. Some examples of the application of this method to mammographic images from the CALMA database (Nucl. Instr. and Meth. A 460 (2001) 107) are presented and discussed.
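
    A minimal sketch of fast-marching front propagation from a seed point, assuming the third-party scikit-fmm package; the synthetic image, the gradient-based speed function, and the arrival-time threshold are illustrative stand-ins for the paper's speed function.

```python
import numpy as np
import skfmm  # third-party package: pip install scikit-fmm

# synthetic image: a bright square "mass" on a noisy dark background
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)

gy, gx = np.gradient(img)
speed = 1.0 / (1.0 + 25.0 * (gx ** 2 + gy ** 2))   # front slows at strong edges

phi = np.ones_like(img)
phi[32, 32] = -1.0                                 # seed inside the shape
arrival = np.asarray(skfmm.travel_time(phi, speed))

region = arrival < np.percentile(arrival, 20)      # early-arrival pixels ~ shape
print("pixels enclosed by the front:", int(region.sum()))
```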

  19. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique that uses individual traits and characteristics for identification. Iris recognition is one of the most reliable biometric methods. Because iris texture and color are fully developed within a year of birth, the iris remains unchanged throughout a person's life, in contrast to fingerprints, which can be altered by accidental damage, dry or oily skin, or dust. Although iris recognition has been studied for more than a decade, commercial products remain limited due to demanding requirements such as camera resolution, hardware size, expensive equipment and computational complexity. At present, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing, and matching. In this paper, we adopt a directional high-low pass filter for feature extraction, and propose a box-counting fractal dimension and an iris code as feature representations. Our approach has been tested on the CASIA iris image database and the results are considered successful.

  20. Linear unmixing of hyperspectral signals via wavelet feature extraction

    NASA Astrophysics Data System (ADS)

    Li, Jiang

    A pixel in remotely sensed hyperspectral imagery is typically a mixture of multiple electromagnetic radiances from various ground cover materials. Spectral unmixing is a quantitative analysis procedure used to recognize constituent ground cover materials (or endmembers) and obtain their mixing proportions (or abundances) from a mixed pixel. The abundances are typically estimated using the least squares estimation (LSE) method based on the linear mixture model (LMM). This dissertation provides a complete investigation on how the use of appropriate features can improve the LSE of endmember abundances using remotely sensed hyperspectral signals. The dissertation shows how features based on signal classification approaches, such as the discrete wavelet transform (DWT), outperform features based on conventional signal representation methods for dimensionality reduction, such as principal component analysis (PCA), for the LSE of endmember abundances. Both experimental and theoretical analyses are reported in the dissertation. A DWT-based linear unmixing system is designed specially for the abundance estimation. The system utilizes the DWT as a pre-processing step for the feature extraction. Based on DWT-based features, the system utilizes the constrained LSE for the abundance estimation. Experimental results show that the use of DWT-based features reduces the abundance estimation deviation by 30--50% on average, as compared to the use of original hyperspectral signals or conventional PCA-based features. Based on the LMM and the LSE method, a series of theoretical analyses are derived to reveal the fundamental reasons why the use of the appropriate features, such as DWT-based features, can improve the LSE of endmember abundances. Under reasonable assumptions, the dissertation derives a generalized mathematical relationship between the abundance estimation error and the endmember separability. It is proven that the abundance estimation error can be reduced through increasing
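
    The sketch below illustrates the shape of the pipeline this dissertation describes: DWT features followed by a constrained least-squares abundance estimate. It assumes PyWavelets and SciPy; the synthetic endmember spectra and the use of non-negative least squares as the constrained solver are illustrative assumptions.

```python
import numpy as np
import pywt                        # PyWavelets
from scipy.optimize import nnls    # non-negativity constrained least squares

wavelengths = np.linspace(0, 1, 128)
endmembers = np.stack([np.exp(-((wavelengths - c) / 0.15) ** 2)
                       for c in (0.3, 0.5, 0.7)])       # 3 synthetic spectra
true_abund = np.array([0.2, 0.5, 0.3])
pixel = true_abund @ endmembers                          # linear mixture model
pixel = pixel + 0.01 * np.random.default_rng(0).standard_normal(pixel.shape)

def dwt_features(signal, level=3):
    """Concatenated approximation and detail coefficients of a 1D DWT."""
    return np.concatenate(pywt.wavedec(signal, "db4", level=level))

# the DWT is linear, so unmixing can be done in the feature domain
A = np.stack([dwt_features(e) for e in endmembers], axis=1)
b = dwt_features(pixel)
abund, _ = nnls(A, b)
print("estimated abundances:", abund / abund.sum())      # sum-to-one rescaling
```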

  1. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method. PMID:25868233

  2. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    NASA Astrophysics Data System (ADS)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and technology has had a major impact, giving rise to a new kind of business called e-commerce. Many e-commerce sites make transactions convenient, and consumers can also provide reviews or opinions on products they purchased. These opinions can be used by both consumers and producers: consumers learn the advantages and disadvantages of particular product features, while producers can analyze the strengths and weaknesses of their own and their competitors' products. With so many opinions, a method is needed that lets the reader grasp the gist of the whole body of opinion. The idea arises from review summarization, which summarizes overall opinion based on the sentiments and features it contains. In this study, the domain of focus is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of a product; 3) identifying whether an opinion is positive or negative; and 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification; a feature extraction algorithm based on dependency analysis, one of the tools in Natural Language Processing (NLP); and a knowledge-based dictionary for handling implicit features. The end result is a summary that aggregates consumer reviews by feature and sentiment. With the proposed method, sentiment classification accuracy is 81.2% for positive test data and 80.2% for negative test data, and feature extraction accuracy reaches 90.3%.
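
    A minimal feature-level sentiment sketch with multinomial Naive Bayes, assuming scikit-learn; the tiny camera-review corpus is invented, and the dependency-analysis feature extraction and implicit-feature dictionary from the paper are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# toy digital-camera reviews with sentiment labels
reviews = [
    "the lens is sharp and the battery lasts long",
    "great zoom and bright screen",
    "the battery dies fast and the screen is dim",
    "autofocus is slow and images are noisy",
]
labels = ["pos", "pos", "neg", "neg"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(reviews, labels)
print(clf.predict(["the screen is bright and the lens is sharp"]))
```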

  3. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    The gearbox is one of the most vulnerable subsystems in wind turbines. Its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are prevalent nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structural geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying different gear faults.
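
    The following sketch shows the dimensionality-reduction stage with two of the three methods the paper compares, assuming scikit-learn (which provides KPCA and LLE but has no MPCA implementation); the random matrix stands in for the enhanced time-frequency features.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 64))     # stand-in time-frequency feature vectors

kpca = KernelPCA(n_components=3, kernel="rbf").fit_transform(X)
lle = LocallyLinearEmbedding(n_components=3,
                             n_neighbors=10).fit_transform(X)
print(kpca.shape, lle.shape)           # low-dimensional inputs for a classifier
```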

  4. Extract relevant features from DEM for groundwater potential mapping

    NASA Astrophysics Data System (ADS)

    Liu, T.; Yan, H.; Zhai, L.

    2015-06-01

    The multi-criteria evaluation (MCE) method has been widely applied in groundwater potential mapping research, but in data-scarce areas it encounters many problems due to limited data. A Digital Elevation Model (DEM) is a digital representation of the topography with applications in many fields. Previous research has shown that much of the information relevant to groundwater potential mapping (geological features, terrain features, hydrological features, etc.) can be extracted from DEM data, which makes using DEM data for groundwater potential mapping feasible. In this research, DEM data, one of the most widely used and easily accessible data sources in GIS, was used to extract information for groundwater potential mapping in the Battle River basin in Alberta, Canada. First, five determining factors for groundwater potential mapping were put forward based on previous studies: lineaments and lineament density, drainage networks and their density, topographic wetness index (TWI), relief, and convergence index (CI). Methods for extracting the five determining factors from the DEM were put forward and thematic maps were produced accordingly. A cumulative-effects matrix was used for weight assignment, and a multi-criteria evaluation process was carried out with ArcGIS software to delineate the groundwater potential map. The final groundwater potential map was divided into five categories, viz., non-potential, poor, moderate, good, and excellent zones. Eventually, the success-rate curve was drawn and the area under the curve (AUC) was computed for validation. Validation showed that the success rate of the model was 79%, confirming the method's feasibility. The method affords a new way to conduct research on groundwater management in areas suffering from data scarcity, and also broadens the application area of DEM data.
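
    As a small illustration of one of the DEM-derived factors, the sketch below computes a topographic wetness index, TWI = ln(a / tan(slope)), with a crude D8-style flow accumulation; the toy DEM and cell size are invented, and a production workflow would use GIS tooling as the paper does.

```python
import numpy as np

dem = np.array([[5., 5., 4., 4.],
                [5., 4., 3., 3.],
                [4., 3., 2., 2.],
                [4., 3., 2., 1.]])
cell = 10.0                                     # cell size in metres

gy, gx = np.gradient(dem, cell)
slope = np.arctan(np.hypot(gx, gy))             # slope angle in radians

acc = np.ones_like(dem)                         # each cell contributes itself
for flat in np.argsort(dem, axis=None)[::-1]:   # highest cells drain first
    i, j = np.unravel_index(flat, dem.shape)
    best_drop, target = 0.0, None
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if (di or dj) and 0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1]:
                drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                if drop > best_drop:
                    best_drop, target = drop, (ni, nj)
    if target is not None:                      # pass flow to steepest neighbour
        acc[target] += acc[i, j]

twi = np.log(acc * cell / np.maximum(np.tan(slope), 1e-6))
print(np.round(twi, 2))
```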

  5. A DFT-Based Method of Feature Extraction for Palmprint Recognition

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Karungaru, Stephen G.; Tsuge, Satoru; Fukumi, Minoru

    Over the last quarter century, research in biometric systems has developed at a breathtaking pace and what started with the focus on the fingerprint has now expanded to include face, voice, iris, and behavioral characteristics such as gait. Palmprint is one of the most recent additions, and is currently the subject of great research interest due to its inherent uniqueness, stability, user-friendliness and ease of acquisition. This paper describes an effective and procedurally simple method of palmprint feature extraction specifically for palmprint recognition, although verification experiments are also conducted. This method takes advantage of the correspondences that exist between prominent palmprint features or objects in the spatial domain with those in the frequency or Fourier domain. Multi-dimensional feature vectors are formed by extracting a GA-optimized set of points from the 2-D Fourier spectrum of the palmprint images. The feature vectors are then used for palmprint recognition, before and after dimensionality reduction via the Karhunen-Loeve Transform (KLT). Experiments performed using palmprint images from the ‘PolyU Palmprint Database’ indicate that using a compact set of DFT coefficients, combined with KLT and data preprocessing, produces a recognition accuracy of more than 98% and can provide a fast and effective technique for personal identification.
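
    A minimal sketch of frequency-domain feature extraction followed by a KLT (PCA) projection, in the spirit of this paper; the GA-optimized selection of spectrum points is replaced by a simple low-frequency block, an illustrative simplification.

```python
import numpy as np
from sklearn.decomposition import PCA   # PCA computes the KLT of the features

rng = np.random.default_rng(0)
palms = rng.random((20, 64, 64))        # stand-in palmprint images

def dft_features(img, k=8):
    """Magnitude of the centred 2D DFT, cropped to a low-frequency block."""
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(img)))
    c = spectrum.shape[0] // 2
    return spectrum[c - k:c + k, c - k:c + k].ravel()

X = np.stack([dft_features(p) for p in palms])
X_klt = PCA(n_components=10).fit_transform(X)   # dimensionality reduction
print(X.shape, "->", X_klt.shape)
```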

  6. Feature Extraction and Analysis of Breast Cancer Specimen

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests where necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison. In essence, features of cancerous (invasive) breast tissue are extracted and analyzed against normal breast tissue. We also suggest a breast cancer recognition technique through image processing, and prevention by controlling p53 gene mutation to some greater extent.

  7. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two dimensional (2D) mel and Mellin-cepstrum computation from popcorn kernel images. Cepstral features that were extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41% and the recognition rate for damaged kernels was found to be 89.43%.
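
    Below is a minimal sketch of a 2D cepstrum feature using the textbook definition, cepstrum = IFFT(log|FFT|); the mel and Mellin warpings and the SVM stage from the paper are omitted, and the kernel image is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
kernel_img = rng.random((64, 64))             # stand-in popcorn-kernel image

spectrum = np.fft.fft2(kernel_img)
log_mag = np.log(np.abs(spectrum) + 1e-8)     # log magnitude spectrum
cepstrum = np.real(np.fft.ifft2(log_mag))     # 2D real cepstrum

feature = cepstrum[:8, :8].ravel()            # keep low-quefrency coefficients
print("feature vector length:", feature.shape[0])
```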

  8. Feature extraction and dimensionality reduction for mass spectrometry data.

    PubMed

    Liu, Yihui

    2009-09-01

    Mass spectrometry is being used to generate protein profiles from human serum, and proteomic data obtained from mass spectrometry have attracted great interest for the detection of early-stage cancer. However, high-dimensional mass spectrometry data pose considerable challenges. In this paper we propose a feature extraction algorithm based on wavelet analysis for high-dimensional mass spectrometry data. A set of wavelet detail coefficients at different scales is used to detect the transient changes of mass spectrometry data. The experiments are performed on two datasets. A highly competitive accuracy, compared with the best performance of other kinds of classification models, is achieved. Experimental results show that wavelet detail coefficients are an efficient way to characterize features of high-dimensional mass spectra and to reduce their dimensionality. PMID:19646687

  9. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system that performs lane detection on marked urban roads and analyzes lane features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the present algorithm examines these images automatically and rapidly to obtain information on road marks, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.
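
    A minimal phase-only correlation sketch of the kind the abstract mentions for identifying road-marking shapes; the synthetic template/scene pair and the recovered shift illustrate the sharp peak the phase-only spectrum produces.

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.random((64, 64))
scene = np.roll(template, shift=(5, 12), axis=(0, 1))   # template shifted by (5, 12)

F, G = np.fft.fft2(scene), np.fft.fft2(template)
cross = F * np.conj(G)
poc = np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-8)))  # phase-only spectrum

peak = np.unravel_index(np.argmax(poc), poc.shape)
print("detected shift:", peak)                           # -> (5, 12)
```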

  10. Texture Feature Extraction and Classification for Iris Diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques in iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model consisting of iris image pre-processing, texture feature analysis and disease classification. For the pre-processing, a two-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally, support vector machines are constructed to recognize two typical diseases, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is effective and promising for medical diagnosis and health surveillance, for both hospital and public use.

  11. Wavelet based feature extraction and visualization in hyperspectral tissue characterization

    PubMed Central

    Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes

    2014-01-01

    Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally effective, and can be implemented in real-time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed, and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a significant correlation (p < 0.02) between the chosen tissue parameters and the selected wavelet components. The tissue parameters could be mapped using a subset of the calculated components due to redundancy in spectral information. Vessel structures are well visualized. Wavelet analysis appears to be a promising tool for extraction of spectral features in skin. Future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition. PMID:25574437

  12. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security, anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aiming at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. The 1D and 2D approaches cannot address the wide range of both types and complexities of bedforms. In contrast, this work attempts to follow a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples, which are found to heavily overprint any observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  13. Detailed hydrographic feature extraction from high-resolution LIDAR data

    NASA Astrophysics Data System (ADS)

    Anderson, Danny L.

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve the efficiency and accuracy of hydrographic feature extraction. The direct delineation method developed herein, termed "mDn", is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, using the LiDAR data points within each sector to determine an average slope, and selecting the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream datasets. Although the TauDEM delineation yielded a sinuosity that more closely matches the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
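
    The sketch below illustrates the sector-based flow-direction idea behind the "mDn" method as the abstract describes it: divide the neighbourhood of a point into sectors, average the slope of the LiDAR points in each sector, and follow the steepest descending one. The point cloud, sector count, and search radius are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(-10, 10, size=(500, 2))          # LiDAR x, y coordinates
z = (0.05 * xy[:, 0] + 0.02 * xy[:, 1]
     + 0.01 * rng.standard_normal(500))           # tilted-plane elevations

def mdn_direction(x0, y0, z0, xy, z, n_sectors=8, radius=5.0):
    """Return the steepest descending sector around (x0, y0, z0)."""
    dx, dy = xy[:, 0] - x0, xy[:, 1] - y0
    dist = np.hypot(dx, dy)
    near = (dist > 0) & (dist < radius)
    sector = (np.arctan2(dy, dx) % (2 * np.pi)
              // (2 * np.pi / n_sectors)).astype(int)
    best, best_slope = None, 0.0
    for s in range(n_sectors):
        m = near & (sector == s)
        if m.any():
            mean_slope = np.mean((z[m] - z0) / dist[m])   # negative = downhill
            if mean_slope < best_slope:
                best, best_slope = s, mean_slope
    return best, best_slope

print("flow sector and slope:", mdn_direction(0.0, 0.0, 0.0, xy, z))
```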

  14. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    PubMed

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336
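
    A minimal sketch of SIFT tie-point extraction with Lowe's ratio test, assuming opencv-python 4.4 or later (where SIFT ships in the main module); the synthetic image pair is invented and the auto-adaptive A2 SIFT extension is not reproduced.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
base = cv2.GaussianBlur((rng.random((256, 256)) * 255).astype(np.uint8),
                        (0, 0), 3)               # blob-like synthetic texture
img1 = base
img2 = np.roll(base, 15, axis=1)                 # shifted copy as second "view"

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio
print(len(kp1), "keypoints,", len(good), "tie points after the ratio test")
```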

  15. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    PubMed Central

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336

  16. Deep PDF parsing to extract features for detecting embedded malware.

    SciTech Connect

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  17. Automatic feature extraction from micrographs of forged superalloys

    NASA Astrophysics Data System (ADS)

    Berhuber, E.; Rinnhofer, A.; Stockinger, M.; Benesova, W.; Jakob, G.

    2008-07-01

    The manual determination of metallurgical parameters of forged superalloys can be dramatically improved by automatic, image-processing-based feature extraction. With the proposed methods, the typical errors during grain size estimation for Inconel 718 and Allvac 718Plus™, caused by twins and other artifacts like scratches, can be eliminated. Different processing strategies for grain size estimation allow the application of a wide range of ASTM grain size numbers from G3 to G12 with the typical variations in the manifestation of metallurgical details and the magnification-related limitations of image quality. Intercept counting strategies show advantages for samples with pronounced anisotropy and can produce detailed statistics on grain orientation. In addition to a single grain size number, grain size histograms offer a more precise description of the material properties.

  18. Crown Features Extraction from Low Altitude AVIRIS Data

    NASA Astrophysics Data System (ADS)

    Ogunjemiyo, S. O.; Roberts, D.; Ustin, S.

    2005-12-01

    Automated tree recognition and crown delineation are computer-assisted procedures for identifying individual trees and segmenting their crown boundaries in digital imagery. The success of these procedures depends on the quality of the image data and the physiognomy of the stand, as evidenced by previous studies, which have all used data with spatial resolution finer than 1 m and an average crown diameter to pixel size ratio greater than 4. In this study we explored the prospect of identifying individual tree species and extracting crown features from low-altitude AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) data with a spatial resolution of 4 m. The test site is a Douglas-fir and western hemlock dominated old-growth conifer forest in the Pacific Northwest with an average crown diameter of 12 m, which translates to a crown diameter to pixel size ratio of less than 4, the lowest value ever used in similar studies. The analysis was carried out using AVIRIS reflectance imagery in the NIR band centered at the 885 nm wavelength, and required spatial filtering of the reflectance imagery followed by application of a tree identification algorithm based on a maximum filter technique. For every identified tree location a crown polygon was delineated by applying a crown segmentation algorithm, each polygon boundary being a loop connecting pixels geometrically determined to define the crown boundary. Crown features were extracted from the area covered by the polygons, including crown diameters, average distance between crowns, species spectra, pixel brightness at the identified tree locations, average brightness of pixels enclosed by the crown boundary, and within-crown variation in pixel brightness. Comparison of the results with ground reference data showed a high correlation between the two datasets and highlights the potential of low-altitude AVIRIS data to improve forest management and practices and estimates of critical

  19. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    NASA Technical Reports Server (NTRS)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution remotely sensed imagery (10 meters) is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix which can be interpreted as a probability matrix of gray tone pairs. Haralick et al (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment (inertia), correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories: forest, new residential, old residential, and industrial, for each variation in texture parameters.
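
    A minimal sketch of the spatial gray tone dependence (co-occurrence) features the study varies, assuming scikit-image 0.19 or later (graycomatrix/graycoprops; earlier releases spell them greycomatrix/greycoprops); the window, gray-level count, distance, and angles shown are exactly the knobs the abstract enumerates.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
window = rng.integers(0, 16, (32, 32)).astype(np.uint8)  # quantized gray tones

# one matrix per (distance, angle) pair; parameters mirror the study's knobs
glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                    levels=16, symmetric=True, normed=True)
for prop in ("contrast", "correlation", "energy", "homogeneity"):
    print(prop, graycoprops(glcm, prop).ravel())
```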

  20. STATISTICAL BASED NON-LINEAR MODEL UPDATING USING FEATURE EXTRACTION

    SciTech Connect

    Schultz, J.F.; Hemez, F.M.

    2000-10-01

    This research presents a new method to improve analytical model fidelity for non-linear systems. The approach investigates several mechanisms to assist the analyst in updating an analytical model based on experimental data and statistical analysis of parameter effects. The first is a new approach to data reduction called feature extraction. This is an expansion of the update metrics to include specific phenomena or characteristics of the response that are critical to model application, extending the classical linear updating paradigm of utilizing the eigen-parameters or FRFs to include devices such as peak acceleration, time of arrival, or the standard deviation of model error. The next expansion of the updating process is the inclusion of statistics-based parameter analysis to quantify the effects of uncertain or significant parameters in the construction of a meta-model. This provides indicators of the statistical variation associated with parameters as well as confidence intervals on the coefficients of the resulting meta-model. Also included in this method is an investigation of linear parameter-effect screening using a partial factorial variable array for simulation, intended to aid the analyst in eliminating from the investigation those parameters that do not have a significant effect on the feature metric. Finally, the ability of the model to replicate the measured response variation is examined.

  1. Evolving spatio-spectral feature extraction algorithms for hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.; Galbraith, Amy E.

    2002-11-01

    Hyperspectral imagery data sets present an interesting challenge to feature extraction algorithm developers. Beyond the immediate problem of dealing with the sheer amount of spectral information per pixel in a hyperspectral image, the remote sensing scientist must explore a complex algorithm space in which both spatial and spectral signatures may be required to identify a feature of interest. Rather than carry out this algorithm exploration by hand, we are interested in developing learning systems that can evolve these algorithms. We describe a genetic programming/supervised classifier software system, called GENIE, which evolves image processing tools for remotely sensed imagery. Our primary application has been land-cover classification from satellite imagery. GENIE was developed to evolve classification algorithms for multispectral imagery, and the extension to hyperspectral imagery presents a chance to test a genetic programming system by greatly increasing the complexity of the data under analysis, as well as a chance to find interesting spatio-spectral algorithms for hyperspectral imagery. We demonstrate our system on publicly available imagery from the new Hyperion imaging spectrometer onboard the NASA Earth Observing-1 (EO-1) satellite.

  2. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements. PMID:25529700

  3. Extraction of Molecular Features through Exome to Transcriptome Alignment.

    PubMed

    Mudvari, Prakriti; Kowsari, Kamran; Cole, Charles; Mazumder, Raja; Horvath, Anelia

    2013-08-22

    Integrative Next Generation Sequencing (NGS) DNA and RNA analyses have very recently become feasible, and the studies published to date have discovered critical disease-implicated pathways, and diagnostic and therapeutic targets. A growing number of exomes, genomes and transcriptomes from the same individual are quickly accumulating, providing unique venues for mechanistic and regulatory feature analysis, and, at the same time, requiring new exploration strategies. In this study, we have integrated variation and expression information of four NGS datasets from the same individual: normal and tumor breast exomes and transcriptomes. Focusing on SNP-centered variant allelic prevalence, we illustrate analytical algorithms that can be applied to extract or validate potential regulatory elements, such as expression or growth advantage, imprinting, loss of heterozygosity (LOH), somatic changes, and RNA editing. In addition, we point to some critical elements that might bias the output and recommend alternative measures to maximize the confidence of findings. The need for such strategies is especially recognized within the growing appreciation of the concept of systems biology: integrative exploration of genome and transcriptome features reveals mechanistic and regulatory insights that reach far beyond linear addition of the individual datasets. PMID:24791251

  4. Extraction of Molecular Features through Exome to Transcriptome Alignment

    PubMed Central

    Mudvari, Prakriti; Kowsari, Kamran; Cole, Charles; Mazumder, Raja; Horvath, Anelia

    2014-01-01

    Integrative Next Generation Sequencing (NGS) DNA and RNA analyses have very recently become feasible, and the studies published to date have discovered critical disease-implicated pathways, and diagnostic and therapeutic targets. A growing number of exomes, genomes and transcriptomes from the same individual are quickly accumulating, providing unique venues for mechanistic and regulatory feature analysis, and, at the same time, requiring new exploration strategies. In this study, we have integrated variation and expression information of four NGS datasets from the same individual: normal and tumor breast exomes and transcriptomes. Focusing on SNP-centered variant allelic prevalence, we illustrate analytical algorithms that can be applied to extract or validate potential regulatory elements, such as expression or growth advantage, imprinting, loss of heterozygosity (LOH), somatic changes, and RNA editing. In addition, we point to some critical elements that might bias the output and recommend alternative measures to maximize the confidence of findings. The need for such strategies is especially recognized within the growing appreciation of the concept of systems biology: integrative exploration of genome and transcriptome features reveals mechanistic and regulatory insights that reach far beyond linear addition of the individual datasets. PMID:24791251

  5. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) needs. The increased availability of high-resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for the reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High-resolution remote sensing data, especially panchromatic, are an important input for the analysis of various types of image characteristics and play an important role in visual systems for the recognition and interpretation of given data. The proposed methods rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey-tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI feature extraction module, were used in order to compare the results. These techniques were

  6. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
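
    The sketch below shows the standard distance-transform-plus-watershed recipe for separating touching items, assuming scikit-image and SciPy; the two overlapping discs are a stand-in for touching nuts, and the paper's own filter and watershed variant are not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# two overlapping discs stand in for touching nuts on an x-ray image
yy, xx = np.mgrid[:100, :100]
blob = (((xx - 35) ** 2 + (yy - 50) ** 2 < 400)
        | ((xx - 65) ** 2 + (yy - 50) ** 2 < 400))

distance = ndi.distance_transform_edt(blob)
seeds = peak_local_max(distance, min_distance=10, num_peaks=2)  # one per item
markers = np.zeros(blob.shape, dtype=int)
markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)

labels = watershed(-distance, markers, mask=blob)  # flood from the seeds
print("segments found:", labels.max())             # -> 2 separated items
```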

  7. Historical feature pattern extraction based network attack situation sensing algorithm.

    PubMed

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden and uncertain, and whose underlying principles are difficult to recognize and describe with traditional algorithms. Addressing these problems requires estimating the parameters of a very long situation sequence, which is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between each occurred indication and its subsequent effect. Then it calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation from the viewpoints of pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence at its best, does not need data preprocessing, and can continuously track and adapt to variations in the situation sequence. PMID:24892054

  8. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding, and the periodic variation of the load distribution and impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is outside the load zone. This non-stationary characteristic and the impulse-missing phenomenon reduce the effectiveness of the commonly used demodulation methods for rolling element bearing fault diagnosis. Based on sparse representation theory, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are identified directly from the fault signal by a correlation filtering method. This leads to a high similarity between the atoms and the defect-induced impulses, and also to a sharp reduction in the redundancy of the dictionary. To improve the matching accuracy and the calculation speed of sparse coefficient solving, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearings under large rolling element sliding and low signal-to-noise ratio conditions.
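
    The dictionary-plus-matching-pursuit idea lends itself to a compact sketch. The following is a minimal illustration, not the authors' exact algorithm: atoms are unit impulse responses of a damped second-order system, and scikit-learn's Orthogonal Matching Pursuit stands in for the segment-wise matching pursuit; the natural frequencies, damping ratios, and delays are illustrative assumptions.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        fs, n = 12000, 1024                      # sampling rate (Hz), segment length
        t = np.arange(n) / fs

        def atom(fn, zeta):
            # Unit impulse response of a damped second-order system, normalized.
            wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)
            a = np.exp(-zeta * 2 * np.pi * fn * t) * np.sin(wd * t)
            return a / np.linalg.norm(a)

        # Dictionary: a few (frequency, damping) pairs, each delayed over the segment.
        # np.roll wraps around, which is acceptable for this sketch.
        pairs = [(3000, 0.05), (3000, 0.1), (4000, 0.05)]
        D = np.stack([np.roll(atom(fn, z), d)
                      for fn, z in pairs for d in range(0, n, 64)], axis=1)

        rng = np.random.default_rng(0)
        signal = np.roll(atom(3000, 0.05), 256) + 0.2 * rng.standard_normal(n)

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, signal)
        reconstructed = D @ omp.coef_            # sparse approximation of the impulses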

  9. Sparse Feature Extraction for Pose-Tolerant Face Recognition.

    PubMed

    Abiantun, Ramzi; Prabhu, Utsav; Savvides, Marios

    2014-10-01

    Automatic face recognition performance has been steadily improving over years of research; however, it remains significantly affected by a number of factors, such as illumination, pose, expression, and resolution, that can degrade matching scores. The focus of this paper is the pose problem, which remains largely overlooked in most real-world applications. Specifically, we focus on one-to-one matching scenarios where a query face image of arbitrary pose is matched against a set of gallery images. We propose a method that relies on two fundamental components: (a) a 3D modeling step to geometrically correct the viewpoint of the face, for which we extend a recent technique for efficient synthesis of 3D face models called the 3D Generic Elastic Model; and (b) a sparse feature extraction step using subspace modeling and ℓ1-minimization to induce pose tolerance in coefficient space. This in turn enables the synthesis of an equivalent frontal-looking face, which can be used towards recognition. We show significant improvements in verification rates compared to commercial matchers, and also demonstrate the resilience of the proposed method with respect to degrading input quality. We find that the proposed technique is able to match non-frontal images to other non-frontal images of varying angles. PMID:26352635

  10. Information Theoretic Extraction of EEG Features for Monitoring Subject Attention

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2000-01-01

    The goal of this project was to test the applicability of information theoretic learning (feasibility study) to the development of new brain computer interfaces (BCI). The difficulty of BCI stems from several aspects: (1) the effective collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; (3) the pattern recognition methodology to reliably detect the signals related to cognitive states. We addressed only the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features to detect the presence of event related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found that the performance of temporal classifiers is superior to that of static classifiers, but not by much. We conclude by stating that the future of BCI should be sought in alternate approaches to sense, collect, and process the signals created by populations of neurons. Towards this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.

  11. Feature extraction and models for speech: An overview

    NASA Astrophysics Data System (ADS)

    Schroeder, Manfred

    2002-11-01

    Modeling of speech has a long history, beginning with Count von Kempelen's 1770 mechanical speaking machine. Even then, human vowel production was seen as resulting from a source (the vocal cords) driving a physically separate resonator (the vocal tract). Homer Dudley's 1928 frequency-channel vocoder and many of its descendants are based on the same successful source-filter paradigm. For linguistic studies as well as practical applications in speech recognition, compression, and synthesis (see M. R. Schroeder, Computer Speech), the extant models require the (often difficult) extraction of numerous parameters such as the fundamental and formant frequencies and various linguistic distinctive features. Some of these difficulties were obviated by the introduction of linear predictive coding (LPC) in 1967, in which the filter part is an all-pole filter, reflecting the fact that for non-nasalized vowels the vocal tract is well approximated by an all-pole transfer function. In the now ubiquitous code-excited linear prediction (CELP), the source part is replaced by a code book which (together with a perceptual error criterion) permits speech compression at very low bit rates and high speech quality for the Internet and cell phones.

  12. Biometric analysis of the palm vein distribution by means of two different techniques of feature extraction

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical identification methods. Furthermore, these patterns can be used for venipuncture in the health field, to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate tissue and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our method of analysis is composed of several steps, the first of which is the enhancement of the acquired images, implemented with spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of the vein patterns. This process is aimed at recognizing people through near-infrared images of their palm-dorsal vein distributions. This work compares two different feature extraction techniques: moments and vein code. The classification task is achieved using Artificial Neural Networks. Two databases are used for the analysis of the performance of the algorithms: the first belongs to the Hong Kong Polytechnic University, and the second is our own database.

  13. Research on the feature extraction and pattern recognition of the distributed optical fiber sensing signal

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan

    2014-09-01

    In this paper, feature extraction and pattern recognition for distributed optical fiber sensing signals are studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction, and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors of the sensing signals (speech, wind, thunder, rain, etc.), and then perform pattern recognition via an RBF neural network. The performances of the three feature extraction methods are compared according to the results. The MFCC characteristic vector is chosen to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six layers by the Daubechies wavelet packet transform, from which 64 frequency constituents are extracted as the characteristic vector. In the process of pattern recognition, a diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
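
    A wavelet packet Shannon entropy feature vector of the kind described (six decomposition layers, 64 frequency constituents) can be sketched with PyWavelets. This is a plausible reading of the method, not the authors' code; the wavelet ('db4') and the per-node entropy definition are assumptions.

        import numpy as np
        import pywt

        def wp_shannon_entropy_features(x, wavelet='db4', level=6):
            # Decompose the signal into a full wavelet packet tree.
            wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode='symmetric',
                                    maxlevel=level)
            feats = []
            for node in wp.get_level(level, order='freq'):   # 2**level sub-bands
                e = node.data ** 2
                p = e / (e.sum() + 1e-12)                    # normalized energy
                feats.append(-np.sum(p * np.log2(p + 1e-12)))  # Shannon entropy
            return np.asarray(feats)

        rng = np.random.default_rng(0)
        vec = wp_shannon_entropy_features(rng.standard_normal(4096))
        print(vec.shape)   # (64,) characteristic vector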

  14. Testing the Self-Similarity Exponent to Feature Extraction in Motor Imagery Based Brain Computer Interface Systems

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bermúdez, Germán; Sánchez-Granero, Miguel Ángel; García-Laencina, Pedro J.; Fernández-Martínez, Manuel; Serna, José; Roca-Dorda, Joaquín

    2015-12-01

    A Brain Computer Interface (BCI) system is a tool that does not require any muscle action to transmit information. Acquisition, preprocessing, feature extraction (FE), and classification of electroencephalograph (EEG) signals constitute the main steps of a motor imagery BCI. Among them, FE is crucial for BCI, since the underlying EEG knowledge must be properly extracted into a feature vector. Linear approaches have been widely applied to FE in BCI, whereas nonlinear tools are less common in the literature. Thus, the main goal of this paper is to check whether estimators based on the Hurst exponent and the fractal dimension are valid indicators for FE in motor imagery BCI. The final results were not as good as expected, which may be due to the fact that the analyzed EEG signals in these motor imagery tasks were not sufficiently self-similar.
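
    For readers unfamiliar with self-similarity estimators, the classical rescaled-range (R/S) estimate of the Hurst exponent can be written in a few lines of NumPy. The study evaluates several related estimators; the sketch below is the textbook version, not the specific algorithms tested there.

        import numpy as np

        def hurst_rs(x, min_chunk=8):
            # Estimate H from the scaling of the rescaled range over chunk sizes.
            x = np.asarray(x, dtype=float)
            n = len(x)
            sizes, rs = [], []
            size = min_chunk
            while size <= n // 2:
                vals = []
                for start in range(0, n - size + 1, size):
                    seg = x[start:start + size]
                    dev = np.cumsum(seg - seg.mean())   # cumulative deviations
                    r = dev.max() - dev.min()           # range
                    s = seg.std()                       # standard deviation
                    if s > 0:
                        vals.append(r / s)
                sizes.append(size)
                rs.append(np.mean(vals))
                size *= 2
            slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
            return slope   # H ~ 0.5 for uncorrelated noise

        print(hurst_rs(np.random.default_rng(0).standard_normal(4096)))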

  15. Feature Extraction Of Retinal Images Interfaced With A Rule-Based Expert System

    NASA Astrophysics Data System (ADS)

    Ishag, Naseem; Connell, Kevin; Bolton, John

    1988-12-01

    Feature vectors are automatically extracted from a library of digital retinal images after considerable image processing. The main features extracted are the location of the optic disc, the cup-to-disc ratio (using Hough transform techniques together with histogram and binary enhancement algorithms), and blood vessel locations. These feature vectors are used to form a relational database of the images. Relational operations are then used to extract pertinent information from the database to form replies to queries from the rule-based expert system.

  16. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences in heart valves. Adopting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from 5 types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
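
    The Shannon envelope step can be sketched compactly. Below is a minimal NumPy illustration of a smoothed Shannon energy envelope, assuming an amplitude-normalized heart-sound segment; the window length is an arbitrary choice, and the DWT pre-processing of the paper is omitted.

        import numpy as np

        def shannon_envelope(x, win=64):
            # Amplitude normalization, then sample-wise Shannon energy.
            x = x / (np.max(np.abs(x)) + 1e-12)
            se = -x**2 * np.log(x**2 + 1e-12)
            # Moving-average smoothing gives the envelope.
            kernel = np.ones(win) / win
            return np.convolve(se, kernel, mode='same')

        rng = np.random.default_rng(0)
        env = shannon_envelope(rng.standard_normal(2000))   # stand-in HS segment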

  17. Feature edge extraction from 3D triangular meshes using a thinning algorithm

    NASA Astrophysics Data System (ADS)

    Nomura, Masaru; Hamada, Nozomu

    2001-11-01

    Highly detailed geometric models, represented as dense triangular meshes, are becoming popular in computer graphics. Since such 3D meshes often carry huge amounts of information, efficient methods are required to treat them in 3D mesh processing tasks such as surface simplification, subdivision surfaces, curved surface approximation, and morphing. In these applications, features of 3D meshes, such as feature vertices and feature edges, are often extracted in a preprocessing step. An automatic extraction method for feature edges is treated in this study. In order to realize the feature edge extraction method, we first introduce a concavity and convexity evaluation value. The histogram of this evaluation value is then used to separate the feature edge region. We apply a thinning algorithm of the kind used in 2D binary image processing. It is shown that the proposed method can extract appropriate feature edges from 3D meshes.

  18. Further Thoughts on Parameters and Features in Second Language Acquisition: A Reply to Peer Comments on Lardiere's "Some Thoughts on the Contrastive Analysis of Features in Second Language Acquisition" in SLR 25(2)

    ERIC Educational Resources Information Center

    Lardiere, Donna

    2009-01-01

    In this article, Lardiere responds to peer comments regarding her earlier article "Some Thoughts on the Contrastive Analysis of Features in Second Language Acquisition" (EJ831786). Lardiere acknowledges the reviewers' thoughtful contributions and expert expansion on various facets of the original article. While she states that it is clear from the…

  19. Extraction of facial features as indicators of stress and anxiety.

    PubMed

    Pediaditis, M; Giannakakis, G; Chiarugi, F; Manousos, D; Pampouchidou, A; Christinaki, E; Iatraki, G; Kazantzaki, E; Simos, P G; Marias, K; Tsiknakis, M

    2015-08-01

    Stress and anxiety heavily affect human wellbeing and health. Under chronic stress, the human body and mind suffer by constantly mobilizing all of their resources for defense. Such a stress response can also be caused by anxiety. Moreover, excessive worrying and high anxiety can lead to depression and even suicidal thoughts. The typical tools for assessing these psycho-somatic states are questionnaires, but because of their shortcomings, being subjective and prone to bias, new, more robust methods based on facial expression analysis have emerged. Going beyond the typical detection of 6 basic emotions, this study aims to elaborate a set of facial features for the detection of stress and/or anxiety. It employs multiple methods that target each facial region individually. The features are selected and the classification performance is measured on a dataset consisting of 23 subjects. The results showed that with feature sets of 9 and 10 features, an overall accuracy of 73% is reached. PMID:26737099

  20. Edge features extraction from 3D laser point cloud based on corresponding images

    NASA Astrophysics Data System (ADS)

    Li, Xin-feng; Zhao, Zi-ming; Xu, Guo-qing; Geng, Yan-long

    2013-09-01

    An extraction method for edge features from 3D laser point clouds based on corresponding images is proposed. After the registration of the point cloud and the corresponding image, sub-pixel edges are extracted from the image using a gray moment algorithm. The sub-pixel edges are then projected onto the point cloud by fitting scan-lines. Finally, the edge features are obtained by linking the crossing points. The experimental results demonstrate that the method guarantees accurate, fine extraction.

  1. Features of the Most Interesting and the Least Interesting Postgraduate Second Language Acquisition Lectures Offered by Three Lecturers

    ERIC Educational Resources Information Center

    Tin, Tan Bee

    2009-01-01

    The paper discusses the various situational features and linguistic devices reflected in the three most interesting and the three least interesting postgraduate second language acquisition lectures taught by three lecturers. Students attending the classes were invited to record their interest level at regular intervals throughout the session. For…

  2. Combination of heterogeneous EEG feature extraction methods and stacked sequential learning for sleep stage classification.

    PubMed

    Herrera, L J; Fernandes, C M; Mora, A M; Migotina, D; Largo, R; Guillen, A; Rosa, A C

    2013-06-01

    This work proposes a methodology for sleep stage classification based on two main approaches: the combination of features extracted from electroencephalogram (EEG) signal by different extraction methods, and the use of stacked sequential learning to incorporate predicted information from nearby sleep stages in the final classifier. The feature extraction methods used in this work include three representative ways of extracting information from EEG signals: Hjorth features, wavelet transformation and symbolic representation. Feature selection was then used to evaluate the relevance of individual features from this set of methods. Stacked sequential learning uses a second-layer classifier to improve the classification by using previous and posterior first-layer predicted stages as additional features providing information to the model. Results show that both approaches enhance the sleep stage classification accuracy rate, thus leading to a closer approximation to the experts' opinion. PMID:23627659
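
    Of the three feature families combined here, the Hjorth features are the simplest to state. The sketch below computes the classical Hjorth activity, mobility, and complexity of an EEG epoch with NumPy; it illustrates only this one ingredient, not the full stacked sequential pipeline.

        import numpy as np

        def hjorth(x):
            # Hjorth parameters from the variances of the signal and its derivatives.
            dx = np.diff(x)
            ddx = np.diff(dx)
            var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
            activity = var_x
            mobility = np.sqrt(var_dx / var_x)
            complexity = np.sqrt(var_ddx / var_dx) / mobility
            return activity, mobility, complexity

        print(hjorth(np.random.default_rng(0).standard_normal(3000)))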

  3. Comparison of half and full-leaf shape feature extraction for leaf classification

    NASA Astrophysics Data System (ADS)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main information source for leaf features, and most of the current literature on leaf identification utilizes the whole leaf for feature extraction in the identification process. In this paper, a study of half-leaf feature extraction for leaf identification is carried out, and the results are compared with those obtained from identification based on full-leaf feature extraction. Identification and classification are based on shape features represented as cosine and sine angles. Six single classifiers from WEKA and seven ensemble methods are used to compare their performance accuracies on these data. The classifiers were trained using 65 leaves to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The results show that half-leaf feature extraction can be used for leaf identification without decreasing the predictive accuracy.

  4. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management, and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data processing. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks on WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each holding a small subset of the images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments with and without MapReduce are conducted to illustrate the effectiveness of the proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
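
    The split-map-aggregate pattern can be illustrated without a Hadoop cluster. The sketch below uses Python's multiprocessing pool as a stand-in for the slave nodes; extract_features and the synthetic image list are hypothetical placeholders for the WAMI processing chain.

        from multiprocessing import Pool
        import numpy as np

        def extract_features(image):
            # Map step: one image -> one feature vector (mean intensity per band here).
            return np.asarray(image).mean(axis=(0, 1))

        def run(images, workers=4):
            with Pool(workers) as pool:
                features = pool.map(extract_features, images)   # distributed map
            return np.vstack(features)                          # reduce: aggregate

        if __name__ == '__main__':
            imgs = [np.random.default_rng(i).random((64, 64, 3)) for i in range(8)]
            print(run(imgs).shape)   # (8, 3)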

  5. PROCESSING OF SCANNED IMAGERY FOR CARTOGRAPHIC FEATURE EXTRACTION.

    USGS Publications Warehouse

    Benjamin, Susan P.; Gaydos, Leonard

    1984-01-01

    Digital cartographic data are usually captured by manually digitizing a map or an interpreted photograph or by automatically scanning a map. Both techniques first require manual photointerpretation to describe features of interest. A new approach, bypassing the laborious photointerpretation phase, is being explored using direct digital image analysis. Aerial photographs are scanned and color separated to create raster data. These are then enhanced and classified using several techniques to identify roads and buildings. Finally, the raster representation of these features is refined and vectorized. 11 refs.

  6. Extraction of terrain features from digital elevation models

    USGS Publications Warehouse

    Price, Curtis V.; Wolock, David M.; Ayers, Mark A.

    1989-01-01

    Digital elevation models (DEMs) are being used to determine variable inputs for hydrologic models in the Delaware River basin. Recently developed software for analysis of DEMs has been applied to watershed and streamline delineation. The results compare favorably with similar delineations taken from topographic maps. Additionally, output from this software has been used to extract other hydrologic information from the DEM, including flow direction, channel location, and an index describing the slope and shape of a watershed.
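
    Flow direction extraction from a DEM is commonly done with the D8 rule, which assigns each cell the direction of steepest descent among its eight neighbors. The sketch below is a generic D8 implementation in NumPy, not the USGS software referenced here.

        import numpy as np

        OFFSETS = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]

        def d8_flow_direction(dem):
            rows, cols = dem.shape
            direc = np.full(dem.shape, -1)           # -1 marks pits / no descent
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    drops = []
                    for dr, dc in OFFSETS:
                        dist = np.hypot(dr, dc)      # diagonal neighbors are farther
                        drops.append((dem[r, c] - dem[r + dr, c + dc]) / dist)
                    k = int(np.argmax(drops))
                    if drops[k] > 0:
                        direc[r, c] = k              # index into OFFSETS
            return direc

        dem = np.add.outer(np.arange(5), np.arange(5)).astype(float)  # tilted plane
        print(d8_flow_direction(dem))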

  7. Forest classification using extracted PolSAR features from Compact Polarimetry data

    NASA Astrophysics Data System (ADS)

    Aghabalaei, Amir; Maghsoudi, Yasser; Ebadi, Hamid

    2016-05-01

    This study investigates the ability of Polarimetric Synthetic Aperture RADAR (PolSAR) features extracted from Compact Polarimetry (CP) data for forest classification. CP is a new mode recently proposed within the Dual Polarimetry (DP) imaging system. It has several important advantages in comparison with the Full Polarimetry (FP) mode, such as reduced complexity, cost, mass, and data rate of the SAR system. Two strategies are employed for PolSAR feature extraction. In the first strategy, features are extracted using the 2 × 2 covariance matrices of CP modes simulated from RADARSAT-2 C-band FP mode data. In the second strategy, they are extracted using the 3 × 3 covariance matrices reconstructed from the CP modes, called Pseudo Quad (PQ) modes. In each strategy, the extracted PolSAR features are combined, optimal features are selected by a Genetic Algorithm (GA), and a Support Vector Machine (SVM) classifier is then applied. Finally, the results are compared with the FP mode. The results of this study show that the PolSAR features extracted from the π / 4 CP mode, as well as the combination of PolSAR features extracted from the CP or PQ modes, provide better overall accuracy in forest classification.

  8. Feature Extraction of Motion from Time-series Data by using Attractors

    NASA Astrophysics Data System (ADS)

    Akiduki, Takuma; Zhang, Zhong; Imamura, Takashi; Miyake, Tetsuo

    In this paper, a new method of motion analysis using attractors in nonlinear dynamical systems is discussed. The attractor is defined as a set of spatially expanded trajectories of time-series motion data in a state space. Using this attractor representation, a method of feature extraction from time-series data of human motions, captured by wearable inertial sensors, is proposed. First, a design method for a dynamical system that encodes time-series motion data in attractors is introduced. Next, an example of feature extraction using our approach is demonstrated for a simple upper limb movement. Finally, the physical meaning of the extracted features is discussed. As a result, the features extracted by attractors can effectively describe characteristics of human motion, such as posture and quickness, in a spatiotemporal continuity feature space.
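
    A common way to obtain such state-space trajectories from a scalar time series is time-delay embedding. The sketch below is a generic embedding in NumPy, offered as a related illustration rather than the authors' encoding dynamical system; the delay and embedding dimension are arbitrary choices.

        import numpy as np

        def delay_embed(x, dim=3, tau=10):
            # Each state-space point is (x[t], x[t+tau], ..., x[t+(dim-1)*tau]).
            n = len(x) - (dim - 1) * tau
            return np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)

        t = np.linspace(0, 20 * np.pi, 4000)
        x = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
        attractor = delay_embed(x)        # trajectory points in the state space
        print(attractor.shape)            # (n, 3)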

  9. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  10. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  11. Hand veins feature extraction using DT-CNNS

    NASA Astrophysics Data System (ADS)

    Malki, Suleyman; Spaanenburg, Lambert

    2007-05-01

    As the identification process is based on the unique patterns of the users, biometric technologies are expected to provide highly secure authentication systems. The existing systems using fingerprints or retina patterns are, however, very vulnerable. One's fingerprints are accessible as soon as the person touches a surface, while a high resolution camera easily captures the retina pattern. Thus, both patterns can easily be "stolen" and forged. Besides, technical considerations decrease the usability of these methods. Due to the direct contact with the finger, the sensor gets dirty, which decreases the authentication success ratio. Aligning the eye with a camera to capture the retina pattern gives an uncomfortable feeling. On the other hand, vein patterns of either the palm of the hand or a single finger offer stable, unique, and repeatable biometric features. A fingerprint-based identification system using Cellular Neural Networks has already been proposed by Gao; his system covers all stages of a typical fingerprint verification procedure, from image preprocessing to feature matching. This paper performs a critical review of the individual algorithmic steps. Notably, the operation of False Feature Elimination is applied only once instead of 3 times, and the number of iterations is limited to 1 for all templates used. Hence, the computational need of the feedback contribution is removed. Consequently, the computational effort is drastically reduced without a notable change in quality. This allows a full integration of the detection mechanism. The system is prototyped on a Xilinx Virtex II Pro P30 FPGA.

  12. The fuzzy Hough Transform-feature extraction in medical images

    SciTech Connect

    Philip, K.P.; Dove, E.L.; Stanford, W.; Chandran, K.B. ); McPherson, D.D.; Gotteiner, N.L. . Dept. of Internal Medicine)

    1994-06-01

    Identification of anatomical features is a necessary step for medical image analysis. Automatic methods for feature identification using conventional pattern recognition techniques typically classify an object as a member of a predefined class of objects, but do not attempt to recover the exact or approximate shape of that object. For this reason, such techniques are usually not sufficient to identify the borders of organs when individual geometry varies in local detail, even though the general geometrical shape is similar. The authors present an algorithm that detects features in an image based on approximate geometrical models. The algorithm is based on the traditional and generalized Hough Transforms but includes notions from fuzzy set theory. The authors use the new algorithm to roughly estimate the actual locations of the boundaries of an internal organ and, from this estimate, to determine a region of interest around the organ. Based on this rough estimate of the border location and the derived region of interest, the authors find the final estimate of the true borders with other image processing techniques. The authors present results demonstrating that the algorithm was successfully used to estimate the approximate location of the chest wall in humans, and of the left ventricular contours of a dog heart obtained from cine-computed tomographic images. The authors use this fuzzy Hough Transform algorithm as part of a larger procedure to automatically identify the myocardial contours of the heart. This algorithm may also allow for more rapid image processing and clinical decision making in other medical imaging applications.

  13. Semantic control of feature extraction from natural scenes.

    PubMed

    Neri, Peter

    2014-02-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  14. A Model for Extracting Personal Features of an Electroencephalogram and Its Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Mitsukura, Yasue; Fukumi, Minoru

    This paper introduces a model for extracting features of an electroencephalogram (EEG) and a method for evaluating the model. In general, it is known that an EEG contains personal features; however, the extraction of these personal features has not been reported. The analyzed frequency components of an EEG can be classified into components that contain a significant number of features and components that do not contain any. From the viewpoint of these feature differences, we propose a model for extracting features of the EEG. The model assumes a latent structure and employs factor analysis, treating the model error as personal error. We consider the EEG feature to be the first factor loading, which is calculated by eigenvalue decomposition. Furthermore, we use a k-nearest neighbor (kNN) algorithm to evaluate the proposed model and the extracted EEG features. In general, the distance metric used is the Euclidean distance, but we believe that the appropriate metric depends on the characteristics of the extracted EEG feature and on the subject. Therefore, depending on the subject, we use one of three distance metrics: Euclidean distance, cosine distance, and the correlation coefficient. Finally, in order to show the effectiveness of the proposed model, we perform a computer simulation using real EEG data.

  15. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference

    NASA Astrophysics Data System (ADS)

    Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi

    Feature extraction for mental fatigue and relaxation states is helpful for understanding the mechanisms of mental fatigue and for finding effective relaxation techniques in sustained work environments. Experimental data on human states are often affected by external and internal factors, which increase the difficulty of extracting common features. The aim of this study is to explore appropriate methods to eliminate individual differences and enhance common features. Mental fatigue and relaxation experiments were executed on 12 subjects. An integrated evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance, and neurophysiological signals, especially EEG signals. With consideration of individual differences, the common features of multiple estimators testify to the effectiveness of relaxation in sustained mental work. Relaxation techniques can be practically applied to prevent the accumulation of mental fatigue and to maintain mental health. The proposed feature extraction methods are widely applicable for obtaining common features and relax the restrictions on subject selection and experiment design.

  16. Embedded prediction in feature extraction: application to single-trial EEG discrimination.

    PubMed

    Hsu, Wei-Yen

    2013-01-01

    In this study, an analysis system embedding neuro-fuzzy prediction in feature extraction is proposed for brain-computer interface (BCI) applications. Wavelet-fractal features combined with neuro-fuzzy predictions are applied for feature extraction in motor imagery (MI) discrimination. The features are extracted from electroencephalography (EEG) signals recorded from participants performing left and right MI. Time-series predictions are performed by training 2 adaptive neuro-fuzzy inference systems (ANFIS) for the respective left and right MI data. Features are then calculated from the difference in the multi-resolution fractal feature vector (MFFV) between the predicted and actual signals over a window of EEG signals. Finally, a support vector machine is used for classification. The performance of the proposed method is evaluated in comparison with the linear adaptive autoregressive (AAR) model and AAR time-series prediction on 6 participants from 2 data sets. The results indicate that the proposed method is promising for MI classification. PMID:23248335

  17. Geometric feature extraction by a multimarked point process.

    PubMed

    Lafarge, Florent; Gimel'farb, Georgy; Descombes, Xavier

    2010-09-01

    This paper presents a new stochastic marked point process for describing images in terms of a finite library of geometric objects. Image analysis based on conventional marked point processes has already produced convincing results, but at the expense of parameter tuning, computing time, and model specificity. Our more general multimarked point process has a simpler parametric setting, yields notably shorter computing times, and can be applied to a variety of applications. Both linear and areal primitives extracted from a library of geometric objects are matched to a given image using a probabilistic Gibbs model, and a Jump-Diffusion process is performed to search for the optimal object configuration. Experiments with remotely sensed images and natural textures show that the proposed approach has good potential. We conclude with a discussion of the insertion of more complex object interactions into the model, studying the compromise between model complexity and efficiency. PMID:20634555

  18. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data

    PubMed Central

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-01-01

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein sub-Golgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308
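
    The resampling, selection, and classification stages map directly onto standard libraries. The sketch below is a minimal pipeline assuming scikit-learn and imbalanced-learn; the synthetic matrix stands in for the CSP and g-gap dipeptide features, and all hyperparameters are illustrative.

        import numpy as np
        from imblearn.over_sampling import SMOTE
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import RFE

        rng = np.random.default_rng(0)
        X = rng.standard_normal((120, 40))           # stand-in feature matrix
        y = np.r_[np.zeros(100), np.ones(20)]        # imbalanced cis/trans labels

        # Balance the classes with SMOTE, then select features by RF-RFE.
        X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
        selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                       n_features_to_select=10).fit(X_res, y_res)

        # Final Random Forest module on the selected features.
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_res[:, selector.support_], y_res)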

  19. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data.

    PubMed

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-01-01

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein sub-Golgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308

  20. Nicotine trained as a negative feature passes the retardation-of-acquisition and summation tests of a conditioned inhibitor

    PubMed Central

    Murray, Jennifer E.; Walker, Andrew W.; Li, Chia; Wells, Nicole R.; Penrod, Rachel D.; Bevins, Rick A.

    2011-01-01

    Nicotine functions as a negative feature in a Pavlovian discriminated goal-tracking task. Whether withholding of responding to the conditional stimulus (CS) reflects nicotine functioning as a conditioned inhibitor is unknown. Accordingly, the present research sought to determine whether nicotine trained as a negative feature passed the retardation-of-acquisition and summation tests, thus characterizing it as a pharmacological (interoceptive) conditioned inhibitor. In the retardation test, rats received either nicotine (0.4 mg/kg) or chlordiazepoxide (5 mg/kg) negative feature training in which the drug state signaled when a 15-sec light CS would not be paired with sucrose; light was paired with sucrose on intermixed saline sessions. Following acquisition of the discrimination, both groups received nicotine CS training in which sucrose was intermittently available on nicotine but not intermixed saline sessions. Acquisition of conditioned responding to the nicotine CS was slower in the nicotine negative feature group than in the chlordiazepoxide negative feature group. In the summation test, rats were assigned to either the nicotine negative feature group or a pseudoconditioning control. In this control, the light CS was paired with sucrose on half the nicotine and half the saline sessions. Both groups also received excitatory training in which a white noise CS was paired with sucrose. The summation test consisted of presenting the white noise in conjunction with nicotine. Conditioned responding evoked by the white noise was decreased in the negative feature but not the pseudoconditioning group. Combined, the results provide the first evidence that an interoceptive stimulus (nicotine) can become a conditioned inhibitor. PMID:21693633

  1. Comparison study of feature extraction methods in structural damage pattern recognition

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Chen, Bo; Swartz, R. Andrew

    2011-04-01

    This paper compares the performance of various feature extraction methods applied to structural sensor measurements acquired in situ from a decommissioned bridge under realistic damage scenarios. Three feature extraction methods are applied to the sensor data to generate feature vectors for normal and damaged structural data patterns. The investigated feature extraction methods include both time-domain and frequency-domain methods. The evaluation of the feature extraction methods is performed by examining the distance values among different patterns, the distance values among feature vectors within the same pattern, and the pattern recognition success rate. The test data used in the comparison study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case data sets, including undamaged cases and pier settlement cases (of different depths), are used to test the separation of feature vectors among different patterns, and the pattern recognition success rate for each feature extraction method is reported.

  2. Singular value decomposition based feature extraction approaches for classifying faults of induction motors

    NASA Astrophysics Data System (ADS)

    Kang, Myeongsu; Kim, Jong-Myon

    2013-12-01

    This paper proposes singular value decomposition (SVD)-based feature extraction methods for fault classification of an induction motor: a short-time energy (STE) plus SVD technique in the time-domain analysis, and a discrete cosine transform (DCT) plus SVD technique in the frequency-domain analysis. To identify induction motor faults early, the extracted features are utilized as the inputs of multi-layer support vector machines (MLSVMs). Since SVMs perform well with the radial basis function (RBF) kernel when categorizing the faults of the induction motor, it is important to explore the impact of the σ value of the RBF kernel on the classification accuracy. Likewise, this paper quantitatively evaluates the classification accuracy for different numbers of features, since the number of features also affects the classification accuracy. According to the experimental results, although SVD-based features are effective in a noiseless environment, the STE plus SVD feature extraction approach is more effective, with and without sensor noise, in terms of classification accuracy than the DCT plus SVD approach. To demonstrate the improved classification of the proposed approach for identifying faults of the induction motor, the proposed SVD-based feature extraction approach is compared with other state-of-the-art methods and yields higher classification accuracies for both noiseless and noisy environments than conventional approaches.
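
    One plausible reading of the STE-plus-SVD feature is sketched below with NumPy: the vibration signal is framed, per-sample short-time energies are stacked into a matrix, and the leading singular values serve as features. The frame length, hop, and feature count are assumptions, not the paper's settings.

        import numpy as np

        def ste_svd_features(x, frame=256, hop=128, k=5):
            # Frame the signal into overlapping segments.
            frames = np.stack([x[i:i + frame]
                               for i in range(0, len(x) - frame + 1, hop)])
            ste = frames ** 2                      # per-sample short-time energy
            # Leading singular values of the energy matrix as a compact feature set.
            s = np.linalg.svd(ste, compute_uv=False)
            return s[:k]

        rng = np.random.default_rng(0)
        print(ste_svd_features(rng.standard_normal(8192)))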

  3. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
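
    As a concrete example of the first method reviewed, PCA-based feature extraction on a gene-expression-like matrix takes only a few lines with scikit-learn; the data dimensions below are arbitrary placeholders.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        expression = rng.standard_normal((60, 2000))     # 60 samples x 2000 genes

        # Reduce thousands of genes to 10 principal components.
        pca = PCA(n_components=10)
        reduced = pca.fit_transform(expression)          # 60 x 10 representation
        print(reduced.shape, pca.explained_variance_ratio_[:3])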

  4. Automated Development of Feature Extraction Tools for Planetary Science Image Datasets

    NASA Astrophysics Data System (ADS)

    Plesko, C.; Brumby, S.; Asphaug, E.

    2003-03-01

    We explore development of feature extraction algorithms for Mars Orbiter Camera narrow angle data using GENIE machine learning software. The algorithms are successful at detecting craters within the images, and generalize well to a new image.

  5. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox often carry important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue in detecting localized faults. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in a wavelet basis. With the proposed method, both the impulse times and the period of the transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified on simulated signals as well as practical gearbox vibration signals. A comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.

  6. Micromotion feature extraction of radar target using tracking pulses with adaptive pulse repetition frequency adjustment

    NASA Astrophysics Data System (ADS)

    Chen, Yijun; Zhang, Qun; Ma, Changzheng; Luo, Ying; Yeo, Tat Soon

    2014-01-01

    In multifunction phased array radar systems, different activities (e.g., tracking, searching, imaging, feature extraction, recognition, etc.) need to be performed simultaneously. To relieve conflicts in radar resource distribution, a micromotion feature extraction method using tracking pulses with adaptive pulse repetition frequencies (PRFs) is proposed in this paper. In this method, the idea of a varying PRF is utilized to solve the frequency-domain aliasing problem of the micro-Doppler signal. With an appropriate atom set construction, the micromotion feature can be extracted and an image of the target can be obtained based on the Orthogonal Matching Pursuit algorithm. In our algorithm, the micromotion feature of a radar target is extracted from the tracking pulses, and the quality of the constructed image is fed back to the radar system to adaptively adjust the PRF of the tracking pulses. Finally, simulation results illustrate the effectiveness of the proposed method.

  7. 3D Feature Point Extraction from LiDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and from online sensor data, and were then matched to improve the positioning accuracy of the vehicles. However, some environments contain only a limited number of poles. 3D feature points are a suitable alternative landmark: they can be assumed to be present in the environment, independent of particular object classes. To match online LiDAR data to a LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on range images of the LiDAR point cloud. Trained with a back propagation algorithm, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature value and the z component of the surface normal, which are calculated directly from the LiDAR point cloud. Subsequently, the feature points extracted on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
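
    The candidate-detection step can be sketched with OpenCV's Shi-Tomasi detector. The random image below merely stands in for a range image rendered from a LiDAR point cloud, and the detector parameters are illustrative; the neural-network filtering stage is not reproduced.

        import cv2
        import numpy as np

        rng = np.random.default_rng(0)
        range_image = (rng.random((128, 512)) * 255).astype(np.uint8)

        # Shi-Tomasi ("good features to track") corner candidates on the range image.
        corners = cv2.goodFeaturesToTrack(range_image, maxCorners=200,
                                          qualityLevel=0.01, minDistance=5)
        candidates = corners.reshape(-1, 2)   # (u, v) pixel coordinates
        print(candidates.shape)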

  8. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2015-12-01

    The vibration signal contains a wealth of sensitive information reflecting the running status of the equipment. Decomposing the signal and extracting the effective information properly is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from problems including mode mixing and low decomposition accuracy. Aiming at these problems, the extreme average envelope decomposition (EAED) method is presented, based on EMD. EAED has three advantages. First, it is computed through a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Second, in order to reduce envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Third, the similar triangle principle is utilized to calculate the times of the extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate single frequency components from a complex signal. By replacing quadratic enveloping with a single envelope, EAED can isolate the three kinds of typical bearing fault characteristic vibration frequency components while requiring fewer decomposition layers, thereby improving the precision of signal decomposition.

  9. Distribution Driven Extraction and Tracking of Features for Time-varying Data Analysis.

    PubMed

    Dutta, Soumya; Shen, Han-Wei

    2016-01-01

    Effective analysis of features in time-varying data is essential in numerous scientific applications. Feature extraction and tracking are two important tasks scientists rely upon to get insights about the dynamic nature of the large scale time-varying data. However, often the complexity of the scientific phenomena only allows scientists to vaguely define their feature of interest. Furthermore, such features can have varying motion patterns and dynamic evolution over time. As a result, automatic extraction and tracking of features becomes a non-trivial task. In this work, we investigate these issues and propose a distribution driven approach which allows us to construct novel algorithms for reliable feature extraction and tracking with high confidence in the absence of accurate feature definition. We exploit two key properties of an object, motion and similarity to the target feature, and fuse the information gained from them to generate a robust feature-aware classification field at every time step. Tracking of features is done using such classified fields which enhances the accuracy and robustness of the proposed algorithm. The efficacy of our method is demonstrated by successfully applying it on several scientific data sets containing a wide range of dynamic time-varying features. PMID:26529731

  10. A Novel Framework for Extracting Visual Feature-Based Keyword Relationships from an Image Database

    NASA Astrophysics Data System (ADS)

    Katsurai, Marie; Ogawa, Takahiro; Haseyama, Miki

    In this paper, a novel framework for extracting visual feature-based keyword relationships from an image database is proposed. Based on the observation that a set of relevant keywords tends to share common visual features, the keyword relationships in a target image database are extracted in the following two steps. First, the relationship between each keyword and its corresponding visual features is modeled by a classifier; this step enables the detection of the visual features related to each keyword. In the second step, the keyword relationships are extracted from the obtained results. Specifically, in order to measure the relevance between two keywords, the proposed method removes the visual features related to one keyword from the training images and monitors the performance of the classifier obtained for the other keyword. This measurement is the main difference from conventional methods, which focus only on keyword co-occurrences or visual similarities. Results of experiments conducted using an image database showed the effectiveness of the proposed method.

  11. [Image Feature Extraction and Discriminant Analysis of Xinjiang Uygur Medicine Based on Color Histogram].

    PubMed

    Hamit, Murat; Yun, Weikang; Yan, Chuanbo; Kutluk, Abdugheni; Fang, Yang; Alip, Elzat

    2015-06-01

    Image feature extraction is an important part of image processing and an important field of research and application in image processing technology. Uygur medicine is a branch of traditional Chinese medicine that is receiving increasing research attention, yet large amounts of Uygur medicine data have not been fully utilized. In this study, we extracted image color histogram features of herbal and zooid medicines of Xinjiang Uygur. First, we performed preprocessing, including image color enhancement, size normalization, and color space transformation. Then we extracted color histogram features and analyzed them with statistical methods. Finally, we evaluated the classification ability of the features by Bayes discriminant analysis. Experimental results showed that high accuracy in Uygur medicine image classification can be obtained using color histogram features. This study should aid content-based medical image retrieval for Xinjiang Uygur medicine. PMID:26485983
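
    A minimal sketch of such a pipeline is given below; the color-space choice, bin counts, and the use of scikit-learn's Gaussian naive Bayes as the discriminant are assumptions for illustration.

        # Color histogram feature plus a Bayes-style discriminant (illustrative only).
        import cv2
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        def color_histogram(bgr_image, bins=(8, 8, 8)):
            hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)  # color space transformation
            hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                                [0, 180, 0, 256, 0, 256])
            return cv2.normalize(hist, None).flatten()  # normalized 512-D feature vector

        # Usage sketch:
        # X = np.array([color_histogram(img) for img in images]); y = labels
        # clf = GaussianNB().fit(X, y)  # stand-in for Bayes discriminant analysis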

  12. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) from bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features, including time, frequency, and time-frequency domain features of bearing accelerometer sensor signals. SR is a regression framework for efficient regularized subspace learning and feature extraction; it uses the least squares method to obtain the best projection direction rather than computing a dense matrix of features, which also gives it an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally on vibration signal data acquired from bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structural information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017

  13. Spectral regression based fault feature extraction for bearing accelerometer sensor signals.

    PubMed

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) from bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features, including time, frequency, and time-frequency domain features of bearing accelerometer sensor signals. SR is a regression framework for efficient regularized subspace learning and feature extraction; it uses the least squares method to obtain the best projection direction rather than computing a dense matrix of features, which also gives it an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally on vibration signal data acquired from bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structural information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017

  14. Feature Extraction on Brain Computer Interfaces using Discrete Dyadic Wavelet Transform: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Gareis, I.; Gentiletti, G.; Acevedo, R.; Rufiner, L.

    2011-09-01

    The purpose of this work is to evaluate different feature extraction alternatives for detecting event-related evoked potential signals in brain-computer interfaces, seeking to minimize the time required and the classification error, in terms of the sensitivity and specificity of the method, as alternatives to coherent averaging. In this context, the results obtained when performing feature extraction with the discrete dyadic wavelet transform using different mother wavelets are presented. A single-layer perceptron was used for classification. The results obtained with and without the wavelet decomposition were compared, showing an improvement in the classification rate, specificity, and sensitivity for the feature vectors obtained using some of the mother wavelets.
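
    For concreteness, a sketch of the feature extraction step follows; the mother wavelet, decomposition level, and the use of raw coefficients as the feature vector are assumptions, not values from the paper.

        # Dyadic wavelet features for one EEG trial (illustrative sketch).
        import numpy as np
        import pywt

        def wavelet_features(eeg_trial, wavelet="db4", level=4):
            """eeg_trial: 1D array, one channel of one trial."""
            coeffs = pywt.wavedec(eeg_trial, wavelet, level=level)
            return np.concatenate(coeffs)  # feature vector for a single-layer perceptron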

  15. Invariant feature extraction for color image mosaic by graph card processing

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Chen, Lin; Li, Deren

    2009-10-01

    Image mosaicking is widely used in remote measurement, battlefield reconnaissance, and panoramic image display. In this project, we develop a general method for video (or image sequence) mosaicking that combines several techniques: invariant feature extraction, GPU processing, multi-color feature selection, and the RANSAC algorithm for homography matching. In order to match the image sequence automatically, independent of rotation, scale, and contrast transforms, local invariant feature descriptors are extracted on the graphics card unit. The GPU mosaic algorithm performs very well compared with the slow CPU version of the mosaic program, at a fraction of the computation time.

  16. Biometric person authentication method using features extracted from pen holding style

    NASA Astrophysics Data System (ADS)

    Hashimoto, Yuuki; Muramatsu, Daigo; Ogata, Hiroyuki

    2010-04-01

    The manner of holding a pen is distinctive among people. Therefore, pen holding style is useful for person authentication. In this paper, we propose a biometric person authentication method using features extracted from images of pen holding style. Images of the pen holding style are captured by a camera, and several features are extracted from the captured images. These features are compared with a reference dataset to calculate dissimilarity scores, and these scores are combined for verification using a three-layer perceptron. Preliminary experiments were performed by using a private database. The proposed system yielded an equal error rate (EER) of 2.6%.

  17. Ultrasonic echo waveshape features extraction based on QPSO-matching pursuit for online wear debris discrimination

    NASA Astrophysics Data System (ADS)

    Xu, Chao; Zhang, Peilin; Wang, Huaiguang; Li, Yining; Lv, Chun

    2015-08-01

    The ultrasonic echoes reflected from debris in lubricant contain much useful information that can represent the size, material, and geometric characteristics of the debris. Our preliminary simulation investigations and physical model analysis show that waveshape features are feasible and essential for discriminating debris in lubricant. An accurate waveshape feature extraction method for debris echoes is presented based on matching pursuit (MP). A dictionary of Gabor functions, which is suitable for ultrasonic signal processing, is adopted for MP. To achieve faster and more accurate MP computation, quantum-behaved particle swarm optimization (QPSO) is introduced to optimize the MP algorithm. The simulation and experimental results reveal that the proposed method can effectively extract the waveshape features of debris echoes and air bubble echoes. Using the extracted waveshape features, debris with different shapes and air bubbles can be distinguished.

  18. A neuro-fuzzy system for extracting environment features based on ultrasonic sensors.

    PubMed

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The goal of this approach is to extract features of the environment in a short time while the vehicle is moving. In particular, the approach shown in this paper focuses on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment was built, and a neuro-fuzzy strategy was used to extract environment features from this simulated model. Several trials were carried out, obtaining satisfactory results in this context. After that, experimental tests were conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160

  19. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    PubMed Central

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The goal of this approach is to extract features of the environment in a short time while the vehicle is moving. In particular, the approach shown in this paper focuses on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment was built, and a neuro-fuzzy strategy was used to extract environment features from this simulated model. Several trials were carried out, obtaining satisfactory results in this context. After that, experimental tests were conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160

  20. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file, and ortho-rectification was then carried out in ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m using more than 10 well-distributed GCPs. In the second stage, we generated the bare earth DEM from the LiDAR point cloud data. In most cases, a bare earth DEM does not represent true ground elevation; hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.

  1. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction

    PubMed Central

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-01-01

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction from time domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances, and the recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time domain signal. Feature vectors are obtained from morphologic features of the time-space domain signals, and a scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracy at a lower computation time than traditional methods: a recognition accuracy of 97.8% can be achieved with a recognition time below 1 s, making it very suitable for online vibration monitoring in Φ-OTDR systems. PMID:26131671

  2. Fault diagnosis of rotating machinery with a novel statistical feature extraction and evaluation method

    NASA Astrophysics Data System (ADS)

    Li, Wei; Zhu, Zhencai; Jiang, Fan; Zhou, Gongbo; Chen, Guoan

    2015-01-01

    Fault diagnosis of rotating machinery is receiving more and more attention. Vibration signals of rotating machinery are commonly analyzed to extract fault features, and the features are identified with classifiers, e.g. artificial neural networks (ANNs) and support vector machines (SVMs). Due to nonlinear behaviors and unknown noises in machinery, the extracted features vary from sample to sample, which may result in false classifications, and it is difficult to analytically ensure the accuracy of fault diagnosis. In this paper, a feature extraction and evaluation method is proposed for fault diagnosis of rotating machinery. Based on the central limit theorem, an extraction procedure is given to obtain statistical features with the help of existing signal processing tools. The obtained statistical features approximately obey normal distributions and can significantly improve the performance of fault classification, as verified by taking ANN and SVM classifiers as examples. The statistical features are then evaluated with a decoupling technique and compared with thresholds to make the decision on fault classification. The proposed evaluation method requires only simple algebraic computation, and the accuracy of fault classification can be analytically guaranteed in terms of the so-called false classification rate (FCR). An experiment is carried out to verify the effectiveness of the proposed method, in which rotor unbalance and the inner race, outer race, and ball faults of bearings are considered.
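
    The central-limit idea can be illustrated with a short sketch; the segmentation scheme and the RMS feature below are assumptions for illustration, not the authors' exact procedure. Averaging a raw feature over many sub-segments yields a statistic that is approximately normally distributed.

        # Averaged statistical feature, approximately Gaussian by the central limit theorem.
        import numpy as np

        def averaged_feature(signal, feature_fn, n_segments=50):
            segments = np.array_split(signal, n_segments)
            return np.mean([feature_fn(s) for s in segments])

        # Example with an RMS feature:
        # rms = lambda s: np.sqrt(np.mean(s ** 2))
        # f = averaged_feature(vibration_record, rms)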

  3. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated magnetic resonance imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant, yet while many feature extraction techniques have been employed, it is still not clear which feature extraction method should be preferred. To help improve the situation, we present the results of a study evaluating the efficiency of different wavelet-transform feature extraction methods in brain MRI abnormality detection. Using T1-weighted brain images, the Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual-Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) are applied to construct the feature pool. Three classifiers, a Support Vector Machine (SVM), K-Nearest Neighbor, and a Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with the SVM yield the highest classification accuracy, demonstrating that wavelet transform features can be informative in this application.

  4. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improving the classification of hyperspectral images. Nonparametric feature extraction methods perform better than parametric ones when the class distributions are not normal-like; moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for the classification of hyperspectral images. The proposed method has no free parameters, and its novelty can be discussed in two parts. First, neighbor samples are specified by using the Parzen window idea for determining the local mean. Second, two new weighting functions are used: samples close to class boundaries receive more weight in the formation of the between-class scatter matrix, while samples close to the class mean receive more weight in the formation of the within-class scatter matrix. The experimental results on three real hyperspectral data sets, Indian Pines, Salinas, and Pavia University, demonstrate that the proposed method performs better than some other nonparametric and parametric feature extraction methods.

  5. Feature extraction based on contourlet transform and its application to surface inspection of metals

    NASA Astrophysics Data System (ADS)

    Ai, Yonghao; Xu, Ke

    2012-11-01

    Surface defects are an important factor affecting the quality of metals. Surface inspection is commonly performed by machine vision systems, for which feature extraction of defects is essential; the rapidity and universality of the algorithm are two crucial issues in practical application. A new feature extraction method based on the contourlet transform and kernel locality preserving projections is proposed to extract sufficient and effective features from metal surface images. Image information in certain directions is important to the recognition of defects, and the contourlet transform is introduced for its flexible direction setting. Images of metal surfaces are decomposed into multiple directional subbands with the contourlet transform; features of all subbands are then extracted and combined into a high-dimensional feature vector, which is reduced to a low-dimensional feature vector by kernel locality preserving projections. The method is tested with a Brodatz database and two surface defect databases from industrial surface-inspection systems for continuous casting slabs and aluminum strips. Experimental results show that the proposed method performs better than the other three methods in accuracy and efficiency. The total classification rates for surface defects of continuous casting slabs and aluminum strips are up to 93.55% and 92.5%, respectively.

  6. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, were used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file, and ortho-rectification was then carried out in ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m using more than 10 well-distributed GCPs. In the second stage, we generated the bare earth DEM from the LiDAR point cloud data. In most cases, a bare earth DEM does not represent true ground elevation; hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.

  7. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set.

    PubMed

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    Information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, it is hard to manage biomedical data extraction manually because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area of biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques; in the last decade, the focus has shifted to hybrid approaches, which show better results. This research presents a hybrid feature set for the classification of relations between biomedical entities. The main contribution of this research lies in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. In conclusion, our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus. PMID:26347797

  8. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set

    PubMed Central

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    Information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, it is hard to manage biomedical data extraction manually because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area of biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques; in the last decade, the focus has shifted to hybrid approaches, which show better results. This research presents a hybrid feature set for the classification of relations between biomedical entities. The main contribution of this research lies in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. In conclusion, our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus. PMID:26347797

  9. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents caused by human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and vehicles) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.

  10. Focal-plane CMOS wavelet feature extraction for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Olyaei, Ashkan; Genov, Roman

    2005-09-01

    Kernel-based pattern recognition paradigms such as support vector machines (SVM) require computationally intensive feature extraction methods for high-performance real-time object detection in video. The CMOS sensory parallel processor architecture presented here computes delta-sigma (ΔΣ)-modulated Haar wavelet transform on the focal plane in real time. The active pixel array is integrated with a bank of column-parallel first-order incremental oversampling analog-to-digital converters (ADCs). Each ADC performs distributed spatial focal-plane sampling and concurrent weighted average quantization. The architecture is benchmarked in SVM face detection on the MIT CBCL data set. At 90% detection rate, first-level Haar wavelet feature extraction yields a 7.9% reduction in the number of false positives when compared to classification with no feature extraction. The architecture yields 1.4 GMACS simulated computational throughput at SVGA imager resolution at 8-bit output depth.

  11. Evaluation of various feature extraction methods for landmine detection using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Hamdi, Anis; Frigui, Hichem

    2012-06-01

    Hidden Markov Models (HMM) have proved to be effective for detecting buried landmines using data collected by a moving-vehicle-mounted ground penetrating radar (GPR). The general framework for a HMM-based landmine detector consists of building a HMM model for mine signatures and a HMM model for clutter signatures. A test alarm is assigned a confidence proportional to the probability of that alarm being generated by the mine model and inversely proportional to its probability in the clutter model. The HMM models are built from features extracted from GPR training signatures. These features are expected to capture the salient properties of the 3-dimensional alarms in a compact representation. The baseline HMM framework for landmine detection is based on gradient features. It models the time-varying behavior of GPR signals, encoded using edge direction information, to compute the likelihood that a sequence of measurements is consistent with a buried landmine. In particular, the HMM mine model learns the hyperbolic shape associated with the signature of a buried mine through three states that correspond to the succession of an increasing edge, a flat edge, and a decreasing edge. Recently, for the same application, other features have been used with different classifiers. In particular, the Edge Histogram Descriptor (EHD) has been used within a K-nearest neighbor classifier. Another descriptor is based on Gabor features and has been used within a discrete HMM classifier. A third feature, closely related to the EHD, is the bar histogram feature, which has been used within a neural network classifier for handwritten word recognition. In this paper, we propose an evaluation of the HMM-based landmine detection framework with several feature extraction techniques. We adapt and evaluate the EHD, Gabor, bar, and baseline gradient feature extraction methods, and we compare the performance of these features using a large and diverse GPR data collection.

  12. A novel hybrid approach for the extraction of linear/cylindrical features from laser scanning data

    NASA Astrophysics Data System (ADS)

    Lari, Z.; Habib, A.

    2013-10-01

    The collected point cloud must, however, undergo further processing before it can be utilized in diverse civil, industrial, and military applications, and different processing techniques have consequently been implemented for the extraction of low-level features from such data. Linear/cylindrical features are among the most important primitives that can be extracted from laser scanning data, especially data collected in industrial sites and urban areas. This paper presents a novel approach for the identification, parameterization, and segmentation of these features in a laser point cloud. In the first step of the proposed approach, the points belonging to linear/cylindrical features are detected, and their appropriate representation models are chosen based on principal component analysis of their local neighborhoods. The approximate direction and position parameters of the identified linear/cylindrical features are then refined using an iterative line/cylinder fitting procedure. A parameter-domain segmentation method is finally applied to isolate the points that belong to individual linear/cylindrical features in the direction and position attribute spaces, respectively. Experimental results from real datasets demonstrate the feasibility of the proposed approach for the extraction of linear/cylindrical features from laser scanning data.
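
    The identification step can be sketched as follows; the eigenvalue-ratio test and its threshold are illustrative assumptions standing in for the paper's PCA-based analysis of the local neighborhood.

        # PCA of a point's local neighborhood: one dominant eigenvalue suggests a linear feature.
        import numpy as np

        def is_linear_neighborhood(neighbors, ratio=0.9):
            """neighbors: (N, 3) array of 3D points around the query point."""
            centered = neighbors - neighbors.mean(axis=0)
            eigvals = np.linalg.eigvalsh(np.cov(centered.T))  # ascending order
            return eigvals[2] / eigvals.sum() > ratio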

  13. Hybrid facial image feature extraction and recognition for non-invasive chronic fatigue syndrome diagnosis.

    PubMed

    Chen, Yunhua; Liu, Weijian; Zhang, Ling; Yan, Mingyu; Zeng, Yanjun

    2015-09-01

    Due to the absence of reliable biochemical markers, the diagnosis of chronic fatigue syndrome (CFS) currently relies mainly on clinical symptoms and on the experience and skill of doctors. To improve objectivity and reduce work intensity, a hybrid facial feature is proposed. First, several kinds of appearance features are identified in different facial regions according to clinical observations of traditional Chinese medicine experts, including vertical striped wrinkles on the forehead, puffiness of the lower eyelid, the skin colour of the cheeks, nose, and lips, and the shape of the mouth corner. Such features are then extracted and systematically combined to form a hybrid feature. We divide the face into several regions based on twelve active appearance model (AAM) feature points and ten straight lines across them. Then, Gabor wavelet filtering, CIELab color components, threshold-based segmentation, and curve fitting are applied to extract features, and the Gabor features are reduced by a manifold-preserving projection method. Finally, an AdaBoost-based score-level fusion of the multi-modal features is performed after classification of each feature. Although the subjects involved in this trial are exclusively Chinese, the method achieves an average accuracy of 89.04% on the training set and 88.32% on the testing set under K-fold cross-validation. In addition, the method possesses desirable sensitivity and specificity for CFS prediction. PMID:26117650

  14. miRNAfe: A comprehensive tool for feature extraction in microRNA prediction.

    PubMed

    Yones, Cristian A; Stegmayer, Georgina; Kamenetzky, Laura; Milone, Diego H

    2015-12-01

    miRNAfe is a comprehensive tool to extract features from RNA sequences. It is freely available as a web service, providing a single access point to almost all state-of-the-art feature extraction methods used today in a variety of works by different authors. It has a very simple user interface: the user only needs to load a file containing the input sequences and select the features to extract. As a result, the user obtains a text file with the extracted features, which can be used to analyze the sequences or as input to miRNA prediction software. The tool can calculate up to 80 features, many of which are multidimensional arrays. In order to simplify the web interface, the features have been divided into six pre-defined groups, providing information about: primary sequence, secondary structure, thermodynamic stability, statistical stability, conservation between genomes of different species, and substring analysis of the sequences. Additionally, pre-trained classifiers are provided for prediction in different species. All algorithms used to extract the features have been validated by comparing their results with those obtained from the original authors' software. The source code is freely available for academic use under the GPL license at http://sourceforge.net/projects/sourcesinc/files/mirnafe/0.90/. User-friendly access is provided as a web interface at http://fich.unl.edu.ar/sinc/web-demo/mirnafe/. A more configurable web interface can be accessed at http://fich.unl.edu.ar/sinc/web-demo/mirnafe-full/. PMID:26499212

  15. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely the Q- and Hotelling's $T^2$ statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
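
    A rough sketch of the transform's construction, under stated assumptions, follows: the LP coefficients are fit by least squares, the impulse response of the resulting all-pole filter is arranged into a matrix (here a Hankel matrix, as an assumption), and its left singular vectors define the mapping. Model order and lengths are illustrative, not the paper's values.

        # LP-SVD sketch: LP fit, all-pole impulse response, SVD of its Hankel matrix.
        import numpy as np
        from scipy.signal import lfilter
        from scipy.linalg import hankel, svd

        def lp_svd_basis(x, order=10, n_imp=64):
            # Least-squares linear prediction: x[n] ~ sum_k a_k * x[n-k], x a 1D array
            X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            # Impulse response of the all-pole filter 1 / (1 - sum_k a_k z^{-k})
            impulse = np.zeros(n_imp); impulse[0] = 1.0
            h = lfilter([1.0], np.concatenate([[1.0], -a]), impulse)
            U, _, _ = svd(hankel(h[:n_imp // 2], h[n_imp // 2 - 1:]))
            return U  # columns are candidate projection directions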

  16. Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.

    PubMed

    Segovia, F; Górriz, J M; Ramírez, J; Phillips, C; For The Alzheimer's Disease Neuroimaging Initiative

    2016-01-01

    Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer-aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow the determination of new ROIs and take advantage of the huge amount of information contained in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on factorization of the data (such as non-negative matrix factorization), and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach, and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross-validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods. PMID:26567734
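
    The second combination approach (an ensemble with majority voting) might look like the scikit-learn sketch below; PCA and NMF stand in for the variance- and factorization-based extractors, Haralick features are omitted, and all component counts are assumptions.

        # Ensemble of per-feature-set classifiers with a majority vote (illustrative).
        from sklearn.decomposition import PCA, NMF
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC
        from sklearn.ensemble import VotingClassifier

        ensemble = VotingClassifier(
            estimators=[
                ("pca_svm", make_pipeline(PCA(n_components=20), SVC())),
                ("nmf_svm", make_pipeline(NMF(n_components=20, max_iter=500), SVC())),
                ("raw_svm", SVC()),
            ],
            voting="hard",  # final decision by majority voting
        )
        # ensemble.fit(X_train, y_train)  # X must be non-negative for the NMF branch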

  17. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  18. Extraction, modelling, and use of linear features for restitution of airborne hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lee, Changno; Bethel, James S.

    This paper presents an approach for the restitution of airborne hyperspectral imagery using linear features. The approach consists of semi-automatic line extraction and mathematical modelling of the linear features. First, each line is approximately determined manually and then refined using dynamic programming. The extracted lines can then be used as control data when ground information for the lines is available, or as constraints under simple assumptions about the ground information for the lines. The experimental results are presented numerically, in tables of RMS residuals of check points, as well as visually, in ortho-rectified images.

  19. Transient signal analysis based on Levenberg-Marquardt method for fault feature extraction of rotating machines

    NASA Astrophysics Data System (ADS)

    Wang, Shibin; Cai, Gaigai; Zhu, Zhongkui; Huang, Weiguo; Zhang, Xingwu

    2015-03-01

    Localized faults in rotating machines tend to result in shocks and thus excite transient components in vibration signals. An iterative extraction method is proposed for transient signal analysis based on transient modeling and parameter identification via the Levenberg-Marquardt (LM) method, and eventually for fault feature extraction. In each iteration, a double-side asymmetric transient model is first built based on a parametric Morlet wavelet, and the LM method is then introduced to identify the parameters of the model. As the iterative procedure proceeds, transients are extracted from the vibration signal one by one, and the Wigner-Ville Distribution is applied to obtain a time-frequency representation with satisfactory energy concentration but without cross-terms. A simulated signal is used to test the performance of the proposed method in transient extraction, and the comparison study shows that the proposed method outperforms ensemble empirical mode decomposition and spectral kurtosis in extracting transient features. Finally, the effectiveness of the proposed method is verified by applications in transient analysis for bearing and gear fault feature extraction.
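
    One iteration of such a scheme can be sketched as below; a symmetric Morlet-like atom is used for brevity in place of the paper's double-side asymmetric model, and the parameterization is an assumption.

        # Fit one transient atom by Levenberg-Marquardt, then subtract it.
        import numpy as np
        from scipy.optimize import least_squares

        def morlet_atom(t, p):
            A, t0, f, sigma, phi = p
            return A * np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) \
                     * np.cos(2 * np.pi * f * (t - t0) + phi)

        def extract_transient(t, x, p0):
            res = least_squares(lambda p: morlet_atom(t, p) - x, p0, method="lm")
            atom = morlet_atom(t, res.x)
            return atom, x - atom  # extracted transient and residual for the next pass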

  20. Methods of EEG signal features extraction using linear analysis in frequency and time-frequency domains.

    PubMed

    Al-Fahoum, Amjed S; Al-Fraihat, Ausilah A

    2014-01-01

    Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract features from EEG signals; among these methods are time-frequency distributions (TFD), the fast Fourier transform (FFT), eigenvector methods (EM), the wavelet transform (WT), and autoregressive methods (ARM). In general, the analysis of EEG signals has been the subject of several studies because of its ability to yield an objective mode of recording brain stimulation, which is widely used in brain-computer interface research with applications in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, are to discuss some conventional methods of EEG feature extraction, to compare their performance on specific tasks, and finally to recommend the most suitable method for feature extraction based on performance. PMID:24967316
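
    As a concrete instance of the frequency-domain family surveyed here, the sketch below computes average FFT power in the classical EEG bands; the band edges are the usual conventions, not values from the paper.

        # FFT band-power features for one EEG channel (illustrative).
        import numpy as np

        BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

        def band_powers(eeg, fs):
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
            return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
                    for name, (lo, hi) in BANDS.items()}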

  1. Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Ponnaluru, Gopi Krishna

    2006-01-01

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique; features are any parameters extracted from the processed measurement data in order to enhance damage detection. The damage feature extraction capability was studied extensively by analyzing various simulation results. Of practical significance in structural health monitoring is that detection of small-size defects at early stages is always desirable, so the magnitude of the changes in the structure's response due to these small defects was determined to establish the level of accuracy needed in the experimental methods. Arranging a fine, extensive sensor network to measure the data required for detection would in principle remove this limit, but it is difficult to place an extensive number of sensors on a structure; therefore, an investigation was conducted using the measurements of a coarse sensor network. White and pink noise, which cover most of the frequency ranges typically encountered in the measuring devices used (e.g., accelerometers, strain gauges, etc.), were added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and the noisy damage parameter values were used to study signal feature reconstruction using wavelets. The enhancement of the feature extraction capability was successfully achieved by the wavelet theory.

  2. Methods of EEG Signal Features Extraction Using Linear Analysis in Frequency and Time-Frequency Domains

    PubMed Central

    Al-Fahoum, Amjed S.; Al-Fraihat, Ausilah A.

    2014-01-01

    Technically, a feature represents a distinguishing property, a recognizable measurement, and a functional component obtained from a section of a pattern. Extracted features are meant to minimize the loss of important information embedded in the signal. In addition, they simplify the amount of resources needed to describe a huge set of data accurately. This is necessary to minimize the complexity of implementation, to reduce the cost of information processing, and to cancel the potential need to compress the information. More recently, a variety of methods have been widely used to extract features from EEG signals; among these methods are time-frequency distributions (TFD), the fast Fourier transform (FFT), eigenvector methods (EM), the wavelet transform (WT), and autoregressive methods (ARM). In general, the analysis of EEG signals has been the subject of several studies because of its ability to yield an objective mode of recording brain stimulation, which is widely used in brain-computer interface research with applications in medical diagnosis and rehabilitation engineering. The purposes of this paper, therefore, are to discuss some conventional methods of EEG feature extraction, to compare their performance on specific tasks, and finally to recommend the most suitable method for feature extraction based on performance. PMID:24967316

  3. Application of multi-scale feature extraction to surface defect classification of hot-rolled steels

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Ai, Yong-hao; Wu, Xiu-yong

    2013-01-01

    Feature extraction is essential to the classification of surface defect images. The defects of hot-rolled steels are distributed in different directions; therefore, methods of multi-scale geometric analysis (MGA) were employed to decompose the image into several directional subbands at several scales. The statistical features of each subband were then calculated to produce a high-dimensional feature vector, which was reduced to a lower-dimensional vector by graph embedding algorithms. Finally, a support vector machine (SVM) was used for defect classification. The multi-scale feature extraction method was implemented via the curvelet transform and kernel locality preserving projections (KLPP). Experimental results show that the proposed method is effective for classifying the surface defects of hot-rolled steels, with a total classification rate of up to 97.33%.

  4. Feature extraction with deep neural networks by a generalized discriminant analysis.

    PubMed

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used. PMID:24805043

  5. Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters

    NASA Astrophysics Data System (ADS)

    Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun

    2015-05-01

    We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters $T_{\rm eff}$, log g, and [Fe/H]. “Linearly supporting” means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method LARSbs, based on the least absolute shrinkage and selection operator; third, estimate the atmospheric parameters $T_{\rm eff}$, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate $T_{\rm eff}$, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for $\log T_{\rm eff}$ (83 K for $T_{\rm eff}$), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for $\log T_{\rm eff}$ (32 K for $T_{\rm eff}$), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].
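
    A condensed sketch of the three-step scheme follows; a plain discrete wavelet transform stands in for the wavelet packet, and scikit-learn's LassoCV stands in for the paper's feature detector, so this illustrates the structure rather than the exact method.

        # Wavelet coefficients -> sparse feature selection -> linear parameter model.
        import numpy as np
        import pywt
        from sklearn.linear_model import LassoCV, LinearRegression

        def wavelet_coeffs(flux):
            return np.concatenate(pywt.wavedec(flux, "db4", level=5))

        def fit_teff_estimator(spectra, teff):
            """spectra: list of equal-length 1D flux arrays; teff: target values."""
            X = np.array([wavelet_coeffs(s) for s in spectra])
            selector = LassoCV(cv=5).fit(X, teff)       # detect supporting features
            support = selector.coef_ != 0
            model = LinearRegression().fit(X[:, support], teff)  # linear estimate
            return support, model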

  6. The Acquisition of Consonant Feature Sequences: Harmony, Metathesis, and Deletion Patterns in Phonological Development

    ERIC Educational Resources Information Center

    Gerlach, Sharon Ruth

    2010-01-01

    This dissertation examines three processes affecting consonants in child speech: harmony (long-distance assimilation) involving major place features as in "coat" [kouk]; long-distance metathesis as in "cup" [p[wedge]k]; and initial consonant deletion as in "fish" [is]. These processes are unattested in adult phonology, leading to proposals for…

  7. Automatic geomorphic feature extraction from lidar in flat and engineered landscapes

    NASA Astrophysics Data System (ADS)

    Passalacqua, P.; Belmont, P.; Foufoula, E.

    2011-12-01

    High resolution topography derived from light detection and ranging (lidar) technology enables detailed geomorphic observations to be made on spatially extensive landforms in a way that was previously not possible. This provides new opportunities to study the spatial organization of landscapes and channel network features, increase the accuracy of environmental transport models and inform decisions for targeting conservation practices. However, with the opportunity of increased resolution topography data over large areas come formidable challenges in terms of automatic geomorphic feature extraction, analysis, and interpretation. This is particularly true in low relief landscapes since the topographic gradients are low and both the landscape and the channel network are often heavily modified by humans. Recently, a comprehensive framework was developed for the automatic extraction of geomorphic features (channel network, channel heads and channel morphology) from high resolution topographic data by combining nonlinear diffusion and geodesic minimization principles. The feature extraction method was packaged in a software called GeoNet (which is publicly available). In this talk, we focus on the application of GeoNet to a variety of landscapes, and, in particular, to flat and engineered landscapes where the method has been recently extended to perform automated channel morphometric analysis (including extraction of cross-sections, detection of bank locations, and identification of geomorphic bankfull water surface elevation) and to differentiate between natural channels and manmade structures (including artificial ditches, roads and bridges across channels).

  8. SD-MSAEs: Promoter recognition in human genome based on deep feature extraction.

    PubMed

    Xu, Wenxuan; Zhang, Li; Lu, Yaping

    2016-06-01

    The prediction and recognition of promoters in the human genome play an important role in DNA sequence analysis. Entropy, in the Shannon sense, is a versatile tool of information theory for detailed bioinformatic analysis. Relative entropy estimator methods based on statistical divergence (SD) are used to extract meaningful features to distinguish different regions of DNA sequences. In this paper, we choose context features and use a set of SD methods to select the most effective n-mers for distinguishing promoter regions from other DNA regions in the human genome. From the total possible combinations of n-mers, we obtain four sparse distributions based on promoter and non-promoter training samples. The informative n-mers are selected by optimizing the differentiating extents of these distributions. In particular, we combine the advantages of statistical divergence and multiple sparse auto-encoders (MSAEs) in deep learning to extract deep features for promoter recognition. We then apply multiple SVMs and a decision model to construct a human promoter recognition method called SD-MSAEs. The framework is flexible, in that it can freely integrate new feature extraction or classification models. Experimental results show that our method has high sensitivity and specificity. PMID:27018214
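
    The divergence-based selection step might be prototyped as follows; the choice of symmetric Kullback-Leibler divergence, n = 4, and Laplace smoothing are assumptions for illustration.

        # Score n-mers by the divergence of their frequencies between classes.
        # Assumes uppercase ACGT sequences.
        import numpy as np
        from itertools import product

        def nmer_freqs(seqs, n=4):
            kmers = ["".join(p) for p in product("ACGT", repeat=n)]
            counts = dict.fromkeys(kmers, 1)  # Laplace smoothing
            for s in seqs:
                for i in range(len(s) - n + 1):
                    k = s[i:i + n]
                    if k in counts:
                        counts[k] += 1
            v = np.array([counts[k] for k in kmers], dtype=float)
            return v / v.sum(), kmers

        def divergence_scores(promoters, non_promoters, n=4):
            p, kmers = nmer_freqs(promoters, n)
            q, _ = nmer_freqs(non_promoters, n)
            sym_kl = p * np.log(p / q) + q * np.log(q / p)
            return dict(zip(kmers, sym_kl))  # high score = informative n-mer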

  9. Effects of a Rhodiola rosea L. extract on acquisition and expression of morphine tolerance and dependence in mice.

    PubMed

    Mattioli, Laura; Perfumi, Marina

    2011-03-01

    This study investigated the effect of a Rhodiola rosea L. extract on the acquisition and expression of morphine tolerance and dependence in mice. To this end, animals received repeated administrations of morphine (10 mg/kg, subcutaneous) twice daily for five or six days, in order to render them tolerant or dependent. Rhodiola rosea L. extract (0, 10, 15 and 20 mg/kg) was administered by the intragastric route 60 min prior to each morphine injection (for acquisition) or prior to the last injection of morphine or naloxone on the test day (for tolerance or dependence expression, respectively). Morphine tolerance was evaluated by testing its analgesic effect in the tail flick test on the 1st and 5th days. Morphine dependence was evaluated by counting the number of withdrawal signs (jumping, rearing, forepaw tremor, teeth chatter) after naloxone injection (5 mg/kg; intraperitoneal) on the test day (day 6). Results showed that the Rhodiola rosea L. extract significantly reduced the expression of morphine tolerance, while it was ineffective in modulating its acquisition. Conversely, the extract significantly and dose-dependently attenuated both the development and the expression of morphine dependence after chronic or acute administration. These data suggest that Rhodiola rosea L. may have therapeutic potential for the treatment of opioid addiction in humans. PMID:20142299

  10. Comparative analysis of feature extraction (2D FFT and wavelet) and classification (Lp metric distances, MLP NN, and HNeT) algorithms for SAR imagery

    NASA Astrophysics Data System (ADS)

    Sandirasegaram, Nicholas; English, Ryan

    2005-05-01

    The performance of several combinations of feature extraction and target classification algorithms is analyzed for Synthetic Aperture Radar (SAR) imagery using the standard Moving and Stationary Target Acquisition and Recognition (MSTAR) evaluation method. For feature extraction, 2D Fast Fourier Transform (FFT) is used to extract Fourier coefficients (frequency information) while 2D wavelet decomposition is used to extract wavelet coefficients (time-frequency information), from which subsets of characteristic in-class "invariant" coefficients are developed. Confusion matrices and Receiver Operating Characteristic (ROC) curves are used to evaluate and compare combinations of these characteristic coefficients with several classification methods, including Lp metric distances, a Multi Layer Perceptron (MLP) Neural Network (NN) and AND Corporation's Holographic Neural Technology (HNeT) classifier. The evaluation method examines the trade-off between correct detection rate and false alarm rate for each combination of feature-classifier systems. It also measures correct classification, misclassification and rejection rates for a 90% detection rate. Our analysis demonstrates the importance of feature and classifier selection in accurately classifying new target images.
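
    A minimal sketch of the Fourier half of such a pipeline follows: a fixed low-frequency block of 2D FFT coefficient magnitudes serves as the feature vector, and an Lp metric distance compares two image chips. The fixed central block and its size are illustrative assumptions, not the paper's exact coefficient selection.

        import numpy as np

        def fft_features(chip, block=8):
            # Magnitudes of the low-frequency 2D Fourier coefficients taken
            # from a fixed central block, so that feature positions line up
            # across chips.
            spectrum = np.fft.fftshift(np.fft.fft2(chip))
            cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
            h = block // 2
            return np.abs(spectrum[cy - h:cy + h, cx - h:cx + h]).ravel()

        def lp_distance(f1, f2, p=2):
            # Lp metric distance used as a nearest-neighbor classifier rule.
            return np.sum(np.abs(f1 - f2) ** p) ** (1.0 / p)

        chip_a, chip_b = np.random.rand(64, 64), np.random.rand(64, 64)
        print(lp_distance(fft_features(chip_a), fft_features(chip_b), p=1))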

  11. Feature extraction using adaptive multiwavelets and synthetic detection index for rotor fault diagnosis of rotating machinery

    NASA Astrophysics Data System (ADS)

    Lu, Na; Xiao, Zhihuai; Malik, O. P.

    2015-02-01

    State identification to diagnose the condition of rotating machinery is often converted to a classification problem of values of non-dimensional symptom parameters (NSPs). To improve the sensitivity of the NSPs to the changes in machine condition, a novel feature extraction method based on adaptive multiwavelets and the synthetic detection index (SDI) is proposed in this paper. Based on the SDI maximization principle, optimal multiwavelets are searched by genetic algorithms (GAs) from an adaptive multiwavelets library and used for extracting fault features from vibration signals. By the optimal multiwavelets, more sensitive NSPs can be extracted. To examine the effectiveness of the optimal multiwavelets, conventional methods are used for comparison study. The obtained NSPs are fed into K-means classifier to diagnose rotor faults. The results show that the proposed method can effectively improve the sensitivity of the NSPs and achieve a higher discrimination rate for rotor fault diagnosis than the conventional methods.

  12. Influence of confocal scanning laser microscopy specific acquisition parameters on the detection and matching of speeded-up robust features.

    PubMed

    Stanciu, Stefan G; Hristu, Radu; Stanciu, George A

    2011-04-01

    The robustness and distinctiveness of local features to various object or scene deformations and to modifications of the acquisition parameters play key roles in the design of many computer vision applications. In this paper we present the results of our experiments on the behavior of a recently developed technique for local feature detection and description, Speeded-Up Robust Features (SURF), regarding image modifications specific to Confocal Scanning Laser Microscopy (CSLM). We analyze the repeatability of detected SURF keypoints and the precision-recall of their matching under modifications of three important CSLM parameters: pinhole aperture, photomultiplier (PMT) gain and laser beam power. During any investigation by CSLM these three parameters have to be modified, individually or together, in order to optimize the contrast and the Signal-to-Noise Ratio (SNR); they are also inherently modified when changing the microscope objective. Our experiments show that a substantial number of SURF features can be detected at the same physical locations in images collected at different values of the pinhole aperture, PMT gain and laser beam power, and can subsequently be successfully matched based on their descriptors. In the final part, we exemplify the potential of SURF in CSLM imaging by presenting a SURF-based computer vision application that deals with the mosaicing of images collected by this technique. PMID:21349249
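
    A repeatability experiment of this kind can be sketched with OpenCV, whose contrib package (opencv-contrib-python) provides a SURF implementation. The file names below are hypothetical placeholders for two acquisitions of the same field at different pinhole apertures, and the 0.7 ratio-test threshold is a conventional choice rather than a value from the paper.

        import cv2

        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

        # hypothetical file names for two acquisitions of the same field
        img_a = cv2.imread("field_pinhole_1AU.png", cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread("field_pinhole_2AU.png", cv2.IMREAD_GRAYSCALE)

        kp_a, des_a = surf.detectAndCompute(img_a, None)
        kp_b, des_b = surf.detectAndCompute(img_b, None)

        # Match descriptors across acquisitions and keep those passing
        # Lowe's ratio test; the surviving fraction approximates keypoint
        # repeatability under the changed acquisition parameter.
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        print(len(good), "stable matches out of", len(kp_a), "keypoints")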

  13. Features extraction of EMG signal using time domain analysis for arm rehabilitation device

    NASA Astrophysics Data System (ADS)

    Jali, Mohd Hafiz; Ibrahim, Iffah Masturah; Sulaima, Mohamad Fani; Bukhari, W. M.; Izzuddin, Tarmizi Ahmad; Nasir, Mohamad Na'im

    2015-05-01

    A rehabilitation device acts as an exoskeleton for people who have lost the function of a limb. An arm rehabilitation device may support the rehabilitation program of those who suffer from arm disability. The device used to facilitate the tasks of the program should improve the electrical activity in the motor unit and minimize the mental effort of the user. Electromyography (EMG) is a technique for analyzing the presence of electrical activity in musculoskeletal systems. In a disabled person, the electrical activity in the muscles fails to contract the muscle for movement. To prevent paralyzed muscles from becoming spastic, the device should produce the required movements with minimal mental effort. Therefore, the rehabilitation device should be driven by an analysis of the surface EMG signals of able-bodied people that can be implemented in the device. The signals were collected according to the procedures for surface electromyography for non-invasive assessment of muscles (SENIAM) and used to set the movement patterns of the arm rehabilitation device. The filtered EMG signal was reduced to the time-domain features Standard Deviation (STD), Mean Absolute Value (MAV) and Root Mean Square (RMS). Feature extraction is important for obtaining a reduced feature vector that represents the signal with little error. To determine the best features for any movement, several extraction trials were run and the features with the smallest errors were selected. The resulting features can be used in future work on real-time rehabilitation control.
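
    The three time-domain features named above have standard definitions; a minimal NumPy sketch, assuming a single pre-filtered analysis window, is:

        import numpy as np

        def emg_time_features(x):
            # features computed over one window of a filtered surface EMG signal
            std = np.std(x)                 # Standard Deviation (STD)
            mav = np.mean(np.abs(x))        # Mean Absolute Value (MAV)
            rms = np.sqrt(np.mean(x ** 2))  # Root Mean Square (RMS)
            return np.array([std, mav, rms])

        # synthetic 200-sample window standing in for real EMG data
        window = 0.1 * np.random.randn(200)
        print(emg_time_features(window))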

  14. Feature Extraction from Simulations and Experiments: Preliminary Results Using a Fluid Mix Problem

    SciTech Connect

    Kamath, C; Nguyen, T

    2005-01-04

    Code validation, or comparing the output of computer simulations to experiments, is necessary to determine which simulation is a better approximation to an experiment. It can also be used to determine how the input parameters in a simulation can be modified to yield output that is closer to the experiment. In this report, we discuss our experiences in the use of image processing techniques for extracting features from 2-D simulations and experiments. These features can be used in comparing the output of simulations to experiments, or to other simulations. We first describe the problem domain and the data. We next explain the need for cleaning or denoising the experimental data and discuss the performance of different techniques. Finally, we discuss the features of interest and describe how they can be extracted from the data. The focus in this report is on extracting features from experimental and simulation data for the purpose of code validation; the actual interpretation of these features and their use in code validation is left to the domain experts.

  15. Multimodal approach to feature extraction for image and signal learning problems

    NASA Astrophysics Data System (ADS)

    Eads, Damian R.; Williams, Steven J.; Theiler, James; Porter, Reid; Harvey, Neal R.; Perkins, Simon J.; Brumby, Steven P.; David, Nancy A.

    2004-01-01

    We present ZEUS, an algorithm for extracting features from images and time series signals. ZEUS is designed to solve a variety of machine learning problems, including time series forecasting, signal classification, and image and pixel classification of multispectral and panchromatic imagery. An evolutionary approach is used to extract features from a near-infinite space of possible combinations of nonlinear operators. Each problem type (i.e., signal or image, regression or classification, multiclass or binary) has its own set of primitive operators. We employ fairly generic operators, but note that the choice of which operators to use provides an opportunity to consult with a domain expert. Each feature is produced from a composition of some subset of these primitive operators. The fitness for an evolved set of features is given by the performance of a back-end classifier (or regressor) on training data. We demonstrate our multimodal approach to feature extraction on a variety of problems in remote sensing. The performance of this algorithm will be compared to standard approaches, and the relative benefit of various aspects of the algorithm will be investigated.

  16. A Segmentation-Based Method to Extract Structural and Evolutionary Features for Protein Fold Recognition.

    PubMed

    Dehzangi, Abdollah; Paliwal, Kuldip; Lyons, James; Sharma, Alok; Sattar, Abdul

    2014-01-01

    Protein fold recognition (PFR) is considered an important step towards the protein structure prediction problem. Despite all the efforts that have been made so far, finding an accurate and fast computational approach to solve the PFR problem remains challenging for bioinformatics and computational biology. In this study, we propose a segmentation-based feature extraction technique to provide local evolutionary information embedded in the position specific scoring matrix (PSSM) and structural information embedded in the predicted secondary structure of proteins using SPINE-X. We also employ the concept of occurrence features to extract global discriminatory information from PSSM and SPINE-X. By applying a support vector machine (SVM) to our extracted features, we improve protein fold prediction accuracy by 7.4 percent over the best results reported in the literature. We also report 73.8 percent prediction accuracy for a data set consisting of proteins with less than 25 percent sequence similarity and 80.7 percent prediction accuracy for a data set with proteins belonging to 110 folds with less than 40 percent sequence similarity. We also investigate the relation between the number of folds and the number of features being used, and show that the number of features should be increased to obtain better protein fold prediction results when the number of folds is relatively large. PMID:26356019

  17. Neurocognitive disorder detection based on feature vectors extracted from VBM analysis of structural MRI.

    PubMed

    Savio, A; García-Sebastián, M T; Chyzyk, D; Hernandez, C; Graña, M; Sistiaga, A; López de Munain, A; Villanúa, J

    2011-08-01

    Dementia is a growing concern due to the aging of Western societies. Non-invasive detection is therefore a high-priority research endeavor. In this paper we report results of classification systems applied to feature vectors obtained by a feature extraction method computed on structural magnetic resonance imaging (sMRI) volumes for the detection of two neurological disorders with cognitive impairment: myotonic dystrophy of type 1 (MD1) and Alzheimer's disease (AD). The feature extraction process is based on the voxel clusters detected by voxel-based morphometry (VBM) analysis of sMRI over a set of patient and control subjects. This feature extraction process is specific to each disease and is grounded in the findings obtained by medical experts. The 10-fold cross-validation results of several statistical and neural network based classification algorithms trained and tested on these features show high specificity and moderate sensitivity of the classifiers, suggesting that the approach is better suited to rejecting than to detecting early stages of the diseases. PMID:21621760

  18. A Novel Hyperspectral Feature-Extraction Algorithm Based on Waveform Resolution for Raisin Classification.

    PubMed

    Zhao, Yun; Xu, Xing; He, Yong

    2015-12-01

    Near-infrared hyperspectral imaging technology was adopted in this study to discriminate among varieties of raisins produced in the Xinjiang Uygur Autonomous Region, China. Eight varieties of raisins were used in the research, and the wavelengths of the hyperspectral images ranged from 900 to 1700 nm. A novel waveform resolution method is proposed to reduce the hyperspectral data and extract the features. The waveform-resolution method compresses the original hyperspectral data for one pixel into five amplitudes, five frequencies, and five phases, for 15 feature values in all. A neural network was established with three layers (eight neurons in the first layer, three neurons in the hidden layer, and one neuron in the output layer) based on the 15 features to determine the varieties of raisins. The accuracies of the model, presented as sensitivity, precision, and specificity for the testing data set, are 93.38, 81.92, and 99.06%. This is higher than the accuracies of a model using a conventional principal component analysis feature-extraction method combined with a neural network, which has a sensitivity of 82.13%, precision of 82.22%, and specificity of 97.45%. The results indicate that the proposed waveform-resolution feature-extraction method combined with hyperspectral imaging technology is an efficient method for determining varieties of raisins. PMID:26555391
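
    One plausible reading of the waveform-resolution idea, treating each pixel's reflectance curve as a waveform and keeping its five strongest Fourier components, is sketched below. This is an assumption-laden illustration, not the authors' exact algorithm.

        import numpy as np

        def waveform_features(curve, n=5):
            # Compress one pixel's spectral curve into n amplitudes,
            # n frequencies, and n phases: 3n = 15 feature values for n = 5.
            coeffs = np.fft.rfft(curve - np.mean(curve))
            freqs = np.fft.rfftfreq(len(curve))
            top = np.argsort(np.abs(coeffs))[-n:]     # n strongest components
            amps = np.abs(coeffs[top]) / len(curve)
            phases = np.angle(coeffs[top])
            return np.concatenate([amps, freqs[top], phases])

        pixel_curve = np.random.rand(256)   # placeholder 900-1700 nm curve
        print(waveform_features(pixel_curve).shape)   # (15,)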

  19. A Feature Extraction Method for Fault Classification of Rolling Bearing based on PCA

    NASA Astrophysics Data System (ADS)

    Wang, Fengtao; Sun, Jian; Yan, Dawen; Zhang, Shenghua; Cui, Liming; Xu, Yong

    2015-07-01

    This paper discusses fault feature selection using principal component analysis (PCA) for bearing fault classification. Multiple features selected from the time-frequency domain parameters of vibration signals are analyzed. First, time-domain statistical features such as root mean square and kurtosis are calculated; meanwhile, frequency statistical features are extracted from the frequency spectrum obtained by Fourier and Hilbert transformation. Then PCA is used to reduce the dimension of the feature vectors drawn from the raw vibration signals, which improves the real-time performance and accuracy of the fault diagnosis. Finally, a fuzzy C-means (FCM) model is established to implement the diagnosis of rolling bearing faults. Practical rolling bearing experiment data are used to verify the effectiveness of the proposed method.
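
    A minimal scikit-learn sketch of the dimensionality-reduction stage follows. Because fuzzy C-means does not ship with scikit-learn, plain k-means is substituted as a hard-assignment stand-in, and the feature matrix is a random placeholder.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        # rows: per-signal feature vectors (RMS, kurtosis, band energies, ...)
        X = np.random.randn(120, 12)   # placeholder feature matrix

        pca = PCA(n_components=3)      # reduce dimension before clustering
        Z = pca.fit_transform(X)

        # FCM stand-in: hard k-means clustering of the reduced vectors
        labels = KMeans(n_clusters=4, n_init=10).fit_predict(Z)
        print(pca.explained_variance_ratio_, labels[:10])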

  20. Single trial EEG classification applied to a face recognition experiment using different feature extraction methods.

    PubMed

    Li, Yudu; Ma, Sen; Hu, Zhongze; Chen, Jiansheng; Su, Guangda; Dou, Weibei

    2015-08-01

    Research on brain-machine interfaces (BMI) has developed rapidly in recent years. Numerous feature extraction methods have been successfully applied to electroencephalogram (EEG) classification in various experiments. However, little effort has been spent on EEG-based BMI systems addressing the cognition of human face familiarity. In this work, we implemented and compared the classification performance of four common feature extraction methods, namely common spatial pattern, principal component analysis, wavelet transform and interval features. High-resolution EEG signals were collected from fifteen healthy subjects stimulated by equal numbers of familiar and novel faces. Principal component analysis outperforms the other methods, with an average classification accuracy reaching 94.2%, opening the way to possible real-life applications. Our findings may thereby contribute to BMI systems for face recognition. PMID:26737964

  1. SU-E-J-245: Sensitivity of FDG PET Feature Analysis in Multi-Plane Vs. Single-Plane Extraction

    SciTech Connect

    Harmon, S; Jeraj, R; Galavis, P

    2015-06-15

    Purpose: The sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, the sensitivity of features to reconstruction parameters was calculated as the percent difference relative to the average value across reconstructions. Correlations between feature values were compared for 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs. 3D (Wilcoxon, α<0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. The correlation between feature values extracted in 2D and in 3D was poor (R<0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R|>0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ=6%) than in 3D (σ<1%) extraction. Conclusion: Sensitivity
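
    For readers unfamiliar with co-occurrence texture features, the sketch below computes 2D GLCM features on a single axial slice with scikit-image (graycomatrix; spelled greycomatrix in older releases). A 3D variant would instead accumulate voxel pairs over the 13 unique 3D directions. The 32-level quantization and the random slice are illustrative assumptions.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        # placeholder for one quantized axial slice of a PET tumor volume
        slice2d = np.random.randint(0, 32, size=(48, 48), dtype=np.uint8)

        # 2D co-occurrence matrices along one axial plane
        glcm = graycomatrix(slice2d, distances=[1],
                            angles=[0, np.pi / 2], levels=32,
                            symmetric=True, normed=True)
        print(graycoprops(glcm, "contrast").mean(),
              graycoprops(glcm, "homogeneity").mean())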

  2. Distortion tolerant pattern recognition based on self-organizing feature extraction.

    PubMed

    Lampinen, J; Oja, E

    1995-01-01

    A generic, modular, neural network-based feature extraction and pattern classification system is proposed for finding essentially two-dimensional objects or object parts from digital images in a distortion tolerant manner. The distortion tolerance is built up gradually by successive blocks in a pipeline architecture. The system consists of only feedforward neural networks, allowing efficient parallel implementation. The most time- and data-consuming stage, learning the relevant features, is wholly unsupervised and can be made off-line. The subsequent supervised stage, where the object classes are learned, is simple and fast. The feature extraction is based on distortion tolerant Gabor transformations, followed by minimum distortion clustering by multilayer self-organizing maps. Due to the unsupervised learning strategy, there is no need for preclassified training samples or other explicit selection of training patterns, which allows a large amount of training material to be used at the early stages. A supervised, one-layer subspace network classifier on top of the feature extractor is used for object labeling. The system has been trained with natural images giving the relevant features, and human faces and their parts have been used as the object classes for testing. The current experiments indicate that the feature space has sufficient resolution power for a moderate number of classes with rather strong distortions. PMID:18263341
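
    The Gabor stage of such a system can be illustrated with a small filter bank. The sketch below uses scikit-image's gabor filter and keeps one mean-magnitude response per filter, a crude stand-in for the paper's feature maps; the frequencies and orientation count are arbitrary choices.

        import numpy as np
        from skimage.filters import gabor

        def gabor_features(img, frequencies=(0.1, 0.2, 0.4), n_orient=4):
            # respond to the image with a bank of Gabor filters and keep
            # the mean response magnitude of each filter
            feats = []
            for f in frequencies:
                for k in range(n_orient):
                    real, imag = gabor(img, frequency=f,
                                       theta=k * np.pi / n_orient)
                    feats.append(np.mean(np.hypot(real, imag)))
            return np.array(feats)

        patch = np.random.rand(64, 64)      # placeholder face-part patch
        print(gabor_features(patch).shape)  # 12 filter responses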

  3. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    PubMed

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database. PMID:25680220

  4. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction using LiDAR data.

  5. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and features within proximity to the roads as input for evaluating and prioritizing new or improvement road projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and needs multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road networks together with estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent. 90.6% of the 107 existing buildings were correctly identified, with a 31% false detection rate.

  6. Dynamic-Feature Extraction, Attribution and Reconstruction (DEAR) Method for Power System Model Reduction

    SciTech Connect

    Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.

    2014-09-04

    In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than traditional coherency aggregation methods.
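
    The feature-extraction step can be illustrated with a plain SVD on placeholder response data, as below: the leading left singular vectors capture the dominant shared dynamics, and the column loadings point at candidate characteristic generators. The 99% energy cutoff is an assumed threshold, not a value from the paper.

        import numpy as np

        # placeholder post-disturbance responses, one column per generator
        t = np.linspace(0, 10, 500)
        Y = np.column_stack([np.sin(1.2 * t + p)
                             for p in np.linspace(0, 1, 8)])

        # dominant dynamic features via SVD of the centered measurements
        U, s, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        n_feat = int(np.searchsorted(energy, 0.99)) + 1

        # the generator whose loading dominates each feature is "characteristic"
        characteristic = np.argmax(np.abs(Vt[:n_feat]), axis=1)
        print(n_feat, characteristic)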

  7. Automatic feature extraction for panchromatic Mars Global Surveyor Mars Orbiter camera imagery

    NASA Astrophysics Data System (ADS)

    Plesko, Catherine S.; Brumby, Steven P.; Leovy, Conway B.

    2002-01-01

    The Mars Global Surveyor Mars Orbiter Camera (MOC) has produced tens of thousands of images, which contain a wealth of information about the surface of the planet Mars. Current manual analysis techniques are inadequate for the comprehensive analysis of such a large dataset, while development of handwritten feature extraction algorithms is laborious and expensive. This project investigates application of an automatic feature extraction approach to analysis of the MOC narrow angle panchromatic dataset, using an evolutionary computation software package called GENIE. GENIE uses a genetic algorithm to assemble feature extraction tools from low-level image operators. Each generated tool is evaluated against training data provided by the user. The best tools in each generation are allowed to 'reproduce' to produce the next generation, and the population of tools is permitted to evolve until it converges to a solution or reaches a level of performance specified by the user. Craters are one of the most scientifically interesting and most numerous features in the MOC data set, and present a wide range of shapes at many spatial scales. We now describe preliminary results on development of a crater finder algorithm using the GENIE software.

  8. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    PubMed Central

    Wang, Kun-Ching

    2014-01-01

    In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB), to evaluate the performance across corpora. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions beyond what the pitch and formant tracks convey. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech. PMID:25207869

  10. Classification of osteosarcoma T-ray responses using adaptive and rational wavelets for feature extraction

    NASA Astrophysics Data System (ADS)

    Ng, Desmond; Wong, Fu Tian; Withayachumnankul, Withawat; Findlay, David; Ferguson, Bradley; Abbott, Derek

    2007-12-01

    In this work we investigate new feature extraction algorithms on the T-ray response of normal human bone cells and human osteosarcoma cells. One of the most promising feature extraction methods is the Discrete Wavelet Transform (DWT). However, the classification accuracy is dependent on the specific wavelet basis chosen. Adaptive wavelets circumvent this problem by gradually adapting to the signal to retain optimum discriminatory information while removing redundant information. Using adaptive wavelets, a classification accuracy of 96.88% is obtained with a quadratic Bayesian classifier, based on 25 features. In addition, the potential of using rational wavelets rather than the standard dyadic wavelets in classification is explored. The advantage they have over dyadic wavelets is that they allow a better adaptation of the scale factor according to the signal. An accuracy of 91.15% is obtained through rational wavelets with 12 coefficients using a Support Vector Machine (SVM) as the classifier. These results highlight adaptive and rational wavelets as efficient feature extraction methods and the enormous potential of T-rays in cancer detection.
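
    As a fixed-wavelet baseline against which the adaptive and rational wavelets above can be judged, a conventional DWT feature recipe looks like the sketch below (PyWavelets, db4, per-subband log-energies); the wavelet, decomposition level, and placeholder waveform are illustrative choices.

        import numpy as np
        import pywt

        def dwt_energy_features(signal, wavelet="db4", level=4):
            # log-energy of each subband of a discrete wavelet decomposition
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

        tray_response = np.random.randn(512)   # placeholder T-ray waveform
        print(dwt_energy_features(tray_response))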

  11. Statistical Methods for Proteomic Biomarker Discovery based on Feature Extraction or Functional Modeling Approaches*

    PubMed Central

    Morris, Jeffrey S.

    2012-01-01

    In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies, and summarizes contributions that I, along with numerous collaborators, have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis. While the specific methods

  13. Efficient Markov feature extraction method for image splicing detection using maximization and threshold expansion

    NASA Astrophysics Data System (ADS)

    Han, Jong Goo; Park, Tae Hee; Moon, Yong Ho; Eom, Il Kyu

    2016-03-01

    We propose an efficient Markov feature extraction method for color image splicing detection. The maximum value among the various directional difference values in the discrete cosine transform domain of the three color channels is used to choose the Markov features. We show that the discriminability for splicing detection is increased through the maximization process, from the point of view of the Kullback-Leibler divergence. In addition, we present a threshold expansion and Markov state decomposition algorithm. Threshold expansion reduces the information loss caused by the coefficient thresholding that is used to restrict the number of Markov features. To compensate for the increased number of features due to the threshold expansion, we propose an even-odd Markov state decomposition algorithm. A fixed number of features, regardless of the difference directions, color channels and test datasets, is used in the proposed algorithm. We introduce three kinds of Markov feature vectors. The number of Markov features for splicing detection used in this paper is relatively small compared to conventional methods, and our method does not require additional feature reduction algorithms. Through experimental simulations, we demonstrate that the proposed method achieves high performance in splicing detection.

  14. Water Extraction in High Resolution Remote Sensing Image Based on Hierarchical Spectrum and Shape Features

    NASA Astrophysics Data System (ADS)

    Li, Bangyu; Zhang, Hui; Xu, Fanjiang

    2014-03-01

    This paper addresses the problem of water extraction from high resolution remote sensing images (including R, G, B, and NIR channels), which has drawn considerable attention in recent years. Previous work on water extraction mainly faced two difficulties. 1) It is difficult to obtain an accurate position of the water boundary when using low resolution images. 2) Like all other image-based object classification problems, the phenomena of "different objects, same image" and "different images, same object" affect water extraction. Shadows of elevated objects (e.g. buildings, bridges, towers and trees) scattered in the remote sensing image are typical noise objects for water extraction. In many cases, it is difficult to discriminate between water and shadow in a remote sensing image, especially in urban regions. We propose a water extraction method with two hierarchies: the statistical feature of spectral characteristics based on image segmentation, and the shape feature based on shadow removal. In the first hierarchy, the Statistical Region Merging (SRM) algorithm is adopted for image segmentation. The SRM includes two key steps: sorting adjacent regions according to a pre-ascertained sort function, and merging adjacent regions based on a pre-ascertained merging predicate. The sorting step is done once during the whole process without considering changes caused by merging, which may cause imprecise results. Therefore, we modify the SRM with dynamic sort processing, which repeats the sorting step when merging causes large changes in adjacent regions. To achieve robust segmentation, we apply the merging with six features (the four remote sensing image bands, the Normalized Difference Water Index (NDWI), and the Normalized Saturation-value Difference Index (NSVDI)). All these features contribute to segmenting the image into regions of objects. NDWI and NSVDI discriminate between water and some shadows. In the second hierarchy, we adopt
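
    The two spectral indices named above have standard published definitions, sketched below for float reflectance or HSV bands; the forms shown are the commonly used ones, assumed here since the abstract does not spell them out.

        import numpy as np

        def ndwi(green, nir, eps=1e-6):
            # Normalized Difference Water Index: high over water, which
            # reflects green light but strongly absorbs near-infrared.
            return (green - nir) / (green + nir + eps)

        def nsvdi(saturation, value, eps=1e-6):
            # Normalized Saturation-Value Difference Index from the HSV
            # transform; helps separate shadow from water.
            return (saturation - value) / (saturation + value + eps)

        g, n = np.random.rand(4, 4), np.random.rand(4, 4)
        print(ndwi(g, n))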

  15. Human action classification using adaptive key frame interval for feature extraction

    NASA Astrophysics Data System (ADS)

    Lertniphonphan, Kanokphan; Aramvith, Supavadee; Chalidabhongse, Thanarat H.

    2016-01-01

    Human action classification based on adaptive key frame interval (AKFI) feature extraction is presented. Since human movement periods differ, action intervals that contain intensive and compact motion information are considered in this work. We specify the AKFI by analyzing the amount of motion through time. A key frame is defined as a local minimum of interframe motion, computed by frame differencing between consecutive frames. Once key frames are detected, the features within a segmented period are encoded by an adaptive motion history image and a key pose history image. The action representation consists of the local orientation histogram of the features during the AKFI. The experimental results on the Weizmann, KTH, and UT-Interaction datasets demonstrate that the features can effectively classify actions and can classify irregular cases of walking better than other well-known algorithms.

  16. The effects of compressive sensing on extracted features from tri-axial swallowing accelerometry signals

    NASA Astrophysics Data System (ADS)

    Sejdić, Ervin; Movahedi, Faezeh; Zhang, Zhenwei; Kurosu, Atsuko; Coyle, James L.

    2016-05-01

    Acquiring swallowing accelerometry signals using a compressive sensing scheme may be a desirable approach for monitoring swallowing safety for longer periods of time. However, it needs to be ensured that signal characteristics can be recovered accurately from compressed samples. In this paper, we considered this issue by examining the effects of the number of acquired compressed samples on the calculated swallowing accelerometry signal features. We used tri-axial swallowing accelerometry signals acquired from seventeen stroke patients (106 swallows in total). From the acquired signals, we extracted typically considered signal features from the time, frequency and time-frequency domains. Next, we compared these features from the original signals (sampled using traditional sampling schemes) and compressively sampled signals. Our results show that we can obtain accurate estimates of signal features even by using only a third of the original samples.

  17. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for the automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording the physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is proven by emotion recognition results.

  18. Topology-based Simplification for Feature Extraction from 3D Scalar Fields

    SciTech Connect

    Gyulassy, A; Natarajan, V; Pascucci, V; Bremer, P; Hamann, B

    2005-10-13

    This paper describes a topological approach for simplifying continuous functions defined on volumetric domains. We present a combinatorial algorithm that simplifies the Morse-Smale complex by repeated application of two atomic operations that remove pairs of critical points. The Morse-Smale complex is a topological data structure that provides a compact representation of gradient flows between critical points of a function. Critical points paired by the Morse-Smale complex identify topological features and their importance. The simplification procedure leaves important critical points untouched, and is therefore useful for extracting desirable features. We also present a visualization of the simplified topology.

  19. Extracting the driving force from ozone data using slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, Geli; Yang, Peicai; Zhou, Xiuji

    2016-05-01

    Slow feature analysis (SFA) is a recommended technique for extracting slowly varying features from a quickly varying signal. In this work, we apply SFA to total ozone data from Arosa, Switzerland. The results show that the signal of volcanic eruptions can be found in the driving force, and wavelet analysis of this driving force shows that there are two main dominant scales, which may be connected with the effects of climate modes such as the North Atlantic Oscillation (NAO) and of solar activity. The findings of this study represent a contribution to our understanding of causality from observed climate data.
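
    A minimal linear SFA, assuming the scalar series is first embedded in lagged coordinates, fits in a few lines of NumPy; practical applications typically add a nonlinear (e.g., polynomial) expansion before the whitening step.

        import numpy as np

        def linear_sfa(x, n_features=1):
            # whiten the input, then take unit-variance directions whose
            # time derivative has the smallest variance (the slow features)
            x = x - x.mean(axis=0)
            d, E = np.linalg.eigh(np.cov(x, rowvar=False))
            z = x @ (E / np.sqrt(d))           # whitened signals
            dz = np.diff(z, axis=0)            # discrete time derivative
            d2, E2 = np.linalg.eigh(np.cov(dz, rowvar=False))
            return z @ E2[:, :n_features]      # slowest features first

        # placeholder series embedded in 5 lagged coordinates
        series = np.cumsum(np.random.randn(2000))
        lagged = np.column_stack([series[i:len(series) - 5 + i]
                                  for i in range(5)])
        driving_force = linear_sfa(lagged)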

  20. Learning How to Extract Rotation-Invariant and Scale-Invariant Features from Texture Images

    NASA Astrophysics Data System (ADS)

    Montoya-Zegarra, Javier A.; Papa, João Paulo; Leite, Neucimar J.; da Silva Torres, Ricardo; Falcão, Alexandre

    2008-12-01

    Learning how to extract texture features from noncontrolled environments characterized by distorted images is a still-open task. By using a new rotation-invariant and scale-invariant image descriptor based on steerable pyramid decomposition, and a novel multiclass recognition method based on optimum-path forest, a new texture recognition system is proposed. By combining the discriminating power of our image descriptor and classifier, our system uses small-size feature vectors to characterize texture images without compromising overall classification rates. State-of-the-art recognition results are further presented on the Brodatz data set. High classification rates demonstrate the superiority of the proposed system.

  1. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence intervals of approximately ±0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were therefore compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional

  2. Crystal structure of a lipoxygenase from Cyanothece sp. may reveal novel features for substrate acquisition.

    PubMed

    Newie, Julia; Andreou, Alexandra; Neumann, Piotr; Einsle, Oliver; Feussner, Ivo; Ficner, Ralf

    2016-02-01

    In eukaryotes, oxidized PUFAs, so-called oxylipins, are vital signaling molecules. The first step in their biosynthesis may be catalyzed by a lipoxygenase (LOX), which forms hydroperoxides by introducing dioxygen into PUFAs. Here we characterized CspLOX1, a phylogenetically distant LOX family member from Cyanothece sp. PCC 8801 and determined its crystal structure. In addition to the classical two domains found in plant, animal, and coral LOXs, we identified an N-terminal helical extension, reminiscent of the long α-helical insertion in Pseudomonas aeruginosa LOX. In liposome flotation studies, this helical extension, rather than the β-barrel domain, was crucial for a membrane binding function. Additionally, CspLOX1 could oxygenate 1,2-diarachidonyl-sn-glycero-3-phosphocholine, suggesting that the enzyme may act directly on membranes and that fatty acids bind to the active site in a tail-first orientation. This binding mode is further supported by the fact that CspLOX1 catalyzed oxygenation at the n-10 position of both linoleic and arachidonic acid, resulting in 9R- and 11R-hydroperoxides, respectively. Together these results reveal unifying structural features of LOXs and their function. While the core of the active site is important for lipoxygenation and thus highly conserved, peripheral domains functioning in membrane and substrate binding are more variable. PMID:26667668

  3. Geomorphological feature extraction from a digital elevation model through fuzzy knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Argialas, Demetre P.; Tzotsos, Angelos

    2003-03-01

    The objective of this research was the investigation of advanced image analysis methods for geomorphological mapping. The methods employed included multiresolution segmentation of the GTOPO30 Digital Elevation Model (DEM) and fuzzy knowledge-based classification of the segmented DEM into three geomorphological classes: mountain ranges, piedmonts and basins. The study area was a segment of the Basin and Range Physiographic Province in Nevada, USA. The implementation was made in eCognition. In particular, the segmentation of GTOPO30 resulted in primitive objects. The knowledge-based classification of the primitive objects, based on their elevation and shape parameters, resulted in the extraction of the geomorphological features. The resulting boundaries were found satisfactory in comparison with those of previous studies. It is concluded that geomorphological feature extraction can be carried out through fuzzy knowledge-based classification as implemented in eCognition.

  4. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IOT) is a kind of intelligent network which can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, research is done into visual feature extraction and the establishment of visual tags for human faces based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt a support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images, and the results show that the proposed algorithm has good performance and can conveniently display the visual tags of objects.
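
    A hedged sketch of this PCA-plus-SVM pipeline is possible with scikit-learn, whose Olivetti faces dataset is the ORL database mentioned above (40 subjects, 10 images each; downloaded on first use). The component count and SVM hyperparameters below are illustrative, not the authors' settings.

        from sklearn.datasets import fetch_olivetti_faces
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        faces = fetch_olivetti_faces()     # the ORL face database
        X_tr, X_te, y_tr, y_te = train_test_split(
            faces.data, faces.target, test_size=0.25,
            stratify=faces.target, random_state=0)

        # PCA ("eigenfaces") feature extraction feeding an SVM classifier;
        # the predicted identity would back the record's visual tag
        model = make_pipeline(PCA(n_components=50, whiten=True),
                              SVC(kernel="rbf", C=10, gamma="scale"))
        model.fit(X_tr, y_tr)
        print("tag accuracy:", model.score(X_te, y_te))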

  5. SAR Image Segmentation Using Voronoi Tessellation and Bayesian Inference Applied to Dark Spot Feature Extraction

    PubMed Central

    Zhao, Quanhua; Li, Yu; Liu, Zhenggang

    2013-01-01

    This paper presents a new segmentation-based algorithm for oil spill feature extraction from Synthetic Aperture Radar (SAR) intensity images. The proposed algorithm combines a Voronoi tessellation, Bayesian inference and Markov Chain Monte Carlo (MCMC) scheme. The shape and distribution features of dark spots can be obtained by segmenting a scene covering an oil spill and/or look-alikes into two homogenous regions: dark spots and their marine surroundings. The proposed algorithm is applied simultaneously to several real SAR intensity images and simulated SAR intensity images which are used for accurate evaluation. The results show that the proposed algorithm can extract the shape and distribution parameters of dark spot areas, which are useful for recognizing oil spills in a further classification stage. PMID:24233074

  6. Complex Biological Event Extraction from Full Text using Signatures of Linguistic and Semantic Features

    SciTech Connect

    McGrath, Liam R.; Domico, Kelly O.; Corley, Courtney D.; Webb-Robertson, Bobbie-Jo M.

    2011-06-24

    Building on technical advances from the BioNLP 2009 Shared Task Challenge, the 2011 challenge sets forth to generalize techniques to other complex biological event extraction tasks. In this paper, we present the implementation and evaluation of a signature-based machine-learning technique to predict events from full texts of infectious disease documents. Specifically, our approach uses novel signatures composed of traditional linguistic features and semantic knowledge to predict event triggers and their candidate arguments. Using a leave-one-out analysis, we report the contribution of linguistic and shallow semantic features to trigger prediction and candidate argument extraction. Lastly, we examine the evaluations and posit causes of errors for the infectious disease track subtasks.

  7. Constructing New Biorthogonal Wavelet Type which Matched for Extracting the Iris Image Features

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.; Suhardjo; Susanto, Adhi

    2013-04-01

    Previous research has been carried out on obtaining new types of wavelets. For iris recognition using orthogonal or biorthogonal wavelets, the Haar filter had been found the most suitable for recognizing iris images. However, a new wavelet should be designed that is best matched to extracting the iris image features, so that it can easily be applied for identification, recognition, or authentication purposes. In this research, a new biorthogonal wavelet was designed based on the Haar filter properties and Haar's orthogonality conditions. As a result, a new biorthogonal 5/7 filter-type wavelet was obtained which performs better than other types of wavelets, including Haar, in extracting the iris image features, based on its mean-squared error (MSE) and Euclidean distance parameters.

  8. Automatic geomorphic feature extraction from lidar in flat and engineered landscapes

    NASA Astrophysics Data System (ADS)

    Passalacqua, Paola; Belmont, Patrick; Foufoula-Georgiou, Efi

    2012-03-01

    High-resolution topographic data derived from light detection and ranging (lidar) technology enables detailed geomorphic observations to be made on spatially extensive areas in a way that was previously not possible. Availability of this data provides new opportunities to study the spatial organization of landscapes and channel network features, increase the accuracy of environmental transport models, and inform decisions for targeting conservation practices. However, with the opportunity of increased resolution topographic data come formidable challenges in terms of automatic geomorphic feature extraction, analysis, and interpretation. Low-relief landscapes are particularly challenging because topographic gradients are low, and in many places both the landscape and the channel network have been heavily modified by humans. This is especially true for agricultural landscapes, which dominate the midwestern United States. The goal of this work is to address several issues related to feature extraction in flat lands by using GeoNet, a recently developed method based on nonlinear multiscale filtering and geodesic optimization for automatic extraction of geomorphic features (channel heads and channel networks) from high-resolution topographic data. Here we test the ability of GeoNet to extract channel networks in flat and human-impacted landscapes using 3 m lidar data for the Le Sueur River Basin, a 2880 km2 subbasin of the Minnesota River Basin. We propose a curvature analysis to differentiate between channels and manmade structures that are not part of the river network, such as roads and bridges. We document that Laplacian curvature more effectively distinguishes channels in flat, human-impacted landscapes compared with geometric curvature. In addition, we develop a method for performing automated channel morphometric analysis including extraction of cross sections, detection of bank locations, and identification of geomorphic bankfull water surface elevation.
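
    A hedged sketch of the curvature computation discussed above: Laplacian curvature as the sum of the second derivatives of elevation, evaluated by finite differences on a toy grid (GeoNet itself applies nonlinear multiscale filtering first, and the quantile threshold below is an assumption):

    ```python
    import numpy as np

    def laplacian_curvature(dem, cell_size=3.0):
        """Second-derivative (Laplacian) curvature of a DEM grid."""
        dzdy, dzdx = np.gradient(dem, cell_size)
        d2zdy2 = np.gradient(dzdy, cell_size, axis=0)
        d2zdx2 = np.gradient(dzdx, cell_size, axis=1)
        return d2zdx2 + d2zdy2

    dem = np.random.rand(100, 100)       # placeholder for a 3 m lidar DEM tile
    curv = laplacian_curvature(dem)
    channel_candidates = curv > np.quantile(curv, 0.95)  # assumed threshold
    ```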

  9. A Study of Various Feature Extraction Methods on a Motor Imagery Based Brain Computer Interface System

    PubMed Central

    Resalat, Seyed Navid; Saba, Valiallah

    2016-01-01

    Introduction: Brain Computer Interface (BCI) systems based on Movement Imagination (MI) have been widely used in recent decades. Separate feature extraction methods are employed on MI data sets and classified in Virtual Reality (VR) environments for real-time applications. Methods: This study applied a wide variety of features to the recorded data, using a Linear Discriminant Analysis (LDA) classifier to select the best feature sets in the offline mode. The data set was recorded in 3-class tasks of left hand, right hand, and foot motor imagery. Results: The experimental results showed that the Auto-Regressive (AR), Mean Absolute Value (MAV), and Band Power (BP) features had higher accuracy values (about 75%) than the other features. Discussion: These features were selected for the designed real-time navigation. The corresponding results revealed the subject-specific nature of the MI-based BCI system; however, the Power Spectral Density (PSD) based α-BP feature had the highest averaged accuracy. PMID:27303595
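
    For illustration, a Band Power feature of the kind reported above can be computed with Welch's method; the sampling rate, trial length, and mu/beta bands below are assumptions, not the study's exact settings:

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(eeg, fs, band):
        """Mean PSD of a single-channel EEG segment inside a frequency band."""
        freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return float(np.mean(psd[mask]))

    fs = 256
    trial = np.random.randn(fs * 4)      # one simulated 4 s motor-imagery trial
    features = [band_power(trial, fs, b) for b in [(8, 12), (13, 30)]]
    print("BP features (mu, beta):", features)
    ```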

  10. Multi range spectral feature fitting for hyperspectral imagery in extracting oilseed rape planting area

    NASA Astrophysics Data System (ADS)

    Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin

    2013-12-01

    Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, SFF does not secure higher accuracy in extracting image information in all circumstances. Multi range spectral feature fitting (MRSFF) from ENVI software allows users to focus on the spectral features of interest to yield better performance; the spectral wavelength ranges and their corresponding weights must therefore be determined. The purpose of this article is to demonstrate the performance of MRSFF in oilseed rape planting area extraction. A practical method for defining the weighted values, the variance coefficient weight method, was proposed to set up the criterion. Oilseed rape field canopy spectra from the whole growth stage were collected to investigate their phenological variation; oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples for analyzing the oilseed rape field. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated from the field-measured spectra of the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the entirety of the spectral profile. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). The MRSFF yielded a robust, convincing result and, therefore, may further the use of hyperspectral imagery in precision agriculture.

  11. Modular implementation of feature extraction and matching algorithms for photogrammetric stereo imagery

    NASA Astrophysics Data System (ADS)

    Kershaw, James; Hamlyn, Garry

    1994-06-01

    This paper describes the implementation of algorithms for automatically extracting and matching features in stereo pairs of images. The implementation has been designed to be as modular as possible to allow different algorithms for each stage in the matching process to be combined in the most appropriate manner for each particular problem. The modules have been implemented in the AVS environment but are designed to be portable to any platform. This work has been undertaken as part of task DEF 93/1 63 'Intelligence Analysis of Imagery', and forms part of ITD's contribution to the Visual Processing research program in the Centre for Sensor System and Information Processing. A major aim of both the task and the research program is to produce software to assist intelligence analysts in extracting three dimensional shape from imagery: the algorithms and software described here will form the first part of a module for automatically extracting depth information from stereo image pairs.

  12. Feature extraction and pattern classification of colorectal polyps in colonoscopic imaging.

    PubMed

    Fu, Jachih J C; Yu, Ya-Wen; Lin, Hong-Mau; Chai, Jyh-Wen; Chen, Clayton Chi-Chang

    2014-06-01

    A computer-aided diagnostic system for colonoscopic imaging has been developed to classify colorectal polyps by type. The modules of the proposed system include image enhancement, feature extraction, feature selection and polyp classification. Three hundred sixty-five images (214 with hyperplastic polyps and 151 with adenomatous polyps) were collected from a branch of a medical center in central Taiwan. The raw images were enhanced by the principal component transform (PCT). The features of texture analysis, spatial domain and spectral domain were extracted from the first component of the PCT. Sequential forward selection (SFS) and sequential floating forward selection (SFFS) were used to select the input feature vectors for classification. Support vector machines (SVMs) were employed to classify the colorectal polyps by type. The classification performance was measured by the Az values of the Receiver Operating Characteristic curve. For all 180 features used as input vectors, the test data set yielded Az values of 88.7%. The Az value was increased by 2.6% (from 88.7% to 91.3%) and 4.4% (from 88.7% to 93.1%) for the features selected by the SFS and the SFFS, respectively. The SFS and the SFFS reduced the dimension of the input vector by 57.2% and 73.8%, respectively. The SFFS outperformed the SFS in both the reduction of the dimension of the feature vector and the classification performance. When the colonoscopic images were visually inspected by experienced physicians, the accuracy of detecting polyps by types was around 85%. The accuracy of the SFFS with the SVM classifier reached 96%. The classification performance of the proposed system outperformed the conventional visual inspection approach. Therefore, the proposed computer-aided system could be used to improve the quality of colorectal polyp diagnosis. PMID:24495469
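
    The SFS step can be sketched with scikit-learn's SequentialFeatureSelector, assuming a precomputed 365 × 180 feature matrix; note that this selector implements plain forward selection, while the floating SFFS variant would require another library such as mlxtend:

    ```python
    import numpy as np
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(365, 180))      # placeholder for the 180 polyp features
    y = rng.integers(0, 2, size=365)     # hyperplastic (0) vs adenomatous (1)

    # Greedily add the feature that most improves cross-validated accuracy.
    sfs = SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=10,
                                    direction="forward", cv=5)
    sfs.fit(X, y)
    X_selected = sfs.transform(X)        # reduced input vector for the SVM
    print("kept", X_selected.shape[1], "of", X.shape[1], "features")
    ```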

  13. Bone age assessment in young children using automatic carpal bone feature extraction and support vector regression.

    PubMed

    Somkantha, Krit; Theera-Umpon, Nipon; Auephanwiriyakul, Sansanee

    2011-12-01

    Boundary extraction of carpal bone images is a critical operation in an automatic bone age assessment system, since the contrast between the bony structure and the soft tissue is very poor. In this paper, we present an edge following technique for boundary extraction in carpal bone images and apply it to assess bone age in young children. Our proposed technique can detect the boundaries of carpal bones in X-ray images by using the information from the vector image model and the edge map. Feature analysis of the carpal bones can reveal important information for bone age assessment. Five features for bone age assessment are calculated from the boundary extraction result of each carpal bone. All features are taken as input into the support vector regression (SVR) that assesses the bone age. We compare the SVR with neural network regression (NNR). We use 180 images of carpal bones from a digital hand atlas to assess the bone age of young children from 0 to 6 years old. Leave-one-out cross validation is used for testing the efficiency of the techniques. The opinions of the skilled radiologists provided in the atlas are used as the ground truth in bone age assessment. The SVR is able to provide more accurate bone age assessment results than the NNR, and the experimental results from the SVR are very close to the bone age assessment by skilled radiologists. PMID:21347746

  14. Fine-Grain Feature Extraction from Malware's Scan Behavior Based on Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Eto, Masashi; Sonoda, Kotaro; Inoue, Daisuke; Yoshioka, Katsunari; Nakao, Koji

    Network monitoring systems that detect and analyze malicious activities, as well as respond to them, are becoming increasingly important. As malware such as worms, viruses, and bots can inflict significant damage on both infrastructure and end users, technologies for identifying such propagating malware are in great demand. In large-scale darknet monitoring operations, we can see that malware exhibits various kinds of scan patterns when choosing destination IP addresses. Since many of those oscillations seem to have a natural periodicity, as if they were signal waveforms, we considered applying a spectrum analysis methodology to extract features of malware. With a focus on such scan patterns, this paper proposes a novel concept of malware feature extraction and a distinct analysis method named “SPectrum Analysis for Distinction and Extraction of malware features (SPADE)”. Through several evaluations using real scan traffic, we show that SPADE has the significant advantage of recognizing the similarities and dissimilarities between the same and different types of malware.
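
    A toy version of the spectrum-analysis idea behind SPADE: treat the per-interval count of scanned destinations as a waveform and locate its dominant periodicity in the FFT. The signal model here is invented for illustration:

    ```python
    import numpy as np

    fs = 1.0                             # one sample per second of darknet traffic
    t = np.arange(600)
    # Toy scan-rate waveform: a 30 s periodic scanner plus uniform noise.
    scan_counts = 5 + 3 * np.sin(2 * np.pi * t / 30) + np.random.rand(600)

    spectrum = np.abs(np.fft.rfft(scan_counts - scan_counts.mean()))
    freqs = np.fft.rfftfreq(len(scan_counts), d=1 / fs)
    peak = freqs[np.argmax(spectrum)]
    print(f"dominant scan period: {1 / peak:.1f} s")  # ~30 s for this toy malware
    ```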

  15. Automatic Road Area Extraction from Printed Maps Based on Linear Feature Detection

    NASA Astrophysics Data System (ADS)

    Callier, Sebastien; Saito, Hideo

    Raster maps are widely available in everyday life and can contain a huge amount of information of any kind, using labels, pictograms, or color codes, for example. However, it is not an easy task to extract roads from those maps because of these overlapping features. In this paper, we focus on an automated method to extract roads by using linear feature detection to search for seed points having a high probability of belonging to roads. These linear features are lines of pixels of homogeneous color in each direction around each pixel. The seeds are then expanded before choosing to keep or discard the extracted element. Because this method is not mainly based on color segmentation, it is also suitable for handwritten maps, for example. The experimental results demonstrate that in most cases our method gives results similar to usual methods without needing any previous data or user input, though it does need some knowledge of the target maps; it also works with handwritten maps drawn following some basic rules, where usual methods fail.

  16. Exploration of Genetic Programming Optimal Parameters for Feature Extraction from Remote Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Gao, P.; Shetty, S.; Momm, H. G.

    2014-11-01

    Evolutionary computation is used for improved information extraction from high-resolution satellite imagery. The utilization of evolutionary computation is based on stochastic selection of input parameters often defined in a trial-and-error approach. However, exploration of optimal input parameters can yield improved candidate solutions while requiring reduced computation resources. In this study, the design and implementation of a system that investigates the optimal input parameters was researched in the problem of feature extraction from remotely sensed imagery. The two primary assessment criteria were the highest fitness value and the overall computational time. The parameters explored include the population size and the percentage and order of mutation and crossover. The proposed system has two major subsystems; (i) data preparation: the generation of random candidate solutions; and (ii) data processing: evolutionary process based on genetic programming, which is used to spectrally distinguish the features of interest from the remaining image background of remote sensed imagery. The results demonstrate that the optimal generation number is around 1500, the optimal percentage of mutation and crossover ranges from 35% to 40% and 5% to 0%, respectively. Based on our findings the sequence that yielded better results was mutation over crossover. These findings are conducive to improving the efficacy of utilizing genetic programming for feature extraction from remotely sensed imagery.

  17. Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; van Hemert, Jano; Li, Baihua

    2013-01-01

    Glaucoma is a group of eye diseases that have common traits, such as high eye pressure, damage to the Optic Nerve Head, and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods for pre-diagnosis of Glaucoma include measurement of Intra-Ocular Pressure (IOP) using tonometry, pachymetry, and gonioscopy, which are performed manually by clinicians. These tests are usually followed by Optic Nerve Head (ONH) appearance examination for the confirmed diagnosis of Glaucoma. The diagnosis requires regular monitoring, which is costly and time consuming, and its accuracy and reliability are limited by the domain knowledge of different ophthalmologists. Therefore, automatic diagnosis of Glaucoma attracts a lot of attention. This paper surveys the state of the art in automatic extraction of anatomical features from retinal images to assist early diagnosis of Glaucoma. We have conducted a critical evaluation of the existing automatic extraction methods based on features including Optic Cup to Disc Ratio (CDR), Retinal Nerve Fibre Layer (RNFL), Peripapillary Atrophy (PPA), Neuroretinal Rim Notching, and Vasculature Shift, which adds value to efficient feature extraction related to Glaucoma diagnosis. PMID:24139134

  18. Extracting product features and opinion words using pattern knowledge in customer reviews.

    PubMed

    Htay, Su Su; Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, and it motivates the development of automatic opinion mining applications for users. Therefore, efficient methods and techniques are needed to extract opinions from reviews. In this paper, we propose a novel idea for finding opinion words or phrases for each feature from customer reviews in an efficient way. Our focus is to extract patterns of opinion words/phrases about product features from review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product. PMID:24459430
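
    A hedged sketch of the pattern idea, harvesting (feature, opinion) pairs from review sentences with a simple adjective + noun POS pattern via NLTK; the pattern set is greatly simplified from the paper's, and resource names vary slightly across NLTK versions:

    ```python
    import nltk

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    def feature_opinion_pairs(sentence):
        """Return (feature, opinion) pairs matched by an adjective + noun pattern."""
        tags = nltk.pos_tag(nltk.word_tokenize(sentence))
        pairs = []
        for (w1, t1), (w2, t2) in zip(tags, tags[1:]):
            if t1.startswith("JJ") and t2.startswith("NN"):   # e.g. "great battery"
                pairs.append((w2, w1))                        # (feature, opinion)
        return pairs

    print(feature_opinion_pairs("The phone has a great battery and a sharp screen."))
    ```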

  19. Extracting Features from an Electrical Signal of a Non-Intrusive Load Monitoring System

    NASA Astrophysics Data System (ADS)

    Figueiredo, Marisa B.; de Almeida, Ana; Ribeiro, Bernardete; Martins, António

    Improving energy efficiency by monitoring household electrical consumption is of significant importance given present-day climate change concerns. One solution to the electrical consumption management problem is the use of a non-intrusive load monitoring system (NILM). This system captures the signals from the aggregate consumption, extracts features from these signals, and classifies the extracted features in order to identify the switched-on appliances. Effective device identification (ID) requires a signature to be assigned to each appliance, and to specify an ID for each device, signal processing techniques are needed to extract the relevant features. This paper describes a technique for steady-state recognition in an electrical digital signal as the first stage in the implementation of an innovative NILM; the final goal is to develop an intelligent system for the identification of appliances by automated learning. The proposed approach is based on the ratio between rectangular areas defined by the signal samples. The computational experiments show the method's effectiveness for accurate steady-state identification in the electrical input signals.
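
    A generic steady-state detector can stand in for the rectangular-area ratio test described above; the sketch below uses a moving-window variance threshold instead, with window length and threshold as assumptions:

    ```python
    import numpy as np

    def steady_state_mask(power, win=50, thresh=2.0):
        """True where the aggregate power signal is locally stationary."""
        mask = np.zeros(len(power), dtype=bool)
        for i in range(len(power) - win):
            mask[i] = power[i:i + win].std() < thresh
        return mask

    # Toy aggregate signal: one appliance switching on at sample 300.
    power = np.concatenate([np.full(300, 100.0), np.full(300, 160.0)])
    power += np.random.randn(len(power))

    edges = np.flatnonzero(np.diff(steady_state_mask(power).astype(int)))
    print("candidate transition samples:", edges)
    ```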

  20. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Ahmad, Rana Fayyaz; Badruddin, Nasreen; Kamel, Nidal; Hussain, Muhammad; Chooi, Weng-Tink

    2015-03-01

    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to EEG signals and the relative wavelet energy is calculated in terms of the detailed coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for classification. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task (Raven's Advanced Progressive Matrices test) and (2) EEG signals recorded in a rest condition (eyes open). The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision values. Accuracy above 98% was achieved by the support vector machine, multi-layer perceptron and K-nearest neighbor classifiers with the approximation (A4) and detailed coefficients (D4), which represent the frequency ranges of 0.53-3.06 and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task with a high accuracy rate. PMID:25649845
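
    A minimal sketch of the described scheme: a 4-level DWT and the relative wavelet energy per band; the wavelet choice ('db4') and the simulated signal are assumptions:

    ```python
    import numpy as np
    import pywt

    def relative_wavelet_energy(eeg, wavelet="db4", level=4):
        """One relative-energy feature per band: [A4, D4, D3, D2, D1]."""
        coeffs = pywt.wavedec(eeg, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    eeg = np.random.randn(2048)          # placeholder single-channel EEG epoch
    rwe = relative_wavelet_energy(eeg)
    print("A4, D4 relative energy:", rwe[0], rwe[1])
    ```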

  1. EEMD Independent Extraction for Mixing Features of Rotating Machinery Reconstructed in Phase Space

    PubMed Central

    Ma, Zaichao; Wen, Guangrui; Jiang, Cheng

    2015-01-01

    Empirical Mode Decomposition (EMD), due to its adaptive decomposition property for non-linear and non-stationary signals, has been widely used in vibration analyses for rotating machinery. However, EMD suffers from mode mixing, which makes it difficult to extract features independently. Although an improved EMD, well known as ensemble EMD (EEMD), has been proposed, mode mixing is alleviated only to a certain degree; moreover, EEMD needs to determine the amplitude of the added noise. In this paper, we propose Phase Space Ensemble Empirical Mode Decomposition (PSEEMD), integrating Phase Space Reconstruction (PSR) and Manifold Learning (ML) to modify EEMD. We provide the principle and detailed procedure of PSEEMD, and analyses of a simulated signal and an actual vibration signal derived from a rubbing rotor are performed. The results show that PSEEMD is more efficient and convenient than EEMD in extracting the mixing features from the investigated signal and in optimizing the amplitude of the necessary added noise. Additionally, PSEEMD can extract weak features contaminated by a certain amount of noise. PMID:25871723

  2. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    NASA Astrophysics Data System (ADS)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process, and the development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells; hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor has been applied with different spatial window sizes in the RGB and L*a*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared across various sample sizes by Support Vector Machines using the k-fold cross-validation method. The results show that separation accuracy between mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
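
    The Haralick-style texture step can be sketched with scikit-image's GLCM utilities, assuming a grayscale patch as the spatial window; the window size and GLCM offsets are assumptions (on scikit-image < 0.19 the functions are spelled greycomatrix/greycoprops):

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # A 31x31 spatial window around a candidate cell (random placeholder).
    patch = (np.random.rand(31, 31) * 255).astype(np.uint8)

    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    features = {prop: graycoprops(glcm, prop).mean()
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    print(features)  # texture descriptors fed to the SVM
    ```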

  3. Feature extraction of kernel regress reconstruction for fault diagnosis based on self-organizing manifold learning

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Xu, Guanghua; Liu, Dan

    2013-09-01

    The feature space extracted from vibration signals with various faults is often nonlinear and of high dimension. Nonlinear dimensionality reduction methods, such as manifold learning, are available for extracting low-dimensional embeddings. However, these methods all rely on manual intervention and have shortcomings in stability and in suppressing disturbance noise. To extract features automatically, a manifold learning method with self-organizing mapping is introduced for the first time. Under the non-uniform sample distribution reconstructed by the phase space, the expectation maximization (EM) iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment (LTSA) algorithm is adopted to compress the high-dimensional phase space into a more faithful low-dimensional representation. Finally, the signal is reconstructed by kernel regression. Several typical cases, including the Lorenz system, an engine fault with a piston pin defect, and a bearing fault with an outer-race defect, are analyzed. Compared with LTSA alone and the continuous wavelet transform, the results show that the background noise can be fully restrained and the entire periodic repetition of impact components is well separated and identified. A new way to automatically and precisely extract the impulsive components from mechanical signals is proposed.
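
    A hedged sketch of the LTSA compression step: embed delay-reconstructed phase-space points into a low-dimensional manifold with scikit-learn; the delay, embedding dimension, and neighbor count are illustrative, not the paper's tuned values:

    ```python
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    def delay_embed(x, dim=10, tau=2):
        """Takens-style delay embedding of a 1D signal into dim-dimensional points."""
        n = len(x) - (dim - 1) * tau
        return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

    signal = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.1 * np.random.randn(2000)
    phase_space = delay_embed(signal)               # reconstructed phase space
    ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
    embedding = ltsa.fit_transform(phase_space)     # low-dimensional features
    ```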

  4. Early detection and classification of powdery mildew-infected rose leaves using ANFIS based on extracted features of thermal images

    NASA Astrophysics Data System (ADS)

    Jafari, Mehrnoosh; Minaei, Saeid; Safaie, Naser; Torkamani-Azar, Farah

    2016-05-01

    Spatial and temporal changes in the surface temperature of infected and non-infected rose plant (Rosa hybrida cv. 'Angelina') leaves were visualized using digital infrared thermography. Infected areas exhibited a presymptomatic decrease in leaf temperature of up to 2.3 °C. In this study, two experiments were conducted: one in the greenhouse (semi-controlled ambient conditions) and the other in a growth chamber (controlled ambient conditions). The effects of drought stress and darkness on the thermal images were also studied. It was found that thermal histograms of the infected leaves closely follow a standard normal distribution: they have a skewness near zero, kurtosis under 3, standard deviation larger than 0.6, and a Maximum Temperature Difference (MTD) of more than 4. For each thermal histogram, central tendency, variability, and parameters of the best-fitted Standard Normal and Laplace distributions were estimated. To classify healthy and infected leaves, feature selection was conducted and the best extracted thermal features with the largest linguistic hedge values were chosen. Among those features independent of absolute temperature measurement, MTD, SD, skewness, R2l, kurtosis and bn were selected. Then, a neuro-fuzzy classifier was trained to recognize healthy leaves from infected ones. The k-means clustering method was utilized to obtain the initial parameters and the fuzzy "if-then" rules. Best estimation rates of 92.55% and 92.3% were achieved in training and testing the classifier with 8 clusters. Results showed that drought stress had an adverse effect on the classification of healthy leaves: more healthy leaves under drought stress were classified as infected, causing PPV and Specificity index values to decrease accordingly. Image acquisition in the dark had no significant effect on classification performance.

  5. DBSCAN-based ROI extracted from SAR images and the discrimination of multi-feature ROI

    NASA Astrophysics Data System (ADS)

    He, Xin Yi; Zhao, Bo; Tan, Shu Run; Zhou, Xiao Yang; Jiang, Zhong Jin; Cui, Tie Jun

    2009-10-01

    The purpose of this paper is to extract regions of interest (ROIs) from coarsely detected synthetic aperture radar (SAR) images and to discriminate whether an ROI contains a target or not, so as to eliminate false alarms and prepare for target recognition. Automatic target clustering is one of the most difficult tasks in a SAR-image automatic target recognition system. Density-based spatial clustering of applications with noise (DBSCAN) relies on a density-based notion of clusters designed to discover clusters of arbitrary shape. DBSCAN is applied here to SAR image processing for the first time, and it has many attractive properties: only two relatively insensitive parameters (radius of neighborhood and minimum number of points) are needed; clusters of arbitrary shape, which suit the coarsely detected SAR images, can be discovered; and calculation time and memory can be reduced. In the multi-feature ROI discrimination scheme, we extract several target features covering geometry (the area discriminator and a Radon-transform based target profile discriminator), distribution characteristics (the EFF discriminator), and EM scattering properties (the PPR discriminator). The synthesized judgment effectively eliminates false alarms.
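
    A toy illustration of the DBSCAN stage: cluster the coordinates of coarse-detection pixels into arbitrarily shaped ROIs; the eps/min_samples values below stand in for the paper's two parameters:

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(1)
    target = rng.normal(loc=(30, 40), scale=1.5, size=(80, 2))  # a dense dark spot
    clutter = rng.uniform(0, 100, size=(40, 2))                 # isolated speckle
    pixels = np.vstack([target, clutter])

    labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(pixels)
    rois = [pixels[labels == k] for k in set(labels) if k != -1]
    print(len(rois), "candidate ROI(s); noise pixels:", int(np.sum(labels == -1)))
    ```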

  6. A Novel Approach Based on Data Redundancy for Feature Extraction of EEG Signals.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Kamel, Nidal; Hussain, Muhammad

    2016-03-01

    Feature extraction and classification of electroencephalogram (EEG) signals in medical applications is a challenging task. EEG signals produce a huge amount of redundant or repeating information, and this redundancy causes potential hurdles in EEG analysis. Hence, we propose to use this redundant information of EEG as a feature to discriminate and classify different EEG datasets. In this study, we have proposed a JPEG2000-based approach for computing data redundancy from multi-channel EEG signals and have used the redundancy as a feature for classification of EEG signals by applying support vector machine, multi-layer perceptron and k-nearest neighbor classifiers. The approach is validated on three EEG datasets and achieved a high accuracy rate (95-99%) in classification. Dataset-1 includes EEG signals recorded during a fluid intelligence test, dataset-2 consists of EEG signals recorded during a memory recall test, and dataset-3 has epileptic seizure and non-seizure EEG. The findings demonstrate that the approach has the ability to extract robust features and classify EEG signals in various applications, including clinical as well as normal EEG patterns. PMID:26613724

  7. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received much attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the threshold manually. The main idea of this paper is to obtain stable extrema by a machine learning algorithm. Firstly, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the simulated image set of the original image. Because of the way the simulated images are generated, the affine transformation of each generated image is known exactly; compared with the traditional matching process, which relies on the unstable RANSAC method to estimate the affine transformation, this approach is more stable and accurate. Secondly, we calculate the stability value of each feature point from the image set and its known affine transformations, and we collect the feature properties of each point, such as DoG features, scales, and edge point density. These two quantities form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, a Rank-SVM model is trained, yielding a weight vector. In use, based on the feature properties of each point and the weight vector obtained by training, we compute a sort value for each feature point that reflects its stability, and we rank the feature points accordingly. In conclusion, we tested our algorithm against the original SIFT detector as a comparison; under different viewpoint changes, blurs, and illuminations, the experimental results show that our algorithm is more efficient.

  8. Discriminative Feature Extraction via Multivariate Linear Regression for SSVEP-Based BCI.

    PubMed

    Wang, Haiqiang; Zhang, Yu; Waytowich, Nicholas R; Krusienski, Dean J; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2016-05-01

    Many of the most widely accepted methods for reliable detection of steady-state visual evoked potentials (SSVEPs) in the electroencephalogram (EEG) utilize canonical correlation analysis (CCA). CCA uses pure sine and cosine reference templates with frequencies corresponding to the visual stimulation frequencies. These generic reference templates may not optimally reflect the natural SSVEP features obscured by the background EEG. This paper introduces a new approach that utilizes spatio-temporal feature extraction with multivariate linear regression (MLR) to learn discriminative SSVEP features for improving the detection accuracy. MLR is implemented on dimensionality-reduced EEG training data and a constructed label matrix to find optimally discriminative subspaces. Experimental results show that the proposed MLR method significantly outperforms CCA as well as several other competing methods for SSVEP detection, especially for time windows shorter than 1 second. This demonstrates that the MLR method is a promising new approach for achieving improved real-time performance of SSVEP-BCIs. PMID:26812728
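
    The MLR idea can be sketched as ordinary least squares from (dimensionality-reduced) training features to a one-hot label matrix, with the learned weights acting as a discriminative subspace; the shapes and solver below are illustrative simplifications of the paper's formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_feats, n_classes = 120, 40, 4
    X = rng.normal(size=(n_trials, n_feats))       # reduced EEG features
    y = rng.integers(0, n_classes, size=n_trials)  # stimulation frequency index
    Y = np.eye(n_classes)[y]                       # constructed label matrix

    W, *_ = np.linalg.lstsq(X, Y, rcond=None)      # MLR weights: argmin ||XW - Y||
    projected = X @ W                              # discriminative SSVEP features
    predicted = projected.argmax(axis=1)
    print("training accuracy:", np.mean(predicted == y))
    ```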

  9. Automated Feature Extraction in Brain Tumor by Magnetic Resonance Imaging Using Gaussian Mixture Models

    PubMed Central

    Chaddad, Ahmad

    2015-01-01

    This paper presents a novel method for Glioblastoma (GBM) feature extraction based on Gaussian mixture model (GMM) features using MRI. We addressed the task of identifying GBM with the new features using T1- and T2-weighted images (T1-WI, T2-WI) and Fluid-Attenuated Inversion Recovery (FLAIR) MR images. A pathologic area was detected using multithresholding segmentation with morphological operations on the MR images. Multiclassifier techniques were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate GBM from normal tissue. GMM features demonstrated the best performance in a comparative study against principal component analysis (PCA) and wavelet-based features. For the T1-WI, the accuracy was 97.05% (AUC = 92.73%) with 0.00% missed detections and 2.95% false alarms. For the T2-WI, the same accuracy (97.05%, AUC = 91.70%) was achieved with 2.95% missed detections and 0.00% false alarms. In FLAIR mode the accuracy decreased to 94.11% (AUC = 95.85%) with 0.00% missed detections and 5.89% false alarms. These experimental results are promising for characterizing the heterogeneity of GBM and hence for its early treatment. PMID:26136774
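
    A sketch of GMM feature extraction from a segmented pathologic region: fit a small mixture to the region's intensities and use the fitted parameters as the feature vector; the component count and toy intensities are assumptions:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Toy ROI voxel intensities drawn from two tissue modes.
    intensities = np.concatenate([np.random.normal(90, 8, 400),
                                  np.random.normal(140, 12, 250)])

    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(intensities.reshape(-1, 1))

    features = np.concatenate([gmm.means_.ravel(),
                               gmm.covariances_.ravel(),
                               gmm.weights_])      # per-mode mean/var/weight
    print("GMM feature vector:", np.round(features, 2))
    ```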

  10. Non-linear feature extraction from HRV signal for mortality prediction of ICU cardiovascular patient.

    PubMed

    Karimi Moridani, Mohammad; Setarehdan, Seyed Kamaledin; Motie Nasrabadi, Ali; Hajinasrollah, Esmaeil

    2016-04-01

    Intensive care unit (ICU) patients are at risk of in-ICU morbidity and mortality, making systems for identifying at-risk patients a necessity for improving clinical care. This study presents a new method for predicting in-hospital mortality using heart rate variability (HRV) collected during a patient's ICU stay. A HRV time series processing based method is proposed for mortality prediction of ICU cardiovascular patients. HRV signals were obtained by measuring R-R time intervals. A novel method, named the return map, is then developed that reveals useful information from the HRV time series. This study also proposes several features that can be extracted from the return map, including the angle between two vectors, the area of triangles formed by successive points, the shortest distance to the 45° line, and their various combinations. Finally, a thresholding technique is proposed to extract the risk period and to predict mortality. The data used to evaluate the proposed algorithm were obtained from 80 cardiovascular ICU patients, from the first 48 h of the first ICU stay of 40 males and 40 females. This study showed that the angle feature has on average a sensitivity of 87.5% (with 12 false alarms), the area feature 89.58% (with 10 false alarms), the shortest distance feature 85.42% (with 14 false alarms), and the combined feature 92.71% (with seven false alarms). The results showed that the last half hour before the patient's death is very informative for diagnosing the patient's condition and saving his/her life. These results confirm that it is possible to predict mortality based on the features introduced in this paper, relying on the variations of the HRV dynamic characteristics. PMID:27028609
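
    The return-map geometry can be illustrated directly: plot RR(i) against RR(i+1) and compute, for example, each point's shortest distance to the 45° line and the areas of triangles formed by successive points; the RR series below is simulated:

    ```python
    import numpy as np

    rr = 0.8 + 0.05 * np.random.randn(500)        # toy RR intervals (seconds)
    p = np.column_stack([rr[:-1], rr[1:]])        # return-map points (x, y)

    # Shortest distance from each point to the y = x (45°) line.
    dist_45 = np.abs(p[:, 0] - p[:, 1]) / np.sqrt(2)

    # Area of the triangle formed by each triple of successive points.
    a, b, c = p[:-2], p[1:-1], p[2:]
    tri_area = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                            - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))

    print("mean distance to 45° line:", dist_45.mean())
    print("mean triangle area:", tri_area.mean())
    ```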

  11. Extraction of spatial features in hyperspectral images based on the analysis of differential attribute profiles

    NASA Astrophysics Data System (ADS)

    Falco, Nicola; Benediktsson, Jon A.; Bruzzone, Lorenzo

    2013-10-01

    The new generation of hyperspectral sensors can provide images with high spectral and spatial resolution. Recent improvements in mathematical morphology have produced new techniques such as Attribute Profiles (APs) and Extended Attribute Profiles (EAPs) that can effectively model the spatial information in remote sensing images. The main drawbacks of these techniques are the selection of the optimal range of values for the family of criteria adopted at each filter step, and the high dimensionality of the profiles, which results in a very large number of features and therefore provokes the Hughes phenomenon. In this work, we focus on the dimensionality issue, which leads to high intrinsic information redundancy, proposing a novel strategy for extracting spatial information from hyperspectral images based on the analysis of Differential Attribute Profiles (DAPs). A DAP is generated by computing the derivative of the AP; it shows at each level the residual between two adjacent levels of the AP. By analyzing the multilevel behavior of the DAP, it is possible to extract geometrical features corresponding to the structures within the scene at different scales. Our proposed approach consists of two steps: 1) a homogeneity measurement is used to identify the level L at which a given pixel belongs to a region with a physical meaning; 2) the geometrical information of the extracted regions is fused into a single map considering the level L previously identified. The process is repeated for different attributes, building a reduced EAP whose dimensionality is much lower than that of the original EAP. Experiments carried out on the hyperspectral data set of the Pavia University area show the effectiveness of the proposed method in extracting spatial features related to the physical structures present in the scene, achieving higher classification accuracy than the results reported in the state-of-the-art literature.

  12. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by the localized fault are important measurement information for bearing fault diagnosis. Thus it is crucial to extract the transients from the bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed based on the TF distribution (TFD) of the Morlet wavelet bases and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are employed to do correlation with the TFD of the analyzed signal along the IF ridge tube for identifying the optimum parameters of transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of satisfying the native pulse waveform structure of transients. The effectiveness of the proposed method is verified by practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.

  13. Fault feature extraction and enhancement of rolling element bearing in varying speed condition

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Zhang, W.; Qin, Z. Y.; Chu, F. L.

    2016-08-01

    In engineering applications, load variability usually varies the shaft speed, which degrades the efficacy of diagnostic methods based on the assumption of constant speed. The investigation of diagnostic methods suitable for varying speed conditions is therefore significant for bearing fault diagnosis. In this paper, a novel fault feature extraction and enhancement procedure is proposed that combines iterative envelope analysis with a low-pass filtering operation. First, based on an analytical model of the collected vibration signal, the envelope signal is theoretically calculated and the iterative envelope analysis is improved for the varying speed condition. Then, a feature enhancement procedure is performed by applying a low-pass filter to the temporal envelope obtained by the iterative envelope analysis. Finally, the temporal envelope signal is transformed to the angular domain by computed order tracking, and the fault feature is extracted from the squared envelope spectrum. Simulations and experiments are used to validate the efficacy of the theoretical analysis and the proposed procedure. It is shown that the computed order tracking method should be applied to the envelope of the signal in order to avoid energy spreading and amplitude distortion. Compared with feature enhancement performed by the fast kurtogram and the corresponding optimal band-pass filtering, the proposed method can efficiently extract the fault character under varying speed conditions with less amplitude attenuation. Furthermore, since it does not involve center frequency estimation, the proposed method is more concise for engineering applications.

  14. Time series analysis and feature extraction techniques for structural health monitoring applications

    NASA Astrophysics Data System (ADS)

    Overbey, Lucas A.

    Recently, advances in sensing and sensing methodologies have led to the deployment of multiple sensor arrays on structures for structural health monitoring (SHM) applications. Appropriate feature extraction, detection, and classification methods based on measurements obtained from these sensor networks are vital to the SHM paradigm. This dissertation focuses on a multi-input/multi-output approach to novel data processing procedures to produce detailed information about the integrity of a structure in near real-time. The studies employ nonlinear time series analysis techniques to extract three different types of features for damage diagnostics: namely, nonlinear prediction error, transfer entropy, and the generalized interdependence. These features form reliable measures of generalized correlations between multiple measurements to capture aspects of the dynamics related to the presence of damage. Several analyses are conducted on each of these features. Specifically, variations of nonlinear prediction error are introduced, analyzed, and validated, including the use of a stochastic excitation to augment generality, introduction of local state-space models for sensitivity enhancement, and the employment of comparisons between multiple measurements for localization capability. A modification and enhancement to transfer entropy is created and validated for improved sensitivity. In addition, a thorough analysis of the effects of variability on transfer entropy estimation is made. The generalized interdependence is introduced into the literature and validated as an effective measure of damage presence, extent, and location. These features are validated on a multi-degree-of-freedom dynamic oscillator and several different frame experiments. The evaluated features are then fed into four different classification schemes to obtain a concurrent set of outputs that categorize the integrity of the structure, e.g. the presence, extent, location, and type of damage.
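
    One of the feature families above, nonlinear prediction error, can be sketched as follows: predict each delay vector's successor from its nearest neighbor's successor and average the squared residuals; the embedding parameters and signal are illustrative:

    ```python
    import numpy as np

    def nonlinear_prediction_error(x, dim=5, tau=3):
        """Mean squared one-step prediction error in delay-embedded space."""
        n = len(x) - (dim - 1) * tau - 1
        vecs = np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
        err = []
        for i in range(n - 1):
            d = np.linalg.norm(vecs - vecs[i], axis=1)
            d[max(i - 1, 0) : i + 2] = np.inf       # exclude self/adjacent points
            j = int(np.argmin(d))                   # nearest neighbor in phase space
            err.append((x[i + 1 + (dim - 1) * tau]
                        - x[j + 1 + (dim - 1) * tau]) ** 2)
        return float(np.mean(err))

    baseline = np.sin(np.linspace(0, 60, 3000)) + 0.05 * np.random.randn(3000)
    print("prediction error:", nonlinear_prediction_error(baseline))
    ```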

  15. Feature extraction from geoeye-1 stereo pairs data for forested area

    NASA Astrophysics Data System (ADS)

    Stournara, P.; Georgiadis, C.; Kaimaris, D.; Tsakiri-Strati, M.; Tsioukas, V.

    2015-04-01

    Remote sensing facilitates the extraction of information about the Earth's surface through its capability of acquiring images covering large areas and the availability of commercial software for their processing. The aim of this study is feature extraction from three Geoeye-1 stereo pairs over a forested area. The study area is located in the central mountainous forested peninsula of Chalkidiki, in northern Greece. Dominant forest tree species of the site are oak (Quercus conferta), beech (Fagus moesiaca), black pine (Pinus nigra) and calabrian pine (Pinus brutia). Very High Resolution (VHR) Geoeye-1 stereo pair satellite images were utilized in panchromatic and multispectral mode. The panchromatic mode was employed for Digital Surface Model (DSM) generation and its evaluation. The High Pass Filter (HPF) data fusion technique was applied between the panchromatic and multispectral modes to acquire a new image with the benefits of both contributing images. Because the feature extraction was attempted in a forested region, the NDVI index and the Tasseled Cap transformation were applied in the fused images' evaluation procedure, and visual assessment was also performed. The accuracy of the generated DSM and the evaluation results of the fused images were remarkable.
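
    The NDVI step is straightforward to sketch, assuming `red` and `nir` hold the corresponding band arrays of the fused image; the vegetation threshold is an assumption:

    ```python
    import numpy as np

    red = np.random.rand(256, 256)       # placeholder for the red band
    nir = np.random.rand(256, 256)       # placeholder for the near-infrared band

    ndvi = (nir - red) / (nir + red + 1e-9)   # small epsilon avoids division by zero
    vegetated = ndvi > 0.3                    # assumed vegetation threshold
    print("vegetated fraction:", vegetated.mean())
    ```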

  16. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

    Nonnegative Tucker3 decomposition (NTD) has attracted much attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration circumstances encountered in mechanical fault diagnosis. To decompose a large-scale tensor and extract available bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from O(n^N lg n) in 3D spaces to O(R1·R2·n·lg n) in 1D vectors due to the low-rank form of its Tucker-product convolution. Meanwhile, a simultaneous updating algorithm is given to overcome the overfitting, slow convergence and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more regular feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method compared with the other methods in bispectrum feature extraction, and a legible fault expression can also be produced by the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF achieves 81.66 dB against 15.17 dB by beta-divergence based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s by hierarchical alternating least squares based on NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency but also extracts more regular and sparser bispectrum features of the gearbox fault.

  17. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information from aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which poses a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in feature extraction of fault information, the theory also confronts inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, going beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior information that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem is formulated that can be solved by the block coordinate descent (BCD) method. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of the feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated through vibration signals from an experimental rig of aircraft engine bearings.

  18. Detecting abnormality in optic nerve head images using a feature extraction analysis

    PubMed Central

    Zhu, Haogang; Poostchi, Ali; Vernon, Stephen A; Crabb, David P

    2014-01-01

    Imaging and evaluation of the optic nerve head (ONH) plays an essential part in the detection and clinical management of glaucoma. The morphological characteristics of ONHs vary greatly from person to person and this variability means it is difficult to quantify them in a standardized way. We developed and evaluated a feature extraction approach using shift-invariant wavelet packet and kernel principal component analysis to quantify the shape features in ONH images acquired by scanning laser ophthalmoscopy (Heidelberg Retina Tomograph [HRT]). The methods were developed and tested on 1996 eyes from three different clinical centers. A shape abnormality score (SAS) was developed from extracted features using a Gaussian process to identify glaucomatous abnormality. SAS can be used as a diagnostic index to quantify the overall likelihood of ONH abnormality. Maps showing areas of likely abnormality within the ONH were also derived. Diagnostic performance of the technique, as estimated by ROC analysis, was significantly better than the classification tools currently used in the HRT software – the technique offers the additional advantage of working with all images and is fully automated. PMID:25071960

  19. Comparative assessment of feature extraction methods for visual odometry in wireless capsule endoscopy.

    PubMed

    Spyrou, Evaggelos; Iakovidis, Dimitris K; Niafas, Stavros; Koulaouzidis, Anastasios

    2015-10-01

    Wireless capsule endoscopy (WCE) enables the non-invasive examination of the gastrointestinal (GI) tract by a swallowable device equipped with a miniature camera. Accurate localization of the capsule in the GI tract enables accurate localization of abnormalities for medical interventions such as biopsy and polyp resection; therefore, the optimization of the localization outcome is important. Current approaches to endoscopic capsule localization are mainly based on external sensors and transit time estimations. Recently, we demonstrated the feasibility of capsule localization based entirely on visual features, without the use of external sensors. This technique relies on a motion estimation algorithm that enables measurement of the distance and rotation of the capsule from the acquired video frames. Towards the determination of an optimal visual feature extraction technique for capsule motion estimation, an extensive comparative assessment of several state-of-the-art techniques, using a publicly available dataset, is presented. The results show that minimization of the localization error is possible at the cost of computational efficiency; a localization error approximately one order of magnitude higher than the minimal one can be considered a compromise for the use of current computationally efficient feature extraction techniques. PMID:26073184

  20. Wood Texture Features Extraction by Using GLCM Combined With Various Edge Detection Methods

    NASA Astrophysics Data System (ADS)

    Fahrurozi, A.; Madenda, S.; Ernastuti; Kerami, D.

    2016-06-01

    An image with a specific texture can be distinguished manually by eye; however, this is sometimes difficult when the textures are quite similar. Wood is a natural material that forms a unique texture, and experts can assess the quality of wood based on the texture observed in certain parts of the wood. In this study, texture features have been extracted from wood images that can be used to identify the characteristics of wood digitally by computer. Feature extraction was carried out using Gray Level Co-occurrence Matrices (GLCM) built on images produced by several edge detection methods applied to the wood image. The edge detection methods used include Roberts, Sobel, Prewitt, Canny and Laplacian of Gaussian. The wood images were taken in the LE2i laboratory, Universite de Bourgogne, from wood samples in France that were grouped by quality by experts and divided into four quality types. Statistics were obtained that illustrate the distribution of texture feature values for each wood type, compared according to the edge operator used and the choice of GLCM parameters.

  1. Geological structures from televiewer logs of GT-2, Fenton Hill, New Mexico: Part 1, Feature extraction

    SciTech Connect

    Burns, K.L.

    1987-07-01

    Patterns in reflected sonic intensity recognized during examination of televiewer logs of basement gneiss at the Hot Dry Rock Site, Fenton Hill, New Mexico, are due to geological fractures and foliations and to incipient breakouts. These features are obscured by artifacts caused by wellbore ellipticity, tool off-centering, and tool oscillations. An interactive method, developed for extraction of the structural features (fractures and foliations), uses human perception as a pattern detector and a chi-square test of harmonic form as a pattern discriminator. From imagery of GT-2, 733 structures were recovered. The acceptance rate of the discriminator was 54%. Despite these positive results, the general conclusion of this study is that intensity-mode imagery from Fenton Hill is not directly invertible for geological information because of the complexity of the televiewer imaging process. Developing a forward model of the intensity-imaging process, or converting to caliper-mode imagery, or doing both, will be necessary for high-fidelity feature extraction from televiewer data.

  2. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    NASA Astrophysics Data System (ADS)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary to develop a tool to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  3. Plant identification through images: Using feature extraction of key points on leaf contours

    PubMed Central

    Gwo, Chih-Ying; Wei, Chia-Hung

    2013-01-01

    • Premise of the study: Because plant identification demands extensive knowledge and complex terminologies, even professional botanists require significant time in the field for mastery of the subject. As plant leaves are normally regarded as possessing useful characteristics for species identification, leaf recognition through images can be considered an important research issue for plant recognition. • Methods: This study proposes a feature extraction method for leaf contours, which describes the lines between the centroid and each contour point on an image. A length histogram is created to represent the distribution of distances in the leaf contour. Thereafter, a classifier is applied from a statistical model to calculate the matching score of the template and query leaf. • Results: The experimental results show that the accuracy of the top match achieves 92.7%, and that of the first two matches reaches 97.3%. In the scale invariance test, the 45 correlation coefficients fall between a minimal value of 0.98611 and a maximal value of 0.99992. Like the scale invariance test, the rotation invariance test performed 45 comparison sets; the correlation coefficients range between 0.98071 and 0.99988. • Discussion: This study shows that the features extracted from leaf images are invariant to scale and rotation, because those features are close to perfect positive correlation in terms of the correlation coefficients. Moreover, the experimental results indicate that the proposed method outperforms two other methods, Zernike moments and curvature scale space. PMID:25202493
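
    The core of the method, the centroid-to-contour length histogram, can be sketched as follows; the binary mask input, the bin count and the normalization step are assumptions for illustration only.

```python
# Sketch: centroid-contour distance histogram for a leaf outline.
# Assumes scikit-image; "leaf_mask.png" is a hypothetical binary mask.
import numpy as np
from skimage import io, measure

mask = io.imread("leaf_mask.png", as_gray=True) > 0.5
contour = max(measure.find_contours(mask.astype(float), 0.5), key=len)

centroid = contour.mean(axis=0)                  # centroid of the outline
dist = np.linalg.norm(contour - centroid, axis=1)
dist = dist / dist.max()                         # normalization -> scale invariance

# Length histogram: the distribution of centroid-contour distances.
hist, _ = np.histogram(dist, bins=50, range=(0.0, 1.0), density=True)
```

    Because the histogram discards where on the contour each distance occurs, the representation is also tolerant to rotation, matching the invariance tests reported above.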

  4. VLSI processor with a configurable processing element array for balanced feature extraction in high-resolution images

    NASA Astrophysics Data System (ADS)

    Zhu, Hongbo; Shibata, Tadashi

    2014-01-01

    A VLSI processor employing a configurable processing element array (PEA) is developed for a newly proposed balanced feature extraction algorithm. In the algorithm, the input image is divided into square regions and the number of features is determined by noise effect analysis in each region. Regions of different sizes are used according to the resolutions and contents of input images. Therefore, inside the PEA, processing elements are hierarchically grouped for feature extraction in regions of different sizes. A proof-of-concept chip is fabricated using a 0.18 µm CMOS technology with a 32 × 32 PEA. From measurement results, a speed of 7.5 kfps is achieved for feature extraction in 128 × 128 pixel regions when operating the chip at 45 MHz, and a speed of 55 fps is also achieved for feature extraction in 1920 × 1080 pixel images.

  5. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high resolution information about trees, street features and pole-like objects on the street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically, in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contain a large diversity of tree species. The MLS data consisted of high density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The segmentation method comprises the following steps: first, the ground points are determined. As a second step, cylinders are fitted in a vertical slice 1-1.5 m relative height above ground, which is used to determine the potential location of each single tree trunk and cylinder-like object. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted objects are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter and volume of the individual tree. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying the trees into single species. MLS data used in this project had been measured in the framework of

  6. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and

  7. Feature extraction from time domain acoustic signatures of weapons systems fire

    NASA Astrophysics Data System (ADS)

    Yang, Christine; Goldman, Geoffrey H.

    2014-06-01

    The U.S. Army is interested in developing algorithms to classify weapons systems fire based on their acoustic signatures. To support this effort, an algorithm was developed to extract features from acoustic signatures of weapons systems fire and applied to over 1300 signatures. The algorithm filtered the data using standard techniques, then estimated the amplitude and time of the first five peaks and troughs and the location of the zero crossing in the waveform. The results were stored in Excel spreadsheets and are being used to develop and test acoustic classifier algorithms.
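
    A minimal sketch of this feature set is given below, using SciPy's peak finder; treating "the zero crossing" as the first one after the initial peak is an assumption made for illustration.

```python
# Sketch: amplitude/time of the first five peaks and troughs plus the
# first zero crossing after the initial peak, from a filtered waveform x.
import numpy as np
from scipy.signal import find_peaks

def gunfire_features(x, fs, n=5):
    t = np.arange(len(x)) / fs
    peaks, _ = find_peaks(x)
    troughs, _ = find_peaks(-x)
    feats = {
        "peak_amp": x[peaks[:n]],     "peak_time": t[peaks[:n]],
        "trough_amp": x[troughs[:n]], "trough_time": t[troughs[:n]],
    }
    # First sign change of the waveform after the initial peak.
    zc = np.nonzero(np.diff(np.signbit(x[peaks[0]:])))[0]
    feats["zero_cross_time"] = t[peaks[0] + zc[0]] if zc.size else np.nan
    return feats
```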

  8. LiDAR DTMs and anthropogenic feature extraction: testing the feasibility of geomorphometric parameters in floodplains

    NASA Astrophysics Data System (ADS)

    Sofia, G.; Tarolli, P.; Dalla Fontana, G.

    2012-04-01

    High resolution topography has been proven to be reliable for feasible applications. The use of statistical operators as thresholds for geomorphic parameters, furthermore, has shown high reliability for feature extraction in mountainous environments. The goal of this research is to test whether these morphological indicators and objective thresholds are also feasible in floodplains, where features assume different characteristics and other artificial disturbances might be present. In this work, three different geomorphic parameters are tested and applied at different scales on a LiDAR DTM of a typical alluvial plain area in the North East of Italy. The box-plot is applied to identify the threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the final results. The effectiveness of the different geomorphic parameters is analyzed by comparing automatically derived features with surveyed ones. The results highlight the capability of high resolution topography, geomorphic indicators and statistical thresholds for anthropogenic feature extraction and characterization in a floodplain context.
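
    The box-plot thresholding idea amounts to using the upper Tukey fence of a geomorphic parameter's distribution as the extraction cut-off; a minimal sketch follows, with the 1.5·IQR factor and the input file assumed for illustration.

```python
# Sketch: Tukey box-plot threshold on a geomorphic parameter grid
# (e.g. curvature derived from a LiDAR DTM). "dtm_curvature.npy" is
# a hypothetical input.
import numpy as np

def boxplot_threshold(param, k=1.5):
    q1, q3 = np.nanpercentile(param, [25, 75])
    return q3 + k * (q3 - q1)      # upper whisker as extraction threshold

curvature = np.load("dtm_curvature.npy")
feature_mask = curvature > boxplot_threshold(curvature)
```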

  9. Blurred Palmprint Recognition Based on Stable-Feature Extraction Using a Vese–Osher Decomposition Model

    PubMed Central

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328

  10. Texture Feature Extraction and Analysis for Polyp Differentiation via Computed Tomography Colonography.

    PubMed

    Hu, Yifan; Liang, Zhengrong; Song, Bowen; Han, Hao; Pickhardt, Perry J; Zhu, Wei; Duan, Chaijie; Zhang, Hao; Barish, Matthew A; Lascarides, Chris E

    2016-06-01

    Image textures in computed tomography colonography (CTC) have great potential for differentiating non-neoplastic from neoplastic polyps and thus can advance the current CTC detection-only paradigm to a new level with diagnostic capability. However, image textures are frequently compromised, particularly in low-dose CT imaging. Furthermore, texture feature extraction may vary depending on the polyp spatial orientation, resulting in variable results. To address these issues, this study proposes an adaptive approach to extract and analyze the texture features for polyp differentiation. Firstly, derivative (e.g. gradient and curvature) operations are performed on the CT intensity image to amplify the textures with adequate noise control. Then the Haralick co-occurrence matrix (CM) is used to calculate texture measures along each of the 13 directions (defined by the first and second order image voxel neighbors) through the polyp volume in the intensity, gradient and curvature images. Instead of taking the mean and range of each CM measure over the 13 directions as the so-called Haralick texture features, the Karhunen-Loeve transform is performed to map the 13 directions into an orthogonal coordinate system, so that the resulting texture features are less dependent on the polyp orientation variation. These simple ideas for amplifying textures and stabilizing spatial variation demonstrated a significant impact on the differentiation task in experiments using 384 polyp datasets, of which 52 are non-neoplastic polyps and the rest are neoplastic polyps. By the merit of the area under the receiver operating characteristic curve (AUC), the innovative ideas achieved a differentiation capability of 0.8016, indicating the CTC diagnostic feasibility. PMID:26800530

  11. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose attempts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy, and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's imprecise but approximately correct orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform's three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and data set demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of

  12. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328

  13. ECG feature extraction based on the bandwidth properties of variational mode decomposition.

    PubMed

    Mert, Ahmet

    2016-04-01

    It is a difficult process to detect abnormal heart beats, known as arrhythmia, in long-term ECG recording. Thus, computer-aided diagnosis systems have become a supportive tool for helping physicians improve the diagnostic accuracy of heartbeat detection. This paper explores the bandwidth properties of the modes obtained using variational mode decomposition (VMD) to classify arrhythmia electrocardiogram (ECG) beats. VMD is an enhanced version of the empirical mode decomposition (EMD) algorithm for analyzing non-linear and non-stationary signals. It decomposes the signal into a set of band-limited oscillations called modes. ECG signals from the MIT-BIH arrhythmia database are decomposed using VMD, and the amplitude modulation bandwidth B_AM, the frequency modulation bandwidth B_FM and the total bandwidth B of the modes are used as feature vectors to detect heartbeats such as normal (N), premature ventricular contraction (V), left bundle branch block (L), right bundle branch block (R), paced beat (P) and atrial premature beat (A). Bandwidth estimations based on the instantaneous frequency (IF) and amplitude (IA) spectra of the modes indicate that the proposed VMD-based features have sufficient class discrimination capability regarding ECG beats. Moreover, the extracted features using the bandwidths (B_AM, B_FM and B) of four modes are used to evaluate the diagnostic accuracy rates of several classifiers such as the k-nearest neighbor classifier (k-NN), the decision tree (DT), the artificial neural network (ANN), the bagged decision tree (BDT), the AdaBoost decision tree (ABDT) and random sub-spaced k-NN (RSNN) for N, R, L, V, P, and A beats. The performance of the proposed VMD-based feature extraction with a BDT classifier has accuracy rates of 99.06%, 99.00%, 99.40%, 99.51%, 98.72%, 98.71%, and 99.02% for overall, N-, R-, L-, V-, P-, and A-type ECG beats, respectively. PMID:26987295
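
    The bandwidth features can be estimated from each mode's instantaneous amplitude and frequency; the sketch below uses a common Hilbert-based formulation and assumes the modes have already been obtained (e.g. with a VMD implementation such as the vmdpy package), so it is not necessarily the paper's exact estimator.

```python
# Sketch: B_AM, B_FM and total bandwidth B of one VMD mode, via the
# Hilbert transform. The discretized formulas below are a common
# textbook decomposition (B^2 = B_AM^2 + B_FM^2), assumed here.
import numpy as np
from scipy.signal import hilbert

def mode_bandwidths(mode, fs):
    z = hilbert(mode)
    ia = np.abs(z)                                   # instantaneous amplitude
    inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)  # inst. freq. (Hz)
    e = np.sum(ia[1:] ** 2)                          # mode energy (discrete)
    b_am = np.sqrt(np.sum(np.diff(ia) ** 2) * fs ** 2 / e)
    b_fm = np.sqrt(np.sum((inst_f - inst_f.mean()) ** 2 * ia[1:] ** 2) / e)
    return b_am, b_fm, np.hypot(b_am, b_fm)

# Feature vector: bandwidth triplets of all K modes stacked together, e.g.
# feats = np.hstack([mode_bandwidths(m, fs) for m in modes])
```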

  14. Uneven Reassembly of Tense, Telicity and Discourse Features in L2 Acquisition of the Chinese "shì…de" Cleft Construction by Adult English Speakers

    ERIC Educational Resources Information Center

    Mai, Ziyin; Yuan, Boping

    2016-01-01

    This article reports an empirical study investigating L2 acquisition of the Mandarin Chinese "shì…de" cleft construction by adult English-speaking learners within the framework of the Feature Reassembly Hypothesis (Lardiere, 2009). A Sentence Completion task, an interpretation task, two Acceptability Judgement tasks, and a felicity…

  15. Lamb wave feature extraction using discrete wavelet transformation and Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Ghodsi, Mojtaba; Ziaiefar, Hamidreza; Amiryan, Milad; Honarvar, Farhang; Hojjat, Yousef; Mahmoudi, Mehdi; Al-Yahmadi, Amur; Bahadur, Issam

    2016-04-01

    In this research, a new method is presented for eliciting proper features for recognizing and classifying defect types using guided ultrasonic waves. After suitable preprocessing, the suggested method extracts the base frequency band from the received signals by discrete wavelet transform and discrete Fourier transform. This frequency band can be used as a distinctive feature of the ultrasonic signals for different defects. Principal Component Analysis, by refining this feature and discarding redundant data, further improved the classification. In this study, ultrasonic testing with the A0-mode Lamb wave is used, which is appropriate for reducing the difficulties of the problem. The defects under analysis included corrosion, cracks and local thickness reduction, the last produced by electro discharge machining (EDM). The classification results obtained with an optimized Neural Network show that the presented method can differentiate the defects with 95% precision and is thus a robust and efficient method. Moreover, comparing the extracted features for corrosion and local thickness reduction, together with the results of classifying the two, clarifies that modeling the corrosion process by local thickness reduction, as was previously common, is not appropriate: the signals received from the two defects differ from each other.
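
    A compact sketch of the signal-processing chain (DWT band extraction followed by PCA) is shown below; the wavelet family, decomposition level and component count are illustrative assumptions.

```python
# Sketch: DWT approximation band as the feature, compressed with PCA.
# Assumes PyWavelets and scikit-learn; "lamb_signals.npy" is a
# hypothetical (n_signals, n_samples) array.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def dwt_band_features(X, wavelet="db4", level=4):
    # Keep the level-4 approximation coefficients as the base frequency band.
    return np.array([pywt.wavedec(x, wavelet, level=level)[0] for x in X])

X = np.load("lamb_signals.npy")
F = dwt_band_features(X)
F_reduced = PCA(n_components=10).fit_transform(F)  # compact vectors for the NN
```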

  16. Learning object location predictors with boosting and grammar-guided feature extraction

    SciTech Connect

    Eads, Damian Ryan; Rosten, Edward; Helmbold, David

    2009-01-01

    The authors present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, they introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, they learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, they perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high quality set of (x,y) locations. Lastly, they carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. They demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.

  17. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    PubMed

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual-words model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios. PMID:26080384

  18. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture three dimensional shapes of components. The density of the digital data output from these probes ranges from a few discrete points to millions of points in the point cloud. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the component inspected, to the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in the feature information, since those features relate directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are added to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post processing task. Due to measurement repeatability and cycle time requirements, automated feature extraction from measurement data is often required. The presence of non-ideal features such as high frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of the linear and arc segments automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post processing algorithms for 3D data of fine features.

  19. Multiple feature extraction and classification of electroencephalograph signal for Alzheimer's with spectrum and bispectrum

    NASA Astrophysics Data System (ADS)

    Wang, Ruofan; Wang, Jiang; Li, Shunan; Yu, Haitao; Deng, Bin; Wei, Xile

    2015-01-01

    In this paper, we have combined experimental neurophysiologic recording and statistical analysis to investigate the nonlinear characteristic and the cognitive function of the brain. Spectrum and bispectrum analyses are proposed to extract multiple effective features of electroencephalograph (EEG) signals from Alzheimer's disease (AD) patients and further applied to distinguish AD patients from the normal controls. Spectral analysis based on autoregressive Burg method is first used to quantify the power distribution of EEG series in the frequency domain. Compared to the control group, the relative power spectral density of AD group is significantly higher in the theta frequency band, while lower in the alpha frequency bands. In addition, median frequency of spectrum is decreased, and spectral entropy ratio of these two frequency bands undergoes drastic changes at the P3 electrode in the central-parietal brain region, implying that the electrophysiological behavior in AD brain is much slower and less irregular. In order to explore the nonlinear high order information, bispectral analysis which measures the complexity of phase-coupling is further applied to P3 electrode in the whole frequency band. It is demonstrated that less bispectral peaks appear and the amplitudes of peaks fall, suggesting a decrease of non-Gaussianity and nonlinearity of EEG in ADs. Notably, the application of this method to five brain regions shows higher concentration of the weighted center of bispectrum and lower complexity reflecting phase-coupling by bispectral entropy. Based on spectrum and bispectrum analyses, six efficient features are extracted and then applied to discriminate AD from the normal in the five brain regions. The classification results indicate that all these features could differentiate AD patients from the normal controls with a maximum accuracy of 90.2%. Particularly, different brain regions are sensitive to different features. Moreover, the optimal combination of
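
    The relative band-power features can be sketched as below; note that the paper estimates the spectrum with the autoregressive Burg method, for which this sketch substitutes SciPy's Welch estimator, and the band edges are common conventions rather than the paper's exact choices.

```python
# Sketch: relative theta/alpha band power of one EEG channel.
# Welch's method stands in here for the paper's AR Burg estimator.
import numpy as np
from scipy.signal import welch

def relative_band_power(eeg, fs, bands={"theta": (4, 8), "alpha": (8, 13)}):
    f, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # fs assumed integer (Hz)
    total = np.trapz(psd, f)
    out = {}
    for name, (lo, hi) in bands.items():
        sel = (f >= lo) & (f < hi)
        out[name] = np.trapz(psd[sel], f[sel]) / total
    return out
```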

  20. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.
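
    The scale-selection step can be sketched as follows; PyWavelets' generic Morlet wavelet stands in for the paper's entropy-optimized Morlet, and the scale range, number of kept scales and threshold level are illustrative assumptions.

```python
# Sketch: CWT + per-scale kurtosis + soft-thresholding for transient
# extraction. The real Morlet ("morl") replaces the optimized wavelet.
import numpy as np
import pywt
from scipy.stats import kurtosis

def extract_transients(x, fs, scales=np.arange(1, 64), n_keep=3, level=0.1):
    coefs, _ = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)
    k = kurtosis(coefs, axis=1)             # impulsiveness of each scale
    best = np.argsort(k)[-n_keep:]          # several characteristic scales
    thr = level * np.abs(coefs[best]).max()
    denoised = pywt.threshold(coefs[best], thr, mode="soft")
    return denoised.sum(axis=0)             # crude sum over selected scales
```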

  1. Feature extraction and recognition of epileptiform activity in EEG by combining PCA with ApEn.

    PubMed

    Wang, Chunmei; Zou, Junzhong; Zhang, Jian; Wang, Min; Wang, Rubin

    2010-09-01

    This paper proposes a new method for feature extraction and recognition of epileptiform activity in EEG signals. The method improves feature extraction speed of epileptiform activity without reducing recognition rate. Firstly, Principal component analysis (PCA) is applied to the original EEG for dimension reduction and to the decorrelation of epileptic EEG and normal EEG. Then discrete wavelet transform (DWT) combined with approximate entropy (ApEn) is performed on epileptic EEG and normal EEG, respectively. At last, Neyman-Pearson criteria are applied to classify epileptic EEG and normal ones. The main procedure is that the principal component of EEG after PCA is decomposed into several sub-band signals using DWT, and the ApEn algorithm is applied to the sub-band signals at different wavelet scales. A distinct difference is found between the ApEn values of epileptic and normal EEG. The method allows recognition of epileptiform activities and discriminates them from normal EEG. The algorithm performs well at epileptiform activity recognition in clinical EEG data and offers a flexible tool that is intended to be generalized to the simultaneous recognition of many waveforms in EEG. PMID:21886676
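
    For reference, the approximate entropy statistic applied to each sub-band can be written compactly as below; the embedding dimension m = 2 and tolerance r = 0.2·SD are common defaults, assumed here rather than quoted from the paper.

```python
# Sketch: approximate entropy (ApEn) of a 1-D signal (e.g. one DWT
# sub-band). O(N^2) memory; fine for short epochs.
import numpy as np

def apen(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)  # Chebyshev
        c = (d <= r).mean(axis=1)    # self-matches included, as is standard
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```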

  2. Feature extraction and recognition of epileptiform activity in EEG by combining PCA with ApEn

    PubMed Central

    Zou, Junzhong; Zhang, Jian; Wang, Min; Wang, Rubin

    2010-01-01

    This paper proposes a new method for feature extraction and recognition of epileptiform activity in EEG signals. The method improves feature extraction speed of epileptiform activity without reducing recognition rate. Firstly, Principal component analysis (PCA) is applied to the original EEG for dimension reduction and to the decorrelation of epileptic EEG and normal EEG. Then discrete wavelet transform (DWT) combined with approximate entropy (ApEn) is performed on epileptic EEG and normal EEG, respectively. At last, Neyman–Pearson criteria are applied to classify epileptic EEG and normal ones. The main procedure is that the principal component of EEG after PCA is decomposed into several sub-band signals using DWT, and the ApEn algorithm is applied to the sub-band signals at different wavelet scales. A distinct difference is found between the ApEn values of epileptic and normal EEG. The method allows recognition of epileptiform activities and discriminates them from normal EEG. The algorithm performs well at epileptiform activity recognition in clinical EEG data and offers a flexible tool that is intended to be generalized to the simultaneous recognition of many waveforms in EEG. PMID:21886676

  3. Feature extraction and object recognition in multi-modal forward looking imagery

    NASA Astrophysics Data System (ADS)

    Greenwood, G.; Blakely, S.; Schartman, D.; Calhoun, B.; Keller, J. M.; Ton, T.; Wong, D.; Soumekh, M.

    2010-04-01

    The U. S. Army Night Vision and Electronic Sensors Directorate (NVESD) recently tested an explosive-hazards detection vehicle that combines a pulsed FLGPR with a visible-spectrum color camera. Additionally, NVESD tested a human-in-the-loop multi-camera system with the same goal in mind. It contains wide field-of-view color and infrared cameras as well as zoomable narrow field-of-view versions of those modalities. Even though they are separate vehicles, having information from both systems offers great potential for information fusion. Based on previous work at the University of Missouri, we are not only able to register the UTM-based positions of the FLGPR to the color image sequences on the first system, but we can register these locations to corresponding image frames of all sensors on the human-in-the-loop platform. This paper presents our approach to first generate libraries of multi-sensor information across these platforms. Subsequently, research is performed in feature extraction and recognition algorithms based on the multi-sensor signatures. Our goal is to tailor specific algorithms to recognize and eliminate different categories of clutter and to be able to identify particular explosive hazards. We demonstrate our library creation, feature extraction and object recognition results on a large data collection at a US Army test site.

  4. Automated extraction of fine features of kinetochore microtubules and plus-ends from electron tomography volume.

    PubMed

    Jiang, Ming; Ji, Qiang; McEwen, Bruce F

    2006-07-01

    Kinetochore microtubules (KMTs) and the associated plus-ends have been areas of intense investigation in both cell biology and molecular medicine. Though electron tomography opens up new possibilities in understanding their function by imaging their high-resolution structures, the interpretation of the acquired data remains an obstacle because of the complex and cluttered cellular environment. As a result, practical segmentation of the electron tomography data has been dominated by manual operation, which is time consuming and subjective. In this paper, we propose a model-based automated approach to extracting KMTs and the associated plus-ends with a coarse-to-fine scale scheme consisting of volume preprocessing, microtubule segmentation and plus-end tracing. In volume preprocessing, we first apply an anisotropic invariant wavelet transform and a tube-enhancing filter to enhance the microtubules at coarse level for localization. This is followed with a surface-enhancing filter to accentuate the fine microtubule boundary features. The microtubule body is then segmented using a modified active shape model method. Starting from the segmented microtubule body, the plus-ends are extracted with a probabilistic tracing method improved with rectangular window based feature detection and the integration of multiple cues. Experimental results demonstrate that our automated method produces results comparable to manual segmentation but using only a fraction of the manual segmentation time. PMID:16830922

  5. A comparison of feature extraction methods for Sentinel-1 images: Gabor and Weber transforms

    NASA Astrophysics Data System (ADS)

    Stan, Mihaela; Popescu, Anca; Stoichescu, Dan Alexandru

    2015-10-01

    The purpose of this paper is to compare the performance of two feature extraction methods when applied to high resolution Synthetic Aperture Radar (SAR) images acquired with the new ESA mission SENTINEL-1 (S-1). The feature extraction methods were previously tested on high and very high resolution SAR data (imaged by TerraSAR-X) and had a good performance in discriminating between a relevant number of land cover classes (tens of classes). Based on the available spatial resolution (10x10m) of S-1 Interferometric Wide (IW) Ground Range Detected (GRD) images, the number of detectable classes is much lower. Moreover, the overall heterogeneity of the images is much lower compared to the high resolution data and the number of observable details is smaller, which favors the choice of a smaller window size for the analysis: between 10 and 50 pixels in range and azimuth. The size of the analysis window ensures consistency with previous results reported in the literature for very high resolution data (as the size on the ground is comparable, and thus the number of contributing objects in the window is similar). The performance of Gabor filters and the Weber Local Descriptor (WLD) was investigated in a twofold approach: first, the descriptors were computed directly on the IW GRD images, and second, on a sub-sampled version of the same data (in order to determine the effect of speckle correlation on the overall class detection probability).
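
    A minimal Gabor feature vector for one analysis window might look as follows; the frequencies, orientations and energy statistics are illustrative assumptions, not the descriptor configuration used in the paper.

```python
# Sketch: Gabor filter-bank features for a SAR image patch.
# Assumes scikit-image; patch is a 2-D array (e.g. 30 x 30 pixels).
import numpy as np
from skimage.filters import gabor

def gabor_features(patch, freqs=(0.1, 0.2, 0.4), n_theta=4):
    feats = []
    for f in freqs:
        for theta in np.arange(n_theta) * np.pi / n_theta:
            real, imag = gabor(patch, frequency=f, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]   # per-filter energy statistics
    return np.array(feats)
```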

  6. Feature selection for the identification of antitumor compounds in the alcohol total extracts of Curcuma longa.

    PubMed

    Jiang, Jian-Lan; Li, Zi-Dan; Zhang, Huan; Li, Yan; Zhang, Xiao-Hang; Yuan, Yi-fu; Yuan, Ying-jin

    2014-08-01

    Antitumor activity has been reported for turmeric, the dried rhizome of Curcuma longa. This study proposes a new feature selection method for the identification of the antitumor compounds in turmeric total extracts. The chemical composition of turmeric total extracts was analyzed by gas chromatography-mass spectrometry (21 ingredients) and high-performance liquid chromatography-mass spectrometry (22 ingredients), and their cytotoxicity was detected through a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay against HeLa cells. A support vector machine for regression and a generalized regression neural network were used to model the composition-activity relationship and were later combined with the mean impact value to identify the antitumor compounds. The results showed that six volatile constituents (three terpenes and three ketones) and seven nonvolatile constituents (five curcuminoids and two unknown ingredients) with high absolute mean impact values exhibited a significant correlation with the cytotoxicity against HeLa cells. With the exception of the two unknown ingredients, the 11 identified constituents have been reported to exhibit cytotoxicity. This finding indicates that the feature selection method may be a supplementary tool for the identification of active compounds from herbs. PMID:25144675

  7. A Feature Extraction Method Based on Information Theory for Fault Diagnosis of Reciprocating Machinery

    PubMed Central

    Wang, Huaqing; Chen, Peng

    2009-01-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to. PMID:22574021

  8. Fault feature extraction of rolling bearing based on an improved cyclical spectrum density method

    NASA Astrophysics Data System (ADS)

    Li, Min; Yang, Jianhong; Wang, Xiaojing

    2015-11-01

    The traditional cyclical spectrum density (CSD) method is widely used to analyze the fault signals of rolling bearings. All modulation frequencies are demodulated in the cyclic frequency spectrum; consequently, recognizing the bearing fault type is difficult. Therefore, a new CSD method based on kurtosis (CSDK) is proposed. The kurtosis value of each cyclic frequency is used to measure the modulation capability of that cyclic frequency: when the kurtosis value is large, the modulation capability is strong. Thus, the kurtosis value is used as the weight coefficient to accumulate all cyclic frequencies to extract fault features. Compared with the traditional method, CSDK can reduce the interference of harmonic frequency in the fault frequency, which makes fault characteristics distinct from background noise. To validate the effectiveness of the method, experiments are performed on a simulated signal, the fault signal of a bearing outer race in a test bed, and a signal gathered from the bearing of a blast furnace belt cylinder. Experimental results show that the CSDK is better than the resonance demodulation method and the CSD in extracting fault features and recognizing degradation trends. The proposed method provides a new solution to fault diagnosis in bearings.

  9. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction, in order to achieve high precision autonomous navigation. Firstly, combining the stable character of Earth's ultraviolet radiance with atmospheric radiation transmission modeling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract Earth ultraviolet limb features accurately. The Earth's centroid locations in simulated images are then estimated via least squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is applied to realize the autonomous navigation. Experimental results indicate the proposed method can achieve sub-pixel Earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
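
    The least squares estimation of the centroid from partial limb edges can be sketched with the standard algebraic (Kasa) circle fit; the function below is a generic formulation, assumed rather than taken from the paper.

```python
# Sketch: algebraic least squares circle fit to limb-edge pixels.
# Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c).
import numpy as np

def fit_circle(xs, ys):
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs ** 2 + ys ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0          # estimated disk centroid
    r = np.sqrt(cx ** 2 + cy ** 2 - c)   # estimated disk radius
    return cx, cy, r
```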

  10. A Feature Extraction Method for Vibration Signal of Bearing Incipient Degradation

    NASA Astrophysics Data System (ADS)

    Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli; Guo, Liang; Li, Dan; Wen, Juan

    2016-06-01

    Detection of incipient degradation demands extracting sensitive features accurately when the signal-to-noise ratio (SNR) is very poor, as it is in most industrial environments. Vibration signals of rolling bearings are widely used for bearing fault diagnosis. In this paper, we propose a feature extraction method that combines Blind Source Separation (BSS) and Spectral Kurtosis (SK) to separate independent noise sources. Normal and incipient fault signals from vibration tests of rolling bearings are processed. We studied 16 groups of vibration signals of incipient degradation (which all display an increase in kurtosis) after they were processed by a BSS filter. In contrast to conventional kurtosis, theoretical studies of SK show that SK levels vary with frequency, and experimental studies show that the SK trends of measured bearing vibration signals vary with the amount and level of fault-induced impulses in both the vibration and noise signals. It is found that the peak values of SK increase when vibration signals of incipient faults are processed by a BSS filter. This pre-processing by a BSS filter makes SK more sensitive to impulses caused by performance degradation of bearings.
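
    For orientation, a common STFT-based spectral kurtosis estimator is sketched below; the paper does not specify its estimator here, so this is an assumed stand-in.

```python
# Sketch: spectral kurtosis from the STFT. SK is ~0 for stationary
# Gaussian noise and grows at frequencies carrying impulsive content.
import numpy as np
from scipy.signal import stft

def spectral_kurtosis(x, fs, nperseg=256):
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    p2 = np.mean(np.abs(Z) ** 2, axis=1)
    p4 = np.mean(np.abs(Z) ** 4, axis=1)
    return f, p4 / p2 ** 2 - 2
```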

  11. Urban Area Extent Extraction in Spaceborne HR and VHR Data Using Multi-Resolution Features

    PubMed Central

    Iannelli, Gianni Cristian; Lisini, Gianni; Dell'Acqua, Fabio; Feitosa, Raul Queiroz; da Costa, Gilson Alexandre Ostwald Pedro; Gamba, Paolo

    2014-01-01

    Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an “urban area” is. The models of urban areas listed in technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, “urban area” extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered as an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO) data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR) data prove the usefulness and flexibility of the framework. PMID:25271564

  12. A feature extraction method based on information theory for fault diagnosis of reciprocating machinery.

    PubMed

    Wang, Huaqing; Chen, Peng

    2009-01-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to. PMID:22574021

  13. Urban area extent extraction in spaceborne HR and VHR data using multi-resolution features.

    PubMed

    Iannelli, Gianni Cristian; Lisini, Gianni; Dell'Acqua, Fabio; Feitosa, Raul Queiroz; da Costa, Gilson Alexandre Ostwald Pedro; Gamba, Paolo

    2014-01-01

    Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an "urban area" is. The models of urban areas listed in technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, "urban area" extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered as an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO) data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR) data prove the usefulness and flexibility of the framework. PMID:25271564

  14. A Joint Time-Frequency and Matrix Decomposition Feature Extraction Methodology for Pathological Voice Classification

    NASA Astrophysics Data System (ADS)

    Ghoraani, Behnaz; Krishnan, Sridhar

    2009-12-01

    The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is the extraction of meaningful and unique features using an adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct the adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech, and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal as normal or pathological. The proposed method is applied to the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database, which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
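
    The TFD-plus-NMF step can be sketched as follows; a plain magnitude spectrogram stands in for the paper's adaptive TFD, and the component count is an illustrative assumption.

```python
# Sketch: NMF decomposition of a speech time-frequency distribution.
# Assumes SciPy and scikit-learn; "voice.npy" is a hypothetical signal.
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import NMF

x = np.load("voice.npy")
fs = 25000
f, t, S = spectrogram(x, fs=fs, nperseg=512)   # nonnegative TFD stand-in

nmf = NMF(n_components=5, init="nndsvd", max_iter=500)
W = nmf.fit_transform(S)     # spectral bases      (n_freqs x n_components)
H = nmf.components_          # time activations    (n_components x n_times)
# Statistics of W and H then feed the normal/pathological classifier.
```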

  15. Memory-efficient architecture for hysteresis thresholding and object feature extraction.

    PubMed

    Najjar, Mayssaa A; Karlapudi, Swetha; Bayoumi, Magdy A

    2011-12-01

    Hysteresis thresholding is a method that offers enhanced object detection. Due to its recursive nature, it is time consuming and requires considerable memory resources, which makes it avoided in streaming processors with limited memory. We propose two versions of a memory-efficient and fast architecture for hysteresis thresholding: a high-accuracy pixel-based architecture and a faster block-based one, at the expense of some loss in accuracy. Both designs couple thresholding with connected component analysis and feature extraction in a single pass over the image. Unlike queue-based techniques, the proposed scheme treats candidate pixels almost as foreground until objects complete; a decision is then made to keep or discard these pixels. This allows processing on the fly, thus avoiding additional passes for handling candidate pixels and extracting object features. Moreover, labels are reused, so only one row of compact labels is buffered. Both architectures are implemented in MATLAB and VHDL. Simulation results on a set of real and synthetic images show that the execution speed attains an average increase of up to 24× for the pixel-based design and 52× for the block-based design when compared to state-of-the-art techniques. The memory requirements are also drastically reduced, by about 99%. PMID:21521668
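
    In software, the pipeline that the architecture accelerates corresponds to hysteresis thresholding followed by connected component analysis and per-object feature extraction, as sketched below with scikit-image; the thresholds and input file are assumptions.

```python
# Sketch: hysteresis thresholding + connected components + features.
import numpy as np
from skimage.filters import apply_hysteresis_threshold
from skimage.measure import label, regionprops

img = np.load("frame.npy")            # hypothetical grayscale frame in [0, 1]
mask = apply_hysteresis_threshold(img, low=0.10, high=0.35)
labels = label(mask)                  # connected component analysis
feats = [(r.label, r.area, r.bbox, r.centroid) for r in regionprops(labels)]
```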

  16. Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data

    SciTech Connect

    Danny L. Anderson

    2012-05-01

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square-error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve the efficiency and accuracy of hydrographic feature extractions. The direct delineation method developed herein, termed “mDn”, is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, uses the LiDAR data points within each sector to determine an average slope, and selects the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matches the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
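
    The sector-based flow-direction rule at the heart of mDn can be sketched as follows; the search radius, the eight sectors and the slope definition are illustrative assumptions, not the thesis's exact parameters.

```python
# Sketch: one mDn step - average slope per sector around the current
# point, flow toward the steepest downward sector.
import numpy as np

def mdn_direction(pts, cur, radius=5.0, n_sectors=8):
    """pts: (N, 3) LiDAR points; cur: (3,) current (x, y, z) position."""
    d = pts[:, :2] - cur[:2]
    r = np.hypot(d[:, 0], d[:, 1])
    near = (r > 0) & (r < radius)
    ang = np.arctan2(d[near, 1], d[near, 0])
    sector = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    slope = (pts[near, 2] - cur[2]) / r[near]          # rise over run
    mean_slope = np.array([slope[sector == s].mean() if np.any(sector == s)
                           else np.inf for s in range(n_sectors)])
    best = int(np.argmin(mean_slope))                  # steepest downward
    return (best + 0.5) * 2 * np.pi / n_sectors - np.pi   # flow azimuth (rad)
```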

  17. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    PubMed

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend itself to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis. PMID:24391704

  18. Affective Video Retrieval: Violence Detection in Hollywood Movies by Large-Scale Segmental Feature Extraction

    PubMed Central

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend itself to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology “out of the lab” to real-world, diverse data. In this contribution, we address the problem of finding “disturbing” scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis. PMID:24391704

  19. Feature Extraction using Wavelet Transform for Multi-class Fault Detection of Induction Motor

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, P.; Konar, P.

    2014-01-01

    In this paper, the theoretical aspects and feature extraction capabilities of the continuous wavelet transform (CWT) and discrete wavelet transform (DWT) are experimentally verified from the point of view of fault diagnosis of induction motors. The vertical frame vibration signal is analyzed to develop a wavelet-based multi-class fault detection scheme. The redundant, high-dimensional information of the CWT makes it computationally inefficient. Using a greedy-search feature selection technique (Greedy-CWT), the redundancy is eliminated to a great extent, and the approach is found to be much superior to the widely used DWT technique, even in the presence of a high level of noise. The results are verified using MLP, SVM, and RBF classifiers. The feature selection technique has enabled determination of the most relevant CWT scales and corresponding coefficients. Thus, the inherent limitations of the CWT, such as proper selection of scales and redundant information, are eliminated. In the present investigation `db8' is found to be the best mother wavelet, due to its long period and higher number of vanishing moments, for detection of motor faults.
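
    A hedged sketch of the two wavelet feature sets compared above, using PyWavelets; the scale range, decomposition level, and per-band energy features are assumptions, not the authors' exact configuration.

        import numpy as np
        import pywt

        def cwt_features(vibration, scales=np.arange(1, 65), wavelet="morl"):
            coeffs, _ = pywt.cwt(vibration, scales, wavelet)
            # One energy value per scale; a greedy search would then keep
            # only the most discriminative scales, as in Greedy-CWT.
            return np.sum(coeffs ** 2, axis=1)

        def dwt_features(vibration, wavelet="db8", level=5):
            coeffs = pywt.wavedec(vibration, wavelet, level=level)
            # Sub-band energies of the approximation and detail coefficients.
            return np.array([np.sum(c ** 2) for c in coeffs])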

  20. Cerebral Glioma Grading Using Bayesian Network with Features Extracted from Multiple Modalities of Magnetic Resonance Imaging

    PubMed Central

    Wang, Huiting; Liu, Renyuan; Zhang, Xin; Li, Ming; Yang, Yongbo; Yan, Jing; Niu, Fengnan; Tian, Chuanshuai; Wang, Kun; Yu, Haiping; Chen, Weibo; Wan, Suiren; Sun, Yu; Zhang, Bing

    2016-01-01

    Many modalities of magnetic resonance imaging (MRI) have been confirmed to be of great diagnostic value in glioma grading. Contrast-enhanced T1-weighted imaging allows recognition of blood-brain barrier breakdown. Perfusion weighted imaging and MR spectroscopic imaging enable the quantitative measurement of perfusion parameters and metabolic alterations, respectively. These modalities can potentially improve the grading process in glioma if combined properly. In this study, the Bayesian Network, a powerful and flexible method for probabilistic analysis under uncertainty, is used to combine features extracted from contrast-enhanced T1-weighted imaging, perfusion weighted imaging, and MR spectroscopic imaging. The networks were constructed using the K2 algorithm along with manual determination, and the distribution parameters were learned using maximum likelihood estimation. The grading performance was evaluated in a leave-one-out analysis, achieving an overall grading accuracy of 92.86% and an area under the curve of 0.9577 in the receiver operating characteristic analysis, given all available features observed in the 56 patients. The results and discussion show that the Bayesian Network is promising for combining features from multiple MRI modalities to improve grading performance. PMID:27077923

  1. A threshold method for coastal line feature extraction from optical satellite imagery

    NASA Astrophysics Data System (ADS)

    Zoran, L. F. V.; Golovanov, C. Ionescu; Zoran, M. A.

    2007-10-01

    The world's coastal zones are under increasing stress due to the development of industry, trade and commerce, and tourism, the resulting human population growth and migration, and deteriorating water quality. Satellite imagery is used for mapping coastal zone ecosystems as well as for assessing the extent and alteration of land cover/land use in coastal ecosystems. Besides anthropogenic activities, episodic events such as storms and floods induce certain changes or accelerate the process of change, so in order to conserve coastal ecosystems and habitats there is an urgent need to define the coastline and its spatio-temporal changes. Coastlines have never been stable in terms of their long-term and short-term positions. The coastline is a simple but important type of feature in remotely sensed images, and many valid approaches have been proposed for identifying it automatically, for which accuracy and speed are the most important criteria. The aim of this paper is to develop a threshold-based morphological approach for coastline feature extraction from optical remote sensing satellite images (Landsat 5 TM, Landsat 7 ETM+, and IKONOS) and to apply it to the Romanian Black Sea coastal zone over a period of 20 years (1985-2005).
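
    A minimal sketch of a threshold-based coastline extraction in the spirit of the paper, assuming a near-infrared band in which water appears dark; Otsu's method here stands in for the authors' threshold selection, followed by morphological clean-up.

        from skimage.filters import threshold_otsu
        from skimage.morphology import binary_closing, disk
        from skimage.segmentation import find_boundaries

        def extract_coastline(nir_band):
            water = nir_band < threshold_otsu(nir_band)   # water/land split
            water = binary_closing(water, disk(3))        # remove speckle
            return find_boundaries(water, mode="inner")   # coastline pixels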

  2. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    NASA Astrophysics Data System (ADS)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High-density silicon carbide materials are commonly used as the ceramic element of hard armour inserts in traditional body armour systems to reduce their weight while providing improved hardness, strength, and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive. In addition, multiple defects are often misinterpreted as single defects in X-ray images. To address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate, and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA), and their classification performance is compared. This investigation establishes experimentally that PCA can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection, in comparison with the X-ray technique.
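
    The described pipeline can be sketched with PyWavelets and scikit-learn; the wavelet choice, decomposition level, and network size are illustrative assumptions.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline

        def subband_energies(signal, wavelet="db4", level=4):
            # Energy of each frequency band of the ultrasonic A-scan.
            return np.array([np.sum(c ** 2)
                             for c in pywt.wavedec(signal, wavelet, level=level)])

        def train_defect_classifier(raw_signals, labels):
            # PCA selects the dominant feature directions before the ANN.
            X = np.array([subband_energies(s) for s in raw_signals])
            clf = make_pipeline(PCA(n_components=3),
                                MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000))
            return clf.fit(X, labels)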

  3. A Distinguishing Arterial Pulse Waves Approach by Using Image Processing and Feature Extraction Technique.

    PubMed

    Chen, Hsing-Chung; Kuo, Shyi-Shiun; Sun, Shen-Ching; Chang, Chia-Hui

    2016-10-01

    Traditional Chinese Medicine (TCM) is based on five main types of diagnostic methods: inspection, auscultation, olfaction, inquiry, and palpation. The most important is palpation, also called pulse diagnosis, in which the doctor's fingers measure the wrist arterial pulse to assess the patient's state of health. In this paper, pulse diagnosis is carried out using a specialized pulse-measuring instrument to classify a subject's pulse type. The measured pulse waves (MPWs) were segmented into the arterial pulse wave curve (APWC) by an image processing method. The slopes and periods among four specific points on the APWC were taken as the pulse features. Three algorithms are proposed in this paper, which extract these features from the APWCs and compare the differences between each of them and the average feature matrix individually. The results show that the method proposed in this study is superior to and more accurate than previous approaches. The proposed method could save doctors a significant amount of time, increase accuracy, and decrease data volume. PMID:27562483

  4. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 1: time domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    The paper presents an application of the gamma-absorption method to study gas-liquid two-phase flow in a horizontal pipeline. In tests on a laboratory installation, two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals were used. The experimental set-up allows recording of stochastic signals, which describe the instantaneous content of the stream in a particular cross-section of the flow mixture. Analysis of these signals by statistical methods allows determination of the mean velocity of the gas phase, while selected features of the signals provided by the absorption set can be applied to recognition of the flow structure. In this work, three structures of air-water flow were considered: plug, bubble, and transitional plug-bubble. The recorded raw signals were analyzed in the time domain and several features were extracted. It was found that the mean, standard deviation, root mean square (RMS), variance, and 4th moment are the most useful features for recognizing the structure of the flow.
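
    The five time-domain features named above are straightforward to compute with NumPy/SciPy for one recorded gamma-absorption signal x:

        import numpy as np
        from scipy.stats import moment

        def time_domain_features(x):
            return {
                "mean": np.mean(x),
                "std": np.std(x),
                "rms": np.sqrt(np.mean(x ** 2)),
                "variance": np.var(x),
                "moment4": moment(x, 4),   # 4th central moment
            }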

  5. Characterization of lunar soils through spectral features extraction in the NIR

    NASA Astrophysics Data System (ADS)

    Mall, U.; Wöhler, C.; Grumpe, A.; Bugiolacchi, R.; Bhatt, M.

    2014-11-01

    Recently launched hyper-spectral instruments with ever-increasing data return capabilities deliver the remote-sensing data needed to characterize planetary soils with increased precision, generating the need to classify the returned data efficiently for further specialized analysis and detection of features of interest. This paper investigates how lunar near-infrared spectra acquired by SIR-2 on Chandrayaan-1 can be classified into distinctive groups of similar spectra with automated feature extraction algorithms. As common spectral parameters for the SIR-2 spectra, two absorption features near 1300 nm and 2000 nm and their characteristics provide 10 variables, which are used in two different unsupervised clustering methods: the mean-shift clustering algorithm and the recently developed graph-cut-based clustering algorithm of Müller et al. (2012). The spectra used in this paper were taken on the lunar near side, centered on the Imbrium region of the Moon. More than 100,000 spectra were analyzed.

  6. The research of edge extraction and target recognition based on inherent feature of objects

    NASA Astrophysics Data System (ADS)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we develop a new 3D target recognition method based on the inherent features of objects, taking a cuboid as the model. On the basis of analyzing the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique is utilized to recognize and segment the target. The Hough transform is then used to extract and match the model's main edges, and finally the target edges are reconstructed by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field are summarized; with these, the aimless computations and searches in Hough transform processing can be greatly reduced and the efficiency improved. Secondly, since prior knowledge of the cuboid contour's geometry is available, the intersections of the extracted component edges are computed, and the geometry of candidate edge matches is assessed based on these intersections rather than on the extracted edges themselves; the outlines are therefore enhanced and noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGVs, robots
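
    The Hough-based edge extraction step can be sketched with OpenCV; the Canny and Hough thresholds below are illustrative assumptions, not values from the paper.

        import numpy as np
        import cv2

        def extract_straight_edges(gray_image):
            # Edge map first, then the probabilistic Hough transform to
            # recover the cuboid's straight edge segments.
            edges = cv2.Canny(gray_image, 50, 150)
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                    threshold=60, minLineLength=30, maxLineGap=5)
            # Each segment is (x1, y1, x2, y2) in image coordinates.
            return [] if lines is None else [tuple(l[0]) for l in lines]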

  7. Test Plan for Solvent Extraction Data Acquisition to Support Modeling Efforts

    SciTech Connect

    Veronica Rutledge; Kristi Christensen; Troy Garn; Jack Law

    2010-12-01

    This testing will support NEAMS SafeSep modeling efforts related to droplet simulation in liquid-liquid extraction equipment. Physical characteristics will be determined for the fluids used in the experiment, including viscosity, density, surface tension, distribution coefficients, and diffusion coefficients. Experiments will then be carried out to provide data for comparison to the simulation’s calculation of mass transfer coefficients. Experiments will be conducted with solutions used in the extraction section of the TRansUranic EXtraction (TRUEX) process. The TRUEX process was chosen because it is a solvent extraction system currently proposed for the separation of actinides and lanthanides from used nuclear fuel, it is diffusion limited, and testing can be performed using non-radioactive lanthanides to evaluate mass transfer. The extraction section involves transfer of one or more lanthanide species from an aqueous solution to an organic solvent. Single droplets rising by buoyancy will be studied first. Droplet size and the number of species transferred will be varied independently to provide mass transfer coefficients as a function of each variable. An apparatus has been designed specifically for these experiments. In order to obtain more accurate measurements of droplet size, contact time, time of droplet formation, and possibly droplet breakup and coalescence, a high-speed camera will be utilized. Other potential experiments include examining the effects of jetted droplets and shear flow on the mass transfer coefficients.

  8. Extraction of Airport Features from High Resolution Satellite Imagery for Design and Risk Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, Chris; Qiu, You-Liang; Jensen, John R.; Schill, Steven R.; Floyd, Mike

    2001-01-01

    The LPA Group, consisting of 17 offices located throughout the eastern and central United States, is an architectural, engineering, and planning firm specializing in the development of airports, roads, and bridges. The primary focus of this ARC project is to assist their aviation specialists who work in the areas of airport planning, airfield design, landside design, terminal building planning and design, and various other construction services. The LPA Group wanted to test the utility of high-resolution commercial satellite imagery for extracting airport elevation features in the glide path areas surrounding the Columbia Metropolitan Airport. By incorporating remote sensing techniques into their airport planning process, LPA wanted to investigate whether it is possible to save time and money while achieving accuracy equivalent to traditional planning methods. The Affiliate Research Center (ARC) at the University of South Carolina investigated the use of remotely sensed imagery for the extraction of feature elevations in the glide path zone. A stereo pair of IKONOS panchromatic satellite images, with a spatial resolution of 1 x 1 m, was used to determine elevations of aviation obstructions such as buildings, trees, towers, and fence lines. A validation dataset was provided by the LPA Group to assess the accuracy of the measurements derived from the IKONOS imagery. The initial goal of this project was to test the utility of IKONOS imagery in feature extraction using ERDAS Stereo Analyst. This goal was never achieved due to problems with ERDAS software support of the IKONOS sensor model and the unavailability of imperative sensor model information from Space Imaging. The obstacles encountered in this project pertaining to ERDAS Stereo Analyst and IKONOS imagery are reviewed in more detail later in this report. As a result of the technical difficulties with Stereo Analyst, ERDAS OrthoBASE was used to derive aviation

  9. Extraction of Surface-Related Features in a Recurrent Model of V1-V2 Interactions

    PubMed Central

    Weidenbacher, Ulrich; Neumann, Heiko

    2009-01-01

    Background: Humans can effortlessly segment surfaces and objects from two-dimensional (2D) images that are projections of the 3D world. The projection from 3D to 2D leads partially to occlusions of surfaces depending on their position in depth and on viewpoint. One way for the human visual system to infer monocular depth cues could be to extract and interpret occlusions. It has been suggested that the perception of contour junctions, in particular T-junctions, may be used as a cue for occlusion of opaque surfaces. Furthermore, X-junctions could be used to signal occlusion of transparent surfaces. Methodology/Principal Findings: In this contribution, we propose a neural model that suggests how surface-related cues for occlusion can be extracted from a 2D luminance image. The approach is based on feedforward and feedback mechanisms found in visual cortical areas V1 and V2. In a first step, contours are completed over time by generating groupings of like-oriented contrasts. A few iterations of feedforward and feedback processing lead to a stable representation of completed contours and at the same time to a suppression of image noise. In a second step, contour junctions are localized and read out from the distributed representation of boundary groupings. Moreover, surface-related junctions are made explicit such that they interact so as to generate surface segmentations in static images. In addition, we compare our extracted junction signals with a standard computer vision approach for junction detection to demonstrate that our approach outperforms simple feedforward computation-based approaches. Conclusions/Significance: A model is proposed that uses feedforward and feedback mechanisms to combine contextually relevant features in order to generate consistent boundary groupings of surfaces. Perceptually important junction configurations are robustly extracted from neural representations to signal cues for occlusion and transparency. Unlike previous proposals

  10. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing the performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of the breast area, five different sub-regions were acquired from each mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification, was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from the whole breast area only; (2) a classifier using bilateral asymmetry features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  11. Texture feature extraction and analysis for polyp differentiation via computed tomography colonography

    PubMed Central

    Hu, Yifan; Song, Bowen; Han, Hao; Pickhardt, Perry J.; Zhu, Wei; Duan, Chaijie; Zhang, Hao; Barish, Matthew A.; Lascarides, Chris E.

    2016-01-01

    Image textures in computed tomography colonography (CTC) have great potential for differentiating non-neoplastic from neoplastic polyps and thus can advance the current CTC detection-only paradigm to a new level toward optimal polyp management to prevent deadly colorectal cancer. However, image textures are frequently compromised by noise smoothing and other error-correction operations in most CT image reconstructions. Furthermore, because of polyp orientation variation in patient space, texture features extracted in that space can vary accordingly, yielding variable results. To address these issues, this study proposes an adaptive approach to extract and analyze the texture features for polyp differentiation. Firstly, derivative operations are performed on the CT intensity image to amplify the textures, e.g. in the 1st order derivative (gradient) and 2nd order derivative (curvature) images, with adequate noise control. Then the Haralick co-occurrence matrix (CM) is used to calculate texture measures along each of the 13 directions (defined by the 1st and 2nd order image voxel neighbors) through the polyp volume in the intensity, gradient, and curvature images. Instead of taking the mean and range of each CM measure over the 13 directions as the so-called Haralick texture features, the Karhunen-Loeve transform is performed to map the 13 directions into an orthogonal coordinate system, where all the CM measures are projected onto the new coordinates so that the resulting texture features are less dependent on variation in polyp spatial orientation. While the ideas for amplifying textures and stabilizing spatial variation are simple, their impact is significant for the task of differentiating non-neoplastic from neoplastic polyps, as demonstrated by experiments using 384 polyp datasets, of which 52 are non-neoplastic polyps and the rest are neoplastic polyps. By the merit of area under the curve of receiver operating characteristic, the innovative ideas

  12. A versatile multivariate image analysis pipeline reveals features of Xenopus extract spindles.

    PubMed

    Grenfell, Andrew W; Strzelecka, Magdalena; Crowder, Marina E; Helmke, Kara J; Schlaitz, Anne-Lore; Heald, Rebecca

    2016-04-11

    Imaging datasets are rich in quantitative information. However, few cell biologists possess the tools necessary to analyze them. Here, we present a large dataset of Xenopus extract spindle images together with an analysis pipeline designed to assess spindle morphology across a range of experimental conditions. Our analysis of different spindle types illustrates how kinetochore microtubules amplify spindle microtubule density. Extract mixing experiments reveal that some spindle features titrate, while others undergo switch-like transitions, and multivariate analysis shows the pleiotropic morphological effects of modulating the levels of TPX2, a key spindle assembly factor. We also apply our pipeline to analyze nuclear morphology in human cell culture, showing the general utility of the segmentation approach. Our analyses provide new insight into the diversity of spindle types and suggest areas for future study. The approaches outlined can be applied by other researchers studying spindle morphology and adapted with minimal modification to other experimental systems. PMID:27044897

  13. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving continues to be an important research topic. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The present study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The investigated algorithms are based on the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Gabor wavelets, which are accepted as spatial methods. In the experiments, a database of hundreds of medical images, such as brain, lung, sinus, and bone, was built. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory. However, the Gabor wavelet was observed to be the most effective and accurate method. PMID:25227014
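
    A Gabor-wavelet texture signature of the kind compared above can be sketched with scikit-image; the frequency/orientation grid and the per-filter statistics are assumptions for illustration.

        import numpy as np
        from skimage.filters import gabor

        def gabor_signature(image, frequencies=(0.1, 0.2, 0.3), n_orient=4):
            feats = []
            for f in frequencies:
                for k in range(n_orient):
                    real, imag = gabor(image, frequency=f,
                                       theta=k * np.pi / n_orient)
                    mag = np.hypot(real, imag)
                    feats += [mag.mean(), mag.std()]   # per-filter statistics
            return np.array(feats)

        # Retrieval then ranks database images by distance between
        # signatures, e.g. np.linalg.norm(query_sig - db_sig).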

  14. An Interactive Technique for Cartographic Feature Extraction from Aerial and Satellite Image Sensors

    PubMed Central

    Kicherer, Stefan; Malpica, Jose A.; Alonso, Maria C.

    2008-01-01

    In this paper, an interactive technique for extracting cartographic features from aerial and satellite images is presented. The method is essentially an interactive method of image region segmentation based on pixel grey level and texture information. The underlying segmentation method is seeded region growing. The criterion for growing regions is based on both texture and grey level, where texture is quantified using co-occurrence matrices. The Kullback distance is utilised with co-occurrence matrices in order to describe the image texture, and then the Theory of Evidence is applied to merge the information coming from the texture and grey-level images of the RGB bands. Several results from aerial and satellite images that support the technique are presented

  15. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes.

    PubMed

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In current molecular biology, it is becoming increasingly important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix, and the Lp-norm is taken as the error function to improve robustness to outliers in the gene expression data. The results on simulated data show that our method obtains higher identification accuracies than the competing methods. Numerous experiments on real gene expression data sets demonstrate that our method identifies more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006

  16. Multiresolution-fractal feature extraction and tumor detection: analytical modeling and implementation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Parra, Carlos

    2003-11-01

    We propose formal analytical models for identification of tumors in medical images based on the hypothesis that tumors have a fractal (self-similar) growth behavior. Therefore, the images of these tumors may be characterized as fractional Brownian motion (fBm) processes with a fractal dimension (D) that is distinctly different from that of the image of the surrounding tissue. In order to extract the desired features that delineate different tissues in an MR image, we study multiresolution signal decomposition and its relation to fBm. The fBm model has proven successful in modeling a variety of physical phenomena and non-stationary processes, such as medical images, that share essential properties such as self-similarity, scale invariance, and fractal dimension (D). We have developed a theoretical framework that combines wavelet analysis with multiresolution fBm to compute D.
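
    As a hedged illustration of the wavelet-fBm connection, assume a 1-D fBm-like profile: the variance of its wavelet detail coefficients grows as 2^(l(2H+1)) with decomposition level l, where H is the Hurst exponent, and D = 2 - H for a 1-D profile (D = 3 - H for an image surface). A minimal PyWavelets sketch of estimating D this way:

        import numpy as np
        import pywt

        def fractal_dimension_1d(x, wavelet="db4", max_level=6):
            # wavedec returns [cA_n, cD_n, ..., cD_1]; keep the details,
            # which correspond to levels max_level down to 1.
            details = pywt.wavedec(x, wavelet, level=max_level)[1:]
            levels = np.arange(max_level, 0, -1)
            log_var = np.log2([np.var(d) for d in details])
            slope, _ = np.polyfit(levels, log_var, 1)   # slope = 2H + 1
            H = (slope - 1) / 2
            return 2.0 - H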

  17. Feature extraction and classification of sEMG signals applied to a virtual hand prosthesis.

    PubMed

    Tello, Richard M G; Bastos-Filho, Teodiano; Frizera-Neto, Anselmo; Arjunan, Sridhar; Kumar, Dinesh K

    2013-01-01

    This paper presents the classification of motor tasks using surface electromyography (sEMG) to control a virtual prosthetic hand for rehabilitation of amputees. Two types of classifiers are compared: k-Nearest Neighbor (k-NN) and Bayesian (Discriminant Analysis). Motor tasks are divided into four correlated groups. The volunteers were people without amputation, and several analyses of each signal were conducted. The online simulations use the sliding-window technique, and for feature extraction the RMS (Root Mean Square), VAR (Variance), and WL (Waveform Length) values were used. A model is proposed for reclassification using cross-validation in order to validate the classification, and a visualization in Sammon maps is provided in order to observe the separation of the classes for each set of motor tasks. Finally, the proposed method can be implemented in a computer interface providing visual feedback through a virtual prosthetic hand developed in Visual C++ and MATLAB. PMID:24110086
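
    The three window features used above (RMS, VAR, WL) are easily computed over sliding windows of a raw sEMG channel; the window and step sizes below are assumptions, not the paper's settings.

        import numpy as np

        def semg_features(emg, win=256, step=64):
            feats = []
            for start in range(0, len(emg) - win + 1, step):
                w = emg[start:start + win]
                rms = np.sqrt(np.mean(w ** 2))      # Root Mean Square
                var = np.var(w)                      # Variance
                wl = np.sum(np.abs(np.diff(w)))      # Waveform Length
                feats.append((rms, var, wl))
            return np.array(feats)   # one (RMS, VAR, WL) row per window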

  18. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli.

    PubMed

    Kumar, Neeraj; Mutha, Pratik K

    2016-03-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function, perception, are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the "natural" sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  19. Spatiotemporal contrast enhancement and feature extraction in the bat auditory midbrain and cortex.

    PubMed

    Hoffmann, Susanne; Warmbold, Alexander; Wiegrebe, Lutz; Firzlaff, Uwe

    2013-09-01

    Navigating on the wing in complete darkness is a challenging task for echolocating bats. It requires the detailed analysis of spatial and temporal information gained through echolocation. Thus, neural encoding of spatiotemporal echo information is a major function in the bat auditory system. In this study we presented echoes in virtual acoustic space and used a reverse-correlation technique to investigate the spatiotemporal response characteristics of units in the inferior colliculus (IC) and the auditory cortex (AC) of the bat Phyllostomus discolor. Spatiotemporal response maps (STRMs) of IC units revealed an organization of suppressive and excitatory regions that provided pronounced contrast enhancement along both the time and azimuth axes. Most IC units showed either spatially centralized short-latency excitation spatiotemporally embedded in strong suppression, or the opposite, i.e., central short-latency suppression embedded in excitation. This complementary arrangement of excitation and suppression was very rarely seen in AC units. In contrast, STRMs in the AC revealed much less suppression, sharper spatiotemporal tuning, and often a special spatiotemporal arrangement of two excitatory regions. Temporal separation of excitatory regions ranged up to 25 ms and was thus in the range of temporal delays occurring in target ranging in bats in a natural situation. Our data indicate that spatiotemporal processing of echo information in the bat auditory midbrain and cortex serves very different purposes: Whereas the spatiotemporal contrast enhancement provided by the IC contributes to echo-feature extraction, the AC reflects the result of this processing in terms of a high selectivity and task-oriented recombination of the extracted features. PMID:23785132

  20. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

    PubMed Central

    Kumar, Neeraj

    2016-01-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  1. A rooftop extraction method using color feature, height map information and road information

    NASA Astrophysics Data System (ADS)

    Xiang, Yongzhou; Sun, Ying; Li, Chao

    2012-11-01

    This paper presents a new method for rooftop extraction that integrates color features, a height map, and road information in a level-set-based segmentation framework. The proposed method consists of two steps: rooftop detection and rooftop segmentation. The first step requires the user to provide a few example rooftops, from which the color distribution of rooftop pixels is estimated. For better robustness, we obtain superpixels of the input satellite image and then classify each superpixel as rooftop or non-rooftop based on its color features. Using the height map, we can remove detected rooftop candidates with small height values. Level-set-based segmentation of each detected rooftop is then performed based on color and height information, incorporating a shape-prior term that allows the evolving contour to take on the desired rectangular shape. This requires fitting a rectangle to the evolving contour, which can be guided by the road information to improve the fitting accuracy. The performance of the proposed method has been evaluated on a satellite image covering an area of 1 km × 1 km, with a resolution of one meter per pixel. The method achieves a detection rate of 88.0% and a false alarm rate of 9.5%. The average Dice coefficient over 433 detected rooftops is 73.4%. These results demonstrate that, by integrating the height map in rooftop detection and by incorporating road information and rectangle fitting in a level-set-based segmentation framework, the proposed method provides an effective and useful tool for rooftop extraction from satellite images.

  2. Extracting drug-drug interactions from literature using a rich feature-based linear kernel approach.

    PubMed

    Kim, Sun; Liu, Haibin; Yeganova, Lana; Wilbur, W John

    2015-06-01

    Identifying unknown drug interactions is of great benefit in the early detection of adverse drug reactions. Despite existence of several resources for drug-drug interaction (DDI) information, the wealth of such information is buried in a body of unstructured medical text which is growing exponentially. This calls for developing text mining techniques for identifying DDIs. The state-of-the-art DDI extraction methods use Support Vector Machines (SVMs) with non-linear composite kernels to explore diverse contexts in literature. While computationally less expensive, linear kernel-based systems have not achieved a comparable performance in DDI extraction tasks. In this work, we propose an efficient and scalable system using a linear kernel to identify DDI information. The proposed approach consists of two steps: identifying DDIs and assigning one of four different DDI types to the predicted drug pairs. We demonstrate that when equipped with a rich set of lexical and syntactic features, a linear SVM classifier is able to achieve a competitive performance in detecting DDIs. In addition, the one-against-one strategy proves vital for addressing an imbalance issue in DDI type classification. Applied to the DDIExtraction 2013 corpus, our system achieves an F1 score of 0.670, as compared to 0.651 and 0.609 reported by the top two participating teams in the DDIExtraction 2013 challenge, both based on non-linear kernel methods. PMID:25796456
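
    A hedged sketch of a linear-kernel pipeline of this kind with scikit-learn; a TF-IDF bag of word n-grams stands in for the paper's rich lexical and syntactic feature set, with a one-against-one linear SVM for the four DDI types.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.multiclass import OneVsOneClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # sentences: drug-pair contexts; labels: one of the four DDI types.
        def train_ddi_type_classifier(sentences, labels):
            model = make_pipeline(
                TfidfVectorizer(ngram_range=(1, 3)),
                OneVsOneClassifier(LinearSVC()),   # one-against-one, as above
            )
            return model.fit(sentences, labels)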

  3. Landsat TM image feature extraction and analysis of algal bloom in Taihu Lake

    NASA Astrophysics Data System (ADS)

    Wei, Yuchun; Chen, Wei

    2008-04-01

    This study developed an approach for the extraction and characterization of blue-green algal blooms in Taihu Lake, China, using Landsat 5 TM imagery. Spectral features of typical materials within Taihu Lake were first compared, and the spectral bands most sensitive to blue-green algal blooms were determined. Eight spectral indices were then designed using multiple TM spectral bands in order to maximize the spectral contrast of different materials. Spectral curves describing the variation of reflectance in individual bands with the spectral indices were plotted, and the TM imagery was segmented using the step-change points of the reflectance curves as thresholds. The results indicate that the proposed multiple-band spectral index NDAI2 (NDAI2 = (B4-B1)*(B5-B3)/(B4+B5+B1+B3)) performed better than the traditional vegetation indices NDVI and RVI in extracting blue-green algal information. In addition, this study indicates that image segmentation using the points where reflectance changes abruptly produced robust results and good applicability.
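
    The NDAI2 index as defined above is a one-line per-pixel computation on the co-registered TM band arrays; the epsilon guard is an added assumption to avoid division by zero.

        import numpy as np

        def ndai2(b1, b3, b4, b5, eps=1e-6):
            # B1, B3, B4, B5: float arrays of the TM bands on the same grid.
            return (b4 - b1) * (b5 - b3) / (b4 + b5 + b1 + b3 + eps)

        # Pixels whose NDAI2 exceeds a scene-specific threshold (taken from
        # the reflectance-curve segmentation described above) are labelled
        # as blue-green algal bloom.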

  4. Extracting drug-drug interactions from literature using a rich feature-based linear kernel approach

    PubMed Central

    Kim, Sun; Yeganova, Lana; Wilbur, W. John

    2015-01-01

    Identifying unknown drug interactions is of great benefit in the early detection of adverse drug reactions. Despite existence of several resources for drug-drug interaction (DDI) information, the wealth of such information is buried in a body of unstructured medical text which is growing exponentially. This calls for developing text mining techniques for identifying DDIs. The state-of-the-art DDI extraction methods use Support Vector Machines (SVMs) with non-linear composite kernels to explore diverse contexts in literature. While computationally less expensive, linear kernel-based systems have not achieved a comparable performance in DDI extraction tasks. In this work, we propose an efficient and scalable system using a linear kernel to identify DDI information. The proposed approach consists of two steps: identifying DDIs and assigning one of four different DDI types to the predicted drug pairs. We demonstrate that when equipped with a rich set of lexical and syntactic features, a linear SVM classifier is able to achieve a competitive performance in detecting DDIs. In addition, the one-against-one strategy proves vital for addressing an imbalance issue in DDI type classification. Applied to the DDIExtraction 2013 corpus, our system achieves an F1 score of 0.670, as compared to 0.651 and 0.609 reported by the top two participating teams in the DDIExtraction 2013 challenge, both based on non-linear kernel methods. PMID:25796456

  5. Adventitious sounds identification and extraction using temporal-spectral dominance-based features.

    PubMed

    Jin, Feng; Krishnan, Sridhar Sri; Sattar, Farook

    2011-11-01

    Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system through the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused on analyzing the evolution of symptom-related signal components in the joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. The presented TF decomposition method produces a noise-resistant, high-definition TF representation of RS signals compared to conventional linear TF analysis methods, while preserving low computational complexity compared to quadratic TF analysis methods. The phase information discarded in the conventional spectrogram is adopted for the estimation of IF and group delay, and a temporal-spectral dominance spectrogram is subsequently constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components corresponding to ASs from noisy RS signals at high noise levels. A new set of TF features is also proposed to quantify the shapes of the obtained TF contours, and therefore strongly enhances the identification of multicomponent signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method. PMID:21712152

  6. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    PubMed Central

    Subhi Al-batah, Mohammad; Mat Isa, Nor Ashidi; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with multi-input-multioutput structure. The system is capable of classifying cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316

  7. Multiple adaptive neuro-fuzzy inference system with automatic features extraction algorithm for cervical cancer recognition.

    PubMed

    Al-batah, Mohammad Subhi; Isa, Nor Ashidi Mat; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with multi-input-multioutput structure. The system is capable of classifying cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316

  8. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, B.D.; Brothers, L.L.; Barnhardt, W.A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m3 of continental shelf erosion, but few numerical analyses of their morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km2 of bathymetry collected in the Belfast Bay, Maine, USA pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter ratio field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. The results enable quantitative comparison of pockmarks in fields worldwide, as well as of similar concave features such as impact craters, dolines, or salt pools.

  9. Improved Measurement of Blood Pressure by Extraction of Characteristic Features from the Cuff Oscillometric Waveform

    PubMed Central

    Lim, Pooi Khoon; Ng, Siew-Cheok; Jassim, Wissam A.; Redmond, Stephen J.; Zilany, Mohammad; Avolio, Alberto; Lim, Einly; Tan, Maw Pin; Lovell, Nigel H.

    2015-01-01

    We present a novel approach to improve the estimation of systolic (SBP) and diastolic blood pressure (DBP) from oscillometric waveform data, using variable characteristic ratios relating SBP and DBP to the mean arterial pressure (MAP). This was verified in 25 healthy subjects, aged 28 ± 5 years. Multiple linear regression (MLR) and support vector regression (SVR) models were used to examine the relationship between the SBP and DBP ratios and ten features extracted from the oscillometric waveform envelope (OWE). An automatic algorithm based on relative changes in the cuff pressure and neighbouring oscillometric pulses was proposed to remove outlier points caused by movement artifacts. Substantial reductions in the mean and standard deviation of the blood pressure estimation errors were obtained upon artifact removal. Using the sequential forward floating selection (SFFS) approach, we were able to achieve a significant reduction in the mean and standard deviation of the differences between the estimated SBP values and the reference measurements (MLR: mean ± SD = −0.3 ± 5.8 mmHg; SVR: −0.6 ± 5.4 mmHg) with only two features, i.e., Ratio2 and Area3, as compared to the conventional maximum amplitude algorithm (MAA) method (mean ± SD = −1.6 ± 8.6 mmHg). Comparing the performance of the MLR and SVR models, our results showed that the MLR model achieved performance comparable to that of the SVR model despite its simplicity. PMID:26087370
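
    For context, the fixed-ratio principle that the paper generalizes can be sketched as follows: MAP is taken at the peak of the oscillometric waveform envelope (OWE), and SBP/DBP at the cuff pressures where the envelope falls to fixed fractions of that peak. The ratio values and array conventions below (cuff pressure decreasing over the recording) are assumptions for illustration.

        import numpy as np

        def maa_estimate(cuff_pressure, owe, r_sys=0.55, r_dia=0.75):
            i_map = int(np.argmax(owe))
            mean_ap = cuff_pressure[i_map]           # MAP at the OWE peak
            # SBP lies on the high-pressure (early) side of the peak,
            # DBP on the low-pressure (late) side.
            above, below = slice(0, i_map + 1), slice(i_map, len(owe))
            sbp = cuff_pressure[above][
                np.argmin(np.abs(owe[above] - r_sys * owe[i_map]))]
            dbp = cuff_pressure[below][
                np.argmin(np.abs(owe[below] - r_dia * owe[i_map]))]
            return sbp, mean_ap, dbp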

  10. Image feature extraction in encrypted domain with privacy-preserving SIFT.

    PubMed

    Hsu, Chao-Yung; Lu, Chun-Shien; Pei, Soo-Chang

    2012-11-01

    Privacy has received considerable attention but is still largely ignored in the multimedia community. Consider a cloud computing scenario where the server is resource-abundant and capable of finishing the designated tasks. It is envisioned that secure media applications with privacy preservation will be treated seriously. In view of the fact that the scale-invariant feature transform (SIFT) has been widely adopted in various fields, this paper is the first to target the importance of privacy-preserving SIFT (PPSIFT) and to address the problem of secure SIFT feature extraction and representation in the encrypted domain. As all of the operations in SIFT must be moved to the encrypted domain, we propose a privacy-preserving realization of the SIFT method based on homomorphic encryption. We show through security analysis based on the discrete logarithm problem and RSA that PPSIFT is secure against ciphertext-only attack and known-plaintext attack. Experimental results obtained from different case studies demonstrate that the proposed homomorphic encryption-based privacy-preserving SIFT performs comparably to the original SIFT and that our method is useful in SIFT-based privacy-preserving applications. PMID:22711774
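
    The additive homomorphic property that such a scheme relies on can be illustrated with the open-source phe Paillier library (an assumption; the paper builds its own homomorphic realization of SIFT): linear image operations, such as the difference-of-Gaussians filtering inside SIFT, can be evaluated on encrypted pixel values.

        from phe import paillier

        public_key, private_key = paillier.generate_paillier_keypair()
        pixels = [12, 45, 30]
        enc = [public_key.encrypt(p) for p in pixels]

        # Server side: a weighted sum of ciphertexts (one 1-D filter tap)
        # computed without ever seeing the plaintext pixels.
        enc_filtered = 0.25 * enc[0] + 0.5 * enc[1] + 0.25 * enc[2]

        # Client side: only the key owner can decrypt the filtered value.
        assert abs(private_key.decrypt(enc_filtered) - 33.0) < 1e-9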

  11. A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan

    2013-02-01

    As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI, in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method that extends a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure the discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases. PMID:22431526

  12. Feature extraction from light-scatter patterns of Listeria colonies for identification and classification.

    PubMed

    Bayraktar, Bulent; Banada, Padmapriya P; Hirleman, E Daniel; Bhunia, Arun K; Robinson, J Paul; Rajwa, Bartek

    2006-01-01

    Bacterial contamination by Listeria monocytogenes not only puts the public at risk, but also is costly for the food-processing industry. Traditional biochemical methods for pathogen identification require complicated sample preparation for reliable results. Optical scattering technology has been used for identification of bacterial cells in suspension, but with only limited success. Therefore, to improve the efficacy of the identification process using our novel imaging approach, we analyze bacterial colonies grown on solid surfaces. The work presented here demonstrates an application of computer-vision and pattern-recognition techniques to classify scatter patterns formed by Listeria colonies. Bacterial colonies are analyzed with a laser scatterometer. Features of circular scatter patterns formed by bacterial colonies illuminated by laser light are characterized using Zernike moment invariants. Principal component analysis and hierarchical clustering are performed on the results of feature extraction. Classification using linear discriminant analysis, partial least squares, and neural networks is capable of separating different strains of Listeria with a low error rate. The demonstrated system is also able to determine automatically the pathogenicity of bacteria on the basis of colony scatter patterns. We conclude that the obtained results are encouraging, and strongly suggest the feasibility of image-based biodetection systems. PMID:16822056
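    A minimal sketch of the feature step, assuming the mahotas and scikit-learn libraries: Zernike moment magnitudes are rotation-invariant descriptors of the circular scatter patterns, and PCA reduces them before classification. The radius and moment degree are illustrative choices, not values from the paper.

```python
import numpy as np
import mahotas
from sklearn.decomposition import PCA

def scatter_pattern_features(images, radius=120, degree=8):
    """Rotation-invariant Zernike moment magnitudes per scatter image.
    images: list of 2D greyscale arrays centered on the optical axis."""
    return np.vstack([mahotas.features.zernike_moments(im, radius, degree=degree)
                      for im in images])

# X = scatter_pattern_features(images)
# scores = PCA(n_components=2).fit_transform(X)   # input to LDA / PLS / NN
```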

  13. ClusTrack: Feature Extraction and Similarity Measures for Clustering of Genome-Wide Data Sets

    PubMed Central

    Rydbeck, Halfdan; Sandve, Geir Kjetil; Ferkingstad, Egil; Simovski, Boris; Rye, Morten; Hovig, Eivind

    2015-01-01

    Clustering is a popular technique for explorative analysis of data, as it can reveal subgroupings and similarities between data in an unsupervised manner. While clustering is routinely applied to gene expression data, there is a lack of appropriate general methodology for clustering of sequence-level genomic and epigenomic data, e.g. ChIP-based data. We here introduce a general methodology for clustering data sets of coordinates relative to a genome assembly, i.e. genomic tracks. By defining appropriate feature extraction approaches and similarity measures, we allow biologically meaningful clustering to be performed for genomic tracks using standard clustering algorithms. An implementation of the methodology is provided through a tool, ClusTrack, which allows fine-tuned clustering analyses to be specified through a web-based interface. We apply our methods to the clustering of occupancy of the H3K4me1 histone modification in samples from a range of different cell types. The majority of samples form meaningful subclusters, confirming that the definitions of features and similarity capture biological, rather than technical, variation between the genomic tracks. Input data and results are available, and can be reproduced, through a Galaxy Pages document at http://hyperbrowser.uio.no/hb/u/hb-superuser/p/clustrack. The clustering functionality is available as a Galaxy tool, under the menu option "Specialized analyzis of tracks", and the submenu option "Cluster tracks based on genome level similarity", at the Genomic HyperBrowser server: http://hyperbrowser.uio.no/hb/. PMID:25879845
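    ClusTrack's features and similarity measures are configurable, so the sketch below shows only one plausible pairing under stated assumptions: tracks reduced to binary bin-occupancy vectors, compared with Jaccard distance, and fed to standard hierarchical clustering (SciPy). Bin size and cluster count are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def track_to_bins(intervals, chrom_len, bin_size=1000):
    """Binary bin-occupancy vector for one genomic track on one chromosome."""
    bins = np.zeros(int(np.ceil(chrom_len / bin_size)), dtype=bool)
    for start, end in intervals:          # end-exclusive coordinates assumed
        bins[start // bin_size:(end - 1) // bin_size + 1] = True
    return bins

def jaccard_distance_matrix(tracks):
    k = len(tracks)
    d = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            inter = np.sum(tracks[i] & tracks[j])
            union = np.sum(tracks[i] | tracks[j])
            d[i, j] = d[j, i] = (1.0 - inter / union) if union else 1.0
    return d

# Three toy tracks on a 10 kb chromosome; tracks 1 and 2 overlap heavily.
tracks = [track_to_bins([(100, 500), (1200, 1900)], 10_000),
          track_to_bins([(150, 600), (1300, 1800)], 10_000),
          track_to_bins([(5000, 7000)], 10_000)]
Z = linkage(squareform(jaccard_distance_matrix(tracks)), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))   # e.g. [1 1 2]
```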

  14. Interpretation of fingerprint image quality features extracted by self-organizing maps

    NASA Astrophysics Data System (ADS)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands the development of lightweight methods for operational environments. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts a comparative analysis on a publicly available dataset to improve the two-tier approach, additionally proposing three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.
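    A rough sketch of the two-tier idea, assuming the MiniSom and scikit-learn libraries and synthetic data in place of real fingerprint blocks: a SOM is trained on block-wise image data, each image is summarized by a histogram of best-matching units, and that histogram feeds a Random Forest. The block size, map size and histogram feature are assumptions, not the paper's exact design.

```python
import numpy as np
from minisom import MiniSom
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
blocks = rng.random((500, 64))        # stand-in for flattened 8x8 image blocks

som = MiniSom(8, 8, 64, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(blocks, 10_000)

def som_histogram(image_blocks):
    """Histogram of best-matching SOM units over one image's blocks --
    the ridge-pattern summary fed to the classifier."""
    hist = np.zeros(8 * 8)
    for b in image_blocks:
        x, y = som.winner(b)
        hist[x * 8 + y] += 1
    return hist / len(image_blocks)

# Hypothetical helpers blocks_of/images/quality_labels stand in for real data:
# X = np.vstack([som_histogram(blocks_of(img)) for img in images])
# rf = RandomForestClassifier(n_estimators=200).fit(X, quality_labels)
```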

  15. Understanding the effects of pre-processing on extracted signal features from gait accelerometry signals.

    PubMed

    Millecamps, Alexandre; Lowry, Kristin A; Brach, Jennifer S; Perera, Subashan; Redfern, Mark S; Sejdić, Ervin

    2015-07-01

    Gait accelerometry is an important approach for gait assessment. Previous contributions have adopted various pre-processing approaches for gait accelerometry signals, but none have thoroughly investigated the effects of such pre-processing operations on the obtained results. Therefore, this paper investigated the influence of pre-processing operations on signal features extracted from gait accelerometry signals. These signals were collected from 35 participants aged over 65 years: 14 of them were healthy controls (HC), 10 had Parkinson's disease (PD) and 11 had peripheral neuropathy (PN). The participants walked on a treadmill at preferred speed. Signal features in time, frequency and time-frequency domains were computed for both raw and pre-processed signals. The pre-processing stage consisted of applying tilt correction and denoising operations to acquired signals. We first examined the effects of these operations separately, followed by the investigation of their joint effects. Several important observations were made based on the obtained results. First, the denoising operation alone had almost no effects in comparison to the trends observed in the raw data. Second, the tilt correction affected the reported results to a certain degree, which could lead to a better discrimination between groups. Third, the combination of the two pre-processing operations yielded similar trends as the tilt correction alone. These results indicated that while gait accelerometry is a valuable approach for the gait assessment, one has to carefully adopt any pre-processing steps as they alter the observed findings. PMID:25935124
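    The paper does not spell out its exact operators, but the two pre-processing steps can be sketched as follows with NumPy/SciPy: tilt correction as a rotation aligning the mean gravity vector with the vertical axis, and denoising as zero-phase low-pass filtering. Sampling rate, cutoff and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def tilt_correct(acc):
    """Rotate tri-axial accelerometry so the mean (gravity) vector aligns
    with the vertical axis. acc: (n_samples, 3). One plausible realization;
    the paper does not specify its exact operator."""
    g = acc.mean(axis=0)
    g /= np.linalg.norm(g)
    v = np.array([0.0, 0.0, 1.0])               # target vertical axis
    axis = np.cross(g, v)
    s, c = np.linalg.norm(axis), np.dot(g, v)   # sin and cos of tilt angle
    if s < 1e-12:
        return acc.copy()                       # already aligned
    k = axis / s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + s * K + (1 - c) * (K @ K)   # Rodrigues' rotation formula
    return acc @ R.T

def denoise(acc, fs=100.0, cutoff=20.0, order=4):
    """Zero-phase low-pass filtering of each axis (assumed parameters)."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, acc, axis=0)

# Synthetic stand-in for a tilted, noisy 100 Hz recording:
rng = np.random.default_rng(1)
acc = np.array([0.5, 0.3, 9.75]) + 0.05 * rng.standard_normal((2000, 3))
clean = denoise(tilt_correct(acc), fs=100.0)
```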

  16. Understanding the effects of pre-processing on extracted signal features from gait accelerometry signals

    PubMed Central

    Millecamps, Alexandre; Brach, Jennifer S.; Lowry, Kristin A.; Perera, Subashan; Redfern, Mark S.

    2015-01-01

    Gait accelerometry is an important approach for gait assessment. Previous contributions have adopted various pre-processing approaches for gait accelerometry signals, but none have thoroughly investigated the effects of such pre-processing operations on the obtained results. Therefore, this paper investigated the influence of pre-processing operations on signal features extracted from gait accelerometry signals. These signals were collected from 35 participants aged over 65 years: 14 of them were healthy controls (HC), 10 had Parkinson’s disease (PD) and 11 had peripheral neuropathy (PN). The participants walked on a treadmill at preferred speed. Signal features in time, frequency and time-frequency domains were computed for both raw and pre-processed signals. The pre-processing stage consisted of applying tilt correction and de-noising operations to acquired signals. We first examined the effects of these operations separately, followed by the investigation of their joint effects. Several important observations were made based on the obtained results. First, the denoising operation alone had almost no effects in comparison to the trends observed in the raw data. Second, the tilt correction affected the reported results to a certain degree, which could lead to a better discrimination between groups. Third, the combination of the two pre-processing operations yielded similar trends as the tilt correction alone. These results indicated that while gait accelerometry is a valuable approach for the gait assessment, one has to carefully adopt any pre-processing steps as they alter the observed findings. PMID:25935124

  17. Image Analysis, Modeling, Enhancement, Restoration, Feature Extraction and Their Applications in Nondestructive Evaluation and Radio Astronomy.

    NASA Astrophysics Data System (ADS)

    Zheng, Yi.

    The principal topic of this dissertation is the development and application of signal and image processing to Nondestructive Evaluation (NDE) and radio astronomy. The dissertation consists of nine papers published or submitted for publication. Each of them has a specific and unique topic related to signal processing or image processing in NDE or radio astronomy. Those topics are listed in the following. (1) Time series analysis and modeling of Very Large Array (VLA) phase data. (2) Image analysis, feature extraction and various applied enhancement methods for industrial NDE X-ray radiographic images. (3) Enhancing NDE radiographic X-ray images by adaptive regional Kalman filtering. (4) Robotic image segmentation, modeling, and restoration with a rule based expert system. (5) Industrial NDE radiographic X-ray image modeling and Kalman filtering considering signal-dependent colored noise. (6) Computational study of Kalman filtering VLA phase data and its computational performance on a supercomputer. (7) A practical and fast maximum entropy deconvolution method for de-blurring industrial NDE X-ray and infrared images. (8) Local feature enhancement of synthetic radio images by adaptive Kalman filtering. (9) A new technique for correcting phase data of a synthetic-aperture antenna array.

  18. A novel feature extracting method of QRS complex classification for mobile ECG signals

    NASA Astrophysics Data System (ADS)

    Zhu, Lingyun; Wang, Dong; Huang, Xianying; Wang, Yue

    2007-12-01

    The conventional classification parameters of the QRS complex suffer from the larger activity range of patients and the lower signal-to-noise ratio in mobile cardiac telemonitoring systems, and cannot meet the identification needs of ECG signals. Based on an individual sinus heart rhythm template built from mobile ECG signals within a time window, we present a semblance index to extract the classification features of the QRS complex precisely and expeditiously. The relative approximation r2 and the absolute error r3 are used as parameters estimating the semblance between a test QRS complex and the template. Evaluation parameters corresponding to QRS width and type are examined to choose the proper index. The results show that 99.99 percent of the QRS complexes of sinus and supraventricular ECG signals can be distinguished through r2, but its average accuracy is only 46.16%. More than 97.84 percent of QRS complexes are identified using r3, but its accuracy for the sinus and supraventricular classes is not better than that of r2. By the width feature alone, only 42.65 percent of QRS complexes are classified correctly, but its accuracy for the ventricular class is superior to r2. To combine the respective strengths of the three parameters, a nonlinear weighted combination of QRS width, r2 and r3 is introduced, raising the total classification accuracy to 99.48%.
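    The abstract does not give the exact formulas for r2 and r3, so the definitions below are assumptions chosen to match their descriptions (a relative approximation that equals 1 for identical beats, and an absolute error); the combining weights are placeholders.

```python
import numpy as np

def semblance_indices(qrs, template):
    """Semblance between a test QRS complex and the sinus-rhythm template.
    Definitions assumed for illustration; the paper defines its own forms.
    r2: relative approximation (1 = identical), r3: mean absolute error."""
    qrs = np.asarray(qrs, float)
    t = np.asarray(template, float)
    r2 = 1.0 - np.sum((qrs - t) ** 2) / np.sum((t - t.mean()) ** 2)
    r3 = np.mean(np.abs(qrs - t))
    return r2, r3

def combined_score(width, r2, r3, w=(0.3, 0.5, 0.2)):
    """Weighted combination of the three parameters, mirroring the paper's
    nonlinear combination; the weights here are placeholders only."""
    return w[0] * width + w[1] * r2 - w[2] * r3
```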

  19. Application of non-linear system model updating using feature extraction and parameter effects analysis

    SciTech Connect

    Schultze, J.; Hemez, F.

    2000-11-01

    This research presents a new method to improve analytical model fidelity for non-linear systems. The approach investigates several mechanisms to assist the analyst in updating an analytical model based on experimental data and statistical analysis of parameter effects. The first is a new approach to data reduction called feature extraction. This is an expansion of the classic update metrics to include specific phenomena or characteristics of the response that are critical to model application. It extends the familiar linear updating paradigm of utilizing the eigen-parameters or FRFs to include such devices as peak acceleration, time of arrival or the standard deviation of model error. The next expansion of the updating process is the inclusion of statistically based parameter analysis to quantify the effects of uncertain or significant parameters in the construction of a meta-model. This provides indicators of the statistical variation associated with parameters as well as confidence intervals on the coefficients of the resulting meta-model. Also included in this method is an investigation of linear parameter-effect screening using a partial factorial variable array for simulation, intended to help the analyst eliminate from the investigation the parameters that have no significant variation effect on the feature metric. Finally, the ability of the model to replicate the measured response variation is examined.

  20. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a highly distinctive pattern that can be used for biometric recognition. Texture analysis methods can identify such patterns in an image; one of them is the wavelet transform, which extracts image features based on energy. The wavelet families used here are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five wavelets was performed and a comparative analysis was conducted. Several steps are involved. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The features obtained are the subband energy values. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples per subject stored in the database as reference images. After finding the recognition rate, further tests are conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate is achieved using Haar; when cutting coefficients with C(i) < 0.1, the Haar wavelet also has the highest percentage, so the retention rate, i.e. the fraction of significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
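    A minimal sketch of the energy-feature pipeline with PyWavelets; the decomposition level is an assumption, while the wavelet names (haar, db5, coif3, sym4, bior2.4) follow the paper.

```python
import numpy as np
import pywt

def subband_energies(iris_img, wavelet="haar", level=3):
    """Energy of each 2D wavelet subband -- the feature compared across
    the five wavelet families in the study."""
    coeffs = pywt.wavedec2(iris_img, wavelet, level=level)
    energies = [np.sum(coeffs[0] ** 2)]               # approximation band
    for cH, cV, cD in coeffs[1:]:                     # detail bands per level
        energies += [np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]
    return np.asarray(energies)

def normalized_euclidean(f1, f2):
    """Matching distance between two energy feature vectors."""
    d = (f1 - f2) / (np.abs(f1) + np.abs(f2) + 1e-12)
    return np.sqrt(np.sum(d ** 2))

# f_probe = subband_energies(iris_img, "haar")
# best = min(gallery, key=lambda f_ref: normalized_euclidean(f_probe, f_ref))
```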

  1. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data are increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice is hampered by diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the lake-tracking procedure with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  2. Functional source separation and hand cortical representation for a brain–computer interface feature extraction

    PubMed Central

    Tecchio, Franca; Porcaro, Camillo; Barbati, Giulia; Zappasodi, Filippo

    2007-01-01

    A brain–computer interface (BCI) can be defined as any system that can track the person's intent which is embedded in his/her brain activity and, from it alone, translate the intention into commands of a computer. Among the brain signal monitoring systems best suited for this challenging task, electroencephalography (EEG) and magnetoencephalography (MEG) are the most realistic, since both are non-invasive, EEG is portable, and MEG could provide more specific information that could later be exploited also through EEG signals. The first two BCI steps require setting up the appropriate experimental protocol while recording the brain signal and then extracting interesting features from the recorded cerebral activity. To provide information useful in these BCI stages, our aim is to give an overview of a new procedure we recently developed, named functional source separation (FSS). As it derives from blind source separation algorithms, it exploits the most valuable information provided by the electrophysiological techniques, i.e. the waveform signal properties, while remaining blind to the biophysical nature of the signal sources. FSS returns the single-trial source activity, estimates the time course of a neuronal pool along different experimental states on the basis of a specific functional requirement in a specific time period, and uses simulated annealing as the optimization procedure, allowing the exploitation of non-differentiable functional constraints. Moreover, a minor section is included, devoted to information acquired by MEG in stroke patients, to guide BCI applications aiming at sustaining motor behaviour in these patients. Relevant BCI features – spatial and time-frequency properties – are in fact altered by a stroke in the regions devoted to hand control. Moreover, a method to investigate the relationship between sensory and motor hand cortical network activities is described, providing information useful to develop BCI feedback control systems. This

  3. Extraction of time and frequency features from grip force rates during dexterous manipulation.

    PubMed

    Mojtahedi, Keivan; Fu, Qiushi; Santello, Marco

    2015-05-01

    The time course of grip force from object contact to onset of manipulation has been extensively studied to gain insight into the underlying control mechanisms. Of particular interest to the motor neuroscience and clinical communities is the phenomenon of bell-shaped grip force rate (GFR) that has been interpreted as indicative of feedforward force control. However, this feature has not been assessed quantitatively. Furthermore, the time course of grip force may contain additional features that could provide insight into sensorimotor control processes. In this study, we addressed these questions by validating and applying two computational approaches to extract features from GFR in humans: 1) fitting a Gaussian function to GFR and quantifying the goodness of the fit [root-mean-square error (RMSE)]; and 2) continuous wavelet transform (CWT), where we assessed the correlation of the GFR signal with a Mexican Hat function. Experiment 1 consisted of a classic pseudorandomized presentation of object mass (light or heavy), where grip forces developed to lift a mass heavier than expected are known to exhibit corrective responses. For Experiment 2, we applied our two techniques to analyze grip force exerted for manipulating an inverted T-shaped object whose center of mass was changed across blocks of consecutive trials. For both experiments, subjects were asked to grasp the object at either predetermined or self-selected grasp locations ("constrained" and "unconstrained" task, respectively). Experiment 1 successfully validated the use of RMSE and CWT as they correctly distinguished trials with versus without force corrective responses. RMSE and CWT also revealed that grip force is characterized by more feedback-driven corrections when grasping at self-selected contact points. Future work will examine the application of our analytical approaches to a broader range of tasks, e.g., assessment of recovery of sensorimotor function following clinical intervention, interlimb
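    Both analyses are straightforward to sketch with NumPy/SciPy. The Gaussian-fit RMSE uses least squares; the wavelet correlation implements the Mexican hat kernel directly rather than calling a CWT routine. Kernel widths and initial-guess heuristics are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def gfr_gaussian_rmse(t, gfr):
    """Fit a Gaussian to the grip force rate; a low RMSE indicates the
    bell-shaped (feedforward) profile."""
    p0 = [gfr.max(), t[np.argmax(gfr)], (t[-1] - t[0]) / 6]   # heuristic guess
    popt, _ = curve_fit(gaussian, t, gfr, p0=p0, maxfev=10000)
    return np.sqrt(np.mean((gfr - gaussian(t, *popt)) ** 2))

def mexican_hat(n, width):
    """Mexican hat (Ricker) kernel of n samples, as in the CWT analysis."""
    x = (np.arange(n) - (n - 1) / 2) / width
    return (1 - x ** 2) * np.exp(-x ** 2 / 2)

def max_wavelet_correlation(gfr, widths=(5, 10, 20, 40)):
    """Peak (roughly normalized) correlation of the GFR with Mexican hat
    kernels over several assumed widths."""
    z = gfr - gfr.mean()
    best = 0.0
    for w in widths:
        k = mexican_hat(min(len(gfr), int(10 * w)), w)
        c = np.correlate(z, k, mode="valid")
        best = max(best, np.max(np.abs(c)) / (np.linalg.norm(z) * np.linalg.norm(k)))
    return best
```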

  4. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    SciTech Connect

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of details of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.

  5. A theory of automatic parameter selection for feature extraction with application to feature-based multisensor image registration.

    PubMed

    DelMarco, Stephen P; Tom, Victor; Webb, Helen F

    2007-11-01

    This paper generalizes the previously developed automated edge-detection parameter selection algorithm of Yitzhaky and Peli. We generalize the approach to arbitrary multidimensional, continuous or discrete parameter spaces, and feature spaces. This generalization enables use of the parameter selection approach with more general image features, for use in feature-based multisensor image registration applications. We investigate the problem of selecting a suitable parameter space sampling density in the automated parameter selection algorithm. A real-valued sensitivity measure is developed which characterizes the effect of parameter space sampling on feature set variability. Closed-form solutions of the sensitivity measure for special feature set relationships are derived. We conduct an analysis of the convergence properties of the sensitivity measure as a function of increasing parameter space sampling density. For certain parameter space sampling sequence types, closed-form expressions for the sensitivity measure limit values are presented. We discuss an approach to parameter space sampling density selection which uses the sensitivity measure convergence behavior. We provide numerical results indicating the utility of the sensitivity measure for selecting suitable parameter values. PMID:17990750

  6. A data-driven feature extraction framework for predicting the severity of condition of congestive heart failure patients.

    PubMed

    Sideris, Costas; Alshurafa, Nabil; Pourhomayoun, Mohammad; Shahmohammadi, Farhad; Samy, Lauren; Sarrafzadeh, Majid

    2015-08-01

    In this paper, we propose a novel methodology for utilizing disease diagnostic information to predict severity of condition for Congestive Heart Failure (CHF) patients. Our methodology relies on a novel, clustering-based, feature extraction framework using disease diagnostic information. To reduce the dimensionality, we identify disease clusters using co-occurrence frequencies. We then utilize these clusters as features to predict patient severity of condition. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP) which contains 7 million discharge records and ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 patients. We compare our cluster-based feature set with another that incorporates the Charlson comorbidity score as a feature and demonstrate an accuracy improvement of up to 14% in the predictability of the severity of condition. PMID:26736808

  7. Unsupervised feature construction and knowledge extraction from genome-wide assays of breast cancer with denoising autoencoders.

    PubMed

    Tan, Jie; Ung, Matthew; Cheng, Chao; Greene, Casey S

    2015-01-01

    Big data bring new opportunities for methods that efficiently summarize and automatically extract knowledge from such compendia. While both supervised learning algorithms and unsupervised clustering algorithms have been successfully applied to biological data, they are either dependent on known biology or limited to discerning the most significant signals in the data. Here we present denoising autoencoders (DAs), which employ a data-defined learning objective independent of known biology, as a method to identify and extract complex patterns from genomic data. We evaluate the performance of DAs by applying them to a large collection of breast cancer gene expression data. Results show that DAs successfully construct features that contain both clinical and molecular information. There are features that represent tumor or normal samples, estrogen receptor (ER) status, and molecular subtypes. Features constructed by the autoencoder generalize to an independent dataset collected using a distinct experimental platform. By integrating data from ENCODE for feature interpretation, we discover a feature representing ER status through association with key transcription factors in breast cancer. We also identify a feature that is highly predictive of patient survival and is enriched for the FOXM1 signaling pathway. The features constructed by DAs are often bimodally distributed with one peak near zero and another near one, which facilitates discretization. In summary, we demonstrate that DAs effectively extract key biological principles from gene expression data and summarize them into constructed features with convenient properties. PMID:25592575
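    A minimal NumPy sketch of a one-layer denoising autoencoder with masking noise and tied weights, trained on squared error. Layer size, corruption level and learning rate are illustrative; the original work used its own architecture and training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_da(X, n_hidden=50, corruption=0.1, lr=0.1, epochs=200):
    """One-layer denoising autoencoder, tied weights, batch gradient descent.
    X: (n_samples, n_genes) expression matrix scaled to [0, 1]."""
    n, d = X.shape
    W = rng.normal(0, 0.01, (d, n_hidden))
    b, c = np.zeros(n_hidden), np.zeros(d)
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > corruption)   # masking-noise corruption
        H = sigmoid(Xc @ W + b)                        # encode
        Xr = sigmoid(H @ W.T + c)                      # decode (tied weights)
        E = (Xr - X) * Xr * (1 - Xr)                   # output-layer delta
        dH = (E @ W) * H * (1 - H)                     # hidden-layer delta
        W -= lr * (Xc.T @ dH + E.T @ H) / n            # encoder + decoder grads
        b -= lr * dH.mean(axis=0)
        c -= lr * E.mean(axis=0)
    return W, b

X = rng.random((200, 50))          # stand-in for scaled expression data
W, b = train_da(X, n_hidden=10, epochs=100)
activities = sigmoid(X @ W + b)    # constructed features, often bimodal
```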

  8. UNSUPERVISED FEATURE CONSTRUCTION AND KNOWLEDGE EXTRACTION FROM GENOME-WIDE ASSAYS OF BREAST CANCER WITH DENOISING AUTOENCODERS

    PubMed Central

    TAN, JIE; UNG, MATTHEW; CHENG, CHAO; GREENE, CASEY S

    2014-01-01

    Big data bring new opportunities for methods that efficiently summarize and automatically extract knowledge from such compendia. While both supervised learning algorithms and unsupervised clustering algorithms have been successfully applied to biological data, they are either dependent on known biology or limited to discerning the most significant signals in the data. Here we present denoising autoencoders (DAs), which employ a data-defined learning objective independent of known biology, as a method to identify and extract complex patterns from genomic data. We evaluate the performance of DAs by applying them to a large collection of breast cancer gene expression data. Results show that DAs successfully construct features that contain both clinical and molecular information. There are features that represent tumor or normal samples, estrogen receptor (ER) status, and molecular subtypes. Features constructed by the autoencoder generalize to an independent dataset collected using a distinct experimental platform. By integrating data from ENCODE for feature interpretation, we discover a feature representing ER status through association with key transcription factors in breast cancer. We also identify a feature that is highly predictive of patient survival and is enriched for the FOXM1 signaling pathway. The features constructed by DAs are often bimodally distributed with one peak near zero and another near one, which facilitates discretization. In summary, we demonstrate that DAs effectively extract key biological principles from gene expression data and summarize them into constructed features with convenient properties. PMID:25592575

  9. Built-up Areas Extraction in High Resolution SAR Imagery based on the method of Multiple Feature Weighted Fusion

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.

    2015-06-01

    Synthetic aperture radar is being applied more and more widely in remote sensing because of its day-and-night and all-weather operation, and feature extraction research on high resolution SAR images has become a topic of intense interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First of all, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix method and the variogram method respectively, with direction information considered in the process. Next, feature weights are calculated innovatively according to the Bhattacharyya distance. Then all features are fused by weighting. Finally, the fused image is classified with the K-means classification method and the built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images; at the same time, two groups of experiments based on the statistical texture method and the structural texture method alone were carried out respectively. On the basis of qualitative analysis, a quantitative analysis based on manually selected built-up areas was performed: in the relatively simple experimental area the detection rate is more than 90%, and in the relatively complex experimental area the detection rate is also higher than that of the other two methods. In the study area, the results show that this method can effectively and accurately extract built-up areas in high resolution airborne SAR imagery.
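    A sketch of the statistical-texture half of the pipeline, assuming scikit-image and scikit-learn: directional GLCM statistics per window, per-feature weights from a Gaussian-assumption Bhattacharyya distance, and K-means on the weighted features. Window handling and property choices are assumptions, and the structural (variogram) features are omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(window):
    """Directional GLCM statistics for one uint8 SAR image window."""
    glcm = graycomatrix(window, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def bhattacharyya_weight(f_builtup, f_other):
    """Per-feature Bhattacharyya distance between class samples (1D Gaussian
    assumption), used to weight features before fusion."""
    m1, v1 = f_builtup.mean(0), f_builtup.var(0) + 1e-12
    m2, v2 = f_other.mean(0), f_other.var(0) + 1e-12
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2 * np.sqrt(v1 * v2))))

# F: (n_windows, n_features) stack of glcm_features over the image grid;
# w: weights from training samples of the two classes. Then:
# labels = KMeans(n_clusters=2, n_init=10).fit_predict(F * w)
```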

  10. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    NASA Astrophysics Data System (ADS)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.

  11. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2012-01-01

    Automatic brain tissue segmentation is a crucial task in diagnosis and treatment of medical images. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding derived from H.264/AVC for feature extraction. Extracted features are then fed to an artificial back propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective with low computational complexity compared to other recent works.

  12. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2011-12-01

    Automatic brain tissue segmentation is a crucial task in diagnosis and treatment of medical images. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding derived from H.264/AVC for feature extraction. Extracted features are then fed to an artificial back propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective with low computational complexity compared to other recent works.

  13. Application of the largest Lyapunov exponent algorithm for feature extraction in low speed slew bearing condition monitoring

    NASA Astrophysics Data System (ADS)

    Caesarendra, Wahyu; Kosasih, Buyung; Tieu, Anh Kiet; Moodie, Craig A. S.

    2015-01-01

    This paper presents a new application of the largest Lyapunov exponent (LLE) algorithm as a feature extraction method in low speed slew bearing condition monitoring. The LLE algorithm is employed to measure the degree of non-linearity of the vibration signal, which is not easily monitored by existing methods. The method is able to detect changes in the condition of the bearing and demonstrates better tracking of the progressive deterioration of the bearing during the 139 measurement days than comparable methods, such as the time domain feature methods based on root mean square (RMS), skewness and kurtosis extraction from the raw vibration signal, and also better than extracting similar features from selected intrinsic mode functions (IMFs) of the empirical mode decomposition (EMD) result. The application of the method is demonstrated with laboratory slew bearing vibration data and industrial bearing data from a coal bridge reclaimer used in a local steel mill.
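    A compact Rosenstein-style LLE estimator in NumPy, for orientation only; the embedding dimension, delay, Theiler window and fit horizon are assumptions, and the O(n²) distance matrix limits it to short records.

```python
import numpy as np

def lyapunov_exponent(x, dim=5, tau=2, fs=1.0, t_fit=50, theiler=10):
    """Largest Lyapunov exponent via Rosenstein's method: time-delay
    embedding, nearest neighbours (temporal neighbours excluded), then
    the slope of the mean log-divergence curve."""
    n = len(x) - (dim - 1) * tau
    Y = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    # pairwise distances, masking out temporally close points
    dists = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    for i in range(n):
        lo, hi = max(0, i - theiler), min(n, i + theiler + 1)
        dists[i, lo:hi] = np.inf
    nn = np.argmin(dists, axis=1)
    # mean log divergence between each point and its neighbour after t steps
    div = []
    for t in range(1, t_fit):
        valid = (np.arange(n) + t < n) & (nn + t < n)
        d = np.linalg.norm(Y[np.arange(n)[valid] + t] - Y[nn[valid] + t], axis=1)
        div.append(np.mean(np.log(d[d > 0])))
    slope = np.polyfit(np.arange(1, t_fit) / fs, div, 1)[0]
    return slope   # increasing positive values suggest growing non-linearity
```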

  14. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  15. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  16. Sparsity-enabled signal decomposition using tunable Q-factor wavelet transform for fault feature extraction of gearbox

    NASA Astrophysics Data System (ADS)

    Cai, Gaigai; Chen, Xuefeng; He, Zhengjia

    2013-12-01

    Localized faults in gearboxes tend to result in periodic shocks and thus arouse periodic responses in vibration signals. Feature extraction has always been a key problem for localized fault diagnosis. This paper proposes a new fault feature extraction technique for gearboxes by using sparsity-enabled signal decomposition method. The sparsity-enabled signal decomposition method separates signals based on the oscillatory behavior of the signal rather than the frequency or scale. Thus, the fault feature can be nonlinearly extracted from vibration signals. During the implementation of the proposed method, tunable Q-factor wavelet transform, for which the Q-factor can be easily specified, is adopted to represent vibration signals in a sparse way, and then morphological component analysis (MCA) is employed to estimate and separate the distinct components. The corresponding optimization problem of MCA is solved by the split augmented Lagrangian shrinkage algorithm (SALSA). With the proposed method, vibration signals of the faulty gearbox can be nonlinearly decomposed into high-oscillatory component and low-oscillatory component which is the fault feature of gearboxes. To evaluate the performance of the proposed method, this paper investigates the effect of two parameters pertinent to MCA and SALSA: the Lagrange multiplier and the penalty parameter. The effectiveness of the proposed method is verified by both the simulated and practical gearbox vibration signals. Results show the proposed method outperforms empirical mode decomposition and spectral kurtosis in extracting fault features of gearboxes.

  17. Color edges extraction using statistical features and automatic threshold technique: application to the breast cancer cells

    PubMed Central

    2014-01-01

    Background Color image segmentation has so far been applied in many areas; hence, many different techniques have recently been developed and proposed. In the medical imaging area, image segmentation may help provide assistance to doctors in following up a patient's disease from processed breast cancer images. The main objective of this work is to rebuild and enhance each cell from the three component images provided by an input image. Indeed, starting from an initial segmentation obtained using statistical features and histogram threshold techniques, the resulting segmentation may accurately represent the incomplete and pasted cells and enhance them. This provides real help to doctors, and consequently, these cells become clear and easy to count. Methods A novel method for color edge extraction based on statistical features and an automatic threshold is presented. The traditional edge detector, based on the first- and second-order neighborhood describing the relationship between the current pixel and its neighbors, is extended to the statistical domain. Hence, color edges in an image are obtained by combining the statistical features and the automatic threshold techniques. Finally, on the obtained color edges with specific primitive color, a combination rule is used to integrate the edge results over the three color components. Results Breast cancer cell images were used to evaluate the performance of the proposed method both quantitatively and qualitatively. Hence, a visual and a numerical assessment based on the probability of correct classification (P_C), the false classification (P_f), and the classification accuracy (Sens(%)) are presented and compared with existing techniques. The proposed method shows its superiority in the detection of points which really belong to the cells, and also facilitates counting the number of processed cells. Conclusions Computer simulations highlight that the proposed method

  18. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924
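    The exact weighting the authors use is not given in the abstract, so the sketch below (SciPy/scikit-learn) only mirrors the described recipe: smooth by convolution, find local maxima, score each peak from its amplitude and its distance to the dominant peak, and feed the coefficients to gradient boosting. Kernel width, coefficient count and the weighting function are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import GradientBoostingClassifier

def cdp_coefficients(signal, kernel_width=11, n_coeffs=5):
    """Per-segment feature vector: one coefficient per main local maximum,
    from its amplitude and distance to the dominant maximum (assumed form)."""
    kernel = np.ones(kernel_width) / kernel_width   # moving-average denoising
    s = np.convolve(signal, kernel, mode="same")
    peaks, _ = find_peaks(s)
    if len(peaks) == 0:
        return np.zeros(n_coeffs)
    heights = s[peaks]
    main = peaks[np.argmax(heights)]
    coeff = heights / (1.0 + np.abs(peaks - main) / len(s))
    coeff = np.sort(coeff)[::-1][:n_coeffs]         # keep the strongest peaks
    return np.pad(coeff, (0, n_coeffs - len(coeff)))

# X = np.vstack([cdp_coefficients(seg) for seg in segments])   # hypothetical data
# clf = GradientBoostingClassifier().fit(X, labels)
```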

  19. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  20. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem aspect. PMID:24451470

  1. A new feature extraction method for signal classification applied to cord dorsum potential detection

    NASA Astrophysics Data System (ADS)

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.

  2. Feature extraction algorithm for 3D scene modeling and visualization using monostatic SAR

    NASA Astrophysics Data System (ADS)

    Jackson, Julie A.; Moses, Randolph L.

    2006-05-01

    We present a feature extraction algorithm to detect scattering centers in three dimensions using monostatic synthetic aperture radar imagery. We develop attributed scattering center models that describe the radar response of canonical shapes. We employ these models to characterize a complex target geometry as a superposition of simpler, low-dimensional structures. Such a characterization provides a means for target visualization. Fitting an attributed scattering model to sensed radar data is comprised of two problems: detection and estimation. The detection problem is to find canonical targets in clutter. The estimation problem then fits the detected canonical shape model with parameters, such as size and orientation, that correspond to the measured target response. We present an algorithm to detect canonical scattering structures amidst clutter and to estimate the corresponding model parameters. We employ full-polarimetric imagery to accurately classify canonical shapes. Interferometric processing allows us to estimate scattering center locations in three dimensions. We apply the algorithm to scattering prediction data of a simple scene comprised of canonical scatterers and to scattering predictions of a backhoe.

  3. A spatial division clustering method and low dimensional feature extraction technique based indoor positioning system.

    PubMed

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem aspect. PMID:24451470

  4. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 2: frequency domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    Knowledge of the structure of a flow is really significant for the proper conduct of a number of industrial processes. A description of two-phase flow regimes is possible through time-series analysis, e.g. in the frequency domain. In this article, classical spectral analysis based on the Fourier Transform (FT) and the Short-Time Fourier Transform (STFT) was applied to signals obtained for water-air flow using gamma-ray absorption. The presented method is illustrated with data collected in experiments carried out on a laboratory hydraulic installation with a horizontal pipe of 4.5 m length and 30 mm inner diameter, equipped with two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals. Stochastic signals obtained from the detectors for plug, bubble, and transitional plug-bubble flows were considered in this work. The recorded raw signals were analyzed and several features in the frequency domain were extracted using the autospectral density function (ADF), the cross-spectral density function (CSDF), and the STFT spectrogram. As a result of a detailed analysis it was found that the most promising features for recognizing the flow structure are: the maximum value of the CSDF magnitude, the sum of the CSDF magnitudes in a selected frequency range, and the maximum value of the sum of selected amplitudes of the STFT spectrogram.
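    The three features are direct to compute with SciPy; the band limits and FFT segment lengths below are assumptions.

```python
import numpy as np
from scipy.signal import csd, stft

def flow_regime_features(x, y, fs, band=(0.0, 10.0)):
    """Frequency-domain features from the two detector signals x and y:
    the CSDF magnitude peak, its in-band sum, and the largest column sum
    of the STFT spectrogram. Band and nperseg values are assumptions."""
    f, Pxy = csd(x, y, fs=fs, nperseg=1024)       # cross-spectral density
    mag = np.abs(Pxy)
    in_band = (f >= band[0]) & (f <= band[1])
    _, _, Z = stft(x, fs=fs, nperseg=256)         # STFT spectrogram
    return {
        "csdf_max": mag.max(),
        "csdf_band_sum": mag[in_band].sum(),
        "stft_max_col_sum": np.abs(Z).sum(axis=0).max(),
    }
```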

  5. Adaptive Redundant Lifting Wavelet Transform Based on Fitting for Fault Feature Extraction of Roller Bearings

    PubMed Central

    Yang, Zijing; Cai, Ligang; Gao, Lixin; Wang, Huaqing

    2012-01-01

    A least squares method based on data fitting is proposed to construct a new lifting wavelet; together with a nonlinear scheme and a redundant algorithm, the adaptive redundant lifting transform based on fitting is first presented in this paper. By varying combinations of basis function, sample number, and basis function dimension, a total of nine wavelets with different characteristics are constructed and respectively adopted to perform redundant lifting wavelet transforms on the low-frequency approximate signals at each layer. The normalized l^p norms of the new node-signals obtained through decomposition are then calculated to adaptively determine the optimal wavelet for the decomposed approximate signal. Next, the original signal undergoes subsection power spectrum analysis to choose the node-signal for single-branch reconstruction and demodulation. Experimental signals and engineering signals are used to verify the above method, and the results show that bearing faults can be diagnosed more effectively by the method presented here than by either spectrum analysis or demodulation analysis. Meanwhile, compared with the symmetrical wavelets constructed with the Lagrange interpolation algorithm, the asymmetrical wavelets constructed by data fitting are more suitable for feature extraction from the fault signals of roller bearings. PMID:22666035

  6. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    PubMed Central

    Topouzelis, Konstantinos N.

    2008-01-01

    This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern, since they damage fragile marine and coastal ecosystems. The amount of pollutant discharges and the associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills on radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques to detect dark formations on SAR images, the features extracted from the detected dark formations, and the most used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  7. Feature extraction using Hough transform for solid waste bin level detection and classification.

    PubMed

    Hannan, M A; Zaila, W A; Arebey, M; Begum, R A; Basri, H

    2014-09-01

    This paper deals with solid waste image detection and classification to detect and classify the solid waste bin level. To do so, the Hough transform technique is used for feature extraction, identifying lines based on the image's gradient field. A feedforward neural network (FFNN) model is used to classify the level of solid waste content based on a learning concept. Multiple training runs were performed with the FFNN to learn and match the targets of the testing images, computing the sum squared error until the performance goal was met. The images for each class are used as input samples for classification. Results from the neural network and the decision rules are used to build the receiver operating characteristic (ROC) graph. The decision graph shows the performance of the waste system based on the area under the curve (AUC): WS-class reached 0.9875 (excellent) and WS-grade reached 0.8293 (good). The system was successfully designed as a solid waste bin monitoring system that can be applied by a wide variety of local municipal authorities. PMID:24829160
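
    A minimal sketch of the line-detection step, assuming OpenCV's Canny edge detector and probabilistic Hough transform; "bin.png" and all thresholds are placeholders rather than the paper's settings.

    ```python
    # Illustrative Hough-based line detection on a bin image.
    import cv2
    import numpy as np

    img = cv2.imread("bin.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    edges = cv2.Canny(img, 50, 150)                    # gradient-based edge map
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=5)
    # Line counts/positions can serve as features for a feedforward classifier.
    print(0 if lines is None else len(lines), "line segments detected")
    ```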

  8. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    PubMed

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

    Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data is limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads. PMID:26829804
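
    A hedged stand-in for the learning scheme named above: a random forest over per-grasp features assembled from actuator positions and pressure readings; the shapes, labels and data are invented for illustration.

    ```python
    # Toy random-forest classification of single-grasp feature vectors.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_grasps = 200
    positions = rng.normal(size=(n_grasps, 2))    # one actuator per two-link finger
    pressures = rng.normal(size=(n_grasps, 16))   # eight pressure sensors per finger
    X = np.hstack([positions, pressures])
    y = rng.integers(0, 4, n_grasps)              # four hypothetical object classes

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # chance level on random data
    ```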

  9. Extracting Road Features from Aerial Videos of Small Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Rajamohan, D.; Rajan, K. S.

    2013-09-01

    With major aerospace companies showing interest in certifying UAV systems for civilian airspace, their use in commercial remote sensing applications like traffic monitoring, map refinement, agricultural data collection, etc., is on the rise. But ambitious requirements like real-time geo-referencing of data, support for multiple sensor angles of view, smaller UAV size and cheaper investment cost have led to challenges in platform stability, sensor noise reduction and increased onboard processing. Especially in small UAVs, the geo-referencing of collected data is only as good as the quality of their localization sensors. This drives a need for developing methods that pick up spatial features from the captured video/image and aid in geo-referencing. This paper presents one such method to identify road segments and intersections based on traffic flow, and it compares well with the accuracy of manual observation. Two test video datasets, one each from moving and stationary platforms, were used. The results obtained show a promising average percentage difference of 7.01% and 2.48% for the road segment extraction process using the moving and stationary platforms, respectively. For the intersection identification process, the moving platform shows an accuracy of 75%, whereas the stationary platform data reaches an accuracy of 100%.

  10. Texture feature extraction based on wavelet transform and gray-level co-occurrence matrices applied to osteosarcoma diagnosis.

    PubMed

    Hu, Shan; Xu, Chao; Guan, Weiqiao; Tang, Yong; Liu, Yana

    2014-01-01

    Osteosarcoma is the most common malignant bone tumor among children and adolescents. In this study, image texture analysis was performed to extract texture features from bone CR images to evaluate the recognition rate of osteosarcoma. To obtain the optimal set of features, Sym4 and Db4 wavelet transforms and gray-level co-occurrence matrices were applied to the images, with statistical methods used to optimize the feature selection. To evaluate the performance of these methods, a support vector machine algorithm was used. The experimental results demonstrated that the Sym4 wavelet had a higher classification accuracy (93.44%) than the Db4 wavelet with respect to osteosarcoma occurrence in the epiphysis, whereas the Db4 wavelet had a higher classification accuracy (96.25%) for osteosarcoma occurrence in the diaphysis. Results including accuracy, sensitivity, specificity and ROC curves obtained using the wavelets were all higher than those obtained using the features derived from the GLCM method. It is concluded that a set of texture features can be extracted from the wavelets and used in computer-aided osteosarcoma diagnosis systems. This study also confirms that multi-resolution analysis is a useful tool for texture feature extraction during bone CR image processing. PMID:24211892
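
    The two feature families can be sketched with scikit-image (recent versions, where the GLCM functions are named graycomatrix/graycoprops) and PyWavelets on a placeholder image; the study's exact feature set and selection procedure are not reproduced here.

    ```python
    # Sketch: GLCM statistics and wavelet-subband energies on a synthetic image.
    import numpy as np
    import pywt
    from skimage.feature import graycomatrix, graycoprops

    img = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in for a bone CR image

    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).mean() for p in
                  ("contrast", "homogeneity", "energy", "correlation")]

    coeffs = pywt.wavedec2(img.astype(float), "db4", level=2)  # or "sym4"
    wavelet_feats = [np.sum(np.square(c)) for arr in coeffs[1:] for c in arr]
    print(glcm_feats, wavelet_feats[:3])
    ```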

  11. Computer extracted texture features on T2w MRI to predict biochemical recurrence following radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Ginsburg, Shoshana B.; Rusu, Mirabela; Kurhanewicz, John; Madabhushi, Anant

    2014-03-01

    In this study we explore the ability of a novel machine learning approach, in conjunction with computer-extracted features describing prostate cancer morphology on pre-treatment MRI, to predict whether a patient will develop biochemical recurrence within ten years of radiation therapy. Biochemical recurrence, which is characterized by a rise in serum prostate-specific antigen (PSA) of at least 2 ng/mL above the nadir PSA, is associated with increased risk of metastasis and prostate cancer-related mortality. Currently, risk of biochemical recurrence is predicted by the Kattan nomogram, which incorporates several clinical factors to predict the probability of recurrence-free survival following radiation therapy (but has limited prediction accuracy). Semantic attributes on T2w MRI, such as the presence of extracapsular extension and seminal vesicle invasion and surrogate measurements of tumor size, have also been shown to be predictive of biochemical recurrence risk. While the correlation between biochemical recurrence and factors like tumor stage, Gleason grade, and extracapsular spread are well-documented, it is less clear how to predict biochemical recurrence in the absence of extracapsular spread and for small tumors fully contained in the capsule. Computer-extracted texture features, which quantitatively describe tumor micro-architecture and morphology on MRI, have been shown to provide clues about a tumor's aggressiveness. However, while computer-extracted features have been employed for predicting cancer presence and grade, they have not been evaluated in the context of predicting risk of biochemical recurrence. This work seeks to evaluate the role of computer-extracted texture features in predicting risk of biochemical recurrence on a cohort of sixteen patients who underwent pre-treatment 1.5 Tesla (T) T2w MRI. We extract a combination of first-order statistical, gradient, co-occurrence, and Gabor wavelet features from T2w MRI. To identify which of these

  12. Extraction of residential information from high-spatial resolution image integrated with upscaling methods and object multi-features

    NASA Astrophysics Data System (ADS)

    Dong, Lixin; Wu, Bingfang

    2007-11-01

    Monitoring residential areas at a regional scale, and even at a global scale, has become an increasingly important topic. However, the extraction of residential information remains a difficult and challenging task, involving issues such as the selection of multiple usable data sources and automatic or semi-automatic techniques. In metropolitan areas such as Beijing, urban sprawl has brought enormous pressure on rural and natural environments. In a case study, a new strategy for extracting residential information, integrating upscaling methods and object multi-features, was introduced for a high resolution SPOT fused image. A multi-resolution dataset was built using upscaling methods, and the optimal resolution image was selected by a semi-variance analysis approach. The relevant optimal spatial resolution images were adopted for different types of residential area (city, town and rural residence). Secondly, object multi-features, including spectral information, generic shape features, class-related features, and newly computed features, were introduced. An efficient decision tree and Class Semantic Representation were set up based on the object multi-features, and different classes of residential area were extracted from the multi-resolution images. Afterwards, further discussion and comparison on improving the efficiency and accuracy of classification with the proposed approach were presented. The results showed that the optimal resolution image selected by the upscaling and semi-variance method successfully decreased heterogeneity, smoothed the influence of noise, reduced computational and storage burdens, and improved classification efficiency for the high spatial resolution image. The Class Semantic Representation and decision tree based on object multi-features improved the overall accuracy and diminished the 'salt and pepper' effect. The new image analysis approach offered a satisfactory solution for extracting residential information quickly and efficiently.

  13. Feature extraction method of bearing AE signal based on improved FAST-ICA and wavelet packet energy

    NASA Astrophysics Data System (ADS)

    Han, Long; Li, Cheng Wei; Guo, Song Lin; Su, Xun Wen

    2015-10-01

    In order to accomplish feature extraction from a mixed fault signal of bearings, this paper proposes a feature extraction method based on an improved Fast-ICA algorithm and the wavelet packet energy spectrum. The conventional Fast-ICA algorithm can separate the mixed signals, but its convergence speed is relatively slow and its convergence is not sufficiently robust. A third-order Newton iteration is adopted in this paper to improve the Fast-ICA algorithm, and the improved algorithm is confirmed to have a faster convergence speed and higher precision than the conventional one. The improved Fast-ICA algorithm is applied to separate an acoustic emission signal comprising two kinds of fault components. The wavelet packet energy spectrum is used to extract the feature information from the separated samples. In addition, fault diagnosis is performed based on the SVM algorithm. It is confirmed that slight damage and fractures of a bearing can be accurately recognized. The results show that the improved Fast-ICA and wavelet packet energy method is effective for feature extraction.
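
    A simplified version of the separation-then-feature pipeline, using scikit-learn's standard FastICA (the fixed-point variant, not the paper's third-order Newton improvement) and PyWavelets packet energies on synthetic mixtures.

    ```python
    # Sketch: blind source separation followed by wavelet packet energies.
    import numpy as np
    import pywt
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 1, 4096)
    s1 = np.sign(np.sin(2 * np.pi * 97 * t))      # stand-ins for two fault sources
    s2 = np.sin(2 * np.pi * 13 * t)
    X = np.c_[s1 + 0.4 * s2, 0.5 * s1 + s2]       # mixed AE channels

    sources = FastICA(n_components=2, random_state=0).fit_transform(X)

    def packet_energies(sig, level=3, wavelet="db4"):
        wp = pywt.WaveletPacket(sig, wavelet, maxlevel=level)
        return [np.sum(node.data ** 2) for node in wp.get_level(level, "natural")]

    print([packet_energies(sources[:, k])[:4] for k in range(2)])
    ```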

  14. Feature extraction techniques for estimating pinon and juniper tree cover and density, and comparison with field based management surveys

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In western North America, expansion and stand infilling by piñon (Pinus spp.) and juniper (Juniperus spp.) (P-J) trees constitutes one of the greatest afforestations of our time. Feature extracted data acquired from remotely sensed imagery can help managers rapidly and accurately assess this expansi...

  15. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
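
    A toy version of the modeling approach, assuming PyWavelets and scikit-learn and synthetic ERP epochs; "high-power" coefficients are approximated here by across-trial variance, which is only one plausible reading of the report.

    ```python
    # Sketch: regress a performance measure on decimated-DWT coefficients of ERPs.
    import numpy as np
    import pywt
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)
    erps = rng.normal(size=(100, 256))            # 100 synthetic single-trial ERPs
    perf = rng.uniform(0, 1, 100)                 # composite detection performance

    X = np.array([np.concatenate(pywt.wavedec(e, "db4", level=4)) for e in erps])
    keep = np.argsort(X.var(axis=0))[-20:]        # keep high-power coefficients only

    model = LinearRegression().fit(X[:, keep], perf)
    print("R^2 on training data:", model.score(X[:, keep], perf))
    ```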

  16. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.

  17. Multidimensional, multistage wavelet footprints: a new tool for image segmentation and feature extraction in medical ultrasound

    NASA Astrophysics Data System (ADS)

    Jansen, Christian H. P.; Arigovindan, Muthuvel; Suhling, Michael; Marsch, Stefan; Unser, Michael A.; Hunziker, Patrick

    2003-05-01

    We present a new wavelet-based strategy for autonomous feature extraction and segmentation of cardiac structures in dynamic ultrasound images. Image sequences subjected to a multidimensional (2D plus time) wavelet transform yield a large number of individual subbands, each coding for partial structural and motion information of the ultrasound sequence. We exploited this fact to create a strategy for autonomous analysis of cardiac ultrasound that builds on shape- and motion-specific wavelet subband filters. Subband selection was performed automatically based on subband statistics. Such a collection of predefined subbands corresponds to the so-called footprint of the target structure and can be used as a multidimensional multiscale filter to detect and localize the target structure in the original ultrasound sequence. Unequivocal localization is then performed autonomously using a peak finding algorithm, allowing the findings to be compared with a reference standard. Image segmentation is then possible using standard region growing operations. To test the feasibility of this multiscale footprint algorithm, we tried to localize, enhance and segment the mitral valve autonomously in 182 non-selected clinical cardiac ultrasound sequences. Correct autonomous localization by the algorithm was feasible in 165 of 182 reconstructed ultrasound sequences, using the experienced echocardiographer as the reference. This corresponds to a 91% accuracy of the proposed method on unselected clinical data. Thus, multidimensional multiscale wavelet footprints allow successful autonomous detection and segmentation of the mitral valve with good accuracy in dynamic cardiac ultrasound sequences, which are otherwise difficult to analyse due to their high noise level.

  18. Extraction of bistable-percept-related features from local field potential by integration of local regression and common spatial patterns.

    PubMed

    Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K; Liang, Hualou

    2009-08-01

    Bistable perception arises when an ambiguous stimulus under continuous view is perceived as an alternation of two mutually exclusive states. Such a stimulus provides a unique opportunity for understanding the neural basis of visual perception because it dissociates the perception from the visual input. In this paper, we focus on extracting the percept-related features from the local field potential (LFP) in monkey visual cortex for decoding its bistable structure-from-motion (SFM) perception. Our proposed feature extraction approach consists of two stages. First, we estimate and remove from each LFP trial the nonpercept-related stimulus-evoked activity via a local regression method called the locally weighted scatterplot smoothing because of the dissociation between the perception and the stimulus in our experimental paradigm. Second, we use the common spatial patterns approach to design spatial filters based on the residue signals of multiple channels to extract the percept-related features. We exploit a support vector machine (SVM) classifier on the extracted features to decode the reported perception on a single-trial basis. We apply the proposed approach to the multichannel intracortical LFP data collected from the middle temporal (MT) visual cortex in a macaque monkey performing an SFM task. We demonstrate that our approach is effective in extracting the discriminative features of the percept-related activity from LFP and achieves excellent decoding performance. We also find that the enhanced gamma band synchronization and reduced alpha and beta band desynchronization may be the underpinnings of the percept-related activity. PMID:19362902
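
    The common spatial patterns step can be sketched compactly as a generalized eigendecomposition of class-averaged covariance matrices; the data below are synthetic and the filter count is an arbitrary choice, not the paper's configuration.

    ```python
    # Compact CSP computation on synthetic multichannel residue trials.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(3)
    trials_a = rng.normal(size=(40, 8, 500))        # 40 trials, 8 channels, 500 samples
    trials_b = rng.normal(size=(40, 8, 500)) * 1.5  # second percept, different variance

    def mean_cov(trials):
        return np.mean([np.cov(tr) for tr in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigendecomposition: filters maximizing the variance ratio.
    vals, W = eigh(Ca, Ca + Cb)
    filters = W[:, [0, 1, -2, -1]].T                # most discriminative spatial filters

    feats = np.log(np.var(filters @ trials_a[0], axis=1))  # log-variance features
    print(feats)                                    # inputs to an SVM classifier
    ```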

  19. Magnetization-prepared rapid acquisition with gradient echo magnetic resonance imaging signal and texture features for the prediction of mild cognitive impairment to Alzheimer’s disease progression

    PubMed Central

    Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Celaya-Padilla, José M.; Galván-Tejada, Jorge I.; Treviño, Victor; Tamez-Peña, Jose

    2014-01-01

    Early diagnosis of Alzheimer’s disease (AD) would confer many benefits. Several biomarkers have been proposed to achieve such a task, among which features extracted from magnetic resonance imaging (MRI) have played an important role. However, studies have focused exclusively on morphological characteristics. This study aims to determine whether features relating to the signal and texture of the image could predict mild cognitive impairment (MCI) to AD progression. Clinical, biological, and positron emission tomography information and MRI images of 62 subjects from the AD neuroimaging initiative were used in this study, extracting 4150 features from each MRI. Within this multimodal database, a feature selection algorithm was used to obtain an accurate and small logistic regression model, generated by a methodology that yielded a mean blind test accuracy of 0.79. This model included six features, five of them obtained from the MRI images and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index. The groups were statistically different (p-value=2.04e−11). These results demonstrate that MRI features related to both signal and texture add predictive power for MCI-to-AD progression, and support the ongoing notion that multimodal biomarkers outperform single-modality ones. PMID:26158047

  20. HypertenGene: extracting key hypertension genes from biomedical literature with position and automatically-generated template features

    PubMed Central

    2009-01-01

    Background The genetic factors leading to hypertension have been extensively studied, and large numbers of research papers have been published on the subject. One of hypertension researchers' primary research tasks is to locate key hypertension-related genes in abstracts. However, gathering such information with existing tools is not easy: (1) Searching for articles often returns far too many hits to browse through. (2) The search results do not highlight the hypertension-related genes discovered in the abstract. (3) Even though some text mining services mark up gene names in the abstract, the key genes investigated in a paper are still not distinguished from other genes. To facilitate the information gathering process for hypertension researchers, one solution would be to extract the key hypertension-related genes in each abstract. Three major tasks are involved in the construction of this system: (1) gene and hypertension named entity recognition, (2) section categorization, and (3) gene-hypertension relation extraction. Results We first compare the retrieval performance achieved by individually adding template features and position features to the baseline system. Then, the combination of both is examined. We found that using position features can almost double the original AUC score (0.8140 vs. 0.4936) of the baseline system. However, adding template features only results in marginal improvement (0.0197). Including both improves AUC to 0.8184, indicating that these two sets of features are complementary, and do not have overlapping effects. We then examine the performance in a different domain, diabetes, and the result shows a satisfactory AUC of 0.83. Conclusion Our approach successfully exploits template features to recognize true hypertension-related gene mentions and position features to distinguish key genes from other related genes. Templates are automatically generated and checked by biologists to minimize labor costs. Our approach integrates the

  1. The sidebar template and extraction of invariant feature of calligraphy and painting seal

    NASA Astrophysics Data System (ADS)

    Hu, Zheng-kun; Bao, Hong; Lou, Hai-tao

    2009-07-01

    This paper proposes a novel seal extraction method using template matching, based on the characteristics of the external contour of the seal image in Chinese painting and calligraphy. By analyzing the characteristics of the seal edge, we obtain prior knowledge of the seal edge and set up an outline template of the seals; we then design a template matching method that computes the distance difference between the outline template and the seal image edge, which can effectively extract the seal image from Chinese painting and calligraphy. Experimental results show that this method achieves a higher extraction rate than traditional image extraction methods.
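
    A hedged illustration of contour-based template matching, using OpenCV's normalized cross-correlation rather than the authors' distance-difference measure; the file names are placeholders.

    ```python
    # Locate a seal-outline template in a painting image via matchTemplate.
    import cv2

    img = cv2.imread("painting.png", cv2.IMREAD_GRAYSCALE)       # placeholder
    tmpl = cv2.imread("seal_template.png", cv2.IMREAD_GRAYSCALE) # placeholder

    edges = cv2.Canny(img, 80, 160)         # edge map of the painting
    tmpl_edges = cv2.Canny(tmpl, 80, 160)   # edge map of the outline template

    res = cv2.matchTemplate(edges, tmpl_edges, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    print("best match score %.3f at %s" % (max_val, max_loc))
    ```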

  2. Development of feature extraction analysis for a multi-functional optical profiling device applied to field engineering applications

    NASA Astrophysics Data System (ADS)

    Han, Xu; Xie, Guangping; Laflen, Brandon; Jia, Ming; Song, Guiju; Harding, Kevin G.

    2015-05-01

    In the real application environment of field engineering, a large variety of metrology tools are required by the technician to inspect part profile features. However, some of these tools are burdensome and only address a single application or measurement. In other cases, standard tools lack the capability of accessing irregular profile features. Customers of field engineering want the next generation of metrology devices to have the ability to replace the many current tools with one single device. This paper describes a method based on the ring optical gage concept for the measurement of numerous kinds of profile features useful to the field technician. The ring optical system is composed of a collimated laser, a conical mirror and a CCD camera. To be useful for a wide range of applications, the ring optical system requires profile feature extraction algorithms and data manipulation directed toward real-world applications in field operation. The paper discusses such practical applications as measuring a non-ideal round hole with both off-centered and oblique axes. The algorithms needed to analyze other features, such as measuring the width of gaps, the radius of transition fillets, the fall of step surfaces, and surface parallelism, are also discussed. With the assistance of image processing and geometric algorithms, these features can be extracted with reasonable performance. Tailoring the feature extraction analysis to this specific gage offers the potential for a wider application base beyond simple inner diameter measurements. The paper presents experimental results that are compared with standard gages to prove the performance and feasibility of the analysis in real-world field engineering. Potential accuracy improvement methods, a new dual ring design and future work are discussed at the end of this paper.

  3. Structural class prediction of protein using novel feature extraction method from chaos game representation of predicted secondary structure.

    PubMed

    Zhang, Lichao; Kong, Liang; Han, Xiaodong; Lv, Jinfeng

    2016-07-01

    Protein structural class prediction plays an important role in protein structure and function analysis, drug design and many other biological applications. Extracting good representation from protein sequence is fundamental for this prediction task. In recent years, although several secondary structure based feature extraction strategies have been specially proposed for low-similarity protein sequences, the prediction accuracy still remains limited. To explore the potential of secondary structure information, this study proposed a novel feature extraction method from the chaos game representation of predicted secondary structure to mainly capture sequence order information and secondary structure segments distribution information in a given protein sequence. Several kinds of prediction accuracies obtained by the jackknife test are reported on three widely used low-similarity benchmark datasets (25PDB, 1189 and 640). Compared with the state-of-the-art prediction methods, the proposed method achieves the highest overall accuracies on all the three datasets. The experimental results confirm that the proposed feature extraction method is effective for accurate prediction of protein structural class. Moreover, it is anticipated that the proposed method could be extended to other graphical representations of protein sequence and be helpful in future research. PMID:27084358
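
    A chaos game representation for a three-letter secondary-structure alphabet can be sketched as follows; the triangle vertices, start point and 0.5 contraction ratio are conventional choices, not necessarily the authors'.

    ```python
    # Chaos game representation (CGR) of a predicted secondary-structure string.
    import numpy as np

    VERTS = {"H": np.array([0.0, 0.0]),
             "E": np.array([1.0, 0.0]),
             "C": np.array([0.5, np.sqrt(3) / 2])}  # triangle vertices

    def cgr(seq, ratio=0.5):
        pt = np.array([0.5, 0.25])                  # arbitrary interior start point
        pts = []
        for ch in seq:
            pt = pt + ratio * (VERTS[ch] - pt)      # move toward the symbol's vertex
            pts.append(pt.copy())
        return np.array(pts)

    points = cgr("HHHEECCCHHEEC")
    # Grid-occupancy statistics of `points` can then serve as fixed-length features.
    print(points[:3])
    ```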

  4. Feature engineering combined with machine learning and rule-based methods for structured information extraction from narrative clinical discharge summaries

    PubMed Central

    Xu, Yan; Hong, Kai; Tsujii, Junichi

    2012-01-01

    Objective A system that translates narrative text in the medical domain into structured representation is in great demand. The system performs three sub-tasks: concept extraction, assertion classification, and relation identification. Design The overall system consists of five steps: (1) pre-processing sentences, (2) marking noun phrases (NPs) and adjective phrases (APs), (3) extracting concepts that use a dosage-unit dictionary to dynamically switch two models based on Conditional Random Fields (CRF), (4) classifying assertions based on voting of five classifiers, and (5) identifying relations using normalized sentences with a set of effective discriminating features. Measurements Macro-averaged and micro-averaged precision, recall and F-measure were used to evaluate results. Results The performance is competitive with the state-of-the-art systems with micro-averaged F-measure of 0.8489 for concept extraction, 0.9392 for assertion classification and 0.7326 for relation identification. Conclusions The system exploits an array of common features and achieves state-of-the-art performance. Prudent feature engineering sets the foundation of our systems. In concept extraction, we demonstrated that switching models, one of which is especially designed for telegraphic sentences, improved extraction of the treatment concept significantly. In assertion classification, a set of features derived from a rule-based classifier were proven to be effective for classes such as conditional and possible. These classes would suffer from data scarcity in conventional machine-learning methods. In relation identification, we use a two-stage architecture, the second stage of which applies pairwise classifiers to possible candidate classes. This architecture significantly improves performance. PMID:22586067

  5. Vibration signal analysis using parameterized time-frequency method for features extraction of varying-speed rotary machinery

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Dong, X. J.; Peng, Z. K.; Zhang, W. M.; Meng, G.

    2015-01-01

    In real applications, when rotary machinery involves variable speed, unsteady load and defects, it produces non-stationary vibration signals. Such signals can be characterized by mono- or multi-component frequency modulation (FM), and their internal instantaneous patterns are closely related to the operating condition of the rotary machinery. For example, the instantaneous frequency (IF) and instantaneous amplitude (IA) of a non-stationary signal are two important time-frequency features to be inspected. For vibration signal analysis of rotary machinery, time-frequency analysis (TFA), known for analyzing a signal in the time and frequency domains simultaneously, has been accepted as a key signal processing tool. Particularly, parameterized TFA, among the various TFAs, has shown great potential for investigating the time-frequency features of non-stationary signals. It attracts attention for improving the time-frequency representation (TFR) with signal-dependent transform parameters. However, parameter estimation and component separation are two problems to tackle when using parameterized TFA to extract time-frequency features from the non-stationary vibration signals of varying-speed rotary machinery. In this paper, we propose a procedure for parameterized TFA to analyze the non-stationary vibration signal of varying-speed rotary machinery. It includes four steps: initialization, estimation of transform parameters, component separation and parameterized TFA, and feature extraction. To demonstrate the effectiveness of the proposed method in analyzing mono- and multi-component signals, it is first used to analyze the vibration response of a laboratory rotor during a speed-up and run-down process, and then to extract the instantaneous time-frequency signatures of a hydro-turbine rotor in a hydroelectric power station during a shut-down stage. In addition, the results are compared with several traditional TFAs and the proposed method outperforms

  6. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors, and it is of great value to detect the resulting fault features automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating the vibration features of motor bearing faults. The ESW is put forward based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform, such that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced as it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by the maximization of the fault feature ratio, which is a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without it being artificially specified. The proposed method is applied to two engineering cases, whose signals were collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings compared with the Fourier transform, the direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.
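
    The Hilbert demodulation underlying the fault feature ratio can be illustrated with SciPy alone, on a synthetic bearing-like signal and with a hand-picked band in place of the TQWT optimization.

    ```python
    # Envelope spectrum of a band-filtered vibration signal (Hilbert demodulation).
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    fs = 12000.0
    t = np.arange(0, 1, 1 / fs)
    fault = (np.sin(2 * np.pi * 3000 * t) *
             (1 + np.sign(np.sin(2 * np.pi * 105 * t)))) + 0.5 * np.random.randn(t.size)

    b, a = butter(4, [2500, 3500], btype="band", fs=fs)
    band = filtfilt(b, a, fault)                # isolate the resonance band
    env = np.abs(hilbert(band))                 # Hilbert envelope
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    print("dominant envelope frequency: %.1f Hz" % freqs[spec.argmax()])
    ```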

  7. Semi-automatic methodologies for landslide features extraction: new opportunities but also challenges from high resolution topography

    NASA Astrophysics Data System (ADS)

    Tarolli, P.; Sofia, G.; Dalla Fontana, G.

    2009-12-01

    In recent years, new remote sensing technologies, such as airborne and terrestrial laser scanning, have improved the detail and quality of topographic data, with notable advantages over traditional survey techniques (Tarolli et al., 2009). A new generation of high resolution (≤3 m) Digital Terrain Models (DTMs) is now available for different areas and widely used by researchers, offering new opportunities for the scientific community. These data call for the development of a new generation of methodologies for the objective extraction of geomorphic features, such as channel heads, channel networks, and landslide scars. A high resolution DTM, for example, is able to detect in detail the divergence/convergence areas related to unchannelized/channelized processes, with respect to a coarse DTM. In the last few years, different studies have used landform curvature as a useful measure for the interpretation of dominant landform processes (Tarolli and Dalla Fontana, 2009). Curvature has been used to analyze landslide morphology and distribution, and to objectively extract the channel network. In this work, we test the performance of some of these new methodologies for geomorphic feature extraction, in order to provide a semi-automatic method to recognize landslide scars in complex mountainous terrain. The analysis has been carried out using a very high resolution DTM (0.5 m) and different sizes of the moving window for the landform curvature calculation. Statistical dispersion measures (standard deviation, interquartile range, mean and median absolute deviation) and probability plots (quantile-quantile plots) were adopted to objectively define the curvature thresholds for landslide feature extraction. The study was conducted on a study area located in the Eastern Italian Alps, where recent accurate DGPS field surveys of landslide scars and a high quality set of airborne laser scanner elevation data are available. The results indicate that curvature maps derived by

  8. A new approach for EEG feature extraction in P300-based lie detection.

    PubMed

    Abootalebi, Vahid; Moradi, Mohammad Hassan; Khalilzadeh, Mohammad Ali

    2009-04-01

    The P300-based Guilty Knowledge Test (GKT) has been suggested as an alternative approach to conventional polygraphy. The purpose of this study was to extend a previously introduced pattern recognition method for ERP assessment in this application. This extension was achieved by further extending the feature set and by employing a method for the selection of optimal features. For the evaluation of the method, several subjects went through the designed GKT paradigm and their respective brain signals were recorded. Next, a P300 detection approach based on a set of features and a statistical classifier was implemented. The optimal feature set was selected using a genetic algorithm from a primary feature set including morphological, frequency and wavelet features, and was used for the classification of the data. The rate of correct detection in guilty and innocent subjects was 86%, which was better than that of previously used methods. PMID:19041154

  9. A Novel Feature Extraction Approach Using Window Function Capturing and QPSO-SVM for Enhancing Electronic Nose Performance

    PubMed Central

    Guo, Xiuzhen; Peng, Chao; Zhang, Songlin; Yan, Jia; Duan, Shukai; Wang, Lidan; Jia, Pengfei; Tian, Fengchun

    2015-01-01

    In this paper, a novel feature extraction approach, referred to as moving window function capturing (MWFC), is proposed to analyze the signals of an electronic nose (E-nose) used for detecting types of infectious pathogens in rat wounds. Meanwhile, a quantum-behaved particle swarm optimization (QPSO) algorithm is implemented in conjunction with a support vector machine (SVM) to realize a synchronized optimization of the sensor array and the SVM model parameters. The results prove the efficacy of the proposed method for E-nose feature extraction, which can lead to a higher classification accuracy rate compared to other established techniques. It is also noteworthy that different classification results can be obtained by changing the types, widths or positions of the windows. By selecting the optimum window function for the sensor response, the performance of an E-nose can be enhanced. PMID:26131672

  11. Segmentation-based filtering and object-based feature extraction from airborne LiDAR point cloud data

    NASA Astrophysics Data System (ADS)

    Chang, Jie

    Three-dimensional (3D) information about ground and above-ground features such as buildings and trees is important for many urban and environmental applications. Recent developments in Light Detection And Ranging (LiDAR) technology provide promising alternatives to conventional techniques for acquiring such information. The focus of this dissertation research is to effectively and efficiently filter massive airborne LiDAR point cloud data and to extract the main above-ground features, such as buildings and trees, in urban areas. A novel segmentation algorithm for point cloud data, namely the 3D k mutual nearest neighborhood (kMNN) segmentation algorithm, was developed by improving the kMNN clustering algorithm to employ distances in 3D space in defining mutual nearest neighborhoods. A set of optimization strategies, including dividing the dataset into multiple blocks and small grids and using distance thresholds in x and y, was implemented to improve the efficiency of the segmentation algorithm. A segmentation-based filtering method was then employed to filter the generated segments, which first generates segment boundaries using Voronoi polygon and dissolving operations, and then labels the segments as ground and above-ground based on their size and relative heights to the surrounding segments. An object-based feature extraction approach was also devised to extract buildings and trees from the above-ground segments based on derived object-level statistics, which were subject to a rule-based classification system developed by either human experts or an inductive machine-learning algorithm. Case studies were conducted with four different LiDAR datasets to evaluate the effectiveness and efficiency of the proposed approaches. The proposed segmentation algorithm proved to be not only effective in separating ground and above-ground measurements into different segments, but also efficient in processing large datasets. The segmentation based filtering and
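
    The mutual-nearest-neighborhood idea can be sketched in a few lines: keep an edge only when two points appear in each other's k-neighborhoods, then take connected components as segments; this is a simplification of the dissertation's optimized kMNN algorithm, with synthetic points standing in for LiDAR data.

    ```python
    # Toy mutual-kNN grouping of 3D points into segments.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    rng = np.random.default_rng(5)
    pts = np.vstack([rng.normal(0, 0.3, (100, 3)),   # ground-like cluster
                     rng.normal(5, 0.3, (100, 3))])  # elevated cluster

    k = 8
    nn = NearestNeighbors(n_neighbors=k + 1).fit(pts)
    _, idx = nn.kneighbors(pts)                      # idx[:, 0] is the point itself

    n = len(pts)
    rows = np.repeat(np.arange(n), k)
    cols = idx[:, 1:].ravel()
    adj = csr_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n))
    mutual = adj.multiply(adj.T)                     # keep only mutual edges
    n_seg, labels = connected_components(mutual, directed=False)
    print(n_seg, "segments")
    ```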

  12. Sonar signal feature extraction for target recognition in range-dependent environments

    NASA Astrophysics Data System (ADS)

    Gomatam, Vikram T.; Loughlin, Patrick

    2013-05-01

    In previous work, we have given a method for obtaining propagation invariant features for classification of underwater objects from their sonar backscatter in dispersive but range-independent environments. In this paper we consider the derivation of invariant features for classification in range-dependent environments, based on the parabolic equation.

  13. Ultrasound Color Doppler Image Segmentation and Feature Extraction in MCP and Wrist Region in Evaluation of Rheumatoid Arthritis.

    PubMed

    Snekhalatha, U; Muthubhairavi, V; Anburajan, M; Gupta, Neelkanth

    2016-09-01

    The present study focuses on automatically segmenting the blood flow pattern in color Doppler ultrasound of the hand region in rheumatoid arthritis patients, and on correlating the extracted statistical features and color Doppler parameters with standard parameters. Thirty patients with rheumatoid arthritis (RA) were examined, covering a total of 300 joints of both hands, i.e., 240 MCP joints and 60 wrists. Ultrasound color Doppler images of both hands were obtained for all patients. Automated segmentation of the color Doppler images was performed using a color enhancement scaling based segmentation algorithm. The region of interest was fixed in the MCP joints and the wrist of the hand. Features were extracted from the defined ROI of the segmented output image. The color fraction was measured using Mimics software. Standard parameters such as the HAQ score, DAS 28 score, and ESR were obtained for all patients. The color fraction tends to be increased in the wrist and MCP3 joints, which indicates increased blood flow and color Doppler activity as part of the inflammation in the hand joints of RA. The ESR correlated significantly with extracted feature parameters such as mean, standard deviation and entropy in the MCP3 and MCP4 joints and the wrist region. The developed automated color image segmentation algorithm provides a quantitative analysis for the diagnosis and assessment of RA. The correlation of the color Doppler parameters with the standard parameters is of particular significance for the quantitative analysis of RA in the MCP3 joint and the wrist region. PMID:27449351

  14. A general sequential Monte Carlo method based optimal wavelet filter: A Bayesian approach for extracting bearing fault features

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Sun, Shilong; Tse, Peter W.

    2015-02-01

    A general sequential Monte Carlo method, in particular a general particle filter, has recently attracted much attention in prognostics because it is able to estimate on-line the posterior probability density functions of the state functions used in a state space model without making restrictive assumptions. In this paper, the general particle filter is introduced to optimize a wavelet filter for extracting bearing fault features. The major innovation of this paper is that a joint posterior probability density function of the wavelet parameters is represented by a set of random particles with their associated weights, which is seldom reported. Once the joint posterior probability density function of the wavelet parameters is derived, the approximately optimal center frequency and bandwidth can be determined and used to perform optimal wavelet filtering for extracting bearing fault features. Two case studies are investigated to illustrate the effectiveness of the proposed method. The results show that the proposed method provides a Bayesian approach to extracting bearing fault features. Additionally, the proposed method can be generalized by using different wavelet functions and metrics, and can be applied more widely to any other situation in which optimal wavelet filtering is required.

  15. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  16. Classification Features of US Images Liver Extracted with Co-occurrence Matrix Using the Nearest Neighbor Algorithm

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Bibicu, Dorin; Moraru, Luminita; Nicolae, Mariana Carmen

    2011-12-01

    The co-occurrence matrix has been applied successfully for echographic image characterization because it contains information about the spatial distribution of grey-scale levels in an image. The paper deals with the analysis of pixels in selected regions of interest of US images of the liver. The useful information obtained refers to texture features such as entropy, contrast, dissimilarity and correlation, extracted with the co-occurrence matrix. The analyzed US images were grouped into two distinct sets: healthy liver and steatosis (fatty) liver. These two sets of echographic images of the liver build a database that includes only histologically confirmed cases: 10 images of healthy liver and 10 images of steatosis liver. The healthy subjects were used to compute the four textural indices and also served as the control dataset. We chose to study this disease because steatosis is the abnormal retention of lipids in cells. The texture features are statistical measures and can be used to characterize the irregularity of tissues. The goal is to classify the extracted information using the Nearest Neighbor classification algorithm. The k-NN algorithm is a powerful tool for classifying texture features, grouping the healthy liver features in a training set and the steatosis liver features in a holdout set. The results can be used to quantify the texture information and allow a clear discrimination between healthy and steatosis liver.
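
    A stand-in for the described workflow, pairing GLCM indices (with entropy computed directly from the normalized matrix) and a k-NN classifier; the ROI arrays below are synthetic placeholders, not liver images.

    ```python
    # GLCM texture features feeding a nearest-neighbor classifier.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.neighbors import KNeighborsClassifier

    def glcm_features(roi):
        g = graycomatrix(roi, [1], [0], levels=64, symmetric=True, normed=True)
        entropy = -np.sum(g * np.log2(g + 1e-12))
        return [entropy] + [graycoprops(g, p)[0, 0] for p in
                            ("contrast", "dissimilarity", "correlation")]

    rng = np.random.default_rng(4)
    healthy = [(rng.random((32, 32)) * 63).astype(np.uint8) for _ in range(10)]
    steatosis = [(rng.random((32, 32)) ** 2 * 63).astype(np.uint8) for _ in range(10)]

    X = np.array([glcm_features(r) for r in healthy + steatosis])
    y = np.array([0] * 10 + [1] * 10)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(knn.predict(X[:2]))
    ```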

  17. A procedure for the extraction of airglow features in the presence of strong background radiation

    NASA Astrophysics Data System (ADS)

    Swift, W. R.; Torr, D. G.; Hamilton, C.; Dougani, H.; Torr, M. R.

    1990-09-01

    A technique is developed that can be used to derive the total intensity of band emissions from twilight airglow measurements when the basic spectral signature of the band to be considered is known. The method is designed to automatically extract the total band or line intensities of a signal embedded in background radiation several orders of magnitude greater in brightness. It is shown that the technique can reliably measure the intensity of both weak and strong band and line emissions in the presence of strong twilight background radiation. The extraction method is implemented as part of a general purpose spectral analysis program written in VAX FORTRAN. This extraction procedure has been used successfully on emissions of FeI, Ca(+), N2(+) (1N) (0-0) and (0-1), and OH in the near UV; the OI red (630 nm) and green (558 nm) lines in the visible; and the OH Meinel bands and O(+) (2P) 732 nm in the near IR.
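
    The core extraction step can be posed as a linear least squares fit of the known band signature plus a smooth background; the sketch below uses synthetic spectra and an assumed quadratic background model, not the program's actual formulation.

    ```python
    # Fit (intensity x template) + polynomial background by least squares.
    import numpy as np

    wl = np.linspace(380, 400, 200)                     # wavelength grid, nm
    template = np.exp(-0.5 * ((wl - 391.4) / 0.6) ** 2) # known band signature (stand-in)
    background = 1e3 * (1 + 0.02 * (wl - 380))          # strong, slowly varying twilight sky
    measured = background + 7.5 * template + np.random.randn(wl.size)

    # Design matrix: band template plus quadratic background terms.
    A = np.column_stack([template, np.ones_like(wl), wl - 390, (wl - 390) ** 2])
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
    print("recovered band intensity: %.2f (true 7.5)" % coef[0])
    ```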

  18. Identification of cancerous gastric cells based on common features extracted from hyperspectral microscopic images

    PubMed Central

    Zhu, Siqi; Su, Kang; Liu, Yumeng; Yin, Hao; Li, Zhen; Huang, Furong; Chen, Zhenqiang; Chen, Weidong; Zhang, Ge; Chen, Yihong

    2015-01-01

    We construct a microscopic hyperspectral imaging system to distinguish between normal and cancerous gastric cells. We study common transmission-spectra features that only emerge when the samples are dyed with hematoxylin and eosin (H&E) stain. Subsequently, we classify the obtained visible-range transmission spectra of the samples into three zones. Distinct features are observed in the spectral responses between the normal and cancerous cell nuclei in each zone, which depend on the pH level of the cell nucleus. Cancerous gastric cells are precisely identified according to these features. The average cancer-cell identification accuracy obtained with a backpropagation algorithm program trained with these features is 95%. PMID:25909000

  19. Non-classical nonlinear feature extraction from standard resonance vibration data for damage detection.

    PubMed

    Eiras, J N; Monzó, J; Payá, J; Kundu, T; Popovics, J S

    2014-02-01

    Dynamic non-classical nonlinear analyses show promise for improved damage diagnostics in materials that exhibit such structure at the mesoscale, such as concrete. In this study, nonlinear non-classical dynamic material behavior from standard vibration test data, using pristine and frost damaged cement mortar bar samples, is extracted and quantified. The procedure is robust and easy to apply. The results demonstrate that the extracted nonlinear non-classical parameters show expected sensitivity to internal damage and are more sensitive to changes owing to internal damage levels than standard linear vibration parameters. PMID:25234919

  20. Classification of Informal Settlements Through the Integration of 2d and 3d Features Extracted from Uav Data

    NASA Astrophysics Data System (ADS)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  1. Time-frequency manifold for nonlinear feature extraction in machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    He, Qingbo

    2013-02-01

    Time-frequency feature is beneficial to representation of non-stationary signals for effective machinery fault diagnosis. The time-frequency distribution (TFD) is a major tool to reveal the synthetic time-frequency pattern. However, the TFD will also face noise corruption and dimensionality reduction issues in engineering applications. This paper proposes a novel nonlinear time-frequency feature based on a time-frequency manifold (TFM) technique. The new TFM feature is generated by mainly addressing manifold learning on the TFDs in a reconstructed phase space. It combines the non-stationary information and the nonlinear information of analyzed signals, and hence exhibits valuable properties. Specifically, the new feature is a quantitative low-dimensional representation, and reveals the intrinsic time-frequency pattern related to machinery health, which can effectively overcome the effects of noise and condition variance issues in sampling signals. The effectiveness and the merits of the proposed TFM feature are confirmed by case study on gear wear diagnosis, bearing defect identification and defect severity evaluation. Results show the value and potential of the new feature in machinery fault pattern representation and classification.

  2. Ultraviolet-visible absorptive features of water extractable and humic fractions of animal manure and compost

    Technology Transfer Automated Retrieval System (TEKTRAN)

    UV-vis spectroscopy is a useful tool for characterizing water extractable or humic fractions of natural organic matter (WEOM). Whereas the whole UV-visible spectra of these fractions are more or less featureless, the specific UV absorptivity at 254 and 280 nm as well as spectral E2/E3 and E4/E6 rat...

  3. Feature Extraction of PDV Challenge Data Set A with Digital Down Shift (DDS)

    SciTech Connect

    Tunnell, T. W.

    2012-10-18

    This slide-show is about data analysis in photonic Doppler velocimetry. The digital down shift subtracts a specified velocity (frequency) from all components in the Fourier frequency domain and generates both the down shifted in-phase and out-of-phase waveforms so that phase and displacement can be extracted through a continuous unfold of the arctangent.
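
    A minimal numpy/scipy sketch of the down-shift idea, assuming `v`, `fs` and `f_shift` (the digitized record, its sample rate and the shift frequency) are supplied by the user; the low-pass cutoff and laser wavelength are arbitrary placeholder values, not taken from the slides:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def digital_down_shift(v, fs, f_shift, cutoff=50e6, wavelength=1550e-9):
        """Mix the record down by f_shift, low-pass, and unfold the phase."""
        t = np.arange(len(v)) / fs
        mixed = v * np.exp(-2j * np.pi * f_shift * t)    # shift f_shift to DC
        b, a = butter(4, cutoff / (fs / 2))              # keep the shifted band
        i = filtfilt(b, a, mixed.real)                   # in-phase waveform
        q = filtfilt(b, a, mixed.imag)                   # out-of-phase waveform
        phase = np.unwrap(np.arctan2(q, i))              # continuous arctangent unfold
        displacement = phase * wavelength / (4 * np.pi)  # fringes -> displacement
        return i, q, displacement
    ```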

  4. Summary of work on shock wave feature extraction in 3-D datasets

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus (Principal Investigator)

    1996-01-01

    A method for extracting and visualizing shock waves from three dimensional data-sets is discussed. Issues concerning computation time, robustness to numerical perturbations, and noise introduction are considered and compared with other methods. Finally, results using this method are discussed.

  5. Discrimination of photon from proton irradiation using glow curve feature extraction and vector analysis.

    PubMed

    Skopec, M; Loew, M; Price, J L; Guardala, N; Moscovitch, M

    2006-01-01

    Two types of thermoluminescence dosemeters (TLDs), the Harshaw LiF:Mg,Ti (TLD-100) and CaF(2):Tm (TLD-300) were investigated for their glow curve response to separate photon and proton irradiations. The TLDs were exposed to gamma irradiation from a (137)Cs source and proton irradiation using a positive ion accelerator. The glow curve peak structure for each individual TLD exposure was deconvolved to obtain peak height, width, and position. Simulated mixed-field glow curves were obtained by superposition of the experimentally obtained single field exposures. Feature vectors were composed of two kinds of features: those from deconvolution and those taken in the neighbourhood of several glow curve peaks. The inner product of the feature vectors was used to discriminate among the pure photon, pure proton and simulated mixed-field irradiations. In the pure cases, identification of radiation types is both straightforward and effective. Mixed-field discrimination did not succeed using deconvolution features, but the peak-neighbourhood features proved to discriminate reliably. PMID:16614091
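
    The inner-product discriminant itself is compact; a sketch, assuming `photon_ref`, `proton_ref` and `unknown` are feature vectors built from either the deconvolution or the peak-neighbourhood features (hypothetical names):

    ```python
    import numpy as np

    def cosine(u, v):
        """Normalized inner product of two feature vectors."""
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def classify(unknown, photon_ref, proton_ref):
        """Assign the unknown exposure to the closest reference vector."""
        scores = {'photon': cosine(unknown, photon_ref),
                  'proton': cosine(unknown, proton_ref)}
        return max(scores, key=scores.get)
    ```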

  6. Feature extraction and pattern classification of remote sensing data by a modular neural system

    NASA Astrophysics Data System (ADS)

    Blonda, Palma; la Forgia, Vincenza; Pasquariello, Guido; Satalino, Giuseppe

    1996-02-01

    A modular neural network architecture has been used for the classification of remotely sensed data in two experiments carried out to study two different but rather usual situations in real remote sensing applications. Such situations concern the availability of high-dimensional data in the first setting and an imperfect data set with a limited number of features in the second. The learning task of the supervised multilayer perceptron classifier has been made more efficient by preprocessing the input with unsupervised neural modules for feature discovery. The linear propagation network is introduced in the first experiment to evaluate the effectiveness of the neural data compression stage before classification, whereas in the second experiment data clustering before labeling is evaluated by the Kohonen self-organizing feature map network. The results of the two experiments confirm that modular learning performs better than nonmodular learning with respect to both learning quality and speed.

  7. Projection-based geometrical feature extraction for computer vision: Algorithms in pipeline architectures

    SciTech Connect

    Sanz, J.L.; Dinstein, I.

    1987-01-01

    In this paper, some image transforms and features such as projections along linear patterns, convex hull approximations, Hough transform for line detection, diameter, moments, and principal components will be considered. Specifically, we present algorithms for computing these features which are suitable for implementation in image analysis pipeline architectures. In particular, random access memories and other dedicated hardware components which may be found in the implementation of classical techniques are no longer needed in our algorithms. The effectiveness of our approach is demonstrated by running some of the new algorithms in conventional short pipelines for image analysis.

  8. Use of Landsat-derived temporal profiles for corn-soybean feature extraction and classification

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Carnes, J. G.; Austin, W. W.

    1982-01-01

    A physical model derived from multitemporal-multispectral data acquired by Landsat satellites is presented to describe crop-specific behavior and new crop-specific features. A feasibility study over 40 sites was performed to classify the segment pixels into corn, soybeans, and others using the new features and a linear classifier. Results agree well with other existing methods, and it is shown that multitemporal-multispectral scanner data can be transformed into two parameters that are closely related to the target of interest and thus can be used in classification. The approach is less time intensive than other techniques and requires labeling of only pure pixels.

  9. Extraction of morphotectonic features from DEMs: Development and applications for study areas in Hungary and NW Greece

    NASA Astrophysics Data System (ADS)

    Jordan, G.; Meijninger, B. M. L.; Hinsbergen, D. J. J. van; Meulenkamp, J. E.; Dijk, P. M. van

    2005-11-01

    A procedure for the consistent application of digital terrain analysis methods to identify tectonic phenomena from geomorphology is developed and presented through two case studies. Based on the study of landforms related to faults, geomorphological characteristics are translated into mathematical and numerical algorithms. Topographic features represented by digital elevation models of the test areas were extracted, described and interpreted in terms of structural geology and geomorphology. Digital terrain modelling was carried out by means of the combined use of: (1) numerical differential geometry methods, (2) digital drainage network analysis, (3) digital geomorphometry, (4) digital image processing, (5) lineament extraction and analysis, (6) spatial and statistical analysis and (7) digital elevation model-specific digital methods, such as shaded relief models, digital cross-sections and 3D surface modelling. A sequential modelling scheme was developed and implemented to analyse two selected study sites, in Hungary and NW Greece on local and regional scales. Structural information from other sources, such as geological and geophysical maps, remotely sensed images and field observations were analysed with geographic information system techniques. Digital terrain analysis methods applied in the proposed way in this study could extract morphotectonic features from DEMs along known faults and they contributed to the tectonic interpretation of the study areas.

  10. Feature extraction of rolling bearing’s early weak fault based on EEMD and tunable Q-factor wavelet transform

    NASA Astrophysics Data System (ADS)

    Wang, Hongchao; Chen, Jin; Dong, Guangming

    2014-10-01

    When an early weak fault emerges in a rolling bearing, the fault feature is too weak to extract using traditional fault diagnosis methods such as the Fast Fourier Transform (FFT) and envelope demodulation. The tunable Q-factor wavelet transform (TQWT) is an improvement of the traditional single-Q-factor wavelet transform, and it is well suited to separating the low-Q-factor transient impact component from the high-Q-factor sustained oscillation components when a fault emerges in a rolling bearing. However, it is hard to extract the rolling bearing's early weak fault feature perfectly using the TQWT directly. Ensemble empirical mode decomposition (EEMD) is an improvement of empirical mode decomposition (EMD) that retains the self-adaptability of EMD while overcoming its mode-mixing problem. The original signal of the rolling bearing's early weak fault is decomposed by EEMD and several intrinsic mode functions (IMFs) are obtained. The IMF with the biggest kurtosis index value is then selected and subsequently processed by the TQWT. At last, the envelope demodulation method is applied to the low-Q-factor transient impact component, and a satisfactory extraction result is obtained.
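
    A compact sketch of this chain, assuming the third-party PyEMD package provides the EEMD step and that `x` and `fs` are the raw vibration record and its sample rate; the TQWT refinement between IMF selection and demodulation is omitted here:

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from scipy.signal import hilbert
    from PyEMD import EEMD  # third-party package, assumed available

    def envelope_spectrum_of_best_imf(x, fs):
        """Decompose with EEMD, keep the most impulsive IMF, demodulate it."""
        imfs = EEMD().eemd(x)
        best = imfs[np.argmax([kurtosis(imf) for imf in imfs])]
        # (the paper refines this IMF with the TQWT before demodulation)
        envelope = np.abs(hilbert(best))
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
        return freqs, spectrum  # fault frequency shows as a spectral peak
    ```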

  11. Modelling and representation issues in automated feature extraction from aerial and satellite images

    NASA Astrophysics Data System (ADS)

    Sowmya, Arcot; Trinder, John

    New digital systems for the processing of photogrammetric and remote sensing images have led to new approaches to information extraction for mapping and Geographic Information System (GIS) applications, with the expectation that data can become more readily available at a lower cost and with greater currency. Demands for mapping and GIS data are increasing as well for environmental assessment and monitoring. Hence, researchers from the fields of photogrammetry and remote sensing, as well as computer vision and artificial intelligence, are bringing together their particular skills for automating these tasks of information extraction. The paper will review some of the approaches used in knowledge representation and modelling for machine vision, and give examples of their applications in research for image understanding of aerial and satellite imagery.

  12. The Research of Feature Extraction Method of Liver Pathological Image Based on Multispatial Mapping and Statistical Properties

    PubMed Central

    Liu, Huiling; Xia, Bingbing; Yi, Dehui

    2016-01-01

    We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with hematoxylin-eosin staining, the R and B channels reflect the sensitivity of liver pathological images better, while the entropy space and Local Binary Pattern (LBP) space reflect the texture features of the image better. To obtain more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. The traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average-correction HLAC feature. We calculate the statistical properties and the average gray value of pathological images and then update the current pixel value as the absolute value of the difference between the current pixel gray value and the average gray value, which is more sensitive to the gray-value changes of pathological images. Lastly, the HLAC template is used to calculate the features of the updated image. The experimental results show that the improved multispatial-mapping features have better classification performance for liver cancer. PMID:27022407
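
    The average-correction step is a one-liner in numpy; a small illustration, assuming `img` is a grey-level pathology image (four first-order shifts stand in for the full HLAC mask set):

    ```python
    import numpy as np

    def average_corrected(img):
        """Replace each pixel by |pixel - global mean| (the proposed correction)."""
        return np.abs(img.astype(float) - img.mean())

    def hlac_order1(img):
        """Tiny illustration: first-order local autocorrelations over 4 shifts."""
        f = average_corrected(img)
        shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
        return [np.mean(f * np.roll(f, s, axis=(0, 1))) for s in shifts]
    ```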

  13. Application of computer-extracted breast tissue texture features in predicting false-positive recalls from screening mammography

    NASA Astrophysics Data System (ADS)

    Ray, Shonket; Choi, Jae Y.; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2014-03-01

    Mammographic texture features have been shown to have value in breast cancer risk assessment. Previous models have also been developed that use computer-extracted mammographic features of breast tissue complexity to predict the risk of false-positive (FP) recall from breast cancer screening with digital mammography. This work details a novel locally adaptive parenchymal texture analysis algorithm that identifies and extracts mammographic features of local parenchymal tissue complexity potentially relevant for false-positive biopsy prediction. This algorithm has two important aspects: (1) the adaptive nature of automatically determining an optimal number of regions of interest (ROIs) in the image and each ROI's corresponding size based on the parenchymal tissue distribution over the whole breast region and (2) characterizing both the local and global mammographic appearances of the parenchymal tissue that could provide more discriminative information for FP biopsy risk prediction. Preliminary results show that this locally adaptive texture analysis algorithm, in conjunction with logistic regression, can predict the likelihood of false-positive biopsy with an ROC performance value of AUC=0.92 (p<0.001) with a 95% confidence interval [0.77, 0.94]. Significant texture feature predictors (p<0.05) included contrast, sum variance and difference average. Sensitivity for false-positives was 51% at the 100% cancer detection operating point. Although preliminary, clinical implications of using prediction models incorporating these texture features may include the future development of better tools and guidelines regarding personalized breast cancer screening recommendations. Further studies are warranted to prospectively validate our findings in larger screening populations and evaluate their clinical utility.

  14. Characterizing the textural features of gold ores for optimizing gold extraction

    NASA Astrophysics Data System (ADS)

    Hausen, Donald M.

    2000-04-01

    The beneficiation of gold ores begins with an examination and classification of the types of gold occurrences and recovery methods. Measurements can provide the necessary grind size for liberation and determine the sizes and associations of gold with gangue materials. In this article, the textural features of several gold occurrences are described and compared.

  15. A novel feature extraction scheme with ensemble coding for protein-protein interaction prediction.

    PubMed

    Du, Xiuquan; Cheng, Jiaxing; Zheng, Tingting; Duan, Zheng; Qian, Fulan

    2014-01-01

    Protein-protein interactions (PPIs) play key roles in most cellular processes, such as cell metabolism, immune response, endocrine function, DNA replication, and transcription regulation. PPI prediction is one of the most challenging problems in functional genomics. Although PPI data have been increasing because of the development of high-throughput technologies and computational methods, many problems are still far from being solved. In this study, a novel predictor was designed by using the Random Forest (RF) algorithm with the ensemble coding (EC) method. To reduce computational time, a feature selection method (DX) was adopted to rank the features and search the optimal feature combination. The DXEC method integrates many features and physicochemical/biochemical properties to predict PPIs. On the Gold Yeast dataset, the DXEC method achieves 67.2% overall precision, 80.74% recall, and 70.67% accuracy. On the Silver Yeast dataset, the DXEC method achieves 76.93% precision, 77.98% recall, and 77.27% accuracy. On the human dataset, the prediction accuracy reaches 80% for the DXEC-RF method. We extended the experiment to a bigger and more realistic dataset that maintains 50% recall on the Yeast All dataset and 80% recall on the Human All dataset. These results show that the DXEC method is suitable for performing PPI prediction. The prediction service of the DXEC-RF classifier is available at http://ailab.ahu.edu.cn:8087/DXECPPI/index.jsp. PMID:25046746

  16. Extraction of features from ultrasound acoustic emissions: a tool to assess the hydraulic vulnerability of Norway spruce trunkwood?

    PubMed Central

    Rosner, Sabine; Klein, Andrea; Wimmer, Rupert; Karlsson, Bo

    2011-01-01

    Summary • The aim of this study was to assess the hydraulic vulnerability of Norway spruce (Picea abies) trunkwood by extraction of selected features of acoustic emissions (AEs) detected during dehydration of standard size samples. • The hydraulic method was used as the reference method to assess the hydraulic vulnerability of trunkwood of different cambial ages. Vulnerability curves were constructed by plotting the percentage loss of conductivity vs an overpressure of compressed air. • Differences in hydraulic vulnerability were very pronounced between juvenile and mature wood samples; therefore, useful AE features, such as peak amplitude, duration and relative energy, could be filtered out. The AE rates of signals clustered by amplitude and duration ranges and the AE energies differed greatly between juvenile and mature wood at identical relative water losses. • Vulnerability curves could be constructed by relating the cumulated amount of relative AE energy to the relative loss of water and to xylem tension. AE testing in combination with feature extraction offers a readily automated and easy to use alternative to the hydraulic method. PMID:16771986

  17. Unsupervised clustering analyses of features extraction for a caries computer-assisted diagnosis using dental fluorescence images

    NASA Astrophysics Data System (ADS)

    Bessani, Michel; da Costa, Mardoqueu M.; Lins, Emery C. C. C.; Maciel, Carlos D.

    2014-02-01

    Computer-assisted diagnoses (CAD) are performed by systems with embedded knowledge. These systems work as a second opinion to the physician and use patient data to infer diagnoses for health problems. Caries is the most common oral disease and directly affects both individuals and society. Here we propose the use of dental fluorescence images as the input of a caries computer-assisted diagnosis. We use texture descriptors together with statistical pattern recognition techniques to measure the descriptors' performance on the caries classification task. The data set consists of 64 fluorescence images of in vitro healthy and carious teeth, including different surfaces and lesions already diagnosed by an expert. The texture feature extraction was performed on fluorescence images using the RGB and YCbCr color spaces, which generated 35 different descriptors for each sample. Principal component analysis was performed for data interpretation and dimensionality reduction. Finally, unsupervised clustering was employed to analyze the relation between the output labeling and the diagnosis of the expert. The PCA result showed a high correlation between the extracted features; seven components were sufficient to represent 91.9% of the information in the original feature vectors. The unsupervised clustering output was compared with the expert classification, resulting in an accuracy of 96.88%. The results show the high accuracy of the proposed approach in identifying carious and non-carious teeth. Therefore, the development of a CAD system for caries using such an approach appears promising.
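
    A sketch of the final two stages with scikit-learn, assuming `X` is the 64 x 35 matrix of texture descriptors and `expert` holds the dentist's binary labels (both hypothetical names):

    ```python
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import accuracy_score

    X_reduced = PCA(n_components=7).fit_transform(X)   # ~92% of the variance
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_reduced)
    # cluster indices are arbitrary, so test both assignments against the expert
    acc = max(accuracy_score(expert, clusters),
              accuracy_score(expert, 1 - clusters))
    ```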

  18. Automated extraction of absorption features from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Geophysical and Environmental Research Imaging Spectrometer (GERIS) data

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Calvin, Wendy M.; Seznec, Olivier

    1988-01-01

    Automated techniques were developed for the extraction and characterization of absorption features from reflectance spectra. The absorption feature extraction algorithms were successfully tested on laboratory, field, and aircraft imaging spectrometer data. A suite of laboratory spectra of the most common minerals was analyzed and absorption band characteristics tabulated. A prototype expert system was designed, implemented, and successfully tested to allow identification of minerals based on the extracted absorption band characteristics. AVIRIS spectra for a site in the northern Grapevine Mountains, Nevada, have been characterized and the minerals sericite (fine grained muscovite) and dolomite were identified. The minerals kaolinite, alunite, and buddingtonite were identified and mapped for a site at Cuprite, Nevada, using the feature extraction algorithms on the new Geophysical and Environmental Research 64 channel imaging spectrometer (GERIS) data. The feature extraction routines (written in FORTRAN and C) were interfaced to the expert system (written in PROLOG) to allow both efficient processing of numerical data and logical spectrum analysis.

  19. Classification of Spectra of Emission-line Stars Using Feature Extraction Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Bromová, P.; Bařina, D.; Škoda, P.; Vážný, J.; Zendulka, J.

    2014-05-01

    Our goal is to automatically identify spectra of emission (Be) stars in large archives and classify their types based on the typical shape of the Hα emission line. Due to the length of the spectra, processing the original data is very time-consuming. In order to lower the computational requirements and enhance the separability of the classes, we have to find a reduced representation of the spectral features that nevertheless conserves most of the original information content. As Be stars show a number of different shapes of emission lines, it is not easy to construct simple criteria (such as Gaussian fits) to distinguish the emission lines in an automatic manner. We propose to perform the wavelet transform of the spectra, calculate statistical metrics from the wavelet coefficients, and use them as feature vectors for classification. In this paper, we compare different wavelet transforms, different wavelets, and different statistical metrics in an attempt to identify the best method.
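
    A sketch of that feature construction, assuming PyWavelets is available and `spectrum` is one flux vector; the particular wavelet, level and metrics here are placeholders, since comparing such choices is exactly what the paper sets out to do:

    ```python
    import numpy as np
    import pywt
    from scipy.stats import skew, kurtosis

    def wavelet_features(spectrum, wavelet='db4', level=5):
        """Statistical metrics of wavelet coefficients as a compact feature vector."""
        coeffs = pywt.wavedec(spectrum, wavelet, level=level)
        feats = []
        for c in coeffs:
            feats += [np.mean(c), np.std(c), skew(c), kurtosis(c),
                      np.sum(c ** 2)]  # energy per decomposition level
        return np.array(feats)
    ```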

  20. Improving the detection of wind fields from LIDAR aerosol backscatter using feature extraction

    NASA Astrophysics Data System (ADS)

    Bickel, Brady R.; Rotthoff, Eric R.; Walters, Gage S.; Kane, Timothy J.; Mayor, Shane D.

    2016-04-01

    The tracking of winds and atmospheric features has many applications, from predicting and analyzing weather patterns in the upper and lower atmosphere to monitoring air movement from pig and chicken farms. Doppler LIDAR systems exist to quantify the underlying wind speeds, but the cost of these systems can be relatively high, and processing limitations exist. The alternative is using an incoherent LIDAR system to analyze aerosol backscatter. Improving the detection and analysis of wind information from aerosol backscatter LIDAR systems will allow the adoption of these relatively low-cost instruments in environments where the size, complexity, and cost of other options are prohibitive. Using data from a simple aerosol backscatter LIDAR system, we attempt to extend the processing capabilities by calculating wind vectors through image correlation techniques to improve the detection of wind features.
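
    A minimal sketch of the correlation step, assuming `frame1` and `frame2` are consecutive backscatter scans separated by `dt` seconds with `dx` metres per pixel (all hypothetical names; sign conventions depend on scan geometry):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def wind_vector(frame1, frame2, dt, dx):
        """Estimate the displacement of aerosol structure between two scans."""
        a = frame1 - frame1.mean()
        b = frame2 - frame2.mean()
        corr = fftconvolve(b, a[::-1, ::-1], mode='same')  # cross-correlation
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        shift = np.array(peak) - np.array(corr.shape) // 2  # pixels moved
        return shift * dx / dt                              # metres per second
    ```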

  1. Simple way to extract solutions and features of the Dirac equation for noncentral potentials

    NASA Astrophysics Data System (ADS)

    Barakat, T.; Abdalla, M. Sebawe

    2016-02-01

    The main purpose of the present study is to explore the classes of the Schrödinger-like wave equations derived from Dirac equation, for which the similarity transformation and asymptotic iteration algorithms can assist in generating second-order differential equation that admit general exact solutions in the presence of nonsymmetrical potential terms. For illustration purposes, we extract the exact bound-state solutions of the Dirac equation with the noncentral Hartmann potential for the cases of exact SU(2) spin and pseudospin symmetries. Also, we have shown that both Dirac-radial and Dirac-polar parts are sensitive to the variation of the involved parameters.

  2. Enhanced Protein Fold Prediction Method Through a Novel Feature Extraction Technique.

    PubMed

    Wei, Leyi; Liao, Minghong; Gao, Xing; Zou, Quan

    2015-09-01

    Information of protein 3-dimensional (3D) structures plays an essential role in molecular biology, cell biology, biomedicine, and drug design. Protein fold prediction is considered as an immediate step for deciphering the protein 3D structures. Therefore, protein fold prediction is one of fundamental problems in structural bioinformatics. Recently, numerous taxonomic methods have been developed for protein fold prediction. Unfortunately, the overall prediction accuracies achieved by existing taxonomic methods are not satisfactory although much progress has been made. To address this problem, we propose a novel taxonomic method, called PFPA, which is featured by combining a novel feature set through an ensemble classifier. Particularly, the sequential evolution information from the profiles of PSI-BLAST and the local and global secondary structure information from the profiles of PSI-PRED are combined to construct a comprehensive feature set. Experimental results demonstrate that PFPA outperforms the state-of-the-art predictors. To be specific, when tested on the independent testing set of a benchmark dataset, PFPA achieves an overall accuracy of 73.6%, which is the leading accuracy ever reported. Moreover, PFPA performs well without significant performance degradation on three updated large-scale datasets, indicating the robustness and generalization of PFPA. Currently, a webserver that implements PFPA is freely available on http://121.192.180.204:8080/PFPA/Index.html. PMID:26335556

  3. An ultra low power feature extraction and classification system for wearable seizure detection.

    PubMed

    Page, Adam; Pramod, Siddharth; Oates, Tim; Mohsenin, Tinoosh

    2015-08-01

    In this paper we explore the use of a variety of machine learning algorithms for designing a reliable and low-power, multi-channel EEG feature extractor and classifier for predicting seizures from electroencephalographic data (scalp EEG). Different machine learning classifiers including k-nearest neighbor, support vector machines, naïve Bayes, logistic regression, and neural networks are explored with the goal of maximizing detection accuracy while minimizing power, area, and latency. The input to each machine learning classifier is a 198 feature vector containing 9 features for each of the 22 EEG channels obtained over 1-second windows. All classifiers were able to obtain F1 scores over 80% and onset sensitivity of 100% when tested on 10 patients. Among five different classifiers that were explored, logistic regression (LR) proved to have minimum hardware complexity while providing average F-1 score of 91%. Both ASIC and FPGA implementations of logistic regression are presented and show the smallest area, power consumption, and the lowest latency when compared to the previous work. PMID:26737931
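
    A sketch of the classifier stage at these dimensions with scikit-learn, assuming `X` holds the 198-dimensional window vectors and `y` the seizure labels (hypothetical names):

    ```python
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    # X: (n_windows, 198) = 9 features x 22 EEG channels per 1-second window
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("F1:", f1_score(y_te, clf.predict(X_te)))
    ```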

  4. Content-based image retrieval using features extracted from halftoning-based block truncation coding.

    PubMed

    Guo, Jing-Ming; Prasetyo, Heri

    2015-03-01

    This paper presents a technique for content-based image retrieval (CBIR) by exploiting the advantage of low-complexity ordered-dither block truncation coding (ODBTC) for the generation of an image content descriptor. In the encoding step, ODBTC compresses an image block into corresponding quantizers and a bitmap image. Two image features are proposed to index an image, namely, the color co-occurrence feature (CCF) and bit pattern features (BPF), which are generated directly from the ODBTC encoded data streams without performing the decoding process. The CCF and BPF of an image are simply derived from the two ODBTC quantizers and bitmap, respectively, by involving the visual codebook. Experimental results show that the proposed method is superior to the block truncation coding image retrieval systems and other earlier methods, and thus prove that the ODBTC scheme is not only suited to image compression, because of its simplicity, but also offers a simple and effective descriptor for indexing images in CBIR systems. PMID:25420264

  5. Model based and model free methods for features extraction to recognize gait using fast wavelet network classifier

    NASA Astrophysics Data System (ADS)

    Dorgham, Aycha; Bouchrika, Tahani; Zaied, Mourad

    2015-12-01

    Human gait is an attractive modality for recognizing people at a distance. Gait recognition systems aim to identify people by studying their manner of walking. In this paper, we contribute a new approach for gait recognition based on a fast wavelet network (FWN) classifier. To guarantee the effectiveness of our gait recognizer, we employ both static and dynamic gait characteristics. To extract the static features (dimensions of body parts), a model-based method is employed, while for the dynamic features (silhouette appearance and motion), a model-free method is used. The combination of these two methods aims at strengthening the WN classification results. Experimental results employing universal datasets show that our new gait recognizer performs better than already established ones.

  6. Fourier-based shape feature extraction technique for computer-aided B-Mode ultrasound diagnosis of breast tumor.

    PubMed

    Lee, Jong-Ha; Seong, Yeong Kyeong; Chang, Chu-Ho; Park, Jinman; Park, Moonho; Woo, Kyoung-Gu; Ko, Eun Young

    2012-01-01

    Early detection of breast tumor is critical in determining the best possible treatment approach. Due to its superiority compared with mammography in its ability to detect lesions in dense breast tissue, ultrasound imaging has become an important modality in breast tumor detection and classification. This paper discusses novel Fourier-based shape feature extraction techniques that provide enhanced classification accuracy for breast tumor in a computer-aided B-mode ultrasound diagnosis system. To demonstrate the effectiveness of the proposed method, experiments were performed using 4,107 ultrasound images with 2,508 malignancy cases. Experimental results show that the breast tumor classification accuracy of the proposed technique was 15.8%, 5.43%, 17.32%, and 13.86% higher than previous shape features such as number of protuberances, number of depressions, lobulation index, and dissimilarity, respectively. PMID:23367430
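
    A generic Fourier shape-descriptor sketch (not necessarily the authors' exact construction), assuming `contour` is an (N, 2) array of boundary points of a segmented lesion:

    ```python
    import numpy as np

    def fourier_descriptors(contour, n_keep=16):
        """Low-order Fourier coefficients of the boundary as shape features."""
        z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
        coeffs = np.fft.fft(z - z.mean())        # translation invariance
        mags = np.abs(coeffs)                    # rotation/start-point invariant
        mags = mags / mags[1]                    # scale invariance
        return mags[2:2 + n_keep]
    ```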

  7. Combining Spectral and Texture Features Using Random Forest Algorithm: Extracting Impervious Surface Area in Wuhan

    NASA Astrophysics Data System (ADS)

    Shao, Zhenfeng; Zhang, Yuan; Zhang, Lei; Song, Yang; Peng, Minjun

    2016-06-01

    Impervious surface area (ISA) is one of the most important indicators of urban environments. At present, based on multi-resolution remote sensing images, numerous approaches have been proposed to extract impervious surface, using statistical estimation, sub-pixel classification and the spectral mixture analysis method of sub-pixel analysis. Through these methods, impervious surfaces can be effectively applied to regional-scale planning and management. For large regions, however, high resolution remote sensing images can provide more details and are therefore more conducive to environmental monitoring and urban management analysis. Since the purpose of this study is to map impervious surfaces more effectively, three classification algorithms (random forests, decision trees, and artificial neural networks) were tested for their ability to map impervious surface. Random forests outperformed the decision trees and artificial neural networks in precision. Combining the spectral indices and texture, random forests achieved impervious surface extraction with a producer's accuracy of 0.98, a user's accuracy of 0.97, an overall accuracy of 0.98 and a kappa coefficient of 0.97.

  8. Spectral Morphology for Feature Extraction from Multi- and Hyper-spectral Imagery.

    SciTech Connect

    Harvey, N. R.; Porter, R. B.

    2005-01-01

    For accurate and robust analysis of remotely-sensed imagery it is necessary to combine the information from both spectral and spatial domains in a meaningful manner. The two domains are intimately linked: objects in a scene are defined in terms of both their composition and their spatial arrangement, and cannot accurately be described by information from either of these two domains on their own. To date there have been relatively few methods for combining spectral and spatial information concurrently. Most techniques involve separate processing for extracting spatial and spectral information. In this paper we will describe several extensions to traditional morphological operators that can treat spectral and spatial domains concurrently and can be used to extract relationships between these domains in a meaningful way. This includes the investigation and development of suitable vector-ordering metrics and machine-learning-based techniques for optimizing the various parameters of the morphological operators, such as morphological operator, structuring element and vector ordering metric. We demonstrate their application to a range of multi- and hyper-spectral image analysis problems.

  9. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM

    NASA Astrophysics Data System (ADS)

    Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.

    2015-01-01

    This work addresses the problem of lung sound classification, in particular the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually related to tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content from the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform bank of filters (WPT) and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used which, in each fold, chooses as the validation set a couple of cases, one including normal sounds and the other including wheezing sounds. Experimental results are reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, a C-weighted SVM with MFCC, achieve a balanced accuracy of 82.1%, the best result reported for this problem so far. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem, even using the same feature extraction methods.
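
    A sketch of the winning combination, assuming librosa for MFCC extraction and that `sounds` and `labels` hold the recordings with their classes (hypothetical names); the balanced class weighting compensates for the rarity of wheezes:

    ```python
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def mfcc_features(y, sr, n_mfcc=13):
        """Mean and std of MFCCs over the recording as a fixed-length vector."""
        m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([m.mean(axis=1), m.std(axis=1)])

    # sounds: list of (signal, sample_rate) pairs; labels: 1 = wheeze, 0 = normal
    X = np.array([mfcc_features(y, sr) for y, sr in sounds])
    clf = SVC(kernel='rbf', class_weight='balanced').fit(X, labels)
    ```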

  10. Classification of focal liver lesions on ultrasound images by extracting hybrid textural features and using an artificial neural network.

    PubMed

    Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Jiang, Yuan Yuan; Kim, Sung Min

    2015-01-01

    This paper focuses on the improvement of the diagnostic accuracy of focal liver lesions by quantifying the key features of cysts, hemangiomas, and malignant lesions on ultrasound images. The focal liver lesions were divided into 29 cysts, 37 hemangiomas, and 33 malignancies. A total of 42 hybrid textural features, composed of 5 first-order statistics, 18 gray-level co-occurrence matrix features, 18 Law's features, and echogenicity, were extracted. A total of 29 key features that were selected by principal component analysis were used as a set of inputs for a feed-forward neural network. For each lesion, the performance of the diagnosis was evaluated by using the positive predictive value, negative predictive value, sensitivity, specificity, and accuracy. The results of the experiment indicate that the proposed method exhibits great performance, with a high diagnostic accuracy of over 96% among all focal liver lesion groups (cyst vs. hemangioma, cyst vs. malignant, and hemangioma vs. malignant) on ultrasound images. The accuracy was slightly increased when echogenicity was included in the optimal feature set. These results indicate that it is possible for the proposed method to be applied clinically. PMID:26405925

  11. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains.

    PubMed

    Gonçalves, Ariadne Barbosa; Souza, Junior Silva; Silva, Gercina Gonçalves da; Cereda, Marney Pascoli; Pott, Arnildo; Naka, Marco Hiroshi; Pistori, Hemerson

    2016-01-01

    The classification of pollen species and types is an important task in many areas like forensic palynology, archaeological palynology and melissopalynology. This paper presents the first annotated image dataset for the Brazilian Savannah pollen types that can be used to train and test computer vision based automatic pollen classifiers. A first baseline human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. In order to assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine tuned and tested. The results of these tests are also presented in this paper. PMID:27276196

  13. A wavelet transform based feature extraction and classification of cardiac disorder.

    PubMed

    Sumathi, S; Beaulah, H Lilly; Vanithamani, R

    2014-09-01

    This paper presents an intelligent diagnosis system using a hybrid Adaptive Neuro-Fuzzy Inference System (ANFIS) model for the classification of electrocardiogram (ECG) signals. The method is based on using the Symlet wavelet transform to analyze the ECG signals and extract the parameters related to dangerous cardiac arrhythmias. These particular parameters were used as input to the ANFIS classifier for five important types of ECG signals: Normal Sinus Rhythm (NSR), Atrial Fibrillation (AF), Pre-Ventricular Contraction (PVC), Ventricular Fibrillation (VF), and Ventricular Flutter (VFLU) myocardial ischemia. The inclusion of ANFIS in the complex investigating algorithms yields very interesting recognition and classification capabilities across a broad spectrum of biomedical engineering. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracy. The results show that the proposed ANFIS model has potential advantages in classifying ECG signals, achieving a classification accuracy of 98.24%. PMID:25023652

  14. Qualitative Features Extraction from Sensor Data using Short-time Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fernando

    2004-01-01

    The information gathered from sensors is used to determine the health of a sensor. Once a normal mode of operation is established any deviation from the normal behavior indicates a change. This change may be due to a malfunction of the sensor(s) or the system (or process). The step-up and step-down features, as well as sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is defined by a step-up (charging), drift, and step-down (discharging). The sensor disturbances and spike are added while the system is in drift. The system runs for a period of at least three time-constants of the main process every time a process feature occurs (e.g. step change). The Short-Time Fourier Transform of the Signal is taken using the Hamming window. Three window widths are used. The DC value is removed from the windowed data prior to taking the FFT. The resulting three dimensional spectral plots provide good time frequency resolution. The results indicate distinct shapes corresponding to each process.
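
    A sketch of the windowed analysis described above, assuming `x` is one sensor record sampled at `fs`; scipy's STFT accepts the Hamming window and per-window DC removal directly, and the three widths are arbitrary placeholders:

    ```python
    import numpy as np
    from scipy.signal import stft

    def multi_width_stft(x, fs, widths=(64, 128, 256)):
        """Hamming-window STFTs at three widths, DC removed per window."""
        out = {}
        for nperseg in widths:
            f, t, Z = stft(x, fs=fs, window='hamming', nperseg=nperseg,
                           detrend='constant')   # subtracts each window's mean
            out[nperseg] = (f, t, np.abs(Z))     # magnitude for 3-D spectral plots
        return out
    ```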

  15. Anthocyanin characterization, total phenolic quantification and antioxidant features of some Chilean edible berry extracts.

    PubMed

    Brito, Anghel; Areche, Carlos; Sepúlveda, Beatriz; Kennelly, Edward J; Simirgiotis, Mario J

    2014-01-01

    The anthocyanin composition and HPLC fingerprints of six small berries endemic to the VIII region of Chile were investigated using high resolution mass analysis for the first time (HR-ToF-ESI-MS). The antioxidant features of the six endemic species were compared, including a variety of blueberries which is one of the most commercially significant berry crops in Chile. The anthocyanin fingerprints obtained for the fruits were compared and correlated with the antioxidant features measured by the bleaching of the DPPH radical, the ferric reducing antioxidant power (FRAP), the superoxide anion scavenging activity assay (SA), and the total content of phenolics, flavonoids and anthocyanins measured by spectroscopic methods. Thirty-one anthocyanins were identified, and the major ones were quantified by HPLC-DAD, mostly branched 3-O-glycosides of delphinidin, cyanidin, petunidin, peonidin and malvidin. Three phenolic acids (feruloylquinic acid, chlorogenic acid, and neochlorogenic acid) and several flavonols (hyperoside, isoquercitrin, quercetin, rutin, myricetin and isorhamnetin) were also identified. Calafate fruits showed the highest antioxidant activity (2.33 ± 0.21 μg/mL in the DPPH assay), followed by blueberry (3.32 ± 0.18 μg/mL) and arrayán (5.88 ± 0.21 μg/mL). PMID:25072199

  16. The effect of recording site on extracted features of motor unit action potential.

    PubMed

    Artuğ, N Tuğrul; Goker, Imran; Bolat, Bülent; Osman, Onur; Kocasoy Orhan, Elif; Baslo, M Baris

    2016-06-01

    Motor unit action potential (MUAP), which consists of individual muscle fiber action potentials (MFAPs), represents the electrical activity of the motor unit. The values of the MUAP features are changed by denervation and reinnervation in neurogenic involvement, as well as by muscle fiber loss with increased diameter variability in myopathic diseases. The present study is designed to investigate how increased muscle fiber diameter variability affects MUAP parameters in simulated motor units. In order to detect this variation, simulated MUAPs were calculated both at the innervation zone, where the MFAPs are more synchronized, and near the tendon, where they show increased temporal dispersion. Reinnervation in the neurogenic state increases MUAP amplitude for recordings at both the innervation zone and near the tendon. However, MUAP duration and the number of peaks significantly increased in the case of myopathy for recordings near the tendon. Furthermore, among the new features, "number of peaks × spike duration" was found to be the strongest indicator of MFAP dispersion in myopathy. MUAPs were also recorded from healthy participants in order to investigate the biological counterpart of the simulation data. MUAPs recorded near the tendon revealed significantly prolonged duration and decreased amplitude. Although the number of peaks increased when the needle was moved nearer the tendon, this increase was not significant. PMID:26817404

  17. Feature extraction and pattern classification for remotely sensed data analysis by a modular neural system

    NASA Astrophysics Data System (ADS)

    Blonda, Palma N.; la Forgia, Vincenza; Pasquariello, Guido; Satalino, Giuseppe

    1994-12-01

    In this paper a modular neural network architecture is proposed for the classification of remotely sensed data. The learning task of the supervised Multi-Layer Perceptron (MLP) classifier has been made more efficient by preprocessing the input with an unsupervised feature-discovery neural module. Two classification experiments were carried out to cope with two different situations that are very common in real remote sensing applications: the availability of complex data, such as high-dimensional and multisource data, and, on the contrary, an imperfect low-dimensional data set with a limited number of samples. In the first experiment, on a multitemporal data set, the Linear Propagation Network (LPN) was introduced to evaluate the effectiveness of a neural data compression stage before classification. In the second experiment, on a poor data set, the Kohonen Self-Organizing Feature Map (SOM) network was introduced to cluster data before labelling. The paper also illustrates the criterion for selecting the optimal number of cluster centres to be used as the number of nodes of the output SOM layer. The results of the two experiments confirm that modular learning performs better than non-modular learning in both learning quality and speed.

  18. An Efficient Feature Extraction Method with Pseudo-Zernike Moment in RBF Neural Network-Based Human Face Recognition System

    NASA Astrophysics Data System (ADS)

    Haddadnia, Javad; Ahmadi, Majid; Faez, Karim

    2003-12-01

    This paper introduces a novel method for the recognition of human faces in digital images using a new feature extraction method that combines the global and local information in frontal view of facial images. Radial basis function (RBF) neural network with a hybrid learning algorithm (HLA) has been used as a classifier. The proposed feature extraction method includes human face localization derived from the shape information. An efficient distance measure as facial candidate threshold (FCT) is defined to distinguish between face and nonface images. Pseudo-Zernike moment invariant (PZMI) with an efficient method for selecting moment order has been used. A newly defined parameter named axis correction ratio (ACR) of images for disregarding irrelevant information of face images is introduced. In this paper, the effect of these parameters in disregarding irrelevant information in recognition rate improvement is studied. Also we evaluate the effect of orders of PZMI in recognition rate of the proposed technique as well as RBF neural network learning speed. Simulation results on the face database of Olivetti Research Laboratory (ORL) indicate that the proposed method for human face recognition yielded a recognition rate of 99.3%.

  19. Rapid extraction of relative topography from Viking orbiter images. 2: Application to irregular topographic features

    NASA Technical Reports Server (NTRS)

    Davis, P. A.; Soderblom, L. A.

    1984-01-01

    The ratio and flat field photoclinometric methods for determining crater form topography are described. Both methods compensate for the effects of atmospheric scattering by subtracting a haze value from all brightness values. Algorithms were altered to derive relative topographic data for irregular features such as ejecta blankets, lava flows, graben and ridge scarps, dune forms, and stratified materials. After the elevations along the profiles are obtained by integration of the photometric function, a matrix transformation is applied to the image coordinates of each pixel within each profile, utilizing each pixel's integral height, to produce a projection of each profile line onto the surface. Pixel brightness values are then resampled along the projected track of each profile to determine a more correct height value for each pixel. Precision of the methods is discussed.

  20. Hyperspectral Feature Detection Onboard the Earth Observing One Spacecraft using Superpixel Segmentation and Endmember Extraction

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Bornstein, Benjamin; Bue, Brian D.; Tran, Daniel Q.; Chien, Steve A.; Castano, Rebecca

    2012-01-01

    We present a demonstration of onboard hyperspectral image processing with the potential to reduce mission downlink requirements. The system detects spectral endmembers and then uses them to map units of surface material. This summarizes the content of the scene, reveals spectral anomalies warranting fast response, and reduces data volume by two orders of magnitude. We have integrated this system into the Autonomous Sciencecraft Experiment for operational use onboard the Earth Observing One (EO-1) spacecraft. The system does not require prior knowledge about spectra of interest. We report on a series of trial overflights in which identical spacecraft commands are effective for autonomous spectral discovery and mapping for varied target features, scenes and imaging conditions.

  1. Determination of optimal wavelet denoising parameters for red edge feature extraction from hyperspectral data

    NASA Astrophysics Data System (ADS)

    Shafri, Helmi Z. M.; Yusof, Mohd R. M.

    2009-05-01

    A study of wavelet denoising of hyperspectral reflectance data, specifically the red edge position (REP) and its first derivative, is presented in this paper. A synthetic data set was created using a sigmoid to simulate the red edge feature for this study. The sigmoid is injected with Gaussian white noise to simulate noisy reflectance data from handheld spectroradiometers. The use of synthetic data enables better quantification and statistical study of the effects of wavelet denoising on the features of hyperspectral data, specifically the REP. The simulation study helps to identify the most suitable wavelet parameters for denoising and demonstrates the applicability of the wavelet-based denoising procedure in hyperspectral sensing of vegetation. The suitability of the thresholding rules and mother wavelets used in wavelet denoising is evaluated by comparing the denoised sigmoid function with the clean sigmoid, in terms of the shift in the inflection point meant to represent the REP, and also the overall change in the denoised signal compared with the clean one. The VisuShrink soft threshold was used with rescaling based on the noise estimate, in conjunction with wavelets of the Daubechies, Symlet and Coiflet families. It was found that for the VisuShrink threshold with single-level noise estimate rescaling, the Daubechies 9 and Symlet 8 wavelets produced the least distortion in the location of the sigmoid inflection point and the overall curve. The selected mother wavelets were used to denoise oil palm reflectance data to enable determination of the red edge position by locating the peak of the first derivative.
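
    A sketch of VisuShrink soft thresholding with a MAD noise estimate, assuming PyWavelets and a 1-D reflectance vector `y`; Daubechies 9 is one of the mother wavelets the study found best:

    ```python
    import numpy as np
    import pywt

    def visushrink_denoise(y, wavelet='db9', level=4):
        """Soft universal threshold with noise estimated from finest-scale MAD."""
        coeffs = pywt.wavedec(y, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # MAD noise estimate
        thresh = sigma * np.sqrt(2 * np.log(len(y)))     # universal threshold
        coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)
    ```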

  2. Multisensor-based real-time quality monitoring by means of feature extraction, selection and modeling for Al alloy in arc welding

    NASA Astrophysics Data System (ADS)

    Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben

    2015-08-01

    Multisensory data fusion-based online welding quality monitoring has gained increasing attention in intelligent welding processes. This paper mainly focuses on the automatic detection of typical welding defects for Al alloy in gas tungsten arc welding (GTAW) by means of analyzing arc spectrum, sound and voltage signals. Based on the developed algorithms in the time and frequency domains, 41 feature parameters were successively extracted from these signals to characterize the welding process and seam quality. Then, the proposed feature selection approach, a hybrid Fisher-based filter and wrapper, was successfully utilized to evaluate the sensitivity of each feature and reduce the feature dimensions. Finally, the optimal feature subset with 19 features was selected to obtain the highest accuracy, 94.72%, using the established classification model. This study provides a guideline for feature extraction, selection and dynamic modeling based on heterogeneous multisensory data to achieve a reliable online defect detection system in arc welding.

  3. Structural features and antitumor activity of a purified polysaccharide extracted from Sargassum horneri.

    PubMed

    Shao, Ping; Liu, Jia; Chen, Xiaoxiao; Fang, Zhongxiang; Sun, Peilong

    2015-02-01

    A polysaccharide fraction (SHPSA) was obtained from Sargassum horneri by hot-water extraction and sequential purification by anion-exchange chromatography and gel-filtration chromatography. SHPSA was found to be a neutral polysaccharide fraction with an average molecular weight of 5.78×10(5) Da, composed of T-D-Glcp, 1,3-D-Glcp, 1,6-D-Glcp and 1,3,6-D-Glcp in molar percentages of 1.00:4.17:1.17:0.89, respectively. Based on the results from chemical and NMR analyses, SHPSA was determined to be a glucan with β-(1→6) side chains linked to a β-(1→3) backbone with relatively few branch points. Moreover, SHPSA could inhibit the growth of human colon cancer DLD cells in a dose-dependent manner by inducing the apoptosis of DLD cells. SHPSA is therefore promising for future use as a natural antitumor agent. PMID:25450044

  4. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of Gabor wavelet frequencies. High precision in detecting needle voxels leads to robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain); better robustness and confidence were confirmed in practical experiments.
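
    The paper's detector is a 3D multi-resolution Gabor transformation; as a rough 2D stand-in, the sketch below (using scikit-image, an assumption) applies a small frequency/orientation Gabor bank to one ultrasound slice and keeps the maximum magnitude response, which is high where elongated, needle-like structures align with a filter.

      import numpy as np
      from skimage.filters import gabor

      def gabor_needle_response(slice2d, frequencies=(0.05, 0.1, 0.2), n_theta=8):
          """Max magnitude over a multi-frequency, multi-orientation Gabor bank."""
          best = np.zeros_like(slice2d, dtype=float)
          for f in frequencies:
              for theta in np.linspace(0.0, np.pi, n_theta, endpoint=False):
                  real, imag = gabor(slice2d, frequency=f, theta=theta)
                  np.maximum(best, np.hypot(real, imag), out=best)
          return best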

  5. Shape feature extraction and pattern recognition of sand particles and their impact

    NASA Astrophysics Data System (ADS)

    Shrestha, Bim P.; Suman, Sandip K.

    2005-11-01

    Sand deposition is a major problem in Nepalese rivers, with substantial impacts on hydropower generation, natural resource management and other sectors. Due to the nature of the soil and sand of the Nepalese mountains, it has become nearly impossible to predict and manage natural disasters and hazards. Sand deposition in rivers contributes to landslides, harms aquatic life and causes other environmental disorders. Sedimentation not only causes disasters but also reduces the overall efficiency of hydropower generation units. A systematic approach to the problem is presented in this work. Sand particles were collected from erosion-sensitive power plants and digital images of them were acquired. Software was developed on the MATLAB 6.5 platform to extract the shapes of the collected sand particles. These shapes were then analyzed by an artificial neural network, first trained on known inputs and outputs, and then on unknown inputs with known outputs. The trained network can recognize any given shape and assign it to the nearest of seven predefined shapes. The seven shape classes are numbered 1 to 7 in increasing order of sharp edges: shape 7 has many sharp edges and is considered the most erosive, whereas shape 1 has rounded edges and is considered the least erosive.
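
    As a rough modern re-creation of the shape analysis (the original used MATLAB 6.5 and a neural network), the sketch below uses OpenCV to extract per-particle circularity and a polygon-corner count, two simple proxies for the sharp-edge criterion; the thresholds and minimum-area cutoff are illustrative.

      import cv2
      import numpy as np

      def particle_shape_features(gray):
          """Circularity and corner count per particle from a grayscale (uint8) image."""
          _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          feats = []
          for c in contours:
              area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
              if perim == 0 or area < 50:      # skip specks
                  continue
              circularity = 4.0 * np.pi * area / perim ** 2   # 1.0 for a circle
              corners = len(cv2.approxPolyDP(c, 0.02 * perim, True))
              feats.append((circularity, corners))
          return feats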

  6. On the use of wavelet for extracting feature patterns from Multitemporal google earth satellite data sets

    NASA Astrophysics Data System (ADS)

    Lasaponara, R.

    2012-04-01

    The great amount of multispectral VHR satellite imagery, freely available in Google Earth, has opened new strategic challenges in the field of remote sensing for archaeological studies. These challenges substantially concern: (i) exploiting satellite data as fully as possible, (ii) setting up effective and reliable automatic and/or semiautomatic data processing strategies and (iii) integrating other data sources, from documentary resources to traditional ground survey, historical documentation, geophysical prospection, etc. VHR satellites provide high-resolution data that can improve knowledge of past human activities, offering precious qualitative and quantitative information that now shares many of the physical characteristics of aerial imagery. This makes them ideal for investigations ranging from the local to the regional scale (see, for example, Lasaponara and Masini 2006a,b, 2007a, 2011; Masini and Lasaponara 2006, 2007; Sparavigna 2010). Moreover, satellite data are still the only data source for research performed in areas where aerial photography is restricted for military or political reasons. Among the main advantages of satellite remote sensing over traditional field archaeology, here we focus on the use of wavelet processing for enhancing Google Earth satellite data, with particular reference to multitemporal datasets. Study areas selected from Southern Italy, the Middle East and South America are presented and discussed. The results show that automatic image enhancement can be successfully applied as a first step of supervised classification and intelligent data analysis for the semiautomatic identification of features of archaeological interest. Reference Lasaponara R, Masini N (2006a) On the potential of panchromatic and multispectral Quickbird data for archaeological prospection. Int J Remote Sens 27: 3607-3614. Lasaponara R

  7. Vegetation Feature Extraction from ALSM Measurements for Evaluation of GPS Signal Strength in Forested Area

    NASA Astrophysics Data System (ADS)

    Liu, P.; Lee, H.; Slatton, K. C.

    2009-12-01

    Evaluation of GPS signal strength before data collection in forested areas is of particular interest to surveyors and forestry researchers. However, GPS positioning performance is difficult to predict because the signal propagating through vegetation suffers attenuation, which degrades signal reception and positioning accuracy. The conventional wisdom, embodied in traditional slab models, is that the forest behaves like a uniform horizontal slab, so signal attenuation depends on the elevation angle (θ). Elevation does dominate signal attenuation at the global scale, but large differences in attenuation can still be observed across azimuths (φ) at the same elevation angle on the local scale. 2D skyward photography methods have been used to compute relative canopy closure and relate GPS signal strength to the open-sky fraction. Although these methods account for the directional elements (θ, φ), the lack of 3D forest profile information makes it difficult to estimate the actual vegetative interference along the signal path. An ALSM system can collect high-resolution 3D spatial information from a region of interest with high laser pulse and scan rates. Thanks to its multiple-return capability and dense horizontal coverage, the vegetation structure is captured relatively well. This study presents the development of directional 3D structuring elements to segment ALSM points and relate them to signal attenuation. The attenuation of GPS signals is determined by mapping between the signal-to-noise ratio (SNR) of received GPS signals under canopies and the directional canopy features derived from ALSM observations. Firstly, a directional cylindrical scope function is constructed to segment the obstructions that interfere with signal propagation between the satellite vehicle and the receiver, and a vegetation feature, called the directional vegetation path length (DVPL), is computed by measuring the
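
    The slab model that the abstract contrasts against can be written in a few lines: attenuation scales with the path length through a uniform horizontal canopy layer, so it depends only on elevation angle. The per-metre loss coefficient below is a placeholder; a DVPL-style refinement would replace the uniform depth with per-azimuth vegetation path lengths segmented from the ALSM point cloud.

      import numpy as np

      def slab_attenuation_db(canopy_depth_m, elevation_deg, alpha_db_per_m=0.3):
          """Slab-model loss: per-metre attenuation times path length through canopy."""
          path_m = canopy_depth_m / np.sin(np.radians(elevation_deg))
          return alpha_db_per_m * path_m

      # Loss shrinks as elevation rises; azimuthal variation is what DVPL adds.
      print(slab_attenuation_db(10.0, np.array([15.0, 30.0, 60.0, 90.0])))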

  8. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures

    PubMed Central

    Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor rhythms (SMRs) by imagining a movement is a skill that can be unintuitive to acquire and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems has been proposed that provides feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within this category. Specifically, our adaptive strategy includes a simple scheme based on the common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was demonstrated by online testing on 10 healthy participants. In addition, we describe some features we implemented to improve the system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback.
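
    A minimal sketch of the CSP-plus-SVM pipeline named above, assuming NumPy, SciPy and scikit-learn are available; trial shapes, the number of filter pairs and the linear kernel are illustrative choices rather than the authors' exact configuration.

      import numpy as np
      from scipy.linalg import eigh
      from sklearn.svm import SVC

      def csp_filters(trials_a, trials_b, n_pairs=3):
          """CSP spatial filters from two classes of (channels, samples) trials."""
          def avg_cov(trials):
              return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
          ca, cb = avg_cov(trials_a), avg_cov(trials_b)
          # Generalized eigenproblem ca w = lambda (ca + cb) w;
          # the extreme eigenvectors discriminate the two classes best.
          _, vecs = eigh(ca, ca + cb)
          picks = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
          return vecs[:, picks].T

      def csp_features(trials, w):
          """Log-variance of spatially filtered trials, the standard CSP feature."""
          return np.array([np.log(np.var(w @ t, axis=1)) for t in trials])

      # Hypothetical usage with pre-epoched motor-imagery trials:
      # w = csp_filters(left_trials, right_trials)
      # clf = SVC(kernel="linear").fit(csp_features(all_trials, w), labels)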

  9. Rapid discrimination and feature extraction of three Chamaecyparis species by static-HS/GC-MS.

    PubMed

    Chen, Ying-Ju; Lin, Chun-Ya; Cheng, Sen-Sung; Chang, Shang-Tzen

    2015-01-28

    This study aimed to develop a rapid and accurate analytical method for discriminating three Chamaecyparis species (C. formosensis, C. obtusa, and C. obtusa var. formosana) that are not easily distinguished by their volatile compounds. A total of 23 leaf samples from the three species were analyzed by static headspace sampling (static-HS) coupled with gas chromatography-mass spectrometry (GC-MS). The static-HS procedure, whose experimental parameters were properly optimized, yielded a high Pearson correlation-based similarity between essential-oil and VOC compositions (r = 0.555-0.999). Thirty-six major constituents were identified, and together with the results of cluster analysis (CA), a large variation in their contents among the three species was observed. Principal component analysis (PCA) graphically illustrated the relationships between characteristic components and tree species. The static-HS-based procedure clearly and greatly enhanced the speed of precise chemical fingerprinting of small sample amounts, thus providing a fast and reliable tool for predicting constituent characteristics of essential oils and offering good opportunities for studying the role of these feature compounds in chemotaxonomy or ecophysiology. PMID:25590241

  10. Real-time vision system using multiple-feature extraction for the Toro robot

    NASA Astrophysics Data System (ADS)

    Climent, Joan; Grau, Antoni

    1994-08-01

    This paper presents a vision system developed for the guidance of a mobile robot that emulates the behavior of a bull in a corrida. A camera is placed at the front of the robot; this means that, for the first time, we look at the scene from the bull's point of view. Because how bull vision works is difficult to establish, our model is restricted to the best-known features of bull behavior in corrida scenes. For this reason, we restrict the bull vision model to two aspects: high sensitivity to red color and constant attention to moving objects. We emulate this model using two image processors: the first performs color segmentation, the second motion segmentation. The information obtained is used as feedback for the mobile robot. Our approach to view planning is to first simplify the 3-D decision-making problem into a 2-D problem. The architecture of our real-time vision system and its implementation are described in detail.
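
    The two segmentations described above map naturally onto a few OpenCV calls. This sketch is an illustrative re-creation, not the original two-processor implementation; the HSV bounds and the difference threshold are assumptions to be tuned.

      import cv2

      def red_mask(bgr_frame):
          """Red hue wraps around 0 in HSV, hence two ranges OR-ed together."""
          hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
          low = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
          high = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
          return cv2.bitwise_or(low, high)

      def motion_mask(prev_gray, cur_gray, thresh=25):
          """Crude motion segmentation by absolute frame differencing."""
          diff = cv2.absdiff(prev_gray, cur_gray)
          return cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)[1]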

  11. Segmentation and feature extraction of cervical spine x-ray images

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1999-05-01

    As part of an R&D project in mixed text/image database design, the National Library of Medicine has archived a collection of 17,000 digitized x-ray images of the cervical and lumbar spine, collected as part of the second National Health and Nutrition Examination Survey (NHANES II). To make this image data available and usable to a wide audience, we are investigating techniques for indexing the image content by automated or semi-automated means. Indexing the images by features of interest to researchers in spine disease and structure requires effective segmentation of the vertebral anatomy. This paper describes work in progress toward segmenting the cervical spine images into anatomical components of interest, including anatomical landmarks for vertebral location, and the segmentation and identification of individual vertebrae. Our work includes developing a reliable method for automatically fixing an anatomy-based coordinate system in the images, and work on adaptively thresholding the images using methods previously applied by researchers in cardioangiography. We describe the motivation for our work and present our current results in both areas.
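
    Adaptive thresholding of the kind mentioned above is available off the shelf; the OpenCV call below is a generic stand-in for the cardioangiography-derived method, with the block size and offset as tunable assumptions.

      import cv2

      def adaptive_binarize(xray_gray, block_size=51, c=5):
          """Local-mean adaptive threshold on an 8-bit grayscale image.

          block_size must be odd and is tuned per image set; c is an offset
          subtracted from each local mean before thresholding.
          """
          return cv2.adaptiveThreshold(xray_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, block_size, c)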

  12. A Method of Three-Dimensional Recording of Mandibular Movement Based on Two-Dimensional Image Feature Extraction

    PubMed Central

    Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Background and Objective To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. Plaster models of a subject's upper and lower dentition were made using conventional methods. A mandibular occlusal splint was made on the plaster model, and the occlusal surface was then removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint to a detection target marked with intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. The measured coordinate values were compared with the actual values of the 30 intersections on the detection target using paired t-tests. Results The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate values and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-tests showed no statistically significant differences between the measured and actual coordinate values of the 30 intersection points (P > 0.05). Conclusions Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording

  13. Data mining framework for identification of myocardial infarction stages in ultrasound: A hybrid feature extraction paradigm (PART 2).

    PubMed

    Sudarshan, Vidya K; Acharya, U Rajendra; Ng, E Y K; Tan, Ru San; Chou, Siaw Meng; Ghista, Dhanjoo N

    2016-04-01

    Early expansion of the infarcted zone after Acute Myocardial Infarction (AMI) has serious short- and long-term consequences and contributes to increased mortality. Thus, identifying the moderate and severe phases of AMI before they lead to other catastrophic post-MI conditions is most important for aggressive treatment and management. Advanced image processing techniques, together with a robust classifier applied to two-dimensional (2D) echocardiograms, may aid the automated classification of the extent of infarcted myocardium. This paper therefore proposes novel algorithms, namely the Curvelet Transform (CT) and Local Configuration Pattern (LCP), for automated detection of normal, moderately infarcted and severely infarcted myocardium using 2D echocardiograms. The methodology extracts LCP features from the CT coefficients of echocardiograms. The obtained features are subjected to the Marginal Fisher Analysis (MFA) dimensionality reduction technique, followed by a fuzzy entropy-based ranking method. Different classifiers are used to separate the ranked features into three classes (normal, moderately infarcted and severely infarcted) according to the extent of damage to the myocardium. The developed algorithm achieved an accuracy of 98.99%, sensitivity of 98.48% and specificity of 100% with a Support Vector Machine (SVM) classifier using only six features. Furthermore, we developed an integrated index called the Myocardial Infarction Risk Index (MIRI) to detect normal, moderately and severely infarcted myocardium using a single number. The proposed system may aid clinicians in faster identification and quantification of the extent of infarcted myocardium using 2D echocardiograms, and may also help identify persons at risk of developing heart failure based on the extent of infarcted myocardium. PMID:26897481

  14. Sensors Fusion based Online Mapping and Features Extraction of Mobile Robot in the Road Following and Roundabout

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed A. H.; Mailah, Musa; Yussof, Wan Azhar B.; Hamedon, Zamzuri B.; Yussof, Zulkifli B.; Majeed, Anwar P. P.

    2016-02-01

    A road-feature-extraction-based mapping system using a sensor fusion technique for mobile robot navigation in road environments is presented in this paper. Online mapping is performed continuously in the road environment to find the road properties that enable the robot to move from a start position to a pre-determined goal while discovering and detecting roundabouts. Sensor fusion of a laser range finder, camera and odometry, installed on a new platform, is used to find the robot's path and localize it within its environment. Local maps are built using the camera and laser range finder to recognize road border parameters such as road width, curbs and roundabouts. Results show the capability of the robot, with the proposed algorithms, to effectively identify road environments and build local maps for road following and roundabout detection.

  15. An Integrated Front-End Readout And Feature Extraction System for the BaBar Drift Chamber

    SciTech Connect

    Zhang, Jinlong; /Colorado U.

    2006-08-10

    The BABAR experiment has been operating at SLAC's PEP-II asymmetric B-Factory since 1999. The accelerator has achieved more than three times its original design luminosity of 3 × 10³³ cm⁻² s⁻¹, with plans for an additional factor of three in the next two years. To meet the experiment's performance requirements in the face of significantly higher trigger and background rates, the drift chamber's front-end readout system has been redesigned around the Xilinx Spartan 3 FPGA. The new system implements analysis and feature-extraction of digitized waveforms in the front-end, reducing the data bandwidth required by a factor of four.

  16. Comparison of sEMG-Based Feature Extraction and Motion Classification Methods for Upper-Limb Movement

    PubMed Central

    Guo, Shuxiang; Pang, Muye; Gao, Baofeng; Hirata, Hideyuki; Ishihara, Hidenori

    2015-01-01

    The surface electromyography (sEMG) technique is proposed for muscle activation detection and the intuitive control of prostheses or robot arms. Motion recognition is widely used to map sEMG signals to target motions. The main factors preventing real-time application of such methods are unsatisfactory motion recognition rates and time consumption. The purpose of this paper is to compare eight combinations of four feature extraction methods (Root Mean Square (RMS), Detrended Fluctuation Analysis (DFA), Weight Peaks (WP), and Muscular Model (MM)) and two classifiers (Neural Networks (NN) and Support Vector Machine (SVM)) for the task of mapping sEMG signals to eight upper-limb motions, in order to uncover the relations between these methods and propose a suitable combination. Seven subjects participated in the experiment, and six muscles of the upper limb were selected for recording sEMG signals. The experimental results showed that the NN classifier obtained the highest recognition accuracy during training (88.7%) while SVM performed better in real-time experiments (85.9%). For time consumption, SVM took less time than NN during training but needed more time for real-time computation. Among the four feature extraction methods, WP had the highest recognition rate during training (97.7%) while MM performed best during real-time tests (94.3%). The combination of MM and NN is recommended for strict real-time applications, while a combination of MM and SVM will be more suitable when time consumption is not a key requirement. PMID:25894941
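
    Of the four feature extractors compared, RMS is the simplest to state. The sketch below computes windowed RMS features per channel; the sampling rate and window/step lengths are illustrative assumptions, not the paper's settings.

      import numpy as np

      def rms_features(emg, fs=1000, win_ms=200, step_ms=50):
          """Windowed RMS per channel for (channels, samples) sEMG data."""
          win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
          starts = range(0, emg.shape[1] - win + 1, step)
          return np.array([np.sqrt(np.mean(emg[:, s:s + win] ** 2, axis=1))
                           for s in starts])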

  17. Identification of lesion images from gastrointestinal endoscope based on feature extraction of combinational methods with and without learning process.

    PubMed

    Liu, Ding-Yun; Gan, Tao; Rao, Ni-Ni; Xing, Yao-Wen; Zheng, Jie; Li, Sang; Luo, Cheng-Si; Zhou, Zhong-Jun; Wan, Yong-Li

    2016-08-01

    The gastrointestinal endoscopy in this study refers to conventional gastroscopy and wireless capsule endoscopy (WCE). Both techniques produce a large number of images in each diagnosis, and manual lesion detection from these images is time-consuming and inaccurate. This study designed a new computer-aided method to detect lesion images. We initially designed an algorithm named joint diagonalisation principal component analysis (JDPCA), which involves no approximation, iteration or matrix-inversion procedures. Thus, JDPCA has low computational complexity and is suitable for dimension reduction of gastrointestinal endoscopic images. Then, a novel image feature extraction method was established by combining JDPCA-based machine learning with a conventional feature extraction algorithm that requires no learning. Finally, a new computer-aided method is proposed to identify gastrointestinal endoscopic images containing lesions. Clinical gastroscopic and WCE images containing lesions of early upper digestive tract cancer and small intestinal bleeding, comprising 1330 images from 291 patients in total, were used to validate the proposed method. The experimental results show that, for the detection of early oesophageal cancer images, early gastric cancer images and small intestinal bleeding images, the mean accuracies of the proposed method were 90.75%, 90.75% and 94.34%, with standard deviations (SDs) of 0.0426, 0.0334 and 0.0235, respectively. The areas under the curves (AUCs) were 0.9471, 0.9532 and 0.9776, with SDs of 0.0296, 0.0285 and 0.0172, respectively. Compared with traditional related methods, our method showed better performance. It may therefore provide worthwhile guidance for improving the efficiency and accuracy of gastrointestinal disease diagnosis and has good prospects for clinical application. PMID:27236223

  18. PET(CO2) measurement and feature extraction of capnogram signals for extubation outcomes from mechanical ventilation.

    PubMed

    Rasera, Carmen C; Gewehr, Pedro M; Domingues, Adriana Maria T

    2015-02-01

    Capnography is a continuous and noninvasive method for carbon dioxide (CO2) measurement, and it has become the standard of care for basic respiratory monitoring of intubated patients in the intensive care unit. In addition, it has been used to adjust ventilatory parameters during mechanical ventilation (MV). However, substantial debate remains as to whether capnography is useful during weaning and extubation from MV in the postoperative period. Thus, the main objective of this study was to present a new use for time-based capnography data by measuring the end-tidal CO2 pressure (PETCO2) and the partial pressure of arterial CO2 (PaCO2) and extracting features of capnogram signals before extubation from MV, to evaluate capnography as a predictor of extubation outcome in infants after cardiac surgery. Altogether, 82 measurements were analysed; 71.9% of patients were successfully extubated, and 28.1% met the criteria for extubation failure within 48 h. The ROC-AUC analysis of quantitative capnogram measures showed significant differences (p < 0.001) for expiratory time (0.873), slope of phase III (0.866), slope ratio (0.923) and ascending angle (0.897). In addition, the analysis of PETCO2 (0.895) and PaCO2 (0.924) obtained 30 min before extubation showed significant differences between groups. The mean PETCO2 value for the successful and failed extubation groups was 39.04 mmHg and 46.27 mmHg, respectively. It was also observed that CO2 values in patients returned to MV were high, 82.8 ± 21 mmHg, at the time of extubation failure. Thus, PETCO2 measurements and analysis of features extracted from a capnogram can differentiate extubation outcomes in infant patients under MV, thereby reducing the physiologic instability caused by failure in this process. PMID:25582400

  20. Relative brain signature: a population-based feature extraction procedure to identify functional biomarkers in the brain of alcoholics

    PubMed Central

    Karamzadeh, Nader; Ardeshirpour, Yasaman; Kellman, Matthew; Chowdhry, Fatima; Anderson, Afrouz; Chorlian, David; Wegman, Edward; Gandjbakhche, Amir

    2015-01-01

    Background A novel feature extraction technique, the Relative Brain Signature (RBS), which characterizes subjects' relationship to populations with distinctive neuronal activity, is presented. The proposed method transforms a set of electroencephalography (EEG) time series in a high-dimensional space to a space of fewer dimensions by projecting the time series onto orthogonal subspaces. Methods We apply our technique to an EEG data set of 77 abstinent alcoholics and 43 control subjects. To characterize each subject's relationship to the alcoholic and control populations, one RBS vector is constructed with respect to the alcoholic population and one with respect to the control population. We used the extracted RBS vectors to identify functional biomarkers over the brain of alcoholics. To achieve this goal, a classification algorithm was used to categorize subjects into alcoholics and controls, which resulted in 78% accuracy. Results and Conclusions Using the classification results, regions with distinctive functionality in alcoholic subjects are detected. These affected regions, with respect to their spatial extent, are the frontal, anterior frontal, centro-parietal, parieto-occipital, and occipital lobes. The distribution of these regions over the scalp indicates that the impact of alcohol on the cerebral cortex of alcoholics is spatially diffuse. Our findings suggest that these regions engage the right hemisphere of the alcoholics' brain more than the left. PMID:26221569
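
    One plausible reading of the projection step, offered as a sketch rather than the authors' exact algorithm: build an orthonormal basis spanning a population's recordings via the SVD, then score a subject by how much of each channel's energy that subspace captures.

      import numpy as np

      def population_basis(stacked_population, k=10):
          """Top-k left singular vectors of a (samples, recordings) population matrix."""
          u, _, _ = np.linalg.svd(stacked_population, full_matrices=False)
          return u[:, :k]

      def rbs_vector(subject_eeg, basis):
          """Fraction of each channel's energy captured by the population subspace.

          subject_eeg: (samples, channels) array for one subject.
          """
          proj = basis @ (basis.T @ subject_eeg)
          return np.linalg.norm(proj, axis=0) / np.linalg.norm(subject_eeg, axis=0)

      # One vector with respect to an alcoholic basis and one with respect to a
      # control basis, concatenated, would then feed the classifier.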

  1. Quantitative analysis of ex vivo colorectal epithelium using an automated feature extraction algorithm for microendoscopy image data.

    PubMed

    Prieto, Sandra P; Lai, Keith K; Laryea, Jonathan A; Mizell, Jason S; Muldoon, Timothy J

    2016-04-01

    Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893

  2. Analyzing in situ gene expression in the mouse brain with image registration, feature extraction and block clustering

    PubMed Central

    Jagalur, Manjunatha; Pal, Chris; Learned-Miller, Erik; Zoeller, R Thomas; Kulp, David

    2007-01-01

    Background Many important high throughput projects use in situ hybridization and may require the analysis of images of spatial cross sections of organisms taken with cellular level resolution. Projects creating gene expression atlases at unprecedented scales for the embryonic fruit fly as well as the embryonic and adult mouse already involve the analysis of hundreds of thousands of high resolution experimental images mapping mRNA expression patterns. Challenges include accurate registration of highly deformed tissues, associating cells with known anatomical regions, and identifying groups of genes whose expression is coordinately regulated with respect to both concentration and spatial location. Solutions to these and other challenges will lead to a richer understanding of the complex system aspects of gene regulation in heterogeneous tissue. Results We present an end-to-end approach for processing raw in situ expression imagery and performing subsequent analysis. We use a non-linear, information-theoretic image registration technique specifically adapted for mapping expression images to anatomical annotations, and a method for extracting expression information within an anatomical region. Our method consists of coarse registration, fine registration, and expression feature extraction steps. From this we obtain a matrix of expression characteristics with rows corresponding to genes and columns corresponding to anatomical sub-structures. We perform matrix block cluster analysis using a novel row-column mixture model and relate clustered patterns to Gene Ontology (GO) annotations. Conclusion The resulting registrations suggest that our method is robust over intensity levels and shape variations in ISH imagery. Functional enrichment studies from both simple analysis and block clustering indicate that gene relationships consistent with biological knowledge of neuronal gene functions can be extracted from large ISH image databases such as the Allen Brain Atlas [1

  3. A novel non-linear recursive filter design for extracting high rate pulse features in nuclear medicine imaging and spectroscopy.

    PubMed

    Sajedi, Salar; Kamal Asl, Alireza; Ay, Mohammad R; Farahani, Mohammad H; Rahmim, Arman

    2013-06-01

    Applications in imaging and spectroscopy rely on pulse processing methods for appropriate data generation. Often, the particular method utilized does not greatly affect data quality, but in some scenarios, such as high count rates or high-frequency pulses, the choice merits extra consideration. In the present study, a new approach to pulse processing in nuclear medicine imaging and spectroscopy is introduced and evaluated. The new non-linear recursive filter (NLRF) performs nonlinear processing of the input signal and extracts the main pulse characteristics, with the powerful ability to recover pulses that would ordinarily be lost to pulse pile-up. The filter design permits sampling frequencies lower than the Nyquist frequency. In the literature, for systems involving NaI(Tl) detectors and photomultiplier tubes (PMTs) with a signal bandwidth of 15 MHz, the sampling frequency should be at least 30 MHz (the Nyquist rate), whereas in the present work a sampling rate of 3.3 MHz was shown to yield very promising results. This was achieved by exploiting the known pulse shape instead of using a general sampling algorithm. The simulation and experimental results show that the proposed filter enhances count rates in spectroscopy. With this filter, the system behaves almost identically to a general pulse detection system, with the dead time considerably reduced to the new sampling time (300 ns). Furthermore, because of its unique ability to determine exact event times, the method could prove very useful in time-of-flight PET imaging. PMID:22964063

  4. Automatic 3D segmentation of the kidney in MR images using wavelet feature extraction and probability shape model

    NASA Astrophysics Data System (ADS)

    Akbari, Hamed; Fei, Baowei

    2012-02-01

    Numerical estimation of kidney size is useful in evaluating kidney conditions, especially when serial MR imaging is performed to evaluate kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images by extracting texture features and statistically matching the geometrical shape of the kidney. A set of wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classifying kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, the W-SVMs tentatively label each voxel around the kidney model as kidney or non-kidney by texture matching. A probabilistic kidney model is created using 10 segmented MRI datasets and is initially localized based on intensity profiles in three directions. Weight functions are defined for the wavelet-based, intensity-based, and model-based labels, so each voxel carries three labels and three weights. Using a 3D edge detection method, the model is re-localized and the segmented kidney is refined by a region growing method within the model region. The probabilistic model is then re-localized based on these results, and the loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.

  5. An enhanced algorithm for knee joint sound classification using feature extraction based on time-frequency analysis.

    PubMed

    Kim, Keo Sik; Seo, Jeong Hwan; Kang, Jin U; Song, Chul Gyu

    2009-05-01

    Vibroarthrographic (VAG) signals, generated by human knee movement, are non-stationary and multi-component in nature, and their time-frequency distribution (TFD) provides a powerful means of analysis. The objective of this paper is to improve the classification accuracy of features obtained from the TFD of normal and abnormal VAG signals, using segmentation by dynamic time warping (DTW) and denoising by singular value decomposition (SVD). VAG and knee angle signals, recorded simultaneously during one flexion and one extension of the knee, were segmented and normalized at 0.5 Hz by the DTW method. The noise within the TFD of the segmented VAG signals was then reduced by the SVD algorithm, and a back-propagation neural network (BPNN) was used to classify the normal and abnormal VAG signals. The characteristic parameters of the VAG signals consist of the energy, energy spread, frequency and frequency spread parameters extracted from the TFD. A total of 1408 segments (1031 normal, 377 abnormal) were used for training and evaluating the BPNN. The resulting average classification accuracy was 91.4% (standard deviation ±1.7%). The proposed method shows good potential for the non-invasive diagnosis and monitoring of joint disorders such as osteoarthritis. PMID:19217685
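
    The DTW step at the heart of the segmentation can be sketched as the classic dynamic-programming recursion; this is the textbook algorithm, not the paper's specific segmentation procedure against the knee-angle signal.

      import numpy as np

      def dtw_distance(a, b):
          """O(len(a)*len(b)) dynamic time warping distance between 1-D signals."""
          n, m = len(a), len(b)
          d = np.full((n + 1, m + 1), np.inf)
          d[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])
                  # Best of match, insertion, deletion at each grid cell.
                  d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
          return d[n, m]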

  6. A 181 GOPS AKAZE Accelerator Employing Discrete-Time Cellular Neural Networks for Real-Time Feature Extraction

    PubMed Central

    Jiang, Guangli; Liu, Leibo; Zhu, Wenping; Yin, Shouyi; Wei, Shaojun

    2015-01-01

    This paper proposes a real-time feature extraction VLSI architecture for high-resolution images based on the accelerated KAZE (AKAZE) algorithm. Firstly, a new system architecture is proposed. It increases system throughput, provides flexibility in image resolution, and offers trade-offs between speed and scaling robustness. The architecture consists of a two-dimensional pipeline array that fully utilizes computational similarities among octaves. Secondly, a substructure (a block-serial discrete-time cellular neural network) that can realize a nonlinear filter is proposed. This structure decreases the memory demand by removing data dependency. Thirdly, a hardware-friendly descriptor is introduced to overcome the hardware design bottleneck through a polar sample pattern, and a simplified method of realizing rotation invariance is also presented. Finally, the proposed architecture is implemented in TSMC 65 nm CMOS technology. The experimental results show a performance of 127 fps at full HD resolution and a 200 MHz clock frequency. The peak performance reaches 181 GOPS, and the throughput is double that of other state-of-the-art architectures. PMID:26404305
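
    For comparison with the ASIC, the software reference of the AKAZE pipeline is a few lines in OpenCV (version 3.x or later assumed); the input file name is hypothetical.

      import cv2

      img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
      akaze = cv2.AKAZE_create()                            # default AKAZE settings
      keypoints, descriptors = akaze.detectAndCompute(img, None)
      print(len(keypoints), None if descriptors is None else descriptors.shape)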

  7. A 181 GOPS AKAZE Accelerator Employing Discrete-Time Cellular Neural Networks for Real-Time Feature E