Science.gov

Sample records for acoustic feature extraction

  1. Feature extraction from time domain acoustic signatures of weapons systems fire

    NASA Astrophysics Data System (ADS)

    Yang, Christine; Goldman, Geoffrey H.

    2014-06-01

    The U.S. Army is interested in developing algorithms to classify weapons systems fire based on their acoustic signatures. To support this effort, an algorithm was developed to extract features from acoustic signatures of weapons systems fire and applied to over 1300 signatures. The algorithm filtered the data using standard techniques, then estimated the amplitude and time of the first five peaks and troughs and the location of the zero crossing in the waveform. The results, stored in Excel spreadsheets, are being used to develop and test acoustic classifier algorithms.
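
    The feature set described above (amplitudes and times of the first few peaks and troughs, plus the zero crossing) can be approximated with standard signal-processing tools. The following is a minimal sketch, assuming a filtered waveform held in a NumPy array sampled at fs; it illustrates the idea only and is not the Army's algorithm.

        import numpy as np
        from scipy.signal import find_peaks

        def first_extrema_features(x, fs, n=5):
            """Illustrative sketch: times/amplitudes of the first n peaks and
            troughs and the first zero crossing of a filtered waveform x."""
            peaks, _ = find_peaks(x)                    # indices of local maxima
            troughs, _ = find_peaks(-x)                 # indices of local minima
            zc = np.where(np.diff(np.signbit(x)))[0]    # sign-change (zero-crossing) indices
            return {
                "peak_times": peaks[:n] / fs,
                "peak_amps": x[peaks[:n]],
                "trough_times": troughs[:n] / fs,
                "trough_amps": x[troughs[:n]],
                "first_zero_crossing": zc[0] / fs if zc.size else None,
            }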

  2. Extraction of features from ultrasound acoustic emissions: a tool to assess the hydraulic vulnerability of Norway spruce trunkwood?

    PubMed Central

    Rosner, Sabine; Klein, Andrea; Wimmer, Rupert; Karlsson, Bo

    2011-01-01

    Summary
    • The aim of this study was to assess the hydraulic vulnerability of Norway spruce (Picea abies) trunkwood by extraction of selected features of acoustic emissions (AEs) detected during dehydration of standard-size samples.
    • The hydraulic method was used as the reference method to assess the hydraulic vulnerability of trunkwood of different cambial ages. Vulnerability curves were constructed by plotting the percentage loss of conductivity vs an overpressure of compressed air.
    • Differences in hydraulic vulnerability were very pronounced between juvenile and mature wood samples; therefore, useful AE features, such as peak amplitude, duration and relative energy, could be filtered out. The AE rates of signals clustered by amplitude and duration ranges and the AE energies differed greatly between juvenile and mature wood at identical relative water losses.
    • Vulnerability curves could be constructed by relating the cumulated amount of relative AE energy to the relative loss of water and to xylem tension. AE testing in combination with feature extraction offers a readily automated and easy-to-use alternative to the hydraulic method. PMID:16771986

  3. A Joint Feature Extraction and Data Compression Method for Low Bit Rate Transmission in Distributed Acoustic Sensor Environments

    DTIC Science & Technology

    2004-12-01

    ...target classification. In this Phase I research, a subband-based joint detection, feature extraction, data compression/encoding system for low bit... well as data compression/encoding without incurring degradation in the overall performance. New methods for formation of the optimal sparse sensor arrays...

  4. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM

    NASA Astrophysics Data System (ADS)

    Sosa, Germán D.; Cruz-Roa, Angel; González, Fabio A.

    2015-01-01

    This work addresses the problem of lung sound classification, in particular, the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually associated with tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification, which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content from the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform filter bank (WPT), and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used, which, in each fold, chooses as the validation set a pair of cases, one including normal sounds and the other including wheezing sounds. Experimental results are reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best result using the suggested approach, a C-weighted SVM with MFCC, achieves 82.1% balanced accuracy, the best result reported for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem, even using the same feature extraction methods.
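
    A minimal sketch of the best-performing combination reported above (MFCC features with a class-weighted SVM) might look like the following. The per-file MFCC averaging, the file/label inputs and the RBF kernel are assumptions for illustration, not the authors' exact configuration.

        import numpy as np
        import librosa                       # MFCC extraction
        from sklearn.svm import SVC

        def mfcc_features(path, n_mfcc=13):
            """Mean MFCC vector for one recording (simplified summary feature)."""
            y, sr = librosa.load(path, sr=None)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return mfcc.mean(axis=1)

        def train_wheeze_classifier(files, labels):
            """files: list of audio paths; labels: 0 = normal, 1 = wheeze (assumed inputs).
            class_weight='balanced' re-weights the penalty per class, one way to
            realize a 'C-weighted' SVM for unbalanced data."""
            X = np.array([mfcc_features(f) for f in files])
            clf = SVC(kernel="rbf", class_weight="balanced")
            return clf.fit(X, np.array(labels))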

  5. Les Traits acoustiques (Acoustic Features)

    ERIC Educational Resources Information Center

    Rossi, Mario

    1977-01-01

    An analysis of the theory of distinctive features advanced by Roman Jakobson, Gunnar Fant and Morris Halle in "Preliminaries to Speech Analysis." The notion of binarism, the criterion of distinctiveness and the definition of features are discussed. Questions leading to further research are raised. (Text is in French.) (AMH)

  6. Fourier descriptor features for acoustic landmine detection

    NASA Astrophysics Data System (ADS)

    Keller, James M.; Cheng, Zhanqi; Gader, Paul D.; Hocaoglu, Ali K.

    2002-08-01

    Signatures of buried landmines are often difficult to separate from those of clutter objects, and shape information is often not directly obtainable from the sensors used for landmine detection. Acoustic Sensing Technology (AST), which uses a Laser Doppler Vibrometer (LDV) to measure the spatial pattern of particle velocity amplitude of the ground surface in a variety of frequency bands, offers a unique look at subsurface phenomena: it directly records shape-related information. Generally, after preprocessing the frequency-band images in a downward-looking LDV system, landmines have fairly regular shapes (roughly circular) over a range of frequencies, while clutter tends to exhibit irregular shapes different from those of landmines. Therefore, shape description has the potential to be used in discriminating mines from clutter. Normalized Fourier Descriptors (NFD) are shape parameters independent of size, angular orientation, position, and contour starting conditions. In this paper, the stack of 2D frequency images from the LDV system is preprocessed by a linear combination of order statistics (LOS) filter, thresholding, and 2D and 3D connected-component labeling. Contours are extracted from the connected components and aggregated to produce evenly spaced boundary points. Two types of Normalized Fourier Descriptors are computed from the outlines. Using images obtained from a standard data collection site, these features are analyzed for their ability to discriminate landmines from background and clutter such as wood and stones. From a standard feature selection procedure, it was found that a very small number of features is required to effectively separate landmines from background and clutter using simple pattern recognition algorithms. Details of the experiments are included.
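
    One common way to compute normalized Fourier descriptors from an extracted, evenly spaced boundary is sketched below. It illustrates the general NFD idea (invariance to position, scale, orientation and starting point via magnitude normalization) and is not the specific descriptor pair used in the paper; the number of descriptors k is an assumption.

        import numpy as np

        def normalized_fourier_descriptors(boundary, k=10):
            """boundary: (N, 2) array of evenly spaced (x, y) contour points.
            Returns k descriptors invariant to translation, scale, rotation
            and starting point (magnitudes of low-order Fourier coefficients)."""
            z = boundary[:, 0] + 1j * boundary[:, 1]   # complex-valued contour
            coeffs = np.fft.fft(z)
            coeffs[0] = 0.0                  # drop DC term -> translation invariance
            mags = np.abs(coeffs)            # magnitudes -> rotation/start-point invariance
            mags = mags / mags[1]            # divide by first harmonic -> scale invariance
            return mags[2:2 + k]             # low-order descriptors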

  7. Adding articulatory features to acoustic features for automatic speech recognition

    SciTech Connect

    Zlokarnik, I.

    1995-05-01

    A hidden-Markov-model (HMM) based speech recognition system was evaluated that makes use of simultaneously recorded acoustic and articulatory data. The articulatory measurements were gathered by means of electromagnetic articulography and describe the movement of small coils fixed to the speakers' tongue and jaw during the production of German V₁CV₂ sequences [P. Hoole and S. Gfoerer, J. Acoust. Soc. Am. Suppl. 1 87, S123 (1990)]. Using the coordinates of the coil positions as an articulatory representation, acoustic and articulatory features were combined to make up an acoustic-articulatory feature vector. The discriminant power of this combined representation was evaluated for two subjects on a speaker-dependent isolated word recognition task. When the articulatory measurements were used both for training and testing the HMMs, the articulatory representation was capable of reducing the error rate of comparable acoustic-based HMMs by a relative percentage of more than 60%. In a separate experiment, the articulatory movements during the testing phase were estimated using a multilayer perceptron that performed an acoustic-to-articulatory mapping. Under these more realistic conditions, when articulatory measurements are only available during training, the error rate could be reduced by a relative percentage of 18% to 25%.

  8. The acoustic features of human laughter

    NASA Astrophysics Data System (ADS)

    Bachorowski, Jo-Anne; Owren, Michael J.

    2002-05-01

    Remarkably little is known about the acoustic features of laughter, despite laughter's ubiquitous role in human vocal communication. Outcomes are described for 1024 naturally produced laugh bouts recorded from 97 young adults. Acoustic analysis focused on temporal characteristics, production modes, source- and filter-related effects, and indexical cues to laugher sex and individual identity. The results indicate that laughter is a remarkably complex vocal signal, with evident diversity in both production modes and fundamental frequency characteristics. Also of interest was finding a consistent lack of articulation effects in supralaryngeal filtering. Outcomes are compared to previously advanced hypotheses and conjectures about this species-typical vocal signal.

  9. Recursive Feature Extraction in Graphs

    SciTech Connect

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
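
    The recursive construction described (local counts, then repeated summaries of neighbors' features) can be illustrated roughly as below. This is a simplified sketch using NetworkX, not the ReFeX implementation; the choice of base features and of mean/sum aggregators is an assumption.

        import numpy as np
        import networkx as nx

        def recursive_features(G, n_iter=2):
            """Start from simple local counts, then repeatedly append the mean and
            sum of each node's neighbors' current features (ReFeX-style sketch)."""
            nodes = list(G.nodes())
            # base features: degree and local clustering coefficient
            feats = {v: [G.degree(v), nx.clustering(G, v)] for v in nodes}
            for _ in range(n_iter):
                new = {}
                for v in nodes:
                    nbrs = list(G.neighbors(v))
                    if nbrs:
                        mat = np.array([feats[u] for u in nbrs], dtype=float)
                        agg = list(mat.mean(axis=0)) + list(mat.sum(axis=0))
                    else:
                        agg = [0.0] * (2 * len(feats[v]))
                    new[v] = feats[v] + agg
                feats = new
            return feats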

  10. Feature Extraction Without Edge Detection

    DTIC Science & Technology

    1993-09-01

    Technical Report 1434, MIT Artificial Intelligence Laboratory; author: Ronald D. Chaney (DTIC accession number AD-A279 842).

  11. Information based universal feature extraction

    NASA Astrophysics Data System (ADS)

    Amiri, Mohammad; Brause, Rüdiger

    2015-02-01

    In many real-world image-based pattern recognition tasks, the extraction and usage of task-relevant features is the most crucial part of the diagnosis. In the standard approach, these features mostly remain task-specific, although humans who perform such tasks always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good start and a performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.

  12. Galaxy Classification without Feature Extraction

    NASA Astrophysics Data System (ADS)

    Polsterer, K. L.; Gieseke, F.; Kramer, O.

    2012-09-01

    The automatic classification of galaxies according to the different Hubble types is a widely studied problem in the field of astronomy. The complexity of this task led to projects like Galaxy Zoo, which try to obtain labeled data based on visual inspection by humans. Many automatic classification frameworks are based on artificial neural networks (ANN) in combination with a feature extraction step in the pre-processing phase. These approaches rely on labeled catalogs for training the models. The small size of the typically used training sets, however, limits the generalization performance of the resulting models. In this work, we present a straightforward application of support vector machines (SVM) to this type of classification task. The conducted experiments indicate that using a sufficient number of labeled objects provided by the EFIGI catalog leads to high-quality models. In contrast to standard approaches, no additional feature extraction is required.

  13. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, recirculation zones, and vortices (which highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  14. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, re-circulation zones, and vortices (which highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  15. Wavelet Signal Processing for Transient Feature Extraction

    DTIC Science & Technology

    1992-03-15

    Research was conducted to evaluate the feasibility of applying Wavelets and Wavelet Transform methods to transient signal feature extraction problems... Wavelet transform techniques were developed to extract low dimensional feature data that allowed a simple classification scheme to easily separate

  16. Acoustic Green's function extraction in the ocean

    NASA Astrophysics Data System (ADS)

    Zang, Xiaoqin

    The acoustic Green's function (GF) is the key to understanding the acoustic properties of ocean environments. With knowledge of the acoustic GF, the physics of sound propagation, such as dispersion, can be analyzed; underwater communication over thousands of miles can be understood; physical properties of the ocean, including ocean temperature, ocean current speed, as well as seafloor bathymetry, can be investigated. Experimental methods of acoustic GF extraction can be categorized as active methods and passive methods. Active methods are based on employment of man-made sound sources. These active methods require less computational complexity and time, but may cause harm to marine mammals. Passive methods cost much less and do not harm marine mammals, but require more theoretical and computational work. Both methods have advantages and disadvantages that should be carefully tailored to fit the need of each specific environment and application. In this dissertation, we study one passive method, the noise interferometry method, and one active method, the inverse filter processing method, to achieve acoustic GF extraction in the ocean. The passive method of noise interferometry makes use of ambient noise to extract an approximation to the acoustic GF. In an environment with a diffusive distribution of sound sources, sound waves that pass through two hydrophones at two locations carry the information of the acoustic GF between these two locations; by listening to the long-term ambient noise signals and cross-correlating the noise data recorded at two locations, the acoustic GF emerges from the noise cross-correlation function (NCF); a coherent stack of many realizations of NCFs yields a good approximation to the acoustic GF between these two locations, with all the deterministic structures clearly exhibited in the waveform. To test the performance of noise interferometry in different types of ocean environments, two field experiments were performed and ambient noise
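
    The passive processing described above (cross-correlate long ambient-noise records from two hydrophones and coherently stack many realizations of the NCF) can be sketched as follows. The segment length and the simple per-segment normalization are assumptions for illustration, not the author's exact processing chain.

        import numpy as np
        from scipy.signal import fftconvolve

        def noise_cross_correlation(x1, x2, fs, seg_seconds=60.0):
            """Stacked noise cross-correlation function (NCF) between two
            synchronized hydrophone records x1 and x2 sampled at fs."""
            n = int(seg_seconds * fs)
            n_seg = min(len(x1), len(x2)) // n
            ncf = None
            for i in range(n_seg):
                a = x1[i * n:(i + 1) * n]
                b = x2[i * n:(i + 1) * n]
                a = (a - a.mean()) / (a.std() + 1e-12)    # simple per-segment normalization
                b = (b - b.mean()) / (b.std() + 1e-12)
                c = fftconvolve(a, b[::-1], mode="full")  # cross-correlation via FFT
                ncf = c if ncf is None else ncf + c       # coherent stack over realizations
            lags = np.arange(-(n - 1), n) / fs
            return lags, ncf / max(n_seg, 1)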

  17. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

    The routine is written in Interactive Data Language (IDL) as a callable procedure that receives as an argument a 16-bit feature classification flag value. Flag Extraction routine (5 KB). Interactive Data Language (IDL) is available from Exelis Visual Information Solutions.

  18. Excavation Equipment Recognition Based on Novel Acoustic Statistical Features.

    PubMed

    Cao, Jiuwen; Wang, Wei; Wang, Jianzhong; Wang, Ruirong

    2016-09-30

    Excavation equipment recognition has attracted increasing attention in recent years due to its significance in underground pipeline network protection and civil construction management. In this paper, a novel classification algorithm based on acoustic processing is proposed for four representative types of excavation equipment. New acoustic statistical features, namely the short frame energy ratio, concentration of spectrum amplitude ratio, truncated energy range, and interval of pulse, are first developed to characterize acoustic signals. Then, probability density distributions of these acoustic features are analyzed and a novel classifier is presented. Experiments on real recorded acoustics of the four excavation devices are conducted to demonstrate the effectiveness of the proposed algorithm. Comparisons with two popular machine learning methods, the support vector machine and the extreme learning machine, combined with the popular linear prediction cepstral coefficients, are provided to show the generalization capability of our method. A real surveillance system using our algorithm has been developed and installed at a metro construction site for real-time recognition performance validation.

  19. Acoustic features of objects matched by an echolocating bottlenose dolphin.

    PubMed

    Delong, Caroline M; Au, Whitlow W L; Lemonds, David W; Harley, Heidi E; Roitblat, Herbert L

    2006-03-01

    The focus of this study was to investigate how dolphins use acoustic features in returning echolocation signals to discriminate among objects. An echolocating dolphin performed a match-to-sample task with objects that varied in size, shape, material, and texture. After the task was completed, the features of the object echoes were measured (e.g., target strength, peak frequency). The dolphin's error patterns were examined in conjunction with the between-object variation in acoustic features to identify the acoustic features that the dolphin used to discriminate among the objects. The present study explored two hypotheses regarding the way dolphins use acoustic information in echoes: (1) use of a single feature, or (2) use of a linear combination of multiple features. The results suggested that dolphins do not use a single feature across all object sets or a linear combination of six echo features. Five features appeared to be important to the dolphin on four or more sets: the echo spectrum shape, the pattern of changes in target strength and number of highlights as a function of object orientation, and peak and center frequency. These data suggest that dolphins use multiple features and integrate information across echoes from a range of object orientations.

  20. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
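
    As a rough illustration of the segmentation step (watershed flooding applied to a gradient image), the following scikit-image sketch segments an image from its gradient. The marker selection and the use of a Sobel gradient magnitude in place of the patent's Canny gradient are assumptions made for this sketch.

        import numpy as np
        from scipy import ndimage as ndi
        from skimage import filters, segmentation, measure

        def watershed_on_gradient(image):
            """Segment 'image' by flooding its gradient magnitude from low-gradient markers."""
            gradient = filters.sobel(image)                        # gradient magnitude (stand-in for a Canny gradient)
            markers = ndi.label(gradient < 0.1 * gradient.max())[0]  # seed regions in flat areas
            labels = segmentation.watershed(gradient, markers)     # flood the gradient from the seeds
            return labels, measure.regionprops(labels)             # labelled regions and their properties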

  1. Feature Extraction Based on Decision Boundaries

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    In this paper, a novel approach to feature extraction for classification is proposed based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly informative feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises from noting that usually only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances, as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.

  2. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently for biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as the feature extraction method itself for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
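
    A minimal sketch of using an estimated probability density function of framed speech samples directly as a feature, in the spirit described above. The frame length, histogram bin count, amplitude range and normalization are assumptions, not the authors' settings.

        import numpy as np

        def pdf_features(signal, frame_len=1024, n_bins=64):
            """Per-frame amplitude histograms, normalized to estimate a PDF.
            'signal' is assumed to be a 1-D array scaled to [-1, 1]."""
            n_frames = len(signal) // frame_len
            feats = []
            for i in range(n_frames):
                frame = signal[i * frame_len:(i + 1) * frame_len]
                hist, _ = np.histogram(frame, bins=n_bins, range=(-1.0, 1.0), density=True)
                feats.append(hist)
            return np.array(feats)   # one PDF estimate (feature vector) per frame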

  3. Acoustic classification of battlefield transient events using wavelet sub-band features

    NASA Astrophysics Data System (ADS)

    Azimi-Sadjadi, M. R.; Jiang, Y.; Srinivasan, S.

    2007-04-01

    Detection, localization and classification of battlefield acoustic transient events are of great importance, especially for military operations in urban terrain (MOUT). Generally, there can be a wide variety of battlefield acoustic transient events, such as different caliber gunshots, artillery fires, and mortar fires. The discrimination of different types of transient sources is plagued by the highly non-stationary nature of these signals, which makes the extraction of representative features a challenging task. This is compounded by variations in the environmental and operating conditions and the existence of a wide range of possible interference. This paper presents new approaches for transient signal estimation and feature extraction from acoustic signatures collected by several distributed sensor nodes. A maximum likelihood (ML)-based method is developed to remove noise/interference and fading effects and restore the acoustic transient signals. The estimated transient signals are then represented using wavelets. The multi-resolution property of the wavelets allows for capturing fine details in the transient signals that can be utilized to successfully classify them. Wavelet sub-band higher-order moments and energy-based features are used to characterize the transient signals. The discrimination ability of the sub-band features for transient signal classification has been demonstrated on several different caliber gunshots. Important findings and observations on these results are also presented.
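
    Sub-band energies and higher-order moments of a wavelet decomposition, as mentioned above, could be computed along the following lines. The wavelet family (db4), decomposition depth and the specific moments used are assumptions for illustration, not the paper's exact configuration.

        import numpy as np
        import pywt
        from scipy.stats import skew, kurtosis

        def wavelet_subband_features(x, wavelet="db4", level=4):
            """Energy plus 3rd/4th-order moments of each wavelet sub-band of a transient signal x."""
            coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
            feats = []
            for c in coeffs:
                feats += [np.sum(c ** 2),      # sub-band energy
                          skew(c),             # normalized 3rd-order moment
                          kurtosis(c)]         # normalized 4th-order moment
            return np.array(feats)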

  4. Electronic Nose Feature Extraction Methods: A Review

    PubMed Central

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-01-01

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power density spectrum (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology. PMID:26540056

  5. Feature extraction for MRI segmentation.

    PubMed

    Velthuizen, R P; Hall, L O; Clarke, L P

    1999-04-01

    Magnetic resonance images (MRIs) of the brain are segmented to measure the efficacy of treatment strategies for brain tumors. To date, no reproducible technique for measuring tumor size is available to the clinician, which hampers progress of the search for good treatment protocols. Many segmentation techniques have been proposed, but the representation (features) of the MRI data has received little attention. A genetic algorithm (GA) search was used to discover a feature set from multi-spectral MRI data. Segmentations were performed using the fuzzy c-means (FCM) clustering technique. Seventeen MRI data sets from five patients were evaluated. The GA feature set produces a more accurate segmentation. The GA fitness function that achieves the best results is the Wilks's lambda statistic when applied to FCM clusters. Compared to linear discriminant analysis, which requires class labels, the same or better accuracy is obtained by the features constructed from a GA search without class labels, allowing fully operator independent segmentation. The GA approach therefore provides a better starting point for the measurement of the response of a brain tumor to treatment.

  6. Local feature point extraction for quantum images

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Xu, Kai; Gao, Yinghui; Wilson, Richard

    2015-05-01

    Quantum image processing has been a hot issue in the last decade. However, the lack of the quantum feature extraction method leads to the limitation of quantum image understanding. In this paper, a quantum feature extraction framework is proposed based on the novel enhanced quantum representation of digital images. Based on the design of quantum image addition and subtraction operations and some quantum image transformations, the feature points could be extracted by comparing and thresholding the gradients of the pixels. Different methods of computing the pixel gradient and different thresholds can be realized under this quantum framework. The feature points extracted from quantum image can be used to construct quantum graph. Our work bridges the gap between quantum image processing and graph analysis based on quantum mechanics.

  7. Texture Analysis and Cartographic Feature Extraction.

    DTIC Science & Technology

    1985-01-01

    Investigations into using various image descriptors as well as developing interactive feature extraction software on the Digital Image Analysis Laboratory...system. Originator-supplied keywords: Ad-Hoc image descriptor; Bayes classifier; Bhattacharyya distance; Clustering; Digital Image Analysis Laboratory

  8. Automatic computational models of acoustical category features: Talking versus singing

    NASA Astrophysics Data System (ADS)

    Gerhard, David

    2003-10-01

    The automatic discrimination between acoustical categories has been an increasingly interesting problem in the fields of computer listening, multimedia databases, and music information retrieval. A system is presented which automatically generates classification models, given a set of destination classes and a set of a priori labeled acoustic events. Computational models are created using comparative probability density estimations. For the specific example presented, the destination classes are talking and singing. Individual feature models are evaluated using two measures: the Kolmogorov-Smirnov distance measures feature separation, and accuracy is measured using absolute and relative metrics. The system automatically segments the event set into a user-defined number (n) of development subsets, and runs a development cycle for each set, generating n separate systems, each of which is evaluated using the above metrics to improve overall system accuracy and to reduce inherent data skew from any one development subset. Multiple features for the same acoustical categories are then compared for underlying feature overlap using cross-correlation. Advantages of automated computational models include improved system development and testing, a shortened development cycle, and automation of common system evaluation tasks. Numerical results are presented relating to the talking/singing classification problem.
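
    The per-feature evaluation step (measuring how well a single feature separates talking from singing with the Kolmogorov-Smirnov distance) can be illustrated with SciPy as below; the placeholder data are synthetic and purely illustrative.

        import numpy as np
        from scipy.stats import ks_2samp

        def ks_separation(feature_talking, feature_singing):
            """Two-sample Kolmogorov-Smirnov statistic: 0 = identical feature
            distributions, 1 = perfectly separated."""
            stat, _p = ks_2samp(feature_talking, feature_singing)
            return stat

        # placeholder data: one scalar feature value per labelled event
        talking = np.random.normal(0.0, 1.0, 200)
        singing = np.random.normal(1.5, 1.0, 200)
        print(ks_separation(talking, singing))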

  9. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.

  10. Large datasets: Segmentation, feature extraction, and compression

    SciTech Connect

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  11. Extraction of essential features by quantum density

    NASA Astrophysics Data System (ADS)

    Wilinski, Artur

    2016-09-01

    In this paper we consider the problem of feature extraction, an essential and important step in searching a dataset. The extracted features describe the real properties of the signals and images. The sought features are often difficult to identify because of data complexity and redundancy. A method is shown for finding groups of essential features according to the defined issues. To find the hidden attributes we use a special algorithm, DQAL, with the quantum density for the j-th feature from the original data, which indicates the important set of attributes. Finally, small sets of attributes were generated for subsets with different feature properties. They can be used for the construction of a small set of essential features. All figures were made in Matlab 6.

  12. On image matrix based feature extraction algorithms.

    PubMed

    Wang, Liwei; Wang, Xiao; Feng, Jufu

    2006-02-01

    Principal component analysis (PCA) and linear discriminant analysis (LDA) are two important feature extraction methods and have been widely applied in a variety of areas. A limitation of PCA and LDA is that, when dealing with image data, the image matrices must first be transformed into vectors, which are usually of very high dimensionality. This causes expensive computational cost and sometimes the singularity problem. Recently two methods called two-dimensional PCA (2DPCA) and two-dimensional LDA (2DLDA) were proposed to overcome this disadvantage by working directly on 2-D image matrices without a vectorization procedure. The 2DPCA and 2DLDA significantly reduce the computational effort and the possibility of singularity in feature extraction. In this paper, we show that these matrix-based 2-D algorithms are equivalent to special cases of image-block-based feature extraction, i.e., partitioning each image into several blocks and performing standard PCA or LDA on the aggregate of all image blocks. These results thus provide a better understanding of the 2-D feature extraction approaches.
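
    For reference, a minimal sketch of the 2DPCA construction discussed above: an image covariance (scatter) matrix is accumulated over un-vectorized image matrices and each image is projected onto its leading eigenvectors. The input shapes and the number of components are assumptions for illustration.

        import numpy as np

        def two_d_pca(images, n_components=8):
            """images: array of shape (N, h, w). Returns the projection matrix W
            (w x n_components) and the projected images (N, h, n_components),
            computed without vectorizing the image matrices."""
            mean = images.mean(axis=0)
            G = np.zeros((images.shape[2], images.shape[2]))
            for A in images:
                D = A - mean
                G += D.T @ D                      # accumulate image scatter matrix
            G /= images.shape[0]
            eigvals, eigvecs = np.linalg.eigh(G)
            W = eigvecs[:, ::-1][:, :n_components]  # leading eigenvectors of G
            projected = np.array([A @ W for A in images])
            return W, projected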

  13. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

    Linear features are usually extracted from SAR imagery by edge detectors derived from the contrast ratio edge detector with a constant probability of false alarm. The Hough Transform, on the other hand, is an elegant way of extracting global features like curve segments from binary edge images. The Randomized Hough Transform can drastically reduce the computation time and memory usage of the HT, but it produces a great number of invalid accumulator cells during random sampling. In this paper, we propose a new approach to extract linear features from SAR imagery, an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The presented improved method makes full use of the directional information of each edge candidate point so as to solve the invalid accumulation problem. Applied results are in good agreement with the theoretical study, and the main linear features in the SAR imagery have been extracted automatically. The method saves storage space and computational time, which shows its effectiveness and applicability.

  14. Speech feature extracting based on DSP

    NASA Astrophysics Data System (ADS)

    Niu, Jingtao; Shi, Zhongke

    2003-09-01

    In this paper, for the voiced frames in speech processing, an implementation of LPC prediction coefficient computation using the Levinson-Durbin algorithm on a DSP-based system is proposed, and an implementation of L. R. Rabiner's fundamental frequency estimation is discussed. At the end of the paper, several new methods of sound feature extraction using only voiced frames are also discussed.
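
    The Levinson-Durbin recursion referred to above solves the LPC normal equations from a frame's autocorrelation sequence. A plain NumPy sketch is given below, independent of any DSP hardware; the prediction order is an assumed parameter.

        import numpy as np

        def lpc_levinson_durbin(frame, order=10):
            """LPC coefficients a[0..order] (with a[0] = 1) of one voiced frame,
            obtained by the Levinson-Durbin recursion on its autocorrelation."""
            r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # r[0], r[1], ...
            a = np.zeros(order + 1)
            a[0] = 1.0
            err = r[0]
            for i in range(1, order + 1):
                acc = r[i]
                for j in range(1, i):
                    acc += a[j] * r[i - j]
                k = -acc / err                    # reflection coefficient
                a_prev = a.copy()
                for j in range(1, i):
                    a[j] = a_prev[j] + k * a_prev[i - j]
                a[i] = k
                err *= (1.0 - k * k)              # update prediction error
            return a, err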

  15. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck problem, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences at the heart valves. Adopting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five various abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
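
    The Shannon envelope mentioned above is commonly computed as the smoothed Shannon energy of the normalized signal. The hedged sketch below uses a simple moving-average smoother and omits the DWT denoising step; window length and normalization are assumptions.

        import numpy as np

        def shannon_envelope(x, win=101):
            """Smoothed Shannon energy envelope of a heart-sound segment x."""
            x = x / (np.max(np.abs(x)) + 1e-12)           # normalize to [-1, 1]
            se = -x ** 2 * np.log(x ** 2 + 1e-12)         # Shannon energy per sample
            kernel = np.ones(win) / win
            return np.convolve(se, kernel, mode="same")   # moving-average smoothing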

  16. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing those response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that had to be considered were sensitivity, dimensionality, type of response, and presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must consider not only the sensitivity of the features being used, but also the correlation of the parameters being compared.
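
    The outlier-detection step mentioned above (Mahalanobis distance of test feature vectors from a reference set of feature vectors) is sketched below; the use of a pseudo-inverse covariance and any threshold choice are assumptions made for illustration.

        import numpy as np

        def mahalanobis_distances(reference_features, test_features):
            """Mahalanobis distance of each test feature vector from the
            distribution of the reference feature vectors (rows = observations)."""
            mu = reference_features.mean(axis=0)
            cov = np.cov(reference_features, rowvar=False)
            cov_inv = np.linalg.pinv(cov)                  # pseudo-inverse for robustness
            diffs = test_features - mu
            d2 = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)
            return np.sqrt(d2)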

  17. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) shocks, (2) vortex cores, (3) regions of recirculation, (4) boundary layers, (5) wakes. Three papers and an initial specification for the FX (Fluid eXtraction tool kit) Programmer's Guide are included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.

  18. Distributed feature extraction for event identification.

    SciTech Connect

    Berry, Nina M.; Ko, Teresa H.

    2004-05-01

    An important component of ubiquitous computing is the ability to quickly sense the dynamic environment to learn context awareness in real-time. To pervasively capture detailed information of movements, we present a decentralized algorithm for feature extraction within a wireless sensor network. By approaching this problem in a distributed manner, we are able to work within the real constraint of wireless battery power and its effects on processing and network communications. We describe a hardware platform developed for low-power ubiquitous wireless sensing and a distributed feature extraction methodology which is capable of providing more information to the user of events while reducing power consumption. We demonstrate how the collaboration between sensor nodes can provide a means of organizing large networks into information-based clusters.

  19. Features vs. Feelings: Dissociable representations of the acoustic features and valence of aversive sounds

    PubMed Central

    Kumar, Sukhbinder; von Kriegstein, Katharina; Friston, Karl; Griffiths, Timothy D

    2012-01-01

    This study addresses the neuronal representation of aversive sounds that are perceived as unpleasant. Functional magnetic resonance imaging (fMRI) in humans demonstrated responses in the amygdala and auditory cortex to aversive sounds. We show that the amygdala encodes both the acoustic features of a stimulus and its valence (perceived unpleasantness). Dynamic Causal Modelling (DCM) of this system revealed that evoked responses to sounds are relayed to the amygdala via auditory cortex. While acoustic features modulate effective connectivity from auditory cortex to the amygdala, the valence modulates the effective connectivity from amygdala to the auditory cortex. These results support a complex (recurrent) interaction between the auditory cortex and amygdala based on object-level analysis in the auditory cortex that portends the assignment of emotional valence in amygdala that in turn influences the representation of salient information in auditory cortex. PMID:23055488

  20. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary images has already been acquired and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.

  1. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for secondary flow. This paper presents a definition for secondary flow and one approach for automatically detecting and visualizing secondary flow.

  2. The Effect of Dynamic Acoustical Features on Musical Timbre

    NASA Astrophysics Data System (ADS)

    Hajda, John M.

    Timbre has been an important concept for scientific exploration of music at least since the time of Helmholtz ([1877] 1954). Since Helmholtz's time, a number of studies have defined and investigated acoustical features of musical instrument tones to determine their perceptual importance, or salience (e.g., Grey, 1975, 1977; Kendall, 1986; Kendall et al., 1999; Luce and Clark, 1965; McAdams et al., 1995, 1999; Saldanha and Corso, 1964; Wedin and Goude, 1972). Most of these studies have considered only nonpercussive, or continuant, tones of Western orchestral instruments (or emulations thereof). In the past few years, advances in computing power and programming have made possible and affordable the definition and control of new acoustical variables. This chapter gives an overview of past and current research, with a special emphasis on the time-variant aspects of musical timbre. According to common observation, "music is made of tones in time" (Spaeth, 1933). We will also consider the fact that music is made of "time in tones."

  3. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  4. Iris recognition based on key image feature extraction.

    PubMed

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  5. Concrete Slump Classification using GLCM Feature Extraction

    NASA Astrophysics Data System (ADS)

    Andayani, Relly; Madenda, Syarifudin

    2016-05-01

    Digital image processing technologies have been widely applied in analyzing concrete structures because of their accuracy and real-time results. The aim of this study is to classify concrete slump by using image processing techniques. For this purpose, concrete mix designs of 30 MPa compressive strength with slumps of 0-10 mm, 10-30 mm, 30-60 mm, and 60-180 mm were analysed. Images were acquired with a Nikon D-7000 camera set up at high resolution. In the first step, the RGB image was converted to a grey image and then cropped to 1024 x 1024 pixels. Using an open-source program, the cropped images were analysed to extract GLCM features. The results show that for higher slump the contrast becomes lower, while correlation, energy, and homogeneity become higher.
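
    The GLCM texture features named above (contrast, correlation, energy, homogeneity) can be computed with scikit-image as sketched below. The distance/angle settings are assumptions, and older scikit-image releases spell these functions greycomatrix/greycoprops.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(gray_image):
            """gray_image: 2-D uint8 array (e.g., the cropped 1024 x 1024 slump image).
            Returns the four GLCM properties averaged over four directions."""
            glcm = graycomatrix(gray_image, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "correlation", "energy", "homogeneity")}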

  6. Extraction and Classification of Human Gait Features

    NASA Astrophysics Data System (ADS)

    Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee; Abdullah, Junaidi; Komiya, Ryoichi

    In this paper, a new approach is proposed for extracting human gait features from a walking human based on silhouette images. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying a morphological skeleton to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottoms of the right and left legs from the body segment skeletons. The joint angles and step size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results have demonstrated that the proposed system is feasible and achieves satisfactory results.

  7. Morphological theory in image feature extraction

    NASA Astrophysics Data System (ADS)

    Gui, Feng; Lin, QiWei

    2003-06-01

    Morphology is a technique based upon set theory that can be used for binary and grey-scale image processing. The principle and the geometrical meaning of morphological boundary detection for images are discussed in this paper, and the selection of the structuring element is analyzed. A comparison is made between morphological boundary detection and traditional boundary detection methods, leading to the conclusion that the morphological boundary detection method has better compatibility and anti-interference capability. The method is also applied to the processing of L.V. cineangiograms. Wall motion abnormalities of the left ventricle (L.V.) due to myocardial ischaemia caused by coronary atherosclerosis are a significant feature of atherosclerotic coronary heart disease (CHD); this paper therefore aims to build a foundation for the automatic detection of L.V. contours, based on the features of L.V. cineangiograms and morphological theory, for further study of L.V. wall motion abnormalities. An algorithm based on morphology for extracting L.V. contours is developed in this paper.

  8. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: Shocks; Vortex cores; Regions of Recirculation; Boundary Layers; Wakes.

  9. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    NASA Astrophysics Data System (ADS)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be easily degraded by various factors. Normal-hearing listeners, however, can accurately perceive the sounds they attend to, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was built on physiological and psychological investigations of ASA. The CASA system comprised a Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from an acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications contain the introduction of a higher Q factor, a middle ear filter more analogous to the human auditory system
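
    One of the evaluation measures named above, the dynamic time warping (DTW) distance between feature matrices, can be sketched as follows; the plain Euclidean frame cost and the random "clean"/"noisy" GFCC-like matrices are illustrative assumptions, not the exact metric or data of the study.

      import numpy as np

      def dtw_distance(A, B):
          """Dynamic time warping distance between two feature matrices
          (frames x coefficients), with Euclidean frame-to-frame cost."""
          n, m = len(A), len(B)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(A[i - 1] - B[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      # Toy comparison of a "clean" and a "noisy" feature matrix (e.g. GFCC frames).
      rng = np.random.default_rng(0)
      clean = rng.normal(size=(50, 13))
      noisy = clean + 0.1 * rng.normal(size=(50, 13))
      print(dtw_distance(clean, noisy))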

  10. Analysis of MABEL data for feature extraction

    NASA Astrophysics Data System (ADS)

    Magruder, L.; Neuenschwander, A. L.; Wharton, M.

    2011-12-01

    MABEL (Multiple Altimeter Beam Experimental Lidar) is a test bed representation for ICESat-2, with a high repetition rate, low laser pulse energy, and photon-counting detection on an airborne platform. MABEL data can be scaled to simulate ICESat-2 data products, and the demonstration is proving critical for model validation and algorithm development. The recent MABEL flights over White Sands Missile Range in New Mexico (WSMR) have provided especially useful insight into potential processing schemes for this type of data as well as into how to extract specific geophysical or passive optical features. Although the MABEL data have not been precisely geolocated to date, approximate geolocations were derived using interpolated GPS data and aircraft attitude. In addition to providing an indication of the expected signal response over specific types of terrain/targets, the availability of MABEL data has also facilitated preliminary development of new types of noise filtering for photon-counting data products that will contribute to future capabilities for ICESat-2 data extraction. One particularly useful methodology uses a combination of cluster weighting and neighbor-count weighting. For weighting clustered points, each individual point is tagged with the average distance to its neighbors within an established threshold. Histograms of the mean values are created for both a pure noise section and a signal-noise mixture section, and a deconvolution of these histograms gives a normal distribution for the signal. A fitted Gaussian is used to calculate a threshold for the average distances. This removes locally sparse points, after which a regular neighborhood-count filter is applied with a larger search radius. The approach works well in high-noise cases and allows improved signal recovery without being computationally expensive. One specific MABEL nadir channel ground track provided returns from several distinct ground markers that included multiple mounds, an elevated
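
    The two-stage photon filter outlined above (mean distance to neighbors within a small radius, followed by a neighbor-count filter with a larger radius) could look roughly like the sketch below. The radii and thresholds are placeholders; in the described methodology the distance threshold comes from a histogram deconvolution and Gaussian fit rather than a fixed value.

      import numpy as np
      from scipy.spatial import cKDTree

      def filter_photons(points, r_small=1.0, r_large=5.0,
                         mean_dist_cut=0.6, min_neighbors=5):
          """Stage 1: drop locally sparse points whose mean neighbor distance
          within r_small exceeds a cut.  Stage 2: drop points with too few
          neighbors within the larger radius r_large."""
          tree = cKDTree(points)
          keep = np.zeros(len(points), dtype=bool)
          for i, p in enumerate(points):
              idx = tree.query_ball_point(p, r_small)
              idx.remove(i)                        # exclude the point itself
              if idx:
                  d = np.linalg.norm(points[idx] - p, axis=1)
                  keep[i] = d.mean() <= mean_dist_cut
          pts = points[keep]

          tree2 = cKDTree(pts)
          counts = np.array([len(tree2.query_ball_point(p, r_large)) - 1 for p in pts])
          return pts[counts >= min_neighbors]

      # Toy cloud: a dense "ground return" band plus uniform noise.
      rng = np.random.default_rng(1)
      signal = np.column_stack([np.linspace(0, 100, 400),
                                rng.normal(50.0, 0.3, 400)])
      noise = rng.uniform([0, 0], [100, 100], size=(200, 2))
      cloud = np.vstack([signal, noise])
      print(len(filter_photons(cloud)))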

  11. Feature extraction and integration underlying perceptual decision making during courtship behavior.

    PubMed

    Clemens, Jan; Ronacher, Bernhard

    2013-07-17

    Traditionally, perceptual decision making is studied in trained animals and carefully controlled tasks. Here, we sought to elucidate the stimulus features and their combination underlying a naturalistic behavior--female decision making during acoustic courtship in grasshoppers. Using behavioral data, we developed a model in which stimulus features were extracted by physiologically plausible models of sensory neurons from the time-varying stimulus. This sensory evidence was integrated over the stimulus duration and combined to predict the behavior. We show that decisions were determined by the interaction of an excitatory and a suppressive stimulus feature. The observed increase of behavioral response with stimulus intensity was the result of an increase of the excitatory feature's gain that was not controlled by an equivalent increase of the suppressive feature. Differences in how these two features were combined could explain interindividual variability. In addition, the mapping between the two stimulus features and different parameters of the song led us to re-evaluate the cues underlying acoustic communication. Our framework provided a rich and plausible explanation of behavior in terms of two stimulus cues that were extracted by models of sensory neurons and combined through excitatory-inhibitory interactions. We thus were able to link single neuron's feature selectivity and network computations with decision making in a natural task. This data-driven approach has the potential to advance our understanding of decision making in other systems and can inform the search for the neural correlates of behavior.
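
    A schematic of the model structure described above (two stimulus features extracted by filters, integrated over the stimulus, and combined so that the suppressive feature counteracts the excitatory one) might be sketched as follows; the filters, the divisive combination rule and the toy song envelope are illustrative assumptions, not the fitted model of the paper.

      import numpy as np

      def decision_model(stimulus, f_exc, f_sup, gain_exc=1.0, gain_sup=1.0):
          """Schematic two-feature model: filter, rectify, integrate over the
          stimulus, and combine excitation with divisive suppression."""
          e = np.maximum(np.convolve(stimulus, f_exc, mode="same"), 0.0)
          s = np.maximum(np.convolve(stimulus, f_sup, mode="same"), 0.0)
          E = gain_exc * e.mean()          # evidence integrated over the stimulus
          S = gain_sup * s.mean()
          return E / (1.0 + S)             # suppressive feature reduces the drive

      # Toy song envelope: periodic pulses.
      t = np.arange(0, 1, 1e-3)
      song = (np.sin(2 * np.pi * 20 * t) > 0.9).astype(float)
      f_exc = np.ones(10) / 10.0                         # sensitive to pulse energy
      f_sup = np.r_[np.ones(5), -np.ones(5)] / 5.0       # sensitive to fast transients
      print(decision_model(song, f_exc, f_sup))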

  12. [A comparative study of pathological voice based on traditional acoustic characteristics and nonlinear features].

    PubMed

    Gan, Deying; Hu, Weiping; Zhao, Bingxin

    2014-10-01

    Based on an analysis of the mechanism of pronunciation, traditional acoustic parameters, including fundamental frequency, Mel frequency cepstral coefficients (MFCC), linear prediction cepstrum coefficients (LPCC), frequency perturbation and amplitude perturbation, together with nonlinear characteristic parameters, including entropies (sample entropy, fuzzy entropy, multi-scale entropy), box-counting dimension, intercept and Hurst exponent, are extracted as feature vectors for identifying pathological voice. Seventy-eight normal voice samples and 73 pathological voice samples for /a/, and 78 normal samples and 80 pathological samples for /i/, were recognized based on a support vector machine (SVM). The results showed that, compared with traditional acoustic parameters, nonlinear characteristic parameters can distinguish well between healthy and pathological voices, and that the recognition rates for /a/ were all higher than those for /i/ except for multi-scale entropy. This is why /a/ data are widely used in related research at home and abroad for identifying pathological voices. Adopting multi-scale entropy for /i/ yielded a higher recognition rate between healthy and pathological samples than for /a/, which may provide some useful inspiration for evaluating vocal compensatory function.
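
    As an example of one of the nonlinear parameters listed above, sample entropy can be computed with a short routine; the template length m, the tolerance r = 0.2 * std, and the toy /a/-like tone are common illustrative choices rather than the settings used in the study.

      import numpy as np

      def sample_entropy(x, m=2, r_factor=0.2):
          """Sample entropy SampEn(m, r) with r = r_factor * std(x), in one
          common formulation: count template matches of length m and m+1,
          excluding self-matches."""
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()
          N = len(x)

          def count_matches(dim):
              templates = np.array([x[i:i + dim] for i in range(N - dim)])
              count = 0
              for i in range(len(templates)):
                  d = np.max(np.abs(templates - templates[i]), axis=1)
                  count += np.sum(d <= r) - 1      # exclude the self-match
              return count

          B = count_matches(m)
          A = count_matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf

      # A sustained /a/-like tone is more regular (lower SampEn) than noise.
      rng = np.random.default_rng(2)
      t = np.arange(0, 0.2, 1 / 8000.0)
      tone = np.sin(2 * np.pi * 150 * t)
      print(sample_entropy(tone[:400]), sample_entropy(rng.normal(size=400)))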

  13. Automatic speech recognition using articulatory features from subject-independent acoustic-to-articulatory inversion.

    PubMed

    Ghosh, Prasanta Kumar; Narayanan, Shrikanth

    2011-10-01

    An automatic speech recognition approach is presented which uses articulatory features estimated by a subject-independent acoustic-to-articulatory inversion. The inversion allows estimation of articulatory features from any talker's speech acoustics using only an exemplary subject's articulatory-to-acoustic map. Results are reported on a broad class phonetic classification experiment on speech from English talkers using data from three distinct English talkers as exemplars for inversion. Results indicate that the inclusion of the articulatory information improves classification accuracy but the improvement is more significant when the speaking style of the exemplar and the talker are matched compared to when they are mismatched.

  14. A gearbox fault diagnosis scheme based on near-field acoustic holography and spatial distribution features of sound field

    NASA Astrophysics Data System (ADS)

    Lu, Wenbo; Jiang, Weikang; Yuan, Guoqing; Yan, Li

    2013-05-01

    Vibration signal analysis is the main technique in machine condition monitoring and fault diagnosis, whereas in some cases vibration-based diagnosis is constrained by its contact measurement. Acoustic-based diagnosis (ABD) with non-contact measurement has received little attention, although the sound field may contain abundant information related to the fault pattern. A new ABD scheme for gearboxes based on near-field acoustic holography (NAH) and spatial distribution features of the sound field is presented in this paper. It focuses on applying the distribution information of the sound field to gearbox fault diagnosis. A two-stage industrial helical gearbox is studied experimentally in a semi-anechoic chamber and a lab workshop, respectively. Firstly, multi-class faults (mild pitting, moderate pitting, severe pitting and tooth breakage) are simulated. Secondly, sound fields and corresponding acoustic images under different gearbox running conditions are obtained by fast Fourier transform (FFT) based NAH. Thirdly, by introducing texture analysis to fault diagnosis, spatial distribution features are extracted from the acoustic images to capture the fault patterns underlying the sound field. Finally, the features are fed into a multi-class support vector machine for fault pattern identification. The feasibility and effectiveness of the proposed scheme are demonstrated by good experimental results and by comparison with a traditional ABD method. Even with strong noise interference, spatial distribution features of the sound field can reliably reveal the fault patterns of the gearbox, and thus satisfactory accuracy can be obtained. The combination of histogram features and gray level gradient co-occurrence matrix features is suggested for good diagnosis accuracy and low time cost.
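
    The texture-feature and multi-class SVM stages of such a scheme can be sketched as below with gray-level co-occurrence statistics from scikit-image; note that the study uses histogram and gray level gradient co-occurrence matrix features computed from NAH acoustic images, whereas this fragment uses plain GLCM properties and synthetic images purely for illustration.

      import numpy as np
      from scipy.ndimage import uniform_filter
      from skimage.feature import graycomatrix, graycoprops
      from sklearn.svm import SVC

      def texture_features(acoustic_image, levels=32):
          """Gray-level co-occurrence features from a quantized acoustic image."""
          img = np.uint8(np.interp(acoustic_image,
                                   (acoustic_image.min(), acoustic_image.max()),
                                   (0, levels - 1)))
          glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          props = ["contrast", "homogeneity", "energy", "correlation"]
          return np.array([graycoprops(glcm, p).mean() for p in props])

      # Toy training set: two "running conditions" with different spatial texture.
      rng = np.random.default_rng(3)
      feats, labels = [], []
      for label in (0, 1):
          for _ in range(20):
              img = rng.normal(size=(32, 32))
              if label == 1:
                  img = uniform_filter(img, size=3)   # smoother texture for class 1
              feats.append(texture_features(img))
              labels.append(label)
      X, y = np.array(feats), np.array(labels)
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.score(X, y))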

  15. Gunshot acoustic signature specific features and false alarms reduction

    NASA Astrophysics Data System (ADS)

    Donzier, Alain; Millet, Joel

    2005-05-01

    This paper provides a detailed analysis of the most specific parameters of gunshot signatures through models as well as through real data. Models for the different contributions to the typical gunshot signature (shock and muzzle blast) are presented and used to discuss the variation of measured signatures over different environmental conditions and shot configurations. The analysis is followed by a description of the performance requirements for gunshot detection systems, from sniper detection, which was the main concern 10 years ago, to the new and more challenging conditions faced in today's operations. The work presented examines how systems are deployed and used as well as how the operational environment has changed. The main sources of false alarms, and new threats such as RPGs and mortars that acoustic gunshot detection systems have to face today, are also defined and discussed. Finally, different strategies for reducing false alarms are proposed based on the acoustic signatures. The strategies are presented through various examples of specific missions ranging from vehicle protection to area protection; they not only include recommendations on how to handle acoustic information for the best efficiency of the acoustic detector but also suggest add-on sensors to enhance overall system performance.

  16. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.

  17. Estimation of glottal source features from the spectral envelope of the acoustic speech signal

    NASA Astrophysics Data System (ADS)

    Torres, Juan Felix

    Speech communication encompasses diverse types of information, including phonetics, affective state, voice quality, and speaker identity. From a speech production standpoint, the acoustic speech signal can be mainly divided into glottal source and vocal tract components, which play distinct roles in rendering the various types of information it contains. Most deployed speech analysis systems, however, do not explicitly represent these two components as distinct entities, as their joint estimation from the acoustic speech signal becomes an ill-defined blind deconvolution problem. Nevertheless, because of the desire to understand glottal behavior and how it relates to perceived voice quality, there has been continued interest in explicitly estimating the glottal component of the speech signal. To this end, several inverse filtering (IF) algorithms have been proposed, but they are unreliable in practice because of the blind formulation of the separation problem. In an effort to develop a method that can bypass the challenging IF process, this thesis proposes a new glottal source information extraction method that relies on supervised machine learning to transform smoothed spectral representations of speech, which are already used in some of the most widely deployed and successful speech analysis applications, into a set of glottal source features. A transformation method based on Gaussian mixture regression (GMR) is presented and compared to current IF methods in terms of feature similarity, reliability, and speaker discrimination capability on a large speech corpus, and potential representations of the spectral envelope of speech are investigated for their ability to represent glottal source variation in a predictable manner. The proposed system was found to produce glottal source features that reasonably matched their IF counterparts in many cases, while being less susceptible to spurious errors. The development of the proposed method entailed a study into the aspects
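
    The Gaussian mixture regression (GMR) idea mentioned above, mapping a smoothed spectral representation to glottal source features, can be sketched by fitting a joint GMM and taking the conditional mean. The component count, dimensionalities and toy data below are assumptions for illustration, not the configuration used in the thesis.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      class GMR:
          """Gaussian mixture regression: fit a joint GMM over [x, y] and
          predict the conditional mean E[y | x]."""
          def __init__(self, n_components=4):
              self.gmm = GaussianMixture(n_components=n_components,
                                         covariance_type="full", random_state=0)

          def fit(self, X, Y):
              self.dx = X.shape[1]
              self.gmm.fit(np.hstack([X, Y]))
              return self

          def predict(self, X):
              dx = self.dx
              mu, S, w = self.gmm.means_, self.gmm.covariances_, self.gmm.weights_
              preds = []
              for x in X:
                  resp, cond = [], []
                  for k in range(len(w)):
                      mx, my = mu[k, :dx], mu[k, dx:]
                      Sxx, Syx = S[k, :dx, :dx], S[k, dx:, :dx]
                      # conditional mean of component k
                      cond.append(my + Syx @ np.linalg.solve(Sxx, x - mx))
                      # responsibility of component k given x
                      diff = x - mx
                      resp.append(w[k] * np.exp(-0.5 * diff @ np.linalg.solve(Sxx, diff))
                                  / np.sqrt(np.linalg.det(2 * np.pi * Sxx)))
                  resp = np.array(resp) / np.sum(resp)
                  preds.append(np.sum(resp[:, None] * np.array(cond), axis=0))
              return np.array(preds)

      # Toy mapping: 8-D "spectral envelope" -> 2-D "glottal features".
      rng = np.random.default_rng(4)
      X = rng.normal(size=(500, 8))
      Y = np.column_stack([np.tanh(X[:, 0] + X[:, 1]), X[:, 2] ** 2])
      model = GMR(n_components=4).fit(X, Y)
      print(np.mean((model.predict(X[:50]) - Y[:50]) ** 2))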

  18. Deconvolution and signal extraction in geophysics and acoustics

    NASA Astrophysics Data System (ADS)

    Sibul, Leon H.; Roan, Michael J.; Erling, Josh

    2002-11-01

    Deconvolution and signal extraction are fundamental signal processing techniques in geophysics and acoustics. An introductory overview of the standard second-order methods and minimum entropy deconvolution is presented. Limitations of the second-order methods are discussed and the need for more general methods is established. Minimum entropy deconvolution (MED), as proposed by Wiggins in 1977, is a technique for the deconvolution of seismic signals that overcomes limitations of the second-order method of deconvolution. The unifying conceptual framework for MED, as presented in Donoho's classical paper (1981), is discussed. The basic assumption of MED is that the input signals to the forward filter are independent, identically distributed non-Gaussian random processes. A forward convolution filter "makes" its output more Gaussian, which increases its entropy; the minimization of entropy restores the original non-Gaussian input. We also give an overview of recent developments in blind deconvolution (BDC), blind source separation (BSS), and blind signal extraction (BSE). Recent research in these areas uses information theoretic (IT) criteria (entropy, mutual information, K-L divergence, etc.) as optimization objective functions. The gradients of these objective functions are nonlinear functions, resulting in nonlinear algorithms. Some of the recursive algorithms for nonlinear optimization are reviewed.
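
    The Gaussianization principle underlying MED can be illustrated numerically: convolving a sparse, non-Gaussian sequence with a smoothing wavelet lowers its kurtosis, and MED seeks the inverse filter that restores a high-kurtosis (low-entropy) output. The sketch below only demonstrates that assumption on synthetic data; it does not implement the Wiggins MED iteration itself.

      import numpy as np
      from scipy.stats import kurtosis

      # Convolving a sparse reflectivity-like sequence with a wavelet makes the
      # output more Gaussian (kurtosis drops); MED looks for the inverse filter
      # that brings the kurtosis back up.
      rng = np.random.default_rng(5)
      spikes = rng.laplace(size=2000) * (rng.random(2000) < 0.05)   # sparse input
      wavelet = np.hanning(25)                                      # smearing filter
      blurred = np.convolve(spikes, wavelet, mode="same")
      print("input kurtosis: ", kurtosis(spikes))
      print("output kurtosis:", kurtosis(blurred))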

  19. [Extraction method of the visual graphical feature from biomedical data].

    PubMed

    Li, Jing; Wang, Jinjia; Hong, Wenxue

    2011-10-01

    Vector space transformations such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA) or kernel-based methods may be applied to features extracted from the field, which can improve classification performance. A barycentre graphical feature extraction method based on the star plot, a graphical representation of multi-dimensional data, was proposed in the present study. The question of how feature order affects the star plot representation was investigated, and a feature-ordering method based on an improved genetic algorithm (GA) was proposed. For several biomedical datasets, such as breast cancer and diabetes, the classification error obtained with the barycentre graphical feature of the star plot under the GA-optimized feature order is very promising compared with previously reported classification methods, and is superior to that of traditional feature extraction methods.
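
    One plausible reading of the barycentre graphical feature is sketched below: each feature value is drawn as a ray at an equally spaced angle, and the centroid of the resulting star-plot vertices is used as a two-dimensional descriptor. The feature normalization and the GA-based feature ordering described in the abstract are omitted here.

      import numpy as np

      def star_plot_barycentre(features):
          """Map a feature vector to star-plot vertices (one ray per feature,
          equally spaced angles) and return the barycentre of those vertices."""
          f = np.asarray(features, dtype=float)
          angles = 2 * np.pi * np.arange(len(f)) / len(f)
          x, y = f * np.cos(angles), f * np.sin(angles)
          return x.mean(), y.mean()

      # Two samples with the same mean feature value but different shape
      # produce different barycentres.
      print(star_plot_barycentre([0.9, 0.1, 0.5, 0.5]))
      print(star_plot_barycentre([0.5, 0.5, 0.5, 0.5]))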

  20. Extracting textural features from tactile sensors.

    PubMed

    Edwards, J; Lawry, J; Rossiter, J; Melhuish, C

    2008-09-01

    This paper describes an experiment to quantify texture using an artificial finger equipped with a microphone to detect frictional sound. Using a microphone to record tribological data is a biologically inspired approach that emulates the Pacinian corpuscle. Artificial surfaces were created to constrain the subsequent analysis to specific textures. Recordings of the artificial surfaces were made to create a library of frictional sounds for data analysis. These recordings were mapped to the frequency domain using fast Fourier transforms for direct comparison, manipulation and quantifiable analysis. Numerical features such as modal frequency and average value were calculated to analyze the data and compared with attributes generated from principal component analysis (PCA). It was found that numerical features work well for highly constrained data but cannot classify multiple textural elements. PCA groups textures according to a natural similarity. Classification of the recordings using k nearest neighbors shows a high accuracy for PCA data. Clustering of the PCA data shows that similar discs are grouped together with few classification errors. In contrast, clustering of numerical features produces erroneous classification by splitting discs between clusters. The temperature of the finger is shown to have a direct relation to some of the features and subsequent data in PCA.

  1. Feature Extraction Using an Unsupervised Neural Network

    DTIC Science & Technology

    1991-05-03

    A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing distinguishing features in the data is presented. A statistical framework for the parameter estimation problem associated with this neural network is given and its connection to exploratory projection pursuit methods is established. The network is shown to minimize a loss function (projection index) over a

  2. Feature extraction of arc tracking phenomenon

    NASA Technical Reports Server (NTRS)

    Attia, John Okyere

    1995-01-01

    This document outlines arc tracking signals -- both the data acquisition and signal processing. The objective is to obtain the salient features of the arc tracking phenomenon. As part of the signal processing, the power spectral density is obtained and used in a MATLAB program.
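
    The power spectral density step mentioned above could be reproduced, for instance, with Welch's method; the synthetic arc-like signal, sampling rate and segment length below are assumptions, and the original work used a MATLAB implementation rather than this Python sketch.

      import numpy as np
      from scipy import signal

      # Estimate the power spectral density of an arc-tracking-like signal.
      fs = 10_000.0
      t = np.arange(0, 1.0, 1 / fs)
      rng = np.random.default_rng(6)
      arc = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.normal(size=t.size)   # hum + noise
      f, Pxx = signal.welch(arc, fs=fs, nperseg=1024)
      print(f[np.argmax(Pxx)])        # dominant frequency, roughly 60 Hz here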

  3. Universal Feature Extraction for Traffic Identification of the Target Category

    PubMed Central

    Shen, Jian

    2016-01-01

    Traffic identification of the target category is currently a significant challenge for network monitoring and management. To identify the target category with pertinence, a feature extraction algorithm based on the subset with the highest proportion is presented in this paper. The method can be applied to the identification of any category that is designated as the target one, rather than being restricted to a specific category. We divide the process of feature extraction into two stages. In the stage of primary feature extraction, the feature subset is extracted from the dataset which has the highest proportion of the target category. In the stage of secondary feature extraction, features that can distinguish the target and interfering categories are added to the feature subset. Our theoretical analysis and experimental observations reveal that the proposed algorithm is able to extract fewer features with greater identification ability for the target category. Moreover, the universality of the proposed algorithm is demonstrated by an experiment in which every category is in turn set as the target one. PMID:27832103

  4. Heuristical Feature Extraction from LIDAR Data and Their Visualization

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lohani, B.

    2011-09-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have been focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clusters and simplify them to be sent to a visualization engine.
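
    The DBSCAN clustering step can be sketched on a toy 3-D point cloud as follows; the eps and min_samples values and the synthetic building/tree/noise points are illustrative assumptions, not the parameters used on the actual LiDAR data.

      import numpy as np
      from sklearn.cluster import DBSCAN

      # Density-based clustering of a toy 3-D point cloud: two dense "objects"
      # plus scattered noise points.
      rng = np.random.default_rng(7)
      building = rng.normal([10, 10, 5], 0.5, size=(300, 3))
      tree_pts = rng.normal([30, 20, 8], 0.8, size=(200, 3))
      noise = rng.uniform(0, 40, size=(50, 3))
      cloud = np.vstack([building, tree_pts, noise])

      labels = DBSCAN(eps=1.5, min_samples=10).fit_predict(cloud)
      print("clusters:", len(set(labels)) - (1 if -1 in labels else 0),
            "noise points:", int(np.sum(labels == -1)))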

  5. Feature Extraction for Structural Dynamics Model Validation

    SciTech Connect

    Farrar, Charles; Nishio, Mayuko; Hemez, Francois; Stull, Chris; Park, Gyuhae; Cornwell, Phil; Figueiredo, Eloi; Luscher, D. J.; Worden, Keith

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  6. Acoustic features to arousal and identity in disturbance calls of tree shrews (Tupaia belangeri).

    PubMed

    Schehka, Simone; Zimmermann, Elke

    2009-11-05

    Across mammalian species, comparable morphological and physiological constraints in the production of airborne vocalisations are suggested to lead to commonalities in the vocal conveyance of acoustic features to specific attributes of callers, such as arousal and individual identity. To explore this hypothesis we examined intra- and interindividual acoustic variation in chatter calls of tree shrews (Tupaia belangeri). The calls were induced experimentally by a disturbance paradigm and related to two defined arousal states of a subject. The arousal state of an animal was primarily operationalised by the habituation of the subject to a new environment and additionally determined by behavioural indicators of stress in tree shrews (tail-position and piloerection). We investigated whether the arousal state and indexical features of the caller, namely individual identity and sex, are conveyed acoustically. Frame-by-frame videographic and multiparametric sound analyses revealed that arousal and identity, but not sex of a caller reliably predicted spectral-temporal variation in sound structure. Furthermore, there was no effect of age or body weight on individual-specific acoustic features. Similar results in another call type of tree shrews and comparable findings in other mammalian lineages provide evidence that comparable physiological and morphological constraints in the production of airborne vocalisations across mammals lead to commonalities in acoustic features conveying arousal and identity, respectively.

  7. Image feature extraction using Gabor-like transform

    NASA Technical Reports Server (NTRS)

    Finegan, Michael K., Jr.; Wee, William G.

    1991-01-01

    Noisy and highly textured images were operated on with a Gabor-like transform. The results were evaluated to see if useful features could be extracted using spatio-temporal operators. The use of spatio-temporal operators allows for extraction of features containing simultaneous frequency and orientation information. This method allows important features, both specific and generic, to be extracted from images. The transformation was applied to industrial inspection imagery, in particular, a NASA space shuttle main engine (SSME) system for offline health monitoring. Preliminary results are given and discussed. Edge features were extracted from one of the test images. Because of the highly textured surface (even after scan line smoothing and median filtering), the Laplacian edge operator yields many spurious edges.
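
    A small Gabor filter bank of the kind alluded to above can be sketched with scikit-image; the frequencies, orientations and synthetic grating image are illustrative assumptions rather than the operators applied to the SSME imagery.

      import numpy as np
      from skimage.filters import gabor

      def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
          """Mean response magnitude of a small Gabor filter bank; each feature
          carries joint frequency and orientation information."""
          feats = []
          for f in frequencies:
              for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
                  real, imag = gabor(image, frequency=f, theta=theta)
                  feats.append(np.sqrt(real ** 2 + imag ** 2).mean())
          return np.array(feats)

      # Toy textured image: an oriented sinusoidal grating plus noise.
      rng = np.random.default_rng(8)
      yy, xx = np.mgrid[0:64, 0:64]
      img = np.sin(0.4 * xx) + 0.2 * rng.normal(size=(64, 64))
      print(gabor_features(img).round(3))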

  8. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  9. Target re-acquisition using acoustic features with an autonomous underwater vehicle-borne sonar

    NASA Astrophysics Data System (ADS)

    Edwards, Joseph; Schmidt, Henrik

    2003-10-01

    Concurrent mapping and localization (CML) is a technique for unsupervised feature-based mapping of unknown environments, and is an essential tool for autonomous robots. For land robots, CML can be applied using video, laser, or acoustic sensors, while for autonomous underwater vehicles (AUVs) the only effective transducer in most situations is sonar. In the Generic Oceanographic Array Technology Sonar (GOATS) experiment series, CML was effectively demonstrated using a single AUV. A further hurdle in the full implementation of AUV minehunting is to re-acquire and identify targets of interest. Target re-acquisition allows other vehicles to be called into a target location to further investigate with adaptive sonar geometries or alternative sensor suites designed for classification. In this work, the features in the CML-generated map are extended from only spatial coordinates to include acoustic features such as spectral response. It is demonstrated that the inclusion of acoustic features aids in the global positioning within the map, although the fine positioning is still accomplished through standard CML. In addition, areas that are sparsely populated with targets, e.g., a sandy coastline, are shown to be more readily navigable using acoustic features.

  10. Fatigue features study on the crankshaft material of 42CrMo steel using acoustic emission

    NASA Astrophysics Data System (ADS)

    Shi, Yue; Dong, Lihong; Wang, Haidou; Li, Guolu; Liu, Shenshui

    2016-09-01

    The crankshaft is an important engine component and, because of its high added value, an important application of remanufacturing. However, research on the fatigue failure of remanufactured crankshafts is still at a primary stage, so monitoring and investigating the fatigue failure of remanufactured crankshafts is crucial. In this paper, acoustic emission (AE) technology and machine vision are used to monitor the four-point bending fatigue of 42CrMo steel, the crankshaft material. The specimens are divided into two categories, with and without a pre-existing crack, which simulate the crankshaft and the crankshaft blank, respectively. Parameter-based AE techniques, the wavelet transform (WT) and SEM analysis are combined to identify the stages of fatigue failure; identifying these stages is the basis for using AE technology in the field of remanufacturing crankshafts. The experimental results show that the fatigue crack propagation is transgranular and that the fracture is brittle. The difference mainly depends on the form of crack initiation. Various AE signals are detected by the parameter analysis method. Wavelet threshold denoising and the WT are combined to extract the spectral features of AE signals at different fatigue failure stages.
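
    The wavelet threshold denoising step mentioned above can be sketched as follows with PyWavelets; the db4 wavelet, decomposition level and universal soft threshold are common illustrative choices, not necessarily those used in the study, and the AE-like burst is synthetic.

      import numpy as np
      import pywt

      def wavelet_denoise(x, wavelet="db4", level=4):
          """Soft-threshold wavelet denoising with a universal threshold
          estimated from the finest-scale detail coefficients."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
          thresh = sigma * np.sqrt(2 * np.log(len(x)))          # universal threshold
          denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                                    for c in coeffs[1:]]
          return pywt.waverec(denoised, wavelet)[:len(x)]

      # Toy AE-like burst buried in noise.
      rng = np.random.default_rng(9)
      t = np.linspace(0, 1, 2048)
      burst = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 200 * t)
      noisy = burst + 0.3 * rng.normal(size=t.size)
      clean = wavelet_denoise(noisy)
      print(np.mean((clean - burst) ** 2) < np.mean((noisy - burst) ** 2))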

  11. EEG signal features extraction based on fractal dimension.

    PubMed

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-01-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance.
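
    As one standard fractal-dimension estimator of the kind discussed above (though not one of the novel indices introduced in the paper), the Higuchi fractal dimension of a 1-D signal can be computed as follows; k_max and the toy signals are illustrative choices.

      import numpy as np

      def higuchi_fd(x, k_max=10):
          """Higuchi fractal dimension of a 1-D signal."""
          x = np.asarray(x, dtype=float)
          N = len(x)
          lengths = []
          for k in range(1, k_max + 1):
              Lk = []
              for m in range(k):
                  idx = np.arange(m, N, k)
                  if len(idx) < 2:
                      continue
                  dist = np.abs(np.diff(x[idx])).sum()
                  norm = (N - 1) / ((len(idx) - 1) * k)    # curve-length normalization
                  Lk.append(dist * norm / k)
              lengths.append(np.mean(Lk))
          # FD is the slope of log(L(k)) versus log(1/k).
          k_vals = np.arange(1, k_max + 1)
          slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
          return slope

      rng = np.random.default_rng(10)
      print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 1000))))   # smooth, FD near 1
      print(higuchi_fd(rng.normal(size=1000)))                     # irregular, FD near 2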

  12. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
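
    The extraction-plus-classification strategy described above can be sketched with scikit-learn, pairing PCA or ICA with a linear SVM; the random "ROI chip" data, component counts and kernel are placeholders for the actual ATR imagery and settings, and the FROC evaluation is omitted.

      import numpy as np
      from sklearn.decomposition import PCA, FastICA
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline

      # Toy ROI chips (flattened 16x16 patches) for two classes.
      rng = np.random.default_rng(11)
      X0 = rng.normal(0.0, 1.0, size=(100, 256))
      X1 = rng.normal(0.5, 1.0, size=(100, 256))
      X, y = np.vstack([X0, X1]), np.repeat([0, 1], 100)

      for extractor in (PCA(n_components=20), FastICA(n_components=20, random_state=0)):
          clf = make_pipeline(extractor, SVC(kernel="linear")).fit(X, y)
          print(type(extractor).__name__, clf.score(X, y))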

  13. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    ,

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  14. Influence of Architectural Features and Styles on Various Acoustical Measures in Churches

    NASA Astrophysics Data System (ADS)

    Carvalho, Antonio Pedro Oliveira De.

    This work reports on acoustical field measurements made in a major survey of 41 Catholic churches in Portugal that were built in the last 14 centuries. A series of monaural and binaural acoustical measurements was taken at multiple source/receiver positions in each church using the impulse response with noise burst method. The acoustical measures were Reverberation Time (RT), Early Decay Time (EDT), Clarity (C80), Definition (D), Center Time (TS), Loudness (L), Bass Ratios based on the Reverberation Time and on Loudness (BR_RT and BR_L), Rapid Speech Transmission Index (RASTI), and the binaural Coherence (COH). The scope of this research is to investigate how the acoustical performance of Catholic churches relates to their architectural features and to determine simple formulas to predict acoustical measures by the use of elementary architectural parameters. Prediction equations were defined among the acoustical measures to estimate values at individual locations within each room as well as the mean values in each church. Best fits with R^2 ≈ 0.9 were not uncommon among many of the measures. Within and interchurch differences in the data for the acoustical measures were also analyzed. The variations of RT and EDT were identified as much smaller than the variations of the other measures. The churches tested were grouped in eight architectural styles, and the effect of their evolution through time on these acoustical measures was investigated. Statistically significant differences were found regarding some architectural styles that can be traced to historical changes in Church history, especially to the Reformation period. Prediction equations were defined to estimate mean acoustical measures by the use of fifteen simple architectural parameters. The use of the Sabine and Eyring reverberation time equations was tested. The effect of coupled spaces was analyzed, and a new algorithm for the application of the Sabine equation was developed, achieving an average of

  15. Adaptive feature extraction using sparse coding for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Haining; Liu, Chengliang; Huang, Yixiang

    2011-02-01

    In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description for the underlying principle of mammalian sensory systems in processing information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficients solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for vibration analysis. With the purpose of diagnosing the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals trying to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of vibration signals; sparse features are formulated in terms of activations of atoms. The multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
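
    The dictionary-learning and sparse-coding steps of such a scheme can be sketched with scikit-learn; the window length, dictionary size, sparsity level and synthetic vibration signal below are illustrative assumptions, and the shift-invariant sparse coding and LDA classifier used in the paper are not reproduced here.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

      # Learn a dictionary from windowed vibration segments, then use the sparse
      # activations as features.
      rng = np.random.default_rng(12)
      t = np.arange(0, 2.0, 1e-3)
      vibration = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
      segments = vibration[: len(t) // 100 * 100].reshape(-1, 100)   # 100-sample windows

      dico = MiniBatchDictionaryLearning(n_components=16, alpha=1.0, random_state=0)
      dico.fit(segments)
      codes = sparse_encode(segments, dico.components_, algorithm="omp",
                            n_nonzero_coefs=3)
      print(codes.shape, np.mean(codes != 0))      # sparse activations as features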

  16. Feature extraction with LIDAR data and aerial images

    NASA Astrophysics Data System (ADS)

    Mao, Jianhua; Liu, Yanjing; Cheng, Penggen; Li, Xianhua; Zeng, Qihong; Xia, Jing

    2006-10-01

    Raw LIDAR data are an irregularly spaced 3D point cloud including reflections from bare ground, buildings, vegetation, vehicles, etc., and the first task in analyzing the point cloud is feature extraction. However, the interpretability of the LIDAR point cloud is often limited because no object information is provided, and the complexity of terrain and object morphology makes it impossible for a single operator to classify the entire point cloud with perfect precision. In this paper, a hierarchical method for feature extraction with LIDAR data and aerial images is discussed. The aerial images provide information on object shape and spatial distribution, and hierarchical classification of features makes it easy to apply automatic filters progressively. The experimental results show that, using this method, it is possible to detect more object information and obtain a better feature extraction result than by using automatic filters alone.

  17. New approach in features extraction for EEG signal detection.

    PubMed

    Guerrero-Mosquera, Carlos; Vazquez, Angel Navia

    2009-01-01

    This paper describes a new approach to feature extraction using time-frequency distributions (TFDs) for detecting epileptic seizures and identifying abnormalities in the electroencephalogram (EEG). In particular, the method extracts features using the smoothed pseudo Wigner-Ville distribution combined with the McAulay-Quatieri sinusoidal model and identifies abnormal neural discharges. We propose a new feature based on the length of the track that, combined with energy and frequency features, allows a continuous energy trace to be isolated from other oscillations when an epileptic seizure is beginning. We evaluate our approach using data consisting of 16 different seizures from 6 epileptic patients. The results show that our extraction method is a suitable approach for automatic seizure detection, and it opens the possibility of formulating new criteria to detect and analyze abnormal EEGs.

  18. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    In India, limited awareness of the needs of deaf and hard-of-hearing people widens the communication gap between the deaf and hard-of-hearing community and the hearing population. Sign language has been developed so that deaf and hard-of-hearing people can convey their messages by generating different sign patterns. The scale invariant feature transform (SIFT) was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian Sign Language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.
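
    The SIFT keypoint and descriptor extraction phase can be sketched with OpenCV as follows; the synthetic "gesture" image is a placeholder for a segmented hand-gesture frame, and the per-phase timing reported in the paper is not measured here.

      import numpy as np
      import cv2

      # Detect SIFT keypoints and descriptors on a toy gesture-like image.
      img = np.zeros((200, 200), dtype=np.uint8)
      cv2.rectangle(img, (60, 60), (140, 140), 255, -1)   # stand-in for a hand region
      cv2.circle(img, (100, 60), 25, 180, -1)

      sift = cv2.SIFT_create()
      keypoints, descriptors = sift.detectAndCompute(img, None)
      print(len(keypoints), None if descriptors is None else descriptors.shape)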

  19. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    NASA Astrophysics Data System (ADS)

    Quanqing, Zhu; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-07-01

    In this paper, we present a method for feature extraction from low contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, the Lee filtering method is adopted for pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally, the common linking method is adopted and the characteristic parameters of the magnetic domains are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over traditional methods for feature extraction from low contrast images.

  20. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    In India, limited awareness of the needs of deaf and hard-of-hearing people widens the communication gap between the deaf and hard-of-hearing community and the hearing population. Sign language has been developed so that deaf and hard-of-hearing people can convey their messages by generating different sign patterns. The scale invariant feature transform (SIFT) was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian Sign Language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.

  1. Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech

    ERIC Educational Resources Information Center

    Tyson, Na'im R.

    2012-01-01

    In an attempt to understand what acoustic/auditory feature sets motivated transcribers towards certain labeling decisions, I built machine learning models that were capable of discriminating between canonical and non-canonical vowels excised from the Buckeye Corpus. Specifically, I wanted to model when the dictionary form and the transcribed-form…

  2. Primary Progressive Apraxia of Speech: Clinical Features and Acoustic and Neurologic Correlates

    PubMed Central

    Strand, Edythe A.; Clark, Heather; Machulda, Mary; Whitwell, Jennifer L.; Josephs, Keith A.

    2015-01-01

    Purpose This study summarizes 2 illustrative cases of a neurodegenerative speech disorder, primary progressive apraxia of speech (AOS), as a vehicle for providing an overview of the disorder and an approach to describing and quantifying its perceptual features and some of its temporal acoustic attributes. Method Two individuals with primary progressive AOS underwent speech-language and neurologic evaluations on 2 occasions, ranging from 2.0 to 7.5 years postonset. Performance on several tests, tasks, and rating scales, as well as several acoustic measures, were compared over time within and between cases. Acoustic measures were compared with performance of control speakers. Results Both patients initially presented with AOS as the only or predominant sign of disease and without aphasia or dysarthria. The presenting features and temporal progression were captured in an AOS Rating Scale, an Articulation Error Score, and temporal acoustic measures of utterance duration, syllable rates per second, rates of speechlike alternating motion and sequential motion, and a pairwise variability index measure. Conclusions AOS can be the predominant manifestation of neurodegenerative disease. Clinical ratings of its attributes and acoustic measures of some of its temporal characteristics can support its diagnosis and help quantify its salient characteristics and progression over time. PMID:25654422

  3. Adaptive spectral window sizes for feature extraction from optical spectra

    NASA Astrophysics Data System (ADS)

    Kan, Chih-Wen; Lee, Andy Y.; Pham, Nhi; Nieman, Linda T.; Sokolov, Konstantin; Markey, Mia K.

    2008-02-01

    We propose an approach to adaptively adjust the spectral window size used to extract features from optical spectra. Previous studies have employed spectral features extracted by dividing the spectra into several spectral windows of a fixed width; however, the choice of spectral window size was arbitrary. We hypothesize that by adaptively adjusting the spectral window sizes, the trends in the data will be captured more accurately. Our method was tested on a diffuse reflectance spectroscopy dataset obtained in a study of oblique polarization reflectance spectroscopy of oral mucosa lesions. The diagnostic task is to classify lesions into one of four histopathology groups: normal, benign, mild dysplasia, or severe dysplasia (including carcinoma). Nine features were extracted from each of the spectral windows. We computed the area under the receiver operating characteristic curve (AUC) to select the most discriminatory wavelength intervals. We performed pairwise classifications using linear discriminant analysis (LDA) with leave-one-out cross validation. The results showed that for discriminating benign lesions from mild or severe dysplasia, the adaptive spectral window size features achieved an AUC of 0.84, while a fixed spectral window size of 20 nm had an AUC of 0.71, and an AUC of 0.64 was achieved with a large window containing all wavelengths. The AUCs of all feature combinations were also calculated. These results suggest that the new adaptive spectral window size method effectively extracts features that enable accurate classification of oral mucosa lesions.
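
    A simplified stand-in for the window-scoring idea is sketched below: candidate windows of several widths are scored by the AUC of their mean intensity and the best ones kept, letting the data pick the width. The widths, step size and toy spectra are assumptions; the paper's actual adaptive scheme and the nine per-window features are not reproduced.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def best_windows(spectra, labels, sizes=(10, 20, 40), top_k=3):
          """Score candidate windows of several widths by the AUC of their mean
          intensity for a binary task and keep the top-scoring ones."""
          n_bands = spectra.shape[1]
          scores = []
          for w in sizes:
              for start in range(0, n_bands - w + 1, w // 2):
                  feature = spectra[:, start:start + w].mean(axis=1)
                  auc = roc_auc_score(labels, feature)
                  scores.append((max(auc, 1 - auc), start, w))   # direction-agnostic
          return sorted(scores, reverse=True)[:top_k]

      # Toy spectra: class 1 has elevated reflectance in bands 40-60.
      rng = np.random.default_rng(13)
      spectra = rng.normal(size=(80, 100))
      labels = np.repeat([0, 1], 40)
      spectra[labels == 1, 40:60] += 1.0
      print(best_windows(spectra, labels))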

  4. Fatigue Level Estimation of Bill Based on Acoustic Signal Feature by Supervised SOM

    NASA Astrophysics Data System (ADS)

    Teranishi, Masaru; Omatu, Sigeru; Kosaka, Toshihisa

    Fatigued bills have a harmful influence on the daily operation of automated teller machines (ATMs). To make fatigued-bill classification more efficient, an automatic classification method is desired. We propose a new method to estimate the bending rigidity of a bill from the acoustic signal features of banking machines. The estimated bending rigidities are used as a continuous fatigue level for classifying fatigued bills. Using a supervised self-organizing map (supervised SOM), we effectively estimate the bending rigidity from the acoustic energy pattern alone. Experimental results with real bill samples show the effectiveness of the proposed method.

  5. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  6. Acoustical features of two Mayan monuments at Chichen Itza: Accident or design?

    NASA Astrophysics Data System (ADS)

    Lubman, David

    2002-11-01

    Chichen Itza dominated the early postclassic Maya world, ca. 900-1200 C.E. Two of its colossal monuments, the Great Ball Court and the temple of Kukulkan, reflect the sophisticated, hybrid culture of a Mexicanized Maya civilization. The architecture seems intended for ceremony and ritual drama. Deducing ritual practices will advance the understanding of a lost civilization, but what took place there is largely unknown. Perhaps acoustical science can add value. Unexpected and unusual acoustical features can be interpreted as intriguing clues or irrelevant accidents. Acoustical advocates believe that, when combined with an understanding of the Maya worldview, acoustical features can provide unique insights into how the Maya designed and used theater spaces. At Chichen Itza's monuments, sound reinforcement features improve rulers' and priests' ability to address large crowds, and Ball Court whispering galleries permit speech communication over unexpectedly large distances. Handclaps at Kukulkan stimulate chirps that mimic a revered bird ("Kukul"), thus reinforcing cultic beliefs. A ball striking the playing field wall stimulates flutter echoes at the Great Ball Court; their strength and duration arguably had dramatic, mythic, and practical significance. Interpretations of the possible mythic, magic, and political significance of sound phenomena at these Maya monuments strongly suggest intentional design.

  7. Texture feature extraction methods for microcalcification classification in mammograms

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Pourabdollah-Nezhad, Siamak; Rafiee Rad, Farshid

    2000-06-01

    We present development, application, and performance evaluation of three different texture feature extraction methods for classification of benign and malignant microcalcifications in mammograms. The steps of the work accomplished are as follows. (1) A total of 103 regions containing microcalcifications were selected from a mammographic database. (2) For each region, texture features were extracted using three approaches: co-occurrence based method of Haralick; wavelet transformations; and multi-wavelet transformations. (3) For each set of texture features, most discriminating features and their optimal weights were found using a real-valued genetic algorithm (GA) and a training set. For each set of features and weights, a KNN classifier and a malignancy criterion were used to generate the corresponding ROC curve. The malignancy of a given sample was defined as the number of malignant neighbors among its K nearest neighbors. The GA found a population with the largest area under the ROC curve. (4) The best results obtained using each set of features were compared. The best set of features generated areas under the ROC curve ranging from 0.82 to 0.91. The multi-wavelet method outperformed the other two methods, and the wavelet features were superior to the Haralick features. Among the multi-wavelet methods, redundant initialization generated superior results compared to non-redundant initialization. For the best method, a true positive fraction larger than 0.85 and a false positive fraction smaller than 0.1 were obtained.
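
    The wavelet branch of the comparison above can be sketched by using subband energies as texture descriptors; the db2 wavelet, two decomposition levels and the synthetic ROI are illustrative assumptions, and the GA-weighted KNN and ROC evaluation are omitted.

      import numpy as np
      import pywt

      def wavelet_texture_features(roi, wavelet="db2", level=2):
          """Energy of each 2-D wavelet subband as a texture descriptor for an ROI."""
          coeffs = pywt.wavedec2(roi, wavelet, level=level)
          feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
          for detail_level in coeffs[1:]:
              feats.extend(np.mean(band ** 2) for band in detail_level)  # H, V, D
          return np.array(feats)

      # Toy ROI: smooth background with a few bright "microcalcification" spots.
      rng = np.random.default_rng(14)
      roi = rng.normal(100.0, 2.0, size=(64, 64))
      roi[20, 20] = roi[40, 45] = 180.0
      print(wavelet_texture_features(roi).round(2))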

  8. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2015-01-01

    Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains as an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.

  9. Research on feature data extraction algorithms of printing

    NASA Astrophysics Data System (ADS)

    Sun, Zhihui; Ma, Jianzhuang

    2013-07-01

    Ink cell images from electric-engraving printing taken under complex lighting conditions cannot yield accurate edge information with traditional image processing algorithms, so the feature data cannot be accurately extracted. This paper uses an improved P&M equation for ink cell image smoothing, while eight-direction Sobel-based edge detection is used to search for the ink cell edges, and an edge tracking algorithm records the edge point coordinates. These algorithms effectively reduce the influence of uneven illumination and accurately extract the feature data of the ink cell.

  10. Intelligibility Evaluation of Pathological Speech through Multigranularity Feature Extraction and Optimization

    PubMed Central

    Ma, Lin; Zhang, Mancai

    2017-01-01

    Pathological speech usually refers to speech distortion resulting from illness or other biological insults. The assessment of pathological speech plays an important role in assisting experts, while automatic evaluation of speech intelligibility is difficult because the speech is usually nonstationary and mutational. In this paper, we carry out an independent innovation in feature extraction and reduction, and we describe a multigranularity combined feature scheme that is optimized by a hierarchical visual method. A novel method of generating the feature set based on the S-transform and chaotic analysis is proposed. The set comprises basic acoustic features (BAFS, 430 dimensions), local spectral characteristics (MSCC, Mel S-transform cepstrum coefficients, 84 dimensions), and chaotic features (12 dimensions). Finally, a radar chart and the F-score are used to optimize the features through hierarchical visual fusion. The feature set could be reduced from 526 to 96 dimensions on the NKI-CCRT corpus and to 104 dimensions on the SVD corpus. The experimental results show that the new features, classified by a support vector machine (SVM), give the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus and 78.7% on the SVD corpus. The proposed method is thus shown to be effective and reliable for pathological speech intelligibility evaluation. PMID:28194222
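
    The F-score part of the feature-optimization step can be sketched as follows (a two-class Fisher-type score per feature); the toy feature matrix is illustrative, and the radar-chart visual fusion described above is not reproduced.

      import numpy as np

      def f_score(X, y):
          """Two-class F-score for each feature column: ratio of between-class
          separation to within-class scatter."""
          Xp, Xn = X[y == 1], X[y == 0]
          mean, mp, mn = X.mean(0), Xp.mean(0), Xn.mean(0)
          num = (mp - mean) ** 2 + (mn - mean) ** 2
          den = Xp.var(0, ddof=1) + Xn.var(0, ddof=1)
          return num / den

      # Toy feature matrix: only the first two of six features are informative.
      rng = np.random.default_rng(15)
      y = np.repeat([0, 1], 100)
      X = rng.normal(size=(200, 6))
      X[y == 1, :2] += 1.5
      scores = f_score(X, y)
      print(np.argsort(scores)[::-1])     # informative features rank first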

  11. Intelligibility Evaluation of Pathological Speech through Multigranularity Feature Extraction and Optimization.

    PubMed

    Fang, Chunying; Li, Haifeng; Ma, Lin; Zhang, Mancai

    2017-01-01

    Pathological speech usually refers to speech distortion resulting from illness or other biological insults. The assessment of pathological speech plays an important role in assisting experts, while automatic evaluation of speech intelligibility is difficult because the speech is usually nonstationary and mutational. In this paper, we carry out an independent innovation in feature extraction and reduction, and we describe a multigranularity combined feature scheme that is optimized by a hierarchical visual method. A novel method of generating the feature set based on the S-transform and chaotic analysis is proposed. The set comprises basic acoustic features (BAFS, 430 dimensions), local spectral characteristics (MSCC, Mel S-transform cepstrum coefficients, 84 dimensions), and chaotic features (12 dimensions). Finally, a radar chart and the F-score are used to optimize the features through hierarchical visual fusion. The feature set could be reduced from 526 to 96 dimensions on the NKI-CCRT corpus and to 104 dimensions on the SVD corpus. The experimental results show that the new features, classified by a support vector machine (SVM), give the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus and 78.7% on the SVD corpus. The proposed method is thus shown to be effective and reliable for pathological speech intelligibility evaluation.

  12. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step is only used to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet transform for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which decreases the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which marks the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape-adaptive Gabor-wavelet technique improves the recognition rate.

  13. Impervious surface extraction using coupled spectral-spatial features

    NASA Astrophysics Data System (ADS)

    Yu, Xinju; Shen, Zhanfeng; Cheng, Xi; Xia, Liegang; Luo, Jiancheng

    2016-07-01

    Accurate extraction of urban impervious surface data from high-resolution imagery remains a challenging task because of the spectral heterogeneity of complex urban land-cover types. Since the high-resolution imagery simultaneously provides plentiful spectral and spatial features, the accurate extraction of impervious surfaces depends on effective extraction and integration of spectral-spatial multifeatures. Different features have different importance for determining a certain class; traditional multifeature fusion methods that treat all features equally during classification cannot utilize the joint effect of multifeatures fully. A fusion method of distance metric learning (DML) and support vector machines is proposed to find the impervious and pervious subclasses from Chinese ZiYuan-3 (ZY-3) imagery. In the procedure of finding appropriate spectral and spatial feature combinations with DML, optimized distance metric was obtained adaptively by learning from the similarity side-information generated from labeled samples. Compared with the traditional vector stacking method that used each feature equally for multifeatures fusion, the approach achieves an overall accuracy of 91.6% (4.1% higher than the prior one) for a suburban dataset, and an accuracy of 92.7% (3.4% higher) for a downtown dataset, indicating the effectiveness of the method for accurately extracting urban impervious surface data from ZY-3 imagery.

  14. Automated blood vessel extraction using local features on retinal images

    NASA Astrophysics Data System (ADS)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relation of neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are weak for rotated images, so the method was improved by additionally computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features based on 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the first ANN, a Gabor filter, a double-ring filter, and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC output clear white values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960. The result can be used for the quantitative analysis of blood vessels.
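
    A minimal Python sketch of HLAC-style features, assuming a binary vessel mask and using only a handful of first-order offset masks (the paper uses 105 mask patterns fed to an ANN); the random patch is a placeholder:

      import numpy as np

      def hlac_features(binary_img):
          # Sums of products of the centre pixel with shifted copies of itself:
          # a small subset of the standard 3x3 HLAC mask patterns.
          f = binary_img.astype(float)
          offsets = [(0, 1), (1, 0), (1, 1), (1, -1)]
          feats = [f.sum()]  # 0th-order term (no shift)
          for dy, dx in offsets:
              shifted = np.roll(np.roll(f, dy, axis=0), dx, axis=1)
              feats.append((f * shifted).sum())
          return np.array(feats)

      # Toy usage on a random binary patch standing in for a vessel mask.
      patch = (np.random.rand(32, 32) > 0.8).astype(np.uint8)
      print(hlac_features(patch))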

  15. Extraction of features from 3D laser scanner cloud data

    NASA Astrophysics Data System (ADS)

    Chan, Vincent H.; Bradley, Colin H.; Vickers, Geoffrey W.

    1997-12-01

    One of the roadblocks on the path of automated reverse engineering has been the extraction of useful data from the copious range data generated from 3-D laser scanning systems. A method to extract the relevant features of a scanned object is presented. A 3-D laser scanner is automatically directed to obtain discrete laser cloud data on each separate patch that constitutes the object's surface. With each set of cloud data treated as a separate entity, primitives are fitted to the data resulting in a geometric and topologic database. Using a feed-forward neural network, the data is analyzed for geometric combinations that make up machine features such as through holes and slots. These features are required for the reconstruction of the solid model by a machinist or feature-based CAM algorithms, thus completing the reverse engineering cycle.

  16. Image feature extraction based multiple ant colonies cooperation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on multiple ant colonies cooperation. Firstly, a low resolution version of the input image is created using Gaussian pyramid algorithm, and two ant colonies are spread on the source image and low resolution image respectively. The ant colony on the low resolution image uses phase congruency as its inspiration information, while the ant colony on the source image uses gradient magnitude as its inspiration information. These two ant colonies cooperate to extract salient image features through sharing a same pheromone matrix. After the optimization process, image features are detected based on thresholding the pheromone matrix. Since gradient magnitude and phase congruency of the input image are used as inspiration information of the ant colonies, our algorithm shows higher intelligence and is capable of acquiring more complete and meaningful image features than other simpler edge detectors.

  17. Feature extraction from multiple data sources using genetic programming.

    SciTech Connect

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.; Eads, D. R.; Galassi, M. C.; Harvey, N. R.; Perkins, S. J.; Porter, R. B.; Theiler, J. P.; Young, A. C.; Bloch, J. J.; David, N. A.; Esch-Mosher, D. M.

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  18. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision making process. A unity relation that must exist among the pixels of an object was defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.

  19. Aero-acoustic features of internal and external chamfered Hartmann whistles: A comparative study

    NASA Astrophysics Data System (ADS)

    Narayanan, S.; Srinivasan, K.; Sundararajan, T.

    2014-02-01

    The efficiency of chamfering at the mouth of Hartmann whistles in generating higher acoustic emission levels is experimentally demonstrated in this paper. The relevant parameters of the present work comprise internal and external chamfer angles (15°, 30°), cavity length, nozzle-to-cavity distance, and jet pressure ratio. The frequency and amplitude characteristics of internally and externally chamfered Hartmann whistles are compared in detail to ascertain the role of chamfering in enhancing acoustic radiation. The higher frequencies of the internally chamfered whistles compared with the external ones indicate that internal chamfering amplifies the resonance. It is observed that the internally chamfered whistles exhibit higher directivity than the externally chamfered ones. Further, the acoustic power and efficiency are also higher for the internally chamfered whistles. The shadowgraph sequences reveal the variation in flow-shock oscillations as well as the spill-over features at the mouth of internally and externally chamfered cavities. The larger mass flow, and the subsequent increase in spill-over due to the enlarged mouth of internally chamfered whistles, leads to the generation of higher-intensity acoustic radiation than in the externally chamfered ones. Thus, the internal chamfer proves to be the best passive control device for augmenting sound pressure levels and acoustic efficiencies in resonance cavities.

  20. Alarming features: birds use specific acoustic properties to identify heterospecific alarm calls

    PubMed Central

    Fallow, Pamela M.; Pitcher, Benjamin J.; Magrath, Robert D.

    2013-01-01

    Vertebrates that eavesdrop on heterospecific alarm calls must distinguish alarms from sounds that can safely be ignored, but the mechanisms for identifying heterospecific alarm calls are poorly understood. While vertebrates learn to identify heterospecific alarms through experience, some can also respond to unfamiliar alarm calls that are acoustically similar to conspecific alarm calls. We used synthetic calls to test the role of specific acoustic properties in alarm call identification by superb fairy-wrens, Malurus cyaneus. Individuals fled more often in response to synthetic calls with peak frequencies closer to those of conspecific calls, even if other acoustic features were dissimilar to that of fairy-wren calls. Further, they then spent more time in cover following calls that had both peak frequencies and frequency modulation rates closer to natural fairy-wren means. Thus, fairy-wrens use similarity in specific acoustic properties to identify alarms and adjust a two-stage antipredator response. Our study reveals how birds respond to heterospecific alarm calls without experience, and, together with previous work using playback of natural calls, shows that both acoustic similarity and learning are important for interspecific eavesdropping. More generally, this study reconciles contrasting views on the importance of alarm signal structure and learning in recognition of heterospecific alarms. PMID:23303539

  1. Data Feature Extraction for High-Rate 3-Phase Data

    SciTech Connect

    2016-10-18

    This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state is also extracted. The features can be used to detect abnormal signals. This algorithm is developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.
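
    A minimal Python sketch of this kind of feature extraction for one phase of the signal, assuming an envelope from the analytic signal and a simple threshold-based start-time estimate; the signal, threshold, and steady-state heuristic are illustrative, not the actual pipeline used on Exxeno's data:

      import numpy as np
      from scipy.signal import hilbert

      def start_and_envelope(x, fs, threshold_ratio=0.1):
          # Envelope via the analytic signal; start time = first sample above threshold.
          envelope = np.abs(hilbert(x))
          start_idx = int(np.argmax(envelope > threshold_ratio * envelope.max()))
          steady_mag = float(np.median(envelope[len(x) // 2:]))  # crude steady-state magnitude
          return start_idx / fs, steady_mag, envelope

      # Toy example: a 60 Hz "phase" that switches on at t = 0.2 s.
      fs = 5000
      t = np.arange(0, 1.0, 1 / fs)
      x = np.where(t > 0.2, np.sin(2 * np.pi * 60 * t), 0.0) + 0.01 * np.random.randn(t.size)
      t_start, steady_mag, env = start_and_envelope(x, fs)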

  2. Influences of gender and anthropometric features on inspiratory inhaler acoustics and peak inspiratory flow rate.

    PubMed

    Taylor, Terence E; Holmes, Martin S; Sulaiman, Imran; Costello, Richard W; Reilly, Richard B

    2015-01-01

    Inhalers are hand-held devices used to treat chronic respiratory diseases such as asthma and chronic obstructive pulmonary disease (COPD). Medication is delivered from an inhaler to the user through an inhalation maneuver. It is unclear whether gender and anthropometric features such as age, height, weight and body mass index (BMI) influence the acoustic properties of inspiratory inhaler sounds and peak inspiratory flow rate (PIFR) in inhalers. In this study, healthy male (n=9) and female (n=7) participants were asked to inhale at an inspiratory flow rate (IFR) of 60 L/min in four commonly used inhalers (Turbuhaler(™), Diskus(™), Ellipta(™) and Evohaler(™)). Ambient inspiratory sounds were recorded from the mouthpiece of each inhaler and over the trachea of each participant. Each participant's PIFR was also recorded for each of the four inhalers. Results showed that gender and anthropometric features have the potential to influence the spectral properties of ambient and tracheal inspiratory inhaler sounds. It was also observed that males achieved statistically significantly higher PIFRs in each inhaler in comparison to females (p < 0.05). Acoustic features were found to be significantly different across inhalers suggesting that acoustic features are modulated by the inhaler design and its internal resistance to airflow.

  3. Cascade Classification with Adaptive Feature Extraction for Arrhythmia Detection.

    PubMed

    Park, Juyoung; Kang, Mingon; Gao, Jean; Kim, Younghoon; Kang, Kyungtae

    2017-01-01

    Detecting arrhythmia from ECG data is now feasible on mobile devices, but in this environment it is necessary to trade computational efficiency against accuracy. We propose an adaptive strategy for feature extraction that only considers normalized beat morphology features when running in a resource-constrained environment; but in a high-performance environment it takes account of a wider range of ECG features. This process is augmented by a cascaded random forest classifier. Experiments on data from the MIT-BIH Arrhythmia Database showed classification accuracies from 96.59% to 98.51%, which are comparable to state-of-the art methods.

  4. Validation points generation for LiDAR-extracted hydrologic features

    NASA Astrophysics Data System (ADS)

    Felicen, M. M.; De La Cruz, R. M.; Olfindo, N. T.; Borlongan, N. J. B.; Ebreo, D. J. R.; Perez, A. M. C.

    2016-10-01

    This paper discusses a novel way of generating sampling points of hydrologic features, specifically streams, irrigation network and inland wetlands, that could provide a promising measure of accuracy using combinations of traditional statistical sampling methods. Traditional statistical sampling techniques such as simple random sampling, systematic sampling, stratified sampling and disproportionate random sampling were all designed to generate points in an area where all the cells are classified and subjected to actual field validation. However, these sampling techniques are not applicable when generating points along linear features. This paper presents the Weighted Disproportionate Stratified Systematic Random Sampling (WDSSRS), a tool that combines the systematic and disproportionate stratified random sampling methods in generating points for accuracy computation. This tool makes use of a map series boundary shapefile covering around 27 by 27 kilometers at a scale of 1:50000, and the LiDAR-extracted hydrologic features shapefiles (e.g. wetland polygons and linear features of stream and irrigation network). Using the map sheet shapefile, a 10 x 10 grid is generated, and grid cells with water and non-water features are tagged accordingly. Cells with water features are checked for the presence of intersecting linear features, and the intersections are given higher weights in the selection of validation points. The grid cells with non-intersecting linear features are then evaluated and the remaining points are generated randomly along these features. For grid cells with nonwater features, the sample points are generated randomly.

  5. Actively controlled multiple-sensor system for feature extraction

    NASA Astrophysics Data System (ADS)

    Daily, Michael J.; Silberberg, Teresa M.

    1991-08-01

    Typical vision systems which attempt to extract features from a visual image of the world for the purposes of object recognition and navigation are limited by the use of a single sensor and no active sensor control capability. To overcome limitations and deficiencies of rigid single sensor systems, more and more researchers are investigating actively controlled, multisensor systems. To address these problems, we have developed a self-calibrating system which uses active multiple sensor control to extract features of moving objects. A key problem in such systems is registering the images, that is, finding correspondences between images from cameras of differing focal lengths, lens characteristics, and positions and orientations. The authors first propose a technique which uses correlation of edge magnitudes for continuously calibrating pan and tilt angles of several different cameras relative to a single camera with a wide angle field of view, which encompasses the views of every other sensor. A simulation of a world of planar surfaces, visual sensors, and a robot platform used to test active control for feature extraction is then described. Motion in the field of view of at least one sensor is used to center the moving object for several sensors, which then extract object features such as color, boundary, and velocity from the appropriate sensors. Results are presented from real cameras and from the simulated world.

  6. Efficient and robust feature extraction by maximum margin criterion.

    PubMed

    Li, Haifeng; Jiang, Tao; Zhang, Keshu

    2006-01-01

    In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the two most popular linear dimensionality reduction methods. However, PCA is not very effective for the extraction of the most discriminant features, and LDA is not stable due to the small sample size problem. In this paper, we propose some new (linear and nonlinear) feature extractors based on maximum margin criterion (MMC). Geometrically, feature extractors based on MMC maximize the (average) margin between classes after dimensionality reduction. It is shown that MMC can represent class separability better than PCA. As a connection to LDA, we may also derive LDA from MMC by incorporating some constraints. By using some other constraints, we establish a new linear feature extractor that does not suffer from the small sample size problem, which is known to cause serious stability problems for LDA. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in the paper. Our extensive experiments demonstrate that the new feature extractors are effective, stable, and efficient.
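
    A minimal Python sketch of the linear MMC feature extractor described here, assuming the orthonormal-projection form in which the projection directions are the leading eigenvectors of Sb - Sw; the two-class Gaussian data are synthetic:

      import numpy as np

      def mmc_transform(X, y, n_components=2):
          # Maximize tr(W^T (Sb - Sw) W) with orthonormal W: take the leading
          # eigenvectors of the symmetric matrix Sb - Sw.
          classes = np.unique(y)
          mean_all = X.mean(axis=0)
          d = X.shape[1]
          Sb = np.zeros((d, d))
          Sw = np.zeros((d, d))
          for c in classes:
              Xc = X[y == c]
              mc = Xc.mean(axis=0)
              Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
              Sw += (Xc - mc).T @ (Xc - mc)
          evals, evecs = np.linalg.eigh(Sb - Sw)        # eigenvalues in ascending order
          W = evecs[:, ::-1][:, :n_components]          # top eigenvectors as projection
          return X @ W, W

      # Toy usage with two Gaussian classes in 5 dimensions.
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
      y = np.array([0] * 50 + [1] * 50)
      Z, W = mmc_transform(X, y, n_components=2)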

  7. Bilinear analysis for kernel selection and nonlinear feature extraction.

    PubMed

    Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou

    2007-09-01

    This paper presents a unified criterion, Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces, and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. This FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases.

  8. Agonistic sounds in the skunk clownfish Amphiprion akallopisos: size-related variation in acoustic features.

    PubMed

    Colleye, O; Frederich, B; Vandewalle, P; Casadevall, M; Parmentier, E

    2009-09-01

    Fourteen individuals of the skunk clownfish Amphiprion akallopisos of different sizes and of different sexual status (non-breeder, male or female) were analysed for four acoustic features. Dominant frequency and pulse duration were highly correlated with standard length (r = 0.97), and were not related to sex. Both the dominant frequency and pulse duration were signals conveying information related to the size of the emitter, which implies that these sound characteristics could be useful in assessing size of conspecifics.

  9. Feature Extraction and Selection From the Perspective of Explosive Detection

    SciTech Connect

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image summarizing the information content in an image and in the process providing an essential tool in image understanding. In particular, they are useful for image classification into pre-defined classes or grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, it can be the temperature measurement (using an infra-red camera) of the area representing the pixel or the X-ray attenuation in a given volume element of a 3-d image or it may even represent the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered as features in the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components, the Euler number (the number of connected components less the number of 'holes'), etc. Occupying an intermediate level in the feature hierarchy are texture features which are typically derived from a group of pixels often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection ( typically a piece of luggage or a brief case) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-Ray device with provisions for computed tomography (CT) that generate one or more (depending on the number of energy levels used) digitized 3

  10. Music-induced emotions can be predicted from a combination of brain activity and acoustic features.

    PubMed

    Daly, Ian; Williams, Duncan; Hallowell, James; Hwang, Faustina; Kirke, Alexis; Malik, Asad; Weaver, James; Miranda, Eduardo; Nasuto, Slawomir J

    2015-12-01

    It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly-complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted by their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p < 0.01).

  11. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  12. FeatureExtract—extraction of sequence annotation made easy

    PubMed Central

    Wernersson, Rasmus

    2005-01-01

    Work on a large number of biological problems benefits tremendously from having an easy way to access the annotation of DNA sequence features, such as intron/exon structure, the contents of promoter regions and the location of other genes in upsteam and downstream regions. For example, taking the placement of introns within a gene into account can help in a phylogenetic analysis of homologous genes. Designing experiments for investigating UTR regions using PCR or DNA microarrays require knowledge of known elements in UTR regions and the positions and strandness of other genes nearby on the chromosome. A wealth of such information is already known and documented in databases such as GenBank and the NCBI Human Genome builds. However, it usually requires significant bioinformatics skills and intimate knowledge of the data format to access this information. Presented here is a highly flexible and easy-to-use tool for extracting feature annotation from GenBank entries. The tool is also useful for extracting datasets corresponding to a particular feature (e.g. promoters). Most importantly, the output data format is highly consistent, easy to handle for the user and easy to parse computationally. The FeatureExtract web server is freely available for both academic and commercial use at . PMID:15980537
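
    A minimal Python sketch of the same kind of feature-annotation extraction using Biopython (this is not the FeatureExtract server itself); the file name "example.gb" is a hypothetical local GenBank file:

      from Bio import SeqIO

      # Iterate over GenBank records and pull out selected feature types with their sequences.
      for record in SeqIO.parse("example.gb", "genbank"):
          for feature in record.features:
              if feature.type in ("CDS", "exon", "intron"):
                  sub_seq = feature.extract(record.seq)          # spliced, strand-aware sequence
                  gene = feature.qualifiers.get("gene", ["?"])[0]
                  print(record.id, feature.type, gene, len(sub_seq))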

  13. Acoustic features of male baboon loud calls: Influences of context, age, and individuality

    NASA Astrophysics Data System (ADS)

    Fischer, Julia; Hammerschmidt, Kurt; Cheney, Dorothy L.; Seyfarth, Robert M.

    2002-03-01

    The acoustic structure of loud calls (``wahoos'') recorded from free-ranging male baboons (Papio cynocephalus ursinus) in the Moremi Game Reserve, Botswana, was examined for differences between and within contexts, using calls given in response to predators (alarm wahoos), during male contests (contest wahoos), and when a male had become separated from the group (contact wahoos). Calls were recorded from adolescent, subadult, and adult males. In addition, male alarm calls were compared with those recorded from females. Despite their superficial acoustic similarity, the analysis revealed a number of significant differences between alarm, contest, and contact wahoos. Contest wahoos are given at a much higher rate, exhibit lower frequency characteristics, have a longer ``hoo'' duration, and a relatively louder ``hoo'' portion than alarm wahoos. Contact wahoos are acoustically similar to contest wahoos, but are given at a much lower rate. Both alarm and contest wahoos also exhibit significant differences among individuals. Some of the acoustic features that vary in relation to age and sex presumably reflect differences in body size, whereas others are possibly related to male stamina and endurance. The finding that calls serving markedly different functions constitute variants of the same general call type suggests that the vocal production in nonhuman primates is evolutionarily constrained.

  14. Nonlinear feature extraction for MMW image classification: a supervised approach

    NASA Astrophysics Data System (ADS)

    Maskall, Guy T.; Webb, Andrew R.

    2002-07-01

    The specular nature of Radar imagery causes problems for ATR as small changes to the configuration of targets can result in significant changes to the resulting target signature. This adds to the challenge of constructing a classifier that is both robust to changes in target configuration and capable of generalizing to previously unseen targets. Here, we describe the application of a nonlinear Radial Basis Function (RBF) transformation to perform feature extraction on millimeter-wave (MMW) imagery of target vehicles. The features extracted were used as inputs to a nearest-neighbor classifier to obtain measures of classification performance. The training of the feature extraction stage was by way of a loss function that quantified the amount of data structure preserved in the transformation to feature space. In this paper we describe a supervised extension to the loss function and explore the value of using the supervised training process over the unsupervised approach and compare with results obtained using a supervised linear technique (Linear Discriminant Analysis --- LDA). The data used were Inverse Synthetic Aperture Radar (ISAR) images of armored vehicles gathered at 94GHz and were categorized as Armored Personnel Carrier, Main Battle Tank or Air Defense Unit. We find that the form of supervision used in this work is an advantage when the number of features used for classification is low, with the conclusion that the supervision allows information useful for discrimination between classes to be distilled into fewer features. When only one example of each class is used for training purposes, the LDA results are comparable to the RBF results. However, when an additional example is added per class, the RBF results are significantly better than those from LDA. Thus, the RBF technique seems better able to make use of the extra knowledge available to the system about variability between different examples of the same class.

  15. A Spiking Neural Network in sEMG Feature Extraction

    PubMed Central

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  16. A Comparison of Signal Enhancement Methods for Extracting Tonal Acoustic Signals

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.

    1998-01-01

    The measurement of pure tone acoustic pressure signals in the presence of masking noise, often generated by mean flow, is a continual problem in the field of passive liner duct acoustics research. In support of the Advanced Subsonic Technology Noise Reduction Program, methods were investigated for conducting measurements of advanced duct liner concepts in harsh, aeroacoustic environments. This report presents the results of a comparison study of three signal extraction methods for acquiring quality acoustic pressure measurements in the presence of broadband noise (used to simulate the effects of mean flow). The performance of each method was compared to a baseline measurement of a pure tone acoustic pressure 3 dB above a uniform, broadband noise background.
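
    A minimal Python sketch of one simple way to extract a known-frequency tone from broadband noise, using synchronous (quadrature) detection; this is an illustrative stand-in, not one of the three methods compared in the report, and the signal parameters are invented:

      import numpy as np

      def tone_amplitude(x, fs, f0):
          # Estimate the amplitude of a tone at known frequency f0 by correlating
          # the signal with quadrature reference waveforms (synchronous detection).
          t = np.arange(len(x)) / fs
          i = 2.0 * np.mean(x * np.cos(2 * np.pi * f0 * t))
          q = 2.0 * np.mean(x * np.sin(2 * np.pi * f0 * t))
          return np.hypot(i, q)

      # Toy example: 1 kHz tone whose power is about 3 dB above the broadband noise.
      fs, f0 = 25000, 1000.0
      t = np.arange(0, 1.0, 1 / fs)
      tone = 1.0 * np.sin(2 * np.pi * f0 * t)         # tone power 0.5
      noise = 0.5 * np.random.randn(t.size)            # noise power 0.25
      est = tone_amplitude(tone + noise, fs, f0)       # expected: close to 1.0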

  17. Do acoustic features of lion, Panthera leo, roars reflect sex and male condition?

    PubMed

    Pfefferle, Dana; West, Peyton M; Grinnell, Jon; Packer, Craig; Fischer, Julia

    2007-06-01

    Long distance calls function to regulate intergroup spacing, attract mating partners, and/or repel competitors. Therefore, they may not only provide information about the sex (if both sexes are calling) but also about the condition of the caller. This paper provides a description of the acoustic features of roars recorded from 18 male and 6 female lions (Panthera leo) living in the Serengeti National Park, Tanzania. After analyzing whether these roars differ between the sexes, tests were conducted to determine whether male roars function as indicators of fighting ability or condition. Accordingly, call characteristics were tested for their relation to anatomical features such as size, mane color, and mane length. Call characteristics included acoustic parameters that previously had been implied as indicators of size and fighting ability, e.g., call length, fundamental frequency, and peak frequency. The analysis revealed differences in relation to sex, which were entirely explained by variation in body size. No evidence that acoustic variables were related to male condition was found, indicating that sexual selection might only be a weak force modulating the lion's roar. Instead, lion roars may have mainly been selected to effectively advertise territorial boundaries.

  18. Feature extraction on local jet space for texture classification

    NASA Astrophysics Data System (ADS)

    Oliveira, Marcos William da Silva; da Silva, Núbia Rosa; Manzanera, Antoine; Bruno, Odemir Martinez

    2015-12-01

    This study analyzes texture pattern recognition over the local jet space with the aim of improving texture characterization. Local jets decompose the image based on partial derivatives, allowing texture feature extraction to be exploited at different levels of geometrical structure. Each local jet component evidences a different local pattern, such as flat regions, directional variations, and concavity or convexity. Subsequently, a texture descriptor is used to extract features from the 0th-, 1st-, and 2nd-derivative components. Four well-known databases (Brodatz, Vistex, Usptex and Outex) and four texture descriptors (Fourier descriptors, Gabor filters, Local Binary Pattern and Local Binary Pattern Variance) were used to validate the idea, showing in most cases an increase in the success rates.
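
    A minimal Python sketch of the idea, assuming Gaussian-derivative filters for the local jet and a Local Binary Pattern histogram computed on one derivative channel; the filter scale, quantization, and random test image are illustrative only:

      import numpy as np
      from scipy import ndimage
      from skimage.feature import local_binary_pattern

      def local_jet(img, sigma=1.5):
          # Local jet up to order 2: Gaussian-smoothed image and its partial derivatives.
          orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
          return {o: ndimage.gaussian_filter(img.astype(float), sigma, order=o) for o in orders}

      # Toy usage: LBP histogram on the first-order derivative channel d/dy.
      img = np.random.rand(128, 128)
      channel = local_jet(img)[(1, 0)]
      channel8 = np.uint8(255 * (channel - channel.min()) / (np.ptp(channel) + 1e-12))
      lbp = local_binary_pattern(channel8, P=8, R=1, method="uniform")
      hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)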

  19. Optimal feature extraction for segmentation of Diesel spray images.

    PubMed

    Payri, Francisco; Pastor, José V; Palomares, Alberto; Juliá, J Enrique

    2004-04-01

    A one-dimensional simplification, based on optimal feature extraction, of the algorithm based on the likelihood-ratio test method (LRT) for segmentation in colored Diesel spray images is presented. If the pixel values of the Diesel spray and the combustion images are represented in RGB space, in most cases they are distributed in an area with a given so-called privileged direction. It is demonstrated that this direction permits optimal feature extraction for one-dimensional segmentation in the Diesel spray images, and some of its advantages compared with more-conventional one-dimensional simplification methods, including considerably reduced computational cost while accuracy is maintained within more than reasonable limits, are presented. The method has been successfully applied to images of Diesel sprays injected at room temperature as well as to images of sprays with evaporation and combustion. It has proved to be valid for several cameras and experimental arrangements.
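
    A minimal Python sketch of the one-dimensional simplification described here, assuming the privileged direction is taken as the leading principal direction of the RGB pixel cloud and that a single threshold on the projected values separates spray from background; the random image and threshold are placeholders:

      import numpy as np

      def privileged_direction_segmentation(rgb_img, threshold=0.0):
          # Project every RGB pixel onto the leading principal direction of the
          # pixel cloud, then threshold the resulting scalar image.
          pixels = rgb_img.reshape(-1, 3).astype(float)
          centered = pixels - pixels.mean(axis=0)
          cov = centered.T @ centered / len(centered)   # 3x3 covariance of the pixel cloud
          evals, evecs = np.linalg.eigh(cov)
          direction = evecs[:, -1]                      # eigenvector of the largest eigenvalue
          scalar = (centered @ direction).reshape(rgb_img.shape[:2])
          return scalar > threshold

      # Toy usage on a random image standing in for a spray frame.
      img = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
      mask = privileged_direction_segmentation(img)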

  20. A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data

    PubMed Central

    Hira, Zena M.; Gillies, Duncan F.

    2015-01-01

    We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and they are being widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition the complicated relations among the different genes make analysis more difficult and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources. PMID:26170834

  1. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.

  2. Features and Ground Automatic Extraction from Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Costantino, D.; Angelini, M. G.

    2011-09-01

    The aim of the research has been to develop and implement an algorithm for automated extraction of features from LIDAR scenes with varying terrain and coverage types. The algorithm applies the third-order moment (skewness) and the fourth-order moment (kurtosis). While the former is applied to produce an initial filtering and data classification, the latter, through the introduction of weights on the measures, provides the desired result: a finer and less noisy classification. The process has been carried out in Matlab, but to reduce processing time, given the large data density, the analysis has been limited to a moving window. Subscenes were therefore produced in order to cover the entire area. The performance of the algorithm confirms its robustness and the quality of its results. Employing effective processing strategies to improve automation is key to the implementation of this algorithm. The results of this work will serve the increasing demand for automation in 3D information extraction from large remotely sensed datasets. After obtaining the geometric features from the LiDAR data, we intend to complete the research by creating an algorithm for feature vectorization and DTM extraction.
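
    A minimal Python sketch of the moving-window moment computation on a gridded elevation surface, assuming per-window skewness and kurtosis maps and a simple skewness threshold to flag non-ground windows; the grid, window size, and threshold are illustrative only:

      import numpy as np
      from scipy.stats import skew, kurtosis
      from scipy.ndimage import generic_filter

      def moment_maps(dsm, window=5):
          # Per-pixel skewness and kurtosis of elevations in a moving window.
          sk = generic_filter(dsm, lambda v: skew(v), size=window)
          ku = generic_filter(dsm, lambda v: kurtosis(v), size=window)
          return sk, ku

      # Toy gridded elevation surface: a raised block ("building") on noisy flat terrain.
      dsm = np.zeros((60, 60))
      dsm[20:35, 25:45] += 8.0
      dsm += 0.05 * np.random.randn(*dsm.shape)
      sk_map, ku_map = moment_maps(dsm)
      candidate_objects = np.abs(sk_map) > 1.0   # high skewness flags mixed ground/object windows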

  3. Automated feature extraction for 3-dimensional point clouds

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of highfidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.

  4. Prenatal features of Pena-Shokeir sequence with atypical response to acoustic stimulation.

    PubMed

    Pittyanont, Sirida; Jatavan, Phudit; Suwansirikul, Songkiat; Tongsong, Theera

    2016-09-01

    A fetal sonographic screening examination performed at 23 weeks showed polyhydramnios, micrognathia, fixed postures of all long bones, but no movement and no breathing. The fetus showed fetal heart rate acceleration but no movement when acoustic stimulation was applied with artificial larynx. All these findings persisted on serial examinations. The neonate was stillborn at 37 weeks and a final diagnosis of Pena-Shokeir sequence was made. In addition to typical sonographic features of Pena-Shokeir sequence, fetal heart rate accelerations with no movement in response to acoustic stimulation suggests that peripheral myopathy may possibly play an important role in the pathogenesis of the disease. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:459-462, 2016.

  5. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618
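
    A minimal Python sketch of the mention-level classification step, assuming bag-of-words context features and logistic regression as a stand-in for the maximum entropy model (the paper's richer linguistic and MeSH hypernym features are not reproduced); the instances and labels are invented:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Hypothetical mention-pair instances: context around a chemical (CHEM) and a
      # disease (DIS) mention, labelled 1 if the text asserts a CID relation.
      contexts = [
          "CHEM induced severe DIS in treated patients",
          "CHEM was administered after DIS was diagnosed",
          "DIS developed following exposure to CHEM",
          "CHEM had no effect on DIS incidence",
      ]
      labels = [1, 0, 1, 0]

      # Multinomial logistic regression is the usual stand-in for a maximum entropy
      # classifier over sparse lexical features.
      model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(contexts, labels)
      print(model.predict(["DIS was caused by CHEM"]))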

  6. Chemical-induced disease relation extraction with various linguistic features.

    PubMed

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask.

  7. Effect of train type on annoyance and acoustic features of the rolling noise.

    PubMed

    Kasess, Christian H; Noll, Anton; Majdak, Piotr; Waubke, Holger

    2013-08-01

    This study investigated the annoyance associated with the rolling noise of different railway stock. Passbys of nine train types (passenger and freight trains) equipped with different braking systems were recorded. Acoustic features showed a clear distinction of the braking system with the A-weighted energy equivalent sound level (LAeq) showing a difference in the range of 10 dB between cast-iron braked trains and trains with disk or K-block brakes. Further, annoyance was evaluated in a psychoacoustic experiment where listeners rated the relative annoyance of the rolling noise for the different train types. Stimuli with and without the original LAeq differences were tested. For the original LAeq differences, the braking system significantly affected the annoyance with cast-iron brakes being most annoying, most likely as a consequence of the increased wheel roughness causing an increased LAeq. Contribution of the acoustic features to the annoyance was investigated revealing that the LAeq explained up to 94% of the variance. For the stimuli without differences in the LAeq, cast-iron braked train types were significantly less annoying and the spectral features explained up to 60% of the variance in the annoyance. The effect of these spectral features on the annoyance of the rolling noise is discussed.

  8. Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech.

    PubMed

    Khalighinejad, Bahar; Cruzatto da Silva, Guilherme; Mesgarani, Nima

    2017-02-22

    Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders.SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for

  9. Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech

    PubMed Central

    Khalighinejad, Bahar; Cruzatto da Silva, Guilherme

    2017-01-01

    Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for

  10. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. Eyes and eyebrows are the main points of attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This research is indispensable fundamental work on race perception, which is essential for the establishment of a human-like race recognition system.

  11. Pattern recognition and feature extraction with an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel

    2016-09-01

    Pattern recognition and localization, along with feature extraction, are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for the recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements for this mapping make the optical implementation an attractive alternative to digital-only methods. Starting from the integral representation of the GHT, it is possible to devise an optical setup where the transformation is obtained, and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a rotating pupil mask for orientation variation, implemented on a high-contrast spatial light modulator (SLM). Real-time operation (as limited by the frame rate of the device used to capture the GHT) can also be achieved, allowing for the processing of video sequences. In addition, by thresholding of the GHT (with the aid of another SLM) and inverse transforming (which is optically achieved in the incoherent system under an appropriate focusing setting), the previously detected features of interest can be extracted.

  12. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    NASA Astrophysics Data System (ADS)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-time concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project utilizes remote sensing technologies to determine the lives in probable danger by mapping and attributing building features using LiDAR datasets and satellite imagery. A free mapping software named Google Earth Pro (GEP) is used to load these satellite imageries as base maps. Geotagging of building features has so far been done with the use of handheld Global Positioning System (GPS) receivers. Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging by GEP depends on either the satellite imageries or the half-meter-resolution orthophotograph images obtained during LiDAR acquisition, rather than on GPS with three-meter accuracy. The attributed building features are overlain on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imageries may be used not only in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imageries.

  13. The optimal extraction of feature algorithm based on KAZE

    NASA Astrophysics Data System (ADS)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    As a novel 2D feature extraction algorithm operating over a nonlinear scale space, KAZE provides a distinctive approach. However, the computation of the nonlinear scale space and the construction of KAZE feature vectors are significantly more expensive than those of SIFT and SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable-conductance diffusion. Adjusting the parameters improves the construction of the nonlinear scale space and simplifies the image conductivities for each scale level, reducing the computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the description of feature vectors is optimized with the wavelet transform, which avoids the second Gaussian smoothing used in KAZE features and distinctly reduces the complexity of the algorithm in the vector building and description steps. In this way, the dominant orientation is obtained, similarly to SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in a circular area of radius 6σ, with a sampling step of size σ. Finally, the description of a multidimensional patch at the given scale, centered over the point of interest and rotated to align its dominant orientation to a canonical direction, is simplified by reducing the description dimensions, as in the PCA-SIFT method. Even though the features are somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, the contrast experiments show a step forward in detection, description and application performance compared to SURF and the previous methods.
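
    As a point of reference for the nonlinear-scale-space pipeline summarized above, the sketch below runs a minimal baseline that detects and describes KAZE keypoints with OpenCV's stock implementation rather than the modified variant proposed in this record; the file name sample.png and the parameter values are placeholders.

      import cv2  # OpenCV ships a reference KAZE implementation

      # Load a grayscale test image (placeholder file name).
      img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)

      # Stock KAZE detector/descriptor over a nonlinear (diffusion) scale space.
      kaze = cv2.KAZE_create(threshold=0.001, nOctaves=4, nOctaveLayers=4)
      keypoints, descriptors = kaze.detectAndCompute(img, None)

      if descriptors is not None:
          print(len(keypoints), "keypoints,", descriptors.shape[1], "descriptor dimensions")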

  14. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears differently depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if a good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.

  15. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by localized faults are important measurement information for bearing fault diagnosis. It is thus crucial to extract the transients from bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed based on the TF distribution (TFD) of Morlet wavelet bases, and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are correlated with the TFD of the analyzed signal along the IF ridge tube to identify the optimum parameters of the transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of matching the native pulse waveform structure of transients. The effectiveness of the proposed method is verified on practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.
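
    As an illustrative fragment of the template-matching idea, and not the authors' full iterative TF-domain algorithm, the sketch below correlates a vibration signal with complex Morlet atoms at a few candidate frequencies and keeps the best-matching atom and lag; the sampling rate, frequency grid and random stand-in signal are assumptions for illustration.

      import numpy as np

      def morlet_atom(fc, fs, n_cycles=5, bandwidth=1.0):
          """Complex Morlet atom centred at frequency fc (Hz), sampled at fs (Hz)."""
          sigma_t = n_cycles / (2 * np.pi * fc) * bandwidth
          t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
          atom = np.exp(2j * np.pi * fc * t) * np.exp(-t**2 / (2 * sigma_t**2))
          return atom / np.linalg.norm(atom)

      fs = 20000.0                                   # assumed sampling rate
      t = np.arange(0, 0.2, 1.0 / fs)
      signal = np.random.randn(t.size)               # stand-in for a measured bearing signal

      # Correlate the signal with atoms on a small candidate-frequency grid and
      # keep the atom/lag pair with the strongest response (a single matching step).
      best = (0.0, None, None)
      for fc in (2000.0, 3000.0, 4000.0):
          response = np.abs(np.correlate(signal, morlet_atom(fc, fs), mode="same"))
          lag = int(np.argmax(response))
          if response[lag] > best[0]:
              best = (response[lag], fc, lag)

      print("best atom: %.0f Hz at sample %d (score %.3f)" % (best[1], best[2], best[0]))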

  16. System and method for investigating sub-surface features of a rock formation with acoustic sources generating coded signals

    SciTech Connect

    Vu, Cung Khac; Nihei, Kurt; Johnson, Paul A; Guyer, Robert; Ten Cate, James A; Le Bas, Pierre-Yves; Larmat, Carene S

    2014-12-30

    A system and a method for investigating rock formations include generating, by a first acoustic source, a first acoustic signal comprising a first plurality of pulses, each pulse including a first modulated signal at a central frequency, and generating, by a second acoustic source, a second acoustic signal comprising a second plurality of pulses. A receiver arranged within the borehole receives a detected signal including a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within the intersection volume. The method also includes processing the received signal to extract the signal generated by the non-linear mixing process over noise, over signals generated by a linear interaction process, or both.

  17. An Improved Approach of Mesh Segmentation to Extract Feature Regions

    PubMed Central

    Gu, Minghui; Duan, Liming; Wang, Maolin; Bai, Yang; Shao, Hui; Wang, Haoyu; Liu, Fenglin

    2015-01-01

    The objective of this paper is to extract concave and convex feature regions by segmenting the surface mesh of a mechanical part whose surface geometry exhibits drastic variations and for which concave and convex features are equally important when modeling. Referring to the original approach based on the minima rule (MR) in cognitive science, we have created a revised minima rule (RMR) and present an improved approach based on RMR in this paper. Using a logarithmic function of the minimum curvatures, normalized by the expectation and standard deviation over the vertices of the mesh, we determined the solution formulas for the feature vertices according to RMR. Because the threshold parameters could only be selected from a small range in the determined formulas, an iterative process was implemented to realize the automatic selection of thresholds. Finally, according to the obtained feature vertices, the feature edges and facets were obtained by growing neighbors. The improved approach overcomes the inherent inadequacies of the original approach for our objective, realizes full automation without setting parameters, and obtains better results compared with the latest conventional approaches. We demonstrated the feasibility and superiority of our approach through experimental comparisons. PMID:26436657

  18. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight- line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).

  19. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique that is used to identify individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. As iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, in contrast to fingerprints, which can be altered by several factors including accidental damage, dry or oily skin and dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its arduous requirements, such as camera resolution, hardware size, expensive equipment and computational complexity. However, at the present time, technology has overcome these obstacles. Iris recognition can be done through several sequential steps which include pre-processing, feature extraction, post-processing, and a matching stage. In this paper, we adopted a directional high-low pass filter for feature extraction. A box-counting fractal dimension and an iris code have been proposed as feature representations. Our approach has been tested on the CASIA Iris Image database and the results are considered successful.
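
    As a rough sketch of one of the feature representations mentioned in this record, the function below estimates a box-counting fractal dimension for a binary (thresholded) iris texture array; the thresholding step, box sizes and random input are assumptions for illustration, not the authors' configuration.

      import numpy as np

      def box_counting_dimension(binary, box_sizes=(2, 4, 8, 16, 32)):
          """Estimate the box-counting fractal dimension of a 2-D binary array."""
          counts = []
          for s in box_sizes:
              # Trim so the image tiles exactly into s-by-s boxes.
              h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
              trimmed = binary[:h, :w]
              # Count boxes that contain at least one foreground pixel.
              boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
              counts.append(boxes.sum())
          # Slope of log(count) against log(1/size) is the dimension estimate.
          slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
          return slope

      texture = np.random.rand(128, 128) > 0.7       # stand-in for a filtered iris patch
      print("fractal dimension ~ %.2f" % box_counting_dimension(texture))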

  20. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that obtained with the original features and with a traditional feature extraction method.
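
    The record does not give implementation details, but a minimal feature-selection loop in the same spirit can be sketched with an off-the-shelf differential evolution optimizer wrapping a linear discriminant classifier; the dataset, the 0.5 selection threshold and the small evaluation budget below are illustrative assumptions, not the authors' configuration.

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.datasets import load_breast_cancer
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      X, y = load_breast_cancer(return_X_y=True)

      def objective(weights):
          # Interpret continuous DE variables as a feature mask (>0.5 keeps the feature).
          mask = weights > 0.5
          if not mask.any():
              return 1.0                               # penalize empty feature subsets
          acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, mask], y, cv=10).mean()
          return 1.0 - acc                             # DE minimizes, so use the error rate

      # Small budget so the example stays quick; a real run would use more iterations.
      result = differential_evolution(objective, bounds=[(0, 1)] * X.shape[1],
                                      maxiter=10, popsize=5, seed=0, polish=False)
      selected = np.flatnonzero(result.x > 0.5)
      print("selected features:", selected, "CV accuracy: %.3f" % (1.0 - result.fun))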

  1. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    NASA Astrophysics Data System (ADS)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and technology has had a major impact, providing a new kind of business called e-commerce. Many e-commerce sites provide convenience in transactions, and consumers can also provide reviews or opinions on the products they purchased. These opinions can be used by both consumers and producers: consumers to learn the advantages and disadvantages of particular features of a product, and producers to analyse their own strengths and weaknesses as well as those of competitors' products. Because there are many opinions, a method is needed that lets the reader grasp the point of the opinions as a whole. The idea emerged from review summarization, which summarizes overall opinion based on the sentiment and features it contains. In this study, the main focus domain is digital cameras. This research consisted of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the results. This research discusses methods such as Naïve Bayes for sentiment classification, a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), and a knowledge-based dictionary, which is useful for handling implicit features. The end result of the research is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, the accuracy of sentiment classification is 81.2% for positive test data and 80.2% for negative test data, and the accuracy of feature extraction reaches 90.3%.
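
    For the sentiment-classification step named in this record, a minimal Naïve Bayes baseline can be sketched with scikit-learn; the tiny in-line review snippets and labels are made-up placeholders, not the study's digital-camera corpus, and the feature-extraction and dictionary components are not covered here.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Toy labelled review snippets (placeholders for a real digital-camera corpus).
      reviews = ["the lens is sharp and the battery lasts long",
                 "autofocus is fast and colors look great",
                 "battery drains quickly and the flash is weak",
                 "images are noisy and the zoom is disappointing"]
      labels = ["positive", "positive", "negative", "negative"]

      # Bag-of-words features feeding a multinomial Naïve Bayes classifier.
      model = make_pipeline(CountVectorizer(), MultinomialNB())
      model.fit(reviews, labels)

      print(model.predict(["the zoom is great but the battery is weak"]))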

  2. Automatic Multimode Guided Wave Feature Extraction Using Wavelet Fingerprints

    NASA Astrophysics Data System (ADS)

    Bingham, J. P.; Hinders, M. K.

    2010-02-01

    The development of automatic guided wave interpretation for detecting corrosion in aluminum aircraft structural stringers is described. The dynamic wavelet fingerprint technique (DWFP) is used to render the guided wave mode information in two-dimensional binary images. Automatic algorithms then extract DWFP features that correspond to the distorted arrival times of the guided wave modes of interest, which give insight into changes of the structure in the propagation path. To better understand how the guided wave modes propagate through real structures, parallel-processing elastic wave simulations using the elastodynamic finite integration technique (EFIT) have been performed. 3D simulations are used to examine models too complex for analytical solutions. They produce informative visualizations of the guided wave modes in the structures, and mimic the output from sensors placed in the simulation space. Using the previously developed mode extraction algorithms, the 3D EFIT results are compared directly to their experimental counterparts.

  3. Features extraction from the electrocatalytic gas sensor responses

    NASA Astrophysics Data System (ADS)

    Kalinowski, Paweł; Woźniak, Łukasz; Stachowiak, Maria; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    One of the types of gas sensors used for detection and identification of toxic air pollutants is the electrocatalytic gas sensor. Electrocatalytic sensors, working in cyclic voltammetry mode, enable detection of various gases. Their responses are in the form of I-V curves, which contain information about the type and the concentration of the measured volatile compound. However, additional analysis is required to provide efficient recognition of the target gas. Multivariate data analysis and pattern recognition methods have proven to be useful tools for such applications, but further investigations on improving the processing of the sensor responses are required. In this article a method for extracting parameters from the electrocatalytic sensor responses is presented. The extracted features enable a significant reduction of data dimension without loss of efficiency in recognizing four volatile air pollutants, namely nitrogen dioxide, ammonia, hydrogen sulfide and sulfur dioxide.

  4. Neural network based feature extraction scheme for heart rate variability

    NASA Astrophysics Data System (ADS)

    Raymond, Ben; Nandagopal, Doraisamy; Mazumdar, Jagan; Taverner, D.

    1995-04-01

    Neural networks are extensively used in solving a wide range of pattern recognition problems in signal processing. The accuracy of pattern recognition depends to a large extent on the quality of the features extracted from the signal. We present a neural network capable of extracting the autoregressive parameters of a cardiac signal known as heart rate variability (HRV). Frequency-specific oscillations in the HRV signal represent heart rate regulatory activity and hence cardiovascular function. Continual monitoring and tracking of the HRV data over a period of time will provide valuable diagnostic information. We give an example of the network applied to a short HRV signal and demonstrate the tracking performance of the network with a single sinusoid embedded in white noise.

  5. Feature Extraction from Subband Brain Signals and Its Classification

    NASA Astrophysics Data System (ADS)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers both non-stationarity and independence/uncorrelatedness criteria, along with the asymmetry ratio, over electroencephalogram (EEG) signals, and proposes a hybrid approach to signal preprocessing before feature extraction. A filter-bank approach based on the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals; it decomposes the raw EEG signals into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics-based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the orthogonality and orthonormality criteria on the whitening and separating matrices, respectively, no longer hold. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between the two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes provides the condition to select an appropriate multiplier. We therefore modify the estimated separating matrix by an appropriate multiplier in order to obtain the required asymmetry, extending the AMUSE algorithm to the subband domain. The desired subband is then subjected to the updated separating matrix to extract subband sub-components for each class. The extracted subband sub-component sources are further subjected to a feature extraction step (power spectral density) followed by linear discriminant analysis (LDA).
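
    For the DWT filter-bank stage described here, a minimal subband decomposition can be sketched with the PyWavelets package; the synthetic signal, sampling rate, wavelet choice ('db4') and decomposition depth are illustrative assumptions rather than the paper's settings.

      import numpy as np
      import pywt

      fs = 256.0                                    # assumed EEG sampling rate (Hz)
      t = np.arange(0, 4, 1.0 / fs)
      eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # stand-in channel

      # Five-level DWT splits the signal into progressively narrower subbands
      # (roughly the delta/theta/alpha/beta/gamma ranges at this sampling rate).
      coeffs = pywt.wavedec(eeg, wavelet="db4", level=5)

      for name, c in zip(["A5", "D5", "D4", "D3", "D2", "D1"], coeffs):
          print("%s: %d coefficients, power %.3f" % (name, len(c), np.mean(c ** 2)))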

  6. Feature Extraction and Analysis of Breast Cancer Specimen

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests, if necessary. We compare normal breast tissue with malignant invasive breast tissue by a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison in this paper. In fact, features of cancerous (invasive) breast tissue are extracted and analysed against normal breast tissue. We also suggest a breast cancer recognition technique through image processing, and prevention by controlling p53 gene mutation to some extent.

  7. Texture Feature Extraction and Classification for Iris Diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques in iris image processing and combining occidental iridology with traditional Chinese medicine is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis and disease classification. For the pre-processing, a 2-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally support vector machines are constructed to recognize 2 typical diseases, namely alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance for both hospital and public use.
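
    For the 2-D Gabor texture-analysis stage mentioned here, a minimal filter-bank response can be sketched with OpenCV's Gabor kernels; the random input array, kernel size and parameter grid below are illustrative assumptions, not the paper's configuration.

      import cv2
      import numpy as np

      # Stand-in for a normalized, unwrapped iris texture patch.
      iris_patch = (np.random.rand(64, 256) * 255).astype(np.uint8)

      features = []
      # Small bank of Gabor kernels over four orientations and two wavelengths.
      for theta in np.arange(0, np.pi, np.pi / 4):
          for lambd in (8.0, 16.0):
              kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                          lambd=lambd, gamma=0.5, psi=0)
              response = cv2.filter2D(iris_patch, cv2.CV_32F, kernel)
              # Mean and variance of each filter response serve as texture features.
              features.extend([response.mean(), response.var()])

      print("feature vector length:", len(features))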

  8. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system allowing lane detection for marked urban roads and analysis of their features. The task is to relate the georeferencing of road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects existing on the road, the present algorithm enables us to examine these images automatically and rapidly and also to obtain information on road markings, their surface conditions, and their georeferencing. This algorithm allows the detection of all road markings and the identification of some of them by making use of a phase-only correlation filter (POF). We illustrate this algorithm and its robustness by applying it to a variety of relevant scenarios.

  9. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting relational information between static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which has proven useful in many works. To get a compact and discriminative codebook with small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. Then we use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results. PMID:27656199
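
    The codebook construction in this record uses a KL-divergence-based divisive algorithm; purely as a structural illustration of the BoW representation it feeds into, the sketch below substitutes ordinary k-means for the codebook step, with random vectors standing in for local video descriptors.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      descriptors = rng.normal(size=(2000, 64))       # stand-in for local video descriptors

      # Codebook step (k-means here; the paper uses a KL-divergence divisive algorithm).
      codebook = KMeans(n_clusters=50, n_init=10, random_state=0).fit(descriptors)

      def bow_histogram(video_descriptors, codebook):
          """Quantize a video's descriptors against the codebook and normalize the counts."""
          words = codebook.predict(video_descriptors)
          hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
          return hist / hist.sum()

      one_video = rng.normal(size=(300, 64))           # descriptors from one hypothetical clip
      print(bow_histogram(one_video, codebook)[:10])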

  10. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    NASA Astrophysics Data System (ADS)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of abnormal acoustic signals, exploiting the waveform similarity of the faulty components. The method uses the sample points at which the abnormal sound first appears as the starting position for each segment. In this study, the abnormal sound belonged to the shock fault type; thus, a starting-position search method based on gradient variance was adopted. A coefficient of the degree of similarity between two equally sized signals is presented. By comparison with this similarity degree, the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and the extracted component can be used to identify the fault type.

  11. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security, anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aiming at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. The 1D and 2D approaches cannot address the wide ranges of both types and complexities of bedforms. In contrast, this work attempts to follow a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples that are found to heavily overprint any observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  12. Deep PDF parsing to extract features for detecting embedded malware.

    SciTech Connect

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF Files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  13. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083
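
    Framework details aside, the kind of maximum entropy model described here is equivalent to a multinomial logistic regression over indicator features; the sketch below shows that correspondence with scikit-learn, where the toy per-word feature dictionaries (placeholder supertag names and quantized prosodic bins) and labels are illustrative stand-ins for the ToBI-annotated corpora.

      from sklearn.feature_extraction import DictVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy per-word feature dictionaries (placeholders for supertag and quantized
      # acoustic-prosodic features) with pitch-accent labels.
      samples = [{"pos": "NN", "supertag": "A_NXN", "f0_bin": "high", "energy_bin": "mid"},
                 {"pos": "DT", "supertag": "B_Dnx", "f0_bin": "low", "energy_bin": "low"},
                 {"pos": "VB", "supertag": "A_nx0V", "f0_bin": "high", "energy_bin": "high"},
                 {"pos": "IN", "supertag": "B_Pnx", "f0_bin": "low", "energy_bin": "low"}]
      labels = ["accented", "unaccented", "accented", "unaccented"]

      # A maximum entropy classifier corresponds to (multinomial) logistic regression
      # trained on the indicator features produced by the DictVectorizer.
      model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
      model.fit(samples, labels)

      print(model.predict([{"pos": "NN", "supertag": "A_NXN",
                            "f0_bin": "high", "energy_bin": "high"}]))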

  14. Exploring bubble oscillation and mass transfer enhancement in acoustic-assisted liquid-liquid extraction with a microfluidic device

    PubMed Central

    Xie, Yuliang; Chindam, Chandraprakash; Nama, Nitesh; Yang, Shikuan; Lu, Mengqian; Zhao, Yanhui; Mai, John D.; Costanzo, Francesco; Huang, Tony Jun

    2015-01-01

    We investigated bubble oscillation and its induced enhancement of mass transfer in a liquid-liquid extraction process with an acoustically-driven, bubble-based microfluidic device. The oscillation of individually trapped bubbles, of known sizes, in microchannels was studied at both a fixed frequency, and over a range of frequencies. Resonant frequencies were analytically identified and were found to be in agreement with the experimental observations. The acoustic streaming induced by the bubble oscillation was identified as the cause of this enhanced extraction. Experiments extracting Rhodamine B from an aqueous phase (DI water) to an organic phase (1-octanol) were performed to determine the relationship between extraction efficiency and applied acoustic power. The enhanced efficiency in mass transport via these acoustic-energy-assisted processes was confirmed by comparisons against a pure diffusion-based process. PMID:26223474

  15. Exploring bubble oscillation and mass transfer enhancement in acoustic-assisted liquid-liquid extraction with a microfluidic device

    NASA Astrophysics Data System (ADS)

    Xie, Yuliang; Chindam, Chandraprakash; Nama, Nitesh; Yang, Shikuan; Lu, Mengqian; Zhao, Yanhui; Mai, John D.; Costanzo, Francesco; Huang, Tony Jun

    2015-07-01

    We investigated bubble oscillation and its induced enhancement of mass transfer in a liquid-liquid extraction process with an acoustically-driven, bubble-based microfluidic device. The oscillation of individually trapped bubbles, of known sizes, in microchannels was studied at both a fixed frequency, and over a range of frequencies. Resonant frequencies were analytically identified and were found to be in agreement with the experimental observations. The acoustic streaming induced by the bubble oscillation was identified as the cause of this enhanced extraction. Experiments extracting Rhodamine B from an aqueous phase (DI water) to an organic phase (1-octanol) were performed to determine the relationship between extraction efficiency and applied acoustic power. The enhanced efficiency in mass transport via these acoustic-energy-assisted processes was confirmed by comparisons against a pure diffusion-based process.

  16. A bio-inspired feature extraction for robust speech recognition.

    PubMed

    Zouhir, Youssef; Ouni, Kaïs

    2014-01-01

    In this paper, a feature extraction method for robust speech recognition in noisy environments is proposed. The proposed method is motivated by a biologically inspired auditory model which simulates the outer/middle ear filtering by a low-pass filter and the spectral behaviour of the cochlea by the Gammachirp auditory filterbank (GcFB). The speech recognition performance of our method is tested on speech signals corrupted by real-world noises. The evaluation results show that the proposed method gives better recognition rates compared to classic techniques such as Perceptual Linear Prediction (PLP), Linear Predictive Coding (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Mel Frequency Cepstral Coefficients (MFCC). The recognition system used is based on Hidden Markov Models with continuous Gaussian mixture densities (HMM-GM).
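
    As a reference point for the classic MFCC baseline this record compares against (not the proposed Gammachirp front end), MFCC features can be extracted with the librosa library as sketched below; the audio file name and parameter values are placeholders.

      import librosa
      import numpy as np

      # Load a speech utterance (placeholder path), resampled to 16 kHz.
      y, sr = librosa.load("utterance.wav", sr=16000)

      # 13 MFCCs per 25 ms frame with a 10 ms hop, a common baseline configuration.
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                  n_fft=int(0.025 * sr), hop_length=int(0.010 * sr))

      # Append delta features, as is typical for HMM-based recognizers.
      features = np.vstack([mfcc, librosa.feature.delta(mfcc)])
      print("feature matrix shape:", features.shape)   # (26, number_of_frames)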

  17. Extracting autofluorescence spectral features for diagnosis of nasopharyngeal carcinoma

    NASA Astrophysics Data System (ADS)

    Lin, L. S.; Yang, F. W.; Xie, S. S.

    2012-09-01

    The aim of this study is to investigate the autofluorescence spectral characteristics of normal and cancerous nasopharyngeal tissues and to extract potential spectral features for diagnosis of nasopharyngeal carcinoma (NPC). The autofluorescence excitation-emission matrices (EEM) of 37 normal and 34 cancerous nasopharyngeal tissues were recorded by an FLS920 spectrofluorimeter system in vitro. Based on the alteration in the proportions of collagen and NAD(P)H, the integrated fluorescence intensities I 455 ± 10 nm and I 380 ± 10 nm were used to calculate ratio values by a two-peak ratio algorithm to diagnose NPC tissues under 340 nm excitation. Furthermore, by applying the receiver operating characteristic (ROC) curve, the 340 nm excitation yielded an average sensitivity and specificity of 88.2 and 91.9%, respectively. These results may have practical implications for the diagnosis of NPC.

  18. Crown Features Extraction from Low Altitude AVIRIS Data

    NASA Astrophysics Data System (ADS)

    Ogunjemiyo, S. O.; Roberts, D.; Ustin, S.

    2005-12-01

    Automated tree recognition and crown delineation are computer-assisted procedures for identifying individual trees and segmenting their crown boundaries on digital imagery. The success of the procedures is dependent on the quality of the image data and the physiognomy of the stand, as evidenced by previous studies, which have all used data with spatial resolution less than 1 m and average crown diameter to pixel size ratios greater than 4. In this study we explored the prospect of identifying individual tree species and extracting crown features from low altitude AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) data with a spatial resolution of 4 m. The test site is a Douglas-fir and Western hemlock dominated old-growth conifer forest in the Pacific Northwest with an average crown diameter of 12 m, which translates to a crown diameter to pixel size ratio of less than 4, the lowest value ever used in similar studies. The analysis was carried out using AVIRIS reflectance imagery in the NIR band centered at the 885 nm wavelength. The analysis required spatial filtering of the reflectance imagery followed by application of a tree identification algorithm based on a maximum filter technique. For every identified tree location a crown polygon was delineated by applying a crown segmentation algorithm. Each polygon boundary was characterized by a loop connecting pixels that were geometrically determined to define the crown boundary. Crown features were extracted based on the area covered by the polygons, and they include crown diameters, average distance between crowns, species spectra, pixel brightness at the identified tree locations, average brightness of pixels enclosed by the crown boundary and within-crown variation in pixel brightness. Comparison of the results with ground reference data showed a high correlation between the two datasets and highlights the potential of low altitude AVIRIS data to provide the means to improve forest management and practices and estimates of critical

  19. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept for a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear those less blurred prints. The de-smear algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology which uses a 2D STAR mother wavelet that can efficiently locate the fork feature anywhere on the fingerprint in parallel and is independent of its scale, shift, and rotation. Such a combined system can achieve high data compression for transmission through a binary facsimile machine that, when combined with a tabletop computer, can support automated fingerprint identification systems (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given about how to reduce the crime rate through an upgrade of today's police office technology in the light of military expertise in ATR.

  20. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements.

  1. An investigation of sex differences in acoustic features in black-capped chickadee (Poecile atricapillus) chick-a-dee calls.

    PubMed

    Campbell, Kimberley A; Hahn, Allison H; Congdon, Jenna V; Sturdy, Christopher B

    2016-09-01

    Sex differences have been identified in a number of black-capped chickadee vocalizations and in the chick-a-dee calls of other chickadee species [i.e., Carolina chickadees (Poecile carolinensis)]. In the current study, 12 acoustic features in black-capped chickadee chick-a-dee calls were investigated, including both frequency and duration measurements. Using permuted discriminant function analyses, these features were examined to determine if any features could be used to identify the sex of the caller. Only one note type (A notes) classified male and female calls at levels approaching significance. In particular, a permuted discriminant function analysis revealed that the start frequency of A notes best allowed for categorization between the sexes compared to any other acoustic parameter. This finding is consistent with previous research on Carolina chickadee chick-a-dee calls that found that the starting frequency differed between male- and female-produced A notes [Freeberg, Lucas, and Clucas (2003). J. Acoust. Soc. Am. 113, 2127-2136]. Taken together, these results and the results of studies with other chickadee species suggest that sex differences likely exist in the chick-a-dee call, specifically acoustic features in A notes, but that more complex features than those addressed here may be associated with the sex of the caller.

  2. Optimized shift-invariant wavelet packet feature extraction for electroencephalographic evoked responses.

    PubMed

    Harris, Arief R; Schwerdtfeger, Karsten; Strauss, Daniel J

    2008-01-01

    Local discriminant bases (LDB) have a major disadvantage in their representation, which is sensitive to signal translations. The discriminant features will not be consistent when the same but shifted signal is applied. To overcome this problem, an approximate shift-invariant feature extraction based on local discriminant bases is introduced. This technique is based on an approximate shift-invariant wavelet packet decomposition which integrates a cost function for the decimation decision in each subband expansion. This technique gives a consistent best-tree selection in both top-down and bottom-up search methods. It also provides a consistent wavelet shape in a shape-adapted wavelet method to determine the best wavelet library for a particular signal. This method is advantageous especially in electroencephalographic (EEG) measurements, in which there is an inter-individual shift in time of the signals. An application of this method is provided by the discrimination between signals with transcranial magnetic stimulation (TMS) and acoustic-somatosensory stimulation (ASS).

  3. Acoustic Target Classification Using Multiscale Methods

    DTIC Science & Technology

    1998-01-01

    ... other vehicular activities well, because it represents dominant spectral peaks better than a short-time Fourier transform. In the wavelet-transform-based ... approach, multiscale features are obtained with a wavelet transform. Multiscale classification methods were applied to acoustic data collected at ... This study considers the classification of acoustic signatures using features extracted at multiple scales from hierarchical models and a wavelet transform.

  4. Acoustics

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.; Grosveld, Ferdinand

    2007-01-01

    The acoustics environment in space operations is important to maintain at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, hear what is going on in the environment, degrade crew performance and operations, and create habitability concerns. Superfluous noise emissions can also create the inability to hear alarms or other important auditory cues such as an equipment malfunctioning. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustics environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, the "designing in" of acoustics in the development of hardware and systems, and by monitoring, testing and verifying the levels to ensure that they are acceptable.

  5. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale able to satisfy both intra-site (excavation) and inter-site (survey, environmental) research. The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI SW module, were used in order to compare the results. These techniques were
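
    To make the morphological machinery described above concrete, the sketch below applies a grey-level opening and a top-hat residue with a disk-like structuring element using SciPy; the synthetic input and the 7x7 element size are illustrative assumptions, not the study's choices.

      import numpy as np
      from scipy import ndimage

      # Stand-in for a panchromatic VHR image patch (values in [0, 255]).
      rng = np.random.default_rng(1)
      image = rng.integers(0, 256, size=(128, 128)).astype(float)

      # Approximate disk structuring element of radius 3 (7x7 footprint).
      y, x = np.ogrid[-3:4, -3:4]
      disk = (x ** 2 + y ** 2) <= 9

      # Grey-level opening removes bright structures smaller than the element;
      # the top-hat residue (image minus opening) isolates exactly those structures.
      opened = ndimage.grey_opening(image, footprint=disk)
      top_hat = image - opened

      print("brightest top-hat response:", top_hat.max())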

  6. Acoustic measurement and morphological features of organic sediment deposits in combined sewer networks.

    PubMed

    Carnacina, Iacopo; Larrarte, Frédérique; Leonardi, Nicoletta

    2017-04-01

    The performance of sewer networks has important consequences from an environmental and social point of view. Poor functioning can result in flood risk and pollution at a large scale. Sediment deposits forming in sewer trunks might severely compromise the sewer line by affecting the flow field, reducing cross-sectional areas, and increasing roughness coefficients. In spite of numerous efforts, the morphological features of these depositional environments remain poorly understood. The interface between water and sediment remains inefficiently identified and the estimation of the stock of deposit is frequently inaccurate. In part, this is due to technical issues connected to difficulties in collecting accurate field measurements without disrupting existing morphologies. In this paper, results from an extensive field campaign are presented; during the campaign a new survey methodology based on acoustic techniques was tested. Furthermore, a new algorithm for the detection of the soil-water interface, and therefore for a correct estimate of sediment stocks, is proposed. Finally, results regarding bed topography and morphological features at two different field sites are presented and reveal that a large variability in bed forms is present along sewer networks.

  7. Differences in acoustic features of vocalizations produced by killer whales cross-socialized with bottlenose dolphins.

    PubMed

    Musser, Whitney B; Bowles, Ann E; Grebner, Dawn M; Crance, Jessica L

    2014-10-01

    Limited previous evidence suggests that killer whales (Orcinus orca) are capable of vocal production learning. However, vocal contextual learning has not been studied, nor the factors promoting learning. Vocalizations were collected from three killer whales with a history of exposure to bottlenose dolphins (Tursiops truncatus) and compared with data from seven killer whales held with conspecifics and nine bottlenose dolphins. The three whales' repertoires were distinguishable by a higher proportion of click trains and whistles. Time-domain features of click trains were intermediate between those of whales held with conspecifics and dolphins. These differences provided evidence for contextual learning. One killer whale spontaneously learned to produce artificial chirps taught to dolphins; acoustic features fell within the range of inter-individual differences among the dolphins. This whale also produced whistles similar to a stereotyped whistle produced by one dolphin. Thus, results provide further support for vocal production learning and show that killer whales are capable of contextual learning. That killer whales produce similar repertoires when associated with another species suggests substantial vocal plasticity and motivation for vocal conformity with social associates.

  8. Historical feature pattern extraction based network attack situation sensing algorithm.

    PubMed

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are very sudden, uncertain, and difficult to recognize, and whose principles are difficult to describe with traditional algorithms. To address these problems, estimating the parameters of a super-long situation sequence is essential but very difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between an occurred indication and its subsequent effect. Then it calculates the probability that a certain effect reappears according to the current indication and makes a prediction after weighting. Meanwhile, the HFPE method gives an evolution algorithm to derive the prediction deviation from the viewpoints of pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as much as possible, does not need data preprocessing, and can continuously track and adapt to variation in the situation sequence.

  9. PSO based Gabor wavelet feature extraction and tracking method

    NASA Astrophysics Data System (ADS)

    Sun, Hongguang; Bu, Qian; Zhang, Huijie

    2008-12-01

    This paper studies the 2D Gabor wavelet and its application to target recognition and tracking in grey-level images. The optimization algorithms and technologies used in the system realization are studied and discussed in theory and practice. The translation, orientation, and scale parameters of the Gabor wavelet are optimized so that it approximates a local image contour region. Sobel edge detection is used to obtain initial position and orientation values for the optimization, which improves convergence speed. In the wavelet feature space, a particle swarm optimization (PSO) algorithm is adopted to identify points on the security border of the system; this ensures reliable convergence on the target, improves convergence speed, and shortens feature extraction time. Tests on low-contrast images, carried out on a VC++ simulation platform, demonstrate the feasibility and effectiveness of the algorithm. The improved Gabor wavelet method is then adopted for target tracking within a tracking framework, realizing moving-target tracking and stable tracking under rotation and affine distortion.
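
    As a loose illustration of the idea described above (a particle swarm searching the Gabor parameters of position, orientation, and scale that maximize the local filter response), the following Python sketch may help; the kernel parameterization, fitness function, parameter bounds, and PSO constants are all illustrative assumptions, not the authors' implementation.

        import numpy as np

        def gabor_response(img, x0, y0, theta, sigma, lam=8.0, half=8):
            """Response of a real Gabor kernel centered at (x0, y0) with
            orientation theta and scale sigma, correlated with the local patch."""
            h, w = img.shape
            ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
            xr = xs * np.cos(theta) + ys * np.sin(theta)
            yr = -xs * np.sin(theta) + ys * np.cos(theta)
            kernel = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
            x0, y0 = int(round(x0)), int(round(y0))
            if not (half <= x0 < w - half and half <= y0 < h - half):
                return -np.inf                        # penalize out-of-bounds particles
            patch = img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
            return float(np.sum(kernel * patch))

        def pso_fit_gabor(img, n_particles=30, n_iter=50):
            """Plain PSO over (x, y, theta, sigma): particles move toward their
            own and the swarm's best Gabor responses."""
            h, w = img.shape
            rng = np.random.default_rng(0)
            lo = np.array([8, 8, 0.0, 2.0])
            hi = np.array([w - 9, h - 9, np.pi, 6.0])
            pos = lo + rng.random((n_particles, 4)) * (hi - lo)
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pbest_val = np.array([gabor_response(img, *p) for p in pos])
            gbest = pbest[np.argmax(pbest_val)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo, hi)
                vals = np.array([gabor_response(img, *p) for p in pos])
                improved = vals > pbest_val
                pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
                gbest = pbest[np.argmax(pbest_val)].copy()
            return gbest, pbest_val.max()

        # Illustrative image: an oriented grating patch embedded in noise
        img = 0.1 * np.random.randn(64, 64)
        _, xs = np.mgrid[0:64, 0:64]
        img[20:40, 20:40] += np.cos(2 * np.pi * xs[20:40, 20:40] / 8.0)
        print(pso_fit_gabor(img))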

  10. Information Theoretic Extraction of EEG Features for Monitoring Subject Attention

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2000-01-01

    The goal of this project was to test the applicability of information theoretic learning (feasibility study) to develop new brain computer interfaces (BCI). The difficulty of BCI comes from several aspects: (1) the effective collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; (3) the pattern recognition methodology to detect reliably the signals related to cognitive states. We addressed only the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features to detect the presence of event related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found that the performance of temporal classifiers is superior to static classifiers, but not by much. We conclude by stating that the future of BCI should be found in alternate approaches to sense, collect and process the signals created by populations of neurons. Towards this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.
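
    A minimal sketch of the Bhattacharyya distance evaluated above, under the usual multivariate-Gaussian assumption for each class; the EEG band-power features below are hypothetical placeholders, not project data.

        import numpy as np

        def bhattacharyya_distance(x1, x2):
            """Bhattacharyya distance between two classes of feature vectors,
            each modeled as a multivariate Gaussian (rows = samples)."""
            m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
            s1 = np.cov(x1, rowvar=False)
            s2 = np.cov(x2, rowvar=False)
            s = (s1 + s2) / 2.0
            diff = (m1 - m2).reshape(-1, 1)
            term1 = 0.125 * float(diff.T @ np.linalg.inv(s) @ diff)
            term2 = 0.5 * np.log(np.linalg.det(s) /
                                 np.sqrt(np.linalg.det(s1) * np.linalg.det(s2)))
            return term1 + term2

        # Hypothetical EEG band-power features for two cognitive states
        rng = np.random.default_rng(0)
        rest = rng.normal(0.0, 1.0, size=(100, 4))
        task = rng.normal(0.5, 1.2, size=(100, 4))
        print(bhattacharyya_distance(rest, task))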

  11. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.

  12. Feature extraction and models for speech: An overview

    NASA Astrophysics Data System (ADS)

    Schroeder, Manfred

    2002-11-01

    Modeling of speech has a long history, beginning with Count von Kempelen's 1770 mechanical speaking machine. Even then human vowel production was seen as resulting from a source (the vocal cords) driving a physically separate resonator (the vocal tract). Homer Dudley's 1928 frequency-channel vocoder and many of its descendants are based on the same successful source-filter paradigm. For linguistic studies as well as practical applications in speech recognition, compression, and synthesis (see M. R. Schroeder, Computer Speech), the extant models require the (often difficult) extraction of numerous parameters such as the fundamental and formant frequencies and various linguistic distinctive features. Some of these difficulties were obviated by the introduction of linear predictive coding (LPC) in 1967, in which the filter part is an all-pole filter, reflecting the fact that for non-nasalized vowels the vocal tract is well approximated by an all-pole transfer function. In the now ubiquitous code-excited linear prediction (CELP), the source part is replaced by a code book which (together with a perceptual error criterion) permits speech compression to very low bit rates at high speech quality for the Internet and cell phones.
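
    A hedged sketch of the all-pole estimation step underlying LPC, using the autocorrelation method (Toeplitz normal equations); the predictor order, windowing, and synthetic frame are illustrative choices, not taken from the overview above.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def lpc(frame, order=10):
            """Estimate LPC coefficients via the autocorrelation method:
            solve R a = r for the all-pole predictor of the given order."""
            frame = frame * np.hamming(len(frame))          # taper the analysis frame
            r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
            a = solve_toeplitz(r[:order], r[1:order + 1])   # Toeplitz normal equations
            return np.concatenate(([1.0], -a))              # A(z) = 1 - sum a_k z^-k

        # Illustrative synthetic "vowel-like" frame: damped resonance plus noise
        fs = 8000
        t = np.arange(400) / fs
        frame = np.sin(2 * np.pi * 700 * t) * np.exp(-20 * t) + 0.01 * np.random.randn(400)
        print(lpc(frame, order=8))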

  13. Digital image comparison using feature extraction and luminance matching

    NASA Astrophysics Data System (ADS)

    Bachnak, Ray A.; Steidley, Carl W.; Funtanilla, Jeng

    2005-03-01

    This paper presents the results of comparing two digital images acquired using two different light sources. One of the sources is a 50-W metal halide lamp located in the compartment of an industrial borescope and the other is a 1 W LED placed at the tip of the insertion tube of the borescope. The two images are compared quantitatively and qualitatively using feature extraction and luminance matching approaches. Quantitative methods included the images' histograms, intensity profiles along a line segment, edges, and luminance measurement. Qualitative methods included image registration and linear conformal transformation with eight control points. This transformation is useful when shapes in the input image are unchanged, but the image is distorted by some combination of translation, rotation, and scaling. The gray-level histogram, edge detection, image profile and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by the operator. The paper presents the results and discusses the usefulness and shortcomings of various comparison methods.

  14. Ice images processing interface for automatic features extraction

    NASA Astrophysics Data System (ADS)

    Tardif, Pierre M.

    2001-02-01

    The Canadian Coast Guard has the mandate to maintain the navigability of the St. Lawrence seaway and must prevent ice jam formation. Radar, sonar sensors and cameras are used to verify ice movement and keep a record of pertinent data. The cameras are placed along the seaway at strategic locations. Images are processed and saved for future reference. The Ice Images Processing Interface (IIPI) is an integral part of the Ices Integrated System (IIS). This software processes images to extract the ice speed, concentration, roughness, and rate of flow. Ice concentration is computed from image segmentation using color models and a priori information. Speed is obtained from a region-matching algorithm. Both concentration and speed calculations are complex, since they require a calibration step involving on-site measurements. Color texture features provide ice roughness estimation. The rate of flow uses ice thickness, which is estimated from sonar sensors on the river floor. Our paper presents how we modeled and designed the IIPI, the issues involved, and its future. For more reliable results, we suggest that meteorological data be provided, changes in camera orientation be accounted for, sun reflections be anticipated, and more a priori information, such as the radar images available at some sites, be included.

  15. Extracting Feature Points of the Human Body Using the Model of a 3D Human Body

    NASA Astrophysics Data System (ADS)

    Shin, Jeongeun; Ozawa, Shinji

    The purpose of this research is to recognize 3D shape features of a human body automatically using a 3D laser-scanning machine. In order to recognize the 3D shape features, we selected 23 feature points of the body and modeled its 3D features. The set of 23 feature points consists of the motion axes of the joints, the main points for the bone structure of a human body. To extract the feature points of an object model, we made 2.5D templates of the neighborhood of each feature point according to the feature points of a standard model of the human body, and the feature points were then extracted by template matching. The extracted feature points can be applied to body measurement, 3D virtual fitting systems for apparel, etc.

  16. Classification of Benign and Malignant Breast Tumors in Ultrasound Images with Posterior Acoustic Shadowing Using Half-Contour Features.

    PubMed

    Zhou, Zhuhuang; Wu, Shuicai; Chang, King-Jen; Chen, Wei-Ren; Chen, Yung-Sheng; Kuo, Wen-Hung; Lin, Chung-Chih; Tsui, Po-Hsiang

    Posterior acoustic shadowing (PAS) can bias breast tumor segmentation and classification in ultrasound images. In this paper, half-contour features are proposed to classify benign and malignant breast tumors with PAS, considering the fact that the upper half of the tumor contour is less affected by PAS. Adaptive thresholding and disk expansion are employed to detect tumor contours. Based on the detected full contour, the upper half contour is extracted. For breast tumor classification, six quantitative feature parameters are analyzed for both full contours and half contours, including standard deviation of degree (SDD), which is proposed to describe tumor irregularity. Fifty clinical cases (40 with PAS and 10 without PAS) were used. Tumor circularity (TC) and SDD were both effective full- and half-contour parameters in classifying images without PAS. Half-contour TC [74 % accuracy, 72 % sensitivity, 76 % specificity, 0.78 area under the receiver operating characteristic curve (AUC), p > 0.05] significantly improved the classification of breast tumors with PAS compared to that with full-contour TC (54 % accuracy, 56 % sensitivity, 52 % specificity, 0.52 AUC, p > 0.05). Half-contour SDD (72 % accuracy, 76 % sensitivity, 68 % specificity, 0.81 AUC, p < 0.05) improved the classification of breast tumors with PAS compared to that with full-contour SDD (62 % accuracy, 80 % sensitivity, 44 % specificity, 0.61 AUC, p > 0.05). The proposed half-contour TC and SDD may be useful in classifying benign and malignant breast tumors in ultrasound images affected by PAS.

  17. Acoustic features contributing to the individuality of wild agile gibbon (Hylobates agilis agilis) songs.

    PubMed

    Oyakawa, Chisako; Koda, Hiroki; Sugiura, Hideki

    2007-07-01

    We examined acoustic individuality in wild agile gibbons Hylobates agilis agilis and determined the acoustic variables that contribute to individual discrimination using multivariate analyses. We recorded 125 female-specific songs (great calls) from six groups in west Sumatra and measured 58 acoustic variables for each great call. We performed principal component analysis to summarize the 58 variables into six acoustic principal components (PCs). Generally, each PC corresponded to a part of the great call. Significant individual differences were found across the six individual gibbons in each of the six PCs. Moreover, strong acoustic individuality was found in the introductory and climax parts of the great call. In contrast, the terminal part contributed little to individual identification. Discriminant analysis showed that these PCs contributed to individual discrimination with high repeatability. Although we cannot conclude that agile gibbons use these acoustic components for individual discrimination, they are potential candidates for individual recognition.

  18. A Hybrid Neural Network and Feature Extraction Technique for Target Recognition.

    DTIC Science & Technology

    Target features are extracted, and the extracted data are evaluated in an artificial neural network to identify a target at a location within the image scene from which the different viewing angles extend.

  19. Neural Detection of Malicious Network Activities Using a New Direct Parsing and Feature Extraction Technique

    DTIC Science & Technology

    2015-09-01

    Thesis front matter only (no abstract available): "Neural Detection of Malicious Network Activities Using a New Direct Parsing and Feature Extraction Technique," by Cheng Hong Low (civilian, ST Aerospace, Singapore; M.Sc., National University of Singapore, 2012), September 2015; thesis advisor Phillip Pace.

  20. Credible Set Estimation, Analysis, and Applications in Synthetic Aperture Radar Canonical Feature Extraction

    DTIC Science & Technology

    2015-03-26

    Thesis front matter only (no abstract available): "Credible Set Estimation, Analysis, and Applications in Synthetic Aperture Radar Canonical Feature Extraction," by Andrew C. Rexford, B.S.E.E., 1st Lieutenant, USAF; presented to the faculty of the Department of Electrical and Computer Engineering.

  1. A robust technique for semantic annotation of group activities based on recognition of extracted features in video streams

    NASA Astrophysics Data System (ADS)

    Elangovan, Vinayak; Shirkhodaie, Amir

    2013-05-01

    Recognition and understanding of group activities can significantly improve situational awareness in surveillance systems. To maximize the reliability and effectiveness of Persistent Surveillance Systems, annotations of sequential images gathered from video streams (i.e., imagery and acoustic features) must be fused together to generate semantic messages describing group activities (GA). To facilitate efficient fusion of features extracted from different physical sensors, a common structure is needed to ease integration of the processed data into new comprehension. In this paper, we describe a framework for the extraction and management of pertinent features/attributes vital for reliable annotation of group activities. A robust technique is proposed for fusion of generated events and entities' attributes from video streams. A modified Transducer Markup Language (TML) is introduced for semantic annotation of events and entity attributes. By aggregating multi-attribute TML messages, we demonstrate that salient group activities can be reliably annotated in space and time. This paper discusses our experimental results and our analysis of a set of simulated group activities performed under different contexts, and demonstrates the efficiency and effectiveness of the proposed modified TML data structure, which facilitates seamless fusion of extracted information from video streams.

  2. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.

  3. Extracting features from protein sequences to improve deep extreme learning machine for protein fold recognition.

    PubMed

    Ibrahim, Wisam; Abadeh, Mohammad Saniee

    2017-03-27

    Protein fold recognition is an important problem in bioinformatics to predict three-dimensional structure of a protein. One of the most challenging tasks in protein fold recognition problem is the extraction of efficient features from the amino-acid sequences to obtain better classifiers. In this paper, we have proposed six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework PCA-DELM-LDA to extract feature vectors from the amino-acid sequences. Principal Component Analysis PCA has been implemented to reduce the number of extracted features. The extracted feature vectors have been used with original features to improve the performance of the Deep Extreme Learning Machine DELM in the second stage. Four new features have been extracted from the second stage and used in the third stage by Linear Discriminant Analysis LDA to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets in SCOP datasets. The experimental results show that extracted feature vectors in the first stage could improve the performance of DELM in extracting new useful features in second stage.

  4. Algorithm for heart rate extraction in a novel wearable acoustic sensor

    PubMed Central

    Imtiaz, Syed Anas; Aguilar–Pelaez, Eduardo; Rodriguez–Villegas, Esther

    2015-01-01

    Phonocardiography is a widely used method of listening to the heart sounds and indicating the presence of cardiac abnormalities. Each heart cycle consists of two major sounds – S1 and S2 – that can be used to determine the heart rate. The conventional method of acoustic signal acquisition involves placing the sound sensor at the chest where this sound is most audible. Presented is a novel algorithm for the detection of S1 and S2 heart sounds and the use of them to extract the heart rate from signals acquired by a small sensor placed at the neck. This algorithm achieves an accuracy of 90.73 and 90.69%, with respect to heart rate value provided by two commercial devices, evaluated on more than 38 h of data acquired from ten different subjects during sleep in a pilot clinical study. This is the largest dataset for acoustic heart sound classification and heart rate extraction in the literature to date. The algorithm in this study used signals from a sensor designed to monitor breathing. This shows that the same sensor and signal can be used to monitor both breathing and heart rate, making it highly useful for long-term wearable vital signs monitoring. PMID:26609401

  5. Algorithm for heart rate extraction in a novel wearable acoustic sensor.

    PubMed

    Chen, Guangwei; Imtiaz, Syed Anas; Aguilar-Pelaez, Eduardo; Rodriguez-Villegas, Esther

    2015-02-01

    Phonocardiography is a widely used method of listening to the heart sounds and indicating the presence of cardiac abnormalities. Each heart cycle consists of two major sounds - S1 and S2 - that can be used to determine the heart rate. The conventional method of acoustic signal acquisition involves placing the sound sensor at the chest where this sound is most audible. Presented is a novel algorithm for the detection of S1 and S2 heart sounds and the use of them to extract the heart rate from signals acquired by a small sensor placed at the neck. This algorithm achieves an accuracy of 90.73 and 90.69%, with respect to heart rate value provided by two commercial devices, evaluated on more than 38 h of data acquired from ten different subjects during sleep in a pilot clinical study. This is the largest dataset for acoustic heart sound classification and heart rate extraction in the literature to date. The algorithm in this study used signals from a sensor designed to monitor breathing. This shows that the same sensor and signal can be used to monitor both breathing and heart rate, making it highly useful for long-term wearable vital signs monitoring.
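
    A minimal sketch of one plausible detection chain of this kind (band-pass filtering, Hilbert envelope, peak picking, inter-peak timing), not the authors' algorithm; the cutoff frequencies, minimum peak spacing, and synthetic signal are assumptions, and a real detector would further separate S1 from S2 before computing heart rate.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert, find_peaks

        def heart_rate_from_sounds(x, fs):
            """Rough heart-rate estimate from an acoustic signal: band-pass to the
            heart-sound band, take the Hilbert envelope, pick prominent peaks,
            and convert the mean peak spacing to beats per minute."""
            b, a = butter(4, [25 / (fs / 2), 150 / (fs / 2)], btype='band')
            env = np.abs(hilbert(filtfilt(b, a, x)))
            # require peaks at least 0.4 s apart and reasonably prominent
            peaks, _ = find_peaks(env, distance=int(0.4 * fs), prominence=np.std(env))
            if len(peaks) < 2:
                return None
            return 60.0 / np.mean(np.diff(peaks) / fs)

        # Illustrative synthetic signal: one burst per cycle at 1.2 Hz (72 bpm) plus noise
        fs = 2000
        t = np.arange(0, 10, 1 / fs)
        x = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.95) \
            + 0.05 * np.random.randn(len(t))
        print(heart_rate_from_sounds(x, fs))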

  6. Acousto-Optic Technology for Topographic Feature Extraction and Image Analysis.

    DTIC Science & Technology

    1981-03-01

    This report contains all findings of the acousto-optic technology study for feature extraction conducted by Deft Laboratories Inc. for the U.S. Army ... topographic feature extraction and image analysis using acousto-optic (A-O) technology. A conclusion of this study was that A-O devices are potentially

  7. Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction

    DTIC Science & Technology

    2014-03-27

    Thesis front matter only (no abstract available): "Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction," by Matthew P. Crosser, Captain, USAF; Department of Electrical and Computer Engineering, Graduate School of Engineering; report AFIT-ENG-14-M-21; approved for public release, distribution unlimited.

  8. Fast multi-feature paradigm for recording several mismatch negativities (MMNs) to phonetic and acoustic changes in speech sounds.

    PubMed

    Pakarinen, Satu; Lovio, Riikka; Huotilainen, Minna; Alku, Paavo; Näätänen, Risto; Kujala, Teija

    2009-12-01

    In this study, we addressed whether a new fast multi-feature mismatch negativity (MMN) paradigm can be used for determining the central auditory discrimination accuracy for several acoustic and phonetic changes in speech sounds. We recorded the MMNs in the multi-feature paradigm to changes in syllable intensity, frequency, and vowel length, as well as for consonant and vowel change, and compared these MMNs to those obtained with the traditional oddball paradigm. In addition, we examined the reliability of the multi-feature paradigm by repeating the recordings with the same subjects 1-7 days after the first recordings. The MMNs recorded with the multi-feature paradigm were similar to those obtained with the oddball paradigm. Furthermore, only minor differences were observed in the MMN amplitudes across the two recording sessions. Thus, this new multi-feature paradigm with speech stimuli provides similar results as the oddball paradigm, and the MMNs recorded with the new paradigm were reproducible.

  9. Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu

    Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction of patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern sets, that is, on how the patterns are distributed in the feature space. One reason we have pointed out is that ICA features are obtained by increasing only their independence, even if class information is available. In this context, higher-performance features can be expected by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to see how well maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives features with high separability compared with principal component analysis and a conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. The results show that better recognition accuracy is obtained using the proposed SICA. Furthermore, we show that pattern features extracted by SICA are better than those extracted by maximizing the Mahalanobis distance alone.
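
    A small sketch of the between-class Mahalanobis distance that the proposed SICA maximizes; only the distance term is shown (the independence objective and its optimization are omitted), and the feature matrices are synthetic placeholders.

        import numpy as np

        def mahalanobis_class_distance(f1, f2):
            """Mahalanobis distance between the means of two feature classes,
            using the pooled within-class covariance (rows = samples)."""
            m1, m2 = f1.mean(axis=0), f2.mean(axis=0)
            pooled = (np.cov(f1, rowvar=False) + np.cov(f2, rowvar=False)) / 2.0
            d = m1 - m2
            return float(np.sqrt(d @ np.linalg.inv(pooled) @ d))

        # Hypothetical ICA feature vectors of two pattern classes
        rng = np.random.default_rng(1)
        class_a = rng.normal(0.0, 1.0, size=(50, 3))
        class_b = rng.normal(1.0, 1.0, size=(50, 3))
        print(mahalanobis_class_distance(class_a, class_b))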

  10. Comparison of half and full-leaf shape feature extraction for leaf classification

    NASA Astrophysics Data System (ADS)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main source of information for leaf features, and most of the current literature on leaf identification utilizes the whole leaf for feature extraction in the identification process. In this paper, a study of half-leaf feature extraction for leaf identification is carried out, and the results are compared with those obtained from identification based on full-leaf feature extraction. Identification and classification are based on shape features represented as cosine and sine angles. Six single classifiers from WEKA and seven ensemble methods are used to compare their accuracies on these data. The classifiers were trained using 65 leaves in order to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The results show that half-leaf feature extraction can be used for leaf identification without decreasing predictive accuracy.

  11. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  12. Feature Extraction from Parallel/Distributed Transient CFD Solutions

    DTIC Science & Technology

    2007-11-02

    Visualization should not significantly slow down the solution procedure for co-processing environments like pV3, and methods must be developed to abstract each feature and display it in a manner that makes physical sense.

  13. Image Algebra Application to Image Measurement and Feature Extraction

    NASA Astrophysics Data System (ADS)

    Ritter, Gerhard X.; Wilson, Joseph N.; Davidson, Jennifer L.

    1989-03-01

    It has been well established that the AFATL (Air Force Armament Technical Laboratory) Image Algebra is capable of expressing all image-to-image transformations [1,2] and that it is ideally suited for parallel image transformations [3,4]. In this paper we show how the algebra can also be applied to compactly express image-to-feature transforms, including such sequential image-to-feature transforms as chain coding.

  14. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied in large scale or big data. In this paper, MapReduce in Hadoop is investigated for large scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
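
    A hedged sketch of the split-and-distribute pattern using Hadoop Streaming rather than the authors' MapReduce code; the per-line image paths and the toy histogram "feature" are assumptions standing in for a real WAMI feature extractor.

        #!/usr/bin/env python
        # mapper.py -- emits "image_path <tab> feature vector" for each input line,
        # where each line of the split is assumed to hold the path of one WAMI frame.
        import sys
        import numpy as np
        from PIL import Image

        for line in sys.stdin:
            path = line.strip()
            if not path:
                continue
            img = np.asarray(Image.open(path).convert('L'))
            # toy per-frame feature: 16-bin grey-level histogram (a stand-in for a
            # real WAMI feature extractor such as a detector or descriptor)
            hist, _ = np.histogram(img, bins=16, range=(0, 255), density=True)
            print(path + '\t' + ','.join('%.5f' % v for v in hist))

    With such a mapper, a run along the lines of "hadoop jar hadoop-streaming.jar -input frame_lists -output features -mapper mapper.py -reducer cat" would let each split's frames be processed on a slave node and the per-frame features aggregated in HDFS; the paths and job options here are illustrative.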

  15. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields

    PubMed Central

    Mesgarani, Nima; Deneve, Sophie

    2016-01-01

    A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by “explaining away,” a divisive competition between alternative interpretations of the auditory scene. SIGNIFICANCE STATEMENT Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons

  16. Semantic Feature Extraction for Brain CT Image Clustering Using Nonnegative Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Liu, Weixiang; Peng, Fei; Feng, Shu; You, Jiangsheng; Chen, Ziqiang; Wu, Jian; Yuan, Kehong; Ye, Datian

    Brain computed tomography (CT) image based computer-aided diagnosis (CAD) systems are helpful for clinical diagnosis and treatment. However, it is challenging to extract significant features for analysis because CT images come from different patients and CT operators. In this study, we apply nonnegative matrix factorization (NMF) to extract both appearance-based and histogram-based semantic features of images for clustering analysis. Our experimental results on normal and tumor CT images demonstrate that NMF can discover local features for both visual content and histogram-based semantics, and the clustering results show that the semantic image features are superior to low-level visual features.
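
    A hedged sketch of NMF-based feature extraction followed by clustering, using scikit-learn rather than the authors' implementation; the nonnegative data matrix below is synthetic and merely stands in for per-slice histogram or appearance descriptors.

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.cluster import KMeans

        # Hypothetical nonnegative data: one 64-bin intensity histogram per CT slice
        rng = np.random.default_rng(0)
        V = np.abs(rng.normal(size=(120, 64)))

        # V ~= W H : rows of H are learned parts ("semantic" basis histograms),
        # rows of W are the per-image encodings used as features for clustering
        model = NMF(n_components=8, init='nndsvda', max_iter=500, random_state=0)
        W = model.fit_transform(V)
        H = model.components_

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(W)
        print(labels[:20])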

  17. Association Rule Based Feature Extraction for Character Recognition

    NASA Astrophysics Data System (ADS)

    Dua, Sumeet; Singh, Harpreet

    Association rules that represent isomorphisms among data have gained importance in exploratory data analysis because they can find inherent, implicit, and interesting relationships among data. They are also commonly used in data mining to extract the conditions among attribute values that occur together frequently in a dataset [1]. These rules have a wide range of applications, notably in the financial and retail sectors, marketing, sales, and medicine.

  18. (Almost) Automatic Semantic Feature Extraction from Technical Text

    DTIC Science & Technology

    1994-01-01

    Excerpt (abstract not available): The next section describes an existing NLP system (KUDZU) developed at Mississippi State University. The research described in this paper is part of a larger ongoing project called KUDZU (Knowledge Under Development from Zero Understanding), aimed at exploring the automation of information extraction from technical texts.

  19. Feature Extraction of High-Dimensional Structures for Exploratory Analytics

    DTIC Science & Technology

    2013-04-01

    Excerpt (abstract not available): development of a method to gain insight into high-dimensional data (HDD), particularly the application of an analytic strategy to terrorist data. Evaluation data mentioned include geodesic distance, the COIL-20 dataset, a word-features dataset, and a Netflix dataset, analyzed with manifold learners.

  20. Group Component Analysis for Multiblock Data: Common and Individual Feature Extraction.

    PubMed

    Zhou, Guoxu; Cichocki, Andrzej; Zhang, Yu; Mandic, Danilo P

    2016-11-01

    Real-world data are often acquired as a collection of matrices rather than as a single matrix. Such multiblock data are naturally linked and typically share some common features while at the same time exhibiting their own individual features, reflecting the underlying data generation mechanisms. To exploit the linked nature of the data, we propose a new framework for common and individual feature extraction (CIFE) which identifies and separates the common and individual features in multiblock data. Two efficient algorithms, termed common orthogonal basis extraction (COBE), are proposed to extract the common basis shared by all data, regardless of whether the number of common components is known beforehand. Feature extraction is then performed on the common and individual subspaces separately, by incorporating dimensionality reduction and blind source separation techniques. Comprehensive experimental results on both synthetic and real-world data demonstrate significant advantages of the proposed CIFE method in comparison with the state of the art.

  1. Arranging the order of feature-extraction operations in pattern classification

    NASA Astrophysics Data System (ADS)

    Hwang, Shu-Yuen; Tsai, Ronlon

    1992-02-01

    The typical process of statistical pattern classification is to first extract features from an object presented in an input image and then, using the Bayesian decision rule, to compute the a posteriori probabilities that the object will be recognized by the system. When the recursive Bayesian decision rule is used in this process, the feature-extraction phase can be interleaved with the classification phase, so that the a posteriori probabilities after adding each feature can be computed one by one. There are two reasons for considering which feature should be extracted first and which should go next. First, feature extraction is usually very time consuming; the extraction of any global feature from an object needs time at least on the order of the size of the object. Second, very often we do not need to use all features in order to obtain a final classification; the a posteriori probabilities of some models become zero after only a few features have been used. The problem is how to arrange the order of feature-extraction operations such that a minimum number of operations suffices for correct classification. This paper presents two information-theoretic heuristics for predicting the performance of feature-extraction operations. The prediction is then used to arrange the order of these operations. The first heuristic is the power of discrimination of each operation. The second heuristic is the power of justification of each operation and is used in the special case that some points in the feature space do not belong to any model. Both heuristics are computed from the distributions of the models. Experimental results and a comparison with our previous work are presented.
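
    A minimal sketch of the general idea of ranking feature-extraction operations by their discriminating power, here approximated with mutual information between each candidate feature and the class label; this is not the paper's heuristic, and the data are synthetic.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        # Hypothetical training set: 3 candidate features, only two of them informative
        rng = np.random.default_rng(0)
        y = rng.integers(0, 3, size=300)                      # 3 pattern classes
        X = np.column_stack([
            y + 0.3 * rng.normal(size=300),                   # strongly class-dependent
            (y == 2) + 0.5 * rng.normal(size=300),            # weakly class-dependent
            rng.normal(size=300),                             # uninformative
        ])

        # Rank extraction operations by how much each feature tells us about the class;
        # the most discriminative (and hence most worthwhile) extractions come first
        scores = mutual_info_classif(X, y, random_state=0)
        order = np.argsort(scores)[::-1]
        print("extraction order:", order, "scores:", np.round(scores, 3))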

  2. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  3. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias [Knoxville, TN; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN; Wang, Xiaoling [San Jose, CA

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
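
    A small sketch of the amplitude-statistics part of the feature-vector construction described in the two records above (the time-frequency analysis and the toxicity decision against the control signal are omitted); the signals and the particular statistics chosen are illustrative assumptions.

        import numpy as np
        from scipy.stats import skew, kurtosis

        def amplitude_feature_vector(signal):
            """Amplitude-statistics feature vector for a time-dependent biosensor
            signal: mean, standard deviation, skewness, kurtosis, and peak-to-peak."""
            return np.array([
                np.mean(signal),
                np.std(signal),
                skew(signal),
                kurtosis(signal),
                np.ptp(signal),
            ])

        # Hypothetical control vs. exposed biosensor signals
        t = np.linspace(0, 10, 1000)
        control = 1.0 + 0.3 * np.exp(-t) + 0.02 * np.random.randn(t.size)
        exposed = 1.0 + 0.1 * np.exp(-t) + 0.05 * np.random.randn(t.size)
        print(amplitude_feature_vector(control))
        print(amplitude_feature_vector(exposed))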

  4. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference

    NASA Astrophysics Data System (ADS)

    Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi

    Feature extraction for mental fatigue and relaxation states is helpful for understanding the mechanisms of mental fatigue and for finding effective relaxation techniques in sustained work environments. Experimental data on human states are often affected by external and internal factors, which increases the difficulty of extracting common features. The aim of this study is to explore appropriate methods to eliminate individual differences and enhance common features. Mental fatigue and relaxation experiments were carried out on 12 subjects. An integrated evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance, and neurophysiological signals, especially EEG signals. Taking individual differences into consideration, the common features across the multiple estimators testify to the effectiveness of relaxation in sustained mental work. The relaxation technique can be applied in practice to prevent the accumulation of mental fatigue and maintain mental health. The proposed feature extraction methods are widely applicable for obtaining common features and relax the restrictions on subject selection and experiment design.

  5. TIN based image segmentation for man-made feature extraction

    NASA Astrophysics Data System (ADS)

    Jiang, Wanshou; Xie, Junfeng

    2005-10-01

    Traditionally, the splitting and merging algorithms of image segmentation are based on a quad-tree data structure, which is not convenient for expressing the topography of regions, line segments, and other information. A new framework is discussed in this paper: TIN-based image segmentation and grouping, in which edge information and region information are integrated directly. First, a constrained triangle mesh is constructed from edge segments extracted by EDISON or another algorithm. Then, region growing based on triangles is performed to generate a coarse segmentation. Finally, the regions are further combined using perceptual organization rules.

  6. Semantic Control of Feature Extraction from Natural Scenes

    PubMed Central

    2014-01-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  7. Semantic control of feature extraction from natural scenes.

    PubMed

    Neri, Peter

    2014-02-05

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect.

  8. Acoustic and Articulatory Features of Diphthong Production: A Speech Clarity Study

    ERIC Educational Resources Information Center

    Tasko, Stephen M.; Greilick, Kristin

    2010-01-01

    Purpose: The purpose of this study was to evaluate how speaking clearly influences selected acoustic and orofacial kinematic measures associated with diphthong production. Method: Forty-nine speakers, drawn from the University of Wisconsin X-Ray Microbeam Speech Production Database (J. R. Westbury, 1994), served as participants. Samples of clear…

  9. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data

    PubMed Central

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-01-01

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthew’s Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308

  10. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data.

    PubMed

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-02-06

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthew's Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions.
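
    A hedged sketch of the resampling, selection, and classification chain named above (SMOTE, random-forest-driven recursive feature elimination, then a random forest classifier), built from imbalanced-learn and scikit-learn rather than the authors' code; the feature matrix is synthetic and the CSP and g-gap dipeptide feature construction is omitted.

        import numpy as np
        from imblearn.over_sampling import SMOTE
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score

        # Hypothetical imbalanced data standing in for cis- vs trans-Golgi features
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))
        y = np.r_[np.zeros(250, dtype=int), np.ones(50, dtype=int)]
        X[y == 1, :5] += 1.0                       # a few genuinely informative columns

        # 1) balance the classes, 2) select features with RF-driven RFE, 3) classify
        X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
        selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
                       n_features_to_select=10).fit(X_bal, y_bal)
        X_sel = selector.transform(X_bal)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print(cross_val_score(clf, X_sel, y_bal, cv=5).mean())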

  11. A finite element propagation model for extracting normal incidence impedance in nonprogressive acoustic wave fields

    NASA Astrophysics Data System (ADS)

    Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.

    1995-04-01

    A propagation model method for extracting the normal incidence impedance of an acoustic material installed as a finite length segment in a wall of a duct carrying a nonprogressive wave field is presented. The method recasts the determination of the unknown impedance as the minimization of the normalized wall pressure error function. A finite element propagation model is combined with a coarse/fine grid impedance plane search technique to extract the impedance of the material. Results are presented for three different materials for which the impedance is known. For each material, the input data required for the prediction scheme were computed from modal theory and then contaminated by random error. The finite element method reproduces the known impedance of each material almost exactly for random errors typical of those found in many measurement environments. Thus, the method developed here provides a means for determining the impedance of materials in a nonprogressive wave environment such as that usually encountered in a commercial aircraft engine and in most laboratory settings.

  12. A Finite Element Propagation Model for Extracting Normal Incidence Impedance in Nonprogressive Acoustic Wave Fields

    NASA Astrophysics Data System (ADS)

    Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.

    1996-04-01

    A propagation model method for extracting the normal incidence impedance of an acoustic material installed as a finite length segment in a wall of a duct carrying a nonprogressive wave field is presented. The method recasts the determination of the unknown impedance as the minimization of the normalized wall pressure error function. A finite element propagation model is combined with a coarse/fine grid impedance plane search technique to extract the impedance of the material. Results are presented for three different materials for which the impedance is known. For each material, the input data required for the prediction scheme were computed from modal theory and then contaminated by random error. The finite element method reproduces the known impedance of each material almost exactly for random errors typical of those found in many measurement environments. Thus, the method developed here provides a means for determining the impedance of materials in a nonprogressive wave environment such as that usually encountered in a commercial aircraft engine and in most laboratory settings.

  13. Comparison study of feature extraction methods in structural damage pattern recognition

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Chen, Bo; Swartz, R. Andrew

    2011-04-01

    This paper compares the performance of various feature extraction methods applied to structural sensor measurements acquired in situ from a decommissioned bridge under realistic damage scenarios. Three feature extraction methods are applied to the sensor data to generate feature vectors for normal and damaged structure data patterns. The investigated feature extraction methods include both time-domain and frequency-domain methods. The evaluation of the feature extraction methods is performed by examining distance values among different patterns, distance values among feature vectors within the same pattern, and pattern recognition success rate. The test data used in the comparison study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case data sets, including undamaged cases and pier settlement cases (of different depths), are used to test the separation of feature vectors among different patterns, and the pattern recognition success rate for the different feature extraction methods is reported.

  14. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    PubMed Central

    Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over the conventional image representation methods. However, the standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision map is proposed to deal with these problems simultaneously. Three decision maps are designed including structure information map (SM) and energy information map (EM) as well as structure and energy map (SEM) to make the results reserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG) and EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. Proposed approach also improves the quality of the fused results by enhancing the contrast and reserving more structure and energy information from the source images. The experiment results of 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246

  15. Knowledge-based topographic feature extraction in medical images

    NASA Astrophysics Data System (ADS)

    Qian, JianZhong; Khair, Mohammad M.

    1995-08-01

    Diagnostic medical imaging often involves variations in patient anatomy, camera mispositioning, or other imperfect imaging conditions. These variations contribute to uncertainty about the shapes and boundaries of objects in images. As a result, image features such as traditional edges sometimes cannot be identified reliably and completely. We describe a knowledge-based system that is able to reason about such uncertainties and use partial and locally ambiguous information to infer the shapes and locations of objects in an image. The system uses directional topographic features (DTFs), such as ridges and valleys, labeled from the underlying intensity surface, to correlate with the intrinsic anatomical information. By using domain-specific knowledge, the reasoning system can deduce significant anatomical landmarks based upon these DTFs and can cope with uncertainties and fill in missing information. A succession of levels of representation for visual information and an active process of uncertain reasoning about this visual information are employed to reliably achieve the goal of image analysis. These landmarks can then be used for localization of the anatomy of interest, image registration, or other clinical processing. The successful application of this system to a large set of planar cardiac images from nuclear medicine studies has demonstrated its efficiency and accuracy.

  16. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    PubMed Central

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171

  17. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.

    PubMed

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-09-13

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features.
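
    The key ingredient named above, Correlated Kurtosis (CK), can be prototyped in a few lines. The following is a hedged sketch of the CK statistic for a candidate fault period, following the usual definition in the deconvolution literature; the RSGWPT filter bank that the improved kurtogram combines it with is not reproduced here, and the synthetic signal parameters are illustrative only.

```python
# Minimal sketch of Correlated Kurtosis (CK), the cyclic-transient indicator
# referred to above. Not the authors' full improved kurtogram.
import numpy as np

def correlated_kurtosis(y, period, m_shift=1):
    """CK of M-th shift for a zero-mean signal y and a period in samples."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    n = len(y)
    prod = y.copy()
    for m in range(1, m_shift + 1):
        shifted = np.zeros(n)
        shifted[m * period:] = y[: n - m * period]
        prod = prod * shifted                      # product of delayed copies
    return np.sum(prod ** 2) / np.sum(y ** 2) ** (m_shift + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, fault_period = 20_000, 100                  # synthetic fault every 100 samples
    signal = np.zeros(n)
    signal[::fault_period] = 1.0                   # periodic impulse train
    noisy = signal + 0.1 * rng.standard_normal(n)
    print(correlated_kurtosis(noisy, period=fault_period, m_shift=2))
```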

  18. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox are often associated with important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in wavelet basis. With the proposed method, both the impulse time and the period of transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified by the simulated signals as well as the practical gearbox vibration signals. Comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.
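
    As an illustration of the sparse-representation idea above (not the authors' wavelet-basis formulation), the sketch below builds a dictionary of time-shifted damped-sinusoid atoms, a common model for gearbox transients, and uses scikit-learn's orthogonal matching pursuit to recover the impulse times of a synthetic signal; the atom frequency, damping, and number of nonzero coefficients are assumptions.

```python
# Illustrative only: sparse coding of transients over a dictionary of shifted
# damped-sinusoid atoms, solved with orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def transient_atom(length, freq, damping, fs):
    t = np.arange(length) / fs
    atom = np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    return atom / np.linalg.norm(atom)

def build_dictionary(n_samples, atom_len, freq, damping, fs):
    """Each column is the same atom shifted to a different start time."""
    D = np.zeros((n_samples, n_samples - atom_len))
    atom = transient_atom(atom_len, freq, damping, fs)
    for k in range(D.shape[1]):
        D[k:k + atom_len, k] = atom
    return D

if __name__ == "__main__":
    fs, n = 5_000, 1_000
    D = build_dictionary(n, atom_len=100, freq=800, damping=300, fs=fs)
    true_times = [150, 450, 750]                   # impulse times to recover
    signal = D[:, true_times].sum(axis=1)
    signal += 0.05 * np.random.default_rng(0).standard_normal(n)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(D, signal)
    print("estimated impulse times:", np.nonzero(omp.coef_)[0])
```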

  19. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because there are issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing recent related research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction techniques emerge as promising techniques for further analysis in the recent development of feature extraction and classification.

  20. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are one of the proper alternatives to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is firstly detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on 2D range images, but also their 3D features such as the curvature value and surface normal value in z axis, which are calculated directly based on the LiDAR point cloud. Subsequently the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in the 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.
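
    The candidate-detection step described above (Shi-Tomasi corners on LiDAR range images) is straightforward to reproduce with OpenCV. The sketch below is a hedged illustration of that step only; the neural-network filtering of candidates and the retrieval of points in the 3D scene are not shown, and the detector parameters are assumptions rather than the authors' settings.

```python
# Shi-Tomasi corner candidates on a LiDAR range image (candidate step only).
import numpy as np
import cv2

def corner_candidates(range_image, max_corners=200, quality=0.01, min_dist=5):
    """Return (row, col) candidates from a normalised 8-bit range image."""
    img = cv2.normalize(range_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    corners = cv2.goodFeaturesToTrack(img, maxCorners=max_corners,
                                      qualityLevel=quality, minDistance=min_dist)
    return [] if corners is None else [(int(y), int(x)) for [[x, y]] in corners]

if __name__ == "__main__":
    fake_range = np.random.default_rng(0).random((64, 512)).astype(np.float32)
    print(len(corner_candidates(fake_range)))
```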

  1. Autonomous Time-Frequency Cropping and Feature-Extraction Algorithms for Classification of LPI Radar Modulations

    DTIC Science & Technology

    2006-06-01

    Excerpt from the thesis "Autonomous Time-Frequency Cropping and Feature-Extraction Algorithms for Classification of LPI Radar Modulations" by Eric R. Zilberman, June 2006. The indexed excerpt describes nine low probability of intercept (LPI) radar signal modulations, among them FMCW, Frank, P1, P2, P3, P4, T1(n), and T2(n).

  2. Magnetic brain response mirrors extraction of phonological features from spoken vowels.

    PubMed

    Obleser, Jonas; Lahiri, Aditi; Eulitz, Carsten

    2004-01-01

    This study further elucidates determinants of vowel perception in the human auditory cortex. The vowel inventory of a given language can be classified on the basis of phonological features which are closely linked to acoustic properties. A cortical representation of speech sounds based on these phonological features might explain the surprisingly inverse correlation between immense variance in the acoustic signal and high accuracy of speech recognition. We investigated timing and mapping of the N100m elicited by 42 tokens of seven natural German vowels varying along the phonological features tongue height (corresponding to the frequency of the first formant) and place of articulation (corresponding to the frequency of the second and third formants). Auditory evoked fields were recorded using a 148-channel whole-head magnetometer while subjects performed target vowel detection tasks. Source location differences appeared to be driven by place of articulation: Vowels with mutually exclusive place of articulation features, namely, coronal and dorsal elicited separate centers of activation along the posterior-anterior axis. Additionally, the time course of activation as reflected in the N100m peak latency distinguished between vowel categories especially when the spatial distinctiveness of cortical activation was low. In sum, results suggest that both N100m latency and source location as well as their interaction reflect properties of speech stimuli that correspond to abstract phonological features.

  3. Spatio-temporal feature-extraction techniques for isolated gesture recognition in Arabic sign language.

    PubMed

    Shanableh, Tamer; Assaleh, Khaled; Al-Rousan, M

    2007-06-01

    This paper presents various spatio-temporal feature-extraction techniques with applications to online and offline recognition of isolated Arabic Sign Language gestures. The temporal features of a video-based gesture are extracted through forward, backward, and bidirectional predictions. The prediction errors are thresholded and accumulated into one image that represents the motion of the sequence. The motion representation is then followed by spatial-domain feature extraction. As such, the temporal dependencies are eliminated and the whole video sequence is represented by a few coefficients. The linear separability of the extracted features is assessed, and its suitability for both parametric and nonparametric classification techniques is elaborated upon. The proposed feature-extraction scheme was complemented by simple classification techniques, namely, K nearest neighbor (KNN) and Bayesian, i.e., likelihood ratio, classifiers. Experimental results showed classification performance ranging from 97% to 100% recognition rates. To validate our proposed technique, we have conducted a series of experiments using the classical way of classifying data with temporal dependencies, namely, hidden Markov models (HMMs). Experimental results revealed that the proposed feature-extraction scheme combined with simple KNN or Bayesian classification yields results comparable to the classical HMM-based scheme. Moreover, since the proposed scheme compresses the motion information of an image sequence into a single image, it allows for using simple classification techniques where the temporal dimension is eliminated. This is advantageous for both the computational and storage requirements of the classifier.
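
    The accumulation of thresholded prediction errors into a single motion image, as described above, can be prototyped with plain NumPy. The sketch below uses simple frame differencing as the forward prediction and an illustrative threshold; it is a hedged approximation of the idea, not the authors' predictor.

```python
# Accumulate thresholded forward-prediction errors into one motion image.
import numpy as np

def motion_representation(frames, threshold=15):
    """frames: array of shape (T, H, W) with grayscale values in [0, 255]."""
    frames = np.asarray(frames, dtype=float)
    accum = np.zeros(frames.shape[1:])
    for prev, curr in zip(frames[:-1], frames[1:]):
        error = np.abs(curr - prev)               # simple forward prediction error
        accum += (error > threshold)              # threshold and accumulate
    return accum / max(len(frames) - 1, 1)        # normalise to [0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.integers(0, 256, size=(30, 120, 160))   # placeholder gesture clip
    print(motion_representation(video).shape)
```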

  4. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2016-09-01

    The vibration signal contains a wealth of sensitive information that reflects the running status of the equipment. Properly decomposing the signal and extracting the effective information is one of the most important steps for precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from problems including mode mixing and low decomposition accuracy. To address these problems, the EAED (extreme average envelope decomposition) method is presented based on EMD. EAED has three advantages. First, it is completed through a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Second, in order to reduce the envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Third, the similar-triangle principle is utilized to calculate the time of extreme average points accurately, so the influence of the sampling frequency on the calculation results can be significantly reduced. Experimental results show that EAED can gradually separate single frequency components from a complex signal. EAED can isolate three kinds of typical bearing fault characteristic frequency components while requiring fewer decomposition layers. By replacing quadratic enveloping with a single envelope, EAED isolates the fault characteristic frequency with fewer decomposition layers, and the precision of signal decomposition is therefore improved.

  5. Is there a correlation between Japanese L2 learner's perception of English stressed words and acoustic features?

    NASA Astrophysics Data System (ADS)

    Asano, Keiko; Isei-Jakkola, Toshiko

    2003-10-01

    It is well known that Japanese learners have difficulty listening to unstressed words in English, but there are fewer data on their perception of stressed words. Thus, listening tests and acoustic experiments were conducted in terms of (1) the relevance of difficulty depending on part of speech and the learners' English proficiency, (2) the relationship between pitch and intensity of stressed words, and (3) whether there is a correlation between perception and the experimental data. In the listening test, an English prose passage read by an American male speaker was used. The 150 Japanese L2 learners were asked to mark the primary stressed words. The statistical results showed that there was variance depending on part of speech and, more markedly, that the comparative rating scores of correct words were highly correlated with the learners' English proficiency for every part of speech. In the acoustic experiments, pitch and intensity were measured. It was confirmed that (1) both F0 and dB carried cues to perceiving a stressed word but were not necessarily correlated, and (2) the relationship between F0 and dB could be compared only by relative movement. Further analysis of these acoustic data suggests that the prosodic combination of F0 and dB may be relevant to the correct ratios by part of speech.

  6. The effect of speaking context on spectral- and cepstral-based acoustic features of normal voice.

    PubMed

    Lowell, Soren Y; Hylkema, Jennifer A

    2016-01-01

    The effect of speaking context on four cepstral- and spectral-based acoustic measures was investigated in 20 participants with normal voice. Speakers produced three different continuous speaking tasks that varied in duration and phonemic content. Cepstral and spectral measures that can be validly derived from continuous speech were computed across the three speaking contexts. Cepstral peak prominence (CPP), low/high spectral ratio, and the standard deviation (SD) of the low/high spectral ratio did not significantly differ across speaking contexts, and correlations for the first two measures were strong among the three speaking tasks. The SD of the CPP showed significant task differences, and relationships between the speaking contexts were generally moderate. These findings suggest that in speakers with normal voice, the differing phonemic content across several frequently used speaking stimuli minimally impacted group means for three clinically relevant cepstral- and spectral-based acoustic measures.
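
    Of the measures named above, cepstral peak prominence (CPP) is the one most often re-implemented. The sketch below is a rough, hedged version for a single analysis frame: the peak of the real cepstrum within a plausible pitch range is measured against a linear regression trend; the windowing, pitch bounds, and regression span are assumptions and differ in detail from validated CPP implementations.

```python
# Rough sketch of cepstral peak prominence (CPP) for one analysis frame.
import numpy as np

def cpp(frame, fs, f0_min=60.0, f0_max=300.0):
    x = np.asarray(frame, float) * np.hanning(len(frame))
    log_mag = 20 * np.log10(np.abs(np.fft.fft(x)) + 1e-12)
    ceps = np.fft.ifft(log_mag).real              # real cepstrum
    quefrency = np.arange(len(ceps)) / fs         # quefrency axis in seconds
    band = slice(int(fs / f0_max), int(fs / f0_min))
    peak_idx = band.start + np.argmax(ceps[band])
    slope, intercept = np.polyfit(quefrency[band], ceps[band], 1)
    trend_at_peak = slope * quefrency[peak_idx] + intercept
    return ceps[peak_idx] - trend_at_peak         # prominence above the trend

if __name__ == "__main__":
    fs = 16_000
    t = np.arange(int(0.04 * fs)) / fs            # one 40 ms frame
    vowel_like = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)
    print(round(cpp(vowel_like, fs), 2))
```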

  7. Investigations of High Pressure Acoustic Waves in Resonators with Seal-like Features

    NASA Technical Reports Server (NTRS)

    Daniels, Christopher; Steinetz, Bruce; Finkbeiner, Joshua

    2003-01-01

    A conical resonator (having a dissonant acoustic design) was tested in four configurations: (1) baseline resonator with closed ends and no blockage, (2) closed resonator with internal blockage, (3) ventilated resonator with no blockage, and (4) ventilated resonator with an applied pressure differential. These tests were conducted to investigate the effects of blockage and ventilation holes on dynamic pressurization. Additionally, the investigation was to determine the ability of acoustic pressurization to impede flow through the resonator. In each of the configurations studied, the entire resonator was oscillated at the gas resonant frequency while dynamic pressure, static pressure, and temperature of the fluid were measured. In the final configuration, flow through the resonator was recorded for three oscillation conditions. Ambient condition air was used as the working fluid.

  8. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification.

    PubMed

    Baali, Hamza; Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J E

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%.
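
    A hedged sketch of the LP-SVD construction described above: linear-prediction coefficients are estimated from an EEG segment via the autocorrelation method, the impulse response of the resulting all-pole filter is arranged into a Toeplitz (convolution) matrix, and the leading left singular vectors of that matrix project the segment onto a low-dimensional feature vector. The model order, number of retained vectors, and segment handling are illustrative assumptions, not the paper's settings.

```python
# Illustrative LP-SVD style feature extraction for one EEG segment.
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz
from scipy.signal import lfilter

def lp_coefficients(x, order=10):
    """Autocorrelation-method LP coefficients a_1..a_p of the segment x."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

def lp_svd_features(x, order=10, n_features=8):
    a = lp_coefficients(x, order)
    impulse = np.eye(1, len(x)).ravel()            # unit impulse
    h = lfilter([1.0], np.concatenate(([1.0], -a)), impulse)  # 1 / A(z) response
    H = toeplitz(h, np.zeros(len(x)))              # impulse response matrix
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :n_features].T @ x                 # project onto left singular vectors

if __name__ == "__main__":
    eeg_segment = np.random.default_rng(0).standard_normal(256)
    print(lp_svd_features(eeg_segment).shape)
```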

  9. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important element but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction technology, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has the advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally by applying the acquired vibration signals data to bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017

  10. The vocal repertoire of the domesticated zebra finch: a data-driven approach to decipher the information-bearing acoustic features of communication signals.

    PubMed

    Elie, Julie E; Theunissen, Frédéric E

    2016-03-01

    Although a universal code for the acoustic features of animal vocal communication calls may not exist, the thorough analysis of the distinctive acoustical features of vocalization categories is important not only to decipher the acoustical code for a specific species but also to understand the evolution of communication signals and the mechanisms used to produce and understand them. Here, we recorded more than 8000 examples of almost all the vocalizations of the domesticated zebra finch, Taeniopygia guttata: vocalizations produced to establish contact, to form and maintain pair bonds, to sound an alarm, to communicate distress or to advertise hunger or aggressive intents. We characterized each vocalization type using complete representations that avoided any a priori assumptions on the acoustic code, as well as classical bioacoustics measures that could provide more intuitive interpretations. We then used these acoustical features to rigorously determine the potential information-bearing acoustical features for each vocalization type using both a novel regularized classifier and an unsupervised clustering algorithm. Vocalization categories are discriminated by the shape of their frequency spectrum and by their pitch saliency (noisy to tonal vocalizations) but not particularly by their fundamental frequency. Notably, the spectral shape of zebra finch vocalizations contains peaks or formants that vary systematically across categories and that would be generated by active control of both the vocal organ (source) and the upper vocal tract (filter).

  11. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

    This article presents a novel method for diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. Application of pattern classification and feature selection and reduction methods in analysing normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted. These include carefully selected linear and nonlinear time domain, wavelet and entropy features. By examining different feature selection and feature reduction methods such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.

  12. Simulation study and guidelines to generate Laser-induced Surface Acoustic Waves for human skin feature detection

    NASA Astrophysics Data System (ADS)

    Li, Tingting; Fu, Xing; Chen, Kun; Dorantes-Gonzalez, Dante J.; Li, Yanning; Wu, Sen; Hu, Xiaotang

    2015-12-01

    Despite the seriously increasing number of people contracting skin cancer every year, limited attention has been given to the investigation of human skin tissues. In this regard, Laser-induced Surface Acoustic Wave (LSAW) technology, with its accurate, non-invasive and rapid testing characteristics, has recently shown promising results in biological and biomedical tissues. In order to improve the measurement accuracy and efficiency of detecting important features in highly opaque and soft surfaces such as human skin, this paper identifies the most important parameters of a pulse laser source and provides practical guidelines on the recommended parameter ranges for generating Surface Acoustic Waves (SAWs) for characterization purposes. Considering that melanoma is a serious type of skin cancer, we conducted finite element simulation-based research on the generation and propagation of surface waves in human skin containing a melanoma-like feature, and determined the best ranges of variation for the pulse laser parameters, the simulation mesh size and time step, the working bandwidth, and the minimal size of detectable melanoma.

  13. Feature Extraction on Brain Computer Interfaces using Discrete Dyadic Wavelet Transform: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Gareis, I.; Gentiletti, G.; Acevedo, R.; Rufiner, L.

    2011-09-01

    The purpose of this work is to evaluate different feature extraction alternatives to detect the event-related evoked potential signal in brain computer interfaces, trying to minimize the time employed and the classification error, in terms of sensitivity and specificity of the method, looking for alternatives to coherent averaging. In this context, the results obtained performing the feature extraction using the discrete dyadic wavelet transform with different mother wavelets are presented. For the classification, a single layer perceptron was used. The results obtained with and without the wavelet decomposition were compared, showing an improvement in the classification rate, the specificity and the sensitivity for the feature vectors obtained using some mother wavelets.
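
    A minimal sketch of the kind of wavelet feature extraction described above, using PyWavelets: one EEG epoch is decomposed with a discrete dyadic wavelet transform and summarized by per-subband log energies. The wavelet, decomposition level, and the use of subband energies (rather than raw coefficients) as the feature vector are assumptions for illustration; the paper compares several mother wavelets.

```python
# Dyadic wavelet decomposition of one EEG epoch with per-subband log energies.
import numpy as np
import pywt

def dwt_features(epoch, wavelet="db4", level=4):
    """Return log-energy of the approximation and detail subbands of one epoch."""
    coeffs = pywt.wavedec(np.asarray(epoch, float), wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

if __name__ == "__main__":
    fs = 256
    t = np.arange(fs) / fs                         # a 1 s synthetic epoch
    epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(fs)
    print(dwt_features(epoch))
```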

  14. Extraction of Lesion-Partitioned Features and Retrieval of Contrast-Enhanced Liver Images

    PubMed Central

    Yu, Mei; Feng, Qianjin; Yang, Wei; Gao, Yang; Chen, Wufan

    2012-01-01

    The most critical step in grayscale medical image retrieval systems is feature extraction. Understanding the interrelatedness between the characteristics of lesion images and corresponding imaging features is crucial for image training, as well as for feature extraction. A feature-extraction algorithm is developed based on different imaging properties of lesions and on the discrepancy in density between the lesions and their surrounding normal liver tissues in triple-phase contrast-enhanced computed tomographic (CT) scans. The algorithm includes mainly two processes: (1) distance transformation, which is used to divide the lesion into distinct regions and represents the spatial structure distribution, and (2) representation using a bag of visual words (BoW) based on regions. The evaluation of this system based on the proposed feature extraction algorithm shows excellent retrieval results for three types of liver lesions visible on triple-phase CT scan images. The results of the proposed feature extraction algorithm show that while single-phase scans achieve average precisions of 81.9%, 80.8%, and 70.2%, dual- and triple-phase scans achieve 86.3% and 88.0%. PMID:22988480

  15. Classification of mammographic masses: influence of regions used for feature extraction on the classification performance

    NASA Astrophysics Data System (ADS)

    Wagner, Florian; Wittenberg, Thomas; Elter, Matthias

    2010-03-01

    Computer-assisted diagnosis (CADx) for the characterization of mammographic masses as benign or malignant has a very high potential to help radiologists during the critical process of diagnostic decision making. By default, the characterization of mammographic masses is performed by extracting features from a region of interest (ROI) depicting the mass. To investigate the influence of the region on the classification performance, textural, morphological, frequency-based as well as moment-based features are calculated in subregions of the ROI, which has been delineated manually by an expert. The investigated subregions are (a) the semi-automatically segmented area which includes only the core of the mass, (b) the outer border region of the mass, and (c) the combination of the outer and the inner border region, referred to as the mass margin. To extract the border region and the margin of a mass, an extended version of the rubber band straightening transform (RBST) was developed. Furthermore, the effectiveness of the features extracted from the RBST-transformed border region and mass margin is compared to the effectiveness of the same features extracted from the untransformed regions. After the feature extraction process, a preferably optimal feature subset is selected for each feature extractor. Classification is done using a k-NN classifier. The classification performance was evaluated using the area Az under the receiver operating characteristic curve. A publicly available mammography database was used as the data set. Results showed that the manually drawn ROI leads to superior classification performance for the morphological feature extractors and that the transformed outer border region and the mass margin are not suitable for moment-based features but yield promising results for textural and frequency-based features. Beyond that, the mass margin, which combines the inner and the outer border region, leads to better classification performance compared to the outer border region alone.

  16. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    PubMed Central

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160

  17. A neuro-fuzzy system for extracting environment features based on ultrasonic sensors.

    PubMed

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case.

  18. Uncertainty analysis of quantitative imaging features extracted from contrast-enhanced CT in lung tumors

    PubMed Central

    Yang, Jinzhong; Zhang, Lifei; Fave, Xenia J.; Fried, David V.; Stingo, Francesco C.; Ng, Chaan S.; Court, Laurence E.

    2016-01-01

    Purpose To assess the uncertainty of quantitative imaging features extracted from contrast-enhanced computed tomography (CT) scans of lung cancer patients in terms of the dependency on the time after contrast injection and the feature reproducibility between scans. Methods Eight patients underwent contrast-enhanced CT scans of lung tumors on two sessions 2–7 days apart. Each session included 6 CT scans of the same anatomy taken every 15 seconds, starting 50 seconds after contrast injection. Image features based on intensity histogram, co-occurrence matrix, neighborhood gray-tone difference matrix, run-length matrix, and geometric shape were extracted from the tumor for each scan. Spearman’s correlation was used to examine the dependency of features on the time after contrast injection, with values over 0.50 considered time-dependent. Concordance correlation coefficients were calculated to examine the reproducibility of each feature between times of scans after contrast injection and between scanning sessions, with values greater than 0.90 considered reproducible. Results The features were found to have little dependency on the time between the contrast injection and the CT scan. Most features were reproducible between times of scans after contrast injection and between scanning sessions. Some features were more reproducible when they were extracted from a CT scan performed at a longer time after contrast injection. Conclusion The quantitative imaging features tested here are mostly reproducible and show little dependency on the time after contrast injection. PMID:26745258
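
    The two statistics used above are easy to compute directly: Spearman's correlation for time dependency and Lin's concordance correlation coefficient (CCC) for between-scan reproducibility. The sketch below applies the 0.50 and 0.90 cut-offs quoted in the abstract to toy data; the data values themselves are placeholders.

```python
# Time dependency (Spearman) and reproducibility (Lin's CCC) of an imaging feature.
import numpy as np
from scipy.stats import spearmanr

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feature_session1 = rng.normal(10, 1, size=8)           # e.g. 8 patients
    feature_session2 = feature_session1 + rng.normal(0, 0.2, size=8)
    times = np.arange(50, 140, 15)                          # 6 scans, 50 s .. 125 s
    feature_by_time = 10 + 0.001 * times + rng.normal(0, 0.1, size=len(times))
    rho, _ = spearmanr(times, feature_by_time)
    print("time dependent (|rho| > 0.50)?", abs(rho) > 0.50)
    print("reproducible (CCC > 0.90)?",
          concordance_ccc(feature_session1, feature_session2) > 0.90)
```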

  19. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients model (RPC) was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. Ortho-rectification was then carried out in ERDAS LPS by incorporating well-distributed GCPs; the root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. In the next stage, we generated the bare earth DEM from the LiDAR point cloud data. In most cases, a bare earth DEM does not represent true ground elevation. Hence, the model was edited to get the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.
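
    The normalization step at the end of the abstract reduces to a per-cell subtraction. The sketch below shows that step only, with small negative residuals clipped to zero; the toy DSM/DEM grids are placeholders.

```python
# nDSM / CHM computation: per-cell difference between surface and terrain models.
import numpy as np

def canopy_height_model(dsm, dem, min_height=0.0):
    """nDSM / CHM = DSM - DEM, with negative residuals clipped to min_height."""
    chm = np.asarray(dsm, float) - np.asarray(dem, float)
    return np.clip(chm, min_height, None)

if __name__ == "__main__":
    dem = np.full((4, 4), 12.0)                   # flat terrain at 12 m
    dsm = dem + np.array([[0, 0, 5, 5],
                          [0, 0, 5, 5],
                          [0, 0, 0, 0],
                          [9, 9, 0, 0]], float)   # a tree block and a building block
    print(canopy_height_model(dsm, dem))
```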

  20. Features of Propagation of the Acoustic-Gravity Waves Generated by High-Power Periodic Radiation

    NASA Astrophysics Data System (ADS)

    Chernogor, L. F.; Frolov, V. L.

    2013-09-01

    We present the results of the bandpass filtering of temporal variations of the Doppler frequency shift of radio signals from a vertical-sounding Doppler radar located near the city of Kharkov when the ionosphere was heated by high-power periodic (10- and 15-min period) radiation from the Sura facility. The filtering was done in ranges of periods close to the acoustic cutoff period and the Brunt-Väisälä period (4-6, 8-12, and 13-17 min). Oscillations with periods of 4-6 min and amplitudes of 50-100 mHz were, in fact, not recorded. Oscillations with periods of 8-12 and 13-17 min and amplitudes of 60-100 mHz were detected in almost all the sessions. In the former and the latter oscillations, the time of delay with respect to the heater switch-on was close to 100 min and about 40-50 min, respectively. These values correspond to group propagation velocities of about 160 and 320-400 m/s. The Doppler shift oscillations were caused by acoustic-gravity waves which led to periodic variations in the electron number density with a relative amplitude of about 0.1-1.0%. It was demonstrated that the acoustic-gravity waves were not recorded when the effective power of the Sura facility was equal to 50 MW and were confidently observed when the effective power was increased to 130 MW. It is shown that the period of the wave processes was determined by the period of the heating-pause cycles, and the duration of the wave trains did not depend on the duration of the series of heating-pause cycles. The data suggest that the generation mechanism of the recorded wave disturbances is different from the mechanism proposed in 1970-1990.

  1. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction

    PubMed Central

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-01-01

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction of time domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time domain signal. Feature vectors are obtained from morphologic features of time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracies, with a lower computation time than traditional methods, i.e., a recognition accuracy of 97.8% can be achieved with a recognition time of below 1 s, making it very suitable for Φ-OTDR system online vibration monitoring. PMID:26131671

  2. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction.

    PubMed

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-06-29

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction of time domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time domain signal. Feature vectors are obtained from morphologic features of time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracies, with a lower computation time than traditional methods, i.e., a recognition accuracy of 97.8% can be achieved with a recognition time of below 1 s, making it very suitable for Φ-OTDR system online vibration monitoring.
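
    The abstract states that a scatter matrix is computed for feature selection but does not spell out the criterion. One common realisation, sketched below as an assumption rather than the authors' formulation, ranks features by the ratio of between-class to within-class scatter computed per feature.

```python
# Fisher-type feature ranking from between-class and within-class scatter.
import numpy as np

def scatter_scores(X, y):
    """X: (n_samples, n_features); y: integer class labels. Higher = more separable."""
    X, y = np.asarray(X, float), np.asarray(y)
    overall_mean = X.mean(axis=0)
    s_between = np.zeros(X.shape[1])
    s_within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        s_between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        s_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return s_between / (s_within + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 6))
    y = rng.integers(0, 3, 200)
    X[:, 0] += y * 2.0                             # make feature 0 discriminative
    print(np.argsort(scatter_scores(X, y))[::-1])  # feature indices, best first
```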

  3. Feature extraction from terahertz pulses for classification of RNA data via support vector machines

    NASA Astrophysics Data System (ADS)

    Yin, Xiaoxia; Ng, Brian W.-H.; Fischer, Bernd; Ferguson, Bradley; Mickan, Samuel P.; Abbott, Derek

    2006-12-01

    This study investigates binary and multi-class classification via support vector machines (SVMs). Groups of two-dimensional features are extracted via frequency orientation components, which result in the effective classification of terahertz (T-ray) pulses for discrimination of RNA data and various powder samples. For each classification task, a pair of extracted feature vectors from the terahertz signals corresponding to each class is viewed as two coordinates and plotted in the same coordinate system. The current classification method extracts specific features from the Fourier spectrum, without applying an extra feature extractor. This method shows that SVMs can employ conventional feature extraction methods for a T-ray classification task. Moreover, we discuss the challenges faced by this method. A pairwise classification method is applied for the multi-class classification of powder samples. Plots of learning vectors assist in understanding the classification task and exhibit improved clustering, clear learning margins, and the fewest support vectors. This paper highlights the ability to use a small number of features (2D features) for classification by analyzing the frequency spectrum, which greatly reduces the computational complexity in achieving the preferred classification performance.

  4. Some acoustic features of nasal and nasalized vowels: a target for vowel nasalization.

    PubMed

    Feng, G; Castelli, E

    1996-06-01

    In order to characterize acoustic properties of nasal and nasalized vowels, these sounds will be considered as a dynamic trend from an oral configuration toward an [n]-like configuration. The latter can be viewed as a target for vowel nasalization. This target corresponds to the pharyngonasal tract and it can be modeled, with some simplifications, by a single tract without any parallel paths. Thus the first two resonance frequencies (at about 300 and 1000 Hz) characterize this target well. A series of measurements has been carried out in order to describe the acoustic characteristics of the target. Measured transfer functions confirm the resonator nature of the low-frequency peak. The introduction of such a target allows the conception of the nasal vowels as a trend beginning with a simple configuration, which is terminated in the same manner, so allowing the complex nasal phenomena to be bounded. A complete study of pole-zero evolutions for the nasalization of the 11 French vowels is presented. It allows the proposition of a common strategy for the nasalization of all vowels, so a true nasal vowel can be placed in this nasalization frame. The measured transfer functions for several French nasal vowels are also given.

  5. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address this problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.

  6. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in the improvement of hyperspectral image classification. Nonparametric feature extraction methods show better performance than parametric ones when the distribution of classes is not normal-like. Moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for classification of hyperspectral images. The proposed method has no free parameter, and its novelty can be discussed in two parts. First, neighbor samples are specified by using the Parzen window idea for determining the local mean. Second, two new weighting functions are used: samples close to class boundaries have more weight in the formation of the between-class scatter matrix, and samples close to the class mean have more weight in the formation of the within-class scatter matrix. The experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method performs better than some other nonparametric and parametric feature extraction methods.

  7. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate usage of each input pixel for the feature-construction process avoids the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.

  8. A Novel Feature Selection Strategy for Enhanced Biomedical Event Extraction Using the Turku System

    PubMed Central

    Xia, Jingbo; Fang, Alex Chengyu; Zhang, Xing

    2014-01-01

    Feature selection is of paramount importance for text-mining classifiers with high-dimensional features. The Turku Event Extraction System (TEES) is the best performing tool in the GENIA BioNLP 2009/2011 shared tasks, which relies heavily on high-dimensional features. This paper describes research which, based on an implementation of an accumulated effect evaluation (AEE) algorithm applying the greedy search strategy, analyses the contribution of every single feature class in TEES with a view to identify important features and modify the feature set accordingly. With an updated feature set, a new system is acquired with enhanced performance which achieves an increased F-score of 53.27% up from 51.21% for Task 1 under strict evaluation criteria and 57.24% according to the approximate span and recursive criterion. PMID:24800214

  9. A novel feature selection strategy for enhanced biomedical event extraction using the Turku system.

    PubMed

    Xia, Jingbo; Fang, Alex Chengyu; Zhang, Xing

    2014-01-01

    Feature selection is of paramount importance for text-mining classifiers with high-dimensional features. The Turku Event Extraction System (TEES) is the best performing tool in the GENIA BioNLP 2009/2011 shared tasks, which relies heavily on high-dimensional features. This paper describes research which, based on an implementation of an accumulated effect evaluation (AEE) algorithm applying the greedy search strategy, analyses the contribution of every single feature class in TEES with a view to identify important features and modify the feature set accordingly. With an updated feature set, a new system is acquired with enhanced performance which achieves an increased F-score of 53.27% up from 51.21% for Task 1 under strict evaluation criteria and 57.24% according to the approximate span and recursive criterion.

  10. Features of CO2 fracturing deduced from acoustic emission and microscopy in laboratory experiments

    NASA Astrophysics Data System (ADS)

    Ishida, Tsuyoshi; Chen, Youqing; Bennour, Ziad; Yamashita, Hiroto; Inui, Shuhei; Nagaya, Yuya; Naoi, Makoto; Chen, Qu; Nakayama, Yoshiki; Nagano, Yu

    2016-11-01

    We conducted hydraulic fracturing (HF) experiments on 170 mm cubic granite specimens with a 20 mm diameter central hole to investigate how fluid viscosity affects HF process and crack properties. In experiments using supercritical carbon dioxide (SC-CO2), liquid carbon dioxide (L-CO2), water, and viscous oil with viscosity of 0.051-336.6 mPa · s, we compared the results for breakdown pressure, the distribution and fracturing mechanism of acoustic emission, and the microstructure of induced cracks revealed by using an acrylic resin containing a fluorescent compound. Fracturing with low-viscosity fluid induced three-dimensionally sinuous cracks with many secondary branches, which seem to be desirable pathways for enhanced geothermal system, shale gas recovery, and other processes.

  11. Nonlinear ion-acoustic double-layers in electronegative plasmas with electrons featuring Tsallis distribution

    NASA Astrophysics Data System (ADS)

    Ghebache, Siham; Tribeche, Mouloud

    2016-04-01

    Weakly nonlinear ion-acoustic (IA) double-layers (DLs), which accompany electronegative plasmas composed of positive ions, negative ions, and nonextensive electrons are investigated. A generalized Korteweg-de Vries equation with a cubic nonlinearity is derived using a reductive perturbation method. Different types of electronegative plasmas inspired from the experimental studies of Ichiki et al. (2001) are discussed. It is shown that the IA wave phase velocity, in different mixtures of negative and positive ions, decreases as the nonextensive parameter q increases, before levelling-off at a constant value for larger q. Moreover, a relative increase of Q involves an enhancement of the IA phase velocity. Existence domains of either solitary waves or double-layers are then presented and their parametric dependence is determined. Owing to the electron nonextensivity, our present plasma model can admit compressive as well as rarefactive IA-DLs.

  12. Nonlinear features of ion acoustic shock waves in dissipative magnetized dusty plasma

    NASA Astrophysics Data System (ADS)

    Sahu, Biswajit; Sinha, Anjana; Roychoudhury, Rajkumar

    2014-10-01

    The nonlinear propagation of small as well as arbitrary amplitude shocks is investigated in a magnetized dusty plasma consisting of inertia-less Boltzmann distributed electrons, inertial viscous cold ions, and stationary dust grains without dust-charge fluctuations. The effects of dissipation, due to the viscosity of ions and the external magnetic field, on the properties of the ion acoustic shock structure are investigated. It is found that for small amplitude waves, the Korteweg-de Vries-Burgers (KdVB) equation, derived using the Reductive Perturbation Method, gives the qualitative behaviour of the transition from oscillatory wave to shock structure. The exact numerical solution for arbitrary amplitude waves differs somewhat in detail from the results obtained from the KdVB equation. However, the qualitative nature of the two solutions is similar in the sense that a gradual transition from KdV oscillation to shock structure is observed with increasing dissipative parameter.

  13. [Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].

    PubMed

    Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao

    2014-05-01

    Hyperspectral data are characterized by the combination of image and spectrum and by large data volume; dimension reduction is the main research direction, and band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that the fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characterized scale are influenced by the spectral curve shape, the initial scale selection and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.

  14. Feature extraction of the first difference of EMG time series for EMG pattern recognition.

    PubMed

    Phinyomark, Angkoon; Quaine, Franck; Charbonnier, Sylvie; Serviere, Christine; Tarpin-Bernard, Franck; Laurillau, Yann

    2014-11-01

    This paper demonstrates the utility of a differencing technique to transform surface EMG signals measured during both static and dynamic contractions such that they become more stationary. The technique was evaluated by three stationarity tests consisting of the variation of two statistical properties, i.e., mean and standard deviation, and the reverse arrangements test. As a result of the proposed technique, the first difference of EMG time series became more stationary compared to the original measured signal. Based on this finding, the performance of time-domain features extracted from raw and transformed EMG was investigated via an EMG classification problem (i.e., eight dynamic motions and four EMG channels) on data from 18 subjects. The results show that the classification accuracies of all features extracted from the transformed signals were higher than features extracted from the original signals for six different classifiers including quadratic discriminant analysis. On average, the proposed differencing technique improved classification accuracies by 2-8%.
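
    The differencing transform discussed above is a one-liner, and typical time-domain features (mean absolute value, waveform length, zero crossings, and similar) are equally short. The sketch below is a hedged illustration of computing a few such features on the first difference of an EMG channel; the zero-crossing threshold and the synthetic signal are assumptions.

```python
# First difference of an EMG channel plus a few common time-domain features.
import numpy as np

def first_difference(emg):
    return np.diff(np.asarray(emg, float))

def time_domain_features(x, zc_threshold=1e-3):
    x = np.asarray(x, float)
    mav = np.mean(np.abs(x))                                  # mean absolute value
    wl = np.sum(np.abs(np.diff(x)))                           # waveform length
    signs = np.sign(x)
    zc = np.sum((signs[:-1] * signs[1:] < 0) &
                (np.abs(np.diff(x)) > zc_threshold))          # zero crossings
    return np.array([mav, wl, zc])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emg = rng.standard_normal(2000) * np.sin(np.linspace(0, np.pi, 2000))
    print(time_domain_features(first_difference(emg)))
```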

  15. Focal-plane CMOS wavelet feature extraction for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Olyaei, Ashkan; Genov, Roman

    2005-09-01

    Kernel-based pattern recognition paradigms such as support vector machines (SVM) require computationally intensive feature extraction methods for high-performance real-time object detection in video. The CMOS sensory parallel processor architecture presented here computes delta-sigma (ΔΣ)-modulated Haar wavelet transform on the focal plane in real time. The active pixel array is integrated with a bank of column-parallel first-order incremental oversampling analog-to-digital converters (ADCs). Each ADC performs distributed spatial focal-plane sampling and concurrent weighted average quantization. The architecture is benchmarked in SVM face detection on the MIT CBCL data set. At 90% detection rate, first-level Haar wavelet feature extraction yields a 7.9% reduction in the number of false positives when compared to classification with no feature extraction. The architecture yields 1.4 GMACS simulated computational throughput at SVGA imager resolution at 8-bit output depth.

  16. Sparse and low-rank feature extraction for the classification of target's tracking capability

    NASA Astrophysics Data System (ADS)

    Rasti, Behnood; Gudmundsson, Karl S.

    2016-09-01

    A feature extraction-based classification method is proposed in this paper for verifying the target-tracking capability of the human neck. Here, the target moves in predefined trajectory patterns at three difficulty levels. The dataset for each pattern is obtained from two groups of people, one with whiplash-associated disorder (WAD) and one asymptomatic, who behave in both a sincere and a feigned manner. The aim is to distinguish the WAD group from the asymptomatic one and also to discriminate sincere behavior from feigned behavior. Sparse and low-rank feature extraction is proposed to extract the most informative features from training samples, and each sample is then classified into the group with which it has the highest correlation coefficient. The classification results are improved by fusing the results of the three patterns.

  17. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. Results demonstrate the efficiency and feasibility of the proposed method for extraction of road features for HADMs.

  18. Real-time face detection and lip feature extraction using field-programmable gate arrays.

    PubMed

    Nguyen, Duy; Halupka, David; Aarabi, Parham; Sheikholeslami, Ali

    2006-08-01

    This paper proposes a new technique for face detection and lip feature extraction. A real-time field-programmable gate array (FPGA) implementation of the two proposed techniques is also presented. Face detection is based on a naive Bayes classifier that classifies an edge-extracted representation of an image. Using edge representation significantly reduces the model's size to only 5184 B, which is 2417 times smaller than a comparable statistical modeling technique, while achieving an 86.6% correct detection rate under various lighting conditions. Lip feature extraction uses the contrast around the lip contour to extract the height and width of the mouth, metrics that are useful for speech filtering. The proposed FPGA system occupies only 15050 logic cells, or about six times less than a current comparable FPGA face detection system.

  19. Features of the Acoustic Mechanism of Core-Collapse Supernova Explosions

    NASA Astrophysics Data System (ADS)

    Burrows, A.; Livne, E.; Dessart, L.; Ott, C. D.; Murphy, J.

    2007-01-01

    In the context of 2D, axisymmetric, multigroup, radiation/hydrodynamic simulations of core-collapse supernovae over the full 180° domain, we present an exploration of the progenitor dependence of the acoustic mechanism of explosion. All progenitor models we have tested with our Newtonian code explode. However, some of the cores left behind in our simulations, particularly for the more massive progenitors, have baryon masses that are larger than the canonical ~1.5 Msolar of well-measured pulsars. We investigate the roles of the standing accretion shock instability (SASI), the excitation of core g-modes, the generation of core acoustic power, the ejection of matter with r-process potential, the windlike character of the explosion, and the fundamental anisotropy of the blasts. We find that the breaking of spherical symmetry is central to the supernova phenomenon, the delays to explosion can be long, and the blasts, when top-bottom asymmetric, are self-collimating. We see indications that the initial explosion energies are larger for the more massive progenitors and smaller for the less massive progenitors and that the neutrino contribution to the explosion energy may be an increasing function of progenitor mass. However, the explosion energy is still accumulating by the end of our simulations and has not converged to final values. The degree of explosion asymmetry we obtain is completely consistent with that inferred from the polarization measurements of Type Ic supernovae. Furthermore, we calculate for the first time the magnitude and sign of the net impulse on the core due to anisotropic neutrino emission and suggest that hydrodynamic and neutrino recoils in the context of our asymmetric explosions afford a natural mechanism for observed pulsar proper motions.

  20. Hybrid facial image feature extraction and recognition for non-invasive chronic fatigue syndrome diagnosis.

    PubMed

    Chen, Yunhua; Liu, Weijian; Zhang, Ling; Yan, Mingyu; Zeng, Yanjun

    2015-09-01

    Due to the absence of reliable biochemical markers, the diagnosis of chronic fatigue syndrome (CFS) currently relies mainly on clinical symptoms and on the experience and skill of the doctor. To improve objectivity and reduce work intensity, a hybrid facial feature is proposed. First, several kinds of appearance features are identified in different facial regions according to clinical observations of traditional Chinese medicine experts, including vertical striped wrinkles on the forehead, puffiness of the lower eyelid, the skin colour of the cheeks, nose and lips, and the shape of the mouth corner. Afterwards, such features are extracted and systematically combined to form a hybrid feature. We divide the face into several regions based on twelve active appearance model (AAM) feature points and ten straight lines across them. Then, Gabor wavelet filtering, CIELab color components, threshold-based segmentation and curve fitting are applied to extract features, and the Gabor features are reduced by a manifold preserving projection method. Finally, an AdaBoost-based score-level fusion of the multi-modal features is performed after classification of each feature. Although the subjects involved in this trial are exclusively Chinese, the method achieves an average accuracy of 89.04% on the training set and 88.32% on the testing set based on K-fold cross-validation. In addition, the method also possesses desirable sensitivity and specificity for CFS prediction.

  1. Automation of lidar-based hydrologic feature extraction workflows using GIS

    NASA Astrophysics Data System (ADS)

    Borlongan, Noel Jerome B.; de la Cruz, Roel M.; Olfindo, Nestor T.; Perez, Anjillyn Mae C.

    2016-10-01

    With the advent of LiDAR technology, higher resolution datasets have become available for use in different remote sensing and GIS applications. One significant application of LiDAR datasets in the Philippines is resource feature extraction. Feature extraction using LiDAR datasets requires complex and repetitive workflows, which can take researchers a lot of time through manual execution and supervision. The Development of the Philippine Hydrologic Dataset for Watersheds from LiDAR Surveys (PHD), a project under the Nationwide Detailed Resources Assessment Using LiDAR (Phil-LiDAR 2) program, created a set of scripts, the PHD Toolkit, to automate the processes and workflows necessary for hydrologic feature extraction from LiDAR datasets, specifically Streams and Drainages, Irrigation Network, and Inland Wetlands. These scripts are created in Python and can be added to the ArcGIS® environment as a toolbox. The toolkit is currently being used as an aid for researchers in hydrologic feature extraction by simplifying the workflows, eliminating human errors when providing the inputs, and providing quick and easy-to-use tools for repetitive tasks. This paper discusses the actual implementation of the different workflows developed by Phil-LiDAR 2 Project 4 for Streams, Irrigation Network and Inland Wetlands extraction.

  2. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898

  3. Robust Feature Extraction Using Variable Window Function in Autocorrelation Domain for Speech Recognition

    NASA Astrophysics Data System (ADS)

    Lee, Sangho; Ha, Jeonghyun; Hong, Jaekeun

    This paper presents a new feature extraction method for robust speech recognition based on the autocorrelation mel frequency cepstral coefficients (AMFCCs) and a variable window. While the AMFCC feature extraction method uses the fixed double-dynamic-range (DDR) Hamming window for higher-lag autocorrelation coefficients, which are least affected by noise, the proposed method applies a variable window, depending on the frame energy and periodicity. The performance of the proposed method is verified using an Aurora-2 task, and the results confirm a significantly improved performance under noisy conditions.

  4. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research), phase 2, option 2

    NASA Astrophysics Data System (ADS)

    Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.

    1988-12-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  5. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  6. Automatic Extraction and Coordination of Audit Data and Features for Intrusion and Damage Assessment

    DTIC Science & Technology

    2006-03-31

    Final project report, 01/01/03-03/31/06. Title: Automatic Extraction and Coordination of Audit Data and Features for Intrusion and Damage Assessment; grant number F49620-03-1-0109; author: Dr. Nong Ye. ...assessing the damage through automatic extraction and coordination of audit data and features for intrusion and damage assessment. In this project, we

  7. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Astrophysics Data System (ADS)

    Li, Jian

    1994-09-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  8. Transient signal analysis based on Levenberg-Marquardt method for fault feature extraction of rotating machines

    NASA Astrophysics Data System (ADS)

    Wang, Shibin; Cai, Gaigai; Zhu, Zhongkui; Huang, Weiguo; Zhang, Xingwu

    2015-03-01

    Localized faults in rotating machines tend to result in shocks and thus excite transient components in vibration signals. An iterative extraction method is proposed for transient signal analysis based on transient modeling and parameter identification through Levenberg-Marquardt (LM) method, and eventually for fault feature extraction. For each iteration, a double-side asymmetric transient model is firstly built based on parametric Morlet wavelet, and then the LM method is introduced to identify the parameters of the model. With the implementation of the iterative procedure, transients are extracted from vibration signals one by one, and Wigner-Ville Distribution is applied to obtain time-frequency representation with satisfactory energy concentration but without cross-term. A simulation signal is used to test the performance of the proposed method in transient extraction, and the comparison study shows that the proposed method outperforms ensemble empirical mode decomposition and spectral kurtosis in extracting transient feature. Finally, the effectiveness of the proposed method is verified by the applications in transient analysis for bearing and gear fault feature extraction.
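
    To illustrate the parameter-identification step, the following sketch fits a simple Gaussian-windowed (Morlet-like) transient to a noisy simulated signal with the Levenberg-Marquardt algorithm, using scipy.optimize.curve_fit, which defaults to LM for unbounded problems. The model form, parameter values and noise level are assumptions for illustration; the paper's double-sided asymmetric model and full iterative scheme are only hinted at in the final comment.

```python
import numpy as np
from scipy.optimize import curve_fit

def transient(t, A, t0, f, sigma, phi):
    """Gaussian-windowed oscillation: a simple stand-in for a parametric
    Morlet-wavelet transient model (an assumption, not the authors' exact model)."""
    return A * np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2)) * np.cos(2 * np.pi * f * (t - t0) + phi)

fs = 5000.0
t = np.arange(0.0, 0.2, 1.0 / fs)
clean = transient(t, A=1.0, t0=0.08, f=120.0, sigma=0.01, phi=0.3)
signal = clean + 0.1 * np.random.default_rng(1).standard_normal(t.size)

# Levenberg-Marquardt identification of the model parameters
p0 = [0.8, 0.075, 110.0, 0.008, 0.0]          # rough initial guesses
popt, _ = curve_fit(transient, t, signal, p0=p0, method="lm")
print("estimated [A, t0, f, sigma, phi]:", np.round(popt, 4))

# An iterative scheme would subtract the fitted transient and repeat on the residual
residual = signal - transient(t, *popt)
```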

  9. Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Ponnaluru, Gopi Krishna

    2006-01-01

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique. Features are any parameters extracted from the processed measurement data in order to enhance damage detection. The damage feature extraction capability was studied extensively by analyzing various simulation results. The practical significance for structural health monitoring is that detection of small defects at early stages is always desirable. The amount of change in the structure's response due to these small defects was determined to show the level of accuracy needed in the experimental methods. In principle, a fine, extensive sensor network could measure all of the data required for detection, but placing a large number of sensors on a structure is difficult. Therefore, an investigation was conducted using measurements from a coarse sensor network. White and pink noise, which cover most of the frequency ranges typically encountered in the measuring devices used (e.g., accelerometers, strain gauges), are added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and the noisy damage parameter values are used to study the signal feature reconstruction using wavelets. The enhancement of the feature extraction capability was successfully achieved by the wavelet theory.

  10. Extraction of the acoustic component of a turbulent flow exciting a plate by inverting the vibration problem

    NASA Astrophysics Data System (ADS)

    Lecoq, D.; Pézerat, C.; Thomas, J.-H.; Bi, W. P.

    2014-06-01

    An improvement of the Force Analysis Technique (FAT), an inverse method of vibration, is proposed to identify the low wavenumbers, including the acoustic component, of a turbulent flow that excites a plate. This method represents significant progress, since the usual techniques of measurement with flush-mounted sensors are not able to separate the acoustic and aerodynamic energies of the excitation because the aerodynamic component is too high. Moreover, the main cause of vibration or acoustic radiation of the structure might be due to the acoustic part, through a phenomenon of spatial coincidence between the acoustic wavelengths and those of the plate. This underlines the need to extract the acoustic part. In this work, numerical experiments are performed to solve both the direct and inverse problems of vibration. The excitation is a turbulent boundary layer and combines the pressure field of the Corcos model and a diffuse acoustic field. These pressures are obtained by a synthesis method based on the Cholesky decomposition of the cross-spectra matrices and are used to excite a plate. The application of the FAT inverse problem, which requires only the vibration data, shows that the method is able to identify and to isolate the acoustic part of the excitation. Indeed, the discretization of the inverse operator (the motion equation of the plate) acts as a low-pass wavenumber filter. In addition, this method is simple to implement because it can be applied locally (there is no need to know the boundary conditions), and measurements can be carried out on the opposite side of the plate without affecting the flow. Finally, an improvement of FAT is proposed that regularizes the inverse problem optimally and automatically by analyzing the mean quadratic pressure of the reconstructed force distribution. This optimized FAT, in the case of the turbulent flow, has the advantage of measuring the acoustic component up to higher frequencies, even in the presence of noise.

  11. Feature Extraction for Bearing Prognostics and Health Management (PHM) - A Survey (Preprint)

    DTIC Science & Technology

    2008-05-01

    typically consists of core functions such as anomaly detection, fault diagnosis, prognosis, and decision-making. Sensors in the sensing module of a... used the three models for model-based bearing fault diagnosis, not explicitly for feature extraction purposes. Recent direction for model-based feature... is another widely used frequency-domain technique for bearing fault diagnosis. Envelope analysis consists of two steps: band-pass filtering and
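
    The record above is fragmentary, but the envelope analysis it mentions is a standard two-step bearing-diagnosis technique: band-pass filter the vibration signal around a structural resonance, then take the envelope of the analytic signal and inspect its spectrum for fault frequencies. A minimal sketch follows; the toy signal, band edges and fault frequency are illustrative assumptions, not values from the cited survey.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 20000.0
t = np.arange(0, 1.0, 1.0 / fs)
# Toy bearing signal: a 3 kHz resonance amplitude-modulated at a 107 Hz fault rate, plus noise
fault_rate, resonance = 107.0, 3000.0
x = (1 + 0.8 * (np.cos(2 * np.pi * fault_rate * t) > 0.9)) * np.sin(2 * np.pi * resonance * t)
x += 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Step 1: band-pass filter around the assumed resonance band
b, a = butter(4, [2500.0, 3500.0], btype="bandpass", fs=fs)
xb = filtfilt(b, a, x)

# Step 2: envelope via the analytic signal, then its spectrum
env = np.abs(hilbert(xb))
env -= env.mean()
spec = np.abs(np.fft.rfft(env)) / env.size
freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
print("dominant envelope frequency: %.1f Hz" % freqs[np.argmax(spec)])
```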

  12. Study of Acoustic Features of Newborn Cries that Correlate with the Context

    DTIC Science & Technology

    2007-11-02

    contexts, are also shown. Table III reports the mean and standard deviation of the parameters extracted from cries in the pain context (F0: fundamental frequency, Hz; F1: first formant, Hz; F2: second formant, Hz; F3: third formant, Hz); Table IV reports the mean and standard deviation

  13. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    PubMed Central

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-01-01

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy. PMID:27941631
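
    The peak-split area feature described above is simple to reproduce in software. The following sketch (an illustration, not the authors' VLSI implementation) locates the peak sample of an aligned spike waveform and returns the absolute area of the portions before and after the peak as a two-dimensional feature; the synthetic spike templates and the exact area definition are assumptions.

```python
import numpy as np

def peak_split_area(spike):
    """Split the spike at its peak sample and return the area of each portion."""
    k = int(np.argmax(spike))                  # peak search
    left_area = np.sum(np.abs(spike[: k + 1]))
    right_area = np.sum(np.abs(spike[k + 1:]))
    return left_area, right_area

# Two toy spike templates aligned to 64 samples
n = np.arange(64)
spike_a = np.exp(-((n - 20) ** 2) / 30.0) - 0.4 * np.exp(-((n - 35) ** 2) / 80.0)
spike_b = np.exp(-((n - 28) ** 2) / 60.0) - 0.2 * np.exp(-((n - 45) ** 2) / 40.0)

for name, s in [("A", spike_a), ("B", spike_b)]:
    print(name, np.round(peak_split_area(s), 2))
```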

  14. Computer-aided diagnosis of rheumatoid arthritis with optical tomography, Part 1: feature extraction.

    PubMed

    Montejo, Ludguier D; Jia, Jingfei; Kim, Hyun K; Netz, Uwe J; Blaschke, Sabine; Müller, Gerhard A; Hielscher, Andreas H

    2013-07-01

    This is the first part of a two-part paper on the application of computer-aided diagnosis to diffuse optical tomography (DOT). An approach for extracting heuristic features from DOT images and a method for using these features to diagnose rheumatoid arthritis (RA) are presented. Feature extraction is the focus of Part 1, while the utility of five classification algorithms is evaluated in Part 2. The framework is validated on a set of 219 DOT images of proximal interphalangeal (PIP) joints. Overall, 594 features are extracted from the absorption and scattering images of each joint. Three major findings are deduced. First, DOT images of subjects with RA are statistically different (p<0.05) from images of subjects without RA for over 90% of the features investigated. Second, DOT images of subjects with RA that do not have detectable effusion, erosion, or synovitis (as determined by MRI and ultrasound) are statistically indistinguishable from DOT images of subjects with RA that do exhibit effusion, erosion, or synovitis. Thus, this subset of subjects may be diagnosed with RA from DOT images while they would go undetected by reviews of MRI or ultrasound images. Third, scattering coefficient images yield better one-dimensional classifiers. A total of three features yield a Youden index greater than 0.8. These findings suggest that DOT may be capable of distinguishing between PIP joints that are healthy and those affected by RA with or without effusion, erosion, or synovitis.

  15. System and method for investigating sub-surface features of a rock formation using compressional acoustic sources

    DOEpatents

    Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt; Johnson, Paul A.; Guyer, Robert; Ten Cate, James A.; Le Bas, Pierre-Yves; Larmat, Carene S.

    2016-09-27

    A system and method for investigating rock formations outside a borehole are provided. The method includes generating a first compressional acoustic wave at a first frequency by a first acoustic source; and generating a second compressional acoustic wave at a second frequency by a second acoustic source. The first and the second acoustic sources are arranged within a localized area of the borehole. The first and the second acoustic waves intersect in an intersection volume outside the borehole. The method further includes receiving a third shear acoustic wave at a third frequency, the third shear acoustic wave returning to the borehole due to a non-linear mixing process in a non-linear mixing zone within the intersection volume at a receiver arranged in the borehole. The third frequency is equal to a difference between the first frequency and the second frequency.

  16. Structural damage identification via a combination of blind feature extraction and sparse representation classification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Nagarajaiah, Satish

    2014-03-01

    This paper addresses two problems in structural damage identification: locating damage and assessing damage severity, which are incorporated into the classification framework based on the theory of sparse representation (SR) and compressed sensing (CS). The sparsity nature implied in the classification problem itself is exploited, establishing a sparse representation framework for damage identification. Specifically, the proposed method consists of two steps: feature extraction and classification. In the feature extraction step, the modal features of both the test structure and the reference structure model are first blindly extracted by the unsupervised complexity pursuit (CP) algorithm. Then in the classification step, expressing the test modal feature as a linear combination of the bases of the over-complete reference feature dictionary—constructed by concatenating all modal features of all candidate damage classes—builds a highly underdetermined linear system of equations with an underlying sparse representation, which can be correctly recovered by ℓ1-minimization; the non-zero entry in the recovered sparse representation directly assigns the damage class which the test structure (feature) belongs to. The two-step CP-SR damage identification method alleviates the training process required by traditional pattern recognition based methods. In addition, the reference feature dictionary can be of small size by formulating the issues of locating damage and assessing damage extent as a two-stage procedure and by taking advantage of the robustness of the SR framework. Numerical simulations and experimental study are conducted to verify the developed CP-SR method. The problems of identifying multiple damage, using limited sensors and partial features, and the performance under heavy noise and random excitation are investigated, and promising results are obtained.
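
    The classification step can be illustrated with a small ℓ1-minimization example: the test feature vector is expressed as a sparse combination of the columns of an over-complete dictionary built from the candidate damage classes, and the class whose columns best explain the vector (smallest residual) is assigned. In the sketch below, random vectors stand in for modal features and scikit-learn's Lasso is used as a convenient proxy for a dedicated ℓ1 solver; none of this reproduces the authors' CP-SR implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_feat, per_class, classes = 40, 5, ["undamaged", "damage_loc1", "damage_loc2"]

# Reference dictionary: columns are modal features of each candidate damage class
D = {c: rng.standard_normal((n_feat, per_class)) for c in classes}
A = np.hstack([D[c] for c in classes])                # over-complete dictionary
A /= np.linalg.norm(A, axis=0)                        # normalise columns

# Test feature: a noisy combination of columns from one class ("damage_loc1")
x = D["damage_loc1"] @ rng.random(per_class) + 0.05 * rng.standard_normal(n_feat)

# Sparse representation via l1-regularised least squares (Lasso as a proxy for l1-minimisation)
coef = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(A, x).coef_

# Assign the class whose coefficients give the smallest reconstruction residual
residuals = {}
for i, c in enumerate(classes):
    mask = np.zeros_like(coef)
    mask[i * per_class:(i + 1) * per_class] = coef[i * per_class:(i + 1) * per_class]
    residuals[c] = np.linalg.norm(x - A @ mask)
print(min(residuals, key=residuals.get), residuals)
```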

  17. Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters

    NASA Astrophysics Data System (ADS)

    Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun

    2015-05-01

    We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters T_eff, log g, and [Fe/H]. “Linearly supporting” means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method LARSbs, a procedure based on the least absolute shrinkage and selection operator; third, estimate the atmospheric parameters T_eff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate T_eff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log T_eff (83 K for T_eff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log T_eff (32 K for T_eff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].
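
    The three-step flavor of the scheme (wavelet-domain coefficients, sparse selection of supporting features, linear estimation) can be sketched on synthetic data as below. Plain Lasso from scikit-learn stands in for the paper's LARSbs procedure, pywt.wavedec stands in for the exact wavelet-packet representation, and the toy "spectra" and their dependence on a T_eff-like parameter are invented purely for the example.

```python
import numpy as np
import pywt
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n_spectra, n_pix = 300, 256

# Synthetic "spectra" whose absorption depth depends linearly on a hidden T_eff-like parameter
teff = rng.uniform(4000, 7000, n_spectra)
grid = np.linspace(0, 1, n_pix)
spectra = (1 - 0.3 * np.exp(-((grid[None, :] - 0.4) ** 2) / 0.002) * (teff[:, None] / 5000.0))
spectra += 0.01 * rng.standard_normal((n_spectra, n_pix))

# Step 1: wavelet decomposition coefficients as the candidate feature pool
coeffs = np.array([np.concatenate(pywt.wavedec(s, "db4", level=4)) for s in spectra])

# Step 2: sparse selection of "linearly supporting" coefficients
y = (teff - teff.mean()) / teff.std()
sel = Lasso(alpha=0.001, max_iter=100000).fit(coeffs, y)
keep = np.flatnonzero(sel.coef_)
if keep.size == 0:                 # fallback in case the penalty zeroes everything
    keep = np.arange(coeffs.shape[1])
print("selected", keep.size, "of", coeffs.shape[1], "coefficients")

# Step 3: linear regression on the selected features
est = LinearRegression().fit(coeffs[:, keep], teff)
pred = est.predict(coeffs[:, keep])
print("MAE: %.1f K" % np.mean(np.abs(pred - teff)))
```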

  18. Feature extraction for EEG-based brain-computer interfaces by wavelet packet best basis decomposition.

    PubMed

    Yang, Bang-hua; Yan, Guo-zheng; Yan, Rong-guo; Wu, Ting

    2006-12-01

    A method based on wavelet packet best basis decomposition (WPBBD) is investigated for the purpose of extracting features of electroencephalogram signals produced during motor imagery tasks in brain-computer interfaces. The method includes the following three steps. (1) Original signals are decomposed by wavelet packet transform (WPT) and a wavelet packet library can be formed. (2) The best basis for classification is selected from the library. (3) Subband energies included in the best basis are used as effective features. Three different motor imagery tasks are discriminated using the features. The WPBBD produces a 70.3% classification accuracy, which is 4.2% higher than that of the existing wavelet packet method.
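
    A hedged sketch of the subband-energy feature extraction with PyWavelets is shown below. It uses a fixed full decomposition level rather than the paper's best-basis selection, and the EEG-like signal and wavelet choice are assumptions.

```python
import numpy as np
import pywt

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
# Toy EEG-like trace: a 10 Hz (mu-band) rhythm plus broadband noise
x = 0.8 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Full wavelet-packet decomposition to level 4; each terminal node is a subband
wp = pywt.WaveletPacket(data=x, wavelet="db4", mode="symmetric", maxlevel=4)
nodes = wp.get_level(4, order="freq")            # frequency-ordered terminal nodes

# Subband energies as features (fixed basis here; the paper selects a best basis instead)
energies = np.array([np.sum(np.square(n.data)) for n in nodes])
features = energies / energies.sum()             # normalised energy per subband
for node, e in zip(nodes, features):
    print(node.path, round(float(e), 3))
```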

  19. Automatic geomorphic feature extraction from lidar in flat and engineered landscapes

    NASA Astrophysics Data System (ADS)

    Passalacqua, P.; Belmont, P.; Foufoula, E.

    2011-12-01

    High resolution topography derived from light detection and ranging (lidar) technology enables detailed geomorphic observations to be made on spatially extensive landforms in a way that was previously not possible. This provides new opportunities to study the spatial organization of landscapes and channel network features, increase the accuracy of environmental transport models and inform decisions for targeting conservation practices. However, with the opportunity of increased resolution topography data over large areas come formidable challenges in terms of automatic geomorphic feature extraction, analysis, and interpretation. This is particularly true in low relief landscapes since the topographic gradients are low and both the landscape and the channel network are often heavily modified by humans. Recently, a comprehensive framework was developed for the automatic extraction of geomorphic features (channel network, channel heads and channel morphology) from high resolution topographic data by combining nonlinear diffusion and geodesic minimization principles. The feature extraction method was packaged in a software called GeoNet (which is publicly available). In this talk, we focus on the application of GeoNet to a variety of landscapes, and, in particular, to flat and engineered landscapes where the method has been recently extended to perform automated channel morphometric analysis (including extraction of cross-sections, detection of bank locations, and identification of geomorphic bankfull water surface elevation) and to differentiate between natural channels and manmade structures (including artificial ditches, roads and bridges across channels).

  20. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  1. Automated hand thermal image segmentation and feature extraction in the evaluation of rheumatoid arthritis.

    PubMed

    Snekhalatha, U; Anburajan, M; Sowmiya, V; Venkatraman, B; Menaka, M

    2015-04-01

    The aim of the study was (1) to perform an automated segmentation of hot-spot regions of the hand from the thermograph using the k-means algorithm and (2) to test the potential of features extracted from the hand thermograph and its measured skin temperature indices in the evaluation of rheumatoid arthritis. Thermal image analysis based on skin temperature measurement, the heat distribution index and the thermographic index was performed in rheumatoid arthritis patients and controls. The k-means algorithm was used for image segmentation, and features were extracted from the segmented output image using the gray-level co-occurrence matrix method. In the metacarpo-phalangeal, proximal inter-phalangeal and distal inter-phalangeal regions, the calculated percentage difference in the mean skin temperature was found to be higher in rheumatoid arthritis patients (5.3%, 4.9% and 4.8% in the MCP3, PIP3 and DIP3 joints, respectively) compared to the normal group. The k-means algorithm applied to the thermal images provided better segmentation results in evaluating the disease. In the total population studied, the measured mean average skin temperature of the MCP3 joint was highly correlated with most of the extracted features of the hand, and the extracted statistical feature parameters correlated significantly with the skin surface temperature measurements and the measured temperature indices. Hence, the developed computer-aided diagnostic tool using MATLAB could be used as a reliable method in diagnosing and analyzing arthritis in hand thermal images.
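
    A minimal sketch of the two processing stages, k-means segmentation of a grey-scale thermal image followed by gray-level co-occurrence matrix (GLCM) texture features from the hot-spot region, is given below using scikit-learn and scikit-image (graycomatrix, named greycomatrix in older scikit-image releases). The synthetic image, the number of clusters and the GLCM parameters are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
# Toy 8-bit "thermogram": a warm blob on a cooler background
yy, xx = np.mgrid[0:64, 0:64]
img = 90 + 60 * np.exp(-((yy - 32) ** 2 + (xx - 40) ** 2) / 120.0)
img = np.clip(img + 5 * rng.standard_normal(img.shape), 0, 255).astype(np.uint8)

# k-means on pixel intensities (k=3: background, warm, hot)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1)).reshape(img.shape)
hot_cluster = int(np.argmax([img[labels == k].mean() for k in range(3)]))
hot_mask = labels == hot_cluster

# Keep only the hot-spot region, then compute GLCM texture features on it
segmented = np.where(hot_mask, img, 0).astype(np.uint8)
glcm = graycomatrix(segmented, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, float(graycoprops(glcm, prop)[0, 0]))
```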

  2. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    PubMed

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied in order to extract regions of interest (ROIs). Then, 3D feature extraction was performed, which includes 3D connected component labeling, straightness calculation, thickness calculation, determining the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) methods were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves were given for all methods, and 100% detection sensitivity was reached by all methods except naive Bayes.

  3. Feature extraction using adaptive multiwavelets and synthetic detection index for rotor fault diagnosis of rotating machinery

    NASA Astrophysics Data System (ADS)

    Lu, Na; Xiao, Zhihuai; Malik, O. P.

    2015-02-01

    State identification to diagnose the condition of rotating machinery is often converted to a classification problem of values of non-dimensional symptom parameters (NSPs). To improve the sensitivity of the NSPs to the changes in machine condition, a novel feature extraction method based on adaptive multiwavelets and the synthetic detection index (SDI) is proposed in this paper. Based on the SDI maximization principle, optimal multiwavelets are searched by genetic algorithms (GAs) from an adaptive multiwavelets library and used for extracting fault features from vibration signals. By the optimal multiwavelets, more sensitive NSPs can be extracted. To examine the effectiveness of the optimal multiwavelets, conventional methods are used for comparison study. The obtained NSPs are fed into K-means classifier to diagnose rotor faults. The results show that the proposed method can effectively improve the sensitivity of the NSPs and achieve a higher discrimination rate for rotor fault diagnosis than the conventional methods.

  4. Extraction of Subsurface Features from InSAR-Derived Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Xiong, Siting; Muller, Jan-Peter

    2015-05-01

    Microwave remote sensing has the potential to be a beneficial tool to detect and analyse subsurface features in desert areas due to its penetration ability over hyperarid regions with extremely low loss and low bulk humidity. Global Digital Elevation Models (DEMs) with resolutions of up to 30 m are now publicly available, some of which show subsurface features over these hyperarid areas. This study compares the elevations detected by different EO microwave and lidar profilers and demonstrates their effectiveness in extracting subsurface features compared with those delineated in an ALOS/PALSAR polarisation map. Results show that the SRTM-C DEM agrees closely with ICESat elevations and clearly shows paleoriver features, some of which cannot be observed in ALOS/PALSAR images affected by background backscatter. However, crater-like features are more recognisable in ALOS/PALSAR images than in the SRTM-C DEM.

  5. A Novel Hyperspectral Feature-Extraction Algorithm Based on Waveform Resolution for Raisin Classification.

    PubMed

    Zhao, Yun; Xu, Xing; He, Yong

    2015-12-01

    Near-infrared hyperspectral imaging technology was adopted in this study to discriminate among varieties of raisins produced in the Xinjiang Uygur Autonomous Region, China. Eight varieties of raisins were used in the research, and the wavelengths of the hyperspectral images were from 900 to 1700 nm. A novel waveform resolution method is proposed to reduce the hyperspectral data and extract the features. The waveform-resolution method compresses the original hyperspectral data for one pixel into five amplitudes, five frequencies, and five phases, for 15 feature values in all. A neural network with three layers (eight neurons in the first layer, three neurons in the hidden layer, and one neuron in the output layer) was established, based on the 15 features, to determine the varieties of raisins. The accuracies of the model on the testing data set, presented as sensitivity, precision, and specificity, are 93.38%, 81.92%, and 99.06%. This is higher than the accuracies of the model using a conventional principal component analysis feature-extracting method combined with a neural network, which has a sensitivity of 82.13%, a precision of 82.22%, and a specificity of 97.45%. The results indicate that the proposed waveform-resolution feature-extracting method combined with hyperspectral imaging technology is an efficient method for determining varieties of raisins.
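
    The abstract does not fully specify the waveform-resolution algorithm; one simple reading, taking the five strongest Fourier components of each pixel's spectral curve and recording their amplitudes, frequency indices and phases as 15 features, is sketched below purely as an interpretation. The toy spectral curve and band grid are invented for the example and are not the paper's data.

```python
import numpy as np

def top5_components(curve):
    """Return 15 features: amplitude, frequency index and phase of the 5 strongest
    non-DC Fourier components of a 1-D spectral curve (an interpretation, not the
    paper's exact waveform-resolution algorithm)."""
    F = np.fft.rfft(curve - curve.mean())
    idx = np.argsort(np.abs(F[1:]))[::-1][:5] + 1     # skip DC, take the 5 largest
    amps = np.abs(F[idx]) / curve.size
    phases = np.angle(F[idx])
    return np.concatenate([amps, idx.astype(float), phases])

# Toy hyperspectral pixel: 128 bands, 900-1700 nm
bands = np.linspace(900, 1700, 128)
curve = (0.6 + 0.1 * np.sin(2 * np.pi * (bands - 900) / 400.0)
         - 0.05 * np.exp(-((bands - 1200) ** 2) / 2000.0))
print(np.round(top5_components(curve), 3))
```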

  6. Feature Extraction from Simulations and Experiments: Preliminary Results Using a Fluid Mix Problem

    SciTech Connect

    Kamath, C; Nguyen, T

    2005-01-04

    Code validation, or comparing the output of computer simulations to experiments, is necessary to determine which simulation is a better approximation to an experiment. It can also be used to determine how the input parameters in a simulation can be modified to yield output that is closer to the experiment. In this report, we discuss our experiences in the use of image processing techniques for extracting features from 2-D simulations and experiments. These features can be used in comparing the output of simulations to experiments, or to other simulations. We first describe the problem domain and the data. We next explain the need for cleaning or denoising the experimental data and discuss the performance of different techniques. Finally, we discuss the features of interest and describe how they can be extracted from the data. The focus in this report is on extracting features from experimental and simulation data for the purpose of code validation; the actual interpretation of these features and their use in code validation is left to the domain experts.

  7. Features extraction of EMG signal using time domain analysis for arm rehabilitation device

    NASA Astrophysics Data System (ADS)

    Jali, Mohd Hafiz; Ibrahim, Iffah Masturah; Sulaima, Mohamad Fani; Bukhari, W. M.; Izzuddin, Tarmizi Ahmad; Nasir, Mohamad Na'im

    2015-05-01

    A rehabilitation device is used as an exoskeleton for people who have lost function in a limb. An arm rehabilitation device may help in the rehabilitation program of those who suffer from an arm disability. The device used to facilitate the tasks of the program should improve the electrical activity in the motor unit and minimize the mental effort of the user. Electromyography (EMG) is the technique used to analyze the presence of electrical activity in musculoskeletal systems. In a disabled person, the electrical activity in the muscles fails to contract the muscle for movement. In order to prevent paralyzed muscles from developing spasticity, the movements should require minimal mental effort. Therefore, the rehabilitation device should be based on analysis of the surface EMG signals of able-bodied people, which can then be implemented in the device. The signals are collected according to the procedures of surface electromyography for non-invasive assessment of muscles (SENIAM). The EMG signal is used to set the movement patterns of the arm rehabilitation device. The filtered EMG signal was processed to extract the time-domain features Standard Deviation (STD), Mean Absolute Value (MAV) and Root Mean Square (RMS). The extraction of EMG data is important to obtain a reduced feature vector with low error. In order to determine the best features for any movement, several extraction trials are performed and the features with the lowest errors are selected. The accurate features can be used in future work on real-time rehabilitation control.
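
    The three named time-domain features are straightforward to compute. The sketch below (an illustration, not the authors' code) evaluates STD, MAV and RMS over sliding windows of a simulated EMG recording; the sampling rate, window length and toy signal are assumptions.

```python
import numpy as np

def emg_features(window):
    """STD, MAV and RMS of one EMG analysis window."""
    return {
        "STD": float(np.std(window)),
        "MAV": float(np.mean(np.abs(window))),
        "RMS": float(np.sqrt(np.mean(window ** 2))),
    }

fs = 1000                                   # Hz (assumed sampling rate)
rng = np.random.default_rng(0)
# Toy EMG: noise whose amplitude rises during a simulated contraction
t = np.arange(0, 3.0, 1.0 / fs)
envelope = 0.2 + 0.8 * ((t > 1.0) & (t < 2.0))
emg = envelope * rng.standard_normal(t.size)

win, hop = 200, 100                         # 200 ms windows, 50% overlap
for start in range(0, emg.size - win + 1, hop):
    feats = emg_features(emg[start:start + win])
    if start % 500 == 0:                    # print a few windows only
        print("t=%.1fs" % (start / fs), {k: round(v, 3) for k, v in feats.items()})
```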

  8. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    PubMed

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  9. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
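
    A hedged sketch of the two-step pipeline, a 2-D DWT to isolate the HH subband followed by Gabor filtering and histogram-based entropy/uniformity statistics, is given below with PyWavelets and scikit-image. The random stand-in image, the chosen Gabor frequencies and orientations, and the histogram-based statistic definitions are assumptions; in the paper these statistics feed an SVM classifier.

```python
import numpy as np
import pywt
from skimage.filters import gabor

rng = np.random.default_rng(0)
img = rng.random((128, 128))                       # stand-in for a biomedical image

# Step 1: 2-D DWT; keep the HH (diagonal detail) high-frequency subband
_, (_, _, hh) = pywt.dwt2(img, "db2")

# Step 2: Gabor filtering of the HH subband at a few frequencies/orientations
features = []
for freq in (0.1, 0.3):
    for theta in (0.0, np.pi / 4, np.pi / 2):
        real, _ = gabor(hh, frequency=freq, theta=theta)
        hist, _ = np.histogram(real, bins=32, density=True)
        hist = hist[hist > 0] / hist[hist > 0].sum()
        entropy = float(-np.sum(hist * np.log2(hist)))   # histogram entropy
        uniformity = float(np.sum(hist ** 2))            # "energy" of the histogram
        features.extend([entropy, uniformity])

print(len(features), "features:", np.round(features, 3))
# These statistics would then be fed to an SVM classifier, as in the paper.
```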

  10. New feature extraction approach for epileptic EEG signal detection using time-frequency distributions.

    PubMed

    Guerrero-Mosquera, Carlos; Trigueros, Armando Malanda; Franco, Jorge Iriarte; Navia-Vázquez, Angel

    2010-04-01

    This paper describes a new method to identify seizures in electroencephalogram (EEG) signals using feature extraction in time-frequency distributions (TFDs). Particularly, the method extracts features from the Smoothed Pseudo Wigner-Ville distribution using tracks estimated from the McAulay-Quatieri sinusoidal model. The proposed features are the length, frequency, and energy of the principal track. We evaluate the proposed scheme using several datasets and we compute sensitivity, specificity, F-score, receiver operating characteristics (ROC) curve, and percentile bootstrap confidence to conclude that the proposed scheme generalizes well and is a suitable approach for automatic seizure detection at a moderate cost, also opening the possibility of formulating new criteria to detect, classify or analyze abnormal EEGs.

  11. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images.

    PubMed

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction.

  12. Graph theory for feature extraction and classification: a migraine pathology case study.

    PubMed

    Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya

    2014-01-01

    Graph theory is widely used as a representational form and characterization of brain connectivity networks, as is machine learning for classifying groups based on the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features or algorithms. This paper proposes an automatic tool to perform a standard process using images from a Magnetic Resonance Imaging (MRI) machine. The process includes pre-processing, building the graph per subject with different correlations and atlases, extracting the relevant features according to the literature, and finally providing a set of machine learning algorithms which can produce analyzable results for physicians or specialists. In order to verify the process, a set of images from prescription drug abusers and patients with migraine has been used. In this way, the proper functioning of the tool has been demonstrated, providing success rates of 87% and 92%, depending on the classifier used.

  13. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis.

    PubMed

    Toulminet, Gwenaëlle; Bertozzi, Massimo; Mousset, Stéphane; Bensrhair, Abdelaziz; Broggi, Alberto

    2006-08-01

    This paper presents a stereo vision system for the detection and distance computation of a preceding vehicle. It is divided into two major steps. Initially, a stereo vision-based algorithm is used to extract relevant three-dimensional (3-D) features in the scene; these features are investigated further in order to select the ones that belong to vertical objects only and not to the road or background. These 3-D vertical features are then used as a starting point for preceding vehicle detection; using a symmetry operator, a match against a simplified model of a rear vehicle's shape is performed with a monocular vision-based approach that allows the identification of a preceding vehicle. In addition, using the 3-D information previously extracted, an accurate distance computation is performed.

  14. Lexical and Acoustic Features of Maternal Utterances Addressing Preverbal Infants in Picture Book Reading Link to 5-Year-Old Children's Language Development

    ERIC Educational Resources Information Center

    Liu, Huei-Mei

    2014-01-01

    Research Findings: I examined the long-term association between the lexical and acoustic features of maternal utterances during book reading and the language skills of infants and children. Maternal utterances were collected from 22 mother-child dyads in picture book-reading episodes when children were ages 6-12 months and 5 years. Two aspects of…

  15. SU-E-J-245: Sensitivity of FDG PET Feature Analysis in Multi-Plane Vs. Single-Plane Extraction

    SciTech Connect

    Harmon, S; Jeraj, R; Galavis, P

    2015-06-15

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. Fifty texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, the sensitivity of features to reconstruction parameters was calculated as the percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs. 3D (Wilcoxon, α < 0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase of 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase of 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R < 0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R| > 0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ = 6%) than in 3D (σ < 1%) extraction. Conclusion: Sensitivity

  16. [Feature extraction of motor imagery electroencephalography based on time-frequency-space domains].

    PubMed

    Wang, Yueru; Li, Xin; Li, Honghong; Shao, Chengcheng; Ying, Lijuan; Wu, Shuicai

    2014-10-01

    The purpose of using a brain-computer interface (BCI) is to build a bridge between the brain and a computer for disabled persons, in order to help them communicate with the outside world. Electroencephalography (EEG) has a low signal-to-noise ratio (SNR), and there are some problems with the traditional methods for feature extraction from EEG, such as low classification accuracy, lack of spatial information and huge numbers of features. To solve these problems, we proposed a new method based on the time domain, frequency domain and space domain. In this study, independent component analysis (ICA) and the wavelet transform were used to extract the temporal, spectral and spatial features from the original EEG signals, and then the extracted features were classified with a method combining a support vector machine (SVM) with a genetic algorithm (GA). The proposed method displayed a better classification performance and made the mean accuracy on the Graz datasets of the 2003 BCI Competition reach 96%. The classification results showed that the proposed method, using the three domains, could effectively overcome the drawbacks of the traditional methods based solely on the time-frequency domain when EEG signals are used to describe the characteristics of the brain's electrical signals.

  17. Bayesian Nonnegative CP Decomposition-based Feature Extraction Algorithm for Drowsiness Detection.

    PubMed

    Qian, Dong; Wang, Bei; Qing, Yun; Zhang, Tao; Zhang, Yu; Wang, Xing; Nakamura, Masatoshi

    2016-10-19

    A daytime short nap involves physiological processes such as alertness, drowsiness and sleep. Studying the relationship between drowsiness and naps based on physiological signals is a good way to better understand the periodic rhythms of physiological states. A model of Bayesian nonnegative CP decomposition (BNCPD) was proposed to extract common multiway features from group-level electroencephalogram (EEG) signals. As an extension of the nonnegative CP decomposition, the BNCPD model involves prior distributions of the factor matrices, while the underlying CP rank can be determined automatically based on a Bayesian nonparametric approach. For computational speed, variational inference was applied to approximate the posterior distributions of the unknowns. Extensive simulations on synthetic data illustrated the capability of our model to recover the true CP rank. As a real-world application, the performance of drowsiness detection during a daytime short nap using the BNCPD-based features was compared with that of other traditional feature extraction methods. Experimental results indicated that the BNCPD model outperformed the other methods for feature extraction in terms of two evaluation metrics, as well as across different parameter settings. Our approach is likely to be a useful tool for automatic CP rank determination, offering plausible multiway physiological information about individual states.
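
    The Bayesian, rank-determining machinery of BNCPD is beyond a short example, but its non-Bayesian core, a nonnegative CP (PARAFAC) decomposition of a channels x frequencies x epochs EEG tensor, can be sketched with TensorLy (assuming tensorly >= 0.6; the API has changed across versions). The synthetic tensor and the fixed rank are assumptions made for the example.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

rng = np.random.default_rng(0)
channels, freqs, epochs, rank = 8, 20, 30, 3

# Synthetic nonnegative tensor with a known rank-3 structure plus noise
A = rng.random((channels, rank))
B = rng.random((freqs, rank))
C = rng.random((epochs, rank))
tensor = tl.cp_to_tensor((np.ones(rank), [A, B, C])) + 0.05 * rng.random((channels, freqs, epochs))

# Nonnegative CP decomposition (the classical, non-Bayesian counterpart of BNCPD)
cp = non_negative_parafac(tl.tensor(tensor), rank=rank, n_iter_max=200)
weights, factors = cp
print("factor shapes:", [f.shape for f in factors])

# The epoch-mode factor can serve as a per-epoch feature matrix for drowsiness detection
epoch_features = tl.to_numpy(factors[2])
print("epoch feature matrix:", epoch_features.shape)
```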

  18. Application of Wavelet Analysis in EMG Feature Extraction for Pattern Classification

    NASA Astrophysics Data System (ADS)

    Phinyomark, A.; Limsakul, C.; Phukpattaranont, P.

    2011-01-01

    Nowadays, analysis of the electromyography (EMG) signal using the wavelet transform is one of the most powerful signal processing tools, and it is widely used in EMG recognition systems. In this study, we have investigated the usefulness of extracting EMG features from a multiple-level wavelet decomposition of the EMG signal. Different levels of various mother wavelets were used to obtain the useful resolution components from the EMG signal. The optimal EMG resolution component (sub-signal) was selected and the useful information signal was then reconstructed. Noise and unwanted EMG parts were eliminated throughout this process. The estimated EMG signal, that is, the effective EMG part, was characterized with the popular features, i.e., mean absolute value and root mean square, in order to improve the quality of class separability. Two criteria used in the evaluation are the ratio of a Euclidean distance to a standard deviation and the scatter graph. The results show that only the EMG features extracted from the reconstructed EMG signals of the first-level and second-level detail coefficients yield an improvement of class separability in feature space. This will ensure that the pattern classification accuracy is as high as possible. The optimal wavelet decomposition is obtained using the seventh-order Daubechies wavelet and a fourth-level wavelet decomposition.

  19. A Stable Biologically Motivated Learning Mechanism for Visual Feature Extraction to Handle Facial Categorization

    PubMed Central

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task. PMID:22719892

  20. Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos

    NASA Astrophysics Data System (ADS)

    Miao, X.; Xie, H.

    2015-12-01

    High resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) polygon neighbor analysis separates melt ponds and submerged ice based on spatial relationships; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of the aerial photos, and their uncertainties are estimated.

  1. Dynamic-Feature Extraction, Attribution and Reconstruction (DEAR) Method for Power System Model Reduction

    SciTech Connect

    Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.

    2014-09-04

    In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated with a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than traditional coherency aggregation methods.
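
    The SVD-based feature extraction and attribution steps can be sketched in a few lines of NumPy. The synthetic generator responses, the 95% energy cut-off, and the correlation-based choice of characteristic generators below are illustrative simplifications of the DEAR procedure, not the authors' implementation.

      import numpy as np

      # Illustrative measured responses: rows = time samples, columns = external-area generators.
      rng = np.random.default_rng(6)
      t = np.linspace(0, 10, 500)
      modes = np.column_stack([np.exp(-0.3 * t) * np.sin(2 * np.pi * 0.7 * t),
                               np.exp(-0.1 * t) * np.sin(2 * np.pi * 1.4 * t)])
      mixing = rng.standard_normal((2, 12))
      responses = modes @ mixing + 0.02 * rng.standard_normal((t.size, 12))

      # Feature extraction step: dominant dynamic features via SVD of the response matrix.
      U, s, Vt = np.linalg.svd(responses - responses.mean(axis=0), full_matrices=False)
      n_features = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), 0.95)) + 1
      dynamic_features = U[:, :n_features] * s[:n_features]

      # Crude attribution step: for each feature, pick the generator whose response matches it best.
      corr = np.abs(np.corrcoef(dynamic_features.T, responses.T)[:n_features, n_features:])
      characteristic_generators = np.argmax(corr, axis=1)
      print("retained features:", n_features, "characteristic generators:", characteristic_generators)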

  2. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows, and elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has disadvantages that hinder object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least squares fitting to extract the primitives of road centerlines, and (3) hierarchical grouping to connect primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in road extraction from LiDAR data in less time.

  3. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and on features within proximity to the roads as input for evaluating and prioritizing new or improved road projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road network together with estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  4. A Hough Transform Technique for Extracting Lead Features from Sea Ice Imagery

    DTIC Science & Technology

    1989-07-11

    A Hough transform technique for the semi-automated extraction of lead orientation and spacing from sea ice imagery is described, in support of compiling lead statistics from imagery.

  5. High Resolution Urban Feature Extraction for Global Population Mapping using High Performance Computing

    SciTech Connect

    Vijayaraj, Veeraraghavan; Bright, Eddie A; Bhaduri, Budhendra L

    2007-01-01

    The advent of high spatial resolution satellite imagery like QuickBird (0.6 meter) and IKONOS (1 meter) has provided a new data source for high resolution urban land cover mapping. Extracting accurate urban regions from high resolution images has many applications and is essential to the population mapping efforts of Oak Ridge National Laboratory's (ORNL) LandScan population distribution program. This paper discusses an automated parallel algorithm that has been implemented in a high performance computing environment to extract urban regions from high resolution images using texture and spectral features.

  6. The effects of compressive sensing on extracted features from tri-axial swallowing accelerometry signals

    NASA Astrophysics Data System (ADS)

    Sejdić, Ervin; Movahedi, Faezeh; Zhang, Zhenwei; Kurosu, Atsuko; Coyle, James L.

    2016-05-01

    Acquiring swallowing accelerometry signals using a compressive sensing scheme may be a desirable approach for monitoring swallowing safety over longer periods of time. However, it needs to be ensured that signal characteristics can be recovered accurately from the compressed samples. In this paper, we examined this issue by studying the effects of the number of acquired compressed samples on the calculated swallowing accelerometry signal features. We used tri-axial swallowing accelerometry signals acquired from seventeen stroke patients (106 swallows in total). From the acquired signals, we extracted typically considered signal features from the time, frequency and time-frequency domains. Next, we compared these features between the original signals (sampled using traditional sampling schemes) and the compressively sampled signals. Our results show that we can obtain accurate estimates of signal features even when using only a third of the original samples.

  7. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. Compared to audio-visual emotion channels such as facial expression or speech, little attention has so far been paid to physiological signals for emotion recognition. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is demonstrated by emotion recognition results.

  8. Acoustic Longitudinal Field NIF Optic Feature Detection Map Using Time-Reversal & MUSIC

    SciTech Connect

    Lehman, S K

    2006-02-09

    We developed an ultrasonic longitudinal field time-reversal and MUltiple SIgnal Classification (MUSIC) based detection algorithm for identifying and mapping flaws in fused silica NIF optics. The algorithm requires a fully multistatic data set, that is, one with multiple, independently operated, spatially diverse transducers, each transmitter of which, in succession, launches a pulse into the optic while the scattered signal is measured and recorded at every receiver. We have successfully localized engineered ''defects'' larger than 1 mm in an optic. We confirmed detection and localization of 3 mm and 5 mm features in experimental data, and of a 0.5 mm feature in simulated data with a sufficiently high signal-to-noise ratio. We present the theory, experimental results, and simulated results.

  9. Impact of postharvest dehydration process of winegrapes on mechanical and acoustic properties of the seeds and their relationship with flavanol extraction during simulated maceration.

    PubMed

    Río Segade, Susana; Torchio, Fabrizio; Gerbi, Vincenzo; Quijada-Morín, Natalia; García-Estévez, Ignacio; Giacosa, Simone; Escribano-Bailón, M Teresa; Rolle, Luca

    2016-05-15

    This study is the first in which the extraction of phenolic compounds from the seeds of dehydrated grapes is assessed from instrumental texture properties. Nebbiolo winegrapes were postharvest dehydrated at 20°C and 41% relative humidity. During the dehydration process, sampling was performed at 15%, 30%, 45% and 60% weight loss. The extractable fraction and extractability of phenolic compounds from the seeds were determined after simulated maceration. The evolution of mechanical and acoustic attributes of intact seeds was also determined during grape dehydration to evaluate how these changes affected the extraction of phenolic compounds. The extractable content and extractability of monomeric flavanols and proanthocyanidins, as well as the galloylation percentage of flavanols, might be predicted easily and quickly from the mechanical and acoustic properties of intact seeds. This would help in decision-making on the optimal dehydration level of winegrapes and the best management of winemaking of dehydrated grapes.

  10. Zone Based Hybrid Feature Extraction Algorithm for Handwritten Numeral Recognition of South Indian Scripts

    NASA Astrophysics Data System (ADS)

    Rajashekararadhya, S. V.; Ranjan, P. Vanaja

    India is a multi-lingual, multi-script country, where eighteen official scripts are accepted and over a hundred regional languages are spoken. In this paper we propose a zone-based hybrid feature extraction scheme for the recognition of off-line handwritten numerals of south Indian scripts. The character centroid is computed and the image (character/numeral) is further divided into n equal zones. The average distance and average angle from the character centroid to the pixels present in each zone are computed (two features), and the zone centroid is also computed (two features). This procedure is repeated sequentially for all the zones/grids/boxes in the numeral image. Some zones may be empty, in which case the corresponding values in the feature vector are zero. Finally, 4*n such features are extracted. A nearest neighbor classifier is used for subsequent classification and recognition. We obtained recognition rates of 97.55%, 94%, 92.5% and 95.2% for Kannada, Telugu, Tamil and Malayalam numerals, respectively.
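
    The zone-based distance/angle features are simple to express directly in NumPy. In the sketch below the 4x4 zone grid, the binary input image and the feature ordering are illustrative assumptions rather than details taken from the paper.

      import numpy as np

      def zone_features(img, grid=4):
          """img: 2-D binary numeral image. Returns 4*n features for n = grid*grid zones:
          average distance and average angle from the character centroid to zone pixels,
          plus each zone's own centroid (row, col); zeros for empty zones."""
          rows, cols = np.nonzero(img)
          cy, cx = rows.mean(), cols.mean()          # character centroid
          h, w = img.shape
          zh, zw = h // grid, w // grid
          feats = []
          for i in range(grid):
              for j in range(grid):
                  zone = img[i * zh:(i + 1) * zh, j * zw:(j + 1) * zw]
                  zr, zc = np.nonzero(zone)
                  if zr.size == 0:
                      feats.extend([0.0, 0.0, 0.0, 0.0])
                      continue
                  yy = zr + i * zh
                  xx = zc + j * zw
                  dist = np.mean(np.hypot(yy - cy, xx - cx))
                  angle = np.mean(np.arctan2(yy - cy, xx - cx))
                  feats.extend([dist, angle, zr.mean(), zc.mean()])
          return np.array(feats)

      # Tiny synthetic "numeral" for illustration.
      demo = np.zeros((32, 32), dtype=int)
      demo[4:28, 14:18] = 1
      print(zone_features(demo).shape)   # (64,) for a 4x4 grid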

  11. Cloud Detection Method Based on Feature Extraction in Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Changhui, Y.; Yuan, Y.; Minjing, M.; Menglu, Z.

    2013-05-01

    In remote sensing images, the presence of clouds has a great impact on image quality and on subsequent image processing, as regions covered with clouds contain little useful information. Therefore, the detection and recognition of clouds is one of the major problems in the application of remote sensing images. At present there are two categories of cloud detection methods. One sets spectral thresholds based on the characteristics of clouds to distinguish them; however, the instability and uncertainty of real clouds makes this kind of method complex and weakly adaptive. The other uses image features to identify clouds; since some features of clouds and ground overlap significantly, the detection result is highly dependent on the effectiveness of the features. This paper presents a cloud detection method based on feature extraction for remote sensing images. First, effective features are found through training: candidate features from the gray, frequency and texture domains are calculated for the training samples, and, through statistical analysis of all the features, the useful ones are picked to form a feature set. Concretely, the set includes three feature vectors: the gray feature vector, comprising the average gray level, variance, first-order difference, entropy and histogram; the frequency feature vector, comprising DCT high-frequency coefficients and wavelet high-frequency coefficients; and the texture feature vector, comprising the hybrid entropy and difference of the gray-gradient co-occurrence matrix and the image fractal dimension. Second, a thumbnail is obtained by down-sampling the original image and its gray, frequency and texture features are computed. Finally, the cloud region is judged by comparing the actual feature values against the thresholds.

  12. Product spectrum matrix feature extraction and recognition of radar deception jamming

    NASA Astrophysics Data System (ADS)

    Tian, Xiao; Tang, Bin; Gui, Guan

    2013-12-01

    A deception jamming recognition algorithm is proposed based on the product spectrum matrix (SPM). Firstly, the product spectrum in different pulse repetition intervals (PRIs) is calculated, and the product spectra over frequency and slow time are arranged into a two-dimensional matrix. Secondly, non-negative matrix factorisation (NMF) is used to extract the features, and the separability of the characteristic parameters is further analysed with the F-ratio. Finally, the best features are selected to recognise the deception jamming. The experimental results show that the average recognition accuracy of the proposed algorithm is higher than 90% when the SNR is greater than 6 dB.
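
    A hedged sketch of the NMF feature-extraction and F-ratio screening steps using scikit-learn. The random non-negative matrices and placeholder jamming labels below merely stand in for real product spectrum matrices.

      import numpy as np
      from sklearn.decomposition import NMF

      # Illustrative non-negative "product spectrum matrices": rows = slow time (PRIs),
      # columns = frequency bins; real data would come from the radar processing chain.
      rng = np.random.default_rng(5)
      spectra = np.abs(rng.standard_normal((60, 40, 128)))   # 60 samples of 40x128 matrices

      # Factorise each matrix and keep the flattened basis activations as features.
      nmf = NMF(n_components=4, init="nndsvda", max_iter=400, random_state=0)
      features = np.vstack([nmf.fit_transform(m).ravel() for m in spectra])

      def f_ratio(feat, labels):
          """Between-class over within-class variance for each feature dimension."""
          classes = np.unique(labels)
          means = np.array([feat[labels == c].mean(axis=0) for c in classes])
          between = means.var(axis=0)
          within = np.mean([feat[labels == c].var(axis=0) for c in classes], axis=0)
          return between / (within + 1e-12)

      labels = rng.integers(0, 2, size=60)        # placeholder jamming-type labels
      print(f_ratio(features, labels)[:5])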

  13. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 with upper and lower 95% confidence intervals of approximately ±0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were also compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed the highest accuracy. The comparison of model results to debris lines demonstrates that additional

  14. Topology-based Simplification for Feature Extraction from 3D Scalar Fields

    SciTech Connect

    Gyulassy, A; Natarajan, V; Pascucci, V; Bremer, P; Hamann, B

    2005-10-13

    This paper describes a topological approach for simplifying continuous functions defined on volumetric domains. We present a combinatorial algorithm that simplifies the Morse-Smale complex by repeated application of two atomic operations that remove pairs of critical points. The Morse-Smale complex is a topological data structure that provides a compact representation of gradient flows between critical points of a function. Critical points paired by the Morse-Smale complex identify topological features and their importance. The simplification procedure leaves important critical points untouched, and is therefore useful for extracting desirable features. We also present a visualization of the simplified topology.

  15. Complex Biological Event Extraction from Full Text using Signatures of Linguistic and Semantic Features

    SciTech Connect

    McGrath, Liam R.; Domico, Kelly O.; Corley, Courtney D.; Webb-Robertson, Bobbie-Jo M.

    2011-06-24

    Building on technical advances from the BioNLP 2009 Shared Task Challenge, the 2011 challenge sets forth to generalize techniques to other complex biological event extraction tasks. In this paper, we present the implementation and evaluation of a signature-based machine-learning technique to predict events from full texts of infectious disease documents. Specifically, our approach uses novel signatures composed of traditional linguistic features and semantic knowledge to predict event triggers and their candidate arguments. Using a leave-one-out analysis, we report the contribution of linguistic and shallow semantic features to trigger prediction and candidate argument extraction. Lastly, we examine the evaluations and posit causes of errors for the infectious disease track subtasks.

  16. Radiomics: extracting more information from medical images using advanced feature analysis.

    PubMed

    Lambin, Philippe; Rios-Velazquez, Emmanuel; Leijenaar, Ralph; Carvalho, Sara; van Stiphout, Ruud G P M; Granton, Patrick; Zegers, Catharina M L; Gillies, Robert; Boellard, Ronald; Dekker, André; Aerts, Hugo J W L

    2012-03-01

    Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy-based molecular assays but gives huge potential to medical imaging, which can capture intra-tumoural heterogeneity non-invasively. During the past decades, medical imaging innovations with new hardware, new imaging agents and standardised protocols have allowed the field to move towards quantitative imaging. This in turn requires the development of automated and reproducible analysis methodologies to extract more information from image-based features. Radiomics--the high-throughput extraction of large amounts of image features from radiographic images--addresses this problem and is one of the approaches that holds great promise but needs further validation in multi-centric settings and in the laboratory.

  17. Cartographic feature extraction with integrated SIR-B and Landsat TM images

    NASA Technical Reports Server (NTRS)

    Welch, R.; Ehlers, Manfred

    1988-01-01

    A digital cartographic multisensor image database of excellent geometry and improved resolution was created by registering SIR-B images to a rectified Landsat TM reference image and applying intensity-hue-saturation enhancement techniques. When evaluated against geodetic control, RMSE(XY) values of approximately ±20 m were noted for the composite SIR-B/TM images. The completeness of cartographic features extracted from the composite images exceeded those obtained from separate SIR-B and TM image data sets by approximately 10 and 25 percent, respectively, indicating that the composite images may prove suitable for planimetric mapping at a scale of 1:100,000 or smaller. At present, the most effective method for extracting cartographic information involves digitizing features directly from the image processing display screen.

  18. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, operating on images from video sequences, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction on the eye and nose images separately, then used a Multi-Layer Perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose and 97.25% for the whole face).

  19. Constructing New Biorthogonal Wavelet Type which Matched for Extracting the Iris Image Features

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.; Suhardjo; Susanto, Adhi

    2013-04-01

    Previous research has attempted to obtain new types of wavelets. For iris recognition using orthogonal or biorthogonal wavelets, the Haar filter had been found most suitable for recognizing iris images. However, a new wavelet should be designed that best matches the extraction of iris image features, so that it can readily be applied for identification, recognition, or authentication purposes. In this research, a new biorthogonal wavelet was designed based on the properties of the Haar filter and Haar's orthogonality conditions. As a result, a new biorthogonal 5/7 filter-type wavelet was obtained which performs better than other wavelets, including Haar, in extracting iris image features, as measured by mean-squared error (MSE) and Euclidean distance.

  20. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IoT) is a kind of intelligent network which can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, visual feature extraction and the establishment of visual tags for the human face are studied based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction, then adopt the support vector machine (SVM) for classification and face recognition, and finally establish a visual tag for each classified face. We conducted an experiment on a group of face images; the results show that the proposed algorithm performs well and can display the visual tags of objects conveniently.
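
    A compact scikit-learn sketch of the PCA-plus-SVM pipeline described above, using the bundled Olivetti faces (scikit-learn's packaging of the ORL face images) as stand-in data; the number of principal components and the SVM settings are illustrative assumptions.

      from sklearn.datasets import fetch_olivetti_faces
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # Olivetti faces: 400 images (64x64) of 40 subjects, flattened to 4096 features.
      faces = fetch_olivetti_faces()
      X_train, X_test, y_train, y_test = train_test_split(
          faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

      # PCA eigenfaces as the visual features, SVM as the classifier / "visual tag" assigner.
      model = make_pipeline(PCA(n_components=60, whiten=True, random_state=0),
                            SVC(kernel="rbf", C=10, gamma="scale"))
      model.fit(X_train, y_train)
      print("test accuracy:", model.score(X_test, y_test))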

  1. Geomorphological feature extraction from a digital elevation model through fuzzy knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Argialas, Demetre P.; Tzotsos, Angelos

    2003-03-01

    The objective of this research was the investigation of advanced image analysis methods for geomorphological mapping. Methods employed included multiresolution segmentation of the Digital Elevation Model (DEM) GTOPO30 and fuzzy knowledge-based classification of the segmented DEM into three geomorphological classes: mountain ranges, piedmonts and basins. The study area was a segment of the Basin and Range Physiographic Province in Nevada, USA. The implementation was made in eCognition. In particular, the segmentation of GTOPO30 resulted in primitive objects. The knowledge-based classification of the primitive objects, based on their elevation and shape parameters, resulted in the extraction of the geomorphological features. The resulting boundaries were found to be satisfactory in comparison with those of previous studies. It is concluded that geomorphological feature extraction can be carried out through fuzzy knowledge-based classification as implemented in eCognition.

  2. Special object extraction from medieval books using superpixels and bag-of-features

    NASA Astrophysics Data System (ADS)

    Yang, Ying; Rushmeier, Holly

    2017-01-01

    We propose a method to extract special objects in images of medieval books, which generally represent, for example, figures and capital letters. Instead of working on the single-pixel level, we consider superpixels as the basic classification units for improved time efficiency. More specifically, we classify superpixels into different categories/objects by using a bag-of-features approach, where a superpixel category classifier is trained with the local features of the superpixels of the training images. With the trained classifier, we are able to assign the category labels to the superpixels of a historical document image under test. Finally, special objects can easily be identified and extracted after analyzing the categorization results. Experimental results demonstrate that, as compared to the state-of-the-art algorithms, our method provides comparable performance for some historical books but greatly outperforms them in terms of generality and computational time.

  3. Automatic geomorphic feature extraction from lidar in flat and engineered landscapes

    NASA Astrophysics Data System (ADS)

    Passalacqua, Paola; Belmont, Patrick; Foufoula-Georgiou, Efi

    2012-03-01

    High-resolution topographic data derived from light detection and ranging (lidar) technology enables detailed geomorphic observations to be made on spatially extensive areas in a way that was previously not possible. Availability of this data provides new opportunities to study the spatial organization of landscapes and channel network features, increase the accuracy of environmental transport models, and inform decisions for targeting conservation practices. However, with the opportunity of increased resolution topographic data come formidable challenges in terms of automatic geomorphic feature extraction, analysis, and interpretation. Low-relief landscapes are particularly challenging because topographic gradients are low, and in many places both the landscape and the channel network have been heavily modified by humans. This is especially true for agricultural landscapes, which dominate the midwestern United States. The goal of this work is to address several issues related to feature extraction in flat lands by using GeoNet, a recently developed method based on nonlinear multiscale filtering and geodesic optimization for automatic extraction of geomorphic features (channel heads and channel networks) from high-resolution topographic data. Here we test the ability of GeoNet to extract channel networks in flat and human-impacted landscapes using 3 m lidar data for the Le Sueur River Basin, a 2880 km2 subbasin of the Minnesota River Basin. We propose a curvature analysis to differentiate between channels and manmade structures that are not part of the river network, such as roads and bridges. We document that Laplacian curvature more effectively distinguishes channels in flat, human-impacted landscapes compared with geometric curvature. In addition, we develop a method for performing automated channel morphometric analysis including extraction of cross sections, detection of bank locations, and identification of geomorphic bankfull water surface elevation. Using
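
    The Laplacian-curvature idea mentioned above can be illustrated with SciPy: smooth the DEM, take its Laplacian, and threshold strongly convergent cells as channel candidates. The toy DEM with one incised channel and one road-like embankment, the smoothing sigma and the quantile threshold are all illustrative assumptions, not GeoNet's actual implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter, laplace

      def laplacian_curvature(dem, smoothing_sigma=2.0):
          """Smooth the DEM, then return its Laplacian; positive values mark convergent
          (channel-like) cells and negative values divergent (ridge- or road-like) cells."""
          smoothed = gaussian_filter(dem.astype(float), sigma=smoothing_sigma)
          return laplace(smoothed)

      # Illustrative flat DEM with one shallow channel and one road-like embankment.
      y, x = np.mgrid[0:200, 0:200]
      dem = 0.002 * x                                   # gentle regional slope
      dem -= 1.5 * np.exp(-((y - 80) ** 2) / 50)        # incised channel
      dem += 1.0 * np.exp(-((y - 150) ** 2) / 20)       # raised road embankment

      curv = laplacian_curvature(dem)
      channel_like = curv > np.quantile(curv, 0.98)     # strongly convergent cells
      print("candidate channel cells:", int(channel_like.sum()))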

  4. Effects of LiDAR Derived DEM Resolution on Hydrographic Feature Extraction

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ames, D. P.; Glenn, N. F.; Anderson, D.

    2010-12-01

    This paper examines the effect of LiDAR-derived digital elevation model (DEM) resolution on digitally extracted stream networks with respect to known stream channel locations. Two study sites, Reynolds Creek Experimental Watershed (RCEW) and Dry Creek Experimental Watershed (DCEW), which represent terrain characteristics for lower and intermediate elevation mountainous watersheds in the Intermountain West, were selected as study areas for this research. DEMs reflecting bare earth ground were created from the LiDAR observations at a series of raster cell sizes (from 1 m to 60 m) using spatial interpolation techniques. The effect of DEM resolution on the resulting hydrographic feature (specifically stream channel) derivation was studied. Stream length, watershed area, and sinuosity were explored at each of the raster cell sizes. Also, variation from the known channel location, as estimated by root mean square error (RMSE) between the surveyed channel location and the extracted channel, was computed for each of the DEMs and extracted stream networks. As expected, the results indicate that the DEM-based hydrographic extraction process provides more detailed hydrographic features at a finer resolution. RMSE between the known channel location and modeled locations generally increased with larger cell size, with a greater effect in the larger RCEW. Sensitivity analyses on sinuosity demonstrated that the shape of streams obtained from LiDAR data matched the reference data best at an intermediate cell size (in the range of 5 to 10 m) rather than at the highest resolution, likely due to the original point spacing, terrain characteristics, and LiDAR noise. More importantly, the absolute sinuosity deviation displayed its smallest value at a cell size of 10 m in both experimental watersheds, which suggests that the optimal cell size for LiDAR-derived DEMs used for hydrographic feature extraction is 10 m.
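
    Sinuosity, one of the metrics compared above, is simply the along-channel path length divided by the straight-line distance between endpoints; a small NumPy helper (with a synthetic meandering channel as placeholder input) is sketched below.

      import numpy as np

      def sinuosity(xy):
          """Channel sinuosity: along-path length divided by straight-line (endpoint) distance.
          xy: (n, 2) array of extracted channel vertex coordinates in map units."""
          xy = np.asarray(xy, dtype=float)
          seg = np.diff(xy, axis=0)
          path_length = np.hypot(seg[:, 0], seg[:, 1]).sum()
          straight = np.hypot(*(xy[-1] - xy[0]))
          return path_length / straight

      # Illustrative meandering channel sampled as if digitised from a DEM-derived network.
      t = np.linspace(0, 4 * np.pi, 200)
      channel = np.column_stack([t * 25.0, 40.0 * np.sin(t)])
      print("sinuosity:", round(sinuosity(channel), 3))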

  5. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Much data mining adopts the form of an Artificial Neural Network (ANN) to solve problems, but training an ANN involves many issues, such as the number of labelled samples, the training time and performance, the number of hidden layers and the transfer function. If the compared results are not as expected, it is not clear which dimension causes the deviation, mainly because an ANN trains by modifying weights to fit the target values rather than by improving the original feature extraction algorithm of the image. To address these problems, this paper proposes a method to assist ANN-based image data analysis. Normally, a parameter is set as the value used to extract feature vectors when processing an image; we treat this parameter as a weight. The experiment uses the values extracted at Speeded Up Robust Features (SURF) feature points as the training basis, since SURF extracts different feature points depending on the extraction values. We first perform semi-supervised clustering on these values and use Modified K-Nearest Neighbors (MFKNN) for training and classification. The matching of unknown images is not a one-to-one complete comparison but only a comparison against group centroids, mainly to save computation and speed up retrieval, and the retrieved results are then observed and analyzed. The method clusters and classifies image feature points, assigns new values to groups with high error rates to produce new feature points, and feeds them into the input layer of the Artificial Neural Network for training; finally, a comparative analysis is made with a Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network

  6. Performance Comparison of Feature Extraction Algorithms for Target Detection and Classification

    DTIC Science & Technology

    2013-01-01

    A feature extraction algorithm, symbolic dynamic filtering (SDF), is investigated for target detection and classification using unmanned ground sensors (UGS); one cited application area is footstep detection and tracking (Succi et al., Unattended Ground Sensor Technologies and Applications III).

  7. Multi range spectral feature fitting for hyperspectral imagery in extracting oilseed rape planting area

    NASA Astrophysics Data System (ADS)

    Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin

    2013-12-01

    Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, SFF does not secure higher accuracy in extracting image information in all circumstances. Multi range spectral feature fitting (MRSFF) from the ENVI software allows the user to focus on the spectral features of interest to yield better performance, so the spectral wavelength ranges and their corresponding weights must be determined. The purpose of this article is to demonstrate the performance of MRSFF in extracting oilseed rape planting areas. A practical method for defining the weighted values, the variance coefficient weight method, was proposed as the criterion. Oilseed rape field canopy spectra from the whole growth stage were collected prior to investigating its phenological varieties, and oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples for analyzing the oilseed rape fields. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated with respect to the field-measured spectra from the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the integrity of the spectral profile. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). MRSFF yielded a robust, convincing result and, therefore, may further the use of hyperspectral imagery in precision agriculture.

  8. Identification and Classification of OFDM Based Signals Using Preamble Correlation and Cyclostationary Feature Extraction

    DTIC Science & Technology

    2009-09-01

    The rapidly advancing technologies of wireless communication networks are providing enormous opportunities for a large number of users in emerging markets. The base element of the 802.16 frame is the physical slot, of duration t_PS = 4/f_s, where f_s is the sampling frequency.

  9. Structural feature extraction protocol for classifying reversible membrane binding protein domains.

    PubMed

    Källberg, Morten; Lu, Hui

    2009-01-01

    Machine learning based classification protocols for automated function annotation of protein structures have in many instances proven superior to simpler sequence based procedures. Here we present an automated method for extracting features from protein structures by construction of surface patches to be used in such protocols. The utility of the developed patch-growing procedure is exemplified by its ability to identify reversible membrane binding domains from the C1, C2, and PH families.

  10. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Ahmad, Rana Fayyaz; Badruddin, Nasreen; Kamel, Nidal; Hussain, Muhammad; Chooi, Weng-Tink

    2015-03-01

    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to the EEG signals and the relative wavelet energy is calculated from the detail coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for classification. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task, Raven's advanced progressive matrices test, and (2) EEG signals recorded in a rest condition with eyes open. The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision. An accuracy above 98% was achieved by the support vector machine, multi-layer perceptron and K-nearest neighbor classifiers with the approximation (A4) and detail coefficients (D4), which represent the frequency ranges of 0.53-3.06 and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task with a high accuracy rate.
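
    A short PyWavelets sketch of the relative-wavelet-energy feature described above; the 128 Hz sampling rate, db4 wavelet and synthetic epoch are illustrative assumptions, so the resulting A4/D4 band edges differ slightly from the 0.53-3.06 Hz and 3.06-6.12 Hz ranges quoted.

      import numpy as np
      import pywt

      def relative_wavelet_energy(eeg, wavelet="db4", level=4):
          """Relative energy of the [A4, D4, D3, D2, D1] coefficients of one EEG channel."""
          coeffs = pywt.wavedec(eeg, wavelet, level=level)
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          return energies / energies.sum()

      # Placeholder EEG epoch: 2 s at 128 Hz (so A4 covers roughly 0-4 Hz, D4 roughly 4-8 Hz).
      fs = 128
      t = np.arange(0, 2, 1 / fs)
      epoch = np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.default_rng(2).standard_normal(t.size)
      features = relative_wavelet_energy(epoch)
      print(features)   # one feature per sub-band, summing to 1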

  11. EEMD Independent Extraction for Mixing Features of Rotating Machinery Reconstructed in Phase Space

    PubMed Central

    Ma, Zaichao; Wen, Guangrui; Jiang, Cheng

    2015-01-01

    Empirical Mode Decomposition (EMD), due to its adaptive decomposition property for non-linear and non-stationary signals, has been widely used in vibration analyses for rotating machinery. However, EMD suffers from mode mixing, which makes it difficult to extract features independently. Although an improved EMD, well known as ensemble EMD (EEMD), has been proposed, mode mixing is alleviated only to a certain degree, and EEMD still needs the amplitude of the added noise to be determined. In this paper, we propose Phase Space Ensemble Empirical Mode Decomposition (PSEEMD), integrating Phase Space Reconstruction (PSR) and Manifold Learning (ML) to modify EEMD. We provide the principle and detailed procedure of PSEEMD, and analyses of a simulated signal and an actual vibration signal derived from a rubbing rotor are performed. The results show that PSEEMD is more efficient and convenient than EEMD in extracting the mixed features from the investigated signal and in optimizing the amplitude of the necessary added noise. Additionally, PSEEMD can extract weak features contaminated by a certain amount of noise. PMID:25871723

  12. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    NASA Astrophysics Data System (ADS)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process, and the development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells, so it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, Haralick texture descriptors are computed with different spatial window sizes in the RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. The extracted features are compared across various sample sizes with Support Vector Machines using k-fold cross-validation. The results show that the separation accuracy between mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
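
    A hedged scikit-image sketch of windowed GLCM (Haralick-style) texture features around a candidate cell pixel; the window size, grey-level quantisation and property list are illustrative choices, and the functions are named graycomatrix/graycoprops in recent scikit-image releases (greycomatrix/greycoprops in older ones).

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

      def window_haralick(gray_img, center, half=16, levels=32):
          """Haralick-style GLCM statistics for a square window around a candidate cell pixel."""
          r, c = center
          win = gray_img[max(r - half, 0):r + half, max(c - half, 0):c + half]
          # Quantise to a small number of grey levels to keep the co-occurrence matrix compact.
          q = (win.astype(float) / 256 * levels).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          return np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy", "correlation")])

      # Illustrative 8-bit grey image standing in for one histopathology channel.
      rng = np.random.default_rng(3)
      image = rng.integers(0, 256, size=(256, 256))
      print(window_haralick(image, (128, 128)).shape)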

  13. Exploration of Genetic Programming Optimal Parameters for Feature Extraction from Remote Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Gao, P.; Shetty, S.; Momm, H. G.

    2014-11-01

    Evolutionary computation is used for improved information extraction from high-resolution satellite imagery. The utilization of evolutionary computation is based on stochastic selection of input parameters, often defined in a trial-and-error approach. However, exploration of optimal input parameters can yield improved candidate solutions while requiring reduced computational resources. In this study, the design and implementation of a system that investigates the optimal input parameters was researched for the problem of feature extraction from remotely sensed imagery. The two primary assessment criteria were the highest fitness value and the overall computational time. The parameters explored include the population size and the percentage and order of mutation and crossover. The proposed system has two major subsystems: (i) data preparation, the generation of random candidate solutions; and (ii) data processing, an evolutionary process based on genetic programming, which is used to spectrally distinguish the features of interest from the remaining image background of remotely sensed imagery. The results demonstrate that the optimal generation number is around 1500 and the optimal percentages of mutation and crossover range from 35% to 40% and 5% to 0%, respectively. Based on our findings, the sequence that yielded better results was mutation before crossover. These findings are conducive to improving the efficacy of utilizing genetic programming for feature extraction from remotely sensed imagery.

  14. Automatic Road Area Extraction from Printed Maps Based on Linear Feature Detection

    NASA Astrophysics Data System (ADS)

    Callier, Sebastien; Saito, Hideo

    Raster maps are widely available in everyday life and can contain a huge amount of information of any kind, using labels, pictograms, or color codes, for example. However, extracting roads from such maps is not an easy task because of these overlapping features. In this paper, we focus on an automated method to extract roads by using linear feature detection to search for seed points with a high probability of belonging to roads. The linear features are lines of pixels of homogeneous color in each direction around each pixel. The seeds are then expanded before choosing whether to keep or discard each extracted element. Because this method is not mainly based on color segmentation, it is also suitable for handwritten maps, for example. The experimental results demonstrate that in most cases our method gives results similar to usual methods without needing any previous data or user input, although it does need some knowledge of the target maps; it also works with handwritten maps drawn following some basic rules, whereas usual methods fail.

  15. Extracting Features from an Electrical Signal of a Non-Intrusive Load Monitoring System

    NASA Astrophysics Data System (ADS)

    Figueiredo, Marisa B.; de Almeida, Ana; Ribeiro, Bernardete; Martins, António

    Improving energy efficiency by monitoring household electrical consumption is of significant importance given present-day climate change concerns. One solution to the electrical consumption management problem is the use of a non-intrusive load monitoring system (NILM). This system captures the signals from the aggregate consumption, extracts features from these signals and classifies the extracted features in order to identify the switched-on appliances. Effective device identification (ID) requires a signature to be assigned to each appliance, and signal processing techniques are needed to extract the relevant features for specifying each device's ID. This paper describes a technique for recognising steady states in a digital electrical signal as the first stage in the implementation of an innovative NILM; the final goal is to develop an intelligent system for identifying appliances by automated learning. The proposed approach is based on the ratio between rectangular areas defined by the signal samples. The computational experiments show the method's effectiveness for accurate steady-state identification in the electrical input signals.

  16. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections.

    PubMed

    Dhou, Salam; Motai, Yuichi; Hugo, Geoffrey D

    2013-02-01

    Accounting for respiration motion during imaging can help improve targeting precision in radiation therapy. We propose local intensity feature tracking (LIFT), a novel markerless breath phase sorting method in cone beam computed tomography (CBCT) scan images. The contributions of this study are twofold. First, LIFT extracts the respiratory signal from the CBCT projections of the thorax depending only on tissue feature points that exhibit respiration. Second, the extracted respiratory signal is shown to correlate with standard respiration signals. LIFT extracts feature points in the first CBCT projection of a sequence and tracks those points in consecutive projections forming trajectories. Clustering is applied to select trajectories showing an oscillating behavior similar to the breath motion. Those "breathing" trajectories are used in a 3-D reconstruction approach to recover the 3-D motion of the lung which represents the respiratory signal. Experiments were conducted on datasets exhibiting regular and irregular breathing patterns. Results showed that LIFT-based respiratory signal correlates with the diaphragm position-based signal with an average phase shift of 1.68 projections as well as with the internal marker-based signal with an average phase shift of 1.78 projections. LIFT was able to detect the respiratory signal in all projections of all datasets.

  17. A parametric feature extraction and classification strategy for brain-computer interfacing.

    PubMed

    Burke, Dave P; Kelly, Simon P; de Chazal, Philip; Reilly, Richard B; Finucane, Ciarán

    2005-03-01

    Parametric modeling strategies are explored in conjunction with linear discriminant analysis for use in an electroencephalogram (EEG)-based brain-computer interface (BCI). A left/right self-paced typing exercise is analyzed by extending the usual autoregressive (AR) model for EEG feature extraction with an AR with exogenous input (ARX) model for combined filtering and feature extraction. The ensemble averaged Bereitschafts potential (an event related potential preceding the onset of movement) forms the exogenous signal input to the ARX model. Based on trials with six subjects, the ARX case of modeling both the signal and noise was found to be considerably more effective than modeling the noise alone (common in BCI systems), with the AR method yielding a classification accuracy of 52.8+/-4.8% and the ARX method an accuracy of 79.1+/-3.9% across subjects. The results suggest a role for ARX-based feature extraction in BCIs based on evoked and event-related potentials.
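
    A minimal sketch of AR-coefficient feature extraction followed by linear discriminant analysis, using a plain Yule-Walker estimate in NumPy and scikit-learn's LDA; the ARX extension with the averaged Bereitschafts potential as exogenous input is omitted, and the synthetic epochs are placeholders.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def ar_coefficients(x, order=6):
          """Yule-Walker estimate of AR coefficients for one EEG epoch (biased autocorrelation)."""
          x = x - x.mean()
          n = len(x)
          r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
          R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
          return np.linalg.solve(R, r[1:order + 1])

      # Two illustrative classes of synthetic epochs with different spectral content.
      rng = np.random.default_rng(4)
      def make_epoch(freq):
          t = np.arange(512) / 256.0
          return np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size)

      X = np.vstack([ar_coefficients(make_epoch(10)) for _ in range(30)] +
                    [ar_coefficients(make_epoch(22)) for _ in range(30)])
      y = np.array([0] * 30 + [1] * 30)

      lda = LinearDiscriminantAnalysis().fit(X, y)
      print("training accuracy:", lda.score(X, y))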

  18. Extracting product features and opinion words using pattern knowledge in customer reviews.

    PubMed

    Htay, Su Su; Lynn, Khin Thidar

    2013-01-01

    With the development of e-commerce and web technology, customers can write comments about purchased products on most online merchant sites. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, motivating the development of automatic opinion mining applications. Efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a novel idea for finding opinion words or phrases for each feature in customer reviews in an efficient way. Our focus is on obtaining the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product.

  19. Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews

    PubMed Central

    Lynn, Khin Thidar

    2013-01-01

    With the development of e-commerce and web technology, customers can write comments about purchased products on most online merchant sites. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, motivating the development of automatic opinion mining applications. Efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a novel idea for finding opinion words or phrases for each feature in customer reviews in an efficient way. Our focus is on obtaining the patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product. PMID:24459430

  20. Fine-Grain Feature Extraction from Malware's Scan Behavior Based on Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Eto, Masashi; Sonoda, Kotaro; Inoue, Daisuke; Yoshioka, Katsunari; Nakao, Koji

    Network monitoring systems that detect and analyze malicious activities, as well as respond to them, are becoming increasingly important. As malwares such as worms, viruses, and bots can inflict significant damage on both infrastructure and end users, technologies for identifying such propagating malwares are in great demand. In large-scale darknet monitoring we can see that malwares exhibit various kinds of scan patterns when choosing destination IP addresses. Since many of these oscillations appear to have a natural periodicity, as if they were signal waveforms, we considered applying a spectrum analysis methodology to extract malware features. Focusing on such scan patterns, this paper proposes a novel concept of malware feature extraction and a distinct analysis method named “SPectrum Analysis for Distinction and Extraction of malware features (SPADE)”. Through several evaluations using real scan traffic, we show that SPADE has the significant advantage of recognizing the similarities and dissimilarities between the same and different types of malwares.
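
    A small NumPy sketch of the spectrum-analysis idea behind SPADE: treat the sequence of destination addresses chosen by a scanner as a waveform and look for dominant periods in its FFT. The synthetic sweeping scanner and the way the sequence is encoded are illustrative assumptions, not SPADE's actual feature definition.

      import numpy as np

      def dominant_scan_periods(dst_sequence, top=3):
          """FFT of the mean-removed sequence of destination addresses chosen by a scanner,
          returning the strongest oscillation periods (in packets) and their spectral power."""
          x = np.asarray(dst_sequence, dtype=float)
          x = x - x.mean()
          spectrum = np.abs(np.fft.rfft(x))
          freqs = np.fft.rfftfreq(x.size, d=1.0)             # cycles per scan packet
          order = np.argsort(spectrum[1:])[::-1][:top] + 1   # skip the DC bin
          return [(1.0 / freqs[i], spectrum[i]) for i in order]

      # Placeholder scanner: sweeps the last octet with a period of 16 packets plus jitter.
      rng = np.random.default_rng(8)
      packets = (np.arange(1024) * 16) % 256 + rng.integers(0, 3, size=1024)
      for period, power in dominant_scan_periods(packets):
          print(f"period ~{period:.1f} packets, power {power:.1f}")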

  1. Extraction of enclosure culture area from SPOT-5 image based on texture feature

    NASA Astrophysics Data System (ADS)

    Tang, Wei; Zhao, Shuhe; Ma, Ronghua; Wang, Chunhong; Zhang, Shouxuan; Li, Xinliang

    2007-06-01

    The east Taihu Lake region is characterized by high-density and large areas of enclosure culture, which tend to cause eutrophication of the lake and worsen its water quality. This paper takes a 380×380 area of the east Taihu Lake image as an example and discusses an extraction method combining the texture features of a high-resolution image with spectral information. Firstly, we choose the best band combination of 1, 3, 4 according to the principles of maximal entropy combination and the OIF index. After band arithmetic and a principal component analysis (PCA) transformation, dimensionality reduction and data compression are achieved. Subsequently, textures of the first principal component image are analyzed using Gray Level Co-occurrence Matrices (GLCM) to obtain the contrast, entropy and mean statistics. The mean statistic is fixed as the optimal index and appropriate conditional extraction thresholds are determined. Finally, decision trees are established to extract the enclosure culture area. By combining spectral information with the spatial texture feature, we obtain a satisfactory extraction result and provide a technical reference for a wide-spread survey of enclosure culture areas.

  2. Feature extraction of kernel regress reconstruction for fault diagnosis based on self-organizing manifold learning

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Xu, Guanghua; Liu, Dan

    2013-09-01

    The feature space extracted from vibration signals with various faults is often nonlinear and of high dimension. Nonlinear dimensionality reduction methods, such as manifold learning, are currently available for extracting low-dimensional embeddings. However, these methods all rely on manual intervention and have shortcomings in stability and in suppressing disturbance noise. To extract features automatically, a manifold learning method with self-organizing mapping is introduced for the first time. Under the non-uniform sample distribution reconstructed by the phase space, the expectation maximization (EM) iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment (LTSA) algorithm is adopted to compress the high-dimensional phase space into a more truthful low-dimensional representation. Finally, the signal is reconstructed by kernel regression. Several typical cases, including the Lorenz system, an engine fault with a piston pin defect, and a bearing fault with an outer-race defect, are analyzed. Compared with LTSA and the continuous wavelet transform, the results show that the background noise can be fully restrained and the entire periodic repetition of impact components is well separated and identified. A new way to automatically and precisely extract the impulsive components from mechanical signals is proposed.
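
    The core dimensionality-reduction step can be sketched with off-the-shelf tools. The following is a minimal sketch assuming scikit-learn: it applies LTSA to a delay-embedded (phase-space reconstructed) signal; the paper's EM-based adaptive neighbourhood division and kernel-regression reconstruction are not reproduced, and the embedding parameters are illustrative.

```python
# Minimal sketch, assuming scikit-learn: LTSA on a delay-embedded signal.
# Neighbourhood size, embedding dimension and delay are illustrative values.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def delay_embed(x, dim=10, tau=2):
    """Reconstruct a phase space by time-delay embedding of a 1-D signal."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# Illustrative synthetic vibration-like signal
x = np.sin(np.linspace(0, 50 * np.pi, 4000)) + 0.1 * np.random.randn(4000)
X = delay_embed(x, dim=10, tau=2)

ltsa = LocallyLinearEmbedding(n_neighbors=20, n_components=3, method="ltsa")
Y = ltsa.fit_transform(X)   # low-dimensional embedding of the phase space
```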

  3. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received considerable attention in the fields of photogrammetry, medical image processing, etc. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem. Feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low contrast and edge response points; the disadvantage is the need to set the thresholds manually. The main idea of this paper is to obtain stable extrema via a machine learning algorithm. Firstly, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the set of simulated images of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known. Compared with the traditional matching process, which contains the unstable RANSAC method to obtain the affine transformation, this approach is more stable and accurate. Secondly, we calculate the stability value of the feature points using the image set and its affine transformations. Then we obtain the different feature properties of each feature point, such as DoG features, scales, edge point density, etc. These two form the training set, where the stability value is the dependent variable and the feature properties are the independent variables. At last, training is carried out with Rank-SVM, yielding a weight vector. In use, based on the feature properties of each point and the weight vector obtained by training, we compute the sort value of each feature point, which reflects its stability, and then sort the feature points. In conclusion, we applied both our algorithm and the original SIFT detector in a comparative test. Under different view changes, blurs and illuminations, the experimental results show that our algorithm is more efficient.

  4. DBSCAN-based ROI extracted from SAR images and the discrimination of multi-feature ROI

    NASA Astrophysics Data System (ADS)

    He, Xin Yi; Zhao, Bo; Tan, Shu Run; Zhou, Xiao Yang; Jiang, Zhong Jin; Cui, Tie Jun

    2009-10-01

    The purpose of the paper is to extract the region of interest (ROI) from coarsely detected synthetic aperture radar (SAR) images and to discriminate whether the ROI contains a target or not, so as to eliminate false alarms and prepare for target recognition. Automatic target clustering is one of the most difficult tasks in SAR-image automatic target recognition systems. Density-based spatial clustering of applications with noise (DBSCAN) relies on a density-based notion of clusters and is designed to discover clusters of arbitrary shape. DBSCAN is applied to SAR image processing here for the first time; it has many excellent features: only two relatively insensitive parameters (neighborhood radius and minimum number of points) are needed; clusters of arbitrary shape, which fit the coarsely detected SAR images, can be discovered; and calculation time and memory can be reduced. In the multi-feature ROI discrimination scheme, we extract several target features, including geometric features such as the area discriminator and the Radon-transform based target profile discriminator, distribution characteristics such as the EFF discriminator, and EM scattering properties such as the PPR discriminator. The synthesized judgment effectively eliminates the false alarms.
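
    The clustering stage can be sketched in a few lines. This is a minimal sketch assuming scikit-learn, not the authors' implementation: detected SAR pixels are grouped into candidate ROIs with DBSCAN, whose two parameters correspond to the neighborhood radius and minimum point count mentioned above; the values shown are illustrative only.

```python
# Minimal sketch, assuming scikit-learn: cluster coarsely detected SAR pixels
# into candidate ROIs with DBSCAN. eps and min_samples are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def extract_rois(detection_mask, eps=3.0, min_samples=10):
    """Group detected pixels into candidate ROIs; label -1 marks noise."""
    rows, cols = np.nonzero(detection_mask)
    coords = np.column_stack([rows, cols]).astype(float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    rois = [coords[labels == k] for k in set(labels) if k != -1]
    return rois   # each ROI is an (N, 2) array of pixel coordinates
```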

  5. Feature extraction from 3D lidar point clouds using image processing methods

    NASA Astrophysics Data System (ADS)

    Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming

    2011-10-01

    Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available for many users, and these users are therefore unable to experiment with the LiDAR point cloud data directly for extracting desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features like buildings, vegetated areas, parking lots and roads from LiDAR data using standard image processing tools, as such tools are relatively mature with many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster. Raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated in both a conceptual model and on a complex real-world case study, and its strengths and limitations are addressed.
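
    The first two stages (rasterisation and nDSM computation) can be sketched with standard scientific Python. This is a minimal sketch under assumed settings, not the authors' workflow: grid resolution, linear interpolation and the channel stack are illustrative choices.

```python
# Minimal sketch, assuming SciPy/NumPy: interpolate LiDAR points to a raster,
# form the normalized surface model (nDSM), and stack feature channels.
import numpy as np
from scipy.interpolate import griddata

def rasterize(points_xy, values, cell=1.0):
    """Interpolate scattered LiDAR attributes (height, intensity) to a grid."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    gx, gy = np.meshgrid(xi, yi)
    return griddata((x, y), values, (gx, gy), method="linear")

# points: (N, 3) first-return coordinates; ground_points: classified ground returns
# dsm = rasterize(points[:, :2], points[:, 2])
# dtm = rasterize(ground_points[:, :2], ground_points[:, 2])
# ndsm = dsm - dtm                               # height above ground
# image = np.dstack([ndsm, slope, intensity])    # multi-channel feature image
```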

  6. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    PubMed

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: first, it does not depend on direct recording of artifact signals, which then, e.g., have to be subtracted from the contaminated EEG; second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes.
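
    The overall pipeline (ICA unmixing, per-component features, LDA classification) can be sketched as follows. This is a minimal sketch assuming scikit-learn; the range-filter and LBP features computed on component maps in the paper are replaced here by placeholder amplitude statistics, and the whole setup is illustrative rather than the authors' implementation.

```python
# Minimal sketch, assuming scikit-learn: FastICA unmixing followed by LDA
# classification of per-component feature vectors (artifact vs. EEG signal).
# The feature function below is a placeholder, not the paper's image features.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def component_features(sources):
    """Placeholder per-component features (simple amplitude statistics)."""
    return np.column_stack([
        sources.std(axis=0),                                   # spread
        np.abs(sources).max(axis=0),                           # peak amplitude
        (np.diff(np.sign(sources), axis=0) != 0).mean(axis=0)  # zero-crossing rate
    ])

# eeg: (n_samples, n_channels) recording; labels from expert annotation
# sources = FastICA(n_components=20, random_state=0).fit_transform(eeg)
# X = component_features(sources)                 # one feature vector per IC
# clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
# artifact_ics = np.nonzero(clf.predict(X) == 1)[0]
```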

  7. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    NASA Astrophysics Data System (ADS)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout for improvement in detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural network.

  8. Extraction of features from sleep EEG for Bayesian assessment of brain development

    PubMed Central

    2017-01-01

    Brain development can be evaluated by experts analysing age-related patterns in sleep electroencephalograms (EEG). Natural variations in the patterns, noise, and artefacts affect the evaluation accuracy as well as experts’ agreement. The knowledge of predictive posterior distribution allows experts to estimate confidence intervals within which decisions are distributed. Bayesian approach to probabilistic inference has provided accurate estimates of intervals of interest. In this paper we propose a new feature extraction technique for Bayesian assessment and estimation of predictive distribution in a case of newborn brain development assessment. The new EEG features are verified within the Bayesian framework on a large EEG data set including 1,100 recordings made from newborns in 10 age groups. The proposed features are highly correlated with brain maturation and their use increases the assessment accuracy. PMID:28323852

  9. Handwritten Chinese character recognition based on supervised competitive learning neural network and block-based relative fuzzy feature extraction

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Wu, Shuanhu

    2005-02-01

    Offline handwritten Chinese character recognition is still a difficult problem because of large stroke variations, writing anomalies, and the difficulty of obtaining stroke ordering information. Generally, offline handwritten Chinese character recognition can be divided into two procedures: feature extraction for capturing handwritten Chinese character information, and feature classification for character recognition. In this paper, we propose a new Chinese character recognition algorithm. In the feature extraction part, we adopt an elastic mesh dividing method for extracting block features and their relative fuzzy features, which utilize the relationships between different strokes and the distribution probability of a stroke in its neighboring sub-blocks. In the recognition part, we construct a classifier based on a supervised competitive learning algorithm to train a competitive learning neural network with the extracted feature set. Experimental results show that the performance of our algorithm is encouraging and comparable to other algorithms.

  10. Concordance of computer-extracted image features with BI-RADS descriptors for mammographic mass margin

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Paramagul, Chintana; Nees, Alexis; Helvie, Mark; Shi, Jiazheng

    2008-03-01

    The purpose of this study was to develop and evaluate computer-extracted features for characterizing mammographic mass margins according to BI-RADS spiculated and circumscribed categories. The mass was automatically segmented using an active contour model. A spiculation measure for a pixel on the mass boundary was defined by using the angular difference between the image gradient vector and the normal to the mass, averaged over pixels in a spiculation search region. For the circumscribed margin feature, the angular difference between the principal eigenvector of the Hessian matrix and the normal to the mass was estimated in a band of pixels centered at each point on the boundary, and the feature was extracted from the resulting profile along the boundary. Three MQSA radiologists provided BI-RADS margin ratings for a data set of 198 regions of interest containing breast masses. The features were evaluated with respect to the individual radiologists' characterization using receiver operating characteristic (ROC) analysis, as well as with respect to that from the majority rule, in which a mass was labeled as spiculated (circumscribed) if it was characterized as such by 2 or 3 radiologists, and non-spiculated (non-circumscribed) otherwise. We also investigated the performance of the features for consensus masses, defined as those labeled as spiculated (circumscribed) or non-spiculated (non-circumscribed) by all three radiologists. When masses were labeled according to radiologists R1, R2, and R3 individually, the spiculation feature had an area Az under the ROC curve of 0.90+/-0.04, 0.90+/-0.03, and 0.88+/-0.03, respectively, while the circumscribed margin feature had an Az value of 0.77+/-0.04, 0.74+/-0.04, and 0.80+/-0.03, respectively. When masses were labeled according to the majority rule, the Az values for the spiculation and the circumscribed margin features were 0.92+/-0.03 and 0.80+/-0.03, respectively. When only the consensus masses were considered, the Az

  11. Extraction of spatial features in hyperspectral images based on the analysis of differential attribute profiles

    NASA Astrophysics Data System (ADS)

    Falco, Nicola; Benediktsson, Jon A.; Bruzzone, Lorenzo

    2013-10-01

    The new generation of hyperspectral sensors can provide images with high spectral and spatial resolution. Recent improvements in mathematical morphology have produced new techniques, such as Attribute Profiles (APs) and Extended Attribute Profiles (EAPs), that can effectively model the spatial information in remote sensing images. The main drawbacks of these techniques are the selection of the optimal range of values for the family of criteria adopted at each filter step, and the high dimensionality of the profiles, which results in a very large number of features and therefore provokes the Hughes phenomenon. In this work, we focus on addressing the dimensionality issue, which leads to a high intrinsic information redundancy, proposing a novel strategy for extracting spatial information from hyperspectral images based on the analysis of Differential Attribute Profiles (DAPs). A DAP is generated by computing the derivative of the AP; it shows at each level the residual between two adjacent levels of the AP. By analyzing the multilevel behavior of the DAP, it is possible to extract geometrical features corresponding to the structures within the scene at different scales. Our proposed approach consists of two steps: 1) a homogeneity measurement is used to identify the level L at which a given pixel belongs to a region with a physical meaning; 2) the geometrical information of the extracted regions is fused into a single map considering their previously identified level L. The process is repeated for different attributes, building a reduced EAP whose dimensionality is much lower than that of the original EAP. Experiments carried out on the hyperspectral data set of the Pavia University area show the effectiveness of the proposed method in extracting spatial features related to the physical structures present in the scene, achieving higher classification accuracy than the results reported in the state-of-the-art literature.

  12. Fault feature extraction and enhancement of rolling element bearing in varying speed condition

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Zhang, W.; Qin, Z. Y.; Chu, F. L.

    2016-08-01

    In engineering applications, the variability of load usually changes the shaft speed, which further degrades the efficacy of diagnostic methods based on the hypothesis of constant-speed analysis. Therefore, the investigation of diagnostic methods suitable for varying speed conditions is significant for bearing fault diagnosis. In this paper, a novel fault feature extraction and enhancement procedure is proposed that combines iterative envelope analysis with a low pass filtering operation. At first, based on the analytical model of the collected vibration signal, the envelope signal was theoretically calculated and the iterative envelope analysis was improved for the varying speed condition. Then, a feature enhancement procedure was performed by applying a low pass filter to the temporal envelope obtained by the iterative envelope analysis. Finally, the temporal envelope signal was transformed to the angular domain by computed order tracking and the fault feature was extracted from the squared envelope spectrum. Simulations and experiments were used to validate the efficacy of the theoretical analysis and the proposed procedure. It is shown that the computed order tracking method is recommended to be applied to the envelope of the signal in order to avoid energy spreading and amplitude distortion. Compared with the feature enhancement method performed by the fast kurtogram and the corresponding optimal band pass filtering, the proposed method can efficiently extract the fault character in the varying speed condition with less amplitude attenuation. Furthermore, since it does not involve center frequency estimation, the proposed method is more concise for engineering applications.
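
    A single envelope-extraction pass followed by the low-pass smoothing step can be sketched as follows. This is a minimal sketch assuming SciPy; the iterative envelope analysis, order tracking and squared-envelope spectrum of the paper are not reproduced, and the cutoff frequency is an illustrative assumption.

```python
# Minimal sketch, assuming SciPy: Hilbert-transform envelope of a vibration
# signal followed by zero-phase low-pass filtering. Cutoff is illustrative.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def smoothed_envelope(x, fs, cutoff=200.0):
    """Amplitude envelope of a vibration signal, low-pass filtered."""
    envelope = np.abs(hilbert(x))                   # analytic-signal magnitude
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, envelope)                 # zero-phase low-pass filter
```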

  13. Feature extraction techniques using multivariate analysis for identification of lung cancer volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Thriumani, Reena; Zakaria, Ammar; Hashim, Yumi Zuhanis Has-Yun; Helmy, Khaled Mohamed; Omar, Mohammad Iqbal; Jeffree, Amanina; Adom, Abdul Hamid; Shakaff, Ali Yeon Md; Kamarudin, Latifah Munirah

    2017-03-01

    In this experiment, three different cell cultures (A549, WI38VA13 and MCF7) and a blank medium (without cells) as a control were used. An electronic nose (E-Nose) was used to sniff the headspace of the cultured cells and the data were recorded. After data pre-processing, two different feature sets were extracted, taking into consideration both steady state and transient information. The extracted data were then processed by a multivariate analysis, Linear Discriminant Analysis (LDA), to provide visualization of the clustering vector information in the multi-sensor space. A Probabilistic Neural Network (PNN) classifier was used to test the performance of the E-Nose in determining the volatile organic compounds (VOCs) of the lung cancer cell line. The LDA data projection was able to differentiate between the lung cancer cell samples and the other samples (breast cancer, normal cells and blank medium) effectively. The features extracted from the steady state response reached a 100% classification rate, while the transient response, with the aid of the LDA dimension reduction method, also produced 100% classification performance using the PNN classifier with a spread value of 0.1. The results also show that the E-Nose, with the aid of multivariate analysis, is a promising technique to be applied to real patients in further work and can serve as an alternative to current lung cancer diagnostic methods.

  14. Extracting energetically dominant flow features in a complicated fish wake using singular-value decomposition

    NASA Astrophysics Data System (ADS)

    Ting, Shang-Chieh; Yang, Jing-Tang

    2009-04-01

    We developed a method to extract the energetically dominant flow features in a complicated fish wake according to an energetic point of view, and applied singular-value decomposition (SVD) to two-dimensional instantaneous fluid velocity, vorticity and λ2 (vortex-detector) data. We demonstrate the effectiveness and merits of the use of SVD through an example regarding the wake of a fish executing a fast-start turn. The energy imparted into the water by a swimming fish is captured and portrayed through SVD. The analysis and interpretation of complicated data for the fish wake are greatly improved, and thus help to characterize more accurately a complicated fish wake. The velocity vectors and Galilean invariants (i.e., vorticity and λ2) resulting from SVD extraction are significantly helpful in recognizing the energetically dominant large-scale flow features. To obtain successful SVD extractions, we propose useful criteria based on the Froude propulsion efficiency, which is biologically and physically related. We also introduce a novel and useful method to deduce the topology of dominant flow motions in an instantaneous fish flow field, which is based on combined use of the topological critical-point theory and SVD. The concept and approach proposed in this work are useful and adaptable in biomimetic and biomechanical research concerning the fluid dynamics of a self-propelled body.
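
    The central SVD step can be sketched compactly. This is a minimal sketch assuming NumPy, not the authors' processing chain: instantaneous velocity (or vorticity) fields are flattened into the columns of a snapshot matrix and only the leading singular modes, which carry most of the energy, are retained; the retained rank is an illustrative choice.

```python
# Minimal sketch, assuming NumPy: SVD of a snapshot matrix of flattened
# instantaneous flow fields; keep the leading, energetically dominant modes.
import numpy as np

def dominant_modes(snapshots, r=3):
    """snapshots: (n_points, n_times) matrix of flattened velocity fields."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy = s**2 / np.sum(s**2)                    # energy fraction per mode
    reconstruction = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
    return U[:, :r], energy[:r], reconstruction
```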

  15. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    PubMed

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A(2) SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  16. Chemical name extraction based on automatic training data generation and rich feature set.

    PubMed

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of obtaining a sizable, good-quality data set to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.

  17. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    PubMed Central

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336

  18. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

    Nonnegative Tucker3 decomposition (NTD) has attracted much attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration circumstances encountered in the field of mechanical fault diagnosis. To decompose a large-scale tensor and extract available bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from O(n^N lg n) in 3D space to O(R_1 R_2 n lg n) in 1D vectors due to the low rank form of the Tucker-product convolution. Meanwhile, a simultaneously updating algorithm is given to overcome the overfitting, slow convergence and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more inerratic feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method compared with the other methods in bispectrum feature extraction, and a legible fault expression can also be obtained by the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF achieves 81.66 dB against 15.17 dB by beta-divergence based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, which is far less than the 1747.9 s by hierarchical alternative least squares based NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency but also can be used to extract more inerratic and sparser bispectrum features of the gearbox fault.

  19. Solid waste bin level detection using gray level co-occurrence matrix feature extraction approach.

    PubMed

    Arebey, Maher; Hannan, M A; Begum, R A; Basri, Hassan

    2012-08-15

    This paper presents solid waste bin level detection and classification using gray level co-occurrence matrix (GLCM) feature extraction methods. GLCM parameters, such as the displacement d, the quantization G, and the number of textural features, are investigated to determine the best parameter values for the bin images. The parameter values and number of texture features are used to form the GLCM database. The most appropriate features collected from the GLCM are then used as inputs to multi-layer perceptron (MLP) and K-nearest neighbor (KNN) classifiers for bin image classification and grading. The classification and grading performance for the DB1, DB2 and DB3 feature sets was evaluated with both MLP and KNN classifiers. The results demonstrated that the KNN classifier, at K = 3, d = 1 and maximum G values, performs better than the MLP classifier with the same database. Based on the results, this method has the potential to be used in solid waste bin level classification and grading to provide a robust solution for solid waste bin level detection, monitoring and management.
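
    The GLCM feature extraction and KNN classification steps can be sketched with standard libraries. This is a minimal sketch assuming scikit-image and scikit-learn (graycomatrix/graycoprops are the function names in recent scikit-image versions); the displacement d, quantization G and feature set below are illustrative stand-ins for the parameters investigated in the paper.

```python
# Minimal sketch, assuming scikit-image and scikit-learn: GLCM texture
# features of a grayscale bin image, then a K = 3 nearest-neighbor classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(image, d=1, levels=32):
    """GLCM texture features of a grayscale image quantized to `levels`."""
    q = (image.astype(float) / image.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[d], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.array([glcm_features(img) for img in bin_images])
# clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)   # K = 3
```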

  20. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which leads to a challenging problem for current bearing fault diagnosis techniques. Moreover, although much progress in sparse representation theory has been made in the feature extraction of fault information, the theory also confronts inevitable performance degradation due to the fact that relatively weak fault information does not have a sufficiently prominent and sparse representation. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work adequately exploits the underlying prior information that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term and a sparse optimization problem, which can be solved through the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of the feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental rig of aircraft engine bearings.

  1. Detecting abnormality in optic nerve head images using a feature extraction analysis.

    PubMed

    Zhu, Haogang; Poostchi, Ali; Vernon, Stephen A; Crabb, David P

    2014-07-01

    Imaging and evaluation of the optic nerve head (ONH) plays an essential part in the detection and clinical management of glaucoma. The morphological characteristics of ONHs vary greatly from person to person and this variability means it is difficult to quantify them in a standardized way. We developed and evaluated a feature extraction approach using shift-invariant wavelet packet and kernel principal component analysis to quantify the shape features in ONH images acquired by scanning laser ophthalmoscopy (Heidelberg Retina Tomograph [HRT]). The methods were developed and tested on 1996 eyes from three different clinical centers. A shape abnormality score (SAS) was developed from extracted features using a Gaussian process to identify glaucomatous abnormality. SAS can be used as a diagnostic index to quantify the overall likelihood of ONH abnormality. Maps showing areas of likely abnormality within the ONH were also derived. Diagnostic performance of the technique, as estimated by ROC analysis, was significantly better than the classification tools currently used in the HRT software - the technique offers the additional advantage of working with all images and is fully automated.

  2. Lumbar Ultrasound Image Feature Extraction and Classification with Support Vector Machine.

    PubMed

    Yu, Shuang; Tan, Kok Kiong; Sng, Ban Leong; Li, Shengjin; Sia, Alex Tiong Heng

    2015-10-01

    Needle entry site localization remains a challenge for procedures that involve lumbar puncture, for example, epidural anesthesia. To solve the problem, we have developed an image classification algorithm that can automatically identify the bone/interspinous region for ultrasound images obtained from lumbar spine of pregnant patients in the transverse plane. The proposed algorithm consists of feature extraction, feature selection and machine learning procedures. A set of features, including matching values, positions and the appearance of black pixels within pre-defined windows along the midline, were extracted from the ultrasound images using template matching and midline detection methods. A support vector machine was then used to classify the bone images and interspinous images. The support vector machine model was trained with 1,040 images from 26 pregnant subjects and tested on 800 images from a separate set of 20 pregnant patients. A success rate of 95.0% on training set and 93.2% on test set was achieved with the proposed method. The trained support vector machine model was further tested on 46 off-line collected videos, and successfully identified the proper needle insertion site (interspinous region) in 45 of the cases. Therefore, the proposed method is able to process the ultrasound images of lumbar spine in an automatic manner, so as to facilitate the anesthetists' work of identifying the needle entry site.
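
    The final classification stage can be sketched with a standard toolkit. This is a minimal sketch assuming scikit-learn, not the authors' tuned system: pre-extracted feature vectors (template-matching scores, positions, dark-pixel counts along the midline) are fed to a support vector machine; the kernel and parameter values are illustrative.

```python
# Minimal sketch, assuming scikit-learn: SVM separating bone images from
# interspinous images given pre-extracted feature vectors. Parameters are
# illustrative, not the paper's tuned values.
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X_train: (n_images, n_features) feature vectors
# y_train: 0 = bone image, 1 = interspinous image
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
# model.fit(X_train, y_train)
# y_pred = model.predict(X_test)       # candidate needle-entry frames
```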

  3. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
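
    The block-DCT feature extraction with zigzag coefficient selection can be sketched as follows. This is a minimal sketch assuming SciPy, not the authors' implementation: the block size, number of retained coefficients, and normalization are illustrative assumptions.

```python
# Minimal sketch, assuming SciPy: 2-D DCT-II on non-overlapping 8x8 blocks of
# the segmented palm region, keeping the first k coefficients in JPEG-style
# zigzag order per block and concatenating them into one feature vector.
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n):
    """(row, col) pairs of an n x n block in JPEG zigzag scan order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def block_dct_features(roi, block=8, k=10):
    """Concatenate low/medium-frequency DCT-II coefficients of each block."""
    h, w = (roi.shape[0] // block) * block, (roi.shape[1] // block) * block
    zz = zigzag_indices(block)[:k]
    feats = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs = dctn(roi[r:r + block, c:c + block], norm="ortho")
            feats.extend(coeffs[i, j] for i, j in zz)
    return np.asarray(feats)
```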

  4. Wood Texture Features Extraction by Using GLCM Combined With Various Edge Detection Methods

    NASA Astrophysics Data System (ADS)

    Fahrurozi, A.; Madenda, S.; Ernastuti; Kerami, D.

    2016-06-01

    An image with a specific texture can be distinguished manually by eye. However, this is sometimes difficult to do when the textures are quite similar. Wood is a natural material that forms a unique texture. Experts can distinguish the quality of wood based on the texture observed in certain parts of the wood. In this study, texture features of wood images have been extracted that can be used to identify the characteristics of wood digitally by computer. Feature extraction was carried out using Gray Level Co-occurrence Matrices (GLCM) built on images resulting from several edge detection methods applied to the wood images. The edge detection methods used include Roberts, Sobel, Prewitt, Canny and Laplacian of Gaussian. The wood images were taken in the LE2i laboratory, Universite de Bourgogne, from wood samples in France that were grouped by quality by experts and divided into four quality types. A statistic was obtained that illustrates the distribution of texture feature values for each wood type, compared according to the edge operator used and the selected GLCM parameters.

  5. Consistent Feature Extraction From Vector Fields: Combinatorial Representations and Analysis Under Local Reference Frames

    SciTech Connect

    Bhatia, Harsh

    2015-05-01

    This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields—an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single “correct” reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, thus creating serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of “correctness” of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The resulting numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis along with the uncertainty

  6. Robo-Psychophysics: Extracting Behaviorally Relevant Features from the Output of Sensors on a Prosthetic Finger.

    PubMed

    Delhaye, Benoit P; Schluter, Erik W; Bensmaia, Sliman J

    2016-01-01

    Efforts are underway to restore sensorimotor function in amputees and tetraplegic patients using anthropomorphic robotic hands. For this approach to be clinically viable, sensory signals from the hand must be relayed back to the patient. To convey tactile feedback necessary for object manipulation, behaviorally relevant information must be extracted in real time from the output of sensors on the prosthesis. In the present study, we recorded the sensor output from a state-of-the-art bionic finger during the presentation of different tactile stimuli, including punctate indentations and scanned textures. Furthermore, the parameters of stimulus delivery (location, speed, direction, indentation depth, and surface texture) were systematically varied. We developed simple decoders to extract behaviorally relevant variables from the sensor output and assessed the degree to which these algorithms could reliably extract these different types of sensory information across different conditions of stimulus delivery. We then compared the performance of the decoders to that of humans in analogous psychophysical experiments. We show that straightforward decoders can extract behaviorally relevant features accurately from the sensor output and most of them outperform humans.

  7. Ion acoustic solitary waves and double layers in a plasma with two temperature electrons featuring Tsallis distribution

    SciTech Connect

    Shalini, Saini, N. S.

    2014-10-15

    The propagation properties of large amplitude ion acoustic solitary waves (IASWs) are studied in a plasma containing cold fluid ions and multi-temperature electrons (cool and hot electrons) with nonextensive distribution. Employing Sagdeev pseudopotential method, an energy balance equation has been derived and from the expression for Sagdeev potential function, ion acoustic solitary waves and double layers are investigated numerically. The Mach number (lower and upper limits) for the existence of solitary structures is determined. Positive as well as negative polarity solitary structures are observed. Further, conditions for the existence of ion acoustic double layers (IADLs) are also determined numerically in the form of the critical values of q_c, f and the Mach number (M). It is observed that the nonextensivity of electrons (via q_{c,h}), concentration of electrons (via f) and temperature ratio of cold to hot electrons (via β) significantly influence the characteristics of ion acoustic solitary waves as well as double layers.

  8. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high resolution information about trees, street features and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contained a large diversity of tree species. The MLS data consisted of high density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The robust segmentation method contains the following steps: the ground points are determined first. As a second step, cylinders are fitted in a vertical slice 1-1.5 m in relative height above ground, which is used to determine the potential location of each single tree trunk and cylinder-like object. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted objects are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter and volume of the individual tree. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the initial point for classifying trees into single species. MLS data used in this project had been measured in the framework of

  9. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and

  10. System and method for investigating sub-surface features of a rock formation with acoustic sources generating conical broadcast signals

    DOEpatents

    Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt; Johnson, Paul A.; Guyer, Robert; Ten Cate, James A.; Le Bas, Pierre -Yves; Larmat, Carene S.

    2015-08-18

    A method of interrogating a formation includes generating a first conical acoustic signal at a first frequency and a second conical acoustic signal at a second frequency, each between approximately 500 Hz and 500 kHz, such that the signals intersect in a desired intersection volume outside the borehole. The method further includes receiving a difference signal returning to the borehole resulting from a non-linear mixing of the signals in a mixing zone within the intersection volume.

  11. Fault detection and classification in chemical processes based on neural networks with feature extraction.

    PubMed

    Zhou, Yifeng; Hahn, Juergen; Mannan, M Sam

    2003-10-01

    Feed forward neural networks are investigated here for fault diagnosis in chemical processes, especially batch processes. The use of the neural model prediction error as the residual for fault diagnosis of sensor and component is analyzed. To reduce the training time required for the neural process model, an input feature extraction process for the neural model is implemented. An additional radial basis function neural classifier is developed to isolate faults from the residual generated, and results are presented to demonstrate the satisfactory detection and isolation of faults using this approach.

  12. Image feature extraction with various wavelet functions in a photorefractive joint transform correlator

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Cartwright, C. M.; Ding, M. S.; Gillespie, W. A.

    2000-11-01

    The wavelet transform has found many uses in the field of optics. We present an experimental realization that employs various wavelet filters in the object space of a photorefractive joint transform correlator to realize image feature extraction. The Haar wavelet, Roberts gradient and Mexican-hat wavelet are employed in the experiment. Because of its good optical properties, the photorefractive crystal Bi12SiO20 is used as the dynamic holographic medium in the Fourier plane. Both the scene and the reference have been detour-phase encoded in a liquid crystal television in the input plane. Computer simulations, experimental results and analysis are presented.

  13. LiDAR DTMs and anthropogenic feature extraction: testing the feasibility of geomorphometric parameters in floodplains

    NASA Astrophysics Data System (ADS)

    Sofia, G.; Tarolli, P.; Dalla Fontana, G.

    2012-04-01

    resolution topography have been proven to be reliable for feasible applications. The use of statistical operators as thresholds for these geomorphic parameters, furthermore, has shown high reliability for feature extraction in mountainous environments. The goal of this research is to test whether these morphological indicators and objective thresholds can also be feasible in floodplains, where features assume different characteristics and other artificial disturbances might be present. In this work, three different geomorphic parameters are tested and applied at different scales on a LiDAR DTM of a typical alluvial plain area in the north-east of Italy. The box-plot is applied to identify the threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the final results. The effectiveness of the different geomorphic parameters is analyzed by comparing automatically derived features with the surveyed ones. The results highlight the capability of high resolution topography, geomorphic indicators and statistical thresholds for anthropogenic feature extraction and characterization in a floodplain context.

  14. Signal feature extraction by multi-scale PCA and its application to respiratory sound classification.

    PubMed

    Xie, Shengkun; Jin, Feng; Krishnan, Sridhar; Sattar, Farook

    2012-07-01

    Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system by the presence of adventitious sounds. Although many studies have addressed the problem of pathological RS classification, only a limited number of scientific works have focused in multi-scale analysis. This paper proposes a new signal classification scheme for various types of RS based on multi-scale principal component analysis as a signal enhancement and feature extraction method to capture major variability of Fourier power spectra of the signal. Since we classify RS signals in a high dimensional feature subspace, a new classification method, called empirical classification, is developed for further signal dimension reduction in the classification step and has been shown to be more robust and outperform other simple classifiers. An overall accuracy of 98.34% for the classification of 689 real RS recording segments shows the promising performance of the presented method.
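
    A rough analogue of the multi-scale feature extraction step can be sketched with common libraries. This is a minimal sketch assuming PyWavelets and scikit-learn, not the authors' multi-scale PCA scheme: each recording is decomposed into wavelet subbands, per-subband power spectra are concatenated, and PCA compresses the result; wavelet choice, level and component counts are illustrative.

```python
# Minimal sketch, assuming PyWavelets and scikit-learn: wavelet subband power
# spectra as multi-scale features, compressed by PCA. All settings illustrative.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def multiscale_spectra(signals, wavelet="db4", level=4, n_fft=256):
    """One row per recording: concatenated power spectra of wavelet subbands."""
    rows = []
    for x in signals:
        coeffs = pywt.wavedec(x, wavelet, level=level)
        spectra = [np.abs(np.fft.rfft(c, n_fft)) ** 2 for c in coeffs]
        rows.append(np.hstack(spectra))
    return np.vstack(rows)

# X = multiscale_spectra(recordings)
# features = PCA(n_components=20).fit_transform(X)   # compact feature vectors
```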

  15. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    Gearbox is one of the most vulnerable subsystems in wind turbines. Its healthy status significantly affects the efficiency and function of the entire system. Vibration based fault diagnosis methods are prevalently applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structure geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for the early stage faults. This paper utilizes synchronous averaging technique in time-frequency domain to remove the non-synchronous noise and enhance the fault related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective to classify and identify different gear faults.

  16. Feature extraction and band selection methods for hyperspectral imagery applied for identifying defects

    NASA Astrophysics Data System (ADS)

    Cheng, Xuemei; Yang, Tao; Chen, Yud-Ren; Chen, Xin

    2005-11-01

    An important task in hyperspectral data processing is to reduce the redundancy of the spectral and spatial information without losing any valuable details that are needed for the subsequent detection, discrimination and classification processes. Band selection and combination not only serves as the first step of hyperspectral data processing, leading to a significant decrease in computational complexity in the successive procedures, but also acts as a research tool for determining optimal spectral requirements for different online applications. In order to uniquely characterize the materials of interest, band selection criteria for optimal bands were defined. An integrated PCA and Fisher linear discriminant (FLD) method has been developed based on these criteria and used for hyperspectral feature band selection and combination. This method has been compared with other feature extraction and selection methods when applied to detecting apple defects, and the performance of each method was evaluated and compared based on the detection results.

  17. A fuzzy rule base system for object-based feature extraction and classification

    NASA Astrophysics Data System (ADS)

    Jin, Xiaoying; Paswaters, Scott

    2007-04-01

    In this paper, we present a fuzzy rule base system for object-based feature extraction and classification on remote sensing imagery. First, the object primitives are generated from the segmentation step. Object primitives are defined as individual regions with a set of attributes computed on the regions. The attributes computed include spectral, texture and shape measurements. Crisp rules are very intuitive to the users. They are usually represented as "GT (greater than)", "LT (less than)" and "IB (In Between)" with numerical values. The features can be manually generated by querying on the attributes using these crisp rules and monitoring the resulting selected object primitives. However, the attributes of different features are usually overlapping. The information is inexact and not suitable for traditional digital on/off decisions. Here a fuzzy rule base system is built to better model the uncertainty inherent in the data and vague human knowledge. Rather than representing attributes in linguistic terms like "Small", "Medium", "Large", we proposed a new method for automatic fuzzification of the traditional crisp concepts "GT", "LT" and "IB". Two sets of membership functions are defined to model those concepts. One is based on the piecewise linear functions, the other is based on S-type membership functions. A novel concept "fuzzy tolerance" is proposed to control the degree of fuzziness of each rule. The experimental results on classification and extracting features such as water, buildings, trees, fields and urban areas have shown that this newly designed fuzzy rule base system is intuitive and allows users to easily generate fuzzy rules.
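
    The fuzzification of the crisp GT/LT/IB rules can be illustrated with simple membership functions. This is a minimal sketch assuming NumPy and a sigmoid-shaped S-type membership; the exact membership shapes and the "fuzzy tolerance" parameterization used by the authors may differ.

```python
# Minimal sketch, assuming NumPy: sigmoid-shaped membership functions that
# fuzzify crisp GT / LT / IB rules on a region attribute; `tol` plays the role
# of a tolerance controlling the degree of fuzziness (illustrative only).
import numpy as np

def mu_gt(x, threshold, tol):
    """Fuzzy 'greater than': near 0 well below the threshold, near 1 above."""
    return 1.0 / (1.0 + np.exp(-(x - threshold) / tol))

def mu_lt(x, threshold, tol):
    """Fuzzy 'less than' as the complement of 'greater than'."""
    return 1.0 - mu_gt(x, threshold, tol)

def mu_ib(x, low, high, tol):
    """Fuzzy 'in between': conjunction of 'greater than low' and 'less than high'."""
    return mu_gt(x, low, tol) * mu_lt(x, high, tol)

# Example: membership of a region with area 520 px in the rule "area IB 400..600"
# degree = mu_ib(np.array([520.0]), low=400, high=600, tol=25)
```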

  18. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition.

  19. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose efforts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high-resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's defective, and therefore imprecise but still approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform's three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and data set demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of

  20. Learning object location predictors with boosting and grammar-guided feature extraction

    SciTech Connect

    Eads, Damian Ryan; Rosten, Edward; Helmbold, David

    2009-01-01

    The authors present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, they introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, they learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, they perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high quality set of (x,y) locations. Lastly, they carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. They demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.

  1. Surface roughness extraction based on Markov random field model in wavelet feature domain

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Lei, Li-qiao

    2014-12-01

    Based on the computer texture analysis method, a new noncontact surface roughness measurement technique is proposed. The method is inspired by the nonredundant directional selectivity and highly discriminative nature of the wavelet representation and the capability of the Markov random field (MRF) model to capture statistical regularities. Surface roughness information contained in the texture features may be extracted based on an MRF stochastic model of textures in the wavelet feature domain. The model captures significant intrascale and interscale statistical dependencies between wavelet coefficients. To investigate the relationship between the texture features and surface roughness Ra, a simple research setup, which consists of a charge-coupled device (CCD) camera without a lens and a diode laser, was established, and laser speckle texture patterns were acquired from several standard grinding surfaces. The research results have illustrated that surface roughness Ra has a good monotonic relationship with the texture features of the laser speckle pattern. If this measuring system is calibrated beforehand with standard surface roughness samples, the actual surface roughness value Ra can be deduced for surfaces of the same material ground under the same manufacturing conditions.

  2. Lamb wave feature extraction using discrete wavelet transformation and Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Ghodsi, Mojtaba; Ziaiefar, Hamidreza; Amiryan, Milad; Honarvar, Farhang; Hojjat, Yousef; Mahmoudi, Mehdi; Al-Yahmadi, Amur; Bahadur, Issam

    2016-04-01

    In this research, a new method is presented for extracting suitable features for recognizing and classifying defect types using guided ultrasonic waves. After applying suitable preprocessing, the suggested method extracts the base frequency band from the received signals by discrete wavelet transform and discrete Fourier transform. This frequency band can be used as a distinctive feature of ultrasonic signals for different defects. Principal Component Analysis refines this feature and reduces redundant data, thereby improving classification. In this study, an ultrasonic test with the A0 Lamb wave mode is used, which reduces the complexity of the problem. The defects under analysis included corrosion, crack and local thickness reduction, the last of which is produced by electro discharge machining (EDM). The classification results obtained with an optimized Neural Network show that the presented method can differentiate different defects with 95% precision and is thus a strong and efficient method. Moreover, comparing the extracted features and classification results for corrosion and local thickness reduction clarifies that modeling the corrosion process by local thickness reduction, as was previously common, is not an appropriate approach, since the signals received from the two defects differ from each other.
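
    A hedged sketch of the DWT-plus-PCA feature pipeline is given below, using PyWavelets and scikit-learn; the wavelet, decomposition level, and signal data are assumptions, and the paper's neural-network classifier is omitted.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def dwt_base_band(signals, wavelet="db4", level=4):
    """Use the lowest-frequency (approximation) DWT coefficients of each
    signal as a coarse 'base frequency band' feature vector."""
    return np.asarray([pywt.wavedec(s, wavelet, level=level)[0] for s in signals])

# Placeholder received Lamb-wave signals, one per row
rng = np.random.default_rng(1)
signals = rng.normal(size=(60, 1024))
features = dwt_base_band(signals)
compact = PCA(n_components=8).fit_transform(features)   # inputs for a classifier
```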

  3. Extraction of medically interpretable features for classification of malignancy in breast thermography.

    PubMed

    Madhu, Himanshu; Kakileti, Siva Teja; Venkataramani, Krithika; Jabbireddy, Susmija

    2016-08-01

    Thermography, with high-resolution cameras, is being re-investigated as a possible breast cancer screening imaging modality, as it does not have the harmful radiation effects of mammography. This paper focuses on automatic extraction of medically interpretable non-vascular thermal features. We design these features to differentiate malignancy from different non-malignancy conditions, including hormone-sensitive tissues and certain benign conditions that have an increased thermal response. These features increase the specificity of breast cancer screening, which has long been a known problem in thermographic screening, while retaining high sensitivity. These features are also agnostic to different cameras and resolutions (to an extent). On a dataset of around 78 subjects with cancer and 187 subjects without cancer, some of whom have benign diseases and conditions with thermal responses, we are able to obtain around 99% specificity while having 100% sensitivity. This indicates a potential breakthrough in thermographic screening for breast cancer and shows promise for undertaking a comparison to mammography with larger numbers of subjects and more data variation.

  4. Extraction of pulse repetition intervals from sperm whale click trains for ocean acoustic data mining.

    PubMed

    Zaugg, Serge; van der Schaar, Mike; Houégnigan, Ludwig; André, Michel

    2013-02-01

    The analysis of acoustic data from the ocean is a valuable tool to study free ranging cetaceans and anthropogenic noise. Due to the typically large volume of acquired data, there is a demand for automated analysis techniques. Many cetaceans produce acoustic pulses (echolocation clicks) with a pulse repetition interval (PRI) remaining nearly constant over several pulses. Analyzing these pulse trains is challenging because they are often interleaved. This article presents an algorithm that estimates a pulse's PRI with respect to neighboring pulses. It includes a deinterleaving step that operates via a spectral dissimilarity metric. The sperm whale (SW) produces trains with PRIs between 0.5 and 2 s. As a validation, the algorithm was used for the PRI-based identification of SW click trains with data from the NEMO-ONDE observatory that contained other pulsed sounds, mainly from ship propellers. Separation of files containing SW clicks with a medium and high signal to noise ratio from files containing other pulsed sounds gave an area under the receiver operating characteristic curve value of 0.96. This study demonstrates that PRI can be used for the automated identification of SW clicks and that deinterleaving via spectral dissimilarity contributes to algorithm performance.
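
    A very simplified version of PRI extraction, ignoring the spectral-dissimilarity deinterleaving step, might look like the NumPy sketch below: detect pulse times by thresholding and keep inter-pulse intervals inside the sperm-whale PRI range. The threshold and refractory gap are illustrative values.

```python
import numpy as np

def detect_pulse_times(x, fs, threshold, min_gap_s=0.1):
    """Crude click detector: threshold crossings of the absolute signal,
    with a refractory gap to avoid counting one click twice."""
    idx = np.flatnonzero(np.abs(x) > threshold)
    times, last = [], -np.inf
    for i in idx:
        t = i / fs
        if t - last >= min_gap_s:
            times.append(t)
            last = t
    return np.asarray(times)

def sperm_whale_pri(pulse_times, lo=0.5, hi=2.0):
    """Inter-pulse intervals kept only inside the expected SW PRI range."""
    ipi = np.diff(pulse_times)
    return ipi[(ipi >= lo) & (ipi <= hi)]
```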

  5. Extraction of Stoneley and acoustic Rayleigh waves from ambient noise on ocean bottom observations

    NASA Astrophysics Data System (ADS)

    Tonegawa, T.; Fukao, Y.; Takahashi, T.; Obana, K.; Kodaira, S.; Kaneda, Y.

    2013-12-01

    In seismic interferometry, the wavefield propagating between two positions can be retrieved by correlating ambient noise recorded at the two positions. This approach can be applied to various kinds of wavefields, such as ultrasonic, ocean acoustic, and seismic. Off the Kii Peninsula, Japan, more than 150 short-period (4.5 Hz) seismometers, each with a co-sited hydrophone, were deployed for ~2 months in 2012 by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) as a part of 'Research concerning Interaction Between the Tokai, Tonankai and Nankai Earthquakes' funded by the Ministry of Education, Culture, Sports, Science and Technology, Japan. In this study, by correlating ambient noise recorded by the sensors and hydrophones, we attempt to investigate characteristics of the wavefield related to the ocean, the sediment, and the solid-fluid boundary. The observation period is from Sep. 2012 to Dec. 2012. Station spacing is around 5 km. The instruments are distributed along 5 lines off the Kii Peninsula, with 30-40 seismometers per line. The sampling rate is 200 Hz for both the seismometers and hydrophones. Only the vertical component is used in this study for the correlation analysis. The instruments are located at 100-4800 m water depth. In processing both records, we applied a 1-3 Hz bandpass filter, set the amplitude to zero wherever it exceeded a preset threshold, and applied one-bit normalization. We calculated cross-correlation functions (CCFs) using continuous records with a time length of 600 s and stacked the CCFs over the whole observation period. As a result of the analysis for the hydrophones, a strong peak can be seen in the CCF for pairs of stations with a separation distance of ~5 km. Although the peak emerges in the CCFs for separation distances up to 10 km, it disappears when two stations are separated by more than 15 km. As a next approach, along a line off the Kii Peninsula, we aligned CCFs for two stations with
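
    The per-trace preprocessing and cross-correlation described above (1-3 Hz bandpass, amplitude clipping, one-bit normalization, CCF stacking) can be sketched roughly as follows with NumPy/SciPy; the filter order and clip level are assumptions, not the study's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, correlate

def preprocess(trace, fs, clip_value):
    """1-3 Hz bandpass, amplitude clipping, then one-bit normalization."""
    b, a = butter(4, [1.0, 3.0], btype="bandpass", fs=fs)
    x = filtfilt(b, a, trace)
    x[np.abs(x) > clip_value] = 0.0      # suppress large transient bursts
    return np.sign(x)                    # one-bit normalization

def noise_ccf(tr1, tr2, fs, clip_value=1.0e3):
    """Cross-correlation function of one 600 s window; stack over windows."""
    x1 = preprocess(tr1, fs, clip_value)
    x2 = preprocess(tr2, fs, clip_value)
    ccf = correlate(x1, x2, mode="full")
    lags = np.arange(-len(x2) + 1, len(x1)) / fs
    return lags, ccf
```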

  6. Investigation of Methods for Extracting Features Related to Motor Imagery and Resting States in EEG-Based BCI System

    NASA Astrophysics Data System (ADS)

    Susila, I. Putu; Kanoh, Shin'ichiro; Miyamoto, Ko-Ichiro; Yoshinobu, Tatsuo

    Methods for extracting features of motor imagery from 1-channel bipolar EEG were evaluated. The EEG power spectra, which were used as feature vectors, were calculated with a filter bank, FFT, and an AR model, and were then classified by linear discriminant analysis (LDA) to discriminate motor imagery and resting states. It was shown that the extraction method using the AR model gave the best result, with an average true positive rate of 83% (σ = 7%). Furthermore, when principal component analysis (PCA) was applied to the feature vectors, the dimension of the feature vectors could be reduced without decreasing the accuracy of discrimination.
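
    The AR-spectrum feature plus LDA classification could be approximated as below. The sketch uses the Yule-Walker equations (the abstract does not state which AR estimator was used, and Burg would be a drop-in alternative), with hypothetical epoch data.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ar_psd(x, order=10, nfft=64):
    """AR power spectrum estimated via the Yule-Walker equations."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)   # autocorrelation
    a = solve_toeplitz(r[:order], -r[1:order + 1])              # AR coefficients
    sigma2 = r[0] + np.dot(a, r[1:order + 1])                   # innovation variance
    h = np.fft.rfft(np.r_[1.0, a], nfft)                        # A(e^{j omega})
    return sigma2 / np.abs(h) ** 2

# Hypothetical single-channel epochs and labels (imagery vs. rest)
rng = np.random.default_rng(2)
epochs = rng.normal(size=(40, 512))
labels = rng.integers(0, 2, size=40)
X = np.array([ar_psd(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(X, labels)
```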

  7. Multiple feature extraction and classification of electroencephalograph signal for Alzheimer's with spectrum and bispectrum

    NASA Astrophysics Data System (ADS)

    Wang, Ruofan; Wang, Jiang; Li, Shunan; Yu, Haitao; Deng, Bin; Wei, Xile

    2015-01-01

    In this paper, we have combined experimental neurophysiologic recording and statistical analysis to investigate the nonlinear characteristics and the cognitive function of the brain. Spectrum and bispectrum analyses are proposed to extract multiple effective features of electroencephalograph (EEG) signals from Alzheimer's disease (AD) patients and further applied to distinguish AD patients from normal controls. Spectral analysis based on the autoregressive Burg method is first used to quantify the power distribution of EEG series in the frequency domain. Compared to the control group, the relative power spectral density of the AD group is significantly higher in the theta frequency band, while lower in the alpha frequency band. In addition, the median frequency of the spectrum is decreased, and the spectral entropy ratio of these two frequency bands undergoes drastic changes at the P3 electrode in the central-parietal brain region, implying that the electrophysiological behavior in the AD brain is much slower and less irregular. In order to explore the nonlinear higher-order information, bispectral analysis, which measures the complexity of phase-coupling, is further applied to the P3 electrode in the whole frequency band. It is demonstrated that fewer bispectral peaks appear and the amplitudes of peaks fall, suggesting a decrease of non-Gaussianity and nonlinearity of EEG in ADs. Notably, the application of this method to five brain regions shows a higher concentration of the weighted center of bispectrum and lower phase-coupling complexity as reflected by bispectral entropy. Based on spectrum and bispectrum analyses, six efficient features are extracted and then applied to discriminate AD from the normal in the five brain regions. The classification results indicate that all these features could differentiate AD patients from the normal controls with a maximum accuracy of 90.2%. Particularly, different brain regions are sensitive to different features. Moreover, the optimal combination of
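
    A stripped-down version of the spectrum features (relative theta/alpha power, spectral entropy, median frequency) for one EEG channel might look like the sketch below; the Welch periodogram here stands in for the autoregressive Burg spectrum used in the paper, and the band edges are conventional values.

```python
import numpy as np
from scipy.signal import welch

def spectrum_features(eeg, fs=250.0):
    """Relative theta/alpha power, spectral entropy and median frequency."""
    f, pxx = welch(eeg, fs=fs, nperseg=min(512, len(eeg)))
    p = pxx / pxx.sum()                                   # normalized spectrum
    rel_theta = p[(f >= 4) & (f < 8)].sum()
    rel_alpha = p[(f >= 8) & (f < 13)].sum()
    spectral_entropy = -np.sum(p * np.log2(p + 1e-12))
    median_freq = f[np.searchsorted(np.cumsum(p), 0.5)]
    return rel_theta, rel_alpha, spectral_entropy, median_freq
```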

  8. A feature extraction method based on information theory for fault diagnosis of reciprocating machinery.

    PubMed

    Wang, Huaqing; Chen, Peng

    2009-01-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to.

  9. A Feature Extraction Method for Vibration Signal of Bearing Incipient Degradation

    NASA Astrophysics Data System (ADS)

    Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli; Guo, Liang; Li, Dan; Wen, Juan

    2016-06-01

    Detection of incipient degradation demands extracting sensitive features accurately when the signal-to-noise ratio (SNR) is very poor, which is the case in most industrial environments. Vibration signals of rolling bearings are widely used for bearing fault diagnosis. In this paper, we propose a feature extraction method that combines Blind Source Separation (BSS) and Spectral Kurtosis (SK) to separate independent noise sources. Normal and incipient fault signals from vibration tests of rolling bearings are processed. We studied 16 groups of vibration signals of incipient degradation (which all display an increase in kurtosis) after they were processed by a BSS filter. Compared with conventional kurtosis, theoretical studies of SK trends show that the SK levels vary with frequency, and some experimental studies show that SK trends of measured vibration signals of bearings vary with the amount and level of impulses in both vibration and noise signals due to bearing faults. It is found that the peak values of SK increase when vibration signals of incipient faults are processed by a BSS filter. This pre-processing by a BSS filter makes SK more sensitive to impulses caused by performance degradation of bearings.
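
    The spectral-kurtosis part of the pipeline can be approximated with an STFT as sketched below; kurtosis of the magnitude envelope per frequency bin is one common SK surrogate, and the BSS filtering stage is not reproduced here.

```python
import numpy as np
from scipy.signal import stft
from scipy.stats import kurtosis

def spectral_kurtosis(x, fs, nperseg=256):
    """Kurtosis of the STFT magnitude in each frequency bin; impulsive
    (fault-related) bands show elevated values."""
    f, _, Z = stft(x, fs=fs, nperseg=nperseg)
    return f, kurtosis(np.abs(Z), axis=1, fisher=True)

# Usage: pick the band with the highest SK for further envelope analysis
# f, sk = spectral_kurtosis(vibration, fs=25_600)
# band_of_interest = f[np.argmax(sk)]
```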

  10. A Joint Time-Frequency and Matrix Decomposition Feature Extraction Methodology for Pathological Voice Classification

    NASA Astrophysics Data System (ADS)

    Ghoraani, Behnaz; Krishnan, Sridhar

    2009-12-01

    The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is extraction of meaningful and unique features using Adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct Adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal into normal or pathological. The proposed method is applied on the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
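
    A rough sketch of the NMF step is shown below, with a plain spectrogram standing in for the adaptive TFD constructed in the paper; the component count and window length are placeholders.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import NMF

def tfd_nmf(speech, fs, n_components=5):
    """Factorize a non-negative time-frequency distribution into spectral
    bases W and temporal activations H; statistics of W and H can then
    serve as classification features."""
    _, _, S = spectrogram(speech, fs=fs, nperseg=512)
    model = NMF(n_components=n_components, init="nndsvd", max_iter=400)
    W = model.fit_transform(S)      # (n_freqs, n_components) spectral bases
    H = model.components_           # (n_components, n_frames) activations
    return W, H
```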

  11. A Feature Extraction Method Based on Information Theory for Fault Diagnosis of Reciprocating Machinery

    PubMed Central

    Wang, Huaqing; Chen, Peng

    2009-01-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to. PMID:22574021

  12. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli.

    PubMed

    Kumar, Neeraj; Mutha, Pratik K

    2016-03-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function-perception-are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the "natural" sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.

  13. Memory-efficient architecture for hysteresis thresholding and object feature extraction.

    PubMed

    Najjar, Mayssaa A; Karlapudi, Swetha; Bayoumi, Magdy A

    2011-12-01

    Hysteresis thresholding is a method that offers enhanced object detection. Due to its recursive nature, it is time consuming and requires a lot of memory resources, so it tends to be avoided in streaming processors with limited memory. We propose two versions of a memory-efficient and fast architecture for hysteresis thresholding: a high-accuracy pixel-based architecture and a faster block-based one at the expense of some loss in accuracy. Both designs couple thresholding with connected component analysis and feature extraction in a single pass over the image. Unlike queue-based techniques, the proposed scheme treats candidate pixels almost as foreground until objects complete; a decision is then made to keep or discard these pixels. This allows processing on the fly, thus avoiding additional passes for handling candidate pixels and extracting object features. Moreover, labels are reused so only one row of compact labels is buffered. Both architectures are implemented in MATLAB and VHDL. Simulation results on a set of real and synthetic images show that the execution speed can attain an average increase of up to 24× for the pixel-based architecture and 52× for the block-based one when compared to state-of-the-art techniques. The memory requirements are also drastically reduced, by about 99%.
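
    For reference, the textbook two-threshold hysteresis rule combined with connected-component labeling can be written in a few lines with SciPy, as below; this is the underlying algorithm, not the paper's streaming hardware architecture, and the thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def hysteresis_objects(img, low, high):
    """Keep weak (>= low) components only if they contain a strong (>= high)
    pixel, and return simple per-object features from the labeled image."""
    labels, _ = ndimage.label(img >= low)                     # weak-pixel components
    strong = np.unique(labels[img >= high])                   # components with a strong pixel
    strong = strong[strong != 0]
    mask = np.isin(labels, strong)                            # hysteresis result
    areas = ndimage.sum(mask, labels, strong)                 # object areas
    centroids = ndimage.center_of_mass(mask, labels, strong)  # object centroids
    return mask, list(zip(strong, areas, centroids))
```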

  14. Urban Area Extent Extraction in Spaceborne HR and VHR Data Using Multi-Resolution Features

    PubMed Central

    Iannelli, Gianni Cristian; Lisini, Gianni; Dell'Acqua, Fabio; Feitosa, Raul Queiroz; da Costa, Gilson Alexandre Ostwald Pedro; Gamba, Paolo

    2014-01-01

    Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an “urban area” is. The models of urban areas listed in technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, “urban area” extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered as an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO) data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR) data prove the usefulness and flexibility of the framework. PMID:25271564

  15. Time-frequency manifold sparse reconstruction: A novel method for bearing fault feature extraction

    NASA Astrophysics Data System (ADS)

    Ding, Xiaoxi; He, Qingbo

    2016-12-01

    In this paper, a novel transient signal reconstruction method, called time-frequency manifold (TFM) sparse reconstruction, is proposed for bearing fault feature extraction. This method introduces image sparse reconstruction into the TFM analysis framework. According to the excellent denoising performance of TFM, a more effective time-frequency (TF) dictionary can be learned from the TFM signature by image sparse decomposition based on orthogonal matching pursuit (OMP). Then, the TF distribution (TFD) of the raw signal in a reconstructed phase space is re-expressed as the sum of learned TF atoms multiplied by corresponding coefficients. Finally, the one-dimensional signal can be recovered by the inverse process of TF analysis (TFA). Meanwhile, the amplitude information of the raw signal is well reconstructed. The proposed technique combines the merits of the TFM in denoising and of atomic decomposition in image sparse reconstruction. Moreover, the combination makes it possible to express the nonlinear signal processing results explicitly in theory. The effectiveness of the proposed TFM sparse reconstruction method is verified by experimental analysis for bearing fault feature extraction.
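
    The sparse-decomposition step can be illustrated with scikit-learn's orthogonal matching pursuit, as in the sketch below; the dictionary and the TF patch are random placeholders rather than atoms learned from a TFM signature.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(3)
D = rng.normal(size=(256, 50))                 # 50 placeholder TF atoms (columns)
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
coef = np.zeros(50)
coef[[3, 17, 42]] = [1.5, -0.8, 0.6]           # true sparse code
y = D @ coef + 0.01 * rng.normal(size=256)     # noisy vectorized TF patch

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False).fit(D, y)
reconstruction = D @ omp.coef_                 # sparse re-expression of the patch
```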

  16. Bag of Events: An Efficient Probability-Based Feature Extraction Method for AER Image Sensors.

    PubMed

    Peng, Xi; Zhao, Bo; Yan, Rui; Tang, Huajin; Yi, Zhang

    2016-03-18

    Address event representation (AER) image sensors represent the visual information as a sequence of events that denotes the luminance changes of the scene. In this paper, we introduce a feature extraction method for AER image sensors based on the probability theory, namely, bag of events (BOE). The proposed approach represents each object as the joint probability distribution of the concurrent events, and each event corresponds to a unique activated pixel of the AER sensor. The advantages of BOE include: 1) it is a statistical learning method and has a good interpretability in mathematics; 2) BOE can significantly reduce the effort to tune parameters for different data sets, because it only has one hyperparameter and is robust to the value of the parameter; 3) BOE is an online learning algorithm, which does not require the training data to be collected in advance; 4) BOE can achieve competitive results in real time for feature extraction (>275 frames/s and >120,000 events/s); and 5) the implementation complexity of BOE only involves some basic operations, e.g., addition and multiplication. This guarantees the hardware friendliness of our method. The experimental results on three popular AER databases (i.e., MNIST-dynamic vision sensor, Poker Card, and Posture) show that our method is remarkably faster than two recently proposed AER categorization systems while preserving a good classification accuracy.

  17. Singular value decomposition based feature extraction technique for physiological signal analysis.

    PubMed

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques used to calculate and describe the complexity of physiological signals. Many studies use this approach to detect changes in physiological conditions in the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and a support vector machine (SVM) is adopted to classify the different physiological states. A test data set from the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy of 89.157%, which is higher than that obtained using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF).
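
    One simple way to build SVD-based features, sketched below, is to take the leading singular values of a delay-embedded trajectory matrix and feed them to an SVM; the embedding dimension, number of singular values, and data are assumptions and may differ from the paper's exact construction.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def svd_features(x, dim=10, k=5):
    """Leading singular values of a delay-embedded trajectory matrix,
    normalized to sum to one, as a compact complexity descriptor."""
    traj = np.lib.stride_tricks.sliding_window_view(x, dim)
    s = np.linalg.svd(traj, compute_uv=False)
    return s[:k] / s.sum()

# Placeholder heartbeat-interval series for two physiological states
rng = np.random.default_rng(4)
X = np.array([svd_features(rng.normal(size=500)) for _ in range(60)])
y = rng.integers(0, 2, size=60)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```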

  18. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.

  19. Fault feature extraction of rolling bearing based on an improved cyclical spectrum density method

    NASA Astrophysics Data System (ADS)

    Li, Min; Yang, Jianhong; Wang, Xiaojing

    2015-11-01

    The traditional cyclical spectrum density (CSD) method is widely used to analyze the fault signals of rolling bearing. All modulation frequencies are demodulated in the cyclic frequency spectrum. Consequently, recognizing bearing fault type is difficult. Therefore, a new CSD method based on kurtosis (CSDK) is proposed. The kurtosis value of each cyclic frequency is used to measure the modulation capability of cyclic frequency. When the kurtosis value is large, the modulation capability is strong. Thus, the kurtosis value is regarded as the weight coefficient to accumulate all cyclic frequencies to extract fault features. Compared with the traditional method, CSDK can reduce the interference of harmonic frequency in fault frequency, which makes fault characteristics distinct from background noise. To validate the effectiveness of the method, experiments are performed on the simulation signal, the fault signal of the bearing outer race in the test bed, and the signal gathered from the bearing of the blast furnace belt cylinder. Experimental results show that the CSDK is better than the resonance demodulation method and the CSD in extracting fault features and recognizing degradation trends. The proposed method provides a new solution to fault diagnosis in bearings.

  20. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on earth ultraviolet features to obtain an accurate earth vector direction, in order to achieve high-precision autonomous navigation. Firstly, combining the stable characteristics of earth ultraviolet radiance with atmospheric radiation transmission model software, the paper simulates the earth ultraviolet radiation model at different times and chooses the proper observation band. Then a fast, improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract earth ultraviolet limb features accurately. The earth's centroid locations on simulated images are estimated via least squares fitting using part of the limb edges. Taking advantage of the estimated earth vector direction and earth distance, an Extended Kalman Filter (EKF) is finally applied to realize the autonomous navigation. Experiment results indicate the proposed method can achieve sub-pixel earth centroid location estimation and greatly enhance autonomous celestial navigation precision.

  1. Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data

    SciTech Connect

    Danny L. Anderson

    2012-05-01

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square-error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve efficiency and accuracy of hydrographic feature extractions. The direct delineation method developed herein, termed “mDn”, is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, using the LiDAR data points within each sector to determine an average slope, and selecting the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matches the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.

  2. Adaboost face detector based on Joint Integral Histogram and Genetic Algorithms for feature extraction process.

    PubMed

    Jammoussi, Ameni Yangui; Ghribi, Sameh Fakhfakh; Masmoudi, Dorra Sellami

    2014-01-01

    Recently, many classes of objects can be efficiently detected by means of machine learning techniques. In practice, boosting techniques are among the most widely used machine learning methods for various reasons, mainly due to the low false positive rate of the cascade structure, which offers the possibility of being trained on different object classes. However, boosting is especially used for face detection, since this is the most popular sub-problem within object detection. The challenges of an Adaboost-based face detector include the selection of the most relevant features, which are considered as weak classifiers, from a large feature set. In many scenarios, however, selecting features based on lowering classification errors leads to computational complexity and excess memory use. In this work, we propose a new method to train an effective detector by discarding redundant weak classifiers while achieving the pre-determined learning objective. To achieve this, on the one hand, we modify AdaBoost training so that the feature selection process is no longer based on the weak learner's training error; this is done by incorporating a Genetic Algorithm (GA) into the training process. On the other hand, we make use of the Joint Integral Histogram in order to extract more powerful features. Experiments on human faces show that our proposed method requires a smaller number of weak classifiers than the conventional learning algorithm, resulting in higher learning and faster classification rates. Our method thus significantly outperforms state-of-the-art cascade methods in terms of detection rate and false positive rate, and especially in reducing the number of weak classifiers per stage.

  3. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    PubMed

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  4. Affective Video Retrieval: Violence Detection in Hollywood Movies by Large-Scale Segmental Feature Extraction

    PubMed Central

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology “out of the lab” to real-world, diverse data. In this contribution, we address the problem of finding “disturbing” scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis. PMID:24391704

  5. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    PubMed Central

    2011-01-01

    Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods The FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20 minutes duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components are input as features to a support vector machine (SVM) to classify FHR samples. Results For the training set, a five-fold cross validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was .923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was .684. Conclusions Based on the overall performance of the system, it can be stated that the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712
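
    Assuming the PyEMD package (distributed as EMD-signal) is available, the EMD-plus-SVM feature step could be sketched as below; the record data, the IMF cap, and the handling of records that yield fewer IMFs are all assumptions.

```python
import numpy as np
from PyEMD import EMD          # provided by the EMD-signal (PyEMD) package
from sklearn.svm import SVC

def emd_std_features(fhr, max_imf=6):
    """Standard deviation of each IMF of an FHR record; records may yield
    fewer IMFs, so padding/truncation to a fixed length is assumed."""
    imfs = EMD().emd(np.asarray(fhr, dtype=float), max_imf=max_imf)
    return imfs.std(axis=1)

# Sketch of the classifier stage (records and labels are placeholders):
# X = np.array([emd_std_features(r) for r in records])
# clf = SVC(kernel="rbf").fit(X, labels)
```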

  6. [Estimation of age-related features of acoustic density and biometric relations of lens based on combined ultrasound scanning].

    PubMed

    Avetisov, K S; Markosian, A G

    2013-01-01

    Results of combined ultrasound scanning for estimation of acoustic lens density and biometric relations of the lens and other eye structures are presented. A group of 124 patients (189 eyes) was studied; they were subdivided depending on age and length of the anteroposterior axis of the eye. An examination algorithm was developed that allows selective estimation of the acoustic density of different lens zones and biometric measurements, including volumetric ones. An age-related increase in the acoustic density of different lens zones was revealed, which indirectly confirms the efficiency of the method. Biometric studies showed nearly identical volumetric lens measurements in "normal" and "short" eyes in spite of the significantly thicker central zone of the latter. A significantly lower correlation between anterior chamber volume and the width of its angle was revealed in "short" eyes than in "normal" and "long" eyes (correlation coefficients 0.37, 0.68 and 0.63, respectively).

  7. A Distinguishing Arterial Pulse Waves Approach by Using Image Processing and Feature Extraction Technique.

    PubMed

    Chen, Hsing-Chung; Kuo, Shyi-Shiun; Sun, Shen-Ching; Chang, Chia-Hui

    2016-10-01

    Traditional Chinese Medicine (TCM) is based on five main types of diagnostic methods: inspection, auscultation, olfaction, inquiry, and palpation. The most important one is palpation, also called pulse diagnosis, in which the doctor measures the wrist artery pulse with the fingers to detect the patient's health state. In this paper, a specialized pulse-measuring instrument is used to classify a person's pulse type. The measured pulse waves (MPWs) were segmented into the arterial pulse wave curve (APWC) by an image processing method. The slopes and periods among four specific points on the APWC were taken as the pulse features. Three algorithms are proposed in this paper that extract these features from the APWCs and compare each of them, individually, to the average feature matrix. The results show that the method proposed in this study is superior to and more accurate than those of previous studies. The proposed method could significantly save doctors a large amount of time, increase accuracy, and decrease data volume.

  8. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 1: time domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    The paper presents an application of the gamma-absorption method to study a gas-liquid two-phase flow in a horizontal pipeline. In the tests on a laboratory installation, two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals were used. The experimental set-up allows recording of stochastic signals, which describe the instantaneous content of the stream in a particular cross-section of the flow mixture. The analysis of these signals by statistical methods allows the mean velocity of the gas phase to be determined. Meanwhile, selected features of the signals provided by the absorption set can be applied to recognize the structure of the flow. In this work three structures of air-water flow were considered: plug, bubble, and transitional plug-bubble. The recorded raw signals were analyzed in the time domain and several features were extracted. It was found that signal features such as the mean, standard deviation, root mean square (RMS), variance and 4th moment are the most useful for recognizing the structure of the flow.
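
    The time-domain features named above are straightforward to compute; a minimal NumPy/SciPy sketch follows (the 4th moment is taken here as the 4th central moment, which is one plausible reading of the abstract).

```python
import numpy as np
from scipy.stats import moment

def time_domain_features(x):
    """Mean, standard deviation, RMS, variance and 4th central moment."""
    x = np.asarray(x, dtype=float)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "rms": np.sqrt(np.mean(x ** 2)),
        "variance": x.var(),
        "moment4": moment(x, moment=4),
    }
```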

  9. REGARDING THE LINE-OF-SIGHT BARYONIC ACOUSTIC FEATURE IN THE SLOAN DIGITAL SKY SURVEY AND BARYON OSCILLATION SPECTROSCOPIC SURVEY LUMINOUS RED GALAXY SAMPLES

    SciTech Connect

    Kazin, Eyal A.; Blanton, Michael R.; Scoccimarro, Roman; McBride, Cameron K.; Berlind, Andreas A.

    2010-08-20

    We analyze the line-of-sight baryonic acoustic feature in the two-point correlation function ξ of the Sloan Digital Sky Survey luminous red galaxy (LRG) sample (0.16 < z < 0.47). By defining a narrow line-of-sight region, r_p < 5.5 h⁻¹ Mpc, where r_p is the transverse separation component, we measure a strong excess of clustering at ~110 h⁻¹ Mpc, as previously reported in the literature. We also test these results in an alternative coordinate system, by defining the line of sight as θ < 3°, where θ is the opening angle. This clustering excess appears much stronger than the feature in the better-measured monopole. A fiducial ΛCDM nonlinear model in redshift space predicts a much weaker signature. We use realistic mock catalogs to model the expected signal and noise. We find that the line-of-sight measurements can be explained well by our mocks as well as by a featureless ξ = 0. We conclude that there is no convincing evidence that the strong clustering measurement is the line-of-sight baryonic acoustic feature. We also evaluate how detectable such a signal would be in the upcoming Baryon Oscillation Spectroscopic Survey (BOSS) LRG volume. Mock LRG catalogs (z < 0.6) suggest that (1) the narrow line-of-sight cylinder and cone defined above probably will not reveal a detectable acoustic feature in BOSS; (2) a clustering measurement as high as that in the current sample can be ruled out (or confirmed) at a high confidence level using a BOSS-sized data set; (3) an analysis with wider angular cuts, which provide better signal-to-noise ratios, can nevertheless be used to compare line-of-sight and transverse distances, and thereby constrain the expansion rate H(z) and diameter distance D_A(z).

  10. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from whole breast area only; (2) a classifier using asymmetry bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  11. Extraction of Airport Features from High Resolution Satellite Imagery for Design and Risk Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, Chris; Qiu, You-Liang; Jensen, John R.; Schill, Steven R.; Floyd, Mike

    2001-01-01

    The LPA Group, consisting of 17 offices located throughout the eastern and central United States, is an architectural, engineering and planning firm specializing in the development of airports, roads and bridges. The primary focus of this ARC project is to assist their aviation specialists who work in the areas of Airport Planning, Airfield Design, Landside Design, Terminal Building Planning and design, and various other construction services. The LPA Group wanted to test the utility of high-resolution commercial satellite imagery for the purpose of extracting airport elevation features in the glide path areas surrounding the Columbia Metropolitan Airport. By incorporating remote sensing techniques into their airport planning process, LPA wanted to investigate whether it is possible to save time and money while achieving accuracy equivalent to traditional planning methods. The Affiliate Research Center (ARC) at the University of South Carolina investigated the use of remotely sensed imagery for the extraction of feature elevations in the glide path zone. A stereo pair of IKONOS panchromatic satellite images, which has a spatial resolution of 1 x 1 m, was used to determine elevations of aviation obstructions such as buildings, trees, towers and fence-lines. A validation dataset was provided by the LPA Group to assess the accuracy of the measurements derived from the IKONOS imagery. The initial goal of this project was to test the utility of IKONOS imagery in feature extraction using ERDAS Stereo Analyst. This goal was never achieved due to problems with ERDAS software support of the IKONOS sensor model and the unavailability of imperative sensor model information from Space Imaging. The obstacles encountered in this project pertaining to ERDAS Stereo Analyst and IKONOS imagery will be reviewed in more detail later in this report. As a result of the technical difficulties with Stereo Analyst, ERDAS OrthoBASE was used to derive aviation

  12. Protein Function Prediction using Text-based Features extracted from the Biomedical Literature: The CAFA Challenge

    PubMed Central

    2013-01-01

    Background Advances in sequencing technology over the past decade have resulted in an abundance of sequenced proteins whose function is yet unknown. As such, computational systems that can automatically predict and annotate protein function are in demand. Most computational systems use features derived from protein sequence or protein structure to predict function. In an earlier work, we demonstrated the utility of biomedical literature as a source of text features for predicting protein subcellular location. We have also shown that the combination of text-based and sequence-based prediction improves the performance of location predictors. Following up on this work, for the Critical Assessment of Function Annotations (CAFA) Challenge, we developed a text-based system that aims to predict molecular function and biological process (using Gene Ontology terms) for unannotated proteins. In this paper, we present the preliminary work and evaluation that we performed for our system, as part of the CAFA challenge. Results We have developed a preliminary system that represents proteins using text-based features and predicts protein function using a k-nearest neighbour classifier (Text-KNN). We selected text features for our classifier by extracting key terms from biomedical abstracts based on their statistical properties. The system was trained and tested using 5-fold cross-validation over a dataset of 36,536 proteins. System performance was measured using the standard measures of precision, recall, F-measure and overall accuracy. The performance of our system was compared to two baseline classifiers: one that assigns function based solely on the prior distribution of protein function (Base-Prior) and one that assigns function based on sequence similarity (Base-Seq). The overall prediction accuracy of Text-KNN, Base-Prior, and Base-Seq for molecular function classes are 62%, 43%, and 58% while the overall accuracy for biological process classes are 17%, 11%, and 28

  13. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving is ongoing and remains an important research topic. Although some studies have addressed general image retrieval, the CBIR systems proposed for archiving medical images are not very efficient. The present study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The algorithms investigated are based on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and the Gabor wavelet, all accepted as spatial methods. In the experiments, the database is built comprising hundreds of medical images such as brain, lung, sinus, and bone images. The results obtained in this study show that queries based on statistics derived from the GLCM are satisfactory. However, the Gabor wavelet is observed to be the most effective and accurate method.
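
    For readers unfamiliar with GLCM statistics, the sketch below shows how such texture descriptors can be computed and used to rank database images against a query. It is an illustrative example built on scikit-image, not the pipeline evaluated in the study; the distance/angle settings and the toy images are assumptions.

```python
# Sketch: GLCM texture descriptor for a grayscale image, usable in a CBIR index.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older releases

def glcm_features(image_u8, distances=(1, 2), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Return a flat vector of GLCM statistics for an 8-bit grayscale image."""
    glcm = graycomatrix(image_u8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy retrieval: rank database images by Euclidean distance in feature space.
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(5)]
query = database[2]
q = glcm_features(query)
dists = [np.linalg.norm(glcm_features(img) - q) for img in database]
print("closest database image:", int(np.argmin(dists)))   # expected: 2
```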

  14. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.; Panda Collaboration

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16 bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable to other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA enables the construction of an almost dead-time-free data acquisition system, which has been successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead time of the implemented algorithm, the rate of false triggering, the timing performance, and event correlations.

  15. Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut

    NASA Astrophysics Data System (ADS)

    Oesau, Sven; Lafarge, Florent; Alliez, Pierre

    2014-04-01

    We present a method for automatic reconstruction of permanent structures, such as walls, floors and ceilings, given a raw point cloud of an indoor scene. The main idea behind our approach is a graph-cut formulation to solve an inside/outside labeling of a space partitioning. We first partition the space in order to align the reconstructed models with permanent structures. The horizontal structures are located through analysis of the vertical point distribution, while vertical wall structures are detected through feature preserving multi-scale line fitting, followed by clustering in a Hough transform space. The final surface is extracted through a graph-cut formulation that trades faithfulness to measurement data for geometric complexity. A series of experiments show watertight surface meshes reconstructed from point clouds measured on multi-level buildings.

  16. [Study on the method of feature extraction for brain-computer interface using discriminative common vector].

    PubMed

    Wang, Jinjia; Hu, Bei

    2013-02-01

    Discriminative common vector (DCV) is an effective method that was proposed for the small-sample-size problem in face recognition. The same problem arises in brain-computer interfaces (BCI). Applying linear discriminant analysis (LDA) directly can produce errors because the within-class scatter matrix of the data is singular. In our studies, we applied the DCV method, derived from common vector theory, to the within-class scatter matrix of the data from all classes, and then applied eigenvalue decomposition to the common vectors to obtain the final projection vectors. We then used the kernel discriminative common vector (KDCV) method with different kernels. Three data sets were used in the experiments: the BCI Competition I data set, Competition II data set IV, and a data set collected by ourselves. The experimental results of 93%, 77%, and 97% showed that this feature extraction method can be used effectively for the classification of imagery data in BCI.
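
    The core DCV construction, as we read it from the description above, can be sketched in a few lines of NumPy: project samples onto the null space of the within-class scatter matrix (where each class collapses to a common vector) and take the principal directions of those common vectors as the final projection. This is an illustrative reading under small-sample assumptions, not the authors' code, and the kernel (KDCV) variant is omitted.

```python
import numpy as np

def dcv_projection(X, y):
    """X: (n_samples, n_features) with n_features > n_samples; y: class labels."""
    classes = np.unique(y)
    # Class-centred samples span the range of the within-class scatter matrix.
    centred = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    # Null space of the within-class scatter = right singular vectors with
    # (numerically) zero singular values.
    _, s, Vt = np.linalg.svd(centred, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    null_basis = Vt[rank:].T                       # (n_features, n_features - rank)
    # Common vector of each class: its mean projected onto the null space.
    common = np.vstack([null_basis.T @ X[y == c].mean(axis=0) for c in classes])
    # Final projection: principal directions of the common vectors.
    _, _, Wt = np.linalg.svd(common - common.mean(axis=0), full_matrices=False)
    return null_basis @ Wt[:len(classes) - 1].T    # (n_features, n_classes - 1)

# Tiny high-dimensional, small-sample example, as in single-trial BCI data.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(12, 50)), np.repeat([0, 1, 2], 4)
W = dcv_projection(X, y)
print("projected features per trial:", (X @ W).shape)     # (12, 2)
```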

  17. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

    PubMed Central

    Kumar, Neeraj

    2016-01-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  18. A novel feature extraction methodology for region classification in lidar data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.; Sargent, Garrett C.

    2016-10-01

    LiDAR is a remote sensing method used to produce precise point clouds with millions of geo-spatially located 3D data points. The challenge comes when trying to accurately and efficiently segment and classify objects, especially in instances of occlusion and where objects are in close local proximity. The goal of this paper is to propose a more accurate and efficient way of performing segmentation and extracting features of objects in point clouds. Normal Octree Region Merging (NORM) is a segmentation technique based on surface normal similarities, and it subdivides the object points into clusters. The idea behind the surface normal calculation is that, for a given neighborhood around each point, the normal of the plane which best fits that set of points can be considered to be the surface normal at that particular point. Next, an octree-based segmentation approach is applied by dividing the entire scene into eight bins, 2 x 2 x 2 in the X, Y, and Z directions. For each of these bins, the variance of all the elevation angles of the surface normals within that bin is calculated, and if the variance exceeds a certain threshold, the bin is divided into eight more bins. This process is repeated until the entire scene consists of different sized bins, all containing surface normals with elevation variances below the given threshold. However, the octree-based segmentation process produces obvious over-segmentation of most of the objects. In order to correct for this over-segmentation, a region merging approach is applied. This region merging approach works much like the well-known automatic seeded region growing technique, with the exception that a histogram signature, rather than height, is used to measure similarity. Each cluster generated by the NORM segmentation technique is then run through a Shape-based Eigen Local Feature (SELF) algorithm, where the focus is on calculating normalized
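
    The surface-normal step described above (fit a plane to each point's neighbourhood and take its normal) is easy to prototype; the sketch below does so with a k-d tree and a small SVD per point. The neighbourhood size and the toy scene are assumptions, and the octree subdivision and region merging stages are not reproduced.

```python
# Per-point surface normal estimation by local plane fitting.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """points: (n, 3) array; returns (n, 3) unit normals."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs] - points[nbrs].mean(axis=0)
        # Normal = direction of least variance of the neighbourhood.
        _, _, vt = np.linalg.svd(nbr_pts, full_matrices=False)
        normals[i] = vt[-1]
    return normals

# Toy scene: a noisy horizontal plane; normals should be near vertical.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 10, 500),
                       rng.uniform(0, 10, 500),
                       rng.normal(0, 0.01, 500)])
n = estimate_normals(pts)
elevation = np.degrees(np.arcsin(np.abs(n[:, 2])))   # angle from the horizontal plane
print("median elevation angle (deg):", round(float(np.median(elevation)), 1))  # ~90
```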

  19. Application of fuzzy logic to feature extraction from images of agricultural material

    NASA Astrophysics Data System (ADS)

    Thompson, Bruce T.

    1999-11-01

    Imaging technology has extended itself from performing gauging on machined parts, to verifying labeling on consumer products, to quality inspection of a variety of man-made and natural materials. Much of this has been made possible by faster computers and by the algorithms used to extract useful information from the image. In applications to agricultural material, specifically tobacco leaves, the tremendous amount of natural variability in color and texture creates new challenges for image feature extraction. As with many imaging applications, the problem can be expressed as `I see it in the image, how can I get the computer to recognize it?' In this application, the goal is to measure the amount of thick stem pieces in an image of tobacco leaves. By backlighting the leaf, the stems appear dark on a lighter background. The difference between the lightness of the leaf and the darkness of the stem depends on the orientation of the leaf and the amount of folding. Because of this, any image thresholding approach must be adaptive. Another factor that allows us to distinguish the stem from the leaf is shape: the stem is long and narrow, while dark folded leaf is larger and more oblate. These criteria, together with the image collection limitations, make this a good application for fuzzy logic. Several generalized classification algorithms, such as fuzzy c-means and fuzzy learning vector quantization, are evaluated and compared. In addition, fuzzy thresholding based on image shape and compactness is applied to this application.

  20. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, B.D.; Brothers, L.L.; Barnhardt, W.A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m3 of continental shelf erosion, but few numerical analyses of their morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km2 of bathymetry collected in the Belfast Bay, Maine (USA) pockmark field. Our model extracted 1767 pockmarks and found a linear depth-to-diameter ratio for pockmarks field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. The results enable quantitative comparison of pockmarks in fields worldwide, as well as of similar concave features such as impact craters, dolines, or salt pools.

  1. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    PubMed Central

    Subhi Al-batah, Mohammad; Mat Isa, Nor Ashidi; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist, and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with a multi-input multi-output structure. The system is capable of classifying cervical cell images into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316

  2. Extracting drug-drug interactions from literature using a rich feature-based linear kernel approach

    PubMed Central

    Kim, Sun; Yeganova, Lana; Wilbur, W. John

    2015-01-01

    Identifying unknown drug interactions is of great benefit in the early detection of adverse drug reactions. Despite the existence of several resources for drug-drug interaction (DDI) information, the wealth of such information is buried in a body of unstructured medical text which is growing exponentially. This calls for developing text mining techniques for identifying DDIs. The state-of-the-art DDI extraction methods use Support Vector Machines (SVMs) with non-linear composite kernels to explore diverse contexts in literature. While computationally less expensive, linear kernel-based systems have not achieved comparable performance in DDI extraction tasks. In this work, we propose an efficient and scalable system using a linear kernel to identify DDI information. The proposed approach consists of two steps: identifying DDIs and assigning one of four different DDI types to the predicted drug pairs. We demonstrate that when equipped with a rich set of lexical and syntactic features, a linear SVM classifier is able to achieve a competitive performance in detecting DDIs. In addition, the one-against-one strategy proves vital for addressing an imbalance issue in DDI type classification. Applied to the DDIExtraction 2013 corpus, our system achieves an F1 score of 0.670, as compared to 0.651 and 0.609 reported by the top two participating teams in the DDIExtraction 2013 challenge, both based on non-linear kernel methods. PMID:25796456

  3. Feature extraction and classification in surface grading application using multivariate statistical projection models

    NASA Astrophysics Data System (ADS)

    Prats-Montalbán, José M.; López, Fernando; Valiente, José M.; Ferrer, Alberto

    2007-01-01

    In this paper we present an innovative way to simultaneously perform feature extraction and classification for the quality control issue of surface grading by applying two well-known multivariate statistical projection tools (SIMCA and PLS-DA). These tools have been applied to compress the color texture data describing the visual appearance of surfaces (soft color texture descriptors) and to perform classification directly, using statistics and predictions computed from the extracted projection models. Experiments have been carried out using an extensive image database of ceramic tiles (VxC TSG). This image database comprises 14 different models, 42 surface classes, and 960 pieces. A factorial experimental design has been carried out to evaluate all the combinations of several factors affecting the accuracy rate. Factors include tile model, color representation scheme (CIE Lab, CIE Luv and RGB), and compression/classification approach (SIMCA and PLS-DA). In addition, a logistic regression model is fitted from the experiments to compute accuracy estimates and study the effects of the factors. The results show that PLS-DA performs better than SIMCA, achieving a mean accuracy rate of 98.95%. These results outperform those obtained in a previous work, where the soft color texture descriptors in combination with the CIE Lab color space and the k-NN classifier achieved an accuracy of 97.36%.
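
    PLS-DA itself reduces to PLS regression onto a class-indicator matrix followed by an argmax over the predicted indicators. The sketch below shows that mechanism on synthetic data; the feature dimensionality, number of latent components, and class structure are invented and do not reflect the tile database used in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy stand-in for "soft color texture descriptors": 3 surface classes, 40-D features.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(60, 40)) for m in (0.0, 0.8, 1.6)])
y = np.repeat(["class_A", "class_B", "class_C"], 60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

lb = LabelBinarizer()
Y_tr = lb.fit_transform(y_tr)                 # one-hot class-indicator matrix

pls = PLSRegression(n_components=5)
pls.fit(X_tr, Y_tr)
pred = lb.classes_[np.argmax(pls.predict(X_te), axis=1)]
print("accuracy:", round(float(np.mean(pred == y_te)), 3))
```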

  4. Exact feature extraction using finite rate of innovation principles with an application to image super-resolution.

    PubMed

    Baboulaz, Loïc; Dragotti, Pier Luigi

    2009-02-01

    The accurate registration of multiview images is of central importance in many advanced image processing applications. Image super-resolution, for example, is a typical application where the quality of the super-resolved image is degrading as registration errors increase. Popular registration methods are often based on features extracted from the acquired images. The accuracy of the registration is in this case directly related to the number of extracted features and to the precision at which the features are located: images are best registered when many features are found with a good precision. However, in low-resolution images, only a few features can be extracted and often with a poor precision. By taking a sampling perspective, we propose in this paper new methods for extracting features in low-resolution images in order to develop efficient registration techniques. We consider, in particular, the sampling theory of signals with finite rate of innovation and show that some features of interest for registration can be retrieved perfectly in this framework, thus allowing an exact registration. We also demonstrate through simulations that the sampling model which enables the use of finite rate of innovation principles is well suited for modeling the acquisition of images by a camera. Simulations of image registration and image super-resolution of artificially sampled images are first presented, analyzed and compared to traditional techniques. We finally present favorable experimental results of super-resolution of real images acquired by a digital camera available on the market.

  5. Investigation of automated feature extraction techniques for applications in cancer detection from multispectral histopathology images

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Levenson, Richard M.; Rimm, David L.

    2003-05-01

    Recent developments in imaging technology mean that it is now possible to obtain high-resolution histological image data at multiple wavelengths. This allows pathologists to image specimens over a full spectrum, thereby revealing (often subtle) distinctions between different types of tissue. With this type of data, the spectral content of the specimens, combined with quantitative spatial feature characterization may make it possible not only to identify the presence of an abnormality, but also to classify it accurately. However, such are the quantities and complexities of these data, that without new automated techniques to assist in the data analysis, the information contained in the data will remain inaccessible to those who need it. We investigate the application of a recently developed system for the automated analysis of multi-/hyper-spectral satellite image data to the problem of cancer detection from multispectral histopathology image data. The system provides a means for a human expert to provide training data simply by highlighting regions in an image using a computer mouse. Application of these feature extraction techniques to examples of both training and out-of-training-sample data demonstrate that these, as yet unoptimized, techniques already show promise in the discrimination between benign and malignant cells from a variety of samples.

  6. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a highly distinctive pattern that can be used for biometric recognition. Texture analysis methods can be used to identify texture in an image. One such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five wavelets mentioned was performed, and a comparative analysis was conducted from which conclusions were drawn. Several steps were carried out. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, tests are conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate is achieved using Haar; for coefficient cutting at C(i) < 0.1, the Haar wavelet has the highest percentage, and therefore the retention rate, or the proportion of significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
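
    The energy feature described above can be reproduced in outline with PyWavelets: decompose the (already segmented and enhanced) iris region and use sub-band energies as the feature vector, then compare vectors with a normalized distance. The images below are random stand-ins, so the numbers are meaningless; only the mechanics of the five-wavelet comparison are illustrated.

```python
import numpy as np
import pywt

def wavelet_energy(image, wavelet="haar", level=3):
    """Energies of the approximation and all detail sub-bands."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    feats = [np.sum(coeffs[0] ** 2)]                       # approximation energy
    for cH, cV, cD in coeffs[1:]:
        feats.extend([np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)])
    return np.asarray(feats)

rng = np.random.default_rng(0)
iris_a = rng.random((64, 256))            # stand-ins for unwrapped iris images
iris_b = rng.random((64, 256))

for wav in ("haar", "db5", "coif3", "sym4", "bior2.4"):
    fa, fb = wavelet_energy(iris_a, wav), wavelet_energy(iris_b, wav)
    d = np.linalg.norm(fa - fb) / np.linalg.norm(fa + fb)   # normalized distance
    print(f"{wav:8s} normalized distance: {d:.4f}")
```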

  7. Interpretation of fingerprint image quality features extracted by self-organizing maps

    NASA Astrophysics Data System (ADS)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands the development of lightweight methods for operational environments. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as an input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by additionally proposing three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping, and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  8. Understanding the effects of pre-processing on extracted signal features from gait accelerometry signals.

    PubMed

    Millecamps, Alexandre; Lowry, Kristin A; Brach, Jennifer S; Perera, Subashan; Redfern, Mark S; Sejdić, Ervin

    2015-07-01

    Gait accelerometry is an important approach for gait assessment. Previous contributions have adopted various pre-processing approaches for gait accelerometry signals, but none have thoroughly investigated the effects of such pre-processing operations on the obtained results. Therefore, this paper investigated the influence of pre-processing operations on signal features extracted from gait accelerometry signals. These signals were collected from 35 participants aged over 65 years: 14 of them were healthy controls (HC), 10 had Parkinson's disease (PD) and 11 had peripheral neuropathy (PN). The participants walked on a treadmill at preferred speed. Signal features in time, frequency and time-frequency domains were computed for both raw and pre-processed signals. The pre-processing stage consisted of applying tilt correction and denoising operations to acquired signals. We first examined the effects of these operations separately, followed by the investigation of their joint effects. Several important observations were made based on the obtained results. First, the denoising operation alone had almost no effects in comparison to the trends observed in the raw data. Second, the tilt correction affected the reported results to a certain degree, which could lead to a better discrimination between groups. Third, the combination of the two pre-processing operations yielded similar trends as the tilt correction alone. These results indicated that while gait accelerometry is a valuable approach for the gait assessment, one has to carefully adopt any pre-processing steps as they alter the observed findings.
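
    The two pre-processing operations compared above can be prototyped as follows: tilt correction by rotating the mean acceleration (taken to be gravity) onto the vertical axis, and denoising with a low-pass filter. The sampling rate, cut-off frequency, and rotation recipe are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def tilt_correct(acc):
    """acc: (n, 3) accelerations in g. Rotate so the mean (gravity) maps to +Z."""
    g = acc.mean(axis=0)
    g = g / np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])
    v, c = np.cross(g, z), float(np.dot(g, z))
    s = float(np.linalg.norm(v))
    if s < 1e-12:                                  # already aligned
        return acc.copy()
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)   # Rodrigues' rotation formula
    return acc @ R.T

def denoise(acc, fs=100.0, cutoff=10.0, order=4):
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, acc, axis=0)

# Toy signal: 1 Hz gait-like oscillation on a slightly tilted gravity vector plus noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
acc = np.column_stack([0.2 * np.sin(2 * np.pi * t),
                       0.05 * np.ones_like(t),
                       0.98 * np.ones_like(t)]) + rng.normal(0, 0.02, (t.size, 3))
clean = denoise(tilt_correct(acc), fs=fs)
print("mean vertical component after correction:", round(float(clean[:, 2].mean()), 3))
```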

  9. GNAR-GARCH model and its application in feature extraction for rolling bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Jiaxin; Xu, Feiyun; Huang, Kai; Huang, Ren

    2017-09-01

    Given its simplicity of modeling and sensitivity to condition variations, time series model is widely used in feature extraction to realize fault classification and diagnosis. However, nonlinear and nonstationary characteristics common in fault signals of rolling bearing bring challenges to the diagnosis. In this paper, a hybrid model, the combination of a general expression for linear and nonlinear autoregressive (GNAR) model and a generalized autoregressive conditional heteroscedasticity (GARCH) model, (i.e., GNAR-GARCH), is proposed and applied to rolling bearing fault diagnosis. An exact expression of GNAR-GARCH model is given. Maximum likelihood method is used for parameter estimation and modified Akaike Information Criterion is adopted for structure identification of GNAR-GARCH model. The main advantage of this novel model over other models is that the combination makes the model suitable for nonlinear and nonstationary signals. It is verified with statistical tests that contain comparisons among the different time series models. Finally, GNAR-GARCH model is applied to fault diagnosis by modeling mechanical vibration signals including simulation and real data. With the parameters estimated and taken as feature vectors, k-nearest neighbor algorithm is utilized to realize the classification of fault status. The results show that GNAR-GARCH model exhibits higher accuracy and better performance than do other models.

  10. Extracting topological features from dynamical measures in networks of Kuramoto oscillators

    NASA Astrophysics Data System (ADS)

    Prignano, Luce; Díaz-Guilera, Albert

    2012-03-01

    The Kuramoto model for an ensemble of coupled oscillators provides a paradigmatic example of nonequilibrium transitions between an incoherent and a synchronized state. Here we analyze populations of almost identical oscillators in arbitrary interaction networks. Our aim is to extract topological features of the connectivity pattern from purely dynamical measures based on the fact that in a heterogeneous network the global dynamics is not only affected by the distribution of the natural frequencies but also by the location of the different values. In order to perform a quantitative study we focused on a very simple frequency distribution considering that all the frequencies are equal but one, that of the pacemaker node. We then analyze the dynamical behavior of the system at the transition point and slightly above it as well as very far from the critical point, when it is in a highly incoherent state. The gathered topological information ranges from local features, such as the single-node connectivity, to the hierarchical structure of functional clusters and even to the entire adjacency matrix.
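
    A toy Kuramoto simulation with a single pacemaker node, as in the setup described above, makes the frequency arrangement concrete. The ring topology, coupling strength, and integration settings below are arbitrary choices for illustration, not the paper's configuration.

```python
import numpy as np

def simulate_kuramoto(A, omega, K=1.0, dt=0.01, steps=5000, seed=0):
    """A: adjacency matrix; omega: natural frequencies. Returns final phases."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
        theta = theta + dt * (omega + K * np.sum(A * np.sin(diff), axis=1))
    return theta

# Small ring network; node 0 is the pacemaker with a distinct natural frequency.
n = 10
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
omega = np.zeros(n)
omega[0] = 0.5                                          # pacemaker node

theta = simulate_kuramoto(A, omega, K=1.0)
r = np.abs(np.mean(np.exp(1j * theta)))                 # global order parameter
print("order parameter r:", round(float(r), 3))
```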

  11. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data are increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Still images and videos are the most commonly used formats. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too large to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still retain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet strongly affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice is hampered by diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, including in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and obtained new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  12. Extraction of time and frequency features from grip force rates during dexterous manipulation.

    PubMed

    Mojtahedi, Keivan; Fu, Qiushi; Santello, Marco

    2015-05-01

    The time course of grip force from object contact to onset of manipulation has been extensively studied to gain insight into the underlying control mechanisms. Of particular interest to the motor neuroscience and clinical communities is the phenomenon of bell-shaped grip force rate (GFR) that has been interpreted as indicative of feedforward force control. However, this feature has not been assessed quantitatively. Furthermore, the time course of grip force may contain additional features that could provide insight into sensorimotor control processes. In this study, we addressed these questions by validating and applying two computational approaches to extract features from GFR in humans: 1) fitting a Gaussian function to GFR and quantifying the goodness of the fit [root-mean-square error, (RMSE)]; and 2) continuous wavelet transform (CWT), where we assessed the correlation of the GFR signal with a Mexican Hat function. Experiment 1 consisted of a classic pseudorandomized presentation of object mass (light or heavy), where grip forces developed to lift a mass heavier than expected are known to exhibit corrective responses. For Experiment 2, we applied our two techniques to analyze grip force exerted for manipulating an inverted T-shaped object whose center of mass was changed across blocks of consecutive trials. For both experiments, subjects were asked to grasp the object at either predetermined or self-selected grasp locations ("constrained" and "unconstrained" task, respectively). Experiment 1 successfully validated the use of RMSE and CWT as they correctly distinguished trials with versus without force corrective responses. RMSE and CWT also revealed that grip force is characterized by more feedback-driven corrections when grasping at self-selected contact points. Future work will examine the application of our analytical approaches to a broader range of tasks, e.g., assessment of recovery of sensorimotor function following clinical intervention, interlimb
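
    The first of the two feature-extraction approaches, fitting a Gaussian function to the grip force rate and scoring the fit by RMSE, is straightforward to sketch with SciPy. The synthetic traces and their timing parameters below are invented; the point is only that a corrective "bump" raises the RMSE of the best-fitting Gaussian.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    return a * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

def gfr_bell_fit(t, gfr):
    """Fit a Gaussian to a grip force rate trace; return parameters and RMSE."""
    p0 = [gfr.max(), t[np.argmax(gfr)], 0.1 * (t[-1] - t[0])]
    popt, _ = curve_fit(gaussian, t, gfr, p0=p0, maxfev=5000)
    rmse = float(np.sqrt(np.mean((gfr - gaussian(t, *popt)) ** 2)))
    return popt, rmse

# Synthetic traces: a clean bell-shaped GFR vs. one with a corrective bump.
t = np.linspace(0, 1, 200)
clean = gaussian(t, 30, 0.35, 0.07)
corrected = clean + gaussian(t, 12, 0.65, 0.05)         # secondary force correction

for name, trace in (("clean", clean), ("with correction", corrected)):
    _, rmse = gfr_bell_fit(t, trace)
    print(f"{name:16s} RMSE: {rmse:.3f}")                # higher RMSE => less bell-shaped
```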

  13. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    SciTech Connect

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregular sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on the top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of details of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.

  14. High-speed imaging, acoustic features, and aeroacoustic computations of jet noise from Strombolian (and Vulcanian) explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Sesterhenn, J.; Scarlato, P.; Stampka, K.; Del Bello, E.; Pena Fernandez, J. J.; Gaudin, D.

    2014-05-01

    High-speed imaging of explosive eruptions at Stromboli (Italy), Fuego (Guatemala), and Yasur (Vanuatu) volcanoes allowed visualization of pressure waves from seconds-long explosions. From the explosion jets, waves radiate with variable geometry, timing, and apparent direction and velocity. Both the explosion jets and their wave fields are replicated well by numerical simulations of supersonic jets impulsively released from a pressurized vessel. The scaled acoustic signal from one explosion at Stromboli displays a frequency pattern with an excellent match to those from the simulated jets. We conclude that both the observed waves and the audible sound from the explosions are jet noise, i.e., the typical acoustic field radiating from high-velocity jets. Volcanic jet noise was previously quantified only in the infrasonic emissions from large, sub-Plinian to Plinian eruptions. Our combined approach allows us to define the spatial and temporal evolution of audible jet noise from supersonic jets in small-scale volcanic eruptions.

  15. Efficient 3D texture feature extraction from CT images for computer-aided diagnosis of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Liang, Zhengrong; Zhao, Hong

    2014-03-01

    Texture features from chest CT images have become an important and efficient factor in the malignancy assessment of pulmonary nodules in Computer-Aided Diagnosis (CADx). In this paper, we focus on extracting as few efficient texture features as needed, which can be combined with other classical features (e.g. size, shape, growth rate, etc.) to assist lung nodule diagnosis. Based on a typical texture feature calculation algorithm, namely Haralick features derived from the gray-tone spatial-dependence matrices, we calculated two-dimensional (2D) and three-dimensional (3D) Haralick features from the CT images of 905 nodules. All of the CT images were downloaded from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), which is the largest public chest database. The 3D Haralick feature model, computed over thirteen directions, contains more information from the relationships between neighboring voxels in different slices than the 2D features computed over only four directions. After comparing the efficiency of 2D and 3D Haralick features for nodule diagnosis, the principal component analysis (PCA) algorithm was used to extract as few efficient texture features as needed. To achieve an objective assessment of the texture features, a support vector machine classifier was trained and tested repeatedly one hundred times, and the statistical results of the classification experiments were described by an average receiver operating characteristic (ROC) curve. The mean value (0.8776) of the area under the ROC curves in our experiments shows that the two extracted 3D Haralick projected features have the potential to assist the classification of benign and malignant nodules.

  16. Extraction of tidal streams from a ship-borne acoustic Doppler current profiler using a statistical-dynamical model

    NASA Astrophysics Data System (ADS)

    Dowd, Michael; Thompson, Keith R.

    1996-04-01

    We present a method for extracting the barotropic tide directly from the time-space series of horizontal velocity obtained by a ship-borne acoustic Doppler current profiler (ADCP). The method is conceptually straightforward, easy to implement, and suitable for operational use. It involves fitting a limited area tidal model, based on the linearized depth-averaged shallow water equations, to the ADCP record. The flows across the open boundaries of the model domain are assumed periodic in time with known frequencies corresponding to the tidal constituents of interest. The unknown tidal amplitudes and phases at the boundary are estimated from interior ADCP velocities using an inverse method; the solution of the shallow water equations is posed as a boundary value problem in the frequency domain, and the estimation procedure is based on generalized least squares regression. Results obtained include tidal maps, a tidal residual series, and associated error estimates. An application of the method to ship ADCP data collected on a cruise to the Western Bank region of the Scotian Shelf off the east coast of Canada is described. The tidal estimates and the residual field obtained are verified by comparison to other data collected during the cruise. The residual circulation shows an anticyclonic gyre centered on the crest of Western Bank and a northward current to the west of this region.
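
    The full method fits a dynamical shallow-water model to the ship track, which is beyond a short example; the sketch below shows only the simpler underlying idea of least-squares estimation of tidal amplitude and phase from an irregularly sampled velocity record at known constituent frequencies. The constituents, record length, and noise level are assumptions.

```python
import numpy as np

# Constituent angular frequencies in rad/hour (M2 ~12.42 h, K1 ~23.93 h periods).
freqs = 2 * np.pi / np.array([12.4206, 23.9345])

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 30 * 24, 400))                # 400 samples over ~30 days (hours)
true_amp, true_phase = np.array([0.40, 0.15]), np.array([1.0, 2.5])
u = sum(a * np.cos(w * t - p) for a, w, p in zip(true_amp, freqs, true_phase))
u = u + 0.05 + rng.normal(0, 0.03, t.size)               # residual flow + noise

# Design matrix: mean flow plus a cos/sin pair per constituent.
G = np.column_stack([np.ones_like(t)] +
                    [f(w * t) for w in freqs for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(G, u, rcond=None)

for i, name in enumerate(("M2", "K1")):
    c, s = coef[1 + 2 * i], coef[2 + 2 * i]
    print(f"{name}: amplitude {np.hypot(c, s):.3f} m/s, phase {np.arctan2(s, c):.2f} rad")
```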

  17. Pelvis feature extraction and classification of Cardiff body match rig base measurements for input into a knowledge-based system.

    PubMed

    Partlow, Adam; Gibson, Colin; Kulon, Janusz; Wilson, Ian; Wilcox, Steven

    2012-11-01

    The purpose of this paper is to determine whether it is possible to use an automated measurement tool to clinically classify clients who are wheelchair users with severe musculoskeletal deformities, replacing the current process which relies upon clinical engineers with advanced knowledge and skills. Clients' body shapes were captured using the Cardiff Body Match (CBM) Rig developed by the Rehabilitation Engineering Unit (REU) at Rookwood Hospital in Cardiff. A bespoke feature extraction algorithm was developed that estimates the position of external landmarks on clients' pelvises so that useful measurements can be obtained. The outputs of the feature extraction algorithms were compared to CBM measurements where the positions of the client's pelvis landmarks were known. The results show that using the extracted features facilitated classification. Qualitative analysis showed that the estimated positions of the landmark points were close enough to their actual positions to be useful to clinicians undertaking clinical assessments.

  18. A MapReduce scheme for image feature extraction and its application to man-made object detection

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Chen, Honghui

    2013-07-01

    A fundamental challenge in image engineering is how to locate objects of interest in high-resolution images with efficient detection performance. Several man-made object detection approaches have been proposed, but the majority of these methods are not truly time-saving and suffer from a low degree of detection precision. To address this issue, we propose a novel approach for man-made object detection in aerial images that employs a MapReduce scheme for large-scale image analysis to support image feature extraction, which can be widely applied to compute-intensive tasks in a highly parallel way, together with texture feature extraction and clustering. Comprehensive experiments show that the parallel framework saves substantial time in feature extraction while achieving satisfactory object detection performance.
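
    The map/reduce idea itself can be illustrated with nothing more than the standard library: a map step that computes per-tile texture statistics in parallel and a reduce step that aggregates them. The snippet is a toy analogue using multiprocessing rather than Hadoop-style MapReduce, and the tile size and statistics are arbitrary.

```python
import numpy as np
from multiprocessing import Pool

def map_features(tile):
    """Map step: simple texture statistics for one image tile."""
    return np.array([tile.mean(), tile.std(), np.abs(np.diff(tile, axis=1)).mean()])

def reduce_features(per_tile):
    """Reduce step: aggregate per-tile vectors into one scene descriptor."""
    stacked = np.vstack(per_tile)
    return stacked.mean(axis=0), stacked.max(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((1024, 1024))                    # stand-in for an aerial image
    tiles = [scene[i:i + 128, j:j + 128]
             for i in range(0, 1024, 128) for j in range(0, 1024, 128)]
    with Pool(processes=4) as pool:
        per_tile = pool.map(map_features, tiles)        # parallel map step
    mean_vec, max_vec = reduce_features(per_tile)       # reduce step
    print("scene mean feature vector:", np.round(mean_vec, 3))
```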

  19. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2011-12-01

    Automatic brain tissue segmentation is a crucial task in the analysis of medical images for diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebrospinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding from H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  1. Diagnostic efficacy of computer extracted image features in optical coherence tomography of the precancerous cervix

    PubMed Central

    Kang, Wei; Qi, Xin; Tresser, Nancy J.; Kareta, Margarita; Belinson, Jerome L.; Rollins, Andrew M.

    2011-01-01

    Purpose: To determine the diagnostic efficacy of optical coherence tomography (OCT) to identify cervical intraepithelial neoplasia (CIN) grade 2 or higher by computer-aided diagnosis (CADx). Methods: OCT has been investigated as a screening/diagnostic tool in the management of preinvasive and early invasive cancers of the uterine cervix. In this study, an automated algorithm was developed to extract OCT image features and identify CIN 2 or higher. First, the cervical epithelium was detected by a combined watershed and active contour method. Second, four features were calculated: The thickness of the epithelium and its standard deviation and the contrast between the epithelium and the stroma and its standard deviation. Finally, linear discriminant analysis was applied to classify images into two categories: Normal/inflammation/CIN 1 and CIN 2/CIN 3. The algorithm was applied to 152 images (74 patients) obtained from an international study. Results: The numbers of normal/inflammatory/CIN 1/CIN 2/CIN 3 images are 74, 29, 14, 24, and 11, respectively. Tenfold cross-validation predicted the algorithm achieved a sensitivity of 51% (95% CI: 36%–67%) and a specificity of 92% (95% CI: 86%–96%) with an empirical two-category prior probability estimated from the data set. Receiver operating characteristic analysis yielded an area under the curve of 0.86. Conclusions: The diagnostic efficacy of CADx in OCT imaging to differentiate high-grade CIN from normal/low grade CIN is demonstrated. The high specificity of OCT with CADx suggests further investigation as an effective secondary screening tool when combined with a highly sensitive primary screening tool. PMID:21361180

  2. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
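
    For orientation, plain matching pursuit over an explicit dictionary is sketched below; the paper's composite (e.g. modulation-enriched) dictionary, single-atom optimization, and attenuation-based stopping rule are not reproduced, and the toy signal and dictionary are invented.

```python
import numpy as np

def matching_pursuit(signal, dictionary, max_atoms=10, tol=1e-3):
    """dictionary: (n_atoms, n_samples) with unit-norm rows. Greedy sparse decomposition."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(max_atoms):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[k]
        if np.linalg.norm(residual) < tol * np.linalg.norm(signal):
            break
    return coeffs, residual

# Toy "vibration" signal: two sinusoidal atoms plus noise; cosine dictionary.
n, fs = 512, 1000.0
t = np.arange(n) / fs
atoms = np.array([np.cos(2 * np.pi * f * t) for f in np.arange(5, 305, 5)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
signal = 3 * atoms[9] + 1.5 * atoms[29] + 0.05 * np.random.default_rng(0).normal(size=n)

coeffs, residual = matching_pursuit(signal, atoms)
print("two strongest atoms:", np.argsort(np.abs(coeffs))[-2:])        # expect 29 and 9
print("residual energy ratio:",
      round(float(np.linalg.norm(residual) / np.linalg.norm(signal)), 3))
```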

  3. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  4. Feature extraction and wall motion classification of 2D stress echocardiography with support vector machines

    NASA Astrophysics Data System (ADS)

    Chykeyuk, Kiryl; Clifton, David A.; Noble, J. Alison

    2011-03-01

    Stress echocardiography is a common clinical procedure for diagnosing heart disease. Clinically, diagnosis of the heart wall motion depends mostly on visual assessment, which is highly subjective and operator-dependent. Introduction of automated methods for heart function assessment have the potential to minimise the variance in operator assessment. Automated wall motion analysis consists of two main steps: (i) segmentation of heart wall borders, and (ii) classification of heart function as either "normal" or "abnormal" based on the segmentation. This paper considers automated classification of rest and stress echocardiography. Most previous approaches to the classification of heart function have considered rest or stress data separately, and have only considered using features extracted from the two main frames (corresponding to the end-of-diastole and end-of-systole). One previous attempt [1] has been made to combine information from rest and stress sequences utilising a Hidden Markov Model (HMM), which has proven to be the best performing approach to date. Here, we propose a novel alternative feature selection approach using combined information from rest and stress sequences for motion classification of stress echocardiography, utilising a Support Vector Machines (SVM) classifier. We describe how the proposed SVM-based method overcomes difficulties that occur with HMM classification. Overall accuracy with the new method for global wall motion classification using datasets from 173 patients is 92.47%, and the accuracy of local wall motion classification is 87.20%, showing that the proposed method outperforms the current state-of-the-art HMM-based approach (for which global and local classification accuracy is 82.15% and 78.33%, respectively).

  5. Extraction of morphological features from biological models and cells by Fourier analysis of static light scatter measurements

    SciTech Connect

    Burger, D.E.; Jett, J.H.; Mullaney, P.F.

    1982-03-01

    Models of biological cells of varying geometric complexity were used to generate data to test a method of extracting geometric features from light scatter distributions. Measurements of the dynamic range and angular distribution of intensity and light scatter from these models were compared to the distributions predicted by a complete theory of light scatter (Mie) and by diffraction theory (Fraunhofer). An approximation to the Fraunhofer theory provides a means of obtaining size and shape features from the data by spectrum analysis. Experimental verification using nucleated erythrocytes as the biological material shows the potential application of this method for the extraction of important size and shape parameters from light scatter data.

  6. Sparsity-enabled signal decomposition using tunable Q-factor wavelet transform for fault feature extraction of gearbox

    NASA Astrophysics Data System (ADS)

    Cai, Gaigai; Chen, Xuefeng; He, Zhengjia

    2013-12-01

    Localized faults in gearboxes tend to result in periodic shocks and thus arouse periodic responses in vibration signals. Feature extraction has always been a key problem for localized fault diagnosis. This paper proposes a new fault feature extraction technique for gearboxes by using sparsity-enabled signal decomposition method. The sparsity-enabled signal decomposition method separates signals based on the oscillatory behavior of the signal rather than the frequency or scale. Thus, the fault feature can be nonlinearly extracted from vibration signals. During the implementation of the proposed method, tunable Q-factor wavelet transform, for which the Q-factor can be easily specified, is adopted to represent vibration signals in a sparse way, and then morphological component analysis (MCA) is employed to estimate and separate the distinct components. The corresponding optimization problem of MCA is solved by the split augmented Lagrangian shrinkage algorithm (SALSA). With the proposed method, vibration signals of the faulty gearbox can be nonlinearly decomposed into high-oscillatory component and low-oscillatory component which is the fault feature of gearboxes. To evaluate the performance of the proposed method, this paper investigates the effect of two parameters pertinent to MCA and SALSA: the Lagrange multiplier and the penalty parameter. The effectiveness of the proposed method is verified by both the simulated and practical gearbox vibration signals. Results show the proposed method outperforms empirical mode decomposition and spectral kurtosis in extracting fault features of gearboxes.

  7. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    PubMed Central

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-01-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645
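
    Step (iv), phase correlation matching, has a compact FFT formulation: normalize the cross-power spectrum of the two phase-pixel images to keep only phase, and read the shift off the peak of its inverse transform. The sketch below shows that core operation on synthetic data; sub-pixel refinement and the multi-scale feature extraction step are omitted.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation of img_b relative to img_a."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12                 # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_a.shape                             # map peak location to a signed shift
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(0)
left = rng.random((128, 128))
right = np.roll(left, shift=(0, 7), axis=(0, 1))   # simulate a 7-pixel phase shift
print("estimated shift (dy, dx):", phase_correlation_shift(left, right))
```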

  8. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    PubMed

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.

  9. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.
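
    One way to train a layer analytically, without iterative optimization, is an extreme-learning-machine-style autoencoder in which the decoder is obtained by regularized least squares and the hidden activations serve as the extracted features. The sketch below illustrates that general idea only; the layer sizes, ridge parameter, and random projection are assumptions, not the authors' network.

        import numpy as np

        def train_layer(X, n_hidden, ridge=1e-3, seed=0):
            # Encode X (n_samples x n_features) with a random projection and a tanh
            # nonlinearity, then solve the linear decoder in closed form.
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            H = np.tanh(X @ W + b)                      # hidden activations = features
            # Closed-form ridge regression for the reconstruction weights.
            A = H.T @ H + ridge * np.eye(n_hidden)
            beta = np.linalg.solve(A, H.T @ X)
            reconstruction_error = np.mean((H @ beta - X) ** 2)
            return H, (W, b, beta), reconstruction_error

        # Stacking layers: feed the features of one layer into the next.
        X = np.random.rand(200, 64)                     # placeholder spectra
        H1, params1, err1 = train_layer(X, n_hidden=32)
        H2, params2, err2 = train_layer(H1, n_hidden=16)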

  10. Modeling ground vehicle acoustic signatures for analysis and synthesis

    SciTech Connect

    Haschke, G.; Stanfield, R.

    1995-07-01

    Security and weapon systems use acoustic sensor signals to classify and identify moving ground vehicles. Developing robust signal processing algorithms for this is expensive, particularly in the presence of acoustic clutter or countermeasures. This paper proposes a parametric ground vehicle acoustic signature model to aid the system designer in understanding which signature features are important, developing corresponding feature extraction algorithms and generating low-cost, high-fidelity synthetic signatures for testing. The authors have proposed computer-generated acoustic signatures of armored, tracked ground vehicles to deceive acoustic-sensored smart munitions. They have developed quantitative measures of how accurately a synthetic acoustic signature matches those produced by actual vehicles. This paper describes parameters of the model used to generate these synthetic signatures and suggests methods for extracting these parameters from signatures of valid vehicle encounters. The model incorporates wide-bandwidth and narrow-bandwidth components that are modulated in a pseudo-random fashion to mimic the time dynamics of valid vehicle signatures. Narrow-bandwidth feature extraction techniques estimate frequency, amplitude and phase information contained in a single set of narrow frequency-band harmonics. Wide-bandwidth feature extraction techniques estimate parameters of a correlated-noise-floor model. Finally, the authors propose a method of modeling the time dynamics of the harmonic amplitudes as a means of adding necessary time-varying features to the narrow-bandwidth signal components. The authors present results of applying this modeling technique to acoustic signatures recorded during encounters with one armored, tracked vehicle. Similar modeling techniques can be applied to security systems.

  11. Pathologic stratification of operable lung adenocarcinoma using radiomics features extracted from dual energy CT images

    PubMed Central

    Lee, Ho Yun; Sohn, Insuk; Kim, Hye Seung; Son, Ji Ye; Kwon, O Jung; Choi, Joon Young; Lee, Kyung Soo; Shim, Young Mog

    2017-01-01

    Purpose: To evaluate the usefulness of surrogate biomarkers as predictors of histopathologic tumor grade and aggressiveness using radiomics data from dual-energy computed tomography (DECT), with the ultimate goal of accomplishing stratification of early-stage lung adenocarcinoma for optimal treatment. Results: Pathologic grade was divided into grades 1, 2, and 3. Multinomial logistic regression analysis revealed i-uniformity and 97.5th percentile CT attenuation value as independent significant factors to stratify grade 2 or 3 from grade 1. The AUC value calculated from the leave-one-out cross-validation procedure for discriminating grades 1, 2, and 3 was 0.9307 (95% CI: 0.8514–1), 0.8610 (95% CI: 0.7547–0.9672), and 0.8394 (95% CI: 0.7045–0.9743), respectively. Materials and Methods: A total of 80 patients with 91 clinically and radiologically suspected stage I or II lung adenocarcinomas were prospectively enrolled. All patients underwent DECT and F-18-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT, followed by surgery. Quantitative CT and PET imaging characteristics were evaluated using a radiomics approach. Significant features for a tumor aggressiveness prediction model were extracted and used to calculate diagnostic performance for predicting all pathologic grades. Conclusions: Quantitative radiomics values from DECT imaging metrics can help predict pathologic aggressiveness of lung adenocarcinoma. PMID:27880938

  12. Image Outlier Detection and Feature Extraction via L1-Norm-Based 2D Probabilistic PCA.

    PubMed

    Ju, Fujiao; Sun, Yanfeng; Gao, Junbin; Hu, Yongli; Yin, Baocai

    2015-12-01

    This paper introduces an L1-norm-based probabilistic principal component analysis model on 2D data (L1-2DPPCA) based on the assumption of the Laplacian noise model. The Laplacian or L1 density function can be expressed as a superposition of an infinite number of Gaussian distributions. Under this expression, a Bayesian inference can be established based on the variational expectation maximization approach. All the key parameters in the probabilistic model can be learned by the proposed variational algorithm. It has experimentally been demonstrated that the newly introduced hidden variables in the superposition can serve as an effective indicator for data outliers. Experiments on some publicly available databases show that the performance of L1-2DPPCA has largely been improved after identifying and removing sample outliers, resulting in more accurate image reconstruction than the existing PCA-based methods. The performance of feature extraction of the proposed method generally outperforms other existing algorithms in terms of reconstruction errors and classification accuracy.

  13. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    PubMed

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

    Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data is limited to actuator positions (one per two link finger) and force sensors values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.

  14. Biometric analysis of the palm vein distribution by means two different techniques of feature extraction

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical identification methods. Furthermore, these patterns can be used for venipuncture in the health field to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate tissue and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our method of analysis comprises several steps, the first of which is the enhancement of the acquired images, implemented with spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of the vein patterns. The above process is aimed at recognizing people from images of their palm-dorsal vein distributions obtained under near-infrared light. This work compares two different feature extraction techniques, moments and veincode. The classification task is achieved using Artificial Neural Networks. Two databases are used to analyze the performance of the algorithms. The first database used here is owned by the Hong Kong Polytechnic University and the second one is our own database.

  15. Adaptive Redundant Lifting Wavelet Transform Based on Fitting for Fault Feature Extraction of Roller Bearings

    PubMed Central

    Yang, Zijing; Cai, Ligang; Gao, Lixin; Wang, Huaqing

    2012-01-01

    A least squares method based on data fitting is proposed to construct a new lifting wavelet; combined with a nonlinear scheme and a redundant algorithm, the adaptive redundant lifting transform based on fitting is first stated in this paper. By varying the combination of basis function, sample number, and basis function dimension, a total of nine wavelets with different characteristics are constructed, which are adopted in turn to perform redundant lifting wavelet transforms on the low-frequency approximation signals at each layer. Then the normalized lp norms of the new node signals obtained through decomposition are calculated to adaptively determine the optimal wavelet for the decomposed approximation signal. Next, the original signal undergoes subsection power spectrum analysis to choose the node signal for single-branch reconstruction and demodulation. Experimental signals and engineering signals are used to verify the above method, and the results show that bearing faults can be diagnosed more effectively by the method presented here than by either spectrum analysis or demodulation analysis. Meanwhile, compared with the symmetrical wavelets constructed with the Lagrange interpolation algorithm, the asymmetrical wavelets constructed based on data fitting are more suitable for feature extraction from fault signals of roller bearings. PMID:22666035

  16. Extraction and evaluation of gas-flow-dependent features from dynamic measurements of gas sensors array

    NASA Astrophysics Data System (ADS)

    Kalinowski, Paweł; Woźniak, Łukasz; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    Gas analyzers based on gas sensors are devices which enable the recognition of various kinds of volatile compounds. They have been continuously developed and investigated for over three decades; however, there are still limitations which slow down the implementation of these devices in many applications. The main drawbacks, for example, are the lack of selectivity, sensitivity, and long-term stability of these devices caused by the drift of the utilized sensors. This implies the need for research not only on the construction of gas sensors, but also on measurement procedures and methods of analyzing sensor responses which compensate for the limitations of the sensing devices. One field of investigation covers dynamic measurements of sensor or sensor-array responses with the use of flow modulation techniques. Different gas delivery patterns make it possible to extract unique features which improve the stability and selectivity of gas detecting systems. In this article three flow modulation techniques are presented, together with a proposed method for evaluating their usefulness and robustness in systems for detecting environmental pollutants. The results of dynamic measurements of a commercially available TGS sensor array in the presence of nitrogen dioxide and ammonia are shown.

  17. A spatial division clustering method and low dimensional feature extraction technique based indoor positioning system.

    PubMed

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-22

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins; clustering methods are therefore widely applied as a solution. Traditional clustering methods in positioning systems, however, can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability to the asymmetric matching problem.
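
    The fine-localization stage can be approximated with scikit-learn's KernelPCA followed by a simple regressor on the reduced fingerprints. This is an illustrative sketch rather than the SDC system itself; the RBF kernel, the number of components, and the k-NN regressor are assumptions.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KNeighborsRegressor

        # rss:    (n_samples, n_aps) received signal strength fingerprints
        # coords: (n_samples, 2)     known (x, y) positions for the radio map
        rss = np.random.rand(500, 30)          # placeholder radio map
        coords = np.random.rand(500, 2) * 50.0

        kpca = KernelPCA(n_components=5, kernel='rbf', gamma=0.1)
        features = kpca.fit_transform(rss)     # low-dimensional fingerprint features

        locator = KNeighborsRegressor(n_neighbors=4).fit(features, coords)

        # Online phase: project a new measurement and look up its position.
        new_rss = np.random.rand(1, 30)
        estimate = locator.predict(kpca.transform(new_rss))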

  18. A new feature extraction method for signal classification applied to cord dorsum potential detection

    NASA Astrophysics Data System (ADS)

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
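
    A simplified version of this pipeline (smoothing by convolution, coefficients from the main local maxima, then gradient boosting) might look like the sketch below; the smoothing window, the number of peaks kept, and the classifier settings are illustrative assumptions rather than the authors' choices.

        import numpy as np
        from scipy.signal import find_peaks
        from sklearn.ensemble import GradientBoostingClassifier

        def cdp_features(signal, n_peaks=5, smooth_len=11):
            # Smooth a segment by convolution, then describe its main local maxima
            # by amplitude and distance to the dominant maximum.
            kernel = np.ones(smooth_len) / smooth_len
            s = np.convolve(signal, kernel, mode='same')
            peaks, props = find_peaks(s, height=0)
            if len(peaks) == 0:
                return np.zeros(n_peaks)
            order = np.argsort(props['peak_heights'])[::-1][:n_peaks]
            main = peaks[order[0]]
            coeffs = [s[p] / (1.0 + abs(p - main)) for p in peaks[order]]
            coeffs += [0.0] * (n_peaks - len(coeffs))       # pad to fixed length
            return np.array(coeffs)

        # X_raw: recorded segments; y: 1 for a CDP event, 0 for background.
        X_raw = [np.random.randn(500) for _ in range(100)]
        y = np.random.randint(0, 2, size=100)
        X = np.vstack([cdp_features(seg) for seg in X_raw])
        clf = GradientBoostingClassifier().fit(X, y)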

  19. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem aspect. PMID:24451470

  20. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 2: frequency domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    Knowledge of the structure of a flow is highly significant for the proper conduct of a number of industrial processes. In this case, a description of two-phase flow regimes is possible through time-series analysis, e.g. in the frequency domain. In this article the classical spectral analysis based on the Fourier Transform (FT) and the Short-Time Fourier Transform (STFT) was applied to the analysis of signals obtained for water-air flow using gamma-ray absorption. The presented method is illustrated using data collected in experiments carried out on a laboratory hydraulic installation with a horizontal pipe of 4.5 m length and an inner diameter of 30 mm, equipped with two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals. Stochastic signals obtained from the detectors for plug, bubble, and transitional plug-bubble flows were considered in this work. The recorded raw signals were analyzed and several features in the frequency domain were extracted using the autospectral density function (ADF), the cross-spectral density function (CSDF), and the STFT spectrogram. A detailed analysis showed that the most promising features for recognizing the flow structure are: the maximum value of the CSDF magnitude, the sum of the CSDF magnitudes in the selected frequency range, and the maximum value of the sum of selected amplitudes of the STFT spectrogram.
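
    The frequency-domain features named at the end (the maximum cross-spectral magnitude, its sum over a selected band, and the maximum summed STFT amplitude) can be computed with SciPy roughly as follows; the sampling rate, segment lengths, and band limits are placeholders, not the values used in the experiments.

        import numpy as np
        from scipy.signal import csd, stft

        fs = 1000.0                                   # sampling rate (placeholder)
        x = np.random.randn(60_000)                   # upstream detector signal
        y = np.random.randn(60_000)                   # downstream detector signal

        # Cross-spectral density between the two detector signals.
        f, Pxy = csd(x, y, fs=fs, nperseg=2048)
        mag = np.abs(Pxy)
        max_csd = mag.max()
        band = (f >= 0.5) & (f <= 10.0)               # selected frequency range
        band_csd_sum = mag[band].sum()

        # STFT spectrogram of one signal; sum amplitudes over frequency per frame.
        f_s, t_s, Z = stft(x, fs=fs, nperseg=1024)
        max_stft_sum = np.abs(Z).sum(axis=0).max()

        features = np.array([max_csd, band_csd_sum, max_stft_sum])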

  1. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  2. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.

  3. Acoustic neuroma

    MedlinePlus

    Vestibular schwannoma; Tumor - acoustic; Cerebellopontine angle tumor; Angle tumor; Hearing loss - acoustic; Tinnitus - acoustic ... Acoustic neuromas have been linked with the genetic disorder neurofibromatosis type 2 (NF2). Acoustic neuromas are uncommon.

  4. Texture feature extraction based on wavelet transform and gray-level co-occurrence matrices applied to osteosarcoma diagnosis.

    PubMed

    Hu, Shan; Xu, Chao; Guan, Weiqiao; Tang, Yong; Liu, Yana

    2014-01-01

    Osteosarcoma is the most common malignant bone tumor among children and adolescents. In this study, image texture analysis was performed to extract texture features from bone CR images to evaluate the recognition rate of osteosarcoma. To obtain the optimal set of features, Sym4 and Db4 wavelet transforms and gray-level co-occurrence matrices were applied to the image, with statistical methods being used to maximize the feature selection. To evaluate the performance of these methods, a support vector machine algorithm was used. The experimental results demonstrated that the Sym4 wavelet had a higher classification accuracy (93.44%) than the Db4 wavelet with respect to osteosarcoma occurrence in the epiphysis, whereas the Db4 wavelet had a higher classification accuracy (96.25%) for osteosarcoma occurrence in the diaphysis. Results including accuracy, sensitivity, specificity and ROC curves obtained using the wavelets were all higher than those obtained using the features derived from the GLCM method. It is concluded that a set of texture features can be extracted from the wavelets and used in computer-aided osteosarcoma diagnosis systems. In addition, this study also confirms that multi-resolution analysis is a useful tool for texture feature extraction during bone CR image processing.
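
    A compact version of this feature pipeline, combining wavelet-subband statistics with gray-level co-occurrence properties and an SVM, could be sketched as below. It assumes PyWavelets, scikit-image 0.19 or later (where the functions are spelled graycomatrix/graycoprops), 8-bit images, and default SVM settings rather than the study's tuned configuration.

        import numpy as np
        import pywt
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC

        def texture_features(img):
            # img: 2D uint8 array. Returns wavelet-subband energies plus GLCM stats.
            cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'db4')
            wavelet_energy = [np.mean(np.abs(c)) for c in (cA, cH, cV, cD)]
            # Gray-level co-occurrence matrix at distance 1, horizontal offset.
            glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            glcm_stats = [graycoprops(glcm, p)[0, 0]
                          for p in ('contrast', 'homogeneity', 'energy', 'correlation')]
            return np.array(wavelet_energy + glcm_stats)

        # images: 2D uint8 CR patches; labels: osteosarcoma vs. normal (placeholders).
        images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
        labels = np.random.randint(0, 2, size=40)
        X = np.vstack([texture_features(im) for im in images])
        clf = SVC(kernel='rbf').fit(X, labels)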

  5. Flow and Acoustic Features of a Mach 0.9 Free Jet Using High-Frequency Excitation

    NASA Astrophysics Data System (ADS)

    Upadhyay, Puja; Alvi, Farrukh

    2016-11-01

    This study focuses on active control of a Mach 0.9 (ReD = 6 × 10^5) free jet using high-frequency excitation for noise reduction. Eight resonance-enhanced microjet actuators with nominal frequencies of 25 kHz (StD ≈ 2.2) are used to excite the shear layer at frequencies that are approximately an order of magnitude higher than the jet preferred frequency. The influence of control on mean and turbulent characteristics of the jet is studied using Particle Image Velocimetry. Additionally, far-field acoustic measurements are acquired to estimate the effect of pulsed injection on noise characteristics of the jet. Flow field measurements revealed that strong streamwise vortex pairs, formed as a result of control, result in a significantly thicker initial shear layer. This excited shear layer is also prominently undulated, resulting in a modified initial velocity profile. Also, the distribution of turbulent kinetic energy revealed that forcing results in increased turbulence levels for near-injection regions, followed by a global reduction for all downstream locations. Far-field acoustic measurements showed noise reductions at low to moderate frequencies. Additionally, an increase in high-frequency noise, mostly dominated by the actuators' resonant noise, was observed. AFOSR and ARO.

  6. Fault feature extraction of gearbox by using overcomplete rational dilation discrete wavelet transform on signals measured from vibration sensors

    NASA Astrophysics Data System (ADS)

    Chen, Binqiang; Zhang, Zhousuo; Sun, Chuang; Li, Bing; Zi, Yanyang; He, Zhengjia

    2012-11-01

    Gearbox fault diagnosis is very important for preventing catastrophic accidents. Vibration signals of gearboxes measured by sensors are useful and dependable as they carry key information related to the mechanical faults in gearboxes. Effective signal processing techniques are in strong demand to extract the fault features contained in the collected gearbox vibration signals. The overcomplete rational dilation discrete wavelet transform (ORDWT) enjoys attractive properties such as better shift-invariance, adjustable time-frequency distributions and flexible wavelet atoms of tunable oscillation in comparison with the classical dyadic wavelet transform (DWT). Due to these advantages, ORDWT is presented as a versatile tool that can be adapted to analysis of gearbox fault features of different types, especially in analyzing the non-stationary and transient characteristics of the signals. Aiming to extract the various types of fault features confronted in gearbox fault diagnosis, a fault feature extraction technique based on ORDWT is proposed in this paper. In the routine of the proposed technique, ORDWT is used as the pre-processing decomposition tool, and a corresponding post-processing method is combined with ORDWT to extract the fault feature of a specific type. For extracting periodical impulses in the signal, an impulse matching algorithm is presented. In this algorithm, ORDWT bases of varied time-frequency distributions and varied oscillatory natures are adopted; moreover, an improved signal impulsiveness measure derived from kurtosis is developed for choosing optimal ORDWT bases that perfectly match the hidden periodical impulses. For demodulation purposes, an improved instantaneous time-frequency spectrum (ITFS), based on the combination of ORDWT and Hilbert transform, is presented. For signal denoising applications, ORDWT is enhanced by a neighboring-coefficient shrinkage strategy as well as a subband selection step to reveal the buried transient vibration contents. The

  7. Robust sensing of approaching vehicles relying on acoustic cues.

    PubMed

    Mizumachi, Mitsunori; Kaminuma, Atsunobu; Ono, Nobutaka; Ando, Shigeru

    2014-05-30

    The latest developments in automobile design have allowed vehicles to be equipped with various sensing devices. Multiple sensors such as cameras and radar systems can be used simultaneously for active safety systems in order to overcome the blind spots of individual sensors. This paper proposes a novel sensing technique for detecting and tracking an approaching vehicle relying on an acoustic cue. First, it is necessary to extract a robust spatial feature from noisy acoustical observations. In this paper, the spatio-temporal gradient method is employed for the feature extraction. Then, the spatial feature is filtered through sequential state estimation. A particle filter is employed to cope with a highly non-linear problem. The feasibility of the proposed method has been confirmed with real acoustical observations, which were obtained by microphones outside a cruising vehicle.

  8. Robust Sensing of Approaching Vehicles Relying on Acoustic Cues

    PubMed Central

    Mizumachi, Mitsunori; Kaminuma, Atsunobu; Ono, Nobutaka; Ando, Shigeru

    2014-01-01

    The latest developments in automobile design have allowed vehicles to be equipped with various sensing devices. Multiple sensors such as cameras and radar systems can be used simultaneously for active safety systems in order to overcome the blind spots of individual sensors. This paper proposes a novel sensing technique for detecting and tracking an approaching vehicle relying on an acoustic cue. First, it is necessary to extract a robust spatial feature from noisy acoustical observations. In this paper, the spatio-temporal gradient method is employed for the feature extraction. Then, the spatial feature is filtered through sequential state estimation. A particle filter is employed to cope with a highly non-linear problem. The feasibility of the proposed method has been confirmed with real acoustical observations, which were obtained by microphones outside a cruising vehicle. PMID:24887038

  9. Computer extracted texture features on T2w MRI to predict biochemical recurrence following radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Ginsburg, Shoshana B.; Rusu, Mirabela; Kurhanewicz, John; Madabhushi, Anant

    2014-03-01

    In this study we explore the ability of a novel machine learning approach, in conjunction with computer-extracted features describing prostate cancer morphology on pre-treatment MRI, to predict whether a patient will develop biochemical recurrence within ten years of radiation therapy. Biochemical recurrence, which is characterized by a rise in serum prostate-specific antigen (PSA) of at least 2 ng/mL above the nadir PSA, is associated with increased risk of metastasis and prostate cancer-related mortality. Currently, risk of biochemical recurrence is predicted by the Kattan nomogram, which incorporates several clinical factors to predict the probability of recurrence-free survival following radiation therapy (but has limited prediction accuracy). Semantic attributes on T2w MRI, such as the presence of extracapsular extension and seminal vesicle invasion and surrogate measurements of tumor size, have also been shown to be predictive of biochemical recurrence risk. While the correlation between biochemical recurrence and factors like tumor stage, Gleason grade, and extracapsular spread are well-documented, it is less clear how to predict biochemical recurrence in the absence of extracapsular spread and for small tumors fully contained in the capsule. Computer-extracted texture features, which quantitatively describe tumor micro-architecture and morphology on MRI, have been shown to provide clues about a tumor's aggressiveness. However, while computer-extracted features have been employed for predicting cancer presence and grade, they have not been evaluated in the context of predicting risk of biochemical recurrence. This work seeks to evaluate the role of computer-extracted texture features in predicting risk of biochemical recurrence on a cohort of sixteen patients who underwent pre-treatment 1.5 Tesla (T) T2w MRI. We extract a combination of first-order statistical, gradient, co-occurrence, and Gabor wavelet features from T2w MRI. To identify which of these

  10. [The research on separating and extracting overlapping spectral feature lines in LIBS using damped least squares method].

    PubMed

    Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo

    2015-02-01

    In recent years, the technology of laser-induced breakdown spectroscopy has developed rapidly. As a new material composition detection technology, laser-induced breakdown spectroscopy can detect multiple elements simultaneously, quickly and simply, without any complex sample preparation, and enables field, in-situ detection of the composition of the sample under test. This technology is very promising in many fields. It is very important to separate, fit and extract spectral feature lines in laser-induced breakdown spectroscopy, which is the cornerstone of spectral feature recognition and subsequent research on the inversion of element concentrations. In order to achieve effective separation, fitting and extraction of spectral feature lines in laser-induced breakdown spectroscopy, the initial parameters for spectral line fitting before iteration were analyzed and determined. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which overlapped with another line (Fe I: 427.176 nm), was separated from it and extracted by using the damped least squares method. Based on Gauss-Newton iteration, the damped least squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback information after each iteration, in order to prevent the iteration from diverging and to ensure fast convergence. The damped least squares method helps to obtain better results when separating, fitting and extracting spectral feature lines and gives more accurate intensity values for these lines. The spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and then the intensity values of the corresponding spectral lines were obtained using the damped least squares method and the ordinary least squares method separately. The calibration curves were plotted, which showed the relationship between spectral
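
    The damped least squares (Levenberg-Marquardt) separation of two overlapping lines can be reproduced generically with SciPy by fitting a sum of two line profiles plus a baseline. The Lorentzian shape, wavelength grid, initial guesses, and noise level below are assumptions for illustration, not the paper's calibration.

        import numpy as np
        from scipy.optimize import least_squares

        def two_lorentzians(p, wl):
            # Two Lorentzian line profiles plus a constant baseline.
            a1, c1, w1, a2, c2, w2, base = p
            l1 = a1 / (1.0 + ((wl - c1) / w1) ** 2)
            l2 = a2 / (1.0 + ((wl - c2) / w2) ** 2)
            return l1 + l2 + base

        def residuals(p, wl, spectrum):
            return two_lorentzians(p, wl) - spectrum

        # wl, spectrum: a synthetic stand-in for the measured LIBS data near 427 nm.
        wl = np.linspace(426.8, 427.8, 400)
        true = two_lorentzians([800, 427.176, 0.03, 1200, 427.480, 0.03, 50], wl)
        spectrum = true + np.random.normal(0, 10, wl.size)

        p0 = [500, 427.17, 0.05, 500, 427.48, 0.05, 0]        # initial guesses
        fit = least_squares(residuals, p0, args=(wl, spectrum), method='lm')
        a_fe, _, _, a_cr, _, _, _ = fit.x                     # separated line intensities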

  11. Real-time feature extraction of P300 component using adaptive nonlinear principal component analysis

    PubMed Central

    2011-01-01

    Background: Electroencephalography (EEG) signals are known to involve the firings of neurons in the brain. The P300 wave is a high potential caused by an event-related stimulus. The detection of P300s included in the measured EEG signals is widely investigated. The difficulties in detecting them are that they are mixed with other signals generated over a large brain area and their amplitudes are very small due to the distance and resistivity differences in their transmittance. Methods: A novel real-time feature extraction method for detecting P300 waves by combining an adaptive nonlinear principal component analysis (ANPCA) and a multilayer neural network is proposed. The measured EEG signals are first filtered using a sixth-order band-pass filter with cut-off frequencies of 1 Hz and 12 Hz. The proposed ANPCA scheme consists of four steps: pre-separation, whitening, separation, and estimation. In the experiment, four different inter-stimulus intervals (ISIs) are utilized: 325 ms, 350 ms, 375 ms, and 400 ms. Results: The multi-stage principal component analysis method applied at the pre-separation step reduced the external noises and artifacts significantly. The adaptive law introduced in the whitening step made the subsequent algorithm in the separation step converge quickly. The separation performance index varied from -20 dB to -33 dB due to randomness of the source signals. The robustness of the ANPCA against background noises was evaluated by comparing the separation performance indices of the ANPCA with four algorithms (NPCA, NSS-JD, JADE, and SOBI), in which the ANPCA algorithm demonstrated the shortest iteration time with a performance index of about 0.03. On this basis, it is asserted that the ANPCA algorithm successfully separates mixed source signals. Conclusions: The independent components produced from the observed data using the proposed method illustrated that the extracted signals were clearly the P300 components elicited by task

  12. Fault feature extraction of planet gear in wind turbine gearbox based on spectral kurtosis and time wavelet energy spectrum

    NASA Astrophysics Data System (ADS)

    Kong, Yun; Wang, Tianyang; Li, Zheng; Chu, Fulei

    2017-01-01

    Planetary transmission plays a vital role in wind turbine drivetrains, and its fault diagnosis has been an important and challenging issue. Owing to the complicated and coupled vibration source, time-variant vibration transfer path, and heavy background noise masking effect, the vibration signal of planet gear in wind turbine gearboxes exhibits several unique characteristics: complex frequency components, low signal-to-noise ratio, and weak fault features. In this sense, the periodic impulsive components induced by a localized defect are hard to extract, and the fault detection of planet gear in wind turbines remains a challenging research task. Aiming to extract the fault feature of planet gear effectively, we propose a novel feature extraction method based on spectral kurtosis and time wavelet energy spectrum (SK-TWES) in the paper. Firstly, the spectral kurtosis (SK) and kurtogram of raw vibration signals are computed and exploited to select the optimal filtering parameter for the subsequent band-pass filtering. Then, the band-pass filtering is applied to extract periodic transient impulses using the optimal frequency band in which the corresponding SK value is maximal. Finally, the time wavelet energy spectrum analysis is performed on the filtered signal, selecting the Morlet wavelet as the mother wavelet which possesses a high similarity to the impulsive components. The experimental signals collected from the wind turbine gearbox test rig demonstrate that the proposed method is effective for feature extraction and fault diagnosis of a planet gear with a localized defect.
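
    The band-selection idea (choose the band where the filtered signal is most impulsive, band-pass filter there, then inspect the transient energy) can be sketched as follows; the candidate band grid and filter order are illustrative, and a Hilbert envelope stands in for the paper's Morlet time wavelet energy spectrum.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert
        from scipy.stats import kurtosis

        def best_band_by_kurtosis(x, fs, bands):
            # Band-pass filter x over each candidate band and keep the band whose
            # filtered signal has the highest kurtosis (most impulsive).
            best = None
            for lo, hi in bands:
                b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype='bandpass')
                xf = filtfilt(b, a, x)
                k = kurtosis(xf)
                if best is None or k > best[0]:
                    best = (k, (lo, hi), xf)
            return best

        fs = 20_000.0
        x = np.random.randn(int(fs))                      # placeholder vibration record
        bands = [(500 * i, 500 * (i + 2)) for i in range(1, 18)]
        k, band, filtered = best_band_by_kurtosis(x, fs, bands)
        envelope = np.abs(hilbert(filtered))              # transient energy over time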

  13. Target Identification Using Wavelet-based Feature Extraction and Neural Network Classifiers

    DTIC Science & Technology

    1999-01-01

    classification using a multilayer feedforward neural network for vehicle classification. This effort is part of a larger project aimed at developing an...Integrated Vehicle Classification System Using Wavelet / Neural Network Processing of Acoustic/Seismic Emissions on a Windows PC performed under a Phase II...forward neural network vehicle classifier employing the Levenberg-Marquardt deterministic optimization learning scheme will be presented.

  14. Acoustic biosensors

    PubMed Central

    Fogel, Ronen; Seshia, Ashwin A.

    2016-01-01

    Resonant and acoustic wave devices have been researched for several decades for application in the gravimetric sensing of a variety of biological and chemical analytes. These devices operate by coupling the measurand (e.g. analyte adsorption) as a modulation in the physical properties of the acoustic wave (e.g. resonant frequency, acoustic velocity, dissipation) that can then be correlated with the amount of adsorbed analyte. These devices can also be miniaturized with advantages in terms of cost, size and scalability, as well as potential additional features including integration with microfluidics and electronics, scaled sensitivities associated with smaller dimensions and higher operational frequencies, the ability to multiplex detection across arrays of hundreds of devices embedded in a single chip, increased throughput and the ability to interrogate a wider range of modes including within the same device. Additionally, device fabrication is often compatible with semiconductor volume batch manufacturing techniques enabling cost scalability and a high degree of precision and reproducibility in the manufacturing process. Integration with microfluidics handling also enables suitable sample pre-processing/separation/purification/amplification steps that could improve selectivity and the overall signal-to-noise ratio. Three device types are reviewed here: (i) bulk acoustic wave sensors, (ii) surface acoustic wave sensors, and (iii) micro/nano-electromechanical system (MEMS/NEMS) sensors. PMID:27365040

  15. Acoustic biosensors.

    PubMed

    Fogel, Ronen; Limson, Janice; Seshia, Ashwin A

    2016-06-30

    Resonant and acoustic wave devices have been researched for several decades for application in the gravimetric sensing of a variety of biological and chemical analytes. These devices operate by coupling the measurand (e.g. analyte adsorption) as a modulation in the physical properties of the acoustic wave (e.g. resonant frequency, acoustic velocity, dissipation) that can then be correlated with the amount of adsorbed analyte. These devices can also be miniaturized with advantages in terms of cost, size and scalability, as well as potential additional features including integration with microfluidics and electronics, scaled sensitivities associated with smaller dimensions and higher operational frequencies, the ability to multiplex detection across arrays of hundreds of devices embedded in a single chip, increased throughput and the ability to interrogate a wider range of modes including within the same device. Additionally, device fabrication is often compatible with semiconductor volume batch manufacturing techniques enabling cost scalability and a high degree of precision and reproducibility in the manufacturing process. Integration with microfluidics handling also enables suitable sample pre-processing/separation/purification/amplification steps that could improve selectivity and the overall signal-to-noise ratio. Three device types are reviewed here: (i) bulk acoustic wave sensors, (ii) surface acoustic wave sensors, and (iii) micro/nano-electromechanical system (MEMS/NEMS) sensors.

  16. Speech Music Discrimination Using Class-Specific Features

    DTIC Science & Technology

    2004-08-01

    Speech Music Discrimination Using Class-Specific Features Thomas Beierholm...between speech and music. Feature extraction is class-specific and can therefore be tailored to each class meaning that segment size, model orders...interest. Some of the applications of audio signal classification are speech/music classification [1], acoustical environmental classification [2][3

  17. Robust Neighborhood Preserving Projection by Nuclear/L2,1-Norm Regularization for Image Feature Extraction.

    PubMed

    Zhang, Zhao; Li, Fanzhang; Zhao, Mingbo; Zhang, Li; Yan, Shuicheng

    2017-04-01

    We propose two nuclear- and L2,1-norm regularized 2D neighborhood preserving projection (2DNPP) methods for extracting representative 2D image features. 2DNPP extracts neighborhood preserving features by minimizing a Frobenius norm-based reconstruction error that is very sensitive to noise and outliers in given data. To make the distance metric more reliable and robust, and to encode the neighborhood reconstruction error more accurately, we minimize the nuclear- and L2,1-norm-based reconstruction error, respectively, and measure it over each image. Technically, we propose two enhanced variants of 2DNPP, nuclear-norm-based 2DNPP and sparse reconstruction-based 2DNPP. Besides, to optimize the projection for more promising feature extraction, we also add the nuclear- and sparse L2,1-norm constraints on it accordingly, where the L2,1-norm ensures that the projection is sparse in rows so that discriminative features are learnt in the latent subspace and the nuclear-norm ensures the low-rank property of features by projecting data into their respective subspaces. By fully considering the neighborhood preserving power, using a more reliable and robust distance metric, and imposing the low-rank or sparse constraints on projections at the same time, our methods can outperform related state-of-the-art methods in a variety of simulation settings.

  18. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.
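
    The modelling idea (decimated DWT coefficients of each ERP used as regressors for a performance score) can be sketched with PyWavelets and scikit-learn as below; the wavelet, decomposition level, number of retained coefficients, and ridge penalty are placeholders rather than the report's settings.

        import numpy as np
        import pywt
        from sklearn.linear_model import Ridge

        def dwt_matrix(erps, wavelet='db4', level=4):
            # Decimated DWT of each trial, concatenated into one coefficient vector.
            return np.vstack([np.concatenate(pywt.wavedec(e, wavelet, level=level))
                              for e in erps])

        # erps: (n_trials, n_samples) single-trial ERPs; scores: performance measure.
        erps = np.random.randn(120, 256)
        scores = np.random.rand(120)

        C = dwt_matrix(erps)
        # Keep the coefficients with the highest power (variance) across trials.
        keep = 32
        idx = np.argsort(C.var(axis=0))[::-1][:keep]
        X = C[:, idx]
        model = Ridge(alpha=1.0).fit(X, scores)
        predicted = model.predict(X)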

  19. Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fenando

    2003-01-01

    In this paper, we use Matlab to model the health monitoring of a system through the information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a change due to a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change, then a pressure sensor associated with the process in the tank should also experience a step change. The step-up and step-down transitions as well as the sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system is allowed to run for a period equal to three time constants of the main process before changes occur. Then each point of the signal is selected together with trailing data collected previously. Two trailing lengths of data are selected, one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC component is removed from each set of data and the data are passed through a window, followed by calculation of the spectra for each set. In order to extract features, the signal power, peak, and spectrum are plotted versus time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of cases with Gaussian-distributed noise.
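
    The sliding-window procedure described (remove the DC component, apply a window, compute the spectrum, then track power and peak over time) maps onto a short sketch like the following, with NumPy/SciPy standing in for the Matlab model; the window length, step, and toy trace are placeholders.

        import numpy as np
        from scipy.signal import get_window

        def windowed_features(signal, win_len=256, step=32):
            # Slide a Hann window over the signal; for each position remove the DC
            # component, compute the spectrum, and record total power and peak value.
            w = get_window('hann', win_len)
            power, peak = [], []
            for start in range(0, len(signal) - win_len + 1, step):
                seg = signal[start:start + win_len]
                seg = (seg - seg.mean()) * w            # remove DC, apply window
                spec = np.abs(np.fft.rfft(seg))
                power.append(np.sum(spec ** 2))
                peak.append(spec.max())
            return np.array(power), np.array(peak)

        # Toy sensor trace: exponential step-up with a spike added during drift.
        t = np.arange(4000)
        trace = 1 - np.exp(-t / 400.0)
        trace[2000] += 0.5                               # sensor spike
        power, peak = windowed_features(trace)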

  20. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.

  1. Effect of acoustic frequency and power density on the aqueous ultrasonic-assisted extraction of grape pomace (Vitis vinifera L.) - a response surface approach.

    PubMed

    González-Centeno, María Reyes; Knoerzer, Kai; Sabarez, Henry; Simal, Susana; Rosselló, Carmen; Femenia, Antoni

    2014-11-01

    Aqueous ultrasound-assisted extraction (UAE) of grape pomace was investigated by Response Surface Methodology (RSM) to evaluate the effect of acoustic frequency (40, 80, 120 kHz), ultrasonic power density (50, 100, 150 W/L) and extraction time (5, 15, 25 min) on total phenolics, total flavonols and antioxidant capacity. All the process variables showed a significant effect on the aqueous UAE of grape pomace (p<0.05). The Box-Behnken Design (BBD) generated satisfactory mathematical models which accurately explain the behavior of the system, allowing prediction of both the extraction yield of phenolic and flavonol compounds and the antioxidant capacity of the grape pomace extracts. The optimal UAE conditions for all response factors were a frequency of 40 kHz, a power density of 150 W/L and 25 min of extraction time. Under these conditions, the aqueous UAE would achieve a maximum of 32.31 mg GA/100 g fw for total phenolics and 2.04 mg quercetin/100 g fw for total flavonols. Regarding the antioxidant capacity, the maximum predicted values were 53.47 and 43.66 mg Trolox/100 g fw for the CUPRAC and FRAP assays, respectively. When compared with organic-solvent UAE, in the present research from 12% to 38% of the total phenolic values reported in the literature were obtained, but using only water as the extraction solvent and applying lower temperatures and shorter extraction times. To the best of the authors' knowledge, no studies specifically addressing the optimization of both acoustic frequency and power density during aqueous UAE of plant materials have been previously published.

  2. Multiple-input multiple-output (MIMO) analog-to-feature converter chipsets for sub-wavelength acoustic source localization and bearing estimation

    NASA Astrophysics Data System (ADS)

    Chakrabartty, Shantanu

    2010-04-01

    Localization of acoustic sources using miniature microphone arrays poses a significant challenge due to fundamental limitations imposed by the physics of sound propagation. With sub-wavelength distances between the microphones, resolving acute localization cues becomes difficult due to precision artifacts. In this work, we present the design of a miniature microphone-array sensor based on a patented Multiple-input Multiple-output (MIMO) analog-to-feature converter (AFC) chip-set which overcomes the limitations due to precision artifacts. Measured results from fabricated prototypes demonstrate a bearing range of 0 degrees to 90 degrees with a resolution of less than 2 degrees. The power dissipation of the MIMO-ADC chip-set for this task was measured to be less than 75 microwatts, making it ideal for portable, battery-powered sniper and gunshot detection applications.

  3. Development of feature extraction analysis for a multi-functional optical profiling device applied to field engineering applications

    NASA Astrophysics Data System (ADS)

    Han, Xu; Xie, Guangping; Laflen, Brandon; Jia, Ming; Song, Guiju; Harding, Kevin G.

    2015-05-01

    In the real application environment of field engineering, a large variety of metrology tools are required by the technician to inspect part profile features. However, some of these tools are burdensome and only address a sole application or measurement. In other cases, standard tools lack the capability of accessing irregular profile features. Customers of field engineering want the next generation metrology devices to have the ability to replace the many current tools with one single device. This paper will describe a method based on the ring optical gage concept to the measurement of numerous kinds of profile features useful for the field technician. The ring optical system is composed of a collimated laser, a conical mirror and a CCD camera. To be useful for a wide range of applications, the ring optical system requires profile feature extraction algorithms and data manipulation directed toward real world applications in field operation. The paper will discuss such practical applications as measuring the non-ideal round hole with both off-centered and oblique axes. The algorithms needed to analyze other features such as measuring the width of gaps, radius of transition fillets, fall of step surfaces, and surface parallelism will also be discussed in this paper. With the assistance of image processing and geometric algorithms, these features can be extracted with a reasonable performance. Tailoring the feature extraction analysis to this specific gage offers the potential for a wider application base beyond simple inner diameter measurements. The paper will present experimental results that are compared with standard gages to prove the performance and feasibility of the analysis in real world field engineering. Potential accuracy improvement methods, a new dual ring design and future work will be discussed at the end of this paper.

  4. Wearable Sensor-Based Human Activity Recognition Method with Multi-Features Extracted from Hilbert-Huang Transform

    PubMed Central

    Xu, Huile; Liu, Jinyi; Hu, Haibo; Zhang, Yi

    2016-01-01

    Wearable sensor-based human activity recognition introduces many useful applications and services in health care, rehabilitation training, elderly monitoring and many other areas of human interaction. Existing works in this field mainly focus on recognizing activities by using traditional features extracted from the Fourier transform (FT) or wavelet transform (WT). However, these signal processing approaches are suitable for a linear signal but not for a nonlinear signal. In this paper, we investigate the characteristics of the Hilbert-Huang transform (HHT) for dealing with activity data with properties such as nonlinearity and non-stationarity. A multi-feature extraction method based on HHT is then proposed to improve the effect of activity recognition. The extracted multi-features include instantaneous amplitude (IA) and instantaneous frequency (IF) by means of empirical mode decomposition (EMD), as well as instantaneous energy density (IE) and marginal spectrum (MS) derived from Hilbert spectral analysis. Experimental studies are performed to verify the proposed approach by using the PAMAP2 dataset from the University of California, Irvine for wearable sensor-based activity recognition. Moreover, the effect of combining multi-features vs. a single feature is investigated and discussed in the scenario of a dependent subject. The experimental results show that multi-feature combination can further improve the performance measures. Finally, we test the effect of multi-feature combination in the scenario of an independent subject. Our experimental results show that we achieve recall, precision, F-measure, and accuracy of 0.9337, 0.9417, 0.9353, and 0.9377 respectively, all of which are better than the results of related works. PMID:27918414
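
    The per-component features (instantaneous amplitude, frequency, and energy density) follow from the analytic signal. In the sketch below the empirical mode decomposition itself is assumed to come from an external EMD implementation, so only the Hilbert-based feature step is shown, applied to a synthetic intrinsic mode function with a placeholder sampling rate.

        import numpy as np
        from scipy.signal import hilbert

        def hilbert_features(imf, fs):
            # Instantaneous amplitude and frequency of one intrinsic mode function,
            # plus simple summary statistics usable as activity-recognition features.
            analytic = hilbert(imf)
            ia = np.abs(analytic)                                # instantaneous amplitude
            phase = np.unwrap(np.angle(analytic))
            inst_freq = np.diff(phase) * fs / (2 * np.pi)        # instantaneous frequency
            inst_energy = ia ** 2                                # instantaneous energy density
            return {
                'ia_mean': ia.mean(), 'ia_std': ia.std(),
                'if_mean': inst_freq.mean(), 'if_std': inst_freq.std(),
                'ie_mean': inst_energy.mean(),
            }

        fs = 100.0                                   # accelerometer sampling rate (placeholder)
        t = np.arange(0, 10, 1 / fs)
        imf = np.sin(2 * np.pi * 2 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.2 * t))
        features = hilbert_features(imf, fs)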

  5. Wearable Sensor-Based Human Activity Recognition Method with Multi-Features Extracted from Hilbert-Huang Transform.

    PubMed

    Xu, Huile; Liu, Jinyi; Hu, Haibo; Zhang, Yi

    2016-12-02

    Wearable sensor-based human activity recognition introduces many useful applications and services in health care, rehabilitation training, elderly monitoring and many other areas of human interaction. Existing works in this field mainly focus on recognizing activities by using traditional features extracted from the Fourier transform (FT) or wavelet transform (WT). However, these signal processing approaches are suitable for a linear signal but not for a nonlinear signal. In this paper, we investigate the characteristics of the Hilbert-Huang transform (HHT) for dealing with activity data with properties such as nonlinearity and non-stationarity. A multi-feature extraction method based on HHT is then proposed to improve the effect of activity recognition. The extracted multi-features include instantaneous amplitude (IA) and instantaneous frequency (IF) by means of empirical mode decomposition (EMD), as well as instantaneous energy density (IE) and marginal spectrum (MS) derived from Hilbert spectral analysis. Experimental studies are performed to verify the proposed approach by using the PAMAP2 dataset from the University of California, Irvine for wearable sensor-based activity recognition. Moreover, the effect of combining multi-features vs. a single feature is investigated and discussed in the scenario of a dependent subject. The experimental results show that multi-feature combination can further improve the performance measures. Finally, we test the effect of multi-feature combination in the scenario of an independent subject. Our experimental results show that we achieve recall, precision, F-measure, and accuracy of 0.9337, 0.9417, 0.9353, and 0.9377 respectively, all of which are better than the results of related works.

  6. A Measurement Method for Large Parts Combining with Feature Compression Extraction and Directed Edge-Point Criterion

    PubMed Central

    Liu, Wei; Zhang, Yang; Yang, Fan; Gao, Peng; Lan, Zhiguang; Jia, Zhenyuan; Gao, Hang

    2016-01-01

    High-accuracy surface measurement of large aviation parts is a significant guarantee