Science.gov

Sample records for acoustic feature extraction

  1. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

      This routine demonstrates extraction of the ... in a CALIPSO Lidar Level 2 Vertical Feature Mask feature classification flag value. It is written in Interactive Data Language (IDL) ...

  2. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM

    NASA Astrophysics Data System (ADS)

    Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.

    2015-01-01

    This work addresses the problem of lung sound classification, in particular, the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step to associate lung sounds with an abnormal state of the respiratory system, usually associated with tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification, which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content from the original signals. The evaluated methods were: Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform (WPT) bank of filters, and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used, which, in each fold, chooses as validation set a pair of cases, one including normal sounds and the other including wheezing sounds. Experimental results were reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, C-weighted SVM and MFCC, achieve a balanced accuracy of 82.1%, the best result reported for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem even using the same feature extraction methods.
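
    A minimal sketch of the kind of pipeline described above (MFCC features plus a class-weighted SVM), assuming the Python libraries librosa and scikit-learn; the file names, labels, sample rate, and summary statistics are illustrative assumptions, not the authors' configuration.

        # Sketch: MFCC summary features + class-weighted SVM for wheeze vs. normal sounds.
        import numpy as np
        import librosa
        from sklearn.svm import SVC

        def mfcc_features(path, sr=8000, n_mfcc=13):
            y, sr = librosa.load(path, sr=sr)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            # Summarize frame-wise coefficients with per-coefficient mean and std.
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        # Hypothetical recordings and labels (1 = wheeze, 0 = normal).
        X = np.array([mfcc_features(p) for p in ["normal_01.wav", "wheeze_01.wav"]])
        y = np.array([0, 1])

        # class_weight="balanced" plays the role of the C-weighting for unbalanced data.
        clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)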

  3. Adaptive feature extraction expert

    SciTech Connect

    Yuschik, M.

    1983-01-01

    The identification of discriminatory features places an upper bound on the recognition rate of any automatic speech recognition (ASR) system. One way to structure the extraction of features is to construct an expert system which applies a set of rules to identify particular properties of the speech patterns. However, these patterns vary for an individual speaker and from speaker to speaker so that another expert is actually needed to learn the new variations. The author investigates the problem by using sets of discriminatory features that are suggested by a feature generation expert, improves the selectivity of these features with a training expert, and finally develops a minimally spanning feature set with a statistical selection expert.

  4. Feature based passive acoustic detection of underwater threats

    NASA Astrophysics Data System (ADS)

    Stolkin, Rustam; Sutin, Alexander; Radhakrishnan, Sreeram; Bruno, Michael; Fullerton, Brian; Ekimov, Alexander; Raftery, Michael

    2006-05-01

    Stevens Institute of Technology is performing research aimed at determining the acoustical parameters that are necessary for detecting and classifying underwater threats. This paper specifically addresses the problems of passive acoustic detection of small targets in noisy urban river and harbor environments. We describe experiments to determine the acoustic signatures of these threats and the background acoustic noise. Based on these measurements, we present an algorithm for robustly discriminating threat presence from severe acoustic background noise. Measurements of the target's acoustic radiation signal were conducted in the Hudson River. The acoustic noise in the Hudson River was also recorded for various environmental conditions. A useful discriminating feature can be extracted from the acoustic signal of the threat, calculated by detecting packets of multi-spectral high frequency sound which occur repetitively at low frequency intervals. We use experimental data to show how the feature varies with range between the sensor and the detected underwater threat. We also estimate the effective detection range by evaluating this feature for hydrophone signals, recorded in the river both with and without threat presence.
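
    A minimal sketch of one way a feature of this kind could be computed, assuming a generic chain of high-frequency band-pass filtering, envelope extraction, and autocorrelation at low-frequency lags; the band edges, lag range, sampling rate, and the random stand-in record are illustrative assumptions, not the paper's values.

        # Sketch: strength of low-frequency repetition in the high-frequency envelope of a hydrophone signal.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def repetition_feature(x, fs, band=(20e3, 40e3), lag_range=(0.05, 0.5)):
            b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
            env = np.abs(hilbert(filtfilt(b, a, x)))       # envelope of high-frequency packets
            env = env - env.mean()
            spec = np.fft.rfft(env, n=2 * len(env))
            ac = np.fft.irfft(spec * np.conj(spec))[:len(env)]
            ac /= ac[0] + 1e-12                            # normalized autocorrelation via FFT
            lo, hi = (int(t * fs) for t in lag_range)      # plausible repetition lags
            return ac[lo:hi].max()                         # strong peak -> repetitive packets

        fs = 100_000
        record = np.random.randn(fs)                       # stand-in one-second hydrophone record
        print(repetition_feature(record, fs))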

  5. Feature extraction through LOCOCODE.

    PubMed

    Hochreiter, S; Schmidhuber, J

    1999-04-01

    Low-complexity coding and decoding (LOCOCODE) is a novel approach to sensory coding and unsupervised learning. Unlike previous methods, it explicitly takes into account the information-theoretic complexity of the code generator. It computes lococodes that convey information about the input data and can be computed and decoded by low-complexity mappings. We implement LOCOCODE by training autoassociators with flat minimum search, a recent, general method for discovering low-complexity neural nets. It turns out that this approach can unmix an unknown number of independent data sources by extracting a minimal number of low-complexity features necessary for representing the data. Experiments show that unlike codes obtained with standard autoencoders, lococodes are based on feature detectors, never unstructured, usually sparse, and sometimes factorial or local (depending on statistical properties of the data). Although LOCOCODE is not explicitly designed to enforce sparse or factorial codes, it extracts optimal codes for difficult versions of the "bars" benchmark problem, whereas independent component analysis (ICA) and principal component analysis (PCA) do not. It produces familiar, biologically plausible feature detectors when applied to real-world images and codes with fewer bits per pixel than ICA and PCA. Unlike ICA, it does not need to know the number of independent sources. As a preprocessor for a vowel recognition benchmark problem, it sets the stage for excellent classification performance. Our results reveal an interesting, previously ignored connection between two important fields: regularizer research and ICA-related research. They may represent a first step toward unification of regularization and unsupervised learning.

  6. Adding articulatory features to acoustic features for automatic speech recognition

    SciTech Connect

    Zlokarnik, I.

    1995-05-01

    A hidden-Markov-model (HMM) based speech recognition system was evaluated that makes use of simultaneously recorded acoustic and articulatory data. The articulatory measurements were gathered by means of electromagnetic articulography and describe the movement of small coils fixed to the speakers' tongue and jaw during the production of German V₁CV₂ sequences [P. Hoole and S. Gfoerer, J. Acoust. Soc. Am. Suppl. 1, 87, S123 (1990)]. Using the coordinates of the coil positions as an articulatory representation, acoustic and articulatory features were combined to make up an acoustic-articulatory feature vector. The discriminant power of this combined representation was evaluated for two subjects on a speaker-dependent isolated word recognition task. When the articulatory measurements were used both for training and testing the HMMs, the articulatory representation was capable of reducing the error rate of comparable acoustic-based HMMs by a relative percentage of more than 60%. In a separate experiment, the articulatory movements during the testing phase were estimated using a multilayer perceptron that performed an acoustic-to-articulatory mapping. Under these more realistic conditions, when articulatory measurements are only available during the training, the error rate could be reduced by a relative percentage of 18% to 25%.

  7. The acoustic features of human laughter

    NASA Astrophysics Data System (ADS)

    Bachorowski, Jo-Anne; Owren, Michael J.

    2002-05-01

    Remarkably little is known about the acoustic features of laughter, despite laughter's ubiquitous role in human vocal communication. Outcomes are described for 1024 naturally produced laugh bouts recorded from 97 young adults. Acoustic analysis focused on temporal characteristics, production modes, source- and filter-related effects, and indexical cues to laugher sex and individual identity. The results indicate that laughter is a remarkably complex vocal signal, with evident diversity in both production modes and fundamental frequency characteristics. Also of interest was finding a consistent lack of articulation effects in supralaryngeal filtering. Outcomes are compared to previously advanced hypotheses and conjectures about this species-typical vocal signal.

  8. Recursive Feature Extraction in Graphs

    SciTech Connect

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a csv file and the output is a csv file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
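
    A minimal sketch of recursive feature extraction in the spirit described above, assuming the Python library networkx; the base features, aggregation functions, and number of recursion levels are illustrative assumptions rather than ReFeX's exact definitions.

        # Sketch: local features per node, then recursive means/sums of neighbors' features.
        import numpy as np
        import networkx as nx

        def recursive_features(G, levels=2):
            nodes = list(G.nodes())
            # Base features: degree and number of edges inside the 1-hop egonet.
            feats = {n: [G.degree(n), nx.ego_graph(G, n).number_of_edges()] for n in nodes}
            for _ in range(levels):
                new = {}
                for n in nodes:
                    nbrs = list(G.neighbors(n))
                    if nbrs:
                        m = np.array([feats[v] for v in nbrs])
                        agg = list(m.mean(axis=0)) + list(m.sum(axis=0))
                    else:
                        agg = [0.0] * (2 * len(feats[n]))
                    new[n] = feats[n] + agg      # append recursive summaries
                feats = new
            return feats

        features = recursive_features(nx.karate_club_graph())   # node -> feature vector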

  9. Acoustic sensor array extracts physiology during movement

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.

    2001-08-01

    An acoustic sensor attached to a person's neck can extract heart and breath sounds, as well as voice and other physiology related to their health and performance. Soldiers, firefighters, law enforcement, and rescue personnel, as well as people at home or in health care facilities, can benefit from being remotely monitored. ARL's acoustic sensor, when worn around a person's neck, picks up the carotid artery and breath sounds very well by matching the sensor's acoustic impedance to that of the body via a gel pad, while airborne noise is minimized by an impedance mismatch. Although the physiological sounds have high SNR, the acoustic sensor also responds to motion-induced artifacts that obscure the meaningful physiology. Complicating signal extraction, these interfering signals are usually covariant with the heart sounds, in that as a person walks faster the heart tends to beat faster, and motion noises tend to contain low-frequency components similar to the heart sounds. A noise-canceling configuration developed by ARL uses two acoustic sensors on the front sides of the neck as physiology sensors, and two additional acoustic sensors on the back sides of the neck as noise references. Breath and heart sounds, which occur with near symmetry and simultaneously at the two front sensors, will correlate well. The motion noise present on all four sensors will be used to cancel the noise on the two physiology sensors. This report will compare heart rate variability derived from both the acoustic array and from ECG data taken simultaneously on a treadmill test. Acoustically derived breath rate and volume approximations will be introduced as well. A miniature 3-axis accelerometer on the same neckband provides additional noise references to validate footfall and motion activity.
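
    A minimal sketch of a generic least-mean-squares (LMS) noise canceller of the kind that could combine a physiology channel with a noise-reference channel; this is not the ARL algorithm, and the tap count, step size, and synthetic signals are assumptions.

        # Sketch: subtract an adaptive estimate of reference-correlated noise from the primary channel.
        import numpy as np

        def lms_cancel(primary, reference, n_taps=32, mu=0.01):
            w = np.zeros(n_taps)
            out = np.zeros_like(primary)
            for k in range(n_taps, len(primary)):
                x = reference[k - n_taps:k][::-1]   # most recent reference samples
                noise_est = w @ x                   # estimated motion noise in the primary channel
                out[k] = primary[k] - noise_est     # cleaned physiology signal
                w += 2 * mu * out[k] * x            # LMS weight update
            return out

        rng = np.random.default_rng(0)
        noise = rng.standard_normal(4000)
        heart = np.sin(2 * np.pi * 1.2 * np.arange(4000) / 500)   # stand-in heart signal
        cleaned = lms_cancel(heart + 0.5 * noise, noise)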

  10. Extracting changes in air temperature using acoustic coda phase delays.

    PubMed

    Marcillo, Omar; Arrowsmith, Stephen; Whitaker, Rod; Morton, Emily; Scott Phillips, W

    2014-10-01

    Blast waves produced by 60 high-explosive detonations were recorded at short distances (a few hundred meters); the corresponding waveforms show charge-configuration-independent coda-like features (i.e., similar shapes, amplitudes, and phases) lasting several seconds. These features are modeled as waves reflected and/or scattered by acoustic reflectors/scatterers surrounding the explosions. Using explosion pairs, relative coda phase delays are extracted and modeled as changes in sound speed due to changes in air temperature. Measurements from nearby weather towers are used for validation. PMID:25324115
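
    A minimal sketch of the underlying relationship, assuming the standard dry-air dependence of sound speed on temperature: a relative coda travel-time change maps to a sound-speed change and hence to a temperature change. The travel time, delay, and reference temperature below are illustrative numbers, not values from the paper.

        # Sketch: map a fractional coda delay to an air-temperature estimate.
        import numpy as np

        def sound_speed(temp_c):
            return 331.3 * np.sqrt(1.0 + temp_c / 273.15)   # m/s, dry-air approximation

        t_travel = 2.0                    # s, nominal coda travel time (assumed)
        dt = -0.004                       # s, measured relative phase delay (assumed; earlier arrival)
        c0 = sound_speed(20.0)            # reference speed at 20 degrees C
        c1 = c0 * (1.0 - dt / t_travel)   # dt/t = -dc/c for a fixed path length
        temp_est = 273.15 * ((c1 / 331.3) ** 2 - 1.0)
        print(round(temp_est, 2), "degrees C")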

  11. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  12. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.

  13. Feature Extraction Based on Decision Boundaries

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    In this paper, a novel approach to feature extraction for classification is proposed based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly informative feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises by noting that usually only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) It predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.

  14. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently for biometric and multimedia information retrieval systems. This technology is attained from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as a feature extraction method itself for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a certain number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
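
    A minimal sketch of a per-frame probability-distribution feature of the general kind the paper describes, assuming framing and histogram parameters that are purely illustrative.

        # Sketch: normalized histogram (discrete PDF) of sample amplitudes per frame.
        import numpy as np

        def pdf_features(signal, frame_len=400, hop=200, bins=32):
            feats = []
            for start in range(0, len(signal) - frame_len, hop):
                frame = signal[start:start + frame_len]
                hist, _ = np.histogram(frame, bins=bins, range=(-1.0, 1.0), density=True)
                feats.append(hist / (hist.sum() + 1e-12))   # normalize to a discrete PDF
            return np.array(feats)                          # one PDF per frame, for plotting/comparison

        voice = np.random.uniform(-1, 1, 16000)             # stand-in voice signal
        pdfs = pdf_features(voice)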

  15. Guidance in feature extraction to resolve uncertainty

    NASA Astrophysics Data System (ADS)

    Kovalerchuk, Boris; Kovalerchuk, Michael; Streltsov, Simon; Best, Matthew

    2013-05-01

    Automated Feature Extraction (AFE) plays a critical role in image understanding. Often the imagery analysts extract features better than AFE algorithms do, because analysts use additional information. The extraction and processing of this information can be more complex than the original AFE task, and that leads to the "complexity trap". This can happen when the shadow from the buildings guides the extraction of buildings and roads. This work proposes an AFE algorithm to extract roads and trails by using the GMTI/GPS tracking information and older inaccurate maps of roads and trails as AFE guides.

  16. Electronic Nose Feature Extraction Methods: A Review

    PubMed Central

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-01-01

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power density spectrum (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology. PMID:26540056

  17. ECG Feature Extraction using Time Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Nair, Mahesh A.

    The proposed algorithm is a novel method for the feature extraction of ECG beats based on Wavelet Transforms. A combination of two well-accepted methods, Pan Tompkins algorithm and Wavelet decomposition, this system is implemented with the help of MATLAB. The focus of this work is to implement the algorithm, which can extract the features of ECG beats with high accuracy. The performance of this system is evaluated in a pilot study using the MIT-BIH Arrhythmia database.
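
    A minimal sketch combining a Pan-Tompkins-style R-peak detector with wavelet decomposition of each beat, assuming the Python libraries scipy and PyWavelets; the filter band, window lengths, wavelet, and subband statistics are illustrative assumptions rather than the system's exact settings.

        # Sketch: band-pass, derivative, squaring, integration, peak picking; then DWT per beat.
        import numpy as np
        import pywt
        from scipy.signal import butter, filtfilt, find_peaks

        def detect_r_peaks(ecg, fs=360):
            b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
            x = filtfilt(b, a, ecg)                              # band-pass around the QRS energy
            x = np.diff(x, prepend=x[0]) ** 2                    # derivative + squaring
            win = int(0.15 * fs)
            x = np.convolve(x, np.ones(win) / win, mode="same")  # moving-window integration
            peaks, _ = find_peaks(x, distance=int(0.25 * fs), height=x.mean())
            return peaks

        def beat_features(ecg, peaks, fs=360, half=0.3):
            w = int(half * fs)
            feats = []
            for p in peaks:
                if w <= p <= len(ecg) - w:
                    coeffs = pywt.wavedec(ecg[p - w:p + w], "db4", level=4)
                    feats.append([np.std(c) for c in coeffs])    # one energy-like value per subband
            return np.array(feats)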

  18. Error margin analysis for feature gene extraction

    PubMed Central

    2010-01-01

    Background Feature gene extraction is a fundamental issue in microarray-based biomarker discovery. It is normally treated as an optimization problem of finding the best predictive feature genes that can effectively and stably discriminate distinct types of disease conditions, e.g. tumors and normals. Since gene microarray data normally involve thousands of genes and tens or hundreds of samples, the gene extraction process may fall into local optima if the gene set is optimized according to the maximization of classification accuracy of the classifier built from it. Results In this paper, we propose a novel gene extraction method of error margin analysis to optimize the feature genes. The proposed algorithm has been tested upon one synthetic dataset and two real microarray datasets. Meanwhile, it has been compared with five existing gene extraction algorithms on each dataset. On the synthetic dataset, the results show that the feature set extracted by our algorithm is the closest to the actual gene set. For the two real datasets, our algorithm is superior in terms of balancing the size and the validation accuracy of the resultant gene set when compared to other algorithms. Conclusion Because of its distinct features, the error margin analysis method can stably extract the relevant feature genes from microarray data for high-performance classification. PMID:20459827

  19. Facial Feature Extraction Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Viet

    Facial feature extraction is one of the most important processes in face recognition, expression recognition and face detection. The aims of facial feature extraction are eye location, shape of eyes, eye brow, mouth, head boundary, face boundary, chin and so on. The purpose of this paper is to develop an automatic facial feature extraction system, which is able to identify the eye location, the detailed shape of eyes and mouth, chin and inner boundary from facial images. This system not only extracts the location information of the eyes, but also estimates four important points in each eye, which helps us to rebuild the eye shape. To model the mouth shape, mouth extraction gives us the mouth location, the two corners of the mouth, and the top and bottom lips. From the inner boundary and the chin, we obtain the face boundary. Based on wavelet features, we can reduce the noise from the input image and detect edge information. In order to extract the eyes, mouth and inner boundary, we combine wavelet features and facial characteristics to design algorithms for finding the midpoint, eye coordinates, four important eye points, mouth coordinates, four important mouth points, the chin coordinate and then the inner boundary. The developed system is tested on the Yale Faces and Pedagogy students' faces.

  20. Voice-over: perceptual and acoustic analysis of vocal features.

    PubMed

    Medrado, Reny; Ferreira, Leslie Piccolotto; Behlau, Mara

    2005-09-01

    Voice-overs are professional voice users who use their voices to market products in the electronic media. The purposes of this study were to (1) analyze voice-overed and non-voice-overed productions of an advertising text in two groups consisting of 10 male professional voice-overs and 10 male non-voice-overs; and (2) determine specific acoustic features of voice-over productions in both groups. A naïve group of listeners was engaged for the perceptual analysis of the recorded advertising text. The voice-overed production samples from both groups were submitted for analysis of acoustic and temporal features. The following parameters were analyzed: (1) the total text length, (2) the length of the three emphatic pauses, (3) the mean, (4) minimum, and (5) maximum fundamental frequency, and (6) the semitone range. The majority of voice-overs and non-voice-overs were correctly identified by the listeners in both productions. However, voice-overs were more consistently correctly identified than non-voice-overs. The total text length was greater for voice-overs. The pause time distribution was statistically more homogeneous for the voice-overs. The acoustic analysis indicated that the voice-overs had lower values of mean, minimum, and maximum fundamental frequency and a greater range of semitones. The voice-overs carry the voice-overed production features over to their non-voice-overed production. PMID:16102662

  1. Large datasets: Segmentation, feature extraction, and compression

    SciTech Connect

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  2. Acoustic feature recognition in the dogbane tiger moth, Cycnia tenera.

    PubMed

    Fullard, James H; Ratcliffe, John M; Christie, Christopher G

    2007-07-01

    Certain tiger moths (Arctiidae) defend themselves against bats by phonoresponding to their echolocation calls with trains of ultrasonic clicks. The dogbane tiger moth, Cycnia tenera, preferentially phonoresponds to the calls produced by attacking versus searching bats, suggesting that it either recognizes some acoustic feature of this phase of the bat's echolocation calls or that it simply reacts to their increased power as the bat closes. Here, we used a habituation/generalization paradigm to demonstrate that C. tenera responds neither to the shift in echolocation call frequencies nor to the change in pulse duration that is exhibited during the bat's attack phase unless these changes are accompanied by either an increase in duty cycle or a decrease in pulse period. To separate these features, we measured the moth's phonoresponse thresholds to pulsed stimuli with variable versus constant duty cycles and demonstrate that C. tenera is most sensitive to echolocation call periods expressed by an attacking bat. We suggest that, under natural conditions, C. tenera identifies an attacking bat by recognizing the pulse period of its echolocation calls but that this feature recognition is influenced by acoustic power and can be overridden by unnaturally intense sounds.

  3. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered are sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a comparison method of multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must consider not only the sensitivity of the features being used, but also the correlation of the parameters being compared.

  4. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling such a bottleneck problem, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Adapting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
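
    A minimal sketch of the envelope step described above (DWT plus Shannon energy), assuming the PyWavelets library; the wavelet, level, kept subband, and smoothing window are illustrative assumptions.

        # Sketch: wavelet-filtered heart sound -> normalized Shannon energy envelope.
        import numpy as np
        import pywt

        def shannon_envelope(hs, wavelet="db6", level=4, smooth=50):
            coeffs = pywt.wavedec(hs, wavelet, level=level)
            # Keep only the approximation band as a crude low-frequency HS/murmur component.
            approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], wavelet)
            x = approx / (np.max(np.abs(approx)) + 1e-12)
            energy = -x ** 2 * np.log(x ** 2 + 1e-12)            # Shannon energy
            kernel = np.ones(smooth) / smooth
            return np.convolve(energy, kernel, mode="same")      # smoothed envelope

        envelope = shannon_envelope(np.random.randn(4000))       # stand-in heart sound record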

  5. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) Shocks, (2) Vortex cores, (3) Regions of recirculation, (4) Boundary layers, (5) Wakes. Three papers and an initial specification for the FX (Fluid eXtraction tool kit) Programmer's guide were included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.

  6. Online Feature Extraction Algorithms for Data Streams

    NASA Astrophysics Data System (ADS)

    Ozawa, Seiichi

    Along with the development of network technology and high-performance small devices such as surveillance cameras and smart phones, various kinds of multimodal information (texts, images, sound, etc.) are captured in real time and shared among systems through networks. Such information is given to a system as a stream of data. In a person identification system based on face recognition, for example, image frames of a face are captured by a video camera and given to the system for identification purposes. Those face images are considered a stream of data. Therefore, in order to identify a person more accurately under realistic environments, a high-performance feature extraction method for streaming data, which can be autonomously adapted to changes in the data distribution, is needed. In this review paper, we discuss recent trends in online feature extraction for streaming data. A variety of feature extraction methods for streaming data have been proposed recently. Due to space limitations, we focus here on incremental principal component analysis.
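
    A minimal sketch of the incremental principal component analysis idea the review focuses on, assuming scikit-learn's IncrementalPCA; the dimensions, batch size, and random data are illustrative.

        # Sketch: update a PCA eigenspace online as mini-batches of a data stream arrive.
        import numpy as np
        from sklearn.decomposition import IncrementalPCA

        ipca = IncrementalPCA(n_components=16)
        for _ in range(100):                         # simulate a stream of mini-batches
            batch = np.random.randn(64, 256)         # e.g., 64 face-image feature vectors
            ipca.partial_fit(batch)                  # incremental eigenspace update

        z = ipca.transform(np.random.randn(1, 256))  # project a newly arrived sample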

  7. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary images has already been acquired and much more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data that often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and can be applied to arbitrary planetary images.

  8. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for secondary flow. This paper will present a definition for secondary flow and one approach for automatically detecting and visualizing secondary flow.

  9. Observed features of acoustic gravity waves in the heterosphere

    NASA Astrophysics Data System (ADS)

    Fedorenko, A. K.; Kryuchkov, E. I.

    2014-01-01

    According to measurements on the Dynamics Explorer 2 satellite, features of the propagation of acoustic gravity waves (AGWs) in the multicomponent upper atmosphere have been investigated. In the altitude range 250-400 km, amplitude and phase differences have been observed in the wave-induced concentration variations of several atmospheric gases. Using the approach proposed in this paper, the AGW variations in different gases have been divided into components associated with elastic compression, adiabatic expansion, and the vertical background distribution. The amplitude and phase differences observed in different gases are explained on the basis of analyzing these components. It is shown how to use this effect in order to determine the wave propagation, the vertical displacement of the volume element, the wave frequency, and the spatial distribution of the wave energy density.

  10. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  11. Correlation metric for generalized feature extraction.

    PubMed

    Fu, Yun; Yan, Shuicheng; Huang, Thomas S

    2008-12-01

    Beyond conventional linear and kernel-based feature extraction, we propose in this paper the generalized feature extraction formulation based on the so-called Graph Embedding framework. Two novel correlation-metric-based algorithms are presented based on this formulation. Correlation Embedding Analysis (CEA), which incorporates both correlational mapping and discriminating analysis, boosts the discriminating power by mapping data from a high-dimensional hypersphere onto another low-dimensional hypersphere and preserving the intrinsic neighbor relations with local graph modeling. Correlational Principal Component Analysis (CPCA) generalizes the conventional Principal Component Analysis (PCA) algorithm to the case with data distributed on a high-dimensional hypersphere. Their advantages stem from two facts: 1) they are tailored to normalized data, which are often the outputs of the data preprocessing step, and 2) they are designed directly with a correlation metric, which is shown to be generally better than Euclidean distance for classification purposes. Extensive comparisons with existing algorithms on visual classification experiments demonstrate the effectiveness of the proposed algorithms. PMID:18988954

  12. Flow and acoustic features of a supersonic tapered nozzle

    NASA Astrophysics Data System (ADS)

    Gutmark, E.; Bowman, H. L.; Schadow, K. C.

    1992-05-01

    The acoustic and flow characteristics of a supersonic tapered jet were measured for free and shrouded flow configurations. Measurements were performed for a full range of pressure ratios including over- and underexpanded and design conditions. The supersonic tapered jet issues from a converging-diverging nozzle with a 3:1 rectangular slotted throat and a conical diverging section leading to a circular exit. The jet was compared to circular and rectangular supersonic jets operating at identical conditions. The distinct feature of the jet is the absence of screech tones over the entire range of operation. Its near-field pressure fluctuations have a wide-band spectrum over the entire range of measurements, for Mach numbers of 1 to 2.5, for over- and underexpanded conditions. The free jet's spreading rate is nearly constant and similar to the rectangular jet, and in a shroud, the pressure drop it induces is linearly proportional to the primary jet Mach number. This behavior persisted in high adverse pressure gradients at overexpanded conditions, and with nozzle divergence angles of up to 35°, no inside flow separation was observed.

  13. Classification trees with neural network feature extraction.

    PubMed

    Guo, H; Gelfand, S B

    1992-01-01

    The use of small multilayer nets at the decision nodes of a binary classification tree to extract nonlinear features is proposed. The nets are trained and the tree is grown using a gradient-type learning algorithm in the multiclass case. The method improves on standard classification tree design methods in that it generally produces trees with lower error rates and fewer nodes. It also reduces the problems associated with training large unstructured nets and transfers the problem of selecting the size of the net to the simpler problem of finding a tree of the right size. An efficient tree pruning algorithm is proposed for this purpose. Trees constructed with the method and the CART method are compared on a waveform recognition problem and a handwritten character recognition problem. The approach demonstrates a significant decrease in error rate and tree size. It also yields comparable error rates and shorter training times than a large multilayer net trained with backpropagation on the same problems.

  14. Waveform feature extraction based on tauberian approximation.

    PubMed

    De Figueiredo, R J; Hu, C L

    1982-02-01

    A technique is presented for feature extraction of a waveform y based on its Tauberian approximation, that is, on the approximation of y by a linear combination of appropriately delayed versions of a single basis function x, i.e., y(t) = Σ_{i=1}^{M} a_i x(t − τ_i), where the coefficients a_i and the delays τ_i are adjustable parameters. Considerations in the choice or design of the basis function x are given. The parameters a_i and τ_i, i = 1, ..., M, are retrieved by application of a suitably adapted version of Prony's method to the Fourier transform of the above approximation of y. A subset of the parameters a_i and τ_i, i = 1, ..., M, is used to construct the feature vector, the value of which can be used in a classification algorithm. Application of this technique to the classification of wide bandwidth radar return signatures is presented. Computer simulations proved successful and are also discussed.

  15. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: Shocks; Vortex Cores; Regions of Recirculation; Boundary Layers; Wakes.

  16. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    NASA Astrophysics Data System (ADS)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be degraded with ease by various factors. However, normal-hearing listeners can accurately perceive the sounds of interest to them, which is believed to be a result of Auditory Scene Analysis (ASA). As a first attempt, the simulation of human auditory processing, called computational auditory scene analysis (CASA), was fulfilled through physiological and psychological investigations of ASA. The CASA front end comprised the Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords with various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from an acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications contain the introduction of a higher Q factor, a middle ear filter more analogous to the human auditory system

  17. Normal mode extraction and environmental inversion from underwater acoustic data

    NASA Astrophysics Data System (ADS)

    Neilsen, Tracianne Beesley

    2000-11-01

    The normal modes of acoustic propagation in the shallow ocean are extracted from sound recorded on a vertical line array (VLA) of hydrophones as a source travels nearby, and the extracted modes are used to invert for the environmental properties of the ocean. The mode extraction is accomplished by performing a singular value decomposition (SVD) of individual frequency components of the signal's temporally-averaged, spatial cross-spectral density matrix. The SVD produces a matrix containing a mutually orthogonal set of basis functions, which are proportional to the depth-dependent normal modes, and a diagonal matrix containing the singular values, which are proportional to the modal source excitations and mode eigenvalues. The extracted modes exist in the ocean at the time the signal is recorded and thus may be used to estimate the sound speed profile and bottom properties. The inversion scheme iteratively refines the environmental parameters using a Levenberg-Marquardt algorithm such that the modeled modes approach the data-extracted modes. Simulations are performed to examine the robustness and practicality of the mode extraction and inversion techniques. Experimental data measured in the Hudson Canyon Area of the New Jersey Shelf are analyzed, and modes are successfully extracted at the frequencies of a towed source. Modes are also extracted from ambient noise recorded on the VLA during the experiment. Using data-extracted modes, inverted values of the water depth, the thickness of a thin first sediment layer, and the compressional sound speed at the top of the first layer are found to be in good agreement with historical values. The density, attenuation, and properties of the second layer are not well determined because the inversion method is only able to obtain reliable values for the parameters that influence the mode shapes in the water column.
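
    A minimal sketch of the mode-extraction step described above, assuming NumPy: average an outer-product cross-spectral density matrix over snapshots at one frequency, then take its SVD so the leading singular vectors approximate the depth-dependent mode shapes. The array size, snapshot count, and random data are illustrative.

        # Sketch: SVD of a temporally averaged spatial cross-spectral density matrix (CSDM).
        import numpy as np

        def extract_modes(snapshots, n_modes=4):
            # snapshots: complex array (n_snapshots, n_hydrophones) of one FFT bin.
            csdm = np.zeros((snapshots.shape[1],) * 2, dtype=complex)
            for d in snapshots:
                csdm += np.outer(d, d.conj())
            csdm /= len(snapshots)
            u, s, _ = np.linalg.svd(csdm)
            return u[:, :n_modes], s[:n_modes]       # mode-shape estimates and their strengths

        rng = np.random.default_rng(0)
        data = rng.standard_normal((200, 16)) + 1j * rng.standard_normal((200, 16))
        modes, strengths = extract_modes(data)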

  18. Gunshot acoustic signature specific features and false alarms reduction

    NASA Astrophysics Data System (ADS)

    Donzier, Alain; Millet, Joel

    2005-05-01

    This paper provides a detailed analysis of the most specific parameters of gunshot signatures through models as well as through real data. Models for the different contributions to the typical gunshot signature (shock and muzzle blast) are presented and used to discuss the variation of measured signatures over different environmental conditions and shot configurations. The analysis is followed by a description of the performance requirements for gunshot detection systems, from sniper detection, which was the main concern 10 years ago, to the new and more challenging conditions faced in today's operations. The work presented examines how systems are deployed and used, as well as how the operational environment has changed. The main sources of false alarms and new threats such as RPGs and mortars that acoustic gunshot detection systems have to face today are also defined and discussed. Finally, different strategies for reducing false alarms are proposed based on the acoustic signatures. Different strategies are presented through various examples of specific missions ranging from vehicle protection to area protection. These strategies not only include recommendations on how to handle acoustic information for the best efficiency of the acoustic detector, but also recommend some add-on sensors to enhance overall system performance.

  19. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.

  20. A gearbox fault diagnosis scheme based on near-field acoustic holography and spatial distribution features of sound field

    NASA Astrophysics Data System (ADS)

    Lu, Wenbo; Jiang, Weikang; Yuan, Guoqing; Yan, Li

    2013-05-01

    Vibration signal analysis is the main technique in machine condition monitoring or fault diagnosis, whereas in some cases vibration-based diagnosis is limited because it requires contact measurement. Acoustic-based diagnosis (ABD) with non-contact measurement has received little attention, although the sound field may contain abundant information related to the fault pattern. A new scheme of ABD for gearboxes based on near-field acoustic holography (NAH) and spatial distribution features of the sound field is presented in this paper. It focuses on applying distribution information of the sound field to gearbox fault diagnosis. A two-stage industrial helical gearbox is experimentally studied in a semi-anechoic chamber and a lab workshop, respectively. Firstly, multi-class faults (mild pitting, moderate pitting, severe pitting and tooth breakage) are simulated, respectively. Secondly, sound fields and corresponding acoustic images in different gearbox running conditions are obtained by fast Fourier transform (FFT) based NAH. Thirdly, by introducing texture analysis to fault diagnosis, spatial distribution features are extracted from the acoustic images to capture fault patterns underlying the sound field. Finally, the features are fed into a multi-class support vector machine for fault pattern identification. The feasibility and effectiveness of the proposed scheme are demonstrated by the good experimental results and the comparison with a traditional ABD method. Even with strong noise interference, the spatial distribution features of the sound field can reliably reveal the fault patterns of the gearbox, and thus satisfactory accuracy can be obtained. The combination of histogram features and gray level gradient co-occurrence matrix features is suggested for good diagnosis accuracy and low time cost.
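
    A minimal sketch of the texture-analysis step, assuming scikit-image's gray-level co-occurrence matrix utilities (named greycomatrix/greycoprops in older releases); the distances, angles, histogram bins, and the random stand-in image are illustrative assumptions.

        # Sketch: GLCM statistics + gray-level histogram as a feature vector for an acoustic image.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def acoustic_image_features(img_u8):
            glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            stats = [graycoprops(glcm, p).mean() for p in
                     ("contrast", "homogeneity", "energy", "correlation")]
            hist, _ = np.histogram(img_u8, bins=16, range=(0, 255), density=True)
            return np.array(stats + list(hist))

        img = (np.random.rand(64, 64) * 255).astype(np.uint8)    # stand-in acoustic image
        x = acoustic_image_features(img)                         # one vector per running condition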

  1. Analysis of Acoustic Features in Speakers with Cognitive Disorders and Speech Impairments

    NASA Astrophysics Data System (ADS)

    Saz, Oscar; Simón, Javier; Rodríguez, W. Ricardo; Lleida, Eduardo; Vaquero, Carlos

    2009-12-01

    This work presents the results of the analysis of the acoustic features (formants and the three suprasegmental features: tone, intensity and duration) of the vowel production in a group of 14 young speakers suffering different kinds of speech impairments due to physical and cognitive disorders. A corpus of unimpaired children's speech is used to determine the reference values for these features in speakers without any kind of speech impairment within the same domain as the impaired speakers, namely 57 isolated words. The signal processing to extract the formant and pitch values is based on a Linear Prediction Coefficients (LPC) analysis of the segments considered as vowels in a Hidden Markov Model (HMM) based Viterbi forced alignment. Intensity and duration are also based on the outcome of the automated segmentation. As the main conclusion of the work, it is shown that intelligibility of the vowel production is lowered in impaired speakers even when the vowel is perceived as correct by human labelers. The decrease in intelligibility is due to a 30% increase in confusability in the formant map, a 50% reduction in the discriminative power in energy between stressed and unstressed vowels, and a 50% increase in the standard deviation of the length of the vowels. On the other hand, impaired speakers keep good control of tone in the production of stressed and unstressed vowels.
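
    A minimal sketch of LPC-based formant estimation in the spirit of the analysis above, assuming the Python library librosa for the LPC fit; the sample rate, LPC order, and the synthetic "vowel" segment are illustrative assumptions.

        # Sketch: LPC polynomial -> pole angles -> rough formant frequency estimates.
        import numpy as np
        import librosa

        def formants(segment, sr=16000, order=12):
            a = librosa.lpc(np.asarray(segment, dtype=float), order=order)
            roots = [r for r in np.roots(a) if np.imag(r) > 0]   # keep one of each conjugate pair
            freqs = sorted(np.angle(roots) * sr / (2 * np.pi))   # pole angles -> Hz
            return [f for f in freqs if f > 90][:3]              # crude F1-F3 estimate

        t = np.arange(0, 0.025, 1 / 16000)
        segment = sum(np.sin(2 * np.pi * f * t) for f in (700, 1200, 2600))   # synthetic stand-in
        print(formants(segment))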

  2. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.

  3. Munitions related feature extraction from LIDAR data.

    SciTech Connect

    Roberts, Barry L.

    2010-06-01

    The characterization of former military munitions ranges is critical in the identification of areas likely to contain residual unexploded ordnance (UXO). Although these ranges are large, often covering tens-of-thousands of acres, the actual target areas represent only a small fraction of the sites. The challenge is that many of these sites do not have records indicating locations of former target areas. The identification of target areas is critical in the characterization and remediation of these sites. The Strategic Environmental Research and Development Program (SERDP) and Environmental Security Technology Certification Program (ESTCP) of the DoD have been developing and implementing techniques for the efficient characterization of large munitions ranges. As part of this process, high-resolution LIDAR terrain data sets have been collected over several former ranges. These data sets have been shown to contain information relating to former munitions usage at these ranges, specifically terrain cratering due to high-explosives detonations. The location and relative intensity of crater features can provide information critical in reconstructing the usage history of a range, and indicate areas most likely to contain UXO. We have developed an automated procedure using an adaptation of the Circular Hough Transform for the identification of crater features in LIDAR terrain data. The Circular Hough Transform is highly adept at finding circular features (craters) in noisy terrain data sets. This technique has the ability to find features of a specific radius providing a means of filtering features based on expected scale and providing additional spatial characterization of the identified feature. This method of automated crater identification has been applied to several former munitions ranges with positive results.
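
    A minimal sketch of crater-candidate detection with a circular Hough transform, assuming scikit-image; the radius range, peak count, and the random stand-in raster are illustrative assumptions rather than the procedure's actual parameters.

        # Sketch: Canny edges -> circular Hough accumulator -> strongest circle candidates.
        import numpy as np
        from skimage.feature import canny
        from skimage.transform import hough_circle, hough_circle_peaks

        def find_craters(dem, radii=np.arange(5, 30, 2)):
            edges = canny(dem.astype(float), sigma=2.0)
            accumulator = hough_circle(edges, radii)
            _, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=20)
            return list(zip(cx, cy, r))              # candidate crater centers and radii (pixels)

        dem = np.random.rand(256, 256)               # stand-in LIDAR terrain tile
        craters = find_craters(dem)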

  4. Feature extraction from Doppler ultrasound signals for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2005-11-01

    This paper presents an assessment of feature extraction methods used in the automated diagnosis of arterial diseases. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Different feature extraction methods were used to obtain feature vectors from ophthalmic and internal carotid arterial Doppler signals. In addition, the problem of selecting relevant features among those available for the classification of Doppler signals was addressed. Multilayer perceptron neural networks (MLPNNs) with different inputs (feature vectors) were used for the diagnosis of ophthalmic and internal carotid arterial diseases. The feature extraction methods were assessed by taking into consideration the performance of the MLPNNs, evaluated in terms of convergence rate (number of training epochs) and total classification accuracy. Finally, some conclusions were drawn concerning the efficiency of the discrete wavelet transform as a feature extraction method for the diagnosis of ophthalmic and internal carotid arterial diseases. PMID:16278106
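
    A loose sketch of the wavelet-based feature extraction and MLPNN classification pipeline described above; the wavelet family, decomposition level, summary statistics and synthetic signals are assumptions for illustration, not the paper's settings.

    ```python
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def dwt_features(signal, wavelet="db4", level=4):
        """Summary statistics of DWT sub-band coefficients as a feature vector."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        feats = []
        for c in coeffs:                       # approximation + detail bands
            feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
        return np.array(feats)

    # Toy example: two classes of synthetic "Doppler-like" signals.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1024)
    make = lambda f: np.sin(2 * np.pi * f * t) + 0.3 * rng.standard_normal(t.size)
    X = np.array([dwt_features(make(f)) for f in ([5] * 40 + [20] * 40)])
    y = np.array([0] * 40 + [1] * 40)

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```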

  5. Extracting textural features from tactile sensors.

    PubMed

    Edwards, J; Lawry, J; Rossiter, J; Melhuish, C

    2008-09-01

    This paper describes an experiment to quantify texture using an artificial finger equipped with a microphone to detect frictional sound. Using a microphone to record tribological data is a biologically inspired approach that emulates the Pacinian corpuscle. Artificial surfaces were created to constrain the subsequent analysis to specific textures. Recordings of the artificial surfaces were made to create a library of frictional sounds for data analysis. These recordings were mapped to the frequency domain using fast Fourier transforms for direct comparison, manipulation and quantifiable analysis. Numerical features such as modal frequency and average value were calculated to analyze the data and compared with attributes generated from principal component analysis (PCA). It was found that numerical features work well for highly constrained data but cannot classify multiple textural elements. PCA groups textures according to a natural similarity. Classification of the recordings using k nearest neighbors shows a high accuracy for PCA data. Clustering of the PCA data shows that similar discs are grouped together with few classification errors. In contrast, clustering of numerical features produces erroneous classification by splitting discs between clusters. The temperature of the finger is shown to have a direct relation to some of the features and subsequent data in PCA. PMID:18583731
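
    A rough sketch of the frequency-domain pipeline this record describes: FFT magnitude spectra of frictional-sound frames, PCA for a compact representation, and a k-nearest-neighbour classifier. The frame length, number of components, value of k and the synthetic signals are assumed stand-ins for the record's library of recordings.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    def spectrum(frame):
        """Magnitude spectrum of one frictional-sound frame."""
        return np.abs(np.fft.rfft(frame * np.hanning(len(frame))))

    # Toy library: two "textures" represented as noisy band-limited signals.
    rng = np.random.default_rng(1)
    fs, n = 8000, 2048
    t = np.arange(n) / fs
    frames = [np.sin(2 * np.pi * f * t) + 0.5 * rng.standard_normal(n)
              for f in [300] * 50 + [900] * 50]
    labels = np.array([0] * 50 + [1] * 50)

    X = np.array([spectrum(f) for f in frames])
    X_pca = PCA(n_components=5).fit_transform(X)   # textures group in PCA space

    knn = KNeighborsClassifier(n_neighbors=3).fit(X_pca, labels)
    print("training accuracy:", knn.score(X_pca, labels))
    ```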

  6. Direct extraction of topographic features from gray scale character images

    SciTech Connect

    Seong-Whan Lee; Young Joon Kim

    1994-12-31

    Optical character recognition (OCR) is traditionally applied to binary-valued imagery, although text is always scanned and stored in gray scale. However, binarization of a multivalued image may remove important topological information from characters and introduce noise into the character background. To avoid this problem, it is indispensable to develop a method which can minimize the information loss due to binarization by extracting features directly from gray scale character images. In this paper, we propose a new method for the direct extraction of topographic features from gray scale character images. Comparison with Wang and Pavlidis's method shows that the proposed method enhances topographic feature extraction by computing the directions of principal curvature efficiently and by preventing the extraction of unnecessary features. We also show that the proposed method is very effective for gray scale skeletonization compared to Levi and Montanari's method.

  7. Extracting the Green's function between receivers using underwater acoustic noise

    NASA Astrophysics Data System (ADS)

    Roux, Philippe; Lynch, Steve; Kuperman, W. A.

    2002-11-01

    Recent experimental and theoretical works in ultrasonics show that the Green's function between transducers fastened to an aluminum sample can be measured from the correlation of thermal noise [R. L. Weaver and O. J. Lobkis, "Ultrasonics without a source. Thermal fluctuation correlations at MHz frequencies," Phys. Rev. Lett. 87, 134301 (2001)]. Similar results have been obtained in geophysics using seismic noise data [A. Paul and M. Campillo, "Extracting the Green's function between two stations from coda waves," Trans. Am. Geophys. Union 82-47, F842 (2001)]. Sources of noise in underwater acoustics range from ship noise at low frequency to surface noise and even thermal noise at very high frequencies. We theoretically demonstrate that at least an approximate Green's function can be obtained from surface noise. This result is confirmed by noise data recorded on arrays of receivers during the NPAL98 experiment. [Work supported by ONR.] The NPAL group is composed of J. A. Colosi, B. D. Cornuelle, B. D. Dushaw, M. A. Dzieciuch, B. M. Howe, J. A. Mercer, R. C. Spindel, and P. F. Worcester.

  8. Model Based Analysis of Face Images for Facial Feature Extraction

    NASA Astrophysics Data System (ADS)

    Riaz, Zahid; Mayer, Christoph; Beetz, Michael; Radig, Bernd

    This paper describes a comprehensive approach to extracting a common feature set from image sequences. We use simple features that are easily extracted from a 3D wireframe model and efficiently used for different applications on a benchmark database. The versatility of the features is examined on facial expression recognition, face recognition, and gender classification. We experiment with different combinations of the features and find reasonable results with a combined feature set containing structural, textural, and temporal variations. The idea is to fit a model to human face images and extract shape and texture information. We parametrize this extracted information from the image sequences using the active appearance model (AAM) approach. We further compute temporal parameters using optical flow to capture local feature variations. Finally, we combine these parameters to form a feature vector for all the images in our database. These features are then used with a binary decision tree (BDT) and a Bayesian Network (BN) for classification. We evaluated our results on image sequences of the Cohn-Kanade Facial Expression Database (CKFED). The proposed system produced very promising recognition rates for our applications with the same set of features and classifiers. The system is also real-time capable and automatic.

  9. A Harmonic Linear Dynamical System for Prominent ECG Feature Extraction

    PubMed Central

    Nguyen Thi, Ngoc Anh; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, the empirical evaluation results demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series. PMID:24719648

  10. A harmonic linear dynamical system for prominent ECG feature extraction.

    PubMed

    Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. To obtain efficient clustering results, the prominent features extracted by preprocessing analysis of multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The comprehensible and interpretable features discovered by the proposed feature extraction methodology effectively support the accuracy and reliability of the clustering results. In particular, the empirical evaluation results demonstrate improved clustering performance compared to previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.

  11. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  12. Fatigue features study on the crankshaft material of 42CrMo steel using acoustic emission

    NASA Astrophysics Data System (ADS)

    Shi, Yue; Dong, Lihong; Wang, Haidou; Li, Guolu; Liu, Shenshui

    2016-09-01

    The crankshaft is an important engine component and, because of its high added value, an important application of remanufacturing. However, research on the fatigue failure of remanufactured crankshafts is still at an early stage, so monitoring and investigating such failures is crucial. In this paper, acoustic emission (AE) technology and machine vision are used to monitor four-point bending fatigue of 42CrMo, the crankshaft material. The specimens are divided into two categories, with and without pre-existing cracks, which simulate the crankshaft and the crankshaft blank, respectively. Parameter-based AE analysis, the wavelet transform (WT) and SEM analysis are combined to identify the stage of fatigue failure, which is the basis for applying AE technology in the field of crankshaft remanufacturing. The experimental results show that the fatigue crack propagates as a transgranular, brittle fracture, and that the difference between the specimen categories mainly depends on the form of crack initiation. Various AE signals are detected by the parameter analysis method, and wavelet threshold denoising combined with the WT is used to extract the spectral features of AE signals at different fatigue failure stages.
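
    A minimal sketch of the wavelet threshold denoising step mentioned above, assuming a standard soft-threshold scheme (universal threshold on a 'db4' decomposition); the record does not state which wavelet or threshold rule was actually used.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=5):
        """Soft-threshold wavelet denoising with a universal threshold (assumed scheme)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Noise scale estimated from the finest detail band (median absolute deviation).
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thresh = sigma * np.sqrt(2 * np.log(len(signal)))
        denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(denoised, wavelet)[:len(signal)]

    # Example: a burst-like "AE hit" buried in noise.
    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 4096)
    burst = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 300 * t)
    noisy = burst + 0.2 * rng.standard_normal(t.size)
    clean = wavelet_denoise(noisy)
    print("std before/after denoising:", noisy.std(), clean.std())
    ```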

  13. Acoustic features of normal-hearing pre-term infant cry.

    PubMed

    Cacace, A T; Robb, M P; Saxman, J H; Risemberg, H; Koltai, P

    1995-11-01

    Acoustic features of expiratory cry vocalizations were studied in 125 pre-term infants prior to being discharged from a level-3 neonatal intensive care unit. The purpose was to describe various phonatory behaviors in infants in whom significant hearing loss could be ruled out. We also compared these results with normal-hearing full-term infants, and evaluated whether linkage exists among acoustic cry features and various anthropometric, diagnostic and treatment variables obtained throughout the peri- and neonatal periods. Our analysis revealed that cry duration was significantly related to total days receiving respiratory assistance. The occurrence of other complex spectral and temporal aspects of acoustic cry vocalizations including harmonic doubling and vibrato also increased in infants receiving some form of respiratory assistance. The presence of harmonic doubling also depended on weight and conceptional age at test. The discussion focuses on the implication of these relationships and directions for future research. PMID:8557478

  14. EEG signal features extraction based on fractal dimension.

    PubMed

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-08-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance. PMID:26737209
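
    The record does not specify which fractal-dimension estimators were used (two of its indices are novel), so purely as a generic illustration, the sketch below computes Higuchi's fractal dimension, a common fractal-based EEG feature; k_max is an assumed parameter.

    ```python
    import numpy as np

    def higuchi_fd(x, k_max=10):
        """Higuchi fractal dimension of a 1-D signal (a common EEG feature)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        lengths = []
        for k in range(1, k_max + 1):
            lk = []
            for m in range(k):
                idx = np.arange(m, n, k)
                if len(idx) < 2:
                    continue
                # Mean absolute increment along the subsampled curve, rescaled.
                dist = np.sum(np.abs(np.diff(x[idx])))
                lk.append(dist * (n - 1) / (len(idx) - 1) / k / k)
            lengths.append(np.mean(lk))
        # Slope of log(length) vs log(1/k) estimates the fractal dimension.
        k_vals = np.arange(1, k_max + 1)
        slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
        return slope

    # White noise should give a dimension near 2; a smooth sine near 1.
    rng = np.random.default_rng(3)
    print("noise:", higuchi_fd(rng.standard_normal(2000)))
    print("sine :", higuchi_fd(np.sin(np.linspace(0, 20 * np.pi, 2000))))
    ```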

  15. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  16. Image feature meaning for automatic key-frame extraction

    NASA Astrophysics Data System (ADS)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, being requested in several applications, have directed a number of research efforts towards automatic video analysis techniques. Automatic video analysis is based on the recognition of shots, short sequences of contiguous frames that describe the same scene, and of key frames representing the salient content of each shot. Since effective shot boundary detection techniques exist in the literature, in this paper we focus our attention on key frame extraction, identifying the low-level visual features of the frames that best represent the shot content. To evaluate the performance of these features, key frames automatically extracted using them are compared to human operator video annotations.

  17. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    U.S. Geological Survey

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  18. Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech

    ERIC Educational Resources Information Center

    Tyson, Na'im R.

    2012-01-01

    In an attempt to understand what acoustic/auditory feature sets motivated transcribers towards certain labeling decisions, I built machine learning models that were capable of discriminating between canonical and non-canonical vowels excised from the Buckeye Corpus. Specifically, I wanted to model when the dictionary form and the transcribed-form…

  19. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    In India, limited awareness of the needs of deaf and hard-of-hearing people widens the communication gap between the deaf and hearing communities. Sign languages have been developed so that deaf and hard-of-hearing people can convey their message by generating different sign patterns. The scale invariant feature transform (SIFT) was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian Sign Language gestures. The experimental results show the time required by each phase and the number of features extracted for 26 ISL gestures.
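
    As an illustration of SIFT keypoint extraction on a single gesture image, the sketch below uses OpenCV's built-in SIFT implementation rather than the authors' phase-by-phase implementation; the image file name is a placeholder.

    ```python
    import cv2

    # Placeholder path; substitute an actual ISL gesture image.
    image = cv2.imread("gesture.png", cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError("gesture.png not found; supply a gesture image")

    # OpenCV's SIFT detects scale- and rotation-invariant keypoints and
    # computes a 128-dimensional descriptor for each of them.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)

    print("keypoints:", len(keypoints))
    print("descriptor matrix shape:", None if descriptors is None else descriptors.shape)

    # Keypoints can be drawn for visual inspection.
    vis = cv2.drawKeypoints(image, keypoints, None,
                            flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
    cv2.imwrite("gesture_keypoints.png", vis)
    ```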

  20. Primary Progressive Apraxia of Speech: Clinical Features and Acoustic and Neurologic Correlates

    PubMed Central

    Strand, Edythe A.; Clark, Heather; Machulda, Mary; Whitwell, Jennifer L.; Josephs, Keith A.

    2015-01-01

    Purpose This study summarizes 2 illustrative cases of a neurodegenerative speech disorder, primary progressive apraxia of speech (AOS), as a vehicle for providing an overview of the disorder and an approach to describing and quantifying its perceptual features and some of its temporal acoustic attributes. Method Two individuals with primary progressive AOS underwent speech-language and neurologic evaluations on 2 occasions, ranging from 2.0 to 7.5 years postonset. Performance on several tests, tasks, and rating scales, as well as several acoustic measures, were compared over time within and between cases. Acoustic measures were compared with performance of control speakers. Results Both patients initially presented with AOS as the only or predominant sign of disease and without aphasia or dysarthria. The presenting features and temporal progression were captured in an AOS Rating Scale, an Articulation Error Score, and temporal acoustic measures of utterance duration, syllable rates per second, rates of speechlike alternating motion and sequential motion, and a pairwise variability index measure. Conclusions AOS can be the predominant manifestation of neurodegenerative disease. Clinical ratings of its attributes and acoustic measures of some of its temporal characteristics can support its diagnosis and help quantify its salient characteristics and progression over time. PMID:25654422

  1. Fast SIFT design for real-time visual feature extraction.

    PubMed

    Chiu, Liang-Chi; Chang, Tian-Sheuan; Chen, Jiun-Yen; Chang, Nelson Yen-Chung

    2013-08-01

    Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ∼ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz. PMID:23743775

  2. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  3. Alarming features: birds use specific acoustic properties to identify heterospecific alarm calls

    PubMed Central

    Fallow, Pamela M.; Pitcher, Benjamin J.; Magrath, Robert D.

    2013-01-01

    Vertebrates that eavesdrop on heterospecific alarm calls must distinguish alarms from sounds that can safely be ignored, but the mechanisms for identifying heterospecific alarm calls are poorly understood. While vertebrates learn to identify heterospecific alarms through experience, some can also respond to unfamiliar alarm calls that are acoustically similar to conspecific alarm calls. We used synthetic calls to test the role of specific acoustic properties in alarm call identification by superb fairy-wrens, Malurus cyaneus. Individuals fled more often in response to synthetic calls with peak frequencies closer to those of conspecific calls, even if other acoustic features were dissimilar to that of fairy-wren calls. Further, they then spent more time in cover following calls that had both peak frequencies and frequency modulation rates closer to natural fairy-wren means. Thus, fairy-wrens use similarity in specific acoustic properties to identify alarms and adjust a two-stage antipredator response. Our study reveals how birds respond to heterospecific alarm calls without experience, and, together with previous work using playback of natural calls, shows that both acoustic similarity and learning are important for interspecific eavesdropping. More generally, this study reconciles contrasting views on the importance of alarm signal structure and learning in recognition of heterospecific alarms. PMID:23303539

  4. Automated blood vessel extraction using local features on retinal images

    NASA Astrophysics Data System (ADS)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) features on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations among neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are not robust to rotated images, so the method was improved by additionally computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features based on 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image together with four further values: the output of the first ANN and the outputs of a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC produced clearly white values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960 in our study. The result can be used for the quantitative analysis of the blood vessels.

  5. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise removing step is only used to detect noisy parts of the iris region, and features extracted from there are excluded in the matching step, whereas, depending on the filter structure used in feature extraction, the noisy parts may also influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removing approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  6. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.

  7. Image feature extraction based multiple ant colonies cooperation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on the cooperation of multiple ant colonies. Firstly, a low resolution version of the input image is created using a Gaussian pyramid algorithm, and two ant colonies are spread on the source image and the low resolution image, respectively. The ant colony on the low resolution image uses phase congruency as its inspiration information, while the ant colony on the source image uses gradient magnitude as its inspiration information. These two ant colonies cooperate to extract salient image features by sharing the same pheromone matrix. After the optimization process, image features are detected by thresholding the pheromone matrix. Since both the gradient magnitude and the phase congruency of the input image are used as inspiration information for the ant colonies, our algorithm is capable of acquiring more complete and meaningful image features than simpler edge detectors.

  8. Feature extraction from multiple data sources using genetic programming.

    SciTech Connect

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.; Eads, D. R.; Galassi, M. C.; Harvey, N. R.; Perkins, S. J.; Porter, R. B.; Theiler, J. P.; Young, A. C.; Bloch, J. J.; David, N. A.; Esch-Mosher, D. M.

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate the evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  9. PCA feature extraction for change detection in multidimensional unlabeled data.

    PubMed

    Kuncheva, Ludmila I; Faithfull, William J

    2014-01-01

    When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
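
    A simplified sketch of the approach described above: fit PCA on reference data, keep the lowest-variance components, and monitor a statistic of new data projected onto them. The score used here is a plain normalized squared deviation rather than the semiparametric log-likelihood criterion from the record, and the number of retained components is an assumed value.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(4)
    A = rng.standard_normal((10, 10))            # fixed mixing for a correlated source

    ref = rng.standard_normal((2000, 10)) @ A    # reference ("no change") data
    pca = PCA().fit(ref)
    minor = pca.components_[-3:]                 # keep the 3 lowest-variance directions
    mu = ref.mean(axis=0)
    scale = pca.explained_variance_[-3:]

    def change_score(window):
        """Mean squared deviation along the minor components (a simple stand-in
        for the semiparametric log-likelihood criterion in the record)."""
        z = (window - mu) @ minor.T
        return float(np.mean(z ** 2 / scale))

    same = rng.standard_normal((200, 10)) @ A            # same distribution
    shifted = same + 0.5 * rng.standard_normal(10)        # mean shift -> change
    print("no change score:", change_score(same))         # close to 1
    print("changed score  :", change_score(shifted))      # noticeably larger
    ```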

  10. Features extraction in anterior and posterior cruciate ligaments analysis.

    PubMed

    Zarychta, P

    2015-12-01

    The main aim of this research is finding feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to clearly define the ligament structure and make it easier to diagnose the ligaments. The feature vectors are obtained by analysis of both the anterior and posterior cruciate ligaments, performed after the extraction process of both ligaments. In the first stage, a region of interest (ROI) including the cruciate ligaments (CL) is outlined in order to reduce the area of analysis. In this case, the fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the ROI, the fuzzy connectedness procedure is performed, which permits extraction of the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted structures, 3-dimensional models of the anterior and posterior cruciate ligament are built and the feature vectors created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK).

  11. Genetic programming approach to extracting features from remotely sensed imagery

    SciTech Connect

    Theiler, J. P.; Perkins, S. J.; Harvey, N. R.; Szymanski, J. J.; Brumby, Steven P.

    2001-01-01

    Multi-instrument data sets present an interesting challenge to feature extraction algorithm developers. Beyond the immediate problems of spatial co-registration, the remote sensing scientist must explore a complex algorithm space in which both spatial and spectral signatures may be required to identify a feature of interest. We describe a genetic programming/supervised classifier software system, called Genie, which evolves and combines spatio-spectral image processing tools for remotely sensed imagery. We describe our representation of candidate image processing pipelines, and discuss our set of primitive image operators. Our primary application has been in the field of geospatial feature extraction, including wildfire scars and general land-cover classes, using publicly available multi-spectral imagery (MSI) and hyper-spectral imagery (HSI). Here, we demonstrate our system on Landsat 7 Enhanced Thematic Mapper (ETM+) MSI. We exhibit an evolved pipeline, and discuss its operation and performance.

  12. Music-induced emotions can be predicted from a combination of brain activity and acoustic features.

    PubMed

    Daly, Ian; Williams, Duncan; Hallowell, James; Hwang, Faustina; Kirke, Alexis; Malik, Asad; Weaver, James; Miranda, Eduardo; Nasuto, Slawomir J

    2015-12-01

    It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted from their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracy than either feature type alone (p < 0.01).
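
    A rough sketch of the kind of model the record describes: a linear regression from combined EEG band-power and acoustic descriptors to an emotion rating. The feature choices (Welch band power, RMS energy, spectral centroid) and the synthetic data are assumptions for illustration; the study's actual descriptors and regression procedure are not reproduced here.

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    fs_eeg, fs_audio, n_trials = 256, 8000, 60

    def eeg_band_powers(eeg, fs):
        """Mean power in the classic delta/theta/alpha/beta bands."""
        f, pxx = welch(eeg, fs=fs, nperseg=fs * 2)
        bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
        return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

    def acoustic_descriptors(audio, fs):
        """RMS energy and spectral centroid of a music excerpt."""
        spec = np.abs(np.fft.rfft(audio))
        freqs = np.fft.rfftfreq(len(audio), 1 / fs)
        centroid = np.sum(freqs * spec) / np.sum(spec)
        return [np.sqrt(np.mean(audio ** 2)), centroid]

    # Synthetic trials standing in for recorded EEG, music excerpts and ratings.
    X, y = [], []
    for _ in range(n_trials):
        eeg = rng.standard_normal(fs_eeg * 10)
        audio = rng.standard_normal(fs_audio * 5)
        X.append(eeg_band_powers(eeg, fs_eeg) + acoustic_descriptors(audio, fs_audio))
        y.append(rng.uniform(-1, 1))                     # e.g. a valence rating
    X, y = np.array(X), np.array(y)

    model = Ridge(alpha=1.0)
    print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())
    ```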

  13. Feature Extraction and Selection From the Perspective of Explosive Detection

    SciTech Connect

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image summarizing the information content in an image and in the process providing an essential tool in image understanding. In particular, they are useful for image classification into pre-defined classes or grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, they can be temperature measurements (using an infra-red camera) of the area representing the pixel, the X-ray attenuation in a given volume element of a 3-d image, or even the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered as features in the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components, the Euler number (the number of connected components less the number of 'holes'), etc. Occupying an intermediate level in the feature hierarchy are texture features, which are typically derived from a group of pixels, often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection (typically a piece of luggage or a briefcase) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-Ray device with provisions for computed tomography (CT) that generates one or more (depending on the number of energy levels used) digitized 3

  14. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized

  15. Acoustic features of male baboon loud calls: Influences of context, age, and individuality

    NASA Astrophysics Data System (ADS)

    Fischer, Julia; Hammerschmidt, Kurt; Cheney, Dorothy L.; Seyfarth, Robert M.

    2002-03-01

    The acoustic structure of loud calls ("wahoos") recorded from free-ranging male baboons (Papio cynocephalus ursinus) in the Moremi Game Reserve, Botswana, was examined for differences between and within contexts, using calls given in response to predators (alarm wahoos), during male contests (contest wahoos), and when a male had become separated from the group (contact wahoos). Calls were recorded from adolescent, subadult, and adult males. In addition, male alarm calls were compared with those recorded from females. Despite their superficial acoustic similarity, the analysis revealed a number of significant differences between alarm, contest, and contact wahoos. Contest wahoos are given at a much higher rate, exhibit lower frequency characteristics, have a longer "hoo" duration, and a relatively louder "hoo" portion than alarm wahoos. Contact wahoos are acoustically similar to contest wahoos, but are given at a much lower rate. Both alarm and contest wahoos also exhibit significant differences among individuals. Some of the acoustic features that vary in relation to age and sex presumably reflect differences in body size, whereas others are possibly related to male stamina and endurance. The finding that calls serving markedly different functions constitute variants of the same general call type suggests that the vocal production in nonhuman primates is evolutionarily constrained.

  16. Do acoustic features of lion, Panthera leo, roars reflect sex and male condition?

    PubMed

    Pfefferle, Dana; West, Peyton M; Grinnell, Jon; Packer, Craig; Fischer, Julia

    2007-06-01

    Long distance calls function to regulate intergroup spacing, attract mating partners, and/or repel competitors. Therefore, they may provide information not only about the sex of the caller (if both sexes call) but also about the caller's condition. This paper provides a description of the acoustic features of roars recorded from 18 male and 6 female lions (Panthera leo) living in the Serengeti National Park, Tanzania. After analyzing whether these roars differ between the sexes, we tested whether male roars may function as indicators of fighting ability or condition. To this end, call characteristics were tested for relation to anatomical features such as size, mane color, or mane length. Call characteristics included acoustic parameters that have previously been implicated as indicators of size and fighting ability, e.g., call length, fundamental frequency, and peak frequency. The analysis revealed differences in relation to sex, which were entirely explained by variation in body size. No evidence that acoustic variables were related to male condition was found, indicating that sexual selection might only be a weak force modulating the lion's roar. Instead, lion roars may have mainly been selected to effectively advertise territorial boundaries.

  17. A Spiking Neural Network in sEMG Feature Extraction.

    PubMed

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  18. A Spiking Neural Network in sEMG Feature Extraction

    PubMed Central

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed roughly equal accuracy despite a significant difference in sampling rate. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  19. A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data

    PubMed Central

    Hira, Zena M.; Gillies, Duncan F.

    2015-01-01

    We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and they are being widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition the complicated relations among the different genes make analysis more difficult and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources. PMID:26170834

  20. Scanning Acoustic Microscopy for Characterization of Coatings and Near-Surface Features of Ceramics

    SciTech Connect

    Qu, Jun; Blau, Peter Julian

    2006-01-01

    Scanning Acoustic Microscopy (SAcM) has been widely used for non-destructive evaluation (NDE) in various fields such as material characterization, electronics, and biomedicine. SAcM uses high-frequency acoustic waves (60 MHz to 2.0 GHz), providing much higher resolution (up to 0.5 µm) than conventional ultrasonic NDE, whose resolution is typically about 500 µm. SAcM offers the ability to non-destructively image subsurface features and visualize the variations in elastic properties. These attributes make SAcM a valuable tool for characterizing near-surface material properties and detecting fine-scale flaws. This paper presents some recent applications of SAcM in detecting subsurface damage, assessing coatings, and visualizing residual stress for ceramic and semiconductor materials.

  1. Prenatal features of Pena-Shokeir sequence with atypical response to acoustic stimulation.

    PubMed

    Pittyanont, Sirida; Jatavan, Phudit; Suwansirikul, Songkiat; Tongsong, Theera

    2016-09-01

    A fetal sonographic screening examination performed at 23 weeks showed polyhydramnios, micrognathia, fixed postures of all long bones, but no movement and no breathing. The fetus showed fetal heart rate acceleration but no movement when acoustic stimulation was applied with artificial larynx. All these findings persisted on serial examinations. The neonate was stillborn at 37 weeks and a final diagnosis of Pena-Shokeir sequence was made. In addition to typical sonographic features of Pena-Shokeir sequence, fetal heart rate accelerations with no movement in response to acoustic stimulation suggests that peripheral myopathy may possibly play an important role in the pathogenesis of the disease. © 2016 Wiley Periodicals, Inc. J Clin Ultrasound 44:459-462, 2016. PMID:27312123

  2. Eddy current pulsed phase thermography and feature extraction

    NASA Astrophysics Data System (ADS)

    He, Yunze; Tian, GuiYun; Pan, Mengchun; Chen, Dixiang

    2013-08-01

    This letter proposes an eddy current pulsed phase thermography technique combining eddy current excitation, infrared imaging, and phase analysis. One steel sample, providing subsurface defects at different depths, is selected as the material under test to avoid the influence of skin depth. The experimental results show that the proposed method can eliminate non-uniform heating and improve defect detectability. Several features are extracted from the differential phase spectra, and preliminary linear relationships are established to measure the depth of these subsurface defects.

  3. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20km of coastline near Duck, North Carolina which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
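
    As a small illustration of the transect-based analysis described above, the sketch below picks a dune-toe candidate on a single cross-shore elevation profile as the point of maximum curvature seaward of the crest; the profile is synthetic and the curvature criterion is a simplified stand-in for the authors' full feature set.

    ```python
    import numpy as np

    def dune_toe_from_transect(x, z):
        """Locate the dune crest and a dune-toe candidate on one cross-shore profile.

        x: cross-shore distance (m, increasing seaward); z: elevation (m).
        The toe is taken as the maximum-curvature point seaward of the crest,
        a simplified criterion used here only for illustration.
        """
        dz = np.gradient(z, x)
        curvature = np.gradient(dz, x)             # second derivative of elevation
        crest = int(np.argmax(z))                  # highest point = dune crest
        seaward = slice(crest + 1, len(x) - 1)
        toe = crest + 1 + int(np.argmax(curvature[seaward]))
        return crest, toe

    # Synthetic profile: a dune face descending onto a flatter beach.
    x = np.linspace(0, 100, 401)
    z = 6.0 / (1.0 + np.exp((x - 40) / 5.0)) + 0.5   # smooth dune-to-beach transition
    crest, toe = dune_toe_from_transect(x, z)
    print(f"crest at x={x[crest]:.1f} m (z={z[crest]:.2f}), toe at x={x[toe]:.1f} m (z={z[toe]:.2f})")
    ```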

  4. A Comparison of Signal Enhancement Methods for Extracting Tonal Acoustic Signals

    NASA Technical Reports Server (NTRS)

    Jones, Michael G.

    1998-01-01

    The measurement of pure tone acoustic pressure signals in the presence of masking noise, often generated by mean flow, is a continual problem in the field of passive liner duct acoustics research. In support of the Advanced Subsonic Technology Noise Reduction Program, methods were investigated for conducting measurements of advanced duct liner concepts in harsh, aeroacoustic environments. This report presents the results of a comparison study of three signal extraction methods for acquiring quality acoustic pressure measurements in the presence of broadband noise (used to simulate the effects of mean flow). The performance of each method was compared to a baseline measurement of a pure tone acoustic pressure 3 dB above a uniform, broadband noise background.

  5. Motion feature extraction scheme for content-based video retrieval

    NASA Astrophysics Data System (ADS)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes a scheme for extracting global motion and object trajectories in a video shot for content-based video retrieval. Motion is the key feature representing the temporal information of videos, and it is more objective and consistent than other features such as color and texture. Efficient motion feature extraction is therefore an important step for content-based video retrieval. Some approaches have been proposed to extract camera motion and motion activity in video sequences, but when dealing with object tracking, algorithms are usually proposed on the basis of a known object region in the frames. In this paper, a whole picture of the motion information in the video shot is obtained by automatically analyzing the motion of the background and the foreground separately. A 6-parameter affine model is utilized as the motion model of the background, and a fast and robust global motion estimation algorithm is developed to estimate the parameters of the motion model. The object region is obtained by means of global motion compensation between two consecutive frames. The center of the object region is then calculated and tracked to obtain the object motion trajectory in the video sequence. Global motion and object trajectory are described with the MPEG-7 parametric motion and motion trajectory descriptors, and valid similarity measures are defined for the two descriptors. Experimental results indicate that the proposed scheme is reliable and efficient.
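
    A loose sketch of global (background) motion estimation between two consecutive frames, using OpenCV corner tracking and a robust affine fit as a stand-in for the paper's own fast global motion estimator; the frame file names and parameter values are placeholders.

    ```python
    import cv2
    import numpy as np

    def estimate_global_motion(prev_gray, curr_gray):
        """Estimate a 2x3 affine model of background motion between two frames."""
        # Track sparse corner features from the previous to the current frame.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                           qualityLevel=0.01, minDistance=8)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                       pts_prev, None)
        good = status.ravel() == 1
        # Robust (RANSAC) affine fit; outliers such as foreground objects are rejected.
        affine, inliers = cv2.estimateAffine2D(pts_prev[good], pts_curr[good])
        return affine

    # Placeholder frame files; substitute two consecutive frames of a shot.
    prev_gray = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    curr_gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    if prev_gray is None or curr_gray is None:
        raise FileNotFoundError("supply two consecutive frames as frame_000/001.png")

    A = estimate_global_motion(prev_gray, curr_gray)
    print("affine global motion:\n", A)

    # Compensating the previous frame with A and differencing against the current
    # frame highlights foreground object regions, in the spirit of the record's scheme.
    h, w = curr_gray.shape
    compensated = cv2.warpAffine(prev_gray, A, (w, h))
    residual = cv2.absdiff(curr_gray, compensated)
    ```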

  6. Feature extraction and integration for the quantification of PMFL data

    NASA Astrophysics Data System (ADS)

    Wilson, John W.; Kaba, Muma; Tian, Gui Yun; Licciardi, Steven

    2010-06-01

    If the vast networks of aging iron and steel, oil, gas and water pipelines are to be kept in operation, efficient and accurate pipeline inspection techniques are needed. Magnetic flux leakage (MFL) systems are widely used for ferromagnetic pipeline inspection and although MFL offers reasonable defect detection capabilities, characterisation of defects can be problematic and time consuming. The newly developed pulsed magnetic flux leakage (PMFL) system offers an inspection technique which equals the defect detection capabilities of traditional MFL, but also provides an opportunity to automatically extract defect characterisation information through analysis of the transient sections of the measured signals. In this paper internal and external defects in rolled steel water pipes are examined using PMFL, and feature extraction and integration techniques are explored to both provide defect depth information and to discriminate between internal and external defects. Feature combinations are recommended for defect characterisation and the paper concludes that PMFL can provide enhanced defect characterisation capabilities for flux leakage based inspection systems using feature extraction and integration.

  7. Automatic localization and feature extraction of white blood cells

    NASA Astrophysics Data System (ADS)

    Kovalev, Vassili A.; Grigoriev, Andrei Y.; Ahn, Hyo-Sok; Myshkin, Nickolai K.

    1995-05-01

    The paper presents a method for automatic localization and feature extraction of white blood cells (WBCs) from color images, aimed at an efficient automated WBC counting system based on image analysis and recognition. Nucleus blob extraction consists of five steps: (1) nucleus pixel labeling; (2) filtration of the nucleus pixel template; (3) segmentation and extraction of nucleus blobs by region growing; (4) removal of uninteresting blobs; and (5) marking of external and internal blob border pixels and hole pixels. The detection of nucleus pixels is based on the intensity of the G image plane and the balance between G and B intensity. Localized nucleus segments are grouped into a cell nucleus by a hierarchical merging procedure according to their area, shape and the conditions of their spatial occurrence. Cytoplasm segmentation based on pixel intensity and color parameters alone is found to be unreliable; we overcome this problem by using an edge-improving technique. WBC templates are then calculated and additional cell feature sets are constructed for recognition. The cell feature sets describe the principal geometric and color properties of each type of WBC. Finally, we evaluate the recognition accuracy of the developed algorithm, which proves to be highly reliable and fast.
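
    A heavily simplified Python sketch of steps (1)-(4) is given below; the thresholds, the use of connected components in place of region growing, and the morphological filtration step are illustrative assumptions, not the values or operations used in the paper.

        import numpy as np
        from scipy import ndimage

        def nucleus_mask(rgb, g_max=120, gb_ratio=0.9, min_area=50):
            """Rough nucleus pixel labeling and blob filtering.

            rgb: (H, W, 3) uint8 image. Thresholds are illustrative guesses.
            """
            g = rgb[..., 1].astype(float)
            b = rgb[..., 2].astype(float)
            # (1) label nucleus pixels: dark in G and G dominated by B
            candidate = (g < g_max) & (g < gb_ratio * b)
            # (2) simple filtration of the pixel template
            candidate = ndimage.binary_opening(candidate, iterations=1)
            # (3) connected components stand in for region growing
            labels, n = ndimage.label(candidate)
            # (4) remove blobs too small to be nuclei
            sizes = ndimage.sum(candidate, labels, index=np.arange(1, n + 1))
            keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_area))
            return keep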

  8. Automated feature extraction for 3-dimensional point clouds

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a LIDAR survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully automated feature extraction algorithm are then compared to ground truth to assess the completeness and accuracy of the methodology.
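
    Once a bare earth surface is available, the terrain/non-terrain split reduces to comparing each return against the ground elevation beneath it. The sketch below assumes the points already lie inside the bare earth grid (origin at 0,0) and that a 0.5 m height threshold separates the two classes; both are illustrative assumptions, not part of the TEXAS algorithm.

        import numpy as np

        def classify_points(points, bare_earth, cell_size, ht_thresh=0.5):
            """Label returns as terrain / non-terrain.

            points: (N, 3) array of x, y, z.
            bare_earth: 2D grid of ground elevations (e.g. a TEXAS-style output).
            cell_size and the 0.5 m threshold are illustrative assumptions.
            """
            ix = (points[:, 0] // cell_size).astype(int)
            iy = (points[:, 1] // cell_size).astype(int)
            height = points[:, 2] - bare_earth[iy, ix]
            return np.where(height > ht_thresh, "non-terrain", "terrain")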

  9. Dual-pass feature extraction on human vessel images.

    PubMed

    Hernandez, W; Grimm, S; Andriantsimiavona, R

    2014-06-01

    We present a novel algorithm for the extraction of cavity features from images of human vessels. Fat deposits in the inner wall of such structures introduce artifacts and regions in the captured images that invalidate the usual assumption of an elliptical model, which makes the process of extracting the central passage more difficult. Our approach was designed to cope with these challenges and extract the required image features in a fully automated, accurate, and efficient way using two stages: the first determines a bounding segmentation mask that prevents major leakages from pixels of the cavity area, using a circular region fill that operates as a paint brush followed by Principal Component Analysis with auto-correction; the second extracts a precise cavity enclosure using a micro-dilation filter and an edge-walking scheme. The accuracy of the algorithm has been tested using 30 computed tomography angiography scans of the lower part of the body containing different degrees of inner wall distortion. The results were compared to manual annotations from a specialist, yielding sensitivity around 98%, false positive rate around 8%, and positive predictive value around 93%. The average execution time was 24 and 18 ms on two types of commodity hardware over sections of 15 cm in length (approx. 1 ms per contour), which makes it more than suitable for use in interactive software applications. Reproducibility tests were also carried out with synthetic images, showing no variation in the computed diameters against the theoretical measure.

  10. Dual-pass feature extraction on human vessel images.

    PubMed

    Hernandez, W; Grimm, S; Andriantsimiavona, R

    2014-06-01

    We present a novel algorithm for the extraction of cavity features from images of human vessels. Fat deposits in the inner wall of such structures introduce artifacts and regions in the captured images that invalidate the usual assumption of an elliptical model, which makes the process of extracting the central passage more difficult. Our approach was designed to cope with these challenges and extract the required image features in a fully automated, accurate, and efficient way using two stages: the first determines a bounding segmentation mask that prevents major leakages from pixels of the cavity area, using a circular region fill that operates as a paint brush followed by Principal Component Analysis with auto-correction; the second extracts a precise cavity enclosure using a micro-dilation filter and an edge-walking scheme. The accuracy of the algorithm has been tested using 30 computed tomography angiography scans of the lower part of the body containing different degrees of inner wall distortion. The results were compared to manual annotations from a specialist, yielding sensitivity around 98%, false positive rate around 8%, and positive predictive value around 93%. The average execution time was 24 and 18 ms on two types of commodity hardware over sections of 15 cm in length (approx. 1 ms per contour), which makes it more than suitable for use in interactive software applications. Reproducibility tests were also carried out with synthetic images, showing no variation in the computed diameters against the theoretical measure. PMID:24197278

  11. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discovery and new therapy development. Manually mining these relations from the biomedical literature is costly and time-consuming, and such a procedure is difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of BioCreative-V. We built a machine learning based system that utilizes simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH) controlled vocabulary were also employed during both the training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We reduced relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire the final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618
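
    In practice, a maximum entropy classifier over sparse linguistic features is multinomial logistic regression. The sketch below trains such a classifier on a few invented mention-pair instances using scikit-learn; the feature names and toy data are assumptions, and the actual feature templates of the system are not reproduced.

        from sklearn.feature_extraction import DictVectorizer
        from sklearn.linear_model import LogisticRegression

        # Toy mention-pair instances with a few linguistic features of the kind
        # the paper describes (words between the mentions, MeSH hypernym flags).
        train_instances = [
            ({"between_word=induced": 1, "mesh_hypernym_match": 1, "distance": 3}, 1),
            ({"between_word=treats": 1, "mesh_hypernym_match": 0, "distance": 5}, 0),
            ({"between_word=caused": 1, "mesh_hypernym_match": 1, "distance": 2}, 1),
            ({"between_word=with": 1, "mesh_hypernym_match": 0, "distance": 8}, 0),
        ]
        X_dicts, y = zip(*train_instances)

        vec = DictVectorizer()
        X = vec.fit_transform(X_dicts)
        maxent = LogisticRegression(max_iter=1000)   # logistic regression == MaxEnt
        maxent.fit(X, y)

        test = vec.transform([{"between_word=induced": 1,
                               "mesh_hypernym_match": 1, "distance": 4}])
        print(maxent.predict(test))                  # [1] -> CID relation predicted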

  12. Chemical-induced disease relation extraction with various linguistic features.

    PubMed

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discovery and new therapy development. Manually mining these relations from the biomedical literature is costly and time-consuming, and such a procedure is difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of BioCreative-V. We built a machine learning based system that utilizes simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH) controlled vocabulary were also employed during both the training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We reduced relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire the final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask. PMID:27052618

  13. A flexible data-driven comorbidity feature extraction framework.

    PubMed

    Sideris, Costas; Pourhomayoun, Mohammad; Kalantarian, Haik; Sarrafzadeh, Majid

    2016-06-01

    Disease and symptom diagnostic codes are a valuable resource for classifying and predicting patient outcomes. In this paper, we propose a novel methodology for utilizing disease diagnostic information in a predictive machine learning framework. Our methodology relies on a novel, clustering-based feature extraction framework using disease diagnostic information. To reduce the data dimensionality, we identify disease clusters using co-occurrence statistics. We optimize the number of generated clusters in the training set and then utilize these clusters as features to predict patient severity of condition and patient readmission risk. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP) which contains 7 million hospital discharge records and ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 Congestive Heart Failure (CHF) patients and the UCI 130-US diabetes dataset that includes admissions from 69,980 diabetic patients. We compare our cluster-based feature set with the commonly used comorbidity frameworks including Charlson's index, Elixhauser's comorbidities and their variations. The proposed approach was shown to have significant gains between 10.7-22.1% in predictive accuracy for CHF severity of condition prediction and 4.65-5.75% in diabetes readmission prediction. PMID:27127895
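
    The sketch below illustrates the general idea of grouping diagnostic codes by co-occurrence and then counting cluster membership per patient as features; the clustering algorithm and all parameter values are stand-ins, not the ones used in the paper.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering

        def cluster_codes(admissions, codes, n_clusters=3):
            """Group diagnosis codes by how often they co-occur across admissions.

            admissions: list of sets of codes per hospital record.
            Returns a dict code -> cluster id.
            """
            idx = {c: i for i, c in enumerate(codes)}
            co = np.zeros((len(codes), len(codes)))
            for adm in admissions:
                present = [idx[c] for c in adm if c in idx]
                for i in present:
                    for j in present:
                        co[i, j] += 1
            labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(co)
            return {c: int(labels[idx[c]]) for c in codes}

        def patient_features(admission, code_clusters, n_clusters=3):
            """Feature vector: how many of the patient's codes fall in each cluster."""
            v = np.zeros(n_clusters)
            for c in admission:
                if c in code_clusters:
                    v[code_clusters[c]] += 1
            return v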

  14. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within a racially closely related group. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of these experiments suggest that race perception is an ability that can be learned, that eyes and eyebrows attract the most attention, and that the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race group, and the extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is fundamental work on race perception, which is essential for the establishment of a human-like race recognition system.
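
    A minimal sketch of the PCA step on a matrix of aligned face images is shown below (placeholder data; the study applies PCA separately to texture and shape, which is not reproduced here).

        import numpy as np
        from sklearn.decomposition import PCA

        # faces: (n_samples, n_pixels) matrix of aligned, flattened face images.
        rng = np.random.default_rng(0)
        faces = rng.random((40, 64 * 64))          # placeholder data

        pca = PCA(n_components=10)
        features = pca.fit_transform(faces)        # per-face feature vectors

        # A face can be re-synthesized from its principal-component coordinates,
        # which is how faces with blended or exaggerated features are generated.
        reconstructed = pca.inverse_transform(features[:1])
        print(features.shape, reconstructed.shape)  # (40, 10) (1, 4096)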

  15. Semantic feature extraction for interior environment understanding and retrieval

    NASA Astrophysics Data System (ADS)

    Lei, Zhibin; Liang, Yufeng

    1998-12-01

    In this paper, we propose a novel system of semantic feature extraction and retrieval for interior design and decoration application. The system, V2ID(Virtual Visual Interior Design), uses colored texture and spatial edge layout to obtain simple information about global room environment. We address the domain-specific segmentation problem in our application and present techniques for obtaining semantic features from a room environment. We also discuss heuristics for making use of these features (color, texture, edge layout, and shape), to retrieve objects from an existing database. The final resynthesized room environment, with the original scene and objects from the database, is created for the purpose of animation and virtual walk-through.

  16. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    NASA Astrophysics Data System (ADS)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-standing concern worldwide, and impending flooding is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project uses remote sensing technologies to determine the population in probable danger by mapping and attributing building features using LiDAR datasets and satellite imageries. A free mapping software named Google Earth Pro (GEP) is used to load these satellite imageries as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging by GEP depends on either the satellite imageries or the orthophotograph images of half-meter resolution obtained during LiDAR acquisition, and not on the GPS of three-meter accuracy. The attributed building features are overlaid on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imageries may not only be used in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imageries.

  17. Magnetic field feature extraction and selection for indoor location estimation.

    PubMed

    Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F

    2014-01-01

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructures made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5 regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios. PMID:24955944
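
    A toy version of GA-based feature selection is sketched below: each individual is a binary mask over the 46 signal features and its fitness is the cross-validated accuracy of a simple classifier on the selected columns. The genetic operators and the classifier are generic assumptions, not the paper's exact configuration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def ga_select(X, y, n_gen=30, pop_size=20, p_mut=0.05, seed=0):
            """Tiny genetic algorithm for feature-subset selection."""
            rng = np.random.default_rng(seed)
            n_feat = X.shape[1]
            pop = rng.integers(0, 2, size=(pop_size, n_feat))

            def fitness(mask):
                if mask.sum() == 0:
                    return 0.0
                clf = LogisticRegression(max_iter=500)
                return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

            for _ in range(n_gen):
                scores = np.array([fitness(ind) for ind in pop])
                parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # keep best half
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n_feat)                 # single-point crossover
                    child = np.concatenate([a[:cut], b[cut:]])
                    flip = rng.random(n_feat) < p_mut             # bit-flip mutation
                    child[flip] = 1 - child[flip]
                    children.append(child)
                pop = np.vstack([parents, children])
            best = pop[np.argmax([fitness(ind) for ind in pop])]
            return np.flatnonzero(best)   # indices of the selected features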

  18. Magnetic Field Feature Extraction and Selection for Indoor Location Estimation

    PubMed Central

    Galván-Tejada, Carlos E.; García-Vázquez, Juan Pablo; Brena, Ramon F.

    2014-01-01

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructures made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5 regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios. PMID:24955944

  19. Music-induced emotions can be predicted from a combination of brain activity and acoustic features.

    PubMed

    Daly, Ian; Williams, Duncan; Hallowell, James; Hwang, Faustina; Kirke, Alexis; Malik, Asad; Weaver, James; Miranda, Eduardo; Nasuto, Slawomir J

    2015-12-01

    It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found which allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that over 20% of the variance of the participants' music-induced emotions can be predicted from their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracy than either feature type alone (p < 0.01). PMID:26544602
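
    The modelling step amounts to regressing an emotion rating onto concatenated EEG and acoustic features and checking the correlation between predicted and actual responses on held-out data. The sketch below does this with synthetic placeholder features; the real EEG band powers, acoustic descriptors and ratings are of course not reproduced.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from scipy.stats import pearsonr

        # Hypothetical per-epoch features and a continuous emotion rating.
        rng = np.random.default_rng(1)
        eeg = rng.standard_normal((200, 8))        # e.g. band power in 8 channels
        acoustic = rng.standard_normal((200, 5))   # e.g. 5 acoustic descriptors
        valence = eeg[:, 0] * 0.3 + acoustic[:, 1] * 0.2 + rng.standard_normal(200)

        train, test = slice(0, 150), slice(150, 200)

        def fit_and_correlate(X):
            model = LinearRegression().fit(X[train], valence[train])
            r, p = pearsonr(model.predict(X[test]), valence[test])
            return r, p

        print("EEG only      ", fit_and_correlate(eeg))
        print("acoustic only ", fit_and_correlate(acoustic))
        print("combined      ", fit_and_correlate(np.hstack([eeg, acoustic])))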

  20. System and method for investigating sub-surface features of a rock formation with acoustic sources generating coded signals

    SciTech Connect

    Vu, Cung Khac; Nihei, Kurt; Johnson, Paul A; Guyer, Robert; Ten Cate, James A; Le Bas, Pierre-Yves; Larmat, Carene S

    2014-12-30

    A system and a method for investigating rock formations include generating, by a first acoustic source, a first acoustic signal comprising a first plurality of pulses, each pulse including a first modulated signal at a central frequency, and generating, by a second acoustic source, a second acoustic signal comprising a second plurality of pulses. A receiver arranged within the borehole receives a detected signal that includes a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within the intersection volume. The method also includes processing the received signal to extract the signal generated by the non-linear mixing process over noise or over signals generated by a linear interaction process, or both.

  1. An Improved Approach of Mesh Segmentation to Extract Feature Regions.

    PubMed

    Gu, Minghui; Duan, Liming; Wang, Maolin; Bai, Yang; Shao, Hui; Wang, Haoyu; Liu, Fenglin

    2015-01-01

    The objective of this paper is to extract concave and convex feature regions via segmenting surface mesh of a mechanical part whose surface geometry exhibits drastic variations and concave-convex features are equally important when modeling. Referring to the original approach based on the minima rule (MR) in cognitive science, we have created a revised minima rule (RMR) and presented an improved approach based on RMR in the paper. Using the logarithmic function in terms of the minimum curvatures that are normalized by the expectation and the standard deviation on the vertices of the mesh, we determined the solution formulas for the feature vertices according to RMR. Because only a small range of the threshold parameters was selected from in the determined formulas, an iterative process was implemented to realize the automatic selection of thresholds. Finally according to the obtained feature vertices, the feature edges and facets were obtained by growing neighbors. The improved approach overcomes the inherent inadequacies of the original approach for our objective in the paper, realizes full automation without setting parameters, and obtains better results compared with the latest conventional approaches. We demonstrated the feasibility and superiority of our approach by performing certain experimental comparisons.

  2. An Improved Approach of Mesh Segmentation to Extract Feature Regions

    PubMed Central

    Gu, Minghui; Duan, Liming; Wang, Maolin; Bai, Yang; Shao, Hui; Wang, Haoyu; Liu, Fenglin

    2015-01-01

    The objective of this paper is to extract concave and convex feature regions via segmenting surface mesh of a mechanical part whose surface geometry exhibits drastic variations and concave-convex features are equally important when modeling. Referring to the original approach based on the minima rule (MR) in cognitive science, we have created a revised minima rule (RMR) and presented an improved approach based on RMR in the paper. Using the logarithmic function in terms of the minimum curvatures that are normalized by the expectation and the standard deviation on the vertices of the mesh, we determined the solution formulas for the feature vertices according to RMR. Because only a small range of the threshold parameters was selected from in the determined formulas, an iterative process was implemented to realize the automatic selection of thresholds. Finally according to the obtained feature vertices, the feature edges and facets were obtained by growing neighbors. The improved approach overcomes the inherent inadequacies of the original approach for our objective in the paper, realizes full automation without setting parameters, and obtains better results compared with the latest conventional approaches. We demonstrated the feasibility and superiority of our approach by performing certain experimental comparisons. PMID:26436657

  3. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by the localized fault are important measurement information for bearing fault diagnosis. Thus it is crucial to extract the transients from the bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed based on the TF distribution (TFD) of the Morlet wavelet bases and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are employed to do correlation with the TFD of the analyzed signal along the IF ridge tube for identifying the optimum parameters of transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of satisfying the native pulse waveform structure of transients. The effectiveness of the proposed method is verified by practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.
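
    The sketch below shows a single, much simpler matching step in the time domain: correlate the signal with one Morlet atom and subtract the best-aligned, best-scaled copy. The paper performs the matching on the time-frequency plane along the instantaneous-frequency ridge, which is not reproduced here, and the atom parameters are illustrative.

        import numpy as np
        from scipy import signal

        def morlet_atom(fs, f_c, duration=0.02):
            """A Morlet-style atom centred at frequency f_c (Hz)."""
            t = np.arange(-duration / 2, duration / 2, 1 / fs)
            envelope = np.exp(-(t ** 2) / (2 * (duration / 8) ** 2))
            return np.cos(2 * np.pi * f_c * t) * envelope

        def extract_transient(x, fs, f_c):
            """One matching iteration: find and remove the best-fitting atom copy."""
            atom = morlet_atom(fs, f_c)
            corr = signal.correlate(x, atom, mode="same")
            k = int(np.argmax(np.abs(corr)))          # best alignment (approximate)
            amp = corr[k] / np.sum(atom ** 2)         # least-squares amplitude
            component = np.zeros_like(x)
            start = max(0, k - len(atom) // 2)
            stop = min(len(x), start + len(atom))
            component[start:stop] = amp * atom[: stop - start]
            return component, x - component           # extracted transient, residual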

  4. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique for identifying individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. Because iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, in contrast to fingerprints, which can be altered by several factors including accidental damage, dry or oily skin and dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its demanding requirements such as camera resolution, hardware size, expensive equipment and computational complexity. However, at the present time, technology has overcome these obstacles. Iris recognition can be done through several sequential steps which include pre-processing, feature extraction, post-processing, and a matching stage. In this paper, we adopted a directional high-low pass filter for feature extraction. A box-counting fractal dimension and an iris code have been proposed as feature representations. Our approach has been tested on the CASIA Iris Image database and the results are considered successful.
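
    The box-counting fractal dimension used as one of the feature representations can be estimated as the slope of log(occupied boxes) versus log(1/box size) on a binarized texture image; a minimal sketch follows (the box sizes are arbitrary choices, and the mask is assumed non-empty at every scale).

        import numpy as np

        def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
            """Box-counting fractal dimension of a binary texture image.

            mask: 2D boolean array, e.g. the thresholded output of the
            directional high-low pass filter.
            """
            counts = []
            for s in sizes:
                h = mask.shape[0] // s * s
                w = mask.shape[1] // s * s
                blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
                counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
            slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
            return slope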

  5. Linear unmixing of hyperspectral signals via wavelet feature extraction

    NASA Astrophysics Data System (ADS)

    Li, Jiang

    A pixel in remotely sensed hyperspectral imagery is typically a mixture of multiple electromagnetic radiances from various ground cover materials. Spectral unmixing is a quantitative analysis procedure used to recognize constituent ground cover materials (or endmembers) and obtain their mixing proportions (or abundances) from a mixed pixel. The abundances are typically estimated using the least squares estimation (LSE) method based on the linear mixture model (LMM). This dissertation provides a complete investigation of how the use of appropriate features can improve the LSE of endmember abundances using remotely sensed hyperspectral signals. The dissertation shows how features based on signal classification approaches, such as the discrete wavelet transform (DWT), outperform features based on conventional signal representation methods for dimensionality reduction, such as principal component analysis (PCA), for the LSE of endmember abundances. Both experimental and theoretical analyses are reported in the dissertation. A DWT-based linear unmixing system is designed specifically for the abundance estimation. The system utilizes the DWT as a pre-processing step for feature extraction. Based on DWT-based features, the system utilizes the constrained LSE for the abundance estimation. Experimental results show that the use of DWT-based features reduces the abundance estimation deviation by 30--50% on average, as compared to the use of original hyperspectral signals or conventional PCA-based features. Based on the LMM and the LSE method, a series of theoretical analyses are derived to reveal the fundamental reasons why the use of appropriate features, such as DWT-based features, can improve the LSE of endmember abundances. Under reasonable assumptions, the dissertation derives a generalized mathematical relationship between the abundance estimation error and the endmember separability. It is proven that the abundance estimation error can be reduced through increasing
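
    The core estimation step, whatever feature space is used, is a constrained least-squares fit of the linear mixture model. A minimal sketch follows, enforcing non-negativity exactly and the sum-to-one constraint softly via a weighted row of ones; this soft constraint is a common trick and an assumption here, not necessarily the dissertation's formulation.

        import numpy as np
        from scipy.optimize import nnls

        def unmix(pixel, endmembers, weight=10.0):
            """Non-negative, approximately sum-to-one abundance estimate.

            pixel: (n_bands,) spectrum (raw bands or DWT coefficients of it).
            endmembers: (n_bands, n_endmembers) matrix of endmember signatures
            in the same feature space.
            """
            A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
            b = np.concatenate([pixel, [weight]])
            abundances, _ = nnls(A, b)
            return abundances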

  6. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that obtained with the original features and with a traditional feature extraction method. PMID:25868233

  7. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that obtained with the original features and with a traditional feature extraction method.

  8. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    NASA Astrophysics Data System (ADS)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and related technology has had a major impact and given rise to a new kind of business called e-commerce. Many e-commerce sites provide convenient transactions, and consumers can also provide reviews or opinions on the products they have purchased. These opinions can be used by both consumers and producers: consumers can learn the advantages and disadvantages of particular features of a product, while producers can analyse the strengths and weaknesses of their own products as well as those of competitors. The large volume of opinions calls for a method that lets a reader grasp the gist of the opinions as a whole. The idea arises from review summarization, which summarizes the overall opinion based on the sentiments and features it contains. In this study, the domain of interest is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; and 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification, a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), and a knowledge-based dictionary that is useful for handling implicit features. The end result of the research is a summary of the consumer reviews organized by feature and sentiment. With the proposed method, the accuracy of sentiment classification reaches 81.2% for positive test data and 80.2% for negative test data, and the accuracy of feature extraction reaches 90.3%.
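
    The sentiment classification step can be sketched with a standard multinomial Naïve Bayes text classifier; the toy reviews below and the bag-of-words features are illustrative assumptions, and the dependency-based feature extraction is not reproduced.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        # Toy camera reviews labelled with sentiment.
        reviews = ["battery life is excellent", "the lens is very sharp",
                   "autofocus is slow and noisy", "battery drains too fast"]
        labels = ["positive", "positive", "negative", "negative"]

        vec = CountVectorizer()
        X = vec.fit_transform(reviews)
        clf = MultinomialNB().fit(X, labels)

        print(clf.predict(vec.transform(["autofocus is too slow"])))  # ['negative']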

  9. Most information feature extraction (MIFE) approach for face recognition

    NASA Astrophysics Data System (ADS)

    Zhao, Jiali; Ren, Haibing; Wang, Haitao; Kee, Seokcheol

    2005-03-01

    We present a MIFE (Most Information Feature Extraction) approach, which extracts as much information as possible for the face classification task. In the MIFE approach, a facial image is separated into sub-regions and each sub-region makes its own contribution to face recognition. Specifically, each sub-region is subjected to a sub-region based adaptive gamma (SadaGamma) correction or a sub-region based histogram equalization (SHE) in order to account for different illuminations and expressions. Experimental results show that the proposed SadaGamma/SHE correction approach provides an efficient delighting solution for face recognition. MIFE and SadaGamma/SHE correction together achieve a lower error ratio in face recognition under different illuminations and expressions.

  10. Extract relevant features from DEM for groundwater potential mapping

    NASA Astrophysics Data System (ADS)

    Liu, T.; Yan, H.; Zhai, L.

    2015-06-01

    The multi-criteria evaluation (MCE) method has been applied widely in groundwater potential mapping research, but in data-scarce areas it encounters many problems due to limited data. A Digital Elevation Model (DEM) is a digital representation of the topography and has applications in many fields. Previous research has shown that much of the information relevant to groundwater potential mapping (such as geological features, terrain features, hydrological features, etc.) can be extracted from DEM data, which makes using DEM data for groundwater potential mapping feasible. In this research, DEM data, one of the most widely used and most easily accessed data types in GIS, was used to extract information for groundwater potential mapping in the Batter River basin in Alberta, Canada. First, five determining factors for groundwater potential mapping were put forward based on previous studies (lineaments and lineament density, drainage networks and their density, topographic wetness index (TWI), relief and convergence index (CI)). Methods for extracting the five determining factors from the DEM were put forward and thematic maps were produced accordingly. A cumulative effects matrix was used for weight assignment, and a multi-criteria evaluation process was carried out with ArcGIS software to delineate the potential groundwater map. The final groundwater potential map was divided into five categories, viz., non-potential, poor, moderate, good, and excellent zones. Finally, the success rate curve was drawn and the area under the curve (AUC) was computed for validation. The validation result showed that the success rate of the model was 79%, confirming the method's feasibility. The method affords a new way to study groundwater management in areas that suffer from data scarcity, and also broadens the application area of DEM data.

  11. Feature Extraction from Subband Brain Signals and Its Classification

    NASA Astrophysics Data System (ADS)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers both non-stationarity and independence/uncorrelatedness criteria, along with an asymmetry ratio, over electroencephalogram (EEG) signals and proposes a hybrid signal preprocessing approach prior to feature extraction. A filter bank based on the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals; it decomposes the raw EEG signals into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the orthogonality and orthonormality criteria on the whitening matrix and the separating matrix, respectively, no longer hold. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between these two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes provides the condition for selecting an appropriate multiplier. We therefore modify the estimated separating matrix by an appropriate multiplier in order to obtain the required asymmetry, and extend the AMUSE algorithm to the subband domain. The desired subband is further subjected to the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-component sources are then subjected to feature extraction (power spectral density) followed by linear discriminant analysis (LDA).

  12. Feature Extraction and Analysis of Breast Cancer Specimen

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests, if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison in this paper. In effect, features of cancerous (invasive) breast tissue are extracted and analyzed against normal breast tissue. We also suggest a breast cancer recognition technique based on image processing, and prevention by controlling p53 gene mutation to some greater extent.

  13. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two dimensional (2D) mel and Mellin-cepstrum computation from popcorn kernel images. Cepstral features that were extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41% and the recognition rate for damaged kernels was found to be 89.43%.
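
    The plain (un-warped) 2D cepstrum of a kernel image, from which the mel- and Mellin-warped variants depart, can be computed directly from the definition cepstrum = IFFT(log|FFT(image)|); a minimal sketch that keeps the low-quefrency block as an SVM feature vector follows (the block size is an arbitrary choice).

        import numpy as np

        def cepstrum_features_2d(img, keep=16):
            """Low-quefrency block of the plain 2D cepstrum of a grayscale image."""
            spectrum = np.fft.fft2(img.astype(float))
            log_mag = np.log(np.abs(spectrum) + 1e-12)
            cepstrum = np.real(np.fft.ifft2(log_mag))
            return cepstrum[:keep, :keep].ravel()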

  14. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system allowing lane detection for marked urban roads and analysis of their features. The task is to relate the georeferencing of road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects existing on the road, the present algorithm enables us to examine these images automatically and rapidly and to obtain information on road marks, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.

  15. Texture Feature Extraction and Classification for Iris Diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques in iris image processing and combining occidental iridology with traditional Chinese medicine is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis and disease classification. For the pre-processing, a 2-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally support vector machines are constructed to recognize two typical diseases, an alimentary canal disease and a nerve system disease. Experimental results show that the proposed iridology diagnosis model is effective and promising for medical diagnosis and health surveillance for both hospital and public use.

  16. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which is useful in many works. To obtain a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. We then use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results. PMID:27656199

  17. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which is useful in many works. To obtain a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. We then use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results.

  18. Automatic feature extraction in neural network noniterative learning

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Lun J.

    1997-04-01

    It is proved analytically that, whenever the input-output mapping of a one-layered, hard-limited perceptron satisfies a positive, linear independency (PLI) condition, the connection matrix A meeting this mapping can be obtained noniteratively, in one step, from an algebraic matrix equation containing an N x M input matrix U. Each column of U is a given standard pattern vector, and there are M standard patterns to be classified. It is also analytically proved that sorting out all nonsingular sub-matrices Uk in U can be used as an automatic feature extraction process in this noniterative-learning system. This paper reports the theoretical derivation and the design and experiments of a superfast-learning, optimally robust, neural network pattern recognition system utilizing this novel feature extraction process. An unedited video movie showing the speed of learning and the robustness in recognition of this novel pattern recognition system is demonstrated live. Comparison to other neural network pattern recognition systems is discussed.

  19. Wavelet based feature extraction and visualization in hyperspectral tissue characterization

    PubMed Central

    Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes

    2014-01-01

    Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally effective, and can be implemented in real-time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed, and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a significant correlation (p <0.02) between the chosen tissue parameters and the selected wavelet components. The tissue parameters could be mapped using a subset of the calculated components due to redundancy in spectral information. Vessel structures are well visualized. Wavelet analysis appears as a promising tool for extraction of spectral features in skin. Future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition. PMID:25574437

  20. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation security and to anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aimed at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. The 1D and 2D approaches cannot address the wide range of both types and complexities of bedforms. In contrast, this work follows a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples, which are found to heavily overprint the observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  1. Extraction of fault component from abnormal sound in diesel engines using acoustic signals

    NASA Astrophysics Data System (ADS)

    Dayong, Ning; Changle, Sun; Yongjun, Gong; Zengmeng, Zhang; Jiaoyi, Hou

    2016-06-01

    In this paper a method for extracting fault components from abnormal acoustic signals and automatically diagnosing diesel engine faults is presented. The method, named the dislocation superimposed method (DSM), is based on the improved random decrement technique (IRDT), a differential function (DF) and correlation analysis (CA). The aim of DSM is to linearly superpose multiple segments of the abnormal acoustic signal, exploiting the waveform similarity of the faulty components. The method uses the sample points at which the abnormal sound begins as the starting position of each segment. In this study, the abnormal sound belonged to a shock-type fault; thus, a starting position search method based on gradient variance was adopted. A coefficient of similarity between two signals of the same size is presented, and by comparing similarity values the extracted fault component can be judged automatically. The results show that this method is capable of accurately extracting the fault component from abnormal acoustic signals induced by shock-type faults, and the extracted component can be used to identify the fault type.

  2. Deep PDF parsing to extract features for detecting embedded malware.

    SciTech Connect

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF Files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  3. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083

  4. Comparison of spatial frequency domain features for the detection of side attack explosive ballistics in synthetic aperture acoustics

    NASA Astrophysics Data System (ADS)

    Dowdy, Josh; Anderson, Derek T.; Luke, Robert H.; Ball, John E.; Keller, James M.; Havens, Timothy C.

    2016-05-01

    Explosive hazards in current and former conflict zones are a threat to both military and civilian personnel. As a result, much effort has been dedicated to identifying automated algorithms and systems to detect these threats. However, robust detection is complicated due to factors like the varied composition and anatomy of such hazards. In order to solve this challenge, a number of platforms (vehicle-based, handheld, etc.) and sensors (infrared, ground penetrating radar, acoustics, etc.) are being explored. In this article, we investigate the detection of side attack explosive ballistics via a vehicle-mounted acoustic sensor. In particular, we explore three acoustic features, one in the time domain and two on synthetic aperture acoustic (SAA) beamformed imagery. The idea is to exploit the varying acoustic frequency profile of a target due to its unique geometry and material composition with respect to different viewing angles. The first two features build their angle specific frequency information using a highly constrained subset of the signal data and the last feature builds its frequency profile using all available signal data for a given region of interest (centered on the candidate target location). Performance is assessed in the context of receiver operating characteristic (ROC) curves on cross-validation experiments for data collected at a U.S. Army test site on different days with multiple target types and clutter. Our preliminary results are encouraging and indicate that the top performing feature is the unrolled two dimensional discrete Fourier transform (DFT) of SAA beamformed imagery.
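
    The top-performing feature, an unrolled 2D DFT of a beamformed image chip, can be sketched in a few lines; keeping only a small centered block of magnitude coefficients before unrolling is an assumption made here to keep the vector short, not necessarily the paper's coefficient selection.

        import numpy as np

        def dft_feature(roi, keep=8):
            """Unrolled 2D DFT feature for a candidate region of an SAA image.

            roi: 2D array of beamformed image magnitudes centred on the
            candidate location; only a keep x keep block of shifted magnitude
            coefficients is retained before unrolling.
            """
            spectrum = np.fft.fftshift(np.abs(np.fft.fft2(roi)))
            c0, c1 = spectrum.shape[0] // 2, spectrum.shape[1] // 2
            block = spectrum[c0 - keep // 2: c0 + keep // 2,
                             c1 - keep // 2: c1 + keep // 2]
            return block.ravel()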

  5. Auditory emotion recognition impairments in Schizophrenia: Relationship to acoustic features and cognition

    PubMed Central

    Gold, Rinat; Butler, Pamela; Revheim, Nadine; Leitman, David; Hansen, John A.; Gur, Ruben; Kantrowitz, Joshua T.; Laukka, Petri; Juslin, Patrik N.; Silipo, Gail S.; Javitt, Daniel C.

    2013-01-01

    Objective Schizophrenia is associated with deficits in the ability to perceive emotion based upon tone of voice. The basis for this deficit, however, remains unclear and assessment batteries remain limited. We evaluated performance in schizophrenia on a novel voice emotion recognition battery with well characterized physical features, relative to impairments in more general emotional and cognitive function. Methods We studied a primary sample of 92 patients relative to 73 controls. Stimuli were characterized according to both intended emotion and physical features (e.g., pitch, intensity) that contributed to the emotional percept. Parallel measures of visual emotion recognition, pitch perception, general cognition, and overall outcome were obtained. More limited measures were obtained in an independent replication sample of 36 patients, 31 age-matched controls, and 188 general comparison subjects. Results Patients showed significant, large effect size deficits in voice emotion recognition (F=25.4, p<.00001, d=1.1), and were preferentially impaired in recognition of emotion based upon pitch-, but not intensity-features (group X feature interaction: F=7.79, p=.006). Emotion recognition deficits were significantly correlated with pitch perception impairments both across (r=.56, p<.0001) and within (r=.47, p<.0001) group. Path analysis showed both sensory-specific and general cognitive contributions to auditory emotion recognition deficits in schizophrenia. Similar patterns of results were observed in the replication sample. Conclusions The present study demonstrates impairments in auditory emotion recognition in schizophrenia relative to acoustic features of underlying stimuli. Furthermore, it provides tools and highlights the need for greater attention to physical features of stimuli used for study of social cognition in neuropsychiatric disorders. PMID:22362394

  6. Transmission line icing prediction based on DWT feature extraction

    NASA Astrophysics Data System (ADS)

    Ma, T. N.; Niu, D. X.; Huang, Y. L.

    2016-08-01

    Transmission line icing prediction is a prerequisite for the safe operation of the network and an important basis for preventing freezing disasters. In order to improve the prediction accuracy of icing, a transmission line icing prediction model based on discrete wavelet transform (DWT) feature extraction was built. In this method, a group of high- and low-frequency signals was obtained by DWT decomposition and then fitted and predicted using a partial least squares regression model (PLS) and a wavelet least squares support vector machine model (w-LSSVM). The final icing prediction was obtained by adding the predicted values of the high- and low-frequency signals. The results showed that the method is effective and feasible for the prediction of transmission line icing.
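
    A minimal sketch of this decompose-predict-recombine idea is given below, assuming PyWavelets and scikit-learn are available; a plain linear autoregression stands in for the PLS and w-LSSVM models used in the paper, and the wavelet, decomposition level, and lag order are placeholders.

```python
# Sketch: split a series into low- and high-frequency parts with a DWT,
# forecast each part separately, and sum the forecasts. Illustrative only.
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

def wavelet_split(x, wavelet="db4", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # keep only the approximation coefficients to reconstruct the trend
    low = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                       wavelet)[:len(x)]
    return low, x - low  # low-frequency trend, high-frequency residual

def forecast_next(component, lags=5):
    # simple autoregression standing in for PLS / w-LSSVM
    X = np.array([component[i:i + lags] for i in range(len(component) - lags)])
    y = component[lags:]
    model = LinearRegression().fit(X, y)
    return model.predict(component[-lags:].reshape(1, -1))[0]

def predict_icing(series):
    low, high = wavelet_split(np.asarray(series, dtype=float))
    return forecast_next(low) + forecast_next(high)
```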

  7. Exploring bubble oscillation and mass transfer enhancement in acoustic-assisted liquid-liquid extraction with a microfluidic device

    PubMed Central

    Xie, Yuliang; Chindam, Chandraprakash; Nama, Nitesh; Yang, Shikuan; Lu, Mengqian; Zhao, Yanhui; Mai, John D.; Costanzo, Francesco; Huang, Tony Jun

    2015-01-01

    We investigated bubble oscillation and its induced enhancement of mass transfer in a liquid-liquid extraction process with an acoustically-driven, bubble-based microfluidic device. The oscillation of individually trapped bubbles of known sizes in microchannels was studied both at a fixed frequency and over a range of frequencies. Resonant frequencies were analytically identified and were found to be in agreement with the experimental observations. The acoustic streaming induced by the bubble oscillation was identified as the cause of this enhanced extraction. Experiments extracting Rhodamine B from an aqueous phase (DI water) to an organic phase (1-octanol) were performed to determine the relationship between extraction efficiency and applied acoustic power. The enhanced efficiency in mass transport via these acoustic-energy-assisted processes was confirmed by comparisons against a pure diffusion-based process. PMID:26223474

  8. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    NASA Technical Reports Server (NTRS)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution remotely sensed imagery (10 meters) is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment (inertia), correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee, using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories (forest, new residential, old residential, and industrial) for each variation in texture parameters.
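
    The SGTD matrix described here is what is now usually called a gray-level co-occurrence matrix, so a hedged sketch can lean on scikit-image (an assumed dependency; the original work predates it). Window size, distance, angle, and gray-level count correspond to the parameters varied in the study; the property list below is only a subset of the Haralick-style measures.

```python
# Sketch: co-occurrence (SGTD-style) texture features for one moving window.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19 naming

def window_texture(window, levels=16, distance=1, angle=0.0):
    # quantize the window to a small number of gray tones
    w = window.astype(float)
    q = np.floor((w - w.min()) / (np.ptp(w) + 1e-12) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, [distance], [angle],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "homogeneity", "energy", "ASM"]
    return {p: float(graycoprops(glcm, p)[0, 0]) for p in props}
```

    Sliding such a window over the image and feeding the resulting vectors to a separability (divergence) analysis would mirror the study's procedure.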

  9. STATISTICAL BASED NON-LINEAR MODEL UPDATING USING FEATURE EXTRACTION

    SciTech Connect

    Schultz, J.F.; Hemez, F.M.

    2000-10-01

    This research presents a new method to improve analytical model fidelity for non-linear systems. The approach investigates several mechanisms to assist the analyst in updating an analytical model based on experimental data and statistical analysis of parameter effects. The first is a new approach to data reduction called feature extraction. This is an expansion of the update metrics to include specific phenomena or character of the response that is critical to model application. This is an extension of the classical linear updating paradigm of utilizing the eigen-parameters or FRFs to include such devices as peak acceleration, time of arrival or standard deviation of model error. The next expansion of the updating process is the inclusion of statistically based parameter analysis to quantify the effects of uncertain or significant-effect parameters in the construction of a meta-model. This provides indicators of the statistical variation associated with parameters as well as confidence intervals on the coefficients of the resulting meta-model. Also included in this method is the investigation of linear parameter effect screening using a partial factorial variable array for simulation. This is intended to aid the analyst in eliminating from the investigation the parameters that do not have a significant variation effect on the feature metric. Finally, an investigation of the ability of the model to replicate the measured response variation is examined.

  10. Texture features analysis for coastline extraction in remotely sensed images

    NASA Astrophysics Data System (ADS)

    De Laurentiis, Raimondo; Dellepiane, Silvana G.; Bo, Giancarlo

    2002-01-01

    The accurate knowledge of the shoreline position is of fundamental importance in several applications such as cartography and ship positioning. Moreover, the coastline could be seen as a relevant parameter for the monitoring of the coastal zone morphology, as it allows the retrieval of a much more precise digital elevation model of the entire coastal area. The study that has been carried out focuses on the development of a reliable technique for the detection of coastlines in remotely sensed images. An innovative approach which is based on the concepts of fuzzy connectivity and texture feature extraction has been developed for the location of the shoreline. The system has been tested on several kinds of images, such as SPOT and LANDSAT, and the results obtained are good. Moreover, the algorithm has been tested on a sample of a SAR interferogram. The breakthrough consists in the fact that coastline detection is seen as an important feature in the framework of digital elevation model (DEM) retrieval. In particular, the coast could be seen as a boundary line beyond which all data (those representing the sea) are not significant. The processing for the digital elevation model could then be refined by considering only the in-land data.

  11. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements. PMID:25529700

  12. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements.

  13. Extraction of Molecular Features through Exome to Transcriptome Alignment

    PubMed Central

    Mudvari, Prakriti; Kowsari, Kamran; Cole, Charles; Mazumder, Raja; Horvath, Anelia

    2014-01-01

    Integrative Next Generation Sequencing (NGS) DNA and RNA analyses have very recently become feasible, and the studies published to date have discovered critical disease-implicated pathways, and diagnostic and therapeutic targets. A growing number of exomes, genomes and transcriptomes from the same individual are quickly accumulating, providing unique venues for mechanistic and regulatory feature analysis, and, at the same time, requiring new exploration strategies. In this study, we have integrated variation and expression information of four NGS datasets from the same individual: normal and tumor breast exomes and transcriptomes. Focusing on SNP-centered variant allelic prevalence, we illustrate analytical algorithms that can be applied to extract or validate potential regulatory elements, such as expression or growth advantage, imprinting, loss of heterozygosity (LOH), somatic changes, and RNA editing. In addition, we point to some critical elements that might bias the output and recommend alternative measures to maximize the confidence of findings. The need for such strategies is especially recognized within the growing appreciation of the concept of systems biology: integrative exploration of genome and transcriptome features reveals mechanistic and regulatory insights that reach far beyond linear addition of the individual datasets. PMID:24791251

  14. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept of a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear those less blurred prints. The de-smear algorithm is new and is based on the digital filter theory of the lossless QMF (quadrature mirror filter) subband coding. Then, we present (2) a new fingerprint minutia feature extraction methodology which uses a 2D STAR mother wavelet that can efficiently locate the fork feature anywhere on the fingerprints in parallel and is independent of its scale, shift, and rotation. Such a combined system can achieve high data compression to send through a binary facsimile machine that when combined with a tabletop computer can achieve the automatic finger identification systems (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given about how to reduce the crime rate by an upgrade of today's police office technology in the light of the military expertise in ATR.

  15. Feature extraction for change analysis in SAR time series

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2015-10-01

    In remote sensing, the change detection topic represents a broad field of research. If time series data is available, change detection can be used for monitoring applications. These applications require regular image acquisitions at identical times of day along a defined period. Focusing on remote sensing sensors, radar is especially well-suited for applications requiring regularity, since it is independent from most weather and atmospheric influences. Furthermore, regarding the image acquisitions, the time of day plays no role due to the independence from daylight. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high resolution radar images suitable for the analysis of dense built-up areas. In a former study, we presented the change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of detected changes in the time series. This categorization is motivated by the fact that it is a poor statement only to describe where and when a specific area has changed. At least as important is the statement about what has caused the change. The focus is set on the analysis of so-called high activity areas (HAA) representing areas changing at least four times along the investigated period. As a first step for categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating on this object-based blob level, several features are extracted which comprise shape-based, radiometric, statistical and morphological values and one context feature based on a segmentation of the HAAs. This segmentation builds on the morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water and unclassified. A specific HA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments. In combination, also surrounding GIS information

  16. Acoustics

    NASA Technical Reports Server (NTRS)

    Goodman, Jerry R.; Grosveld, Ferdinand

    2007-01-01

    The acoustics environment in space operations is important to maintain at manageable levels so that the crewperson can remain safe, functional, effective, and reasonably comfortable. High acoustic levels can produce temporary or permanent hearing loss, or cause other physiological symptoms such as auditory pain, headaches, discomfort, strain in the vocal cords, or fatigue. Noise is defined as undesirable sound. Excessive noise may result in psychological effects such as irritability, inability to concentrate, decrease in productivity, annoyance, errors in judgment, and distraction. A noisy environment can also result in the inability to sleep, or sleep well. Elevated noise levels can affect the ability to communicate, understand what is being said, hear what is going on in the environment, degrade crew performance and operations, and create habitability concerns. Superfluous noise emissions can also create the inability to hear alarms or other important auditory cues such as an equipment malfunctioning. Recent space flight experience, evaluations of the requirements in crew habitable areas, and lessons learned (Goodman 2003; Allen and Goodman 2003; Pilkinton 2003; Grosveld et al. 2003) show the importance of maintaining an acceptable acoustics environment. This is best accomplished by having a high-quality set of limits/requirements early in the program, the "designing in" of acoustics in the development of hardware and systems, and by monitoring, testing and verifying the levels to ensure that they are acceptable.

  17. Differences in acoustic features of vocalizations produced by killer whales cross-socialized with bottlenose dolphins.

    PubMed

    Musser, Whitney B; Bowles, Ann E; Grebner, Dawn M; Crance, Jessica L

    2014-10-01

    Limited previous evidence suggests that killer whales (Orcinus orca) are capable of vocal production learning. However, vocal contextual learning has not been studied, nor the factors promoting learning. Vocalizations were collected from three killer whales with a history of exposure to bottlenose dolphins (Tursiops truncatus) and compared with data from seven killer whales held with conspecifics and nine bottlenose dolphins. The three whales' repertoires were distinguishable by a higher proportion of click trains and whistles. Time-domain features of click trains were intermediate between those of whales held with conspecifics and dolphins. These differences provided evidence for contextual learning. One killer whale spontaneously learned to produce artificial chirps taught to dolphins; acoustic features fell within the range of inter-individual differences among the dolphins. This whale also produced whistles similar to a stereotyped whistle produced by one dolphin. Thus, results provide further support for vocal production learning and show that killer whales are capable of contextual learning. That killer whales produce similar repertoires when associated with another species suggests substantial vocal plasticity and motivation for vocal conformity with social associates. PMID:25324098

  18. Differences in acoustic features of vocalizations produced by killer whales cross-socialized with bottlenose dolphins.

    PubMed

    Musser, Whitney B; Bowles, Ann E; Grebner, Dawn M; Crance, Jessica L

    2014-10-01

    Limited previous evidence suggests that killer whales (Orcinus orca) are capable of vocal production learning. However, vocal contextual learning has not been studied, nor the factors promoting learning. Vocalizations were collected from three killer whales with a history of exposure to bottlenose dolphins (Tursiops truncatus) and compared with data from seven killer whales held with conspecifics and nine bottlenose dolphins. The three whales' repertoires were distinguishable by a higher proportion of click trains and whistles. Time-domain features of click trains were intermediate between those of whales held with conspecifics and dolphins. These differences provided evidence for contextual learning. One killer whale spontaneously learned to produce artificial chirps taught to dolphins; acoustic features fell within the range of inter-individual differences among the dolphins. This whale also produced whistles similar to a stereotyped whistle produced by one dolphin. Thus, results provide further support for vocal production learning and show that killer whales are capable of contextual learning. That killer whales produce similar repertoires when associated with another species suggests substantial vocal plasticity and motivation for vocal conformity with social associates.

  19. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.

  20. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding and periodical variation of load distribution and impact force on the measuring direction of sensor, the impulse response signals caused by defective rolling bearing are non-stationary, and the amplitudes of the impulse may even drop to zero when the fault is out of load zone. The non-stationary characteristic and impulse missing phenomenon reduce the effectiveness of the commonly used demodulation method on rolling element bearing fault diagnosis. Based on sparse representation theories, a new approach for fault diagnosis of rolling element bearing is proposed. The over-complete dictionary is constructed by the unit impulse response function of damped second-order system, whose natural frequencies and relative damping ratios are directly identified from the fault signal by correlation filtering method. It leads to a high similarity between atoms and defect induced impulse, and also a sharply reduction of the redundancy of the dictionary. To improve the matching accuracy and calculation speed of sparse coefficient solving, the fault signal is divided into segments and the matching pursuit algorithm is carried out by segments. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearing in large rolling element sliding and low signal to noise ratio circumstances.
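
    The following sketch illustrates the core idea under simplifying assumptions: the dictionary atoms are unit impulse responses of a damped second-order system (with a single placeholder natural frequency and damping ratio rather than values identified by correlation filtering), and a basic matching pursuit loop estimates the sparse coefficients for one signal segment.

```python
# Simplified sketch of sparse-representation fault feature extraction.
import numpy as np

def impulse_atom(n, fs, fn, zeta):
    # unit impulse response of a damped second-order system (unit norm)
    t = np.arange(n) / fs
    wd = 2 * np.pi * fn * np.sqrt(1 - zeta ** 2)
    atom = np.exp(-zeta * 2 * np.pi * fn * t) * np.sin(wd * t)
    return atom / (np.linalg.norm(atom) + 1e-12)

def build_dictionary(n, fs, fn, zeta, shifts):
    # time-shifted copies of the atom form an over-complete dictionary (n x K)
    base = impulse_atom(n, fs, fn, zeta)
    return np.stack([np.roll(base, s) for s in shifts], axis=1)

def matching_pursuit(x, D, n_iter=10):
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual              # correlation with every atom
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual
```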

  1. Information Theoretic Extraction of EEG Features for Monitoring Subject Attention

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2000-01-01

    The goal of this project was to test the applicability of information theoretic learning (feasibility study) to develop new brain computer interfaces (BCI). The difficulty of BCI comes from several aspects: (1) the effective data collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; (3) the pattern recognition methodology to detect reliably the signals related to cognitive states. We only addressed the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance with good predictive results. We also compared several features to detect the presence of event related desynchronization (ERD) and synchronization (ERS), and concluded that at least for now the bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found out that the performance of temporal classifiers is superior to that of static classifiers, but not by much. We conclude by stating that the future of BCI should be found in alternate approaches to sense, collect and process the signals created by populations of neurons. Towards this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled view point.

  2. Extraction of text-related features for condensing image documents

    NASA Astrophysics Data System (ADS)

    Bloomberg, Dan S.; Chen, Francine R.

    1996-03-01

    A system has been built that selects excerpts from a scanned document for presentation as a summary, without using character recognition. The method relies on the idea that the most significant sentences in a document contain words that are both specific to the document and have a relatively high frequency of occurrence within it. Accordingly, and entirely within the image domain, each page image is deskewed and the text regions are found and extracted as a set of textblocks. Blocks with font size near the median for the document are selected and then placed in reading order. The textlines and words are segmented, and the words are placed into equivalence classes of similar shape. The sentences are identified by finding baselines for each line of text and analyzing the size and location of the connected components relative to the baseline. Scores can then be given to each word, depending on its shape and frequency of occurrence, and to each sentence, depending on the scores for the words in the sentence. Other salient features, such as textblocks that have a large font or are likely to contain an abstract, can also be used to select image parts that are likely to be thematically relevant. The method has been applied to a variety of documents, including articles scanned from magazines and technical journals.

  3. Acoustic features contributing to the individuality of wild agile gibbon (Hylobates agilis agilis) songs.

    PubMed

    Oyakawa, Chisako; Koda, Hiroki; Sugiura, Hideki

    2007-07-01

    We examined acoustic individuality in wild agile gibbon Hylobates agilis agilis and determined the acoustic variables that contribute to individual discrimination using multivariate analyses. We recorded 125 female-specific songs (great calls) from six groups in west Sumatra and measured 58 acoustic variables for each great call. We performed principal component analysis to summarize the 58 variables into six acoustic principal components (PCs). Generally, each PC corresponded to a part of the great call. Significant individual differences were found across six individual gibbons in each of the six PCs. Moreover, strong acoustic individuality was found in the introductory and climax parts of the great call. In contrast, the terminal part contributed little to individual identification. Discriminant analysis showed that these PCs contributed to individual discrimination with high repeatability. Although we cannot conclude that agile gibbon use these acoustic components for individual discrimination, they are potential candidates for individual recognition.
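
    The analysis pipeline described above (many acoustic variables summarized into a few principal components, followed by discriminant analysis of caller identity) can be sketched as follows; X and y are placeholders for the measured great-call variables and individual labels, and scikit-learn is an assumed dependency.

```python
# Sketch: PCA summarization of acoustic variables plus discriminant analysis.
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def individual_discrimination(X, y, n_components=6):
    # X: (n_calls, n_acoustic_variables), y: caller identity labels
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          LinearDiscriminantAnalysis())
    # cross-validated accuracy as a rough analogue of discrimination repeatability
    return cross_val_score(model, X, y, cv=5)
```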

  4. [Classification technique for hyperspectral image based on subspace of bands feature extraction and LS-SVM].

    PubMed

    Gao, Heng-zhen; Wan, Jian-wei; Zhu, Zhen-zhen; Wang, Li-bao; Nian, Yong-jian

    2011-05-01

    The present paper proposes a novel hyperspectral image classification algorithm based on LS-SVM (least squares support vector machine). The LS-SVM uses the features extracted from subspaces of bands (SOBs). The maximum noise fraction (MNF) method is adopted as the feature extraction method. The spectral correlations of the hyperspectral image are used in order to divide the feature space into several SOBs. Then the MNF is used to extract characteristic features of the SOBs. The extracted features are combined into the feature vector for classification. In this way, strong band correlations are avoided and the spectral redundancies are reduced. The LS-SVM classifier is adopted, which replaces the inequality constraints in the SVM with equality constraints, so the computational cost is reduced and the learning performance is improved. The proposed method optimizes spectral information by feature extraction and reduces the spectral noise. The classifier performance is improved. Experimental results show the superiority of the proposed algorithm.

  5. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.

  6. Extraction of acoustic normal mode depth functions using vertical line array data

    NASA Astrophysics Data System (ADS)

    Neilsen, Tracianne B.; Westwood, Evan K.

    2002-02-01

    A method for extracting the normal modes of acoustic propagation in the shallow ocean from sound recorded on a vertical line array (VLA) of hydrophones as a source travels nearby is presented. The mode extraction is accomplished by performing a singular value decomposition (SVD) of individual frequency components of the signal's temporally averaged, spatial cross-spectral density matrix. The SVD produces a matrix containing a mutually orthogonal set of basis functions, which are proportional to the depth-dependent normal modes, and a diagonal matrix containing the singular values, which are proportional to the modal source excitations and mode eigenvalues. The conditions under which the method is expected to work are found to be (1) sufficient depth sampling of the propagating modes by the VLA receivers; (2) sufficient source-VLA range sampling, and (3) sufficient range interval traversed by the source. The mode extraction method is applied to data from the Area Characterization Test II, conducted in September 1993 in the Hudson Canyon Area off the New Jersey coast. Modes are successfully extracted from cw tones recorded while (1) the source traveled along a range-independent track with constant bathymetry and (2) the source traveled up-slope with gradual changes in bathymetry. In addition, modes are successfully extracted at multiple frequencies from ambient noise.
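
    A minimal sketch of the extraction step, assuming the recordings have already been reduced to complex pressure vectors at a single frequency bin: average the outer products over snapshots to form the cross-spectral density matrix, then take its SVD so that the leading singular vectors approximate the depth-dependent mode functions.

```python
# Sketch: mode depth functions from the SVD of a cross-spectral density matrix.
import numpy as np

def extract_modes(snapshots, n_modes=4):
    # snapshots: (n_snapshots, n_hydrophones) complex array at one frequency bin
    n_phones = snapshots.shape[1]
    csdm = np.zeros((n_phones, n_phones), dtype=complex)
    for p in snapshots:
        csdm += np.outer(p, p.conj())      # accumulate rank-one outer products
    csdm /= snapshots.shape[0]             # temporal/range averaging
    u, s, _ = np.linalg.svd(csdm)
    # columns of u approximate mode depth functions; s holds their weights
    return u[:, :n_modes], s[:n_modes]
```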

  7. Algorithm for heart rate extraction in a novel wearable acoustic sensor.

    PubMed

    Chen, Guangwei; Imtiaz, Syed Anas; Aguilar-Pelaez, Eduardo; Rodriguez-Villegas, Esther

    2015-02-01

    Phonocardiography is a widely used method of listening to the heart sounds and indicating the presence of cardiac abnormalities. Each heart cycle consists of two major sounds - S1 and S2 - that can be used to determine the heart rate. The conventional method of acoustic signal acquisition involves placing the sound sensor at the chest where this sound is most audible. Presented is a novel algorithm for the detection of S1 and S2 heart sounds and the use of them to extract the heart rate from signals acquired by a small sensor placed at the neck. This algorithm achieves an accuracy of 90.73 and 90.69%, with respect to heart rate value provided by two commercial devices, evaluated on more than 38 h of data acquired from ten different subjects during sleep in a pilot clinical study. This is the largest dataset for acoustic heart sound classification and heart rate extraction in the literature to date. The algorithm in this study used signals from a sensor designed to monitor breathing. This shows that the same sensor and signal can be used to monitor both breathing and heart rate, making it highly useful for long-term wearable vital signs monitoring.
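
    The sketch below is an illustrative stand-in, not the paper's algorithm: it band-passes the acoustic signal, takes a Hilbert envelope, detects heart-sound peaks, and converts the inter-sound spacing to beats per minute. The cut-off frequencies, minimum peak spacing, and the assumption of strictly alternating S1/S2 sounds are all simplifications.

```python
# Illustrative heart-rate estimate from an acoustic heart-sound signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def heart_rate_bpm(x, fs):
    b, a = butter(4, [25, 150], btype="bandpass", fs=fs)  # assumed band
    env = np.abs(hilbert(filtfilt(b, a, x)))              # amplitude envelope
    # require at least ~250 ms between detected heart sounds
    peaks, _ = find_peaks(env, distance=int(0.25 * fs), height=np.mean(env))
    if len(peaks) < 3:
        return None
    # assume alternating S1/S2: one cardiac cycle spans two detected sounds
    cycle_s = 2 * np.median(np.diff(peaks)) / fs
    return 60.0 / cycle_s
```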

  8. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    PubMed Central

    Wang, Kun-Ching

    2015-01-01

    The classification of emotional speech is mostly considered in speech-related research on human-computer interaction (HCI). In this paper, the purpose is to present a novel feature extraction method based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the multi-resolution texture properties of the emotional speech spectrogram should be a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give a clearer discrimination between emotions than uniform-resolution texture analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm must be applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification for real-life emotional recognition in speech. PMID:25594590
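
    For reference, the MFCC baseline mentioned above can be sketched in a few lines; librosa is an assumed dependency, and pooling frame-level coefficients into utterance-level means and standard deviations is one common (but not necessarily the paper's) way to obtain a fixed-length feature vector.

```python
# Sketch of the MFCC baseline: frame-level MFCCs pooled into utterance statistics.
import numpy as np
import librosa

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)                    # keep native rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```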

  9. Feature edge extraction from 3D triangular meshes using a thinning algorithm

    NASA Astrophysics Data System (ADS)

    Nomura, Masaru; Hamada, Nozomu

    2001-11-01

    Highly detailed geometric models, which are represented as dense triangular meshes are becoming popular in computer graphics. Since such 3D meshes often have huge information, we require some methods to treat them efficiently in the 3D mesh processing such as, surface simplification, subdivision surface, curved surface approximation and morphing. In these applications, we often extract features of 3D meshes such as feature vertices and feature edges in preprocessing step. An automatic extraction method of feature edges is treated in this study. In order to realize the feature edge extraction method, we first introduce the concavity and convexity evaluation value. Then the histogram of the concavity and convexity evaluation value is used to separate the feature edge region. We apply a thinning algorithm, which is used in 2D binary image processing. It is shown that the proposed method can extract appropriate feature edges from 3D meshes.

  10. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  11. Comparison of half and full-leaf shape feature extraction for leaf classification

    NASA Astrophysics Data System (ADS)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main source of information for leaf features, and most of the current literature on leaf identification utilizes the whole leaf for feature extraction in the leaf identification process. In this paper, a study of half-leaf feature extraction for leaf identification is carried out and the results are compared with the results obtained from leaf identification based on full-leaf feature extraction. Identification and classification are based on shape features that are represented as cosine and sine angles. Six single classifiers obtained from WEKA and seven ensemble methods are used to compare their performance accuracies on this data. The classifiers were trained using 65 leaves in order to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The results show that half-leaf feature extraction can be used for leaf identification without decreasing the predictive accuracy.

  12. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied in large scale or big data. In this paper, MapReduce in Hadoop is investigated for large scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
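
    In a Hadoop Streaming setting, the per-split work described above reduces to a mapper that reads one image reference per line and emits its features; the sketch below is a hypothetical mapper, with a toy intensity histogram standing in for the real WAMI feature extractor and Pillow assumed for image loading.

```python
# Hypothetical Hadoop Streaming mapper: one WAMI image path per input line,
# emits "image_path <TAB> feature_vector_json" for downstream aggregation in HDFS.
import sys
import json
import numpy as np
from PIL import Image

def extract_features(path):
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    hist, _ = np.histogram(img, bins=32, range=(0, 255), density=True)
    return hist.tolist()  # stand-in for the real WAMI feature extractor

if __name__ == "__main__":
    for line in sys.stdin:
        path = line.strip()
        if path:
            print(f"{path}\t{json.dumps(extract_features(path))}")
```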

  13. Comparison of Wavelet-Based and HHT-Based Feature Extraction Methods for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, X.-M.; Hsu, P.-H.

    2012-07-01

    Hyperspectral images, which contain rich and fine spectral information, can be used to identify surface objects and improve land use/cover classification accuracy. Due to the high dimensionality of hyperspectral data, traditional statistics-based classifiers cannot be directly used on such images with limited training samples. This problem is referred to as the "curse of dimensionality". The commonly used method to solve this problem is dimensionality reduction, and feature extraction is the approach most frequently used to reduce the dimensionality of hyperspectral images. There are two types of feature extraction methods. The first type is based on the statistical properties of the data. The other type is based on time-frequency analysis. In this study, time-frequency analysis methods are used to extract the features for hyperspectral image classification. Firstly, it has been proven that wavelet-based feature extraction provides an effective tool for spectral feature extraction. On the other hand, the Hilbert-Huang transform (HHT), a relatively new time-frequency analysis tool, has been widely used in nonlinear and nonstationary data analysis. In this study, the wavelet transform and the HHT are implemented on the hyperspectral data for physical spectral analysis. Therefore, we can obtain a small number of salient features, reduce the dimensionality of hyperspectral images and keep the accuracy of classification results. An AVIRIS data set is used to test the performance of the proposed HHT-based feature extraction methods; then, the results are compared with wavelet-based feature extraction. According to the experimental results, HHT-based feature extraction methods are effective tools and the results are similar to those of wavelet-based feature extraction methods.

  14. COSMIC TRANSPARENCY: A TEST WITH THE BARYON ACOUSTIC FEATURE AND TYPE Ia SUPERNOVAE

    SciTech Connect

    More, Surhud; Hogg, David W.; Bovy, Jo

    2009-05-10

    Conservation of the phase-space density of photons plus Lorentz invariance requires that the cosmological luminosity distance be larger than the angular diameter distance by a factor of (1 + z)^2, where z is the redshift. Because this is a fundamental symmetry, this prediction, sometimes known as the 'Etherington relation' or the 'Tolman test', is independent of the world model, or even the assumptions of homogeneity and isotropy. It depends, however, on Lorentz invariance and transparency. Transparency can be affected by intergalactic dust or interactions between photons and the dark sector. Baryon acoustic feature (BAF) and type Ia supernovae (SNeIa) measures of the expansion history are differently sensitive to the angular diameter and luminosity distances and can therefore be used in conjunction to limit cosmic transparency. At the present day, the comparison only limits the change Δτ in the optical depth from redshift 0.20 to 0.35 at visible wavelengths to Δτ < 0.13 at 95% confidence. In a model with a constant comoving number density n of scatterers of constant proper cross section σ, this limit implies nσ < 2 × 10^-4 h Mpc^-1. These limits depend weakly on the cosmological world model. Assuming a concordance world model, the best-fit value of Δτ to current data is negative at the 2σ level. This could signal interesting new physics or could be the result of unidentified systematics in the BAF/SNeIa measurements. Within the next few years, the limits on transparency could extend to redshifts z ≈ 2.5 and improve to nσ < 1.1 × 10^-5 h Mpc^-1. Cosmic variance will eventually limit the sensitivity of any test using the BAF at the nσ ≈ 4 × 10^-7 h Mpc^-1 level. Comparison with other measures of the transparency is provided; no other measure in the visible is as free of astrophysical assumptions.

  15. PROCESSING OF SCANNED IMAGERY FOR CARTOGRAPHIC FEATURE EXTRACTION.

    USGS Publications Warehouse

    Benjamin, Susan P.; Gaydos, Leonard

    1984-01-01

    Digital cartographic data are usually captured by manually digitizing a map or an interpreted photograph or by automatically scanning a map. Both techniques first require manual photointerpretation to describe features of interest. A new approach, bypassing the laborious photointerpretation phase, is being explored using direct digital image analysis. Aerial photographs are scanned and color separated to create raster data. These are then enhanced and classified using several techniques to identify roads and buildings. Finally, the raster representation of these features is refined and vectorized. 11 refs.

  16. Extraction of terrain features from digital elevation models

    USGS Publications Warehouse

    Price, Curtis V.; Wolock, David M.; Ayers, Mark A.

    1989-01-01

    Digital elevation models (DEMs) are being used to determine variable inputs for hydrologic models in the Delaware River basin. Recently developed software for analysis of DEMs has been applied to watershed and streamline delineation. The results compare favorably with similar delineations taken from topographic maps. Additionally, output from this software has been used to extract other hydrologic information from the DEM, including flow direction, channel location, and an index describing the slope and shape of a watershed.

  17. Investigations of High Pressure Acoustic Waves in Resonators with Seal-Like Features

    NASA Technical Reports Server (NTRS)

    Daniels, Christopher C.; Steinetz, Bruce M.; Finkbeiner, Joshua R.; Li, Xiao-Fan; Raman, Ganesh

    2004-01-01

    1) Standing waves with maximum pressures of 188 kPa have been produced in resonators containing ambient pressure air; 2) Addition of structures inside the resonator shifts the fundamental frequency and decreases the amplitude of the generated pressure waves; 3) Addition of holes to the resonator does reduce the magnitude of the acoustic waves produced, but their addition does not prohibit the generation of large magnitude non-linear standing waves; 4) The feasibility of reducing leakage using non-linear acoustics has been confirmed.

  18. Forest classification using extracted PolSAR features from Compact Polarimetry data

    NASA Astrophysics Data System (ADS)

    Aghabalaei, Amir; Maghsoudi, Yasser; Ebadi, Hamid

    2016-05-01

    This study investigates the ability of Polarimetric Synthetic Aperture Radar (PolSAR) features extracted from Compact Polarimetry (CP) data for forest classification. CP is a new mode that has recently been proposed for dual polarimetry (DP) imaging systems. It has several important advantages in comparison with the Full Polarimetry (FP) mode, such as reduced complexity, cost, mass, and data rate of a SAR system. Two strategies are employed for PolSAR feature extraction. In the first strategy, the features are extracted using 2 × 2 covariance matrices of CP modes simulated from the RADARSAT-2 C-band FP mode. In the second strategy, they are extracted using 3 × 3 covariance matrices reconstructed from the CP modes, called Pseudo Quad (PQ) modes. In each strategy, the extracted PolSAR features are combined, optimal features are selected by a Genetic Algorithm (GA), and then a Support Vector Machine (SVM) classifier is applied. Finally, the results are compared with the FP mode. Results of this study show that the PolSAR features extracted from the π/4 CP mode, as well as the combination of PolSAR features extracted from the CP or PQ modes, provide better overall accuracy in the classification of forest.

  19. Bispectrum-based feature extraction technique for devising a practical brain-computer interface

    NASA Astrophysics Data System (ADS)

    Shahid, Shahjahan; Prasad, Girijesh

    2011-04-01

    The extraction of distinctly separable features from electroencephalogram (EEG) is one of the main challenges in designing a brain-computer interface (BCI). Existing feature extraction techniques for a BCI are mostly developed based on traditional signal processing techniques assuming that the signal is Gaussian and has linear characteristics. But the motor imagery (MI)-related EEG signals are highly non-Gaussian, non-stationary and have nonlinear dynamic characteristics. This paper proposes an advanced, robust but simple feature extraction technique for a MI-related BCI. The technique uses one of the higher order statistics methods, the bispectrum, and extracts the features of nonlinear interactions over several frequency components in MI-related EEG signals. Along with a linear discriminant analysis classifier, the proposed technique has been used to design an MI-based BCI. Three performance measures, classification accuracy, mutual information and Cohen's kappa have been evaluated and compared with a BCI using a contemporary power spectral density-based feature extraction technique. It is observed that the proposed technique extracts nearly recording-session-independent distinct features resulting in significantly much higher and consistent MI task detection accuracy and Cohen's kappa. It is therefore concluded that the bispectrum-based feature extraction is a promising technique for detecting different brain states.
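
    The bispectrum at the heart of this technique can be estimated directly by averaging triple products of segment spectra, B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)]; the sketch below does this for a single-channel signal, with segment length and windowing as assumptions, and its magnitude over selected frequency pairs would then serve as MI-related features.

```python
# Direct (segment-averaged) bispectrum estimate for one EEG channel.
import numpy as np

def bispectrum(x, seg_len=256):
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
    nf = seg_len // 2
    B = np.zeros((nf, nf), dtype=complex)
    for s in segs:
        X = np.fft.fft((s - np.mean(s)) * np.hanning(seg_len))
        for f1 in range(nf):
            for f2 in range(f1, nf):          # upper triangle by symmetry
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / max(len(segs), 1)      # averaged bispectral magnitude
```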

  20. Feature extraction and segmentation in medical images by statistical optimization and point operation approaches

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; King, Philip; Corona, Enrique; Wilson, Mark P.; Aydin, Kaan; Mitra, Sunanda; Soliz, Peter; Nutter, Brian S.; Kwon, Young H.

    2003-05-01

    Feature extraction is a critical preprocessing step, which influences the outcome of the entire process of developing significant metrics for medical image evaluation. The purpose of this paper is firstly to compare the effect of an optimized statistical feature extraction methodology to a well designed combination of point operations for feature extraction at the preprocessing stage of retinal images for developing useful diagnostic metrics for retinal diseases such as glaucoma and diabetic retinopathy. Segmentation of the extracted features allow us to investigate the effect of occlusion induced by these features on generating stereo disparity mapping and 3-D visualization of the optic cup/disc. Segmentation of blood vessels in the retina also has significant application in generating precise vessel diameter metrics in vascular diseases such as hypertension and diabetic retinopathy for monitoring progression of retinal diseases.

  1. Pattern representation in feature extraction and classifier design: matrix versus vector.

    PubMed

    Wang, Zhe; Chen, Songcan; Liu, Jun; Zhang, Daoqiang

    2008-05-01

    The matrix, as an extended pattern representation to the vector, has proven to be effective in feature extraction. However, the subsequent classifier following the matrix-pattern- oriented feature extraction is generally still based on the vector pattern representation (namely, MatFE + VecCD), where it has been demonstrated that the effectiveness in classification just attributes to the matrix representation in feature extraction. This paper looks at the possibility of applying the matrix pattern representation to both feature extraction and classifier design. To this end, we propose a so-called fully matrixized approach, i.e., the matrix-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (MatFE + MatCD). To more comprehensively validate MatFE + MatCD, we further consider all the possible combinations of feature extraction (FE) and classifier design (CD) on the basis of patterns represented by matrix and vector respectively, i.e., MatFE + MatCD, MatFE + VecCD, just the matrix-pattern-oriented classifier design (MatCD), the vector-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (VecFE + MatCD), the vector-pattern-oriented feature extraction followed by the vector-pattern-oriented classifier design (VecFE + VecCD) and just the vector-pattern-oriented classifier design (VecCD). The experiments on the combinations have shown the following: 1) the designed fully matrixized approach (MatFE + MatCD) has an effective and efficient performance on those patterns with the prior structural knowledge such as images; and 2) the matrix gives us an alternative feasible pattern representation in feature extraction and classifier designs, and meanwhile provides a necessary validation for "ugly duckling" and "no free lunch" theorems.

  2. Biosensor method and system based on feature vector extraction

    SciTech Connect

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  3. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  4. Semantic Control of Feature Extraction from Natural Scenes

    PubMed Central

    2014-01-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  5. Acoustic and Articulatory Features of Diphthong Production: A Speech Clarity Study

    ERIC Educational Resources Information Center

    Tasko, Stephen M.; Greilick, Kristin

    2010-01-01

    Purpose: The purpose of this study was to evaluate how speaking clearly influences selected acoustic and orofacial kinematic measures associated with diphthong production. Method: Forty-nine speakers, drawn from the University of Wisconsin X-Ray Microbeam Speech Production Database (J. R. Westbury, 1994), served as participants. Samples of clear…

  6. A Model for Extracting Personal Features of an Electroencephalogram and Its Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Mitsukura, Yasue; Fukumi, Minoru

    This paper introduces a model for extracting features of an electroencephalogram (EEG) and a method for evaluating the model. It is generally known that an EEG contains personal features; however, extraction of these personal features has not been reported. The analyzed frequency components of an EEG can be classified into components that contain a significant number of features and components that contain none. Based on these feature differences, we propose a model for extracting features of the EEG. The model assumes a latent structure and employs factor analysis, treating the model error as personal error. We take the EEG feature to be the first factor loading, which is calculated by eigenvalue decomposition. Furthermore, we use a k-nearest neighbor (kNN) algorithm to evaluate the proposed model and the extracted EEG features. The distance metric generally used is the Euclidean distance, but we believe that the appropriate metric depends on the characteristics of the extracted EEG feature and on the subject. Therefore, depending on the subject, we use one of three distance metrics: Euclidean distance, cosine distance, and correlation coefficient. Finally, to show the effectiveness of the proposed model, we perform a computer simulation using real EEG data.
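
    The abstract describes the feature as the first factor loading obtained by eigenvalue decomposition, evaluated with a kNN classifier whose distance metric is chosen per subject. A minimal sketch of that pipeline, assuming the analysed frequency components are arranged as a samples-by-components matrix (the function names and the parameter k are hypothetical):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def first_factor_loading(segment):
    """Sketch: feature of one EEG segment as the first factor loading,
    i.e., the eigenvector of the covariance matrix of the analysed
    frequency components with the largest eigenvalue.

    segment : (n_samples, n_frequency_components) array
    """
    C = np.cov(segment, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
    return eigvecs[:, -1]

def knn_evaluate(X_train, y_train, X_test, metric="euclidean", k=3):
    """kNN evaluation with a subject-dependent metric (Euclidean, cosine or
    correlation, as in the paper); brute-force search so SciPy metrics work."""
    knn = KNeighborsClassifier(n_neighbors=k, metric=metric, algorithm="brute")
    knn.fit(X_train, y_train)
    return knn.predict(X_test)
```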

  7. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference

    NASA Astrophysics Data System (ADS)

    Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi

    Feature extraction for mental fatigue and relaxation states is helpful for understanding the mechanisms of mental fatigue and for finding effective relaxation techniques in sustained work environments. Experimental data on human states are often affected by external and internal factors, which makes it more difficult to extract common features. The aim of this study is to explore appropriate methods to eliminate individual differences and enhance common features. Mental fatigue and relaxation experiments were executed on 12 subjects. An integrated evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance, and neurophysiological signals, especially EEG signals. With individual differences taken into account, the common features across these estimators confirm the effectiveness of relaxation in sustained mental work. Relaxation techniques can be applied in practice to prevent the accumulation of mental fatigue and maintain mental health. The proposed feature extraction methods are widely applicable for obtaining common features and relax the restrictions on subject selection and experiment design.

  8. Embedded prediction in feature extraction: application to single-trial EEG discrimination.

    PubMed

    Hsu, Wei-Yen

    2013-01-01

    In this study, an analysis system embedding neuro-fuzzy prediction in feature extraction is proposed for brain-computer interface (BCI) applications. Wavelet-fractal features combined with neuro-fuzzy predictions are applied for feature extraction in motor imagery (MI) discrimination. The features are extracted from electroencephalography (EEG) signals recorded from participants performing left and right MI. Time-series predictions are performed by training 2 adaptive neuro-fuzzy inference systems (ANFIS) on the left and right MI data, respectively. Features are then calculated from the difference in the multi-resolution fractal feature vector (MFFV) between the predicted and actual signals over a window of EEG signals. Finally, a support vector machine is used for classification. The performance of the proposed method is compared with that of the linear adaptive autoregressive (AAR) model and AAR time-series prediction on 6 participants from 2 data sets. The results indicate that the proposed method is promising for MI classification. PMID:23248335

  9. Kinetic modeling of ultrasound-assisted extraction of phenolic compounds from grape marc: influence of acoustic energy density and temperature.

    PubMed

    Tao, Yang; Zhang, Zhihang; Sun, Da-Wen

    2014-07-01

    The effects of acoustic energy density (6.8-47.4 W/L) and temperature (20-50 °C) on the extraction yields of total phenolics and tartaric esters during ultrasound-assisted extraction from grape marc were investigated in this study. The ultrasound treatment was performed in a 25-kHz ultrasound bath system, and 50% aqueous ethanol was used as the solvent. The initial extraction rate and final extraction yield increased with increasing acoustic energy density and temperature. The two-site kinetic model was used to simulate the kinetics of the extraction process, and the diffusion model based on Fick's second law was employed to determine the effective diffusion coefficient of phenolics in grape marc. Both models gave satisfactory fits to the data. The diffusion process was divided into one fast stage and one slow stage, and the diffusion coefficients in both stages were calculated. Within the current experimental range, the diffusion coefficients of total phenolics and tartaric esters for both diffusion stages increased with acoustic energy density. Meanwhile, the rise in temperature also increased the diffusion coefficients of phenolics, except for the diffusion coefficient of total phenolics in the fast stage, which was highest at 40 °C. Moreover, an empirical equation was suggested to correlate the effective diffusion coefficient of phenolics in grape marc with acoustic energy density and temperature. In addition, a performance comparison of ultrasound-assisted extraction and conventional methods demonstrates that ultrasound is an effective and promising technology for extracting bioactive substances from grape marc.
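
    The abstract names the two-site kinetic model but does not reproduce its equation. A common parameterisation of that model, with a fast (washing) fraction and a slow (diffusion) fraction, can be fitted with SciPy as sketched below; the yield data and initial guesses are hypothetical, and the authors' exact formulation may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_site_model(t, y_inf, f, k1, k2):
    """One common form of the two-site kinetic model for solid-liquid
    extraction: a fast ("washing") fraction f with rate k1 and a slow
    ("diffusion") fraction (1 - f) with rate k2."""
    return y_inf * (1.0 - f * np.exp(-k1 * t) - (1.0 - f) * np.exp(-k2 * t))

# Hypothetical extraction-yield data (time in min, yield in mg/g marc)
t = np.array([0, 5, 10, 20, 30, 45, 60], dtype=float)
y = np.array([0.0, 8.1, 11.3, 14.0, 15.2, 16.0, 16.4])

popt, _ = curve_fit(two_site_model, t, y, p0=[17.0, 0.6, 0.3, 0.02],
                    bounds=([0, 0, 0, 0], [np.inf, 1, np.inf, np.inf]))
y_inf, f, k1, k2 = popt
print(f"y_inf={y_inf:.2f}, fast fraction={f:.2f}, k1={k1:.3f}/min, k2={k2:.4f}/min")
```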

  10. Geometric feature extraction by a multimarked point process.

    PubMed

    Lafarge, Florent; Gimel'farb, Georgy; Descombes, Xavier

    2010-09-01

    This paper presents a new stochastic marked point process for describing images in terms of a finite library of geometric objects. Image analysis based on conventional marked point processes has already produced convincing results but at the expense of parameter tuning, computing time, and model specificity. Our more general multimarked point process has simpler parametric setting, yields notably shorter computing times, and can be applied to a variety of applications. Both linear and areal primitives extracted from a library of geometric objects are matched to a given image using a probabilistic Gibbs model, and a Jump-Diffusion process is performed to search for the optimal object configuration. Experiments with remotely sensed images and natural textures show that the proposed approach has good potential. We conclude with a discussion about the insertion of more complex object interactions in the model by studying the compromise between model complexity and efficiency.

  11. A finite element propagation model for extracting normal incidence impedance in nonprogressive acoustic wave fields

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Tanner, Sharon E.; Parrott, Tony L.

    1995-01-01

    A propagation model method for extracting the normal incidence impedance of an acoustic material installed as a finite length segment in a wall of a duct carrying a nonprogressive wave field is presented. The method recasts the determination of the unknown impedance as the minimization of the normalized wall pressure error function. A finite element propagation model is combined with a coarse/fine grid impedance plane search technique to extract the impedance of the material. Results are presented for three different materials for which the impedance is known. For each material, the input data required for the prediction scheme was computed from modal theory and then contaminated by random error. The finite element method reproduces the known impedance of each material almost exactly for random errors typical of those found in many measurement environments. Thus, the method developed here provides a means for determining the impedance of materials in a nonprogressive wave environment such as that usually encountered in a commercial aircraft engine and most laboratory settings.

  12. Comparison study of feature extraction methods in structural damage pattern recognition

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Chen, Bo; Swartz, R. Andrew

    2011-04-01

    This paper compares the performance of various feature extraction methods applied to structural sensor measurements acquired in situ from a decommissioned bridge under realistic damage scenarios. Three feature extraction methods are applied to the sensor data to generate feature vectors for normal and damaged structure data patterns. The investigated feature extraction methods include both time-domain and frequency-domain methods. The evaluation of the feature extraction methods is performed by examining distance values among different patterns, distance values among feature vectors in the same pattern, and the pattern recognition success rate. The test data used in the comparison study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case data sets, including undamaged cases and pier settlement cases (different depths), are used to test the separation of feature vectors among different patterns, and the pattern recognition success rate for different feature extraction methods is reported.

  13. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315
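
    As a concrete illustration of the reduced-representation idea discussed in the review, the snippet below applies PCA via scikit-learn (which is not one of the reviewed packages) to a hypothetical expression matrix; the matrix shape and component count are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical expression matrix: rows = samples, columns = genes
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5000))

# Reduce thousands of genes to a small set of extracted features (components)
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                       # (60, 10)
print(pca.explained_variance_ratio_.sum())   # variance retained by the 10 features
```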

  14. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox are often associated with important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in wavelet basis. With the proposed method, both the impulse time and the period of transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified by the simulated signals as well as the practical gearbox vibration signals. Comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.

  15. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    PubMed Central

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171
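
    The improved kurtogram combines Correlated Kurtosis with RSGWPT filtering. The RSGWPT stage is not reproduced here, but a commonly used definition of Correlated Kurtosis for a candidate fault period can be sketched as follows; the sampling rate and period are hypothetical, and the authors' exact definition may differ slightly.

```python
import numpy as np

def correlated_kurtosis(y, T, M=1):
    """Correlated Kurtosis of signal y for a candidate fault period of T
    samples (one commonly used definition; first-shift order M=1 by default):

        CK_M(T) = sum_n ( prod_{m=0..M} y[n - m*T] )^2 / ( sum_n y[n]^2 )^(M+1)
    """
    y = np.asarray(y, dtype=float)
    prod = y.copy()
    for m in range(1, M + 1):
        shifted = np.zeros_like(y)
        shifted[m * T:] = y[:-m * T]          # y delayed by m*T samples
        prod *= shifted
    return np.sum(prod ** 2) / np.sum(y ** 2) ** (M + 1)

# A periodic impulse train scores much higher CK than white noise of equal length
fs, T = 12000, 120                            # hypothetical sampling rate / period
impulses = np.zeros(fs)
impulses[::T] = 1.0
noise = np.random.default_rng(1).normal(size=fs)
print(correlated_kurtosis(impulses, T), correlated_kurtosis(noise, T))
```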

  18. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are one of the proper alternatives to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is firstly detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on 2D range images, but also their 3D features such as the curvature value and surface normal value in z axis, which are calculated directly based on the LiDAR point cloud. Subsequently the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in the 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for a better localization of vehicles.
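
    The first step of the described pipeline, detecting corner candidates on LiDAR range images with the Shi-Tomasi detector, can be sketched with OpenCV as below; the subsequent neural-network filtering using 3D curvature and surface-normal cues is not reproduced, and the range-image dimensions and detector parameters are hypothetical.

```python
import numpy as np
import cv2

# Hypothetical range image derived from a rotating LiDAR scan (float32, metres)
range_image = np.random.default_rng(0).uniform(1.0, 60.0, (64, 900)).astype(np.float32)

# Normalise to 8 bit so the corner detector behaves consistently
img8 = cv2.normalize(range_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Shi-Tomasi corner candidates on the 2-D range image; in the paper these
# candidates are subsequently filtered by a trained neural network using
# additional 3-D cues (curvature, surface normal), which is not shown here.
corners = cv2.goodFeaturesToTrack(img8, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5)
corners = corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
print(corners.shape)
```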

  19. Associations between voice ergonomic risk factors and acoustic features of the voice.

    PubMed

    Rantala, Leena M; Hakala, Suvi; Holmqvist, Sofia; Sala, Eeva

    2015-10-01

    The associations between voice ergonomic risk factors in 40 classrooms and the acoustic parameters of 40 schoolteachers' voices were investigated. The risk factors assessed were connected to participants' working practices, working postures, and the indoor air quality in their workplaces. The teachers recorded spontaneous speech and sustained /a/ before and after a working day. Fundamental frequency, sound pressure level, the slope of the spectrum, perturbation, and harmonic-to-noise ratio were analysed. The results showed that the more the voice ergonomic risk factors were involved, the louder the teachers' voices became. Working practices correlated most often with the acoustic parameters; associations were found especially before a working day. The results suggest that a risky voice ergonomic environment affects voice production. PMID:24007529
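
    Of the acoustic parameters analysed (fundamental frequency, sound pressure level, spectral slope, perturbation, harmonics-to-noise ratio), the first three can be approximated with librosa as sketched below; perturbation and HNR are usually computed with Praat-style algorithms and are omitted. The file path, pitch range and analysis settings are hypothetical, and the level is uncalibrated rather than true SPL.

```python
import numpy as np
import librosa

def basic_voice_parameters(path, fmin=75.0, fmax=500.0):
    """Sketch of three of the reported parameters: median f0, mean level and
    spectral slope. 'path' is a hypothetical recording of spontaneous speech."""
    y, sr = librosa.load(path, sr=None)

    # Fundamental frequency via the YIN estimator
    f0 = librosa.yin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0_median = float(np.median(f0))

    # Relative sound level in dB (uncalibrated, so not an absolute SPL)
    rms = librosa.feature.rms(y=y)[0]
    level_db = float(20 * np.log10(np.mean(rms) + 1e-12))

    # Spectral slope: linear fit to the long-term average log-magnitude spectrum
    S = np.abs(librosa.stft(y)).mean(axis=1)
    freqs = librosa.fft_frequencies(sr=sr)
    slope = np.polyfit(freqs[1:], 20 * np.log10(S[1:] + 1e-12), 1)[0]   # dB/Hz

    return {"f0_median_hz": f0_median, "level_db": level_db, "slope_db_per_hz": slope}
```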

  1. Acoustic features of infant vocalic utterances at 3, 6, and 9 months.

    PubMed

    Kent, R D; Murray, A D

    1982-08-01

    Recordings were obtained of the comfort-state vocalizations of infants at 3, 6, and 9 months of age during a session of play and vocal interaction with the infant's mother and the experimenter. Acoustic analysis, primarily spectrography, was used to determine utterance durations, formant frequencies of vocalic utterances, patterns of f0 frequency change during vocalizations, variations in source excitation of the vocal tract, and general properties of the utterances. Most utterances had durations of less than 400 ms although occasional sounds lasted 2 s or more. An increase in the ranges of both the F1 and F2 frequencies was observed across both periods of age increase, but the center of the F1-F2 plot for the group vowels appeared to change very little. Phonatory characteristics were at least generally compatible with published descriptions of infant cry. The f0 frequency averaged 445 Hz for 3-month-olds, 450 Hz for 6-month-olds, and 415 Hz for 9-month-olds. As has been previously reported for infant cry, the vocalizations frequently were associated with tremor (vibrato), harmonic doubling, abrupt f0 shift, vocal fry (or roll), and noise segments. Thus, from a strictly acoustic perspective, early cry and the later vocalizations of cooing and babbling appear to be vocal performances in continuity. Implications of the acoustic analyses are discussed for phonetic development and speech acquisition.

  2. The vocal repertoire of the domesticated zebra finch: a data-driven approach to decipher the information-bearing acoustic features of communication signals.

    PubMed

    Elie, Julie E; Theunissen, Frédéric E

    2016-03-01

    Although a universal code for the acoustic features of animal vocal communication calls may not exist, the thorough analysis of the distinctive acoustical features of vocalization categories is important not only to decipher the acoustical code for a specific species but also to understand the evolution of communication signals and the mechanisms used to produce and understand them. Here, we recorded more than 8000 examples of almost all the vocalizations of the domesticated zebra finch, Taeniopygia guttata: vocalizations produced to establish contact, to form and maintain pair bonds, to sound an alarm, to communicate distress or to advertise hunger or aggressive intents. We characterized each vocalization type using complete representations that avoided any a priori assumptions on the acoustic code, as well as classical bioacoustics measures that could provide more intuitive interpretations. We then used these acoustical features to rigorously determine the potential information-bearing acoustical features for each vocalization type using both a novel regularized classifier and an unsupervised clustering algorithm. Vocalization categories are discriminated by the shape of their frequency spectrum and by their pitch saliency (noisy to tonal vocalizations) but not particularly by their fundamental frequency. Notably, the spectral shape of zebra finch vocalizations contains peaks or formants that vary systematically across categories and that would be generated by active control of both the vocal organ (source) and the upper vocal tract (filter). PMID:26581377

  3. Simulation study and guidelines to generate Laser-induced Surface Acoustic Waves for human skin feature detection

    NASA Astrophysics Data System (ADS)

    Li, Tingting; Fu, Xing; Chen, Kun; Dorantes-Gonzalez, Dante J.; Li, Yanning; Wu, Sen; Hu, Xiaotang

    2015-12-01

    Despite the sharply increasing number of people contracting skin cancer every year, limited attention has been given to the investigation of human skin tissues. In this regard, Laser-induced Surface Acoustic Wave (LSAW) technology, with its accurate, non-invasive and rapid testing characteristics, has recently shown promising results in biological and biomedical tissues. In order to improve the measurement accuracy and efficiency of detecting important features in highly opaque and soft surfaces such as human skin, this paper identifies the most important parameters of a pulse laser source and provides practical guidelines recommending proper ranges for generating Surface Acoustic Waves (SAWs) for characterization purposes. Considering that melanoma is a serious type of skin cancer, we conducted a finite element simulation study of the generation and propagation of surface waves in human skin containing a melanoma-like feature, and determined the best pulse laser parameter ranges, simulation mesh size and time step, working bandwidth, and minimal size of detectable melanoma.

  4. Prosodic strengthening and featural enhancement: Evidence from acoustic and articulatory realizations of /ɑ, i/ in English

    NASA Astrophysics Data System (ADS)

    Cho, Taehong

    2005-06-01

    In this study, the effects of accent and prosodic boundaries on the production of English vowels (/ɑ, i/) are investigated by concurrently examining acoustic vowel formants and articulatory maxima of the tongue, jaw, and lips obtained with EMA (Electromagnetic Articulography). The results demonstrate that prosodic strengthening (due to accent and/or prosodic boundaries) has differential effects depending on the source of prominence (in accented syllables versus at edges of prosodic domains; domain initially versus domain finally). The results are interpreted in terms of how prosodic strengthening is related to the phonetic realization of vowel features. For example, when accented, /i/ was fronter in both the acoustic and articulatory vowel spaces (enhancing [-back]), accompanied by an increase in both lip and jaw openings (enhancing sonority). By contrast, at edges of prosodic domains (especially domain-finally), /i/ was not necessarily fronter, but higher (enhancing [+high]), accompanied by an increase only in the lip (not jaw) opening. This suggests that the two aspects of prosodic structure (accent versus boundary) are differentiated by distinct phonetic patterns. Further, it implies that prosodic strengthening, though manifested in fine-grained phonetic details, is not simply a low-level phonetic event but a complex linguistic phenomenon, closely linked to the enhancement of phonological features and positional strength that may license phonological contrasts.

  5. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

    This article presents a novel method for the diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. The application of pattern classification and feature selection and reduction methods to analysing normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted. These include carefully selected linear and nonlinear time-domain, wavelet and entropy features. By examining different feature selection and feature reduction methods such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for the diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.

  6. Automatic facial expression recognition based on features extracted from tracking of facial landmarks

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Lee, Joonwhoan

    2014-01-01

    In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking is performed by using an elastic bunch graph matching algorithm. The facial expression recognition is performed based on the features extracted from the tracking of not only individual landmarks, but also pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that our proposed set of features produces better results, because it utilizes time-varying graph information as well as the motion of individual facial landmarks.

  7. Biometric person authentication method using features extracted from pen holding style

    NASA Astrophysics Data System (ADS)

    Hashimoto, Yuuki; Muramatsu, Daigo; Ogata, Hiroyuki

    2010-04-01

    The manner of holding a pen is distinctive among people. Therefore, pen holding style is useful for person authentication. In this paper, we propose a biometric person authentication method using features extracted from images of pen holding style. Images of the pen holding style are captured by a camera, and several features are extracted from the captured images. These features are compared with a reference dataset to calculate dissimilarity scores, and these scores are combined for verification using a three-layer perceptron. Preliminary experiments were performed by using a private database. The proposed system yielded an equal error rate (EER) of 2.6%.

  8. Invariant feature extraction for color image mosaic by graph card processing

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Chen, Lin; Li, Deren

    2009-10-01

    Image mosaicking is widely applicable to remote measurement, battlefield scouting and panoramic image demonstration. In this project, we develop a general method for video (or image sequence) mosaicking using techniques such as invariant feature extraction, GPU processing, multi-color feature selection, and the RANSAC algorithm for homography matching. In order to match the image sequence automatically without the influence of rotation, scale and contrast transforms, local invariant feature descriptors are extracted by the graphics card unit. The GPU mosaic algorithm performs very well compared to the slow CPU version of the mosaic program, with little time cost.

  9. Interaction of dust-ion acoustic solitary waves in nonplanar geometry with electrons featuring Tsallis distribution

    SciTech Connect

    Narayan Ghosh, Uday; Chatterjee, Prasanta; Tribeche, Mouloud

    2012-11-15

    The head-on collisions between nonplanar dust-ion acoustic solitary waves are dealt with by an extended version of Poincare-Lighthill-Kuo perturbation method, for a plasma having stationary dust grains, inertial ions, and nonextensive electrons. The nonplanar geometry modified analytical phase-shift after a head-on collision is derived. It is found that as the nonextensive character of the electrons becomes important, the phase-shift decreases monotonically before levelling-off at a constant value. This leads us to think that nonextensivity may have a stabilizing effect on the phase-shift.

  10. [Determination of Soluble Solid Content in Strawberry Using Hyperspectral Imaging Combined with Feature Extraction Methods].

    PubMed

    Ding, Xi-bin; Zhang, Chu; Liu, Fei; Song, Xing-lin; Kong, Wen-wen; He, Yong

    2015-04-01

    Hyperspectral imaging combined with feature extraction methods was applied to determine the soluble solids content (SSC) of mature, unblemished strawberries. Hyperspectral images of 154 strawberries covering the spectral range of 874-1,734 nm were captured, the spectral data were extracted from the hyperspectral images, and the spectra of 941-1,612 nm were preprocessed by moving average (MA). Nineteen samples were defined as outliers by the residual method, and the remaining 135 samples were divided into the calibration set (n = 90) and the prediction set (n = 45). The successive projections algorithm (SPA), genetic algorithm partial least squares (GAPLS) combined with SPA, weighted regression coefficients (Bw) and competitive adaptive reweighted sampling (CARS) were applied to select 14, 17, 24 and 25 effective wavelengths, respectively. Principal component analysis (PCA) and the wavelet transform (WT) were applied to extract feature information with 20 and 58 features, respectively. PLS models were built based on the full spectra, the effective wavelengths and the features, respectively. All PLS models obtained good results. PLS models using the full spectra and the features extracted by WT obtained the best results, with correlation coefficients of calibration (r(c)) and prediction (r(p)) over 0.9. The overall results indicated that hyperspectral imaging combined with feature extraction methods could be used for detection of SSC in strawberry. PMID:26197594
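
    A minimal sketch of the PLS regression step described above, using scikit-learn on hypothetical spectra and SSC values; the wavelength-selection algorithms (SPA, GAPLS, Bw, CARS) are not reproduced, and the component count and split sizes only loosely follow the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: mean reflectance spectra per berry and measured SSC values
rng = np.random.default_rng(0)
X = rng.normal(size=(135, 200))              # 135 samples x 200 wavelengths
y = 6 + 0.3 * X[:, :5].sum(axis=1) + rng.normal(scale=0.2, size=135)

X_cal, X_pred, y_cal, y_true = train_test_split(X, y, test_size=45, random_state=1)

pls = PLSRegression(n_components=10)
pls.fit(X_cal, y_cal)
y_hat = pls.predict(X_pred).ravel()
print("r_p =", np.corrcoef(y_true, y_hat)[0, 1])   # correlation coefficient of prediction
```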

  11. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    PubMed Central

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160

  13. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    PubMed

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive and accurate identification of special quality eggs is an urgent problem. The present paper proposes a new feature extraction method based on symbolic entropy to identify special quality eggs from near-infrared spectroscopy. The authors selected normal eggs, free-range eggs, selenium-enriched eggs and zinc-enriched eggs as research objects and measured their near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm(-1). Raw spectra were symbolically represented with an aggregation approximation algorithm, and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that the identification of special quality eggs using near-infrared spectroscopy is feasible and that symbolic entropy can be used as a new feature extraction method for near-infrared spectra.
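
    The abstract does not specify the exact symbolisation, but a SAX-style symbolic entropy feature in the spirit described can be sketched as follows; the segment count and alphabet size are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def symbolic_entropy(spectrum, n_segments=64, alphabet_size=6):
    """SAX-style symbolic entropy: z-normalise the spectrum, average it into
    segments (piecewise aggregate approximation), map segment means to symbols
    via equiprobable Gaussian breakpoints, and return the Shannon entropy of
    the symbol distribution."""
    x = np.asarray(spectrum, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)

    # Piecewise aggregate approximation
    segments = np.array([seg.mean() for seg in np.array_split(x, n_segments)])

    # Discretise with equiprobable Gaussian breakpoints
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.digitize(segments, breakpoints)

    # Shannon entropy of the symbol histogram
    counts = np.bincount(symbols, minlength=alphabet_size).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```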

  14. Feature Extraction for BCIs Based on Electromagnetic Source Localization and Multiclass Filter Bank Common Spatial Patterns.

    PubMed

    Zaitcev, Aleksandr; Cook, Greg; Wei Liu; Paley, Martyn; Milne, Elizabeth

    2015-08-01

    Brain-Computer Interfaces (BCIs) provide means for communication and control without muscular movement and, therefore, can offer significant clinical benefits. Electrical brain activity recorded by electroencephalography (EEG) can be interpreted into software commands by various classification algorithms according to the descriptive features of the signal. In this paper we propose a novel EEG BCI feature extraction method employing EEG source reconstruction and Filter Bank Common Spatial Patterns (FBCSP) based on Joint Approximate Diagonalization (JAD). The proposed method is evaluated by the commonly used reference EEG dataset yielding an average classification accuracy of 77.1 ± 10.1 %. It is shown that FBCSP feature extraction applied to reconstructed source components outperforms conventional CSP and FBCSP feature extraction methods applied to signals in the sensor domain.

  16. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction

    PubMed Central

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-01-01

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction of time domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time domain signal. Feature vectors are obtained from morphologic features of the time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracies with lower computation time than traditional methods: a recognition accuracy of 97.8% can be achieved with a recognition time below 1 s, making it very suitable for Φ-OTDR online vibration monitoring systems. PMID:26131671

  17. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which feature extraction method should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of different wavelet transform feature extraction methods for brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers, the Support Vector Machine, K-Nearest Neighbor, and Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with the SVM yield the highest classification accuracy, demonstrating that wavelet transform features are informative in this application.
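
    As an illustration of one of the compared transforms, the sketch below extracts DWT subband-energy features from a single slice with PyWavelets; the wavelet, decomposition level and image are hypothetical, and the DWPT, DTCWT and CMWT variants are not shown.

```python
import numpy as np
import pywt

def dwt_energy_features(image, wavelet="db4", level=3):
    """Energies of 2-D discrete wavelet subbands as a feature vector for a
    single slice (plain DWT only; DWPT, DTCWT and CMWT are not sketched)."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]                 # approximation energy
    for cH, cV, cD in coeffs[1:]:                     # detail subbands per level
        feats.extend([np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)])
    return np.array(feats)

# Example with a hypothetical 256x256 slice
slice_ = np.random.default_rng(0).normal(size=(256, 256))
print(dwt_energy_features(slice_).shape)              # (1 + 3*level,) = (10,)
```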

  18. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improving hyperspectral image classification. Nonparametric feature extraction methods perform better than parametric ones when the class distributions are not normal-like. Moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for the classification of hyperspectral images. The proposed method has no free parameters, and its novelty can be discussed in two parts. First, neighbor samples are specified by using the Parzen window idea for determining the local mean. Second, two new weighting functions are used: samples close to class boundaries receive more weight in the formation of the between-class scatter matrix, and samples close to the class mean receive more weight in the formation of the within-class scatter matrix. Experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method performs better than some other nonparametric and parametric feature extraction methods.

  19. [Quantitative analysis of thiram by surface-enhanced raman spectroscopy combined with feature extraction Algorithms].

    PubMed

    Zhang, Bao-hua; Jiang, Yong-cheng; Sha, Wen; Zhang, Xian-yi; Cui, Zhi-feng

    2015-02-01

    Three feature extraction algorithms, principal component analysis (PCA), the discrete cosine transform (DCT) and non-negative matrix factorization (NMF), were used to extract the main information from the spectral data in order to weaken the influence of spectral fluctuations on the subsequent quantitative analysis, based on the SERS spectra of the pesticide thiram. The extracted components were then combined with a linear regression algorithm, partial least squares regression (PLSR), and a non-linear regression algorithm, support vector machine regression (SVR), to develop the quantitative analysis models. Finally, the effect of the different feature extraction algorithms on the different regression algorithms was evaluated using the 5-fold cross-validation method. The experiments demonstrate that the results of SVR are better than those of PLSR because of the non-linear relationship between the intensity of the SERS spectrum and the concentration of the analyte. Further, the feature extraction algorithms significantly improve the analysis results regardless of the regression algorithm, mainly because they extract the main information of the source spectral data and eliminate the fluctuations. Additionally, PCA performs best with the linear regression model and NMF is best with the non-linear model, and the predictive error can be reduced nearly three times in the best case. The root mean square error of cross-validation of the best regression model (NMF+SVR) is 0.0455 micromol x L(-1) (10(-6) mol x L(-1)), which attains the national detection limit for thiram, so the method in this study provides a novel approach for the fast detection of thiram. In conclusion, the study provides experimental references for selecting feature extraction algorithms in the analysis of SERS spectra, and some common findings on feature extraction can also help in the processing of other kinds of spectra.
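
    A minimal sketch of the best-performing combination reported (NMF feature extraction followed by SVR), using scikit-learn on hypothetical non-negative spectra; the component count, SVR settings and concentration values are assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical SERS data: non-negative spectra and thiram concentrations
rng = np.random.default_rng(0)
spectra = rng.random((60, 800))                 # 60 spectra x 800 wavenumbers
conc = rng.uniform(0.05, 1.0, size=60)          # assumed concentrations, micromol/L

# NMF feature extraction followed by support vector regression
model = make_pipeline(NMF(n_components=5, init="nndsvda", max_iter=500),
                      SVR(kernel="rbf", C=10.0))
scores = cross_val_score(model, spectra, conc, cv=5,
                         scoring="neg_root_mean_squared_error")
print("RMSECV:", -scores.mean())
```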

  20. Nonlinear ion-acoustic double-layers in electronegative plasmas with electrons featuring Tsallis distribution

    NASA Astrophysics Data System (ADS)

    Ghebache, Siham; Tribeche, Mouloud

    2016-04-01

    Weakly nonlinear ion-acoustic (IA) double-layers (DLs), which accompany electronegative plasmas composed of positive ions, negative ions, and nonextensive electrons are investigated. A generalized Korteweg-de Vries equation with a cubic nonlinearity is derived using a reductive perturbation method. Different types of electronegative plasmas inspired from the experimental studies of Ichiki et al. (2001) are discussed. It is shown that the IA wave phase velocity, in different mixtures of negative and positive ions, decreases as the nonextensive parameter q increases, before levelling-off at a constant value for larger q. Moreover, a relative increase of Q involves an enhancement of the IA phase velocity. Existence domains of either solitary waves or double-layers are then presented and their parametric dependence is determined. Owing to the electron nonextensivity, our present plasma model can admit compressive as well as rarefactive IA-DLs.

  1. Nonlinear features of ion acoustic shock waves in dissipative magnetized dusty plasma

    SciTech Connect

    Sahu, Biswajit; Sinha, Anjana; Roychoudhury, Rajkumar

    2014-10-15

    The nonlinear propagation of small as well as arbitrary amplitude shocks is investigated in a magnetized dusty plasma consisting of inertia-less Boltzmann distributed electrons, inertial viscous cold ions, and stationary dust grains without dust-charge fluctuations. The effects of dissipation, due to the viscosity of ions and the external magnetic field, on the properties of the ion acoustic shock structure are investigated. It is found that for small amplitude waves, the Korteweg-de Vries-Burgers (KdVB) equation, derived using the Reductive Perturbation Method, captures the qualitative behaviour of the transition from oscillatory wave to shock structure. The exact numerical solution for an arbitrary amplitude wave differs somewhat in its details from the results obtained from the KdVB equation. However, the qualitative nature of the two solutions is similar in the sense that a gradual transition from KdV oscillation to shock structure is observed as the dissipative parameter increases.

  2. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment and evaluates its accuracy by using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of digital surface model, generation of bare earth digital elevation model (DEM) and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients model (RPC) was executed in ERDAS Leica Photogrammetry Suite (LPS) using supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 was estimated to be 0.25 m by using more than 10 well-distributed GCPs. In the second stage, we generated the bare earth DEM from LiDAR point cloud data. In most of the cases, bare earth DEM does not represent true ground elevation. Hence, the model was edited to get the most accurate DEM/ DTM possible and normalized the LiDAR point cloud data based on DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM
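
    The core normalisation step described above (CHM/nDSM = DSM minus bare-earth DEM) can be sketched as follows; raster I/O and the subsequent tree and building classification are not shown, and the nodata convention and sample values are hypothetical.

```python
import numpy as np

def canopy_height_model(dsm, dem, nodata=-9999.0):
    """Normalised DSM (canopy height model) as the cell-wise difference between
    the surface model and the bare-earth terrain model, with simple nodata
    handling."""
    dsm = np.asarray(dsm, dtype=float)
    dem = np.asarray(dem, dtype=float)
    chm = dsm - dem
    chm[(dsm == nodata) | (dem == nodata)] = nodata
    chm[(chm < 0) & (chm != nodata)] = 0.0      # clamp small negative residuals
    return chm

# Hypothetical 3x3 tiles (metres above datum)
dsm = np.array([[12.0, 15.5, 11.0], [10.2, 10.0, 18.3], [9.9, 9.8, 9.7]])
dem = np.array([[10.0, 10.1, 10.2], [10.2, 10.0, 10.1], [9.9, 9.8, 9.7]])
print(canopy_height_model(dsm, dem))
```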

  3. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set

    PubMed Central

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    The information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, it is harder to manage biomedical data extraction manually because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area under biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques. In the last decade, the focus has changed to hybrid approaches showing better results. This research presents a hybrid feature set for classification of relations between biomedical entities. The main contribution of this research is done in the semantic feature set where verb phrases are ranked using Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, the two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. Conclusively, it can be articulated that our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus. PMID:26347797

  5. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for the extraction of road features for HADMs.

  6. [Acoustic features of vocalizations reflecting the discomfort and comfort states of infants aged three and six months].

    PubMed

    Pavlikova, M I; Makarov, A K; Lyakso, E E

    2015-08-01

    The paper examined whether adults can recognize the comfort and discomfort states of 3- and 6-month-old infants on the basis of their vocalizations. The acoustic features of the vocalizations that are important for recognizing the infant's state from voice characteristics are described. It is shown that discomfort vocalizations differ from comfort ones in the average and maximum values of pitch and in the pitch values in the central and final parts of the vocalization. A mathematical model is proposed, and a classification function for discomfort and comfort signals is described. It was found that vocalizations that adults assigned to the comfort or discomfort categories with a probability of 0.75 and above are recognized with high reliability by the mathematical model based on the classification function.

  7. Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.

    PubMed

    Segovia, F; Górriz, J M; Ramírez, J; Phillips, C; For The Alzheimer's Disease Neuroimaging Initiative

    2016-01-01

    Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow new ROIs to be determined and take advantage of the huge amount of information contained in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach, and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross-validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods. PMID:26567734
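
    Of the two combination approaches described, the majority-voting ensemble can be sketched as follows; the multiple-kernel-learning variant is not shown, and the classifier choice, feature-set dimensions and data are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def majority_vote_ensemble(train_sets, y_train, test_sets):
    """One SVM per feature set (e.g., PCA, NMF and Haralick features extracted
    from the same images), with the final label decided by majority voting."""
    votes = []
    for X_tr, X_te in zip(train_sets, test_sets):
        clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_train)
        votes.append(clf.predict(X_te))
    votes = np.stack(votes)                       # (n_feature_sets, n_test)
    # Majority vote across the per-feature-set classifiers
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Hypothetical data: three feature sets for 40 training / 10 test subjects
rng = np.random.default_rng(0)
train_sets = [rng.normal(size=(40, d)) for d in (20, 15, 13)]
test_sets = [rng.normal(size=(10, d)) for d in (20, 15, 13)]
y_train = rng.integers(0, 2, size=40)
print(majority_vote_ensemble(train_sets, y_train, test_sets))
```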

  8. miRNAfe: A comprehensive tool for feature extraction in microRNA prediction.

    PubMed

    Yones, Cristian A; Stegmayer, Georgina; Kamenetzky, Laura; Milone, Diego H

    2015-12-01

    miRNAfe is a comprehensive tool to extract features from RNA sequences. It is freely available as a web service, allowing a single access point to almost all state-of-the-art feature extraction methods used today in a variety of works from different authors. It has a very simple user interface, where the user only needs to load a file containing the input sequences and select the features to extract. As a result, the user obtains a text file with the features extracted, which can be used to analyze the sequences or as input to a miRNA prediction software. The tool can calculate up to 80 features where many of them are multidimensional arrays. In order to simplify the web interface, the features have been divided into six pre-defined groups, each one providing information about: primary sequence, secondary structure, thermodynamic stability, statistical stability, conservation between genomes of different species and substrings analysis of the sequences. Additionally, pre-trained classifiers are provided for prediction in different species. All algorithms to extract the features have been validated, comparing the results with the ones obtained from software of the original authors. The source code is freely available for academic use under GPL license at http://sourceforge.net/projects/sourcesinc/files/mirnafe/0.90/. A user-friendly access is provided as web interface at http://fich.unl.edu.ar/sinc/web-demo/mirnafe/. A more configurable web interface can be accessed at http://fich.unl.edu.ar/sinc/web-demo/mirnafe-full/.

  10. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q and Hotelling's T² statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of the state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
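
    A minimal sketch of an LP-SVD-style transform is given below under simplifying assumptions (Yule-Walker estimation of the LP coefficients, a Toeplitz impulse-response matrix, and an arbitrary number of retained singular vectors); it is not the authors' implementation.

        # Sketch: LP coefficients -> impulse response of the LP filter -> left singular vectors as the map.
        import numpy as np
        from scipy.linalg import toeplitz
        from scipy.signal import lfilter

        def lp_svd_features(x, order=10, n_keep=4):
            # x: one EEG channel as a 1-D float array.
            r = np.correlate(x, x, mode='full')[len(x) - 1:]            # autocorrelation sequence
            a = np.linalg.solve(toeplitz(r[:order]), r[1:order + 1])    # Yule-Walker LP coefficients
            impulse = np.zeros(len(x)); impulse[0] = 1.0
            h = lfilter([1.0], np.concatenate(([1.0], -a)), impulse)    # impulse response of 1/A(z)
            H = toeplitz(h, np.zeros(order))                            # impulse-response matrix
            U, _, _ = np.linalg.svd(H, full_matrices=False)
            return U[:, :n_keep].T @ x                                  # project the signal onto the mapping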

  11. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  12. Extraction, modelling, and use of linear features for restitution of airborne hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lee, Changno; Bethel, James S.

    This paper presents an approach for the restitution of airborne hyperspectral imagery with linear features. The approach consisted of semi-automatic line extraction and mathematical modelling of the linear features. First, each line was approximately delineated manually and then refined using dynamic programming. The extracted lines could then be used as control data when ground information for the lines was available, or as constraints under simple assumptions about the ground information of the lines. The experimental results are presented numerically in tables of RMS residuals of check points as well as visually in ortho-rectified images.

  13. Adaptive spectral window sizes for extraction of diagnostic features from optical spectra

    NASA Astrophysics Data System (ADS)

    Kan, Chih-Wen; Lee, Andy Y.; Nieman, Linda T.; Sokolov, Konstantin; Markey, Mia K.

    2010-07-01

    We present an approach to adaptively adjust the spectral window sizes for optical spectra feature extraction. Previous studies extracted features from spectral windows of a fixed width. In our algorithm, piecewise linear regression is used to adaptively adjust the window sizes to find the maximum window size with a reasonable linear fit to the spectrum. This adaptive windowing technique ensures signal linearity within the defined windows; hence, it retains more diagnostic information while using fewer windows. The method was tested on a data set of diffuse reflectance spectra of oral mucosa lesions. Eight features were extracted from each window. We performed classifications using linear discriminant analysis with cross-validation. Using windowing techniques results in better classification performance than not using windowing. The area under the receiver-operating-characteristic curve for windowing techniques was greater than for a nonwindowing technique for both normal versus mild dysplasia (MD) plus severe high-grade dysplasia or carcinoma (SD), i.e., MD+SD, and benign versus MD+SD. Although adaptive and fixed-size windowing perform similarly, adaptive windowing utilizes significantly fewer windows (number of windows per spectrum: 8 versus 16). Because adaptive windows retain most diagnostic information while reducing the number of windows needed for feature extraction, our results suggest that the approach isolates unique diagnostic features in optical spectra.
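
    The window-growing rule can be sketched as follows; the residual tolerance, minimum window size, and function names are assumptions made for illustration rather than the values used in the study.

        # Sketch: grow each spectral window while a straight line still fits the spectrum well.
        import numpy as np

        def adaptive_windows(wavelengths, spectrum, min_size=5, max_resid=0.01):
            windows, start = [], 0
            while start < len(spectrum):
                end = min(start + min_size, len(spectrum))
                while end < len(spectrum):
                    w, s = wavelengths[start:end + 1], spectrum[start:end + 1]
                    coeffs = np.polyfit(w, s, 1)                               # linear fit in this window
                    resid = np.sqrt(np.mean((np.polyval(coeffs, w) - s) ** 2))
                    if resid > max_resid:                                      # linearity broken: stop growing
                        break
                    end += 1
                windows.append((start, end))                                   # [start, end) indices of one window
                start = end
            return windows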

  14. Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Ponnaluru, Gopi Krishna

    2006-01-01

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique. Features are any parameters extracted from the processed measurement data in order to enhance damage detection. The damage feature extraction capability was studied extensively by analyzing various simulation results. The practical significance in structural health monitoring is that detection of small defects at early stages is always desirable. The amount of change in the structure's response due to these small defects was determined to show the level of accuracy needed in the experimental methods. A fine, extensive sensor network could in principle measure the data required for detection, but placing a large number of sensors on a structure is difficult. Therefore, an investigation was conducted using the measurements of a coarse sensor network. White and pink noise, which cover most of the frequency ranges typically encountered in common measuring devices (e.g., accelerometers and strain gauges), were added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and the noisy damage parameter values are used to study the signal feature reconstruction using wavelets. The enhancement of the feature extraction capability was successfully achieved by the wavelet theory.

  15. Application of multi-scale feature extraction to surface defect classification of hot-rolled steels

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Ai, Yong-hao; Wu, Xiu-yong

    2013-01-01

    Feature extraction is essential to the classification of surface defect images. The defects of hot-rolled steels distribute in different directions. Therefore, the methods of multi-scale geometric analysis (MGA) were employed to decompose the image into several directional subbands at several scales. Then, the statistical features of each subband were calculated to produce a high-dimensional feature vector, which was reduced to a lower-dimensional vector by graph embedding algorithms. Finally, support vector machine (SVM) was used for defect classification. The multi-scale feature extraction method was implemented via curvelet transform and kernel locality preserving projections (KLPP). Experiment results show that the proposed method is effective for classifying the surface defects of hot-rolled steels and the total classification rate is up to 97.33%.

  16. Extraction of the acoustic component of a turbulent flow exciting a plate by inverting the vibration problem

    NASA Astrophysics Data System (ADS)

    Lecoq, D.; Pézerat, C.; Thomas, J.-H.; Bi, W. P.

    2014-06-01

    An improvement of the Force Analysis Technique (FAT), an inverse vibration method, is proposed to identify the low wavenumbers, including the acoustic component, of a turbulent flow that excites a plate. This method represents significant progress, since the usual techniques of measurement with flush-mounted sensors are not able to separate the acoustic and the aerodynamic energies of the excitation because the aerodynamic component is too high. Moreover, the main cause of vibration or acoustic radiation of the structure might be the acoustic part, through a phenomenon of spatial coincidence between the acoustic wavelengths and those of the plate. This underlines the need to extract the acoustic part. In this work, numerical experiments are performed to solve both the direct and inverse problems of vibration. The excitation is a turbulent boundary layer and combines the pressure field of the Corcos model and a diffuse acoustic field. These pressures are obtained by a synthesis method based on the Cholesky decomposition of the cross-spectra matrices and are used to excite a plate. The application of the inverse problem FAT, which requires only the vibration data, shows that the method is able to identify and isolate the acoustic part of the excitation. Indeed, the discretization of the inverse operator (the motion equation of the plate) acts as a low-pass wavenumber filter. In addition, this method is simple to implement because it can be applied locally (there is no need to know the boundary conditions), and measurements can be carried out on the opposite side of the plate without affecting the flow. Finally, an improvement of FAT is proposed. It regularizes the inverse problem optimally and automatically by analyzing the mean quadratic pressure of the reconstructed force distribution. This optimized FAT, in the case of the turbulent flow, has the advantage of measuring the acoustic component up to higher frequencies, even in the presence of noise from the aerodynamic component.

  17. System and method for investigating sub-surface features of a rock formation using compressional acoustic sources

    DOEpatents

    Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt; Johnson, Paul A.; Guyer, Robert; Ten Cate, James A.; Le Bas, Pierre-Yves; Larmat, Carene S.

    2016-09-27

    A system and method for investigating rock formations outside a borehole are provided. The method includes generating a first compressional acoustic wave at a first frequency by a first acoustic source, and generating a second compressional acoustic wave at a second frequency by a second acoustic source. The first and the second acoustic sources are arranged within a localized area of the borehole. The first and the second acoustic waves intersect in an intersection volume outside the borehole. The method further includes receiving, at a receiver arranged in the borehole, a third shear acoustic wave at a third frequency, the third shear acoustic wave returning to the borehole due to a non-linear mixing process in a non-linear mixing zone within the intersection volume. The third frequency is equal to the difference between the first frequency and the second frequency.

  18. Computer-aided diagnosis of rheumatoid arthritis with optical tomography, Part 1: feature extraction

    PubMed Central

    Jia, Jingfei; Kim, Hyun K.; Netz, Uwe J.; Blaschke, Sabine; Müller, Gerhard A.

    2013-01-01

    Abstract. This is the first part of a two-part paper on the application of computer-aided diagnosis to diffuse optical tomography (DOT). An approach for extracting heuristic features from DOT images and a method for using these features to diagnose rheumatoid arthritis (RA) are presented. Feature extraction is the focus of Part 1, while the utility of five classification algorithms is evaluated in Part 2. The framework is validated on a set of 219 DOT images of proximal interphalangeal (PIP) joints. Overall, 594 features are extracted from the absorption and scattering images of each joint. Three major findings are deduced. First, DOT images of subjects with RA are statistically different (p<0.05) from images of subjects without RA for over 90% of the features investigated. Second, DOT images of subjects with RA that do not have detectable effusion, erosion, or synovitis (as determined by MRI and ultrasound) are statistically indistinguishable from DOT images of subjects with RA that do exhibit effusion, erosion, or synovitis. Thus, this subset of subjects may be diagnosed with RA from DOT images while they would go undetected by reviews of MRI or ultrasound images. Third, scattering coefficient images yield better one-dimensional classifiers. A total of three features yield a Youden index greater than 0.8. These findings suggest that DOT may be capable of distinguishing between PIP joints that are healthy and those affected by RA with or without effusion, erosion, or synovitis. PMID:23856915

  19. Shape-based and texture-based feature extraction for classification of microcalcifications in mammograms

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Pourabdollah-Nezhad, Siamak; Rafiee Rad, Farshid

    2001-07-01

    This paper presents and compares two image processing methods for differentiating benign from malignant microcalcifications in mammograms. The gold standard method for differentiating benign from malignant microcalcifications is biopsy, which is invasive. The goal of the proposed methods is to reduce the rate of biopsies with negative results. In the first method, we extract 17 shape features from each mammogram. These features are related to the shapes of individual microcalcifications or of their clusters. In the second method, we extract 44 texture features from each mammogram using the co-occurrence method of Haralick. Next, we select the best features from each set using a genetic algorithm, to maximize the area under the ROC curve. This curve is created using a k-nearest neighbor (kNN) classifier and a malignancy criterion. Finally, we evaluate the methods by comparing the ROC curves with the greatest areas obtained using each method. We applied the proposed methods, with different values of k in the kNN classifier, to 74 malignant and 29 benign microcalcification clusters. The truth for each mammogram was established based on the biopsy results. We found the greatest area under the ROC curve for each set of features used in each method. For shape features this area was 0.82 (k = 7), and for Haralick features it was 0.72 (k = 9).

  20. [Research on the methods for electroencephalogram feature extraction based on blind source separation].

    PubMed

    Wang, Jiang; Zhang, Huiyuan; Wang, Lei; Xu, Guizhi

    2014-12-01

    In the present investigation, we studied four methods of blind source separation/independent component analysis (BSS/ICA): AMUSE, SOBI, JADE, and FastICA. We used them for feature extraction from electroencephalogram (EEG) signals in a brain-computer interface (BCI) for classifying spontaneous mental activities comprising four mental tasks: imagined movement of the left hand, right hand, foot, and tongue. Different methods of extracting physiological components were studied and achieved good performance. Then, three combined methods based on SOBI and FastICA for extracting EEG features of motor imagery were proposed. The results showed that combining SOBI and ICA could not only reduce various artifacts and noise but also localize useful sources and improve the accuracy of the BCI. This should facilitate further study of the physiological mechanisms of motor imagery. PMID:25868229
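
    The sketch below shows the general flavour of such a pipeline using scikit-learn's FastICA followed by a simple band-power feature per component; the mu-band limits and array shapes are illustrative assumptions, and the SOBI/AMUSE/JADE variants are not shown.

        # Sketch: FastICA source separation of EEG followed by a per-component band-power feature.
        import numpy as np
        from scipy.signal import welch
        from sklearn.decomposition import FastICA

        def ica_band_power(eeg, sr, n_components=8, lo=8.0, hi=12.0):
            # eeg: array of shape (n_samples, n_channels).
            ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
            sources = ica.fit_transform(eeg)                 # estimated independent components
            f, psd = welch(sources, fs=sr, axis=0)           # power spectral density per component
            band = (f >= lo) & (f <= hi)                     # assumed mu band (8-12 Hz)
            return psd[band].mean(axis=0)                    # one band-power feature per component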

  1. Feature extraction using adaptive multiwavelets and synthetic detection index for rotor fault diagnosis of rotating machinery

    NASA Astrophysics Data System (ADS)

    Lu, Na; Xiao, Zhihuai; Malik, O. P.

    2015-02-01

    State identification to diagnose the condition of rotating machinery is often converted to a classification problem of values of non-dimensional symptom parameters (NSPs). To improve the sensitivity of the NSPs to the changes in machine condition, a novel feature extraction method based on adaptive multiwavelets and the synthetic detection index (SDI) is proposed in this paper. Based on the SDI maximization principle, optimal multiwavelets are searched by genetic algorithms (GAs) from an adaptive multiwavelets library and used for extracting fault features from vibration signals. By the optimal multiwavelets, more sensitive NSPs can be extracted. To examine the effectiveness of the optimal multiwavelets, conventional methods are used for comparison study. The obtained NSPs are fed into K-means classifier to diagnose rotor faults. The results show that the proposed method can effectively improve the sensitivity of the NSPs and achieve a higher discrimination rate for rotor fault diagnosis than the conventional methods.

  2. Automatic extraction of initial moving object based on advanced feature and video analysis

    NASA Astrophysics Data System (ADS)

    Liu, Mao-Ying; Dai, Qiong-Hai; Liu, Xiao-Dong; Er, Gui-Hua

    2005-07-01

    Traditionally, video segmentation extracts objects using low-level features such as color, texture, edge, motion, and optical flow. This paper proposes that the connectivity of object motion is an advanced feature of a moving video object, because it reflects the semantic meaning of the object to some extent. It can be fully represented in the cumulated difference image, which combines a certain number of interframe difference images. Based on this principle, a novel system is designed to extract the initial moving object automatically. The system includes three key innovations: 1) The system operates on the cumulated difference image, which makes the object more prominent than the background noise. Object extraction is based on the connectivity of object motion, which guarantees the integrity of the extracted object while eliminating large background regions that cannot be removed by conventional change detection methods, for example, intense-noise regions and shadow regions that are not tightly connected to the object. 2) Video sequence analysis is performed ahead of video segmentation, and appropriate object extraction methods are adopted according to the characteristics of the background noise and object motion. 3) An adaptive threshold is automatically determined on the cumulated difference image after strong noise is removed. The threshold determined in this way is more reasonable; with it, most noise can be eliminated while small-motion regions of the object are preserved. Results show that this system can extract objects in different kinds of sequences automatically, promptly and properly. Thus, the system is well suited for real-time video applications.

  3. [Research on non-rigid medical image registration algorithm based on SIFT feature extraction].

    PubMed

    Wang, Anna; Lu, Dan; Wang, Zhe; Fang, Zhizhen

    2010-08-01

    For non-rigid registration of medical images, the paper gives a practical feature-point matching algorithm: an image registration algorithm based on the scale-invariant feature transform (SIFT). The algorithm exploits the invariance of image features to translation, rotation and affine transformation in scale space to extract the image feature points. A bidirectional matching algorithm is chosen to establish the matching relations between the images, so the accuracy of image registration is improved. On this basis, an affine transform is chosen to complete the non-rigid registration, and a normalized mutual information measure and a PSO optimization algorithm are also chosen to optimize the registration process. The experimental results show that the method can achieve better registration results than the method based on mutual information.

  4. A Feature Extraction Method for Fault Classification of Rolling Bearing based on PCA

    NASA Astrophysics Data System (ADS)

    Wang, Fengtao; Sun, Jian; Yan, Dawen; Zhang, Shenghua; Cui, Liming; Xu, Yong

    2015-07-01

    This paper discusses fault feature selection using principal component analysis (PCA) for bearing fault classification. Multiple features selected from the time-frequency domain parameters of vibration signals are analyzed. First, time-domain statistical features, such as root mean square and kurtosis, are calculated; meanwhile, frequency-domain statistical features are extracted from the spectrum obtained by Fourier and Hilbert transformation. Then PCA is used to reduce the dimension of the feature vectors drawn from the raw vibration signals, which improves the real-time performance and accuracy of the fault diagnosis. Finally, a fuzzy C-means (FCM) model is established to implement the diagnosis of rolling bearing faults. Practical rolling bearing experiment data are used to verify the effectiveness of the proposed method.
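
    A rough sketch of this pipeline is shown below using scikit-learn; the specific feature list is abbreviated, and K-means is used as a stand-in for the fuzzy C-means step, so it illustrates the structure rather than reproducing the paper's method.

        # Sketch: statistical features per vibration segment -> PCA reduction -> clustering of fault states.
        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def segment_features(seg, sr):
            spectrum = np.abs(np.fft.rfft(seg))
            freqs = np.fft.rfftfreq(len(seg), d=1.0 / sr)
            centroid = (freqs * spectrum).sum() / spectrum.sum()     # spectral centroid
            return [np.sqrt(np.mean(seg ** 2)),                      # root mean square
                    kurtosis(seg),                                   # kurtosis
                    seg.max() - seg.min(),                           # peak-to-peak value
                    centroid]

        def diagnose(segments, sr, n_components=2, n_states=3):
            X = np.array([segment_features(s, sr) for s in segments])
            Z = PCA(n_components=n_components).fit_transform(X)      # dimension reduction
            return KMeans(n_clusters=n_states, n_init=10).fit_predict(Z)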

  5. Multi-scale Analysis of High Resolution Topography: Feature Extraction and Identification of Landscape Characteristic Scales

    NASA Astrophysics Data System (ADS)

    Passalacqua, P.; Sangireddy, H.; Stark, C. P.

    2015-12-01

    With the advent of digital terrain data, detailed information on terrain characteristics and on scale and location of geomorphic features is available over extended areas. Our ability to observe landscapes and quantify topographic patterns has greatly improved, including the estimation of fluxes of mass and energy across landscapes. Challenges still remain in the analysis of high resolution topography data; the presence of features such as roads, for example, challenges classic methods for feature extraction and large data volumes require computationally efficient extraction and analysis methods. Moreover, opportunities exist to define new robust metrics of landscape characterization for landscape comparison and model validation. In this presentation we cover recent research in multi-scale and objective analysis of high resolution topography data. We show how the analysis of the probability density function of topographic attributes such as slope, curvature, and topographic index contains useful information for feature localization and extraction. The analysis of how the distributions change across scales, quantified by the behavior of modal values and interquartile range, allows the identification of landscape characteristic scales, such as terrain roughness. The methods are introduced on synthetic signals in one and two dimensions and then applied to a variety of landscapes of different characteristics. Validation of the methods includes the analysis of modeled landscapes where the noise distribution is known and features of interest easily measured.

  6. Feature Extraction from Simulations and Experiments: Preliminary Results Using a Fluid Mix Problem

    SciTech Connect

    Kamath, C; Nguyen, T

    2005-01-04

    Code validation, or comparing the output of computer simulations to experiments, is necessary to determine which simulation is a better approximation to an experiment. It can also be used to determine how the input parameters in a simulation can be modified to yield output that is closer to the experiment. In this report, we discuss our experiences in the use of image processing techniques for extracting features from 2-D simulations and experiments. These features can be used in comparing the output of simulations to experiments, or to other simulations. We first describe the problem domain and the data. We next explain the need for cleaning or denoising the experimental data and discuss the performance of different techniques. Finally, we discuss the features of interest and describe how they can be extracted from the data. The focus in this report is on extracting features from experimental and simulation data for the purpose of code validation; the actual interpretation of these features and their use in code validation is left to the domain experts.

  7. A Novel Hyperspectral Feature-Extraction Algorithm Based on Waveform Resolution for Raisin Classification.

    PubMed

    Zhao, Yun; Xu, Xing; He, Yong

    2015-12-01

    Near-infrared hyperspectral imaging technology was adopted in this study to discriminate among varieties of raisins produced in the Xinjiang Uygur Autonomous Region, China. Eight varieties of raisins were used in the research, and the wavelengths of the hyperspectral images ranged from 900 to 1700 nm. A novel waveform-resolution method is proposed to reduce the hyperspectral data and extract the features. The waveform-resolution method compresses the original hyperspectral data for one pixel into five amplitudes, five frequencies, and five phases, for 15 feature values in all. Based on the 15 features, a neural network with three layers (eight neurons in the first layer, three neurons in the hidden layer, and one neuron in the output layer) was established to determine the variety of the raisins. The accuracies of the model for the testing data set, presented as sensitivity, precision, and specificity, are 93.38%, 81.92%, and 99.06%. This is higher than the accuracy of a model using a conventional principal component analysis feature-extraction method combined with a neural network, which has a sensitivity of 82.13%, precision of 82.22%, and specificity of 97.45%. The results indicate that the proposed waveform-resolution feature-extracting method combined with hyperspectral imaging technology is an efficient method for determining varieties of raisins. PMID:26555391
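
    One plausible reading of the 15-value descriptor (five amplitude/frequency/phase triples per pixel spectrum) is sketched below with a discrete Fourier transform; the use of the strongest rfft components is an assumption for illustration, not necessarily the authors' exact waveform-resolution procedure.

        # Sketch: keep the five strongest Fourier components of a pixel spectrum as 15 features.
        import numpy as np

        def waveform_features(spectrum, n_peaks=5):
            coeffs = np.fft.rfft(spectrum - spectrum.mean())
            idx = np.argsort(np.abs(coeffs))[::-1][:n_peaks]          # five strongest components
            amplitudes = np.abs(coeffs[idx])
            frequencies = idx.astype(float) / len(spectrum)           # cycles per band index
            phases = np.angle(coeffs[idx])
            return np.concatenate([amplitudes, frequencies, phases])  # 5 + 5 + 5 = 15 feature values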

  8. Features extraction of EMG signal using time domain analysis for arm rehabilitation device

    NASA Astrophysics Data System (ADS)

    Jali, Mohd Hafiz; Ibrahim, Iffah Masturah; Sulaima, Mohamad Fani; Bukhari, W. M.; Izzuddin, Tarmizi Ahmad; Nasir, Mohamad Na'im

    2015-05-01

    A rehabilitation device is used as an exoskeleton for people who have lost the function of a limb. An arm rehabilitation device may support the rehabilitation program of those who suffer from arm disability. The device used to facilitate the tasks of the program should improve the electrical activity in the motor unit and minimize the mental effort of the user. Electromyography (EMG) is a technique for analyzing the electrical activity in musculoskeletal systems. In a disabled person, the electrical activity in the muscles fails to contract the muscle for movement. In order to prevent paralyzed muscles from developing spasticity, movements should be achievable with minimal mental effort. Therefore, the design of the rehabilitation device should be based on the analysis of surface EMG signals from able-bodied people, which can then be implemented in the device. The signals were collected according to the procedure of surface electromyography for non-invasive assessment of muscles (SENIAM). The EMG signals are used to set the movement patterns of the arm rehabilitation device. The filtered EMG signal was processed to extract the time-domain features Standard Deviation (STD), Mean Absolute Value (MAV) and Root Mean Square (RMS). The extraction of the EMG data is important to obtain a reduced feature vector with little error. In order to determine the best features for each movement, several extraction trials were carried out and the features with the least error were selected. The accurate features can be used in future work on real-time rehabilitation control.
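
    The three time-domain features named above are straightforward to compute; the sketch below does so over sliding windows, with the window and step lengths chosen arbitrarily for illustration.

        # Sketch: STD, MAV and RMS of an EMG signal computed over sliding windows.
        import numpy as np

        def emg_features(signal, win=256, step=128):
            feats = []
            for start in range(0, len(signal) - win + 1, step):
                w = signal[start:start + win]
                feats.append([np.std(w),                     # Standard Deviation (STD)
                              np.mean(np.abs(w)),            # Mean Absolute Value (MAV)
                              np.sqrt(np.mean(w ** 2))])     # Root Mean Square (RMS)
            return np.array(feats)                           # one feature row per window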

  9. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
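
    A compact sketch of the two-step pipeline is given below using PyWavelets and scikit-image; the wavelet, Gabor frequency, number of orientations, and histogram size are assumptions, and the final SVM step is omitted.

        # Sketch: HH subband from a 2-D DWT, Gabor filtering at several orientations,
        # then entropy and uniformity of each filtered result as texture features.
        import numpy as np
        import pywt
        from skimage.filters import gabor

        def dwt_gabor_features(image, frequency=0.2, n_orient=4, bins=64):
            _, (_, _, hh) = pywt.dwt2(image, 'db1')          # HH high-frequency subband
            feats = []
            for k in range(n_orient):
                real, _ = gabor(hh, frequency=frequency, theta=k * np.pi / n_orient)
                hist, _ = np.histogram(real, bins=bins)
                p = hist[hist > 0] / hist.sum()              # normalised histogram
                feats.append([-(p * np.log2(p)).sum(),       # entropy
                              (p ** 2).sum()])               # uniformity (energy)
            return np.array(feats).ravel()                   # feature vector for the classifier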

  10. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis.

    PubMed

    Toulminet, Gwenaëlle; Bertozzi, Massimo; Mousset, Stéphane; Bensrhair, Abdelaziz; Broggi, Alberto

    2006-08-01

    This paper presents a stereo vision system for the detection and distance computation of a preceding vehicle. It is divided into two major steps. Initially, a stereo vision-based algorithm is used to extract relevant three-dimensional (3-D) features in the scene; these features are then investigated further in order to select the ones that belong to vertical objects only and not to the road or background. These 3-D vertical features are used as a starting point for preceding vehicle detection: using a symmetry operator, a match against a simplified model of a rear vehicle's shape is performed with a monocular vision-based approach that allows the identification of a preceding vehicle. In addition, using the 3-D information previously extracted, an accurate distance computation is performed.

  11. Extraction of ABCD rule features from skin lesions images with smartphone.

    PubMed

    Rosado, Luís; Castro, Rui; Ferreira, Liliana; Ferreira, Márcia

    2012-01-01

    One of the greatest challenges in dermatology today is the early detection of melanoma since the success rates of curing this type of cancer are very high if detected during the early stages of its development. The main objective of the work presented in this paper is to create a prototype of a patient-oriented system for skin lesion analysis using a smartphone. This work aims at implementing a self-monitoring system that collects, processes, and stores information of skin lesions through the automatic extraction of specific visual features. The selection of the features was based on the ABCD rule, which considers 4 visual criteria considered highly relevant for the detection of malignant melanoma. The algorithms used to extract these features are briefly described and the results achieved using images taken from the smartphone camera are discussed.

  12. Purification and feature extraction of shaft orbits for diagnosing large rotating machinery

    NASA Astrophysics Data System (ADS)

    Shi, D. F.; Wang, W. J.; Unsworth, P. J.; Qu, L. S.

    2005-01-01

    Vibration-based diagnosis has been employed as a powerful tool in maintaining the operating efficiency and safety of large rotating machinery. However, due to some inherent shortcomings, traditional vibration signal processing techniques are not accurate enough to extract the features of malfunctions. In this paper, a high-resolution spectrum is first proposed to calculate the amplitude, frequency and phase details of sinusoidal harmonic and sub-harmonic vibration in large rotating machinery. Secondly, on the basis of the high-resolution spectrum, a purified shaft orbit is reconstructed to remove the interference terms. Moment and curve features, which are invariant to translation, scaling and rotation of the shaft orbit, are introduced to extract the features from the purified vibration orbit. This novel scheme is shown to be very effective and reliable in diagnosing several types of malfunctions in gas turbines and compressors under operating conditions as well as in the run-up stages.

  13. The extraction and use of facial features in low bit-rate visual communication.

    PubMed

    Pearson, D

    1992-01-29

    A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.

  14. Lexical and Acoustic Features of Maternal Utterances Addressing Preverbal Infants in Picture Book Reading Link to 5-Year-Old Children's Language Development

    ERIC Educational Resources Information Center

    Liu, Huei-Mei

    2014-01-01

    Research Findings: I examined the long-term association between the lexical and acoustic features of maternal utterances during book reading and the language skills of infants and children. Maternal utterances were collected from 22 mother-child dyads in picture book-reading episodes when children were ages 6-12 months and 5 years. Two aspects of…

  15. Acoustic Feature Optimization Based on F-Ratio for Robust Speech Recognition

    NASA Astrophysics Data System (ADS)

    Sun, Yanqing; Zhou, Yu; Zhao, Qingwei; Yan, Yonghong

    This paper focuses on the problem of performance degradation in mismatched speech recognition. The F-Ratio analysis method is utilized to analyze the significance of different frequency bands for speech unit classification, and we find that frequencies around 1 kHz and 3 kHz, which are the upper bounds of the first and second formants for most vowels, should be emphasized in comparison to the Mel-frequency cepstral coefficients (MFCC). The analysis result is further observed to be stable in several typical mismatched situations. Similar to the Mel-frequency scale, another frequency scale called the F-Ratio scale is thus proposed to optimize the filter bank design for the MFCC features and to make each subband contain equal significance for speech unit classification. Under comparable conditions, with the modified features we obtain relative decreases in sentence error rate compared with the MFCC of 43.20% for emotion-affected speech recognition, 35.54% and 23.03% for noisy speech recognition at 15 dB and 0 dB SNR (signal-to-noise ratio), respectively, and 64.50% for three years of 863 test data. The application of the F-Ratio analysis to the clean training set of the Aurora2 database demonstrates its robustness over languages, texts and sampling rates.
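
    The band-wise F-Ratio used above is essentially a one-way analysis-of-variance statistic computed per frequency band; the sketch below computes it from framewise band energies and class labels, with array shapes assumed for illustration.

        # Sketch: per-band F-Ratio = between-class variance / within-class variance of band energies.
        import numpy as np

        def f_ratio(band_energies, labels):
            # band_energies: (n_frames, n_bands); labels: speech-unit class of each frame.
            classes = np.unique(labels)
            grand_mean = band_energies.mean(axis=0)
            between = np.zeros(band_energies.shape[1])
            within = np.zeros(band_energies.shape[1])
            for c in classes:
                grp = band_energies[labels == c]
                between += len(grp) * (grp.mean(axis=0) - grand_mean) ** 2
                within += ((grp - grp.mean(axis=0)) ** 2).sum(axis=0)
            dof_b, dof_w = len(classes) - 1, len(band_energies) - len(classes)
            return (between / dof_b) / (within / dof_w)      # large values mark discriminative bands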

  16. SU-E-J-245: Sensitivity of FDG PET Feature Analysis in Multi-Plane Vs. Single-Plane Extraction

    SciTech Connect

    Harmon, S; Jeraj, R; Galavis, P

    2015-06-15

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as the percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs 3D (Wilcoxon, α < 0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation of feature values extracted in 2D and 3D was poor (R < 0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R| > 0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlations in 2D (σ = 6%) than in 3D (σ < 1%) extraction. Conclusion: Sensitivity

  17. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    PubMed

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database.

  19. Acoustic Longitudinal Field NIF Optic Feature Detection Map Using Time-Reversal & MUSIC

    SciTech Connect

    Lehman, S K

    2006-02-09

    We developed an ultrasonic longitudinal field time-reversal and MUltiple SIgnal Classification (MUSIC) based detection algorithm for identifying and mapping flaws in fused silica NIF optics. The algorithm requires a fully multistatic data set, that is, one with multiple, independently operated, spatially diverse transducers, each transmitter of which, in succession, launches a pulse into the optic while the scattered signal is measured and recorded at every receiver. We have successfully localized engineered "defects" larger than 1 mm in an optic. We confirmed detection and localization of 3 mm and 5 mm features in experimental data, and of a 0.5 mm feature in simulated data with sufficiently high signal-to-noise ratio. We present the theory, experimental results, and simulated results.

  20. Local rigid registration for multimodal texture feature extraction from medical images

    NASA Astrophysics Data System (ADS)

    Steger, Sebastian

    2011-03-01

    The joint extraction of texture features from medical images of different modalities requires an accurate image registration at the target structures. In many cases rigid registration of the entire images does not achieve the desired accuracy whereas deformable registration is too complex and may result in undesired deformations. This paper presents a novel region of interest alignment approach based on local rigid registration enabling image fusion for multimodal texture feature extraction. First rigid registration on the entire images is performed to obtain an initial guess. Then small cubic regions around the target structure are clipped from all images and individually rigidly registered. The approach was applied to extract texture features in clinically acquired CT and MR images from lymph nodes in the oropharynx for an oral cancer reoccurrence prediction framework. Visual inspection showed that in all of the 30 cases at least a subtle misalignment was perceivable for the globally rigidly aligned images. After applying the presented approach the alignment of the target structure significantly improved in 19 cases. In 12 cases no alignment mismatch whatsoever was perceptible without requiring the complexity of deformable registration and without deforming the target structure. Further investigation showed that if the resolutions of the individual modalities differ significantly, partial volume effects occur, diminishing the significance of the multimodal features even for perfectly aligned images.

  1. Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos

    NASA Astrophysics Data System (ADS)

    Miao, X.; Xie, H.

    2015-12-01

    High resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features, such as melt ponds, submerged ice, water, ice/snow, and pressure ridges, is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) a polygon neighbor analysis separates melt ponds and submerged ice based on their spatial relationship; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of the aerial photos, and their uncertainties are estimated.

  2. Dynamic-Feature Extraction, Attribution and Reconstruction (DEAR) Method for Power System Model Reduction

    SciTech Connect

    Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.

    2014-09-04

    In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving a reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of the system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than traditional coherency aggregation methods.
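
    A highly simplified sketch of the three DEAR steps is given below: SVD of the measured responses, selection of characteristic generators by correlation with the dominant singular vectors, and least-squares reconstruction of all generators from the characteristic ones. The similarity measure and the number of retained features are assumptions for illustration, not the paper's exact procedure.

        # Sketch: extraction (SVD), attribution (best-matching generators), reconstruction (least squares).
        import numpy as np

        def dear_reduce(X, n_features=3):
            # X: (n_samples, n_generators) post-disturbance responses, e.g. rotor angles over time.
            Xc = X - X.mean(axis=0)
            U, s, _ = np.linalg.svd(Xc, full_matrices=False)
            features = U[:, :n_features] * s[:n_features]               # dominant dynamic features
            # Attribution: generator whose response best matches each feature (max |correlation|).
            corr = np.corrcoef(features.T, Xc.T)[:n_features, n_features:]
            chars = np.unique(np.abs(corr).argmax(axis=1))              # characteristic generators
            # Reconstruction: every generator as a linear combination of the characteristic ones.
            coeffs, *_ = np.linalg.lstsq(Xc[:, chars], Xc, rcond=None)
            return chars, coeffs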

  3. Mine detection using variational methods for image enhancement and feature extraction

    NASA Astrophysics Data System (ADS)

    Szymczak, William G.; Guo, Weiming; Rogers, Joel Clark W.

    1998-09-01

    A critical part of automatic classification algorithms is the extraction of features which distinguish targets from background noise and clutter. The focus of this paper is the use of variational methods for improving the classification of sea mines in both side-scan sonar and laser line-scan images. These methods are based on minimizing a functional of the image intensity. Examples include Total Variation Minimization (TVM), which is very effective for reducing the noise of an image without compromising its edge features, and Mumford-Shah segmentation, which, in its simplest form, provides an optimal piecewise constant partition of the image. For the sonar side-scan images it is shown that a combination of these two variational methods (first reducing the noise using TVM, then applying segmentation) outperforms the use of either one individually for the extraction of mine-like features. Multichannel segmentation based on a wavelet decomposition is also used effectively to declutter a sonar image. Finally, feature extraction and classification using segmentation are demonstrated on laser line-scan images of mines on a cluttered sea floor.

  4. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also have some disadvantages for object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark data set provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction using LiDAR data.

  5. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and on features within proximity to the roads as input for evaluating and prioritizing new road or improvement projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operations require more than centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that uses data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road network together with the estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  6. A new procedure for extracting fault feature of multi-frequency signal from rotating machinery

    NASA Astrophysics Data System (ADS)

    Xiong, Xin; Yang, Shixi; Gan, Chunbiao

    2012-10-01

    Modern rotating machinery is built as a multi-rotor and multi-bearing system, and complex factors such as rub or misalignment faults can lead to high nonlinearity of the system and non-stationarity of vibration signals. As a wide spectrum of frequency components is likely generated by these complex factors, feature extraction becomes very important for fault diagnosis of a rotor system, e.g., rotor-to-stator rub and rotor misalignment. In recent years, the Hilbert-Huang transform (HHT), which combines the empirical mode decomposition (EMD) algorithm with the Hilbert transform (HT), has been commonly used in vibration signal analysis and turns out to be very effective in dealing with non-stationary signals. Nevertheless, most intrinsic mode functions (IMFs) from the EMD are multi-frequency, and the extracted instantaneous frequency (IF) curves usually show irregularities, which makes it difficult to interpret these features in the HHT spectrogram. In this study, a new procedure, combining the customary HHT with a fourth-order spectral analysis tool named the Kurtogram, is developed to extract high-frequency features from several kinds of faulty signals, where the Kurtogram is applied to locate the non-stationary intra- and inter-wave modulation components in the original signals and produce more monochromatic IMFs. It is shown that the newly developed feature extraction procedure can accurately detect and characterize the fault feature information hidden in a multi-frequency signal, which is validated by a rub test on a rotor-bearing assembly and a misalignment signal test on a turbo-compressor machine set.

  7. Techniques for classifying acoustic resonant spectra

    SciTech Connect

    Roberts, R.S.; Lewis, P.S.; Chen, J.T.; Vela, O.A.

    1995-12-31

    A second-generation nondestructive evaluation (NDE) system that discriminates between different types of chemical munitions is under development. The NDE system extracts features from the acoustic spectra of known munitions, builds templates from these features, and performs classification by comparing features extracted from an unknown munition to a template library. Improvements over first-generation feature extraction template construction and classification algorithms are reported. Results are presented on the performance of the system and a large data set collected from surrogate-filled munitions.

  8. Water Extraction in High Resolution Remote Sensing Image Based on Hierarchical Spectrum and Shape Features

    NASA Astrophysics Data System (ADS)

    Li, Bangyu; Zhang, Hui; Xu, Fanjiang

    2014-03-01

    This paper addresses the problem of water extraction from high resolution remote sensing images (including R, G, B, and NIR channels), which has drawn considerable attention in recent years. Previous work on water extraction has mainly faced two difficulties: 1) it is difficult to obtain an accurate water boundary position when low resolution images are used, and 2) as in other image-based object classification problems, the phenomena of "different objects, same image" and "different images, same object" affect water extraction. Shadows of elevated objects (e.g. buildings, bridges, towers and trees) scattered across the remote sensing image are typical noise objects for water extraction; in many cases it is difficult to discriminate between water and shadow in a remote sensing image, especially in urban regions. We propose a water extraction method with two hierarchies: statistical features of the spectral characteristics based on image segmentation, and shape features based on shadow removal. In the first hierarchy, the Statistical Region Merging (SRM) algorithm is adopted for image segmentation. SRM includes two key steps: sorting adjacent regions according to a pre-ascertained sort function, and merging adjacent regions based on a pre-ascertained merging predicate. In the original SRM the sorting step is performed only once during the whole process, without considering the changes caused by merging, which may lead to imprecise results. We therefore modify SRM with dynamic sorting, which repeats the sorting step whenever merging causes large changes in the adjacent regions. To achieve robust segmentation, region merging uses six features (the four image bands, the Normalized Difference Water Index (NDWI), and the Normalized Saturation-Value Difference Index (NSVDI)). All of these features contribute to segmenting the image into object regions, and NDWI and NSVDI help discriminate between water and some shadows. In the second hierarchy, we adopt
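
    A minimal sketch of the NDWI term used among the six segmentation features above (McFeeters' green/NIR formulation); the band arrays are assumed to be co-registered float arrays, and the epsilon guard is an implementation convenience, not part of the paper.

```python
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    return (green - nir) / (green + nir + eps)

# Water pixels tend toward positive NDWI; vegetation and many shadows toward negative.
green = np.random.rand(4, 4)
nir = np.random.rand(4, 4)
print(ndwi(green, nir))
```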

  9. High Resolution Urban Feature Extraction for Global Population Mapping using High Performance Computing

    SciTech Connect

    Vijayaraj, Veeraraghavan; Bright, Eddie A; Bhaduri, Budhendra L

    2007-01-01

    The advent of high spatial resolution satellite imagery such as QuickBird (0.6 meter) and IKONOS (1 meter) has provided a new data source for high resolution urban land cover mapping. Extracting accurate urban regions from high resolution images has many applications and is essential to the population mapping efforts of Oak Ridge National Laboratory's (ORNL) LandScan population distribution program. This paper discusses an automated parallel algorithm that has been implemented in a high performance computing environment to extract urban regions from high resolution images using texture and spectral features.

  10. Human action classification using adaptive key frame interval for feature extraction

    NASA Astrophysics Data System (ADS)

    Lertniphonphan, Kanokphan; Aramvith, Supavadee; Chalidabhongse, Thanarat H.

    2016-01-01

    Human action classification based on adaptive key frame interval (AKFI) feature extraction is presented. Since the period of human movement differs from action to action, this work considers action intervals that contain intensive and compact motion information. We specify the AKFI by analyzing the amount of motion over time. A key frame is defined as a local minimum of interframe motion, which is computed by frame differencing between consecutive frames. Once key frames are detected, the features within a segmented period are encoded by an adaptive motion history image and a key pose history image. The action representation consists of the local orientation histogram of the features during the AKFI. Experimental results on the Weizmann, KTH, and UT Interaction datasets demonstrate that the features can effectively classify actions and can classify irregular cases of walking, in comparison with other well-known algorithms.
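
    The sketch below illustrates the key-frame detection step described above: inter-frame motion is measured by frame differencing and key frames are taken at local minima of that motion curve. The synthetic (T, H, W) grayscale clip and the simple motion measure are illustrative assumptions.

```python
import numpy as np

def key_frame_indices(frames: np.ndarray) -> np.ndarray:
    """Return indices where inter-frame motion is a local minimum."""
    motion = np.abs(np.diff(frames.astype(np.float64), axis=0)).sum(axis=(1, 2))
    is_min = (motion[1:-1] < motion[:-2]) & (motion[1:-1] < motion[2:])
    return np.where(is_min)[0] + 1

frames = np.random.rand(20, 64, 64)   # stand-in video clip
print(key_frame_indices(frames))
```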

  11. Extraction of informative cell features by segmentation of densely clustered tissue images.

    PubMed

    Kothari, Sonal; Chaudry, Qaiser; Wang, May D

    2009-01-01

    This paper presents a fast methodology for the estimation of informative cell features from densely clustered RGB tissue images. The features estimated include nuclei count, nuclei size distribution, nuclei eccentricity (roundness) distribution, nuclei closeness distribution, and cluster size distribution. Our methodology is a three-step technique. Firstly, we generate a binary nuclei mask from an RGB tissue image by color segmentation. Secondly, we segment nuclei clusters present in the binary mask into individual nuclei by concavity detection and ellipse fitting. Finally, we estimate informative features for every nucleus and their distributions for the complete image. The main focus of our work is the development of a fast and accurate nuclei cluster segmentation technique for densely clustered tissue images. We also developed a simple graphical user interface (GUI) for our application, which requires minimal user interaction and can efficiently extract features from nuclei clusters, making it feasible for clinical applications (less than 2 minutes for a 1.9 megapixel tissue image).

  12. The effects of compressive sensing on extracted features from tri-axial swallowing accelerometry signals

    NASA Astrophysics Data System (ADS)

    Sejdić, Ervin; Movahedi, Faezeh; Zhang, Zhenwei; Kurosu, Atsuko; Coyle, James L.

    2016-05-01

    Acquiring swallowing accelerometry signals using a compressive sensing scheme may be a desirable approach for monitoring swallowing safety over longer periods of time. However, it needs to be ensured that signal characteristics can be recovered accurately from the compressed samples. In this paper, we considered this issue by examining the effects of the number of acquired compressed samples on the calculated swallowing accelerometry signal features. We used tri-axial swallowing accelerometry signals acquired from seventeen stroke patients (106 swallows in total). From the acquired signals, we extracted typically considered signal features from the time, frequency and time-frequency domains. Next, we compared these features between the original signals (sampled using traditional sampling schemes) and the compressively sampled signals. Our results have shown that we can obtain accurate estimates of signal features even when using only a third of the original samples.

  13. Specific Features of Destabilization of the Wave Profile During Reflection of an Intense Acoustic Beam from a Soft Boundary

    NASA Astrophysics Data System (ADS)

    Deryabin, M. S.; Kasyanov, D. A.; Kurin, V. V.; Garasyov, M. A.

    2016-05-01

    We show that a significant energy redistribution occurs in the spectrum of reflected nonlinear waves, when an intense acoustic beam is reflected from an acoustically soft boundary, which manifests itself at short wave distances from a reflecting boundary. This effect leads to the appearance of extrema in the distributions of the amplitude and intensity of the field of the reflected acoustic beam near the reflecting boundary. The results of physical experiments are confirmed by numerical modeling of the process of transformation of nonlinear waves reflected from an acoustically soft boundary. Numerical modeling was performed by means of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation.

  14. Impact of postharvest dehydration process of winegrapes on mechanical and acoustic properties of the seeds and their relationship with flavanol extraction during simulated maceration.

    PubMed

    Río Segade, Susana; Torchio, Fabrizio; Gerbi, Vincenzo; Quijada-Morín, Natalia; García-Estévez, Ignacio; Giacosa, Simone; Escribano-Bailón, M Teresa; Rolle, Luca

    2016-05-15

    This study represents the first time that the extraction of phenolic compounds from the seeds is assessed from instrumental texture properties for dehydrated grapes. Nebbiolo winegrapes were postharvest dehydrated at 20°C and 41% relative humidity. During the dehydration process, sampling was performed at 15%, 30%, 45% and 60% weight loss. The extractable fraction and extractability of phenolic compounds from the seeds were determined after simulated maceration. The evolution of mechanical and acoustic attributes of intact seeds was also determined during grape dehydration to evaluate how these changes affected the extraction of phenolic compounds. The extractable content and extractability of monomeric flavanols and proanthocyanidins, as well as the galloylation percentage of flavanols, might be predicted easily and quickly from the mechanical and acoustic properties of intact seeds. This would help in decision-making on the optimal dehydration level of winegrapes and the best management of winemaking of dehydrated grapes.

  15. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh-scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence intervals of ∼ ± 0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were therefore compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional

  16. Extracting the driving force from ozone data using slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, Geli; Yang, Peicai; Zhou, Xiuji

    2016-05-01

    Slow feature analysis (SFA) is a technique for extracting slowly varying features from a quickly varying signal. In this work, we apply SFA to total ozone data from Arosa, Switzerland. The results show that the signal of volcanic eruptions can be found in the extracted driving force, and wavelet analysis of this driving force shows two main dominant time scales, which may be connected with the effects of climate modes such as the North Atlantic Oscillation (NAO) and with solar activity. The findings of this study contribute to our understanding of causality inferred from observed climate data.
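
    A minimal linear SFA sketch in the spirit of the technique above: center and whiten the signal, then keep the directions along which the time derivative has the smallest variance. The input layout (T samples by d channels), the eigenvalue guard, and the synthetic data are assumptions, and the nonlinear expansions used in full SFA are omitted.

```python
import numpy as np

def linear_sfa(X: np.ndarray, n_components: int = 1) -> np.ndarray:
    X = X - X.mean(axis=0)
    # whiten via the eigendecomposition of the covariance matrix
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    evals = np.clip(evals, 1e-12, None)          # numerical guard
    Z = X @ (evecs / np.sqrt(evals))             # whitened signal
    # slow directions: derivative-covariance eigenvectors with smallest eigenvalues
    d_evals, d_evecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    return Z @ d_evecs[:, :n_components]         # slowest-varying output signals

t = np.linspace(0, 100, 5000)
X = np.stack([np.sin(0.05 * t) + 0.3 * np.random.randn(t.size),   # slow driver + noise
              np.sin(5.0 * t)], axis=1)                            # fast component
print(linear_sfa(X, 1)[:3, 0])
```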

  17. Complex Biological Event Extraction from Full Text using Signatures of Linguistic and Semantic Features

    SciTech Connect

    McGrath, Liam R.; Domico, Kelly O.; Corley, Courtney D.; Webb-Robertson, Bobbie-Jo M.

    2011-06-24

    Building on technical advances from the BioNLP 2009 Shared Task Challenge, the 2011 challenge sets forth to generalize techniques to other complex biological event extraction tasks. In this paper, we present the implementation and evaluation of a signature-based machine-learning technique to predict events from full texts of infectious disease documents. Specifically, our approach uses novel signatures composed of traditional linguistic features and semantic knowledge to predict event triggers and their candidate arguments. Using a leave-one-out analysis, we report the contribution of linguistic and shallow semantic features to trigger prediction and candidate argument extraction. Lastly, we examine the evaluations and posit causes of errors in the infectious disease track subtasks.

  18. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of Things (IoT) is a kind of intelligent network that can be used to locate, track, identify, and supervise people and objects. One of the important core technologies of the intelligent visual Internet of Things (IVIOT) is the intelligent visual tag system. In this paper, visual feature extraction and the establishment of visual tags for the human face are investigated based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt a support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each classified face. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can display the visual tags of objects conveniently.
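
    A sketch of the PCA-then-SVM recognition pipeline described above, written with scikit-learn. The flattened image size matches ORL (92 x 112 pixels, 40 subjects x 10 images), but the random stand-in data, component count, and SVM parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(400, 92 * 112)     # stand-in for 400 flattened ORL face images
y = np.repeat(np.arange(40), 10)      # 40 subjects, 10 images each

model = make_pipeline(PCA(n_components=50, whiten=True), SVC(kernel="rbf", C=10.0))
print(cross_val_score(model, X, y, cv=5).mean())   # recognition accuracy estimate
```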

  19. Feature Extraction of One-step Ahead Daily Maximum Load with Regression Tree

    NASA Astrophysics Data System (ADS)

    Mori, Hiroyuki; Sakatani, Yoshinori; Fujino, Tatsurou; Numa, Kazuyuki

    In this paper, a new efficient feature extraction method is proposed to handle one-step-ahead daily maximum load forecasting. In recent years, power systems have become more complicated in the deregulated and competitive environment. As a result, it is not easy to understand the cause-and-effect relationships in short-term load forecasting from a large amount of data. This paper analyzes load data from the standpoint of data mining, by which we mean techniques that discover rules or knowledge from large databases. As a data mining method for load forecasting, this paper focuses on the regression tree, which handles continuous variables and expresses knowledge as if-then rules. Investigating the variable importance of the regression tree gives information on the transition of the load forecasting models. This paper proposes a feature extraction method for examining the variable importance. The proposed method makes it possible to classify the transition of the variable importance using actual data.
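
    A sketch of the regression-tree step whose variable importance the proposed method examines: fit a tree to daily maximum load and read off the importance of each predictor. The predictor names and synthetic data are hypothetical placeholders, not the authors' data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(365, 3))                                  # e.g. temperature, humidity, previous-day load
y = 50 + 10 * X[:, 0] + 3 * X[:, 2] + rng.normal(size=365)     # stand-in daily maximum load

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)
for name, imp in zip(["max_temp", "humidity", "prev_load"], tree.feature_importances_):
    print(f"{name}: {imp:.3f}")                                # variable importance
```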

  20. BDPCA plus LDA: a novel fast feature extraction technique for face recognition.

    PubMed

    Zuo, Wangmeng; Zhang, David; Yang, Jian; Wang, Kuanquan

    2006-08-01

    Appearance-based methods, especially linear discriminant analysis (LDA), have been very successful in facial feature extraction, but the recognition performance of LDA is often degraded by the so-called "small sample size" (SSS) problem. One popular solution to the SSS problem is principal component analysis (PCA) + LDA (Fisherfaces), but LDA in other low-dimensional subspaces may be more effective. In this correspondence, we propose a novel fast feature extraction technique, bidirectional PCA (BDPCA) plus LDA (BDPCA + LDA), which performs LDA in the BDPCA subspace. Two face databases, the ORL and the Facial Recognition Technology (FERET) databases, are used to evaluate BDPCA + LDA. Experimental results show that BDPCA + LDA has lower computational and memory requirements and higher recognition accuracy than PCA + LDA.

  1. Constructing New Biorthogonal Wavelet Type which Matched for Extracting the Iris Image Features

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.; Suhardjo; Susanto, Adhi

    2013-04-01

    Some previous research has been carried out to obtain new types of wavelets. For iris recognition using orthogonal or biorthogonal wavelets, the Haar filter had been found to be the most suitable for recognizing the iris image. However, a new wavelet should be designed to find the one best matched to extracting the iris image features, so that it can easily be applied for identification, recognition, or authentication purposes. In this research, a new biorthogonal wavelet was designed based on Haar filter properties and Haar's orthogonality conditions. As a result, a new biorthogonal 5/7 filter-type wavelet was obtained that performs better than other types of wavelets, including Haar, in extracting the iris image features, as measured by mean-squared error (MSE) and Euclidean distance.

  2. A Study of Various Feature Extraction Methods on a Motor Imagery Based Brain Computer Interface System

    PubMed Central

    Resalat, Seyed Navid; Saba, Valiallah

    2016-01-01

    Introduction: Brain Computer Interface (BCI) systems based on Movement Imagination (MI) have been widely used in recent decades. Separate feature extraction methods are employed on the MI data sets and classified in Virtual Reality (VR) environments for real-time applications. Methods: This study applied a wide variety of features to the recorded data using a Linear Discriminant Analysis (LDA) classifier to select the best feature sets in the offline mode. The data set was recorded in 3-class tasks of the left hand, the right hand, and the foot motor imagery. Results: The experimental results showed that Auto-Regressive (AR), Mean Absolute Value (MAV), and Band Power (BP) features have higher accuracy values, 75% more than those for the other features. Discussion: These features were selected for the designed real-time navigation. The corresponding results revealed the subject-specific nature of the MI-based BCI system; however, the Power Spectral Density (PSD) based α-BP feature had the highest averaged accuracy. PMID:27303595
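
    As an illustration of one feature family compared above, the fragment below computes a band-power (BP) feature for a single EEG channel with Welch's method; the sampling rate, epoch length, and alpha-band limits are assumed values.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                     # assumed EEG sampling rate (Hz)
eeg = np.random.randn(int(10 * fs))            # stand-in 10 s single-channel epoch

f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
alpha = (f >= 8) & (f <= 13)
band_power = np.trapz(psd[alpha], f[alpha])    # alpha-band power feature
print(band_power)
```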

  3. Automatic geomorphic feature extraction from lidar in flat and engineered landscapes

    NASA Astrophysics Data System (ADS)

    Passalacqua, Paola; Belmont, Patrick; Foufoula-Georgiou, Efi

    2012-03-01

    High-resolution topographic data derived from light detection and ranging (lidar) technology enables detailed geomorphic observations to be made on spatially extensive areas in a way that was previously not possible. Availability of this data provides new opportunities to study the spatial organization of landscapes and channel network features, increase the accuracy of environmental transport models, and inform decisions for targeting conservation practices. However, with the opportunity of increased resolution topographic data come formidable challenges in terms of automatic geomorphic feature extraction, analysis, and interpretation. Low-relief landscapes are particularly challenging because topographic gradients are low, and in many places both the landscape and the channel network have been heavily modified by humans. This is especially true for agricultural landscapes, which dominate the midwestern United States. The goal of this work is to address several issues related to feature extraction in flat lands by using GeoNet, a recently developed method based on nonlinear multiscale filtering and geodesic optimization for automatic extraction of geomorphic features (channel heads and channel networks) from high-resolution topographic data. Here we test the ability of GeoNet to extract channel networks in flat and human-impacted landscapes using 3 m lidar data for the Le Sueur River Basin, a 2880 km2 subbasin of the Minnesota River Basin. We propose a curvature analysis to differentiate between channels and manmade structures that are not part of the river network, such as roads and bridges. We document that Laplacian curvature more effectively distinguishes channels in flat, human-impacted landscapes compared with geometric curvature. In addition, we develop a method for performing automated channel morphometric analysis including extraction of cross sections, detection of bank locations, and identification of geomorphic bankfull water surface elevation. Using

  4. Multi range spectral feature fitting for hyperspectral imagery in extracting oilseed rape planting area

    NASA Astrophysics Data System (ADS)

    Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin

    2013-12-01

    Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, SFF does not guarantee higher accuracy in extracting image information in all circumstances. Multi range spectral feature fitting (MRSFF), available in the ENVI software, allows the user to focus on spectral features of interest to yield better performance; thus the spectral wavelength ranges and their corresponding weights must be determined. The purpose of this article is to demonstrate the performance of MRSFF in oilseed rape planting area extraction. A practical method for defining the weights, the variance coefficient weight method, was proposed as the criterion. Oilseed rape field canopy spectra covering the whole growth stage were collected prior to investigating its phenological variation; oilseed rape endmember spectra were extracted from the Hyperion image as identification samples for analyzing the oilseed rape fields. Wavelength range divisions were determined from the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated with respect to the field-measured spectra from the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the integrity of the spectral profile. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). The MRSFF yielded a robust, convincing result and, therefore, may further the use of hyperspectral imagery in precision agriculture.

  5. Modular implementation of feature extraction and matching algorithms for photogrammetric stereo imagery

    NASA Astrophysics Data System (ADS)

    Kershaw, James; Hamlyn, Garry

    1994-06-01

    This paper describes the implementation of algorithms for automatically extracting and matching features in stereo pairs of images. The implementation has been designed to be as modular as possible to allow different algorithms for each stage in the matching process to be combined in the most appropriate manner for each particular problem. The modules have been implemented in the AVS environment but are designed to be portable to any platform. This work has been undertaken as part of task DEF 93/1 63 'Intelligence Analysis of Imagery', and forms part of ITD's contribution to the Visual Processing research program in the Centre for Sensor System and Information Processing. A major aim of both the task and the research program is to produce software to assist intelligence analysts in extracting three dimensional shape from imagery: the algorithms and software described here will form the first part of a module for automatically extracting depth information from stereo image pairs.

  6. Belt-oriented RADON transform and its application to extracting features from high-resolution remotely sensed images

    NASA Astrophysics Data System (ADS)

    Wang, Ruifu; Zhang, Jie; Huang, Jianbo; Chen, Miao; Leng, Xiuhua

    2003-05-01

    The theory of the belt-oriented Radon transform is developed and applied to extracting features from high-resolution remote sensing images. Several typical experiments show that the belt-oriented Radon transform is powerful in extracting belt features, whereas the conventional (line-oriented) Radon transform is not.
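
    For reference, the sketch below computes a standard (line-oriented) Radon transform with scikit-image, the baseline that the belt-oriented variant above extends; a belt response could be approximated by summing the sinogram over a small band of offsets, but that extension is only hinted at here.

```python
import numpy as np
from skimage.transform import radon

image = np.zeros((128, 128))
image[60:68, 30:100] = 1.0                 # synthetic belt-like (thick line) feature

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)       # rows: projection offset, columns: angle
print(sinogram.shape, float(sinogram.max()))
```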

  7. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    PubMed

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose for EMG signals a bilinear model that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as determined by a two-sample t-test at a significance level of 1%.

  8. Extracting product features and opinion words using pattern knowledge in customer reviews.

    PubMed

    Htay, Su Su; Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Extracting opinions about products from customer reviews is becoming an interesting area of research, motivating the development of automatic opinion mining applications. Therefore, efficient methods and techniques are needed to extract opinions from reviews. In this paper, we propose a novel idea for finding opinion words or phrases for each feature from customer reviews in an efficient way. Our focus is on obtaining patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product.

  10. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections.

    PubMed

    Dhou, Salam; Motai, Yuichi; Hugo, Geoffrey D

    2013-02-01

    Accounting for respiration motion during imaging can help improve targeting precision in radiation therapy. We propose local intensity feature tracking (LIFT), a novel markerless breath phase sorting method in cone beam computed tomography (CBCT) scan images. The contributions of this study are twofold. First, LIFT extracts the respiratory signal from the CBCT projections of the thorax depending only on tissue feature points that exhibit respiration. Second, the extracted respiratory signal is shown to correlate with standard respiration signals. LIFT extracts feature points in the first CBCT projection of a sequence and tracks those points in consecutive projections forming trajectories. Clustering is applied to select trajectories showing an oscillating behavior similar to the breath motion. Those "breathing" trajectories are used in a 3-D reconstruction approach to recover the 3-D motion of the lung which represents the respiratory signal. Experiments were conducted on datasets exhibiting regular and irregular breathing patterns. Results showed that LIFT-based respiratory signal correlates with the diaphragm position-based signal with an average phase shift of 1.68 projections as well as with the internal marker-based signal with an average phase shift of 1.78 projections. LIFT was able to detect the respiratory signal in all projections of all datasets.

  11. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Ahmad, Rana Fayyaz; Badruddin, Nasreen; Kamel, Nidal; Hussain, Muhammad; Chooi, Weng-Tink

    2015-03-01

    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to the EEG signals and the relative wavelet energy is calculated in terms of the detailed coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for the classification purpose. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task (Raven's Advanced Progressive Matrices test) and (2) EEG signals recorded in the resting condition with eyes open. The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision values. Accuracy above 98% was achieved by the support vector machine, multi-layer perceptron and K-nearest neighbor classifiers with the approximation (A4) and detail (D4) coefficients, which represent the frequency ranges of 0.53-3.06 and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task by achieving a high accuracy rate.
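
    A sketch of the relative wavelet energy features described above, computed with PyWavelets; the wavelet family, decomposition level, and synthetic signal are assumptions, and the frequency band covered by each coefficient set depends on the actual sampling rate.

```python
import numpy as np
import pywt

eeg = np.random.randn(1024)                           # stand-in single-channel EEG epoch

coeffs = pywt.wavedec(eeg, "db4", level=4)            # [A4, D4, D3, D2, D1]
energies = np.array([np.sum(c ** 2) for c in coeffs])
relative_energy = energies / energies.sum()           # feature vector for the classifier
print(dict(zip(["A4", "D4", "D3", "D2", "D1"], relative_energy.round(3))))
```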

  12. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    NASA Astrophysics Data System (ADS)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process, and the development of digital imaging in pathology has enabled reasonable and effective solutions to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell shows more distinctive textural dissimilarities than normal cells; hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor is employed with different spatial window sizes in the RGB and L*a*b* color spaces, so that spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared for various sample sizes using support vector machines with k-fold cross-validation. The results show that the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
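
    A sketch of co-occurrence (Haralick-style) texture descriptors over a local window, the kind of spatial feature used above, computed with scikit-image; the window size, gray-level count, offsets, and chosen properties are illustrative assumptions (older scikit-image versions spell these functions greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

window = (np.random.rand(31, 31) * 255).astype(np.uint8)   # stand-in local image patch

glcm = graycomatrix(window, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```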

  13. Fine-Grain Feature Extraction from Malware's Scan Behavior Based on Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Eto, Masashi; Sonoda, Kotaro; Inoue, Daisuke; Yoshioka, Katsunari; Nakao, Koji

    Network monitoring systems that detect and analyze malicious activities, as well as respond to them, are becoming increasingly important. As malware such as worms, viruses, and bots can inflict significant damage on both infrastructure and end users, technologies for identifying such propagating malware are in great demand. In large-scale darknet monitoring operations, we can observe that malware exhibits various kinds of scan patterns in choosing destination IP addresses. Since many of these oscillations seem to have a natural periodicity, as if they were signal waveforms, we considered applying a spectrum analysis methodology to extract malware features. Focusing on such scan patterns, this paper proposes a novel concept of malware feature extraction and a distinct analysis method named “SPectrum Analysis for Distinction and Extraction of malware features (SPADE)”. Through several evaluations using real scan traffic, we show that SPADE has the significant advantage of recognizing the similarities and dissimilarities between the same and different types of malware.

  14. Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; van Hemert, Jano; Li, Baihua

    2013-01-01

    Glaucoma is a group of eye diseases that share common traits such as high eye pressure, damage to the Optic Nerve Head, and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods of pre-diagnosis of Glaucoma include measurement of Intra-Ocular Pressure (IOP) using a tonometer, pachymetry, and gonioscopy, which are performed manually by clinicians. These tests are usually followed by examination of the Optic Nerve Head (ONH) appearance for a confirmed diagnosis of Glaucoma. The diagnosis requires regular monitoring, which is costly and time consuming, and the accuracy and reliability of diagnosis is limited by the domain knowledge of different ophthalmologists. Therefore, automatic diagnosis of Glaucoma attracts a lot of attention. This paper surveys the state of the art in automatic extraction of anatomical features from retinal images to assist early diagnosis of Glaucoma. We have conducted a critical evaluation of the existing automatic extraction methods based on features including the Optic Cup to Disc Ratio (CDR), Retinal Nerve Fibre Layer (RNFL), Peripapillary Atrophy (PPA), Neuroretinal Rim Notching, and Vasculature Shift, which adds value to efficient feature extraction related to Glaucoma diagnosis.

  15. Feature extraction of kernel regress reconstruction for fault diagnosis based on self-organizing manifold learning

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Xu, Guanghua; Liu, Dan

    2013-09-01

    The feature space extracted from vibration signals with various faults is often nonlinear and of high dimension. Nonlinear dimensionality reduction methods, such as manifold learning, are currently available for extracting low-dimensional embeddings. However, these methods all rely on manual intervention and have shortcomings in stability and in suppressing disturbance noise. To extract features automatically, a manifold learning method with self-organizing mapping is introduced for the first time. Under the non-uniform sample distribution of the reconstructed phase space, the expectation-maximization (EM) iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment (LTSA) algorithm is adopted to compress the high-dimensional phase space into a more truthful low-dimensional representation. Finally, the signal is reconstructed by kernel regression. Several typical cases, including the Lorenz system, an engine fault with a piston pin defect, and a bearing fault with an outer-race defect, are analyzed. Compared with LTSA and the continuous wavelet transform, the results show that the background noise can be fully restrained and the entire periodic repetition of impact components is well separated and identified. A new way to automatically and precisely extract the impulsive components from mechanical signals is proposed.
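
    A sketch of the LTSA compression step in the procedure above, using scikit-learn's implementation on a delay-embedded phase space; the embedding parameters, neighbor count, and synthetic signal are assumptions, and the adaptive EM neighborhood division and kernel-regression reconstruction of the paper are not reproduced.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

signal = np.sin(np.linspace(0, 40 * np.pi, 800)) + 0.1 * np.random.randn(800)

# delay embedding of the signal into a higher-dimensional phase space
dim, tau = 8, 3
idx = np.arange(signal.size - (dim - 1) * tau)
phase_space = np.stack([signal[idx + k * tau] for k in range(dim)], axis=1)

ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                              method="ltsa", eigen_solver="dense")
embedding = ltsa.fit_transform(phase_space)      # low-dimensional representation
print(embedding.shape)
```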

  16. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received considerable attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the threshold manually. The main idea of this paper is to obtain stable extrema with a machine learning algorithm. Firstly, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the set of simulated images of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known; compared with the traditional matching process, which relies on the unstable RANSAC method to estimate the affine transformation, this approach is more stable and accurate. Secondly, we calculate the stability value of the feature points from the image set and its known affine transformations, and we compute the feature properties of each feature point, such as DoG response, scale, and edge point density. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, based on the feature properties of each point and the weight vector obtained from training, we compute a sort value for each feature point that reflects its stability, and we sort the feature points accordingly. In conclusion, we compared our algorithm with the original SIFT detector; under different view changes, blurs, and illuminations, experimental results show that our algorithm is more efficient.

  17. Adaptive feature extraction techniques for subpixel target detections in hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Yuen, Peter W. T.; Bishop, Gary J.

    2004-12-01

    Most target detection algorithms employed in hyperspectral remote sensing rely on a measurable difference between the spectral signature of the target and the background. Matched filter techniques, which utilise a set of library spectra as filters for target detection, are often found to be unsatisfactory because of material variability and atmospheric effects in the field data. The aim of this paper is to report an algorithm which extracts features directly from the scene to act as matched filters for target detection. Methods based upon spectral unmixing using geometric simplex volume maximisation (SVM) and independent component analysis (ICA) were employed to generate features of the scene. Target-like and background-like features are then differentiated, and automatically selected, from the endmember set of the unmixed result according to their statistics. Anomalies are then detected from the selected endmember set and their corresponding spectral characteristics are subsequently extracted from the scene, serving as a bank of matched filters for detection. This method, given the acronym SAFED, has a number of advantages for target detection compared to previous techniques which use the orthogonal subspace of the background feature. This paper reports the detection capability of this new technique using an example simulated hyperspectral scene. Similar results using hyperspectral military data show high detection accuracy with negligible false alarms. Further potential applications of this technique, for false alarm rate (FAR) reduction via multiple approach fusion (MAF) and as a means of thresholding the anomaly detection technique, are outlined.

  18. DBSCAN-based ROI extracted from SAR images and the discrimination of multi-feature ROI

    NASA Astrophysics Data System (ADS)

    He, Xin Yi; Zhao, Bo; Tan, Shu Run; Zhou, Xiao Yang; Jiang, Zhong Jin; Cui, Tie Jun

    2009-10-01

    The purpose of this paper is to extract regions of interest (ROIs) from coarsely detected synthetic aperture radar (SAR) images and to discriminate whether an ROI contains a target or not, so as to eliminate false alarms and prepare for target recognition. Automatic target clustering is one of the most difficult tasks in a SAR-image automatic target recognition system. Density-based spatial clustering of applications with noise (DBSCAN) relies on a density-based notion of clusters and is designed to discover clusters of arbitrary shape. DBSCAN is used here for the first time in SAR image processing; it has many attractive features: only two parameters (the neighborhood radius and the minimum number of points), to which the method is relatively insensitive, are needed; clusters of arbitrary shape, which suit coarsely detected SAR images, can be discovered; and calculation time and memory can be reduced. In the multi-feature ROI discrimination scheme, we extract several target features, including geometric features such as the area discriminator and the Radon-transform-based target profile discriminator, distribution characteristics such as the EFF discriminator, and the EM scattering property via the PPR discriminator. The combined judgment effectively eliminates false alarms.
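
    A sketch of the DBSCAN clustering step above applied to coarse-detection pixel coordinates: dense groups of detections become candidate ROIs while isolated detections are marked as noise. The two parameter values and the synthetic detections are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
target = rng.normal(loc=(100.0, 150.0), scale=3.0, size=(60, 2))   # dense blob of detections
clutter = rng.uniform(0, 512, size=(40, 2))                         # scattered false detections
points = np.vstack([target, clutter])

labels = DBSCAN(eps=5.0, min_samples=10).fit_predict(points)        # -1 marks noise points
print({int(lbl): int((labels == lbl).sum()) for lbl in set(labels)})
```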

  19. A Novel Approach Based on Data Redundancy for Feature Extraction of EEG Signals.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Kamel, Nidal; Hussain, Muhammad

    2016-03-01

    Feature extraction and classification for electroencephalogram (EEG) in medical applications is a challenging task. EEG signals produce a huge amount of redundant or repeating information, and this redundancy causes potential hurdles in EEG analysis. Hence, we propose to use this redundant information of the EEG as a feature to discriminate and classify different EEG datasets. In this study, we have proposed a JPEG2000 based approach for computing data redundancy from multi-channel EEG signals and have used the redundancy as a feature for classification of EEG signals by applying support vector machine, multi-layer perceptron and k-nearest neighbors classifiers. The approach is validated on three EEG datasets and achieved a high accuracy rate (95-99%) in the classification. Dataset-1 includes EEG signals recorded during a fluid intelligence test, dataset-2 consists of EEG signals recorded during a memory recall test, and dataset-3 has epileptic seizure and non-seizure EEG. The findings demonstrate that the approach has the ability to extract robust features and classify EEG signals in various applications, including clinical as well as normal EEG patterns.

  20. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    NASA Astrophysics Data System (ADS)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout for improvement in detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural network.

  1. Extracting features buried within high density atom probe point cloud data through simplicial homology.

    PubMed

    Srinivasan, Srikant; Kaluskar, Kaustubh; Broderick, Scott; Rajan, Krishna

    2015-12-01

    Feature extraction from Atom Probe Tomography (APT) data is usually performed by repeatedly delineating iso-concentration surfaces of a chemical component of the sample material at different values of concentration threshold, until the user visually determines a satisfactory result in line with prior knowledge. However, this approach allows for important features, buried within the sample, to be visually obscured by the high density and volume (~10^7 atoms) of APT data. This work provides a data-driven methodology to objectively determine the appropriate concentration threshold for classifying different phases, such as precipitates, by mapping the topology of the APT data set using a concept from algebraic topology termed persistent simplicial homology. A case study of Sc precipitates in an Al-Mg-Sc alloy is presented demonstrating the power of this technique to capture features, such as precise demarcation of Sc clusters and Al segregation at the cluster boundaries, not easily available by routine visual adjustment.

  2. Nonlinear feature extraction using kernel principal component analysis with non-negative pre-image.

    PubMed

    Kallas, Maya; Honeine, Paul; Richard, Cedric; Amoud, Hassan; Francis, Clovis

    2010-01-01

    The inherent physical characteristics of many real-life phenomena, including biological and physiological aspects, require adapted nonlinear tools. Moreover, the additive nature of some situations involves solutions expressed as positive combinations of data. In this paper, we propose a nonlinear feature extraction method with a non-negativity constraint. To this end, kernel principal component analysis is considered to define the most relevant features in the reproducing kernel Hilbert space. These features are the nonlinear principal components with high-order correlations between input variables. A pre-image technique is required to get back to the input space. With a non-negativity constraint, we show that one can solve the pre-image problem efficiently, using a simple iterative scheme. Furthermore, the constrained solution contributes to the stability of the algorithm. Experimental results on event-related potentials (ERP) illustrate the efficiency of the proposed method.

  3. Automated Feature Extraction in Brain Tumor by Magnetic Resonance Imaging Using Gaussian Mixture Models

    PubMed Central

    Chaddad, Ahmad

    2015-01-01

    This paper presents a novel method for Glioblastoma (GBM) feature extraction based on Gaussian mixture model (GMM) features using MRI. We addressed the task of using the new features to identify GBM in T1 and T2 weighted images (T1-WI, T2-WI) and Fluid-Attenuated Inversion Recovery (FLAIR) MR images. The pathologic area was detected using multithresholding segmentation with morphological operations on the MR images. Multiclassifier techniques were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate GBM from normal tissue. GMM features demonstrated the best performance in a comparative study against principal component analysis (PCA) and wavelet based features. For the T1-WI, the accuracy performance was 97.05% (AUC = 92.73%) with 0.00% missed detection and 2.95% false alarm. In the T2-WI, the same accuracy (97.05%, AUC = 91.70%) was achieved with 2.95% missed detection and 0.00% false alarm. In FLAIR mode the accuracy decreased to 94.11% (AUC = 95.85%) with 0.00% missed detection and 5.89% false alarm. These experimental results are promising for characterizing tumor heterogeneity and hence for early treatment of GBM. PMID:26136774
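
    A sketch in the spirit of the GMM features above: fit a Gaussian mixture to (masked) voxel intensities and use the component means, variances, and weights as a feature vector. The component count and the synthetic intensity data are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
intensities = np.concatenate([rng.normal(80, 10, 4000),    # stand-in normal tissue voxels
                              rng.normal(160, 20, 1000)])   # stand-in lesion voxels

gmm = GaussianMixture(n_components=3, random_state=0).fit(intensities.reshape(-1, 1))
feature_vector = np.concatenate([gmm.means_.ravel(),
                                 gmm.covariances_.ravel(),
                                 gmm.weights_])
print(feature_vector.round(2))
```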

  4. Non-linear feature extraction from HRV signal for mortality prediction of ICU cardiovascular patient.

    PubMed

    Karimi Moridani, Mohammad; Setarehdan, Seyed Kamaledin; Motie Nasrabadi, Ali; Hajinasrollah, Esmaeil

    2016-01-01

    Intensive care unit (ICU) patients are at risk of in-ICU morbidity and mortality, making specific systems for identifying at-risk patients a necessity for improving clinical care. This study presents a new method for predicting in-hospital mortality using heart rate variability (HRV) collected during a patient's ICU stay. In this paper, an HRV time series processing based method is proposed for mortality prediction of ICU cardiovascular patients. HRV signals were obtained by measuring R-R time intervals. A novel method, named the return map, is then developed that reveals useful information from the HRV time series. This study also proposes several features that can be extracted from the return map, including the angle between two vectors, the area of triangles formed by successive points, the shortest distance to the 45° line, and various combinations of these. Finally, a thresholding technique is proposed to extract the risk period and to predict mortality. The data used to evaluate the proposed algorithm were obtained from 80 cardiovascular ICU patients (40 males and 40 females), from the first 48 h of their first ICU stay. This study showed that the angle feature has on average a sensitivity of 87.5% (with 12 false alarms), the area feature has on average a sensitivity of 89.58% (with 10 false alarms), the shortest distance feature has on average a sensitivity of 85.42% (with 14 false alarms) and, finally, the combined feature has on average a sensitivity of 92.71% (with seven false alarms). The results showed that the last half hour before the patient's death is very informative for diagnosing the patient's condition and for saving his/her life. These results confirm that it is possible to predict mortality based on the features introduced in this paper, relying on the variations of the HRV dynamic characteristics.
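
    A loose sketch of return-map style descriptors along the lines described above: plot successive R-R intervals against each other, then take the distance of each point from the 45° identity line and the area of triangles formed by consecutive points. This is a reimplementation from the description, not the authors' code, and the synthetic R-R series is a placeholder.

```python
import numpy as np

rr = 0.8 + 0.05 * np.random.randn(300)               # stand-in R-R intervals (s)
x, y = rr[:-1], rr[1:]                                # return-map coordinates (RR_i, RR_{i+1})

dist_to_identity = np.abs(y - x) / np.sqrt(2.0)       # shortest distance to the 45° line

p = np.stack([x, y], axis=1)                          # triangles of three successive points
a, b, c = p[:-2], p[1:-1], p[2:]
tri_area = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                        - (c[:, 0] - a[:, 0]) * (b[:, 1] - a[:, 1]))

print(dist_to_identity.mean(), tri_area.mean())       # candidate summary features
```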

  5. Extraction of spatial features in hyperspectral images based on the analysis of differential attribute profiles

    NASA Astrophysics Data System (ADS)

    Falco, Nicola; Benediktsson, Jon A.; Bruzzone, Lorenzo

    2013-10-01

    The new generation of hyperspectral sensors can provide images with high spectral and spatial resolution. Recent improvements in mathematical morphology have produced new techniques such as Attribute Profiles (APs) and Extended Attribute Profiles (EAPs) that can effectively model the spatial information in remote sensing images. The main drawbacks of these techniques are the selection of the optimal range of values for the family of criteria adopted at each filtering step, and the high dimensionality of the profiles, which results in a very large number of features and therefore provokes the Hughes phenomenon. In this work, we focus on addressing the dimensionality issue, which leads to high intrinsic information redundancy, by proposing a novel strategy for extracting spatial information from hyperspectral images based on the analysis of Differential Attribute Profiles (DAPs). A DAP is generated by computing the derivative of the AP; it shows at each level the residual between two adjacent levels of the AP. By analyzing the multilevel behavior of the DAP, it is possible to extract geometrical features corresponding to the structures within the scene at different scales. Our proposed approach consists of two steps: 1) a homogeneity measurement is used to identify the level L at which a given pixel belongs to a region with physical meaning; 2) the geometrical information of the extracted regions is fused into a single map according to the previously identified level L. The process is repeated for different attributes, building a reduced EAP whose dimensionality is much lower than that of the original EAP. Experiments carried out on the hyperspectral data set of the Pavia University area show the effectiveness of the proposed method in extracting spatial features related to the physical structures present in the scene, achieving higher classification accuracy than that reported in the state-of-the-art literature.

  6. Fault feature extraction and enhancement of rolling element bearing in varying speed condition

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Zhang, W.; Qin, Z. Y.; Chu, F. L.

    2016-08-01

    In engineering applications, load variability usually varies the shaft speed, which degrades the efficacy of diagnostic methods based on the assumption of constant speed. Therefore, investigating diagnostic methods suitable for varying speed conditions is important for bearing fault diagnosis. In this paper, a novel fault feature extraction and enhancement procedure is proposed that combines iterative envelope analysis with a low-pass filtering operation. At first, based on the analytical model of the collected vibration signal, the envelope signal was theoretically calculated and the iterative envelope analysis was improved for the varying speed condition. Then, a feature enhancement procedure was performed by applying a low-pass filter to the temporal envelope obtained by the iterative envelope analysis. Finally, the temporal envelope signal was transformed to the angular domain by computed order tracking, and the fault feature was extracted from the squared envelope spectrum. Simulations and experiments were used to validate the efficacy of the theoretical analysis and the proposed procedure. It is shown that computed order tracking should be applied to the envelope of the signal in order to avoid energy spreading and amplitude distortion. Compared with the feature enhancement method based on the fast kurtogram and the corresponding optimal band-pass filtering, the proposed method can efficiently extract the fault signature under varying speed conditions with less amplitude attenuation. Furthermore, since it does not involve center frequency estimation, the proposed method is more concise for engineering applications.

  7. Time series analysis and feature extraction techniques for structural health monitoring applications

    NASA Astrophysics Data System (ADS)

    Overbey, Lucas A.

    Recently, advances in sensing and sensing methodologies have led to the deployment of multiple sensor arrays on structures for structural health monitoring (SHM) applications. Appropriate feature extraction, detection, and classification methods based on measurements obtained from these sensor networks are vital to the SHM paradigm. This dissertation focuses on a multi-input/multi-output approach to novel data processing procedures to produce detailed information about the integrity of a structure in near real-time. The studies employ nonlinear time series analysis techniques to extract three different types of features for damage diagnostics: namely, nonlinear prediction error, transfer entropy, and the generalized interdependence. These features form reliable measures of generalized correlations between multiple measurements to capture aspects of the dynamics related to the presence of damage. Several analyses are conducted on each of these features. Specifically, variations of nonlinear prediction error are introduced, analyzed, and validated, including the use of a stochastic excitation to augment generality, introduction of local state-space models for sensitivity enhancement, and the employment of comparisons between multiple measurements for localization capability. A modification and enhancement to transfer entropy is created and validated for improved sensitivity. In addition, a thorough analysis of the effects of variability to transfer entropy estimation is made. The generalized interdependence is introduced into the literature and validated as an effective measure of damage presence, extent, and location. These features are validated on a multi-degree-of-freedom dynamic oscillator and several different frame experiments. The evaluated features are then fed into four different classification schemes to obtain a concurrent set of outputs that categorize the integrity of the structure, e.g. the presence, extent, location, and type of damage, taking

  8. An improved interactive segmentation method for extracting the edge features of femur digital radiographs

    NASA Astrophysics Data System (ADS)

    Sun, Shaobin; Zhang, Bin; Meng, Shang; Liu, Dan; Sun, Jinwei

    2012-01-01

    By comparing the advantages and disadvantages of two interactive image segmentation algorithms, level set and live wire, we propose an improved multi-step interactive segmentation method that helps operators extract important anatomical structure features from femur digital radiography (DR) images more accurately. First, a preprocessing step including median filtering and image enhancement is applied to suppress the noise introduced during DR imaging. Second, exploiting the advantages of the level set method, namely simple operation and a fast convergence rate, a coarse outline contour is extracted. Finally, exploiting the advantages of live wire, namely repeatable local operation and high precision, fine contours of special anatomic areas, the fracture edge profile and overlapping regions are extracted. In this way, all the anatomical structure features of interest in the DR images are obtained. In this paper, the method was applied to complete femur DR images and to artificially fractured femur DR images. The segmentation results show that the method performs well in both accuracy and efficiency.

  9. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    PubMed

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not satisfy normal acquisition conditions. Feature extraction and matching techniques traditionally used in photogrammetry are usually inefficient for these applications, as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performance of the SIFT operator has been compared with that of the feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed in order to improve the performance of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336
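
    For readers who want to reproduce a basic SIFT tie-point step (not the auto-adaptive A² SIFT variant evaluated in the paper), a minimal OpenCV sketch might look as follows; the image paths and the 0.75 ratio-test threshold are assumptions.

      import cv2

      img1 = cv2.imread("aerial_left.tif", cv2.IMREAD_GRAYSCALE)    # placeholder paths
      img2 = cv2.imread("aerial_right.tif", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      # brute-force matching with Lowe's ratio test to keep distinctive tie points
      matcher = cv2.BFMatcher(cv2.NORM_L2)
      tie_points = []
      for pair in matcher.knnMatch(des1, des2, k=2):
          if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
              tie_points.append((kp1[pair[0].queryIdx].pt, kp2[pair[0].trainIdx].pt))
      print(len(tie_points), "candidate tie points")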

  10. Chemical name extraction based on automatic training data generation and rich feature set.

    PubMed

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of obtaining a sizable, good-quality data set to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and the semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.

  11. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

    Nonnegative Tucker3 decomposition (NTD) has attracted much attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration conditions encountered in mechanical fault diagnosis. To decompose a large-scale tensor and extract usable bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from O(n^N lg n) in 3D space to O(R1 R2 n lg n) on 1D vectors owing to the low-rank form of the Tucker-product convolution. Meanwhile, a simultaneous updating algorithm is given to overcome the overfitting, slow convergence and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more regular feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method than by the other methods in bispectrum feature extraction, and a legible fault expression can also be produced by the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF reaches 81.66 dB against 15.17 dB for beta-divergence based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s of hierarchical alternating least squares based NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency but can also be used to extract more regular and sparser bispectrum features of gearbox faults.

  12. Transmission Characteristics of Primate Vocalizations: Implications for Acoustic Analyses

    PubMed Central

    Maciej, Peter; Fischer, Julia; Hammerschmidt, Kurt

    2011-01-01

    Acoustic analyses have become a staple method in field studies of animal vocal communication, with nearly all investigations using computer-based approaches to extract specific features from sounds. Various algorithms can be used to extract acoustic variables that may then be related to variables such as individual identity, context or reproductive state. Habitat structure and recording conditions, however, have strong effects on the acoustic structure of sound signals. The purpose of this study was to identify which acoustic parameters reliably describe features of propagated sounds. We conducted broadcast experiments and examined the influence of habitat type, transmission height, and re-recording distance on the validity (deviation from the original sound) and reliability (variation within identical recording conditions) of acoustic features of different primate call types. Validity and reliability varied independently of each other in relation to habitat, transmission height, and re-recording distance, and depended strongly on the call type. The smallest deviations from the original sounds were obtained by a visually-controlled calculation of the fundamental frequency. Start- and end parameters of a sound were most susceptible to degradation in the environment. Because the recording conditions can have appreciable effects on acoustic parameters, it is advisable to validate the extraction method of acoustic variables from recordings over longer distances before using them in acoustic analyses. PMID:21829682

  13. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information from aero-engine bearings presents two particular phenomena, i.e., waveform distortion and dispersion of the impulsive feature frequency band, which pose a challenging problem for current bearing fault diagnosis techniques. Moreover, although much progress has been made in sparse-representation-based feature extraction of fault information, the theory still suffers performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structure of the feature information. This work exploits the underlying prior knowledge that feature information exhibits nonlocal self-similarity by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved by the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further ensure sufficient sparsity of the feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental aircraft engine bearing rig.

  14. Lumbar Ultrasound Image Feature Extraction and Classification with Support Vector Machine.

    PubMed

    Yu, Shuang; Tan, Kok Kiong; Sng, Ban Leong; Li, Shengjin; Sia, Alex Tiong Heng

    2015-10-01

    Needle entry site localization remains a challenge for procedures that involve lumbar puncture, for example, epidural anesthesia. To solve the problem, we have developed an image classification algorithm that can automatically identify the bone/interspinous region for ultrasound images obtained from lumbar spine of pregnant patients in the transverse plane. The proposed algorithm consists of feature extraction, feature selection and machine learning procedures. A set of features, including matching values, positions and the appearance of black pixels within pre-defined windows along the midline, were extracted from the ultrasound images using template matching and midline detection methods. A support vector machine was then used to classify the bone images and interspinous images. The support vector machine model was trained with 1,040 images from 26 pregnant subjects and tested on 800 images from a separate set of 20 pregnant patients. A success rate of 95.0% on training set and 93.2% on test set was achieved with the proposed method. The trained support vector machine model was further tested on 46 off-line collected videos, and successfully identified the proper needle insertion site (interspinous region) in 45 of the cases. Therefore, the proposed method is able to process the ultrasound images of lumbar spine in an automatic manner, so as to facilitate the anesthetists' work of identifying the needle entry site.
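
    A minimal sketch of the classification stage alone is given below, assuming the template-matching and midline features described above have already been computed into feature matrices; the file names, label coding and SVM parameters are placeholders, not the authors' implementation.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X_train = np.load("train_features.npy")   # placeholder: one feature row per image
      y_train = np.load("train_labels.npy")     # placeholder: 0 = bone, 1 = interspinous
      X_test = np.load("test_features.npy")
      y_test = np.load("test_labels.npy")

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))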

  15. Wood Texture Features Extraction by Using GLCM Combined With Various Edge Detection Methods

    NASA Astrophysics Data System (ADS)

    Fahrurozi, A.; Madenda, S.; Ernastuti; Kerami, D.

    2016-06-01

    An image with a specific texture can be distinguished manually by eye. However, this is sometimes difficult when the textures are quite similar. Wood is a natural material that forms a unique texture, and experts can judge the quality of wood from the texture observed in certain parts of the wood. In this study, texture features have been extracted from wood images that can be used to identify the characteristics of wood digitally by computer. Feature extraction is carried out using Gray Level Co-occurrence Matrices (GLCM) built on images produced by several edge detection methods applied to the wood image. The edge detection methods used include Roberts, Sobel, Prewitt, Canny and Laplacian of Gaussian. The wood images were taken in the LE2i laboratory, Universite de Bourgogne, from wood samples from France that had been grouped by experts into four quality classes. Statistics are obtained that illustrate the distribution of texture feature values for each wood type, compared according to the edge operator used and the selected GLCM parameters.
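
    The following sketch illustrates the general GLCM-on-edge-image idea with scikit-image, using a Sobel edge map; the image path, gray-level count and GLCM distances/angles are illustrative assumptions rather than the study's exact settings.

      import numpy as np
      from skimage import filters, io
      from skimage.feature import graycomatrix, graycoprops
      from skimage.util import img_as_ubyte

      wood = io.imread("wood_sample.png", as_gray=True)   # placeholder path
      edges = filters.sobel(wood)                         # edge-detection step
      edges = img_as_ubyte(edges / edges.max())           # rescale to 8-bit gray levels

      glcm = graycomatrix(edges, distances=[1], angles=[0, np.pi / 2],
                          levels=256, symmetric=True, normed=True)
      features = {prop: graycoprops(glcm, prop).mean()
                  for prop in ("contrast", "homogeneity", "energy", "correlation")}
      print(features)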

  16. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
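
    A minimal sketch of the block-wise DCT-II feature idea is shown below, assuming a preprocessed palm region; the block size, the number of retained zigzag coefficients and the random stand-in input are illustrative only.

      import numpy as np
      from scipy.fft import dct

      def zigzag_indices(n):
          """(row, col) pairs of an n x n block in JPEG-style zigzag order."""
          return sorted(((r, c) for r in range(n) for c in range(n)),
                        key=lambda rc: (rc[0] + rc[1],
                                        rc[1] if (rc[0] + rc[1]) % 2 == 0 else -rc[1]))

      def block_dct_features(roi, block=8, keep=10):
          """Concatenate the first `keep` zigzag DCT-II coefficients of each block."""
          order = zigzag_indices(block)[:keep]
          feats = []
          for r in range(0, roi.shape[0] - block + 1, block):
              for c in range(0, roi.shape[1] - block + 1, block):
                  patch = roi[r:r + block, c:c + block].astype(float)
                  coeffs = dct(dct(patch, axis=0, norm="ortho"), axis=1, norm="ortho")
                  feats.extend(coeffs[i, j] for i, j in order)
          return np.asarray(feats)

      palm_roi = np.random.rand(128, 128)   # stand-in for a preprocessed palm region
      feature_vector = block_dct_features(palm_roi)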

  17. Using GNG to improve 3D feature extraction--application to 6DoF egomotion.

    PubMed

    Viejo, Diego; Garcia, Jose; Cazorla, Miguel; Gil, David; Johnsson, Magnus

    2012-08-01

    Several recent works deal with 3D data in mobile robotic problems, e.g. mapping or egomotion. Data comes from any kind of sensor such as stereo vision systems, time of flight cameras or 3D lasers, providing a huge amount of unorganized 3D data. In this paper, we describe an efficient method to build complete 3D models from a Growing Neural Gas (GNG). The GNG is applied to the 3D raw data and it reduces both the subjacent error and the number of points, keeping the topology of the 3D data. The GNG output is then used in a 3D feature extraction method. We have performed a deep study in which we quantitatively show that the use of GNG improves the 3D feature extraction method. We also show that our method can be applied to any kind of 3D data. The 3D features obtained are used as input in an Iterative Closest Point (ICP)-like method to compute the 6DoF movement performed by a mobile robot. A comparison with standard ICP is performed, showing that the use of GNG improves the results. Final results of 3D mapping from the egomotion calculated are also shown. PMID:22386789

  18. Comparative assessment of feature extraction methods for visual odometry in wireless capsule endoscopy.

    PubMed

    Spyrou, Evaggelos; Iakovidis, Dimitris K; Niafas, Stavros; Koulaouzidis, Anastasios

    2015-10-01

    Wireless capsule endoscopy (WCE) enables the non-invasive examination of the gastrointestinal (GI) tract by a swallowable device equipped with a miniature camera. Accurate localization of the capsule in the GI tract enables accurate localization of abnormalities for medical interventions such as biopsy and polyp resection; therefore, the optimization of the localization outcome is important. Current approaches to endoscopic capsule localization are mainly based on external sensors and transit time estimations. Recently, we demonstrated the feasibility of capsule localization based-entirely-on visual features, without the use of external sensors. This technique relies on a motion estimation algorithm that enables measurements of the distance and the rotation of the capsule from the acquired video frames. Towards the determination of an optimal visual feature extraction technique for capsule motion estimation, an extensive comparative assessment of several state-of-the-art techniques, using a publicly available dataset, is presented. The results show that the minimization of the localization error is possible at the cost of computational efficiency. A localization error of approximately one order of magnitude higher than the minimal one can be considered as compromise for the use of current computationally efficient feature extraction techniques. PMID:26073184

  19. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    NASA Astrophysics Data System (ADS)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary for a tool to be developed to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as a possible ship object. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. An examination of a texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  20. Ion acoustic solitary waves and double layers in a plasma with two temperature electrons featuring Tsallis distribution

    SciTech Connect

    Shalini, Saini, N. S.

    2014-10-15

    The propagation properties of large amplitude ion acoustic solitary waves (IASWs) are studied in a plasma containing cold fluid ions and multi-temperature electrons (cool and hot electrons) with nonextensive distribution. Employing Sagdeev pseudopotential method, an energy balance equation has been derived and from the expression for Sagdeev potential function, ion acoustic solitary waves and double layers are investigated numerically. The Mach number (lower and upper limits) for the existence of solitary structures is determined. Positive as well as negative polarity solitary structures are observed. Further, conditions for the existence of ion acoustic double layers (IADLs) are also determined numerically in the form of the critical values of q_c, f and the Mach number (M). It is observed that the nonextensivity of electrons (via q_{c,h}), concentration of electrons (via f) and temperature ratio of cold to hot electrons (via β) significantly influence the characteristics of ion acoustic solitary waves as well as double layers.

  1. System and method for investigating sub-surface features of a rock formation with acoustic sources generating conical broadcast signals

    SciTech Connect

    Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt; Johnson, Paul A.; Guyer, Robert; Ten Cate, James A.; Le Bas, Pierre -Yves; Larmat, Carene S.

    2015-08-18

    A method of interrogating a formation includes generating a first conical acoustic signal at a first frequency and a second conical acoustic signal at a second frequency, each in the range between approximately 500 Hz and 500 kHz, such that the signals intersect in a desired intersection volume outside the borehole. The method further includes receiving a difference signal returning to the borehole resulting from a non-linear mixing of the signals in a mixing zone within the intersection volume.

  2. Weak fault feature extraction of rolling bearing based on cyclic Wiener filter and envelope spectrum

    NASA Astrophysics Data System (ADS)

    Ming, Yang; Chen, Jin; Dong, Guangming

    2011-07-01

    In vibration analysis, weak fault feature extraction under strong background noise is of great importance. A method based on cyclic Wiener filter and envelope spectrum analysis is proposed. Cyclic Wiener filter exploits the spectral coherence theory induced by the second-order cyclostationary signal. The original signal is duplicated and shifted in the frequency domain by amounts corresponding to the cyclic frequencies. The noise component is optimally filtered by a filter-bank. The filtered signal is analyzed by performing envelope spectrum. In the envelope spectrum, characteristic frequencies are quite clear. Then the most impactive part is effectively extracted for further fault diagnosis. The effectiveness of the method is demonstrated on both simulated signal and actual data from rolling bearing accelerated life test.

  3. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for the urban environment, providing large amounts of high resolution information about trees, street features and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contain a large diversity of tree species. The MLS data consist of high density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The robust segmentation method comprises the following steps. The ground points are determined first. As a second step, cylinders are fitted in a vertical slice 1-1.5 m above the ground, which is used to determine the potential location of each single tree trunk and cylinder-like object. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted objects are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter and volume of the individual tree. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying the trees into single species. MLS data used in this project had been measured in the framework of

  4. LiDAR DTMs and anthropogenic feature extraction: testing the feasibility of geomorphometric parameters in floodplains

    NASA Astrophysics Data System (ADS)

    Sofia, G.; Tarolli, P.; Dalla Fontana, G.

    2012-04-01

    resolution topography have been proven to be reliable for feasible applications. The use of statistical operators as thresholds for these geomorphic parameters, furthermore, has shown high reliability for feature extraction in mountainous environments. The goal of this research is to test whether these morphological indicators and objective thresholds are also feasible in floodplains, where features assume different characteristics and other artificial disturbances might be present. In this work, three different geomorphic parameters are tested and applied at different scales on a LiDAR DTM of a typical alluvial plain area in the North East of Italy. The box-plot is applied to identify the threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the final results. The effectiveness of the different geomorphic parameters is analyzed by comparing automatically derived features with surveyed ones. The results highlight the capability of high resolution topography, geomorphic indicators and statistical thresholds for anthropogenic feature extraction and characterization in a floodplain context.

  5. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    Gearbox is one of the most vulnerable subsystems in wind turbines. Its healthy status significantly affects the efficiency and function of the entire system. Vibration based fault diagnosis methods are prevalently applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structure geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for the early stage faults. This paper utilizes synchronous averaging technique in time-frequency domain to remove the non-synchronous noise and enhance the fault related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective to classify and identify different gear faults.
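
    To illustrate the final dimensionality-reduction step, the sketch below applies Locally Linear Embedding with scikit-learn to precomputed time-frequency feature vectors and follows it with a simple nearest-neighbour classifier; the feature files, neighbour counts and component number are assumptions, and the synchronous averaging itself is not reproduced.

      import numpy as np
      from sklearn.manifold import LocallyLinearEmbedding
      from sklearn.neighbors import KNeighborsClassifier

      X = np.load("tf_features.npy")    # placeholder: one row per enhanced TF record
      y = np.load("fault_labels.npy")   # placeholder: gear fault class per record

      embedder = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
      X_low = embedder.fit_transform(X)

      clf = KNeighborsClassifier(n_neighbors=5).fit(X_low, y)
      print("training accuracy:", clf.score(X_low, y))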

  6. The research on recognition and extraction of river feature in IKNOS based on frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Feng, Xuezhi; Xiao, Pengfeng; Wu, Guoping

    2009-10-01

    As the resolution of remotely sensed imagery becomes higher, new methods are needed to process high-resolution remotely sensed imagery. The algorithms introduced in this paper recognize and extract river features based on the frequency domain. A Gabor filter in the frequency domain is used to enhance the river texture and remove noise from the remotely sensed imagery. Then, according to the theory of phase congruency, the phase congruency (PC) value of every point is computed so that features such as river edges, buildings and farmland in the remotely sensed imagery can be detected. Lastly, a skeletonization method is introduced to determine the river edge with the help of the river's trend.

  7. A mixture of physicochemical and evolutionary-based feature extraction approaches for protein fold recognition.

    PubMed

    Dehzangi, Abdollah; Sharma, Alok; Lyons, James; Paliwal, Kuldip K; Sattar, Abdul

    2015-01-01

    Recent advances in the pattern recognition field have stimulated enormous interest in Protein Fold Recognition (PFR). PFR is considered a crucial step towards protein structure prediction and drug design. Despite all the recent achievements, PFR remains an unsolved problem in biological science and its prediction accuracy remains unsatisfactory. Furthermore, the impact of using a wide range of physicochemical-based attributes on PFR has not been adequately explored. In this study, we propose a novel mixture of physicochemical and evolutionary-based feature extraction methods based on the concepts of segmented distribution and density. We also explore the impact of 55 different physicochemical-based attributes on PFR. Our results show that, by providing more local discriminatory information as well as benefiting from both physicochemical and evolutionary-based features simultaneously, we can enhance protein fold prediction accuracy by up to 5% over previously reported results in the literature.

  8. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    PubMed

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried underground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A method for the recognition and classification of pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step, the scanned images of the pipe are analyzed and crack features are extracted. In the classification step, a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation in feature values while the backpropagation network, with its learning ability, will provide good classification efficiency.

  9. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328

  10. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in an erroneous position estimate. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose efforts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be applied as a pre-processing step to minimise the MM platform's three-dimensional positioning error. Subsequently, the identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and data set demands different approaches, two independent workflows are being developed in parallel. Both workflows, still under development, will be presented and preliminary results shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of

  12. Lamb wave feature extraction using discrete wavelet transformation and Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Ghodsi, Mojtaba; Ziaiefar, Hamidreza; Amiryan, Milad; Honarvar, Farhang; Hojjat, Yousef; Mahmoudi, Mehdi; Al-Yahmadi, Amur; Bahadur, Issam

    2016-04-01

    In this research, a new method is presented for eliciting appropriate features for recognizing and classifying defect types from guided ultrasonic waves. After suitable preprocessing, the suggested method extracts the base frequency band from the received signals by the discrete wavelet transform and the discrete Fourier transform. This frequency band can be used as a distinctive feature of the ultrasonic signals for different defects. Principal Component Analysis, by refining this feature and discarding redundant data, further improves the classification. In this study, an ultrasonic test with the A0 Lamb wave mode is used, which is appropriate for reducing the difficulties of the problem. The defects under analysis included corrosion, cracks and local thickness reduction, the last caused by electro-discharge machining (EDM). The classification results of an optimized neural network show that the presented method can differentiate the defects with 95% precision and is therefore a robust and efficient method. Moreover, comparing the features elicited for corrosion and for local thickness reduction, together with the results of classifying the two, clarifies that modeling the corrosion process by local thickness reduction, as was previously common, is not appropriate, since the signals received from the two defects differ from each other.
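
    A minimal sketch of a DWT-plus-PCA feature pipeline in the same spirit is given below; the db4 wavelet, decomposition level, sub-band energy feature and component count are assumptions rather than the authors' exact choices.

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA

      def dwt_band_energies(signal, wavelet="db4", level=5):
          """Energy of each wavelet sub-band as a compact frequency-band feature."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          return np.array([np.sum(c ** 2) for c in coeffs])

      signals = np.load("lamb_wave_signals.npy")   # placeholder: (n_records, n_samples)
      features = np.vstack([dwt_band_energies(s) for s in signals])

      pca = PCA(n_components=3)
      reduced = pca.fit_transform(features)        # inputs to the neural-network classifier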

  13. Learning object location predictors with boosting and grammar-guided feature extraction

    SciTech Connect

    Eads, Damian Ryan; Rosten, Edward; Helmbold, David

    2009-01-01

    The authors present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, they introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, they learn a classifier on these data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, they perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high quality set of (x, y) locations. Lastly, they carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. They demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.

  14. Extraction of pulse repetition intervals from sperm whale click trains for ocean acoustic data mining.

    PubMed

    Zaugg, Serge; van der Schaar, Mike; Houégnigan, Ludwig; André, Michel

    2013-02-01

    The analysis of acoustic data from the ocean is a valuable tool to study free ranging cetaceans and anthropogenic noise. Due to the typically large volume of acquired data, there is a demand for automated analysis techniques. Many cetaceans produce acoustic pulses (echolocation clicks) with a pulse repetition interval (PRI) remaining nearly constant over several pulses. Analyzing these pulse trains is challenging because they are often interleaved. This article presents an algorithm that estimates a pulse's PRI with respect to neighboring pulses. It includes a deinterleaving step that operates via a spectral dissimilarity metric. The sperm whale (SW) produces trains with PRIs between 0.5 and 2 s. As a validation, the algorithm was used for the PRI-based identification of SW click trains with data from the NEMO-ONDE observatory that contained other pulsed sounds, mainly from ship propellers. Separation of files containing SW clicks with a medium and high signal to noise ratio from files containing other pulsed sounds gave an area under the receiver operating characteristic curve value of 0.96. This study demonstrates that PRI can be used for the automated identification of SW clicks and that deinterleaving via spectral dissimilarity contributes to algorithm performance.
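
    A much simplified sketch of PRI estimation from detected pulse times is shown below; it takes each pulse's PRI as the median spacing to its neighbours within a short window and omits the spectral-dissimilarity deinterleaving step of the published algorithm.

      import numpy as np

      def local_pri(pulse_times, window_s=10.0):
          """Estimate a PRI for every pulse from the spacing of its neighbours."""
          pulse_times = np.sort(np.asarray(pulse_times, dtype=float))
          pris = np.full(pulse_times.shape, np.nan)
          for i, t in enumerate(pulse_times):
              near = pulse_times[np.abs(pulse_times - t) <= window_s / 2.0]
              if near.size > 1:
                  pris[i] = np.median(np.diff(near))
          return pris

      # example: a regular 1.1 s click train with one spurious pulse inserted
      times = np.concatenate([np.arange(0.0, 30.0, 1.1), [4.3]])
      print(local_pri(times))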

  15. Extraction of Stoneley and acoustic Rayleigh waves from ambient noise on ocean bottom observations

    NASA Astrophysics Data System (ADS)

    Tonegawa, T.; Fukao, Y.; Takahashi, T.; Obana, K.; Kodaira, S.; Kaneda, Y.

    2013-12-01

    In seismic interferometry, the wavefield propagating between two positions can be retrieved by correlating ambient noise recorded at the two positions. This approach is applicable to various kinds of wavefields, such as ultrasonic, ocean acoustic and seismic. Off the Kii Peninsula, Japan, more than 150 short-period (4.5 Hz) seismometers, each with a co-located hydrophone, were deployed for about two months in 2012 by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) as a part of 'Research concerning Interaction Between the Tokai, Tonankai and Nankai Earthquakes' funded by the Ministry of Education, Culture, Sports, Science and Technology, Japan. In this study, by correlating ambient noise recorded on the seismometers and hydrophones, we investigate characteristics of the wavefield related to the ocean, the sediment and the solid-fluid boundary. The observation period is from Sep. 2012 to Dec. 2012. Station spacing is around 5 km. Along each of 5 lines off the Kii Peninsula, 30-40 seismometers are distributed. The sampling rate is 200 Hz for both the seismometers and the hydrophones. Only the vertical component is used in this study for the correlation analysis. The instruments are located at 100-4800 m water depth. In the processing of both record types, we applied a band-pass filter of 1-3 Hz, set the amplitude to zero where it exceeded a preset value, and applied one-bit normalization. We calculated cross-correlation functions (CCFs) from continuous records in 600 s segments and stacked the CCFs over the whole observation period. For the hydrophone analysis, a strong peak can be seen in the CCF for station pairs whose separation distance is about 5 km. Although the peak emerges in CCFs for separation distances up to 10 km, it disappears when two stations are more than 15 km apart. As a next step, along a line off the Kii Peninsula, we aligned CCFs for two stations with
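
    A minimal sketch of the noise-correlation processing chain described above (band-pass filtering, amplitude clipping, one-bit normalization, segment-wise cross-correlation and stacking) is given below; the clip level and array names are assumptions.

      import numpy as np
      from scipy.signal import butter, correlate, filtfilt

      fs = 200.0                                     # Hz, as in the deployment above
      b, a = butter(4, [1.0 / (fs / 2), 3.0 / (fs / 2)], btype="band")

      def preprocess(trace, clip=1e-5):
          """1-3 Hz band-pass, zero out large amplitudes, then one-bit normalize."""
          x = filtfilt(b, a, trace)
          x[np.abs(x) > clip] = 0.0                  # clip level is an assumed value
          return np.sign(x)

      def stacked_ccf(trace1, trace2, seg_s=600):
          """Stack cross-correlations of consecutive 600 s segments."""
          n = int(seg_s * fs)
          stack = np.zeros(2 * n - 1)
          for k in range(min(len(trace1), len(trace2)) // n):
              s1 = preprocess(trace1[k * n:(k + 1) * n])
              s2 = preprocess(trace2[k * n:(k + 1) * n])
              stack += correlate(s1, s2, mode="full", method="fft")
          return stack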

  16. Quantitative 3-D Imaging, Segmentation and Feature Extraction of the Respiratory System in Small Mammals for Computational Biophysics Simulations

    SciTech Connect

    Trease, Lynn L.; Trease, Harold E.; Fowler, John

    2007-03-15

    One of the critical steps toward performing computational biology simulations, using mesh based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional, therefore the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features in an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.

  17. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture three dimensional shapes of components. The digital data density output from these probes range from a few discrete points to exceeding millions of points in the point cloud. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the component inspected to the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in the feature information since those relate directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are added to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post processing task. Due to measurement repeatability and cycle time requirements often automated feature extraction from measurement data is required. The presence of non-ideal features such as high frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of the linear and arc segments automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post processing algorithms of 3D data of fine features.

  18. Multiple feature extraction and classification of electroencephalograph signal for Alzheimer's with spectrum and bispectrum

    NASA Astrophysics Data System (ADS)

    Wang, Ruofan; Wang, Jiang; Li, Shunan; Yu, Haitao; Deng, Bin; Wei, Xile

    2015-01-01

    In this paper, we have combined experimental neurophysiologic recording and statistical analysis to investigate the nonlinear characteristic and the cognitive function of the brain. Spectrum and bispectrum analyses are proposed to extract multiple effective features of electroencephalograph (EEG) signals from Alzheimer's disease (AD) patients and further applied to distinguish AD patients from the normal controls. Spectral analysis based on autoregressive Burg method is first used to quantify the power distribution of EEG series in the frequency domain. Compared to the control group, the relative power spectral density of AD group is significantly higher in the theta frequency band, while lower in the alpha frequency bands. In addition, median frequency of spectrum is decreased, and spectral entropy ratio of these two frequency bands undergoes drastic changes at the P3 electrode in the central-parietal brain region, implying that the electrophysiological behavior in AD brain is much slower and less irregular. In order to explore the nonlinear high order information, bispectral analysis which measures the complexity of phase-coupling is further applied to P3 electrode in the whole frequency band. It is demonstrated that less bispectral peaks appear and the amplitudes of peaks fall, suggesting a decrease of non-Gaussianity and nonlinearity of EEG in ADs. Notably, the application of this method to five brain regions shows higher concentration of the weighted center of bispectrum and lower complexity reflecting phase-coupling by bispectral entropy. Based on spectrum and bispectrum analyses, six efficient features are extracted and then applied to discriminate AD from the normal in the five brain regions. The classification results indicate that all these features could differentiate AD patients from the normal controls with a maximum accuracy of 90.2%. Particularly, different brain regions are sensitive to different features. Moreover, the optimal combination of
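
    As an illustration of the relative band-power feature alone, the sketch below uses Welch's PSD estimate as a simple stand-in for the autoregressive Burg spectrum used in the paper; the sampling rate, band edges and synthetic channel data are placeholders.

      import numpy as np
      from scipy.signal import welch

      def relative_band_power(eeg, fs, band, total=(0.5, 45.0)):
          """Power in `band` divided by power in the full `total` range."""
          freqs, psd = welch(eeg, fs=fs, nperseg=4 * int(fs))
          def band_power(lo, hi):
              mask = (freqs >= lo) & (freqs < hi)
              return np.trapz(psd[mask], freqs[mask])
          return band_power(*band) / band_power(*total)

      fs = 256.0
      eeg_p3 = np.random.randn(int(60 * fs))   # placeholder for a P3-electrode recording
      print("relative theta:", relative_band_power(eeg_p3, fs, (4.0, 8.0)))
      print("relative alpha:", relative_band_power(eeg_p3, fs, (8.0, 13.0)))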

  19. Urban Area Extent Extraction in Spaceborne HR and VHR Data Using Multi-Resolution Features

    PubMed Central

    Iannelli, Gianni Cristian; Lisini, Gianni; Dell'Acqua, Fabio; Feitosa, Raul Queiroz; da Costa, Gilson Alexandre Ostwald Pedro; Gamba, Paolo

    2014-01-01

    Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an “urban area” is. The models of urban areas listed in technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, “urban area” extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered as an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO) data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR) data prove the usefulness and flexibility of the framework. PMID:25271564

  20. Fault feature extraction of rolling bearing based on an improved cyclical spectrum density method

    NASA Astrophysics Data System (ADS)

    Li, Min; Yang, Jianhong; Wang, Xiaojing

    2015-11-01

    The traditional cyclical spectrum density (CSD) method is widely used to analyze the fault signals of rolling bearing. All modulation frequencies are demodulated in the cyclic frequency spectrum. Consequently, recognizing bearing fault type is difficult. Therefore, a new CSD method based on kurtosis (CSDK) is proposed. The kurtosis value of each cyclic frequency is used to measure the modulation capability of cyclic frequency. When the kurtosis value is large, the modulation capability is strong. Thus, the kurtosis value is regarded as the weight coefficient to accumulate all cyclic frequencies to extract fault features. Compared with the traditional method, CSDK can reduce the interference of harmonic frequency in fault frequency, which makes fault characteristics distinct from background noise. To validate the effectiveness of the method, experiments are performed on the simulation signal, the fault signal of the bearing outer race in the test bed, and the signal gathered from the bearing of the blast furnace belt cylinder. Experimental results show that the CSDK is better than the resonance demodulation method and the CSD in extracting fault features and recognizing degradation trends. The proposed method provides a new solution to fault diagnosis in bearings.

  1. A Joint Time-Frequency and Matrix Decomposition Feature Extraction Methodology for Pathological Voice Classification

    NASA Astrophysics Data System (ADS)

    Ghoraani, Behnaz; Krishnan, Sridhar

    2009-12-01

    The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is extraction of meaningful and unique features using Adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct Adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal into normal or pathological. The proposed method is applied on the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
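
    The sketch below factorizes a time-frequency matrix with NMF in the spirit of the method, using an ordinary STFT magnitude spectrogram as a stand-in for the adaptive TFD constructed in the paper; the sampling rate, NMF rank and synthetic input are assumptions.

      import numpy as np
      from scipy.signal import stft
      from sklearn.decomposition import NMF

      fs = 25000.0
      voice = np.random.randn(int(2 * fs))   # placeholder for a voice recording
      _, _, Z = stft(voice, fs=fs, nperseg=512)
      tfd = np.abs(Z)                        # non-negative time-frequency matrix

      model = NMF(n_components=5, init="nndsvd", max_iter=500)
      W = model.fit_transform(tfd)           # spectral base vectors
      H = model.components_                  # time-activation coefficients
      # W and H (or statistics derived from them) serve as inputs to the classifier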

  2. A comparison of feature extraction methods for Sentinel-1 images: Gabor and Weber transforms

    NASA Astrophysics Data System (ADS)

    Stan, Mihaela; Popescu, Anca; Stoichescu, Dan Alexandru

    2015-10-01

    The purpose of this paper is to compare the performance of two feature extraction methods when applied on high resolution Synthetic Aperture Radar (SAR) images acquired with the new ESA mission SENTINEL-1 (S-1). The feature extraction methods were previously tested on high and very high resolution SAR data (imaged by TerraSAR-X) and had a good performance in discriminating between a relevant numbers of land cover classes (tens of classes). Based on the available spatial resolution (10x10m) of S-1 Interferometric Wide (IW) Ground Range Detected (GRD) images the number of detectable classes is much lower. Moreover, the overall heterogeneity of the images is much lower as compared to the high resolution data, the number of observable details is smaller, and this favors the choice of a smaller window size for the analysis: between 10 and 50 pixels in range and azimuth. The size of the analysis window ensures the consistency with the previous results reported in the literature in very high resolution data (as the size on the ground is comparable and thus the number of contributing objects in the window is similar). The performance of Gabor filters and the Weber Local Descriptor (WLD) was investigated in a twofold approach: first the descriptors were computed directly over the IW GRD images and secondly on the sub-sampled version of the same data (in order to determine the effect of the speckle correlation on the overall class detection probability).

  3. Feature extraction and classification for ultrasound images of lumbar spine with support vector machine.

    PubMed

    Yu, Shuang; Tan, Kok Kiong; Sng, Ban Leong; Li, Shengjin; Sia, Alex Tiong Heng

    2014-01-01

    In this paper, we proposed a feature extraction and machine learning method for the classification of ultrasound images obtained from lumbar spine of pregnant patients in the transverse plane. A group of features, including matching values and positions, appearance of black pixels within predefined windows along the midline, are extracted from the ultrasound images using template matching and midline detection. Support vector machine (SVM) with Gaussian kernel is utilized to classify the bone images and interspinous images with optimal separation hyperplane. The SVM is trained with 800 images from 20 pregnant subjects and tested with 640 images from a separate set of 16 pregnant patients. A high success rate (97.25% on training set and 95.00% on test set) is achieved with the proposed method. The trained SVM model is further tested on 36 videos collected from 36 pregnant subjects and successfully identified the proper needle insertion site (interspinous region) on all of the cases. Therefore, the proposed method is able to identify the ultrasound images of lumbar spine in an automatic manner, so as to facilitate the anesthetists' work to identify the needle insertion point precisely and effectively.

  4. Feature selection for the identification of antitumor compounds in the alcohol total extracts of Curcuma longa.

    PubMed

    Jiang, Jian-Lan; Li, Zi-Dan; Zhang, Huan; Li, Yan; Zhang, Xiao-Hang; Yuan, Yi-fu; Yuan, Ying-jin

    2014-08-01

    Antitumor activity has been reported for turmeric, the dried rhizome of Curcuma longa. This study proposes a new feature selection method for the identification of the antitumor compounds in turmeric total extracts. The chemical composition of turmeric total extracts was analyzed by gas chromatography-mass spectrometry (21 ingredients) and high-performance liquid chromatography-mass spectrometry (22 ingredients), and their cytotoxicity was determined by a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay against HeLa cells. A support vector machine for regression and a generalized regression neural network were used to investigate the composition-activity relationship and were then combined with the mean impact value to identify the antitumor compounds. The results showed that six volatile constituents (three terpenes and three ketones) and seven nonvolatile constituents (five curcuminoids and two unknown ingredients) with high absolute mean impact values exhibited a significant correlation with the cytotoxicity against HeLa cells. With the exception of the two unknown ingredients, the 11 identified constituents have been reported to exhibit cytotoxicity. This finding indicates that the feature selection method may be a supplementary tool for the identification of active compounds from herbs.
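
    The mean impact value (MIV) step can be sketched as below, under the common formulation in which each feature is perturbed by plus and minus 10% and the mean change in the model's prediction is recorded; the data, the SVR model and the 10% step size are illustrative assumptions.

        # Sketch of mean impact value (MIV) ranking on top of a fitted regressor;
        # the extract composition data and response are synthetic placeholders.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        X = rng.random((30, 5))                     # 30 extracts x 5 constituent levels
        y = X @ np.array([0.8, 0.1, 0.0, 0.5, 0.05]) + 0.05 * rng.standard_normal(30)
        model = SVR(kernel='rbf').fit(X, y)

        miv = []
        for j in range(X.shape[1]):
            up, down = X.copy(), X.copy()
            up[:, j] *= 1.10                        # +10% perturbation (assumed step)
            down[:, j] *= 0.90                      # -10% perturbation
            miv.append(np.mean(model.predict(up) - model.predict(down)))
        print(np.argsort(-np.abs(miv)))             # constituents ranked by |MIV|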

  5. Feature extraction and object recognition in multi-modal forward looking imagery

    NASA Astrophysics Data System (ADS)

    Greenwood, G.; Blakely, S.; Schartman, D.; Calhoun, B.; Keller, J. M.; Ton, T.; Wong, D.; Soumekh, M.

    2010-04-01

    The U. S. Army Night Vision and Electronic Sensors Directorate (NVESD) recently tested an explosive-hazards detection vehicle that combines a pulsed FLGPR with a visible-spectrum color camera. Additionally, NVESD tested a human-in-the-loop multi-camera system with the same goal in mind. It contains wide field-of-view color and infrared cameras as well as zoomable narrow field-of-view versions of those modalities. Even though they are separate vehicles, having information from both systems offers great potential for information fusion. Based on previous work at the University of Missouri, we are not only able to register the UTM-based positions of the FLGPR to the color image sequences on the first system, but we can register these locations to corresponding image frames of all sensors on the human-in-the-loop platform. This paper presents our approach to first generate libraries of multi-sensor information across these platforms. Subsequently, research is performed in feature extraction and recognition algorithms based on the multi-sensor signatures. Our goal is to tailor specific algorithms to recognize and eliminate different categories of clutter and to be able to identify particular explosive hazards. We demonstrate our library creation, feature extraction and object recognition results on a large data collection at a US Army test site.

  6. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.
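
    A minimal sketch of the scale-selection idea is given below: a Morlet continuous wavelet transform, kurtosis per scale, and soft-thresholding of the coefficients at the most impulsive scales. The paper's Shannon-entropy parameter optimization is not reproduced, and the signal, scale range and threshold are assumptions.

        # Sketch: Morlet CWT, kurtosis-based scale selection and soft-thresholding.
        # Signal, scales and the threshold rule are illustrative assumptions.
        import numpy as np
        import pywt
        from scipy.stats import kurtosis

        fs = 12000
        x = np.random.randn(fs)
        x[::2000] += 5.0                            # synthetic periodic impulses

        scales = np.arange(1, 65)
        coef, freqs = pywt.cwt(x, scales, 'morl', sampling_period=1 / fs)
        k = kurtosis(coef, axis=1)                  # impulsiveness of each scale
        best = np.argsort(k)[-3:]                   # keep several characteristic scales
        denoised = pywt.threshold(coef[best], value=np.median(np.abs(coef[best])),
                                  mode='soft')      # soft-threshold the kept coefficients
        print(freqs[best], denoised.shape)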

  7. Nonlinear and nonstationary framework for feature extraction and classification of motor imagery.

    PubMed

    Trad, Dalila; Al-ani, Tarik; Monacelli, Eric; Jemni, Mohamed

    2011-01-01

    In this work we investigate a nonlinear approach for feature extraction of Electroencephalogram (EEG) signals in order to classify motor imagery for a Brain Computer Interface (BCI). This approach is based on Empirical Mode Decomposition (EMD) and band power (BP). The EMD method is a data-driven technique for analyzing non-stationary and nonlinear signals. It generates a set of stationary time series called Intrinsic Mode Functions (IMF) to represent the original data. These IMFs are analyzed with the power spectral density (PSD) to study the active frequency range corresponding to the motor imagery for each subject. Then, the band power is computed within a certain frequency range in the channels. Finally, the data are reconstructed using only the selected IMFs, and the band power is then computed on this reconstructed dataset. The classification of motor imagery was performed using two classifiers, Linear Discriminant Analysis (LDA) and Hidden Markov Models (HMMs). The results obtained show that the EMD method allows the most reliable features to be extracted from EEG and that the classification rate obtained is higher than that of the direct BP approach alone.
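
    A rough sketch of the EMD-plus-band-power idea follows, assuming the third-party EMD-signal (PyEMD) package is available; the EEG data, the mu-band limits and the IMF selection rule are placeholders, not the authors' protocol.

        # Sketch: EMD followed by band power, assuming the EMD-signal package
        # (pip install EMD-signal) is available; data and selection rule are placeholders.
        import numpy as np
        from PyEMD import EMD                       # third-party package (assumption)
        from scipy.signal import welch

        fs = 250
        eeg = np.random.randn(10 * fs)              # placeholder single-channel EEG
        imfs = EMD().emd(eeg)                       # intrinsic mode functions

        def band_power(sig, lo=8.0, hi=12.0):
            f, pxx = welch(sig, fs=fs, nperseg=fs)
            mask = (f >= lo) & (f <= hi)
            return np.trapz(pxx[mask], f[mask])

        # Keep IMFs that retain a sizeable share of the band power (illustrative rule),
        # then reconstruct and compute the band power of the reconstruction.
        selected = [imf for imf in imfs if band_power(imf) > 0.5 * band_power(eeg)]
        reconstructed = np.sum(selected, axis=0) if selected else eeg
        print(band_power(reconstructed))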

  8. A Feature Extraction Method for Vibration Signal of Bearing Incipient Degradation

    NASA Astrophysics Data System (ADS)

    Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli; Guo, Liang; Li, Dan; Wen, Juan

    2016-06-01

    Detection of incipient degradation demands extracting sensitive features accurately when the signal-to-noise ratio (SNR) is very poor, which is the case in most industrial environments. Vibration signals of rolling bearings are widely used for bearing fault diagnosis. In this paper, we propose a feature extraction method that combines Blind Source Separation (BSS) and Spectral Kurtosis (SK) to separate independent noise sources. Normal and incipient fault signals from vibration tests of rolling bearings are processed. We studied 16 groups of vibration signals of incipient degradation, all of which display an increase in kurtosis after they are processed by a BSS filter. Compared with conventional kurtosis, theoretical studies of SK trends show that the SK levels vary with frequency, and experimental studies show that SK trends of measured bearing vibration signals vary with the amount and level of impulses in both vibration and noise signals due to bearing faults. It is found that the peak values of SK increase when vibration signals of incipient faults are processed by a BSS filter. This pre-processing by a BSS filter makes SK more sensitive to impulses caused by performance degradation of bearings.
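
    The spectral-kurtosis quantity whose peaks are tracked here can be estimated from an STFT as in the sketch below; the BSS pre-filtering step is omitted and the signal is a synthetic stand-in.

        # Sketch: spectral kurtosis estimated from an STFT, SK(f) = E[|X|^4]/E[|X|^2]^2 - 2.
        # The BSS pre-filter described in the paper is not shown.
        import numpy as np
        from scipy.signal import stft

        fs = 20000
        x = np.random.randn(fs)
        x[::400] += 3.0                             # impulses from a synthetic fault

        f, t, X = stft(x, fs=fs, nperseg=256, noverlap=192)
        mag2 = np.abs(X) ** 2
        sk = np.mean(mag2 ** 2, axis=1) / np.mean(mag2, axis=1) ** 2 - 2.0
        print(f[np.argmax(sk)], sk.max())           # band most excited by the impulses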

  9. Improving the accuracy of feature extraction for flexible endoscope calibration by spatial super resolution.

    PubMed

    Rupp, Stephan; Elter, Matthias; Winter, Christian

    2007-01-01

    Many applications in the domain of medical as well as industrial image processing make considerable use of flexible endoscopes - so-called fiberscopes - to gain visual access to holes, hollows, antrums and cavities that are difficult to enter and examine. For a complete exploration and understanding of an antrum, 3d depth information might be desirable or even necessary. This often requires the mapping of 3d world coordinates to 2d image coordinates, which is estimated by camera calibration. In order to retrieve useful results, the precise extraction of the imaged calibration pattern's markers plays a decisive role in the camera calibration process. Unfortunately, when utilizing fiberscopes, the image conductor introduces a disturbing comb structure to the images that hampers a (precise) marker extraction. Since the calibration quality crucially depends on subpixel-precise calibration marker positions, we apply static comb structure removal algorithms along with a dynamic spatial resolution enhancement method in order to improve the feature extraction accuracy. In our experiments, we demonstrate that our approach results in a more accurate calibration of flexible endoscopes and thus allows for a more precise reconstruction of 3d information from fiberoptic images. PMID:18003530

  10. Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data

    SciTech Connect

    Danny L. Anderson

    2012-05-01

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square-error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve the efficiency and accuracy of hydrographic feature extraction. The direct delineation method developed herein, termed “mDn”, is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, using the LiDAR data points within each sector to determine an average slope, and selecting the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matched the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
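
    A minimal sketch of the sector-based flow-direction step described for mDn is given below: LiDAR points around the current location are binned into angular sectors, each sector's average slope is computed, and the steepest downward sector sets the flow direction. The sector count, search radius and point cloud are assumptions.

        # Sketch of the mDn flow-direction step as described in the abstract;
        # sector count, radius and the random point cloud are assumptions.
        import numpy as np

        def mdn_direction(points, origin, n_sectors=8, radius=5.0):
            """points: (N, 3) LiDAR x, y, z; origin: (x, y, z) of the current location."""
            d = points[:, :2] - origin[:2]
            dist = np.hypot(d[:, 0], d[:, 1])
            near = (dist > 0) & (dist <= radius)
            angles = np.arctan2(d[near, 1], d[near, 0])
            slopes = (points[near, 2] - origin[2]) / dist[near]   # negative = downhill
            sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
            mean_slope = np.array([slopes[sector == s].mean() if np.any(sector == s) else np.inf
                                   for s in range(n_sectors)])
            best = int(np.argmin(mean_slope))                     # steepest downward sector
            return best, 2 * np.pi * (best + 0.5) / n_sectors - np.pi

        pts = np.random.rand(500, 3) * [20, 20, 2]
        print(mdn_direction(pts, origin=np.array([10.0, 10.0, 1.5])))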

  11. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    PubMed Central

    2011-01-01

    Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods The FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20 minutes duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components are input as features to a support vector machine (SVM) to classify FHR samples. Results For the training set, a five-fold cross validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was .923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was .684. Conclusions Based on the overall performance of the system it can be stated that the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712

  12. Protein sequences classification by means of feature extraction with substitution matrices

    PubMed Central

    2010-01-01

    Background This paper deals with the preprocessing of protein sequences for supervised classification. Motif extraction is one way to address that task. It has been largely used to encode biological sequences into feature vectors to enable using well-known machine-learning classifiers which require this format. However, designing a suitable feature space, for a set of proteins, is not a trivial task. For this purpose, we propose a novel encoding method that uses amino-acid substitution matrices to define similarity between motifs during the extraction step. Results In order to demonstrate the efficiency of such an approach, we compare several encoding methods using some machine learning classifiers. The experimental results showed that our encoding method outperforms the other ones in terms of classification accuracy and number of generated attributes. We also compared the classifiers in terms of accuracy. Results indicated that SVM generally outperforms the other classifiers with any encoding method. We showed that SVM, coupled with our encoding method, can be an efficient protein classification system. In addition, we studied the effect of the substitution matrices variation on the quality of our method and hence on the classification quality. We noticed that our method enables good classification accuracies with all the substitution matrices and that the variances of the obtained accuracies using various substitution matrices are small. However, the number of generated features varies from one substitution matrix to another. Furthermore, the use of already published datasets allowed us to carry out a comparison with several related works. Conclusions The outcomes of our comparative experiments confirm the efficiency of our encoding method to represent protein sequences in classification tasks. PMID:20377887

  13. Affective Video Retrieval: Violence Detection in Hollywood Movies by Large-Scale Segmental Feature Extraction

    PubMed Central

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology “out of the lab” to real-world, diverse data. In this contribution, we address the problem of finding “disturbing” scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis. PMID:24391704

  14. Probability-based diagnostic imaging using hybrid features extracted from ultrasonic Lamb wave signals

    NASA Astrophysics Data System (ADS)

    Zhou, Chao; Su, Zhongqing; Cheng, Li

    2011-12-01

    The imaging technique based on guided waves has been a research focus in the field of damage detection over the years, aimed at intuitively highlighting structural damage in two- or three-dimensional images. The accuracy and efficiency of this technique substantially rely on the means of defining the field values at image pixels. In this study, a novel probability-based diagnostic imaging (PDI) approach was developed. Hybrid signal features (including temporal information, intensity of signal energy and signal correlation) were extracted from ultrasonic Lamb wave signals and integrated to retrofit the traditional way of defining field values. To acquire hybrid signal features, an active sensor network in line with pulse-echo and pitch-catch configurations was designed, supplemented with a novel concept of 'virtual sensing'. A hybrid image fusion scheme was developed to enhance the tolerance of the approach to measurement noise/uncertainties and erroneous perceptions from individual sensors. As applications, the approach was employed to identify representative damage scenarios including L-shape through-thickness crack (orientation-specific damage), polygonal damage (multi-edge damage) and multi-damage in structural plates. Results have corroborated that the developed PDI approach based on the use of hybrid signal features is capable of visualizing structural damage quantitatively, regardless of damage shape and number, by highlighting its individual edges in an easily interpretable binary image.

  15. Cerebral Glioma Grading Using Bayesian Network with Features Extracted from Multiple Modalities of Magnetic Resonance Imaging

    PubMed Central

    Wang, Huiting; Liu, Renyuan; Zhang, Xin; Li, Ming; Yang, Yongbo; Yan, Jing; Niu, Fengnan; Tian, Chuanshuai; Wang, Kun; Yu, Haiping; Chen, Weibo; Wan, Suiren; Sun, Yu; Zhang, Bing

    2016-01-01

    Many modalities of magnetic resonance imaging (MRI) have been confirmed to be of great diagnostic value in glioma grading. Contrast enhanced T1-weighted imaging allows the recognition of blood-brain barrier breakdown. Perfusion weighted imaging and MR spectroscopic imaging enable the quantitative measurement of perfusion parameters and metabolic alterations respectively. These modalities can potentially improve the grading process in glioma if combined properly. In this study, Bayesian Network, which is a powerful and flexible method for probabilistic analysis under uncertainty, is used to combine features extracted from contrast enhanced T1-weighted imaging, perfusion weighted imaging and MR spectroscopic imaging. The networks were constructed using K2 algorithm along with manual determination and distribution parameters learned using maximum likelihood estimation. The grading performance was evaluated in a leave-one-out analysis, achieving an overall grading accuracy of 92.86% and an area under the curve of 0.9577 in the receiver operating characteristic analysis given all available features observed in the total 56 patients. Results and discussions show that Bayesian Network is promising in combining features from multiple modalities of MRI for improved grading performance. PMID:27077923

  16. Cerebral Glioma Grading Using Bayesian Network with Features Extracted from Multiple Modalities of Magnetic Resonance Imaging.

    PubMed

    Hu, Jisu; Wu, Wenbo; Zhu, Bin; Wang, Huiting; Liu, Renyuan; Zhang, Xin; Li, Ming; Yang, Yongbo; Yan, Jing; Niu, Fengnan; Tian, Chuanshuai; Wang, Kun; Yu, Haiping; Chen, Weibo; Wan, Suiren; Sun, Yu; Zhang, Bing

    2016-01-01

    Many modalities of magnetic resonance imaging (MRI) have been confirmed to be of great diagnostic value in glioma grading. Contrast enhanced T1-weighted imaging allows the recognition of blood-brain barrier breakdown. Perfusion weighted imaging and MR spectroscopic imaging enable the quantitative measurement of perfusion parameters and metabolic alterations respectively. These modalities can potentially improve the grading process in glioma if combined properly. In this study, Bayesian Network, which is a powerful and flexible method for probabilistic analysis under uncertainty, is used to combine features extracted from contrast enhanced T1-weighted imaging, perfusion weighted imaging and MR spectroscopic imaging. The networks were constructed using K2 algorithm along with manual determination and distribution parameters learned using maximum likelihood estimation. The grading performance was evaluated in a leave-one-out analysis, achieving an overall grading accuracy of 92.86% and an area under the curve of 0.9577 in the receiver operating characteristic analysis given all available features observed in the total 56 patients. Results and discussions show that Bayesian Network is promising in combining features from multiple modalities of MRI for improved grading performance.

  17. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 1: time domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    The paper presents an application of the gamma-absorption method to study gas-liquid two-phase flow in a horizontal pipeline. In the tests on a laboratory installation, two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals were used. The experimental set-up allows recording of stochastic signals, which describe the instantaneous content of the stream in a particular cross-section of the flow mixture. The analysis of these signals by statistical methods allows the mean velocity of the gas phase to be determined. Meanwhile, selected features of the signals provided by the absorption set can be applied to recognition of the structure of the flow. In this work, three structures of air-water flow were considered: plug, bubble, and transitional plug-bubble flow. The recorded raw signals were analyzed in the time domain and several features were extracted. It was found that signal features such as the mean, standard deviation, root mean square (RMS), variance and 4th moment are the most useful for recognizing the structure of the flow.
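
    The time-domain features named above are straightforward to compute; a short sketch on a placeholder record follows.

        # Sketch: the time-domain features listed for flow-regime recognition
        # (mean, standard deviation, RMS, variance, 4th central moment).
        import numpy as np

        signal = np.random.randn(4096)              # placeholder gamma-densitometry record
        features = {
            'mean': signal.mean(),
            'std': signal.std(),
            'rms': np.sqrt(np.mean(signal ** 2)),
            'variance': signal.var(),
            'moment4': np.mean((signal - signal.mean()) ** 4),   # 4th central moment
        }
        print(features)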

  18. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    NASA Astrophysics Data System (ADS)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High density silicon carbide materials are commonly used as the ceramic element of hard armour inserts used in traditional body armour systems to reduce their weight, while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected visually offline using an X-ray technique that is time consuming and very expensive. In addition, multiple defects are often misinterpreted as single defects in X-ray images. Therefore, to address these problems the ultrasonic non-destructive approach is being investigated. Ultrasound based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for purposes of signal classification. Two different classifiers, using artificial neural networks (supervised) and clustering (un-supervised), are supplied with features selected using Principal Component Analysis (PCA) and their classification performance compared. This investigation establishes experimentally that PCA can be effectively used as a feature selection method that provides superior results for classifying various defects in the context of ultrasonic inspection, in comparison with the X-ray technique.
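
    A compact sketch of the sub-band coding idea follows: wavelet decomposition of an A-scan, energy per band as features, PCA for feature selection and a small neural-network classifier. The wavelet, decomposition depth, network size and synthetic data are assumptions, not the authors' configuration.

        # Sketch: wavelet sub-band energies as features, PCA selection, ANN classifier.
        # Wavelet, depth, network size and the synthetic A-scans are assumptions.
        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)

        def subband_energies(ascan, wavelet='db4', level=5):
            coeffs = pywt.wavedec(ascan, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs])    # one energy per band

        X = np.array([subband_energies(rng.standard_normal(1024)) for _ in range(60)])
        y = rng.integers(0, 2, size=60)             # defect / no-defect (synthetic labels)
        X_red = PCA(n_components=3).fit_transform(X)
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(X_red, y)
        print(clf.score(X_red, y))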

  19. A Distinguishing Arterial Pulse Waves Approach by Using Image Processing and Feature Extraction Technique.

    PubMed

    Chen, Hsing-Chung; Kuo, Shyi-Shiun; Sun, Shen-Ching; Chang, Chia-Hui

    2016-10-01

    Traditional Chinese Medicine (TCM) is based on five main types of diagnostic methods: inspection, auscultation, olfaction, inquiry, and palpation. The most important is palpation, also called pulse diagnosis, in which the doctor assesses the patient's health state by feeling the wrist arterial pulse with the fingers. In this paper, pulse-type classification is carried out using a specialized pulse measuring instrument. The measured pulse waves (MPWs) were segmented into the arterial pulse wave curve (APWC) by an image processing method. The slopes and periods among four specific points on the APWC were taken as the pulse features. Three algorithms are proposed in this paper to extract these features from the APWCs and to compare the differences between each of them and the average feature matrix, individually. The results show that the method proposed in this study is superior and more accurate than those of previous studies. The proposed method could significantly save doctors a large amount of time, increase accuracy and decrease data volume. PMID:27562483

  20. REGARDING THE LINE-OF-SIGHT BARYONIC ACOUSTIC FEATURE IN THE SLOAN DIGITAL SKY SURVEY AND BARYON OSCILLATION SPECTROSCOPIC SURVEY LUMINOUS RED GALAXY SAMPLES

    SciTech Connect

    Kazin, Eyal A.; Blanton, Michael R.; Scoccimarro, Roman; McBride, Cameron K.; Berlind, Andreas A.

    2010-08-20

    We analyze the line-of-sight baryonic acoustic feature in the two-point correlation function ξ of the Sloan Digital Sky Survey luminous red galaxy (LRG) sample (0.16 < z < 0.47). By defining a narrow line-of-sight region, r_p < 5.5 h^-1 Mpc, where r_p is the transverse separation component, we measure a strong excess of clustering at ~110 h^-1 Mpc, as previously reported in the literature. We also test these results in an alternative coordinate system, by defining the line of sight as θ < 3°, where θ is the opening angle. This clustering excess appears much stronger than the feature in the better-measured monopole. A fiducial ΛCDM nonlinear model in redshift space predicts a much weaker signature. We use realistic mock catalogs to model the expected signal and noise. We find that the line-of-sight measurements can be explained well by our mocks as well as by a featureless ξ = 0. We conclude that there is no convincing evidence that the strong clustering measurement is the line-of-sight baryonic acoustic feature. We also evaluate how detectable such a signal would be in the upcoming Baryon Oscillation Spectroscopic Survey (BOSS) LRG volume. Mock LRG catalogs (z < 0.6) suggest that (1) the narrow line-of-sight cylinder and cone defined above probably will not reveal a detectable acoustic feature in BOSS; (2) a clustering measurement as high as that in the current sample can be ruled out (or confirmed) at a high confidence level using a BOSS-sized data set; (3) an analysis with wider angular cuts, which provide better signal-to-noise ratios, can nevertheless be used to compare line-of-sight and transverse distances, and thereby constrain the expansion rate H(z) and diameter distance D_A(z).

  1. The research of edge extraction and target recognition based on inherent feature of objects

    NASA Astrophysics Data System (ADS)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions, and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on the inherent features of objects, in which a cuboid was taken as the model. On the basis of an analysis of the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique was utilized to recognize and segment the target. Then the Hough transform was used to extract and match the model's main edges, and the target edges were finally reconstructed by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field were summed up. With these, the needless computations and searches in Hough transform processing can be reduced greatly and the efficiency is improved. Secondly, as the prior knowledge about the cuboid contour's geometry is known already, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed based on these intersections rather than on the extracted edges themselves. Therefore the outlines are enhanced and the noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGV, robots
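
    The Hough-based edge extraction step can be sketched as below, assuming OpenCV is available; the synthetic rectangle stands in for an imaged cuboid face, and the edge-detector and voting thresholds are illustrative.

        # Sketch: Canny edges plus a probabilistic Hough transform to recover straight
        # edge segments; the drawn rectangle is a stand-in for a cuboid face.
        import numpy as np
        import cv2

        img = np.zeros((200, 200), dtype=np.uint8)
        cv2.rectangle(img, (50, 60), (150, 140), color=255, thickness=2)

        edges = cv2.Canny(img, 50, 150)
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=5)
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                print((x1, y1), '->', (x2, y2))     # candidate edges for model matching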

  2. Extraction of Target Scatterings from Received Transients on Target Detection Trial of Ambient Noise Imaging with Acoustic Lens

    NASA Astrophysics Data System (ADS)

    Mori, Kazuyoshi; Ogasawara, Hanako; Nakamura, Toshiaki; Tsuchiya, Takenobu; Endoh, Nobuyuki

    2012-07-01

    We have already designed and fabricated an aspherical lens with an aperture diameter of 1.0 m to develop a prototype system for ambient noise imaging (ANI). It has also been verified that this acoustic lens realizes a directional resolution, which is a beam width of 1° at the center frequency of 120 kHz over the field of view from -7 to +7°. In this study, a sea trial of silent target detection using the prototype ANI system was conducted under only natural ocean ambient noise at Uchiura Bay, in November of 2010. There were many transients in the received sound. These transients were classified roughly into directly received noises and target scatterings. We proposed a classification method to extract transients of only target scatterings. By analyzing transients extracted as target scatterings, it was verified that the power spectrum density levels of the on-target directions were greater than those of the off-target directions in the higher frequency band over 60 kHz. These results showed that the targets are successfully detected under natural ocean ambient noise, mainly generated by snapping shrimps.

  3. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from whole breast area only; (2) a classifier using asymmetry bilateral features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  4. Extraction of Airport Features from High Resolution Satellite Imagery for Design and Risk Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, Chris; Qiu, You-Liang; Jensen, John R.; Schill, Steven R.; Floyd, Mike

    2001-01-01

    The LPA Group, consisting of 17 offices located throughout the eastern and central United States, is an architectural, engineering and planning firm specializing in the development of Airports, Roads and Bridges. The primary focus of this ARC project is to assist their aviation specialists who work in the areas of Airport Planning, Airfield Design, Landside Design, Terminal Building Planning and design, and various other construction services. The LPA Group wanted to test the utility of high-resolution commercial satellite imagery for the purpose of extracting airport elevation features in the glide path areas surrounding the Columbia Metropolitan Airport. By incorporating remote sensing techniques into their airport planning process, LPA wanted to investigate whether or not it is possible to save time and money while achieving the equivalent accuracy as traditional planning methods. The Affiliate Research Center (ARC) at the University of South Carolina investigated the use of remotely sensed imagery for the extraction of feature elevations in the glide path zone. A stereo pair of IKONOS panchromatic satellite images, which has a spatial resolution of 1 x 1 m, was used to determine elevations of aviation obstructions such as buildings, trees, towers and fence-lines. A validation dataset was provided by the LPA Group to assess the accuracy of the measurements derived from the IKONOS imagery. The initial goal of this project was to test the utility of IKONOS imagery in feature extraction using ERDAS Stereo Analyst. This goal was never achieved due to problems with ERDAS software support of the IKONOS sensor model and the unavailability of imperative sensor model information from Space Imaging. The obstacles encountered in this project pertaining to ERDAS Stereo Analyst and IKONOS imagery will be reviewed in more detail later in this report. As a result of the technical difficulties with Stereo Analyst, ERDAS OrthoBASE was used to derive aviation

  5. Texture feature extraction and analysis for polyp differentiation via computed tomography colonography

    PubMed Central

    Hu, Yifan; Song, Bowen; Han, Hao; Pickhardt, Perry J.; Zhu, Wei; Duan, Chaijie; Zhang, Hao; Barish, Matthew A.; Lascarides, Chris E.

    2016-01-01

    Image textures in computed tomography colonography (CTC) have great potential for differentiating non-neoplastic from neoplastic polyps and thus can advance the current CTC detection-only paradigm to a new level toward optimal polyp management to prevent the deadly colorectal cancer. However, image textures are frequently compromised due to noise smoothing and other error-correction operations in most CT image reconstructions. Furthermore, because of polyp orientation variation in patient space, texture features extracted in that space can vary accordingly, resulting in variable results. To address these issues, this study proposes an adaptive approach to extract and analyze the texture features for polyp differentiation. Firstly, derivative operations are performed on the CT intensity image to amplify the textures, e.g. in the 1st order derivative (gradient) and 2nd order derivative (curvature) images, with adequate noise control. Then the Haralick co-occurrence matrix (CM) is used to calculate texture measures along each of the 13 directions (defined by the 1st and 2nd order image voxel neighbors) through the polyp volume in the intensity, gradient and curvature images. Instead of taking the mean and range of each CM measure over the 13 directions as the so-called Haralick texture features, the Karhunen-Loeve transform is performed to map the 13 directions into an orthogonal coordinate system where all the CM measures are projected onto the new coordinates so that the resulted texture features are less dependent on the polyp spatial orientation variation. While the ideas for amplifying textures and stabilizing spatial variation are simple, their impacts are significant for the task of differentiating non-neoplastic from neoplastic polyps as demonstrated by experiments using 384 polyp datasets, of which 52 are non-neoplastic polyps and the rest are neoplastic polyps. By the merit of area under the curve of receiver operating characteristic, the innovative ideas
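
    A simplified sketch of the co-occurrence-matrix step follows, computing Haralick-style measures per direction and decorrelating them with a Karhunen-Loeve (PCA) transform instead of taking the mean and range. A 2D patch with four directions stands in for the paper's 13 directions through the polyp volume, and the gray-level count and chosen measures are assumptions.

        # Sketch: directional co-occurrence measures decorrelated with PCA (a simplified
        # analogue of the Karhunen-Loeve step); patches and parameters are assumptions.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        patches = [rng.integers(0, 32, size=(24, 24), dtype=np.uint8) for _ in range(20)]
        angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]

        def directional_measures(patch):
            glcm = graycomatrix(patch, distances=[1], angles=angles, levels=32,
                                symmetric=True, normed=True)
            props = [graycoprops(glcm, p)[0] for p in ('contrast', 'homogeneity', 'energy')]
            return np.concatenate(props)            # one value per measure per direction

        X = np.array([directional_measures(p) for p in patches])
        X_kl = PCA(n_components=4).fit_transform(X) # orientation-decorrelated features
        print(X_kl.shape)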

  6. Estimation and Extraction of Radar Signal Features Using Modified B Distribution and Particle Filters

    NASA Astrophysics Data System (ADS)

    Mikluc, Davorin; Bujaković, Dimitrije; Andrić, Milenko; Simić, Slobodan

    2016-09-01

    The research analyses the application of particle filters in estimating and extracting the features of the time-frequency energy distribution of radar signals. The time-frequency representation is calculated using the modified B distribution, where the estimation process model represents one time bin. An adaptive criterion for the calculation of the particle weighting coefficients, whose main parameters are the frequency integral squared error and the estimated maximum of the mean power spectral density per time bin, is proposed. The analysis of the suggested estimation application has been performed on a generated signal in the absence of any noise, and subsequently on modelled and recorded real radar signals. The advantage of the suggested method is that it resolves the problem of interrupted instantaneous-frequency estimates, which arises when these estimates are determined from the maximum of the energy distribution, as in the case of intersecting frequency components in a multicomponent signal.

  7. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems used for image archiving continues to be one of the important research topics. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. In the presented study, the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems is examined. The algorithms investigated in this study are based on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, accepted as spatial methods. In the experiments, the database is built from hundreds of medical images such as brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM give satisfactory results. However, it is observed that the Gabor wavelet has been the most effective and accurate method. PMID:25227014

  8. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems used for image archiving continues to be one of the important research topics. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. In the presented study, the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems is examined. The algorithms investigated in this study are based on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, accepted as spatial methods. In the experiments, the database is built from hundreds of medical images such as brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM give satisfactory results. However, it is observed that the Gabor wavelet has been the most effective and accurate method.

  9. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes.

    PubMed

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In current molecular biology, it is becoming increasingly important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify the differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix and the Lp-norm is taken as the error function to improve the robustness to outliers in the gene expression data. The results on simulation data show that our method can obtain higher identification accuracies than the competing methods. Numerous experiments on real gene expression data sets demonstrate that our method can identify more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006

  10. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

    PubMed Central

    Kumar, Neeraj

    2016-01-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  11. Landsat TM image feature extraction and analysis of algal bloom in Taihu Lake

    NASA Astrophysics Data System (ADS)

    Wei, Yuchun; Chen, Wei

    2008-04-01

    This study developed an approach to the extraction and characterization of blue-green algal blooms in the study area of Taihu Lake, China, using Landsat 5 TM imagery. Spectral features of typical materials within Taihu Lake were first compared, and the spectral bands most sensitive to blue-green algal blooms were determined. Eight spectral indices were then designed using multiple TM spectral bands in order to maximize the spectral contrast of different materials. The spectral curves describing the variation of reflectance in individual bands with the spectral indices were plotted, and the TM imagery was segmented using the abrupt-change points of the reflectance curves as thresholds. The results indicate that the proposed multiple-band spectral index NDAI2 (NDAI2 = (B4-B1)*(B5-B3)/(B4+B5+B1+B3)) performed better than the traditional vegetation indices NDVI and RVI in the extraction of blue-green algal information. In addition, this study indicates that image segmentation using the points where reflectance changes abruptly produced robust results and good applicability.
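
    The NDAI2 index quoted above can be applied directly to the band arrays, as in the sketch below; the band data and segmentation threshold are placeholders, since the paper derives its thresholds from the reflectance curves.

        # Sketch: the NDAI2 index as given in the abstract, on placeholder TM bands.
        import numpy as np

        rng = np.random.default_rng(4)
        B1, B3, B4, B5 = (rng.random((100, 100)) for _ in range(4))  # placeholder band arrays

        ndai2 = (B4 - B1) * (B5 - B3) / (B4 + B5 + B1 + B3)
        bloom_mask = ndai2 > 0.05                   # illustrative threshold, not the paper's
        print(bloom_mask.mean())                    # fraction of pixels flagged as bloom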

  12. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    PubMed Central

    Subhi Al-batah, Mohammad; Mat Isa, Nor Ashidi; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in a parallel combination to produce a model with a multi-input-multi-output structure. The system is capable of classifying cervical cell images into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316

  13. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, B.D.; Brothers, L.L.; Barnhardt, W.A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m3 of continental shelf erosion, but few numerical analyses of their morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km2 of bathymetry collected in the Belfast Bay, Maine, USA pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter ratio field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. Results enable quantitative comparison of pockmarks in fields worldwide as well as of similar concave features, such as impact craters, dolines, or salt pools.

  14. Decision tree for smart feature extraction from sleep HR in bipolar patients.

    PubMed

    Migliorini, Matteo; Mariani, Sara; Bianchi, Anna M

    2013-01-01

    The aim of this work is the creation of a completely automatic method for the extraction of informative parameters from peripheral signals recorded through a sensorized T-shirt. The acquired data belong to patients affected by bipolar disorder, and consist of RR series, body movements and activity type. The extracted features, i.e. linear and non-linear HRV parameters in the time domain, HRV parameters in the frequency domain, and parameters indicative of sleep quality, profile and fragmentation, are of interest for the automatic classification of the clinical mood state. The analysis of this dataset, which is to be performed online and automatically, must address the problems related to the clinical protocol, which also includes a segment of recording in which the patient is awake, and to the nature of the device, which can be sensitive to movements and misplacement. Thus, the decision tree implemented in this study performs the detection and isolation of the sleep period, the elimination of corrupted recording segments and the checking of the minimum requirements of the signals for every parameter to be calculated. PMID:24110866
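
    A brief sketch of representative HRV features of the kind extracted here (SDNN, RMSSD and an LF/HF ratio) is shown below; the RR series is synthetic, the 4 Hz resampling rate and band limits are conventional choices, and the decision-tree logic itself is not reproduced.

        # Sketch: time- and frequency-domain HRV features from an RR series.
        # The RR data are synthetic and the resampling rate is an assumption.
        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(5)
        rr = 0.85 + 0.05 * rng.standard_normal(600) # RR intervals in seconds (placeholder)

        sdnn = rr.std()
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))

        # Evenly resample the RR series at 4 Hz before spectral analysis.
        t = np.cumsum(rr)
        grid = np.arange(t[0], t[-1], 0.25)
        rr_even = np.interp(grid, t, rr)
        f, pxx = welch(rr_even - rr_even.mean(), fs=4.0, nperseg=256)
        lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
        hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
        print(sdnn, rmssd, lf / hf)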

  15. Multiple adaptive neuro-fuzzy inference system with automatic features extraction algorithm for cervical cancer recognition.

    PubMed

    Al-batah, Mohammad Subhi; Isa, Nor Ashidi Mat; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in a parallel combination to produce a model with a multi-input-multi-output structure. The system is capable of classifying cervical cell images into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316

  16. Investigation of automated feature extraction techniques for applications in cancer detection from multispectral histopathology images

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Levenson, Richard M.; Rimm, David L.

    2003-05-01

    Recent developments in imaging technology mean that it is now possible to obtain high-resolution histological image data at multiple wavelengths. This allows pathologists to image specimens over a full spectrum, thereby revealing (often subtle) distinctions between different types of tissue. With this type of data, the spectral content of the specimens, combined with quantitative spatial feature characterization, may make it possible not only to identify the presence of an abnormality, but also to classify it accurately. However, such are the quantities and complexities of these data that without new automated techniques to assist in the data analysis, the information contained in the data will remain inaccessible to those who need it. We investigate the application of a recently developed system for the automated analysis of multi-/hyper-spectral satellite image data to the problem of cancer detection from multispectral histopathology image data. The system provides a means for a human expert to provide training data simply by highlighting regions in an image using a computer mouse. Application of these feature extraction techniques to examples of both training and out-of-training-sample data demonstrates that these, as yet unoptimized, techniques already show promise in the discrimination between benign and malignant cells from a variety of samples.

  17. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a unique pattern that can be used for biometric recognition. To identify texture in an image, texture analysis methods can be used. One such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five wavelets mentioned was carried out, and a comparative analysis was then conducted, from which some conclusions were drawn. Several steps were performed. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, further tests were conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate is achieved using Haar; moreover, for coefficient cutting at C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate, or the number of significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
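
    The energy feature and nearest-template matching step can be sketched as below; the iris images are random placeholders, the decomposition depth is an assumption, and the normalization used in the distance is one plausible reading of "normalized Euclidean distance".

        # Sketch: 2D wavelet energy features and nearest-template matching for iris
        # recognition; images, depth and the distance normalization are assumptions.
        import numpy as np
        import pywt

        rng = np.random.default_rng(6)

        def wavelet_energy(img, wavelet='haar', level=3):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            energies = [np.sum(coeffs[0] ** 2)]
            for detail in coeffs[1:]:
                energies.extend(np.sum(d ** 2) for d in detail)
            return np.array(energies)

        probe = wavelet_energy(rng.random((64, 256)))            # placeholder iris strip
        references = {name: wavelet_energy(rng.random((64, 256))) for name in ('id1', 'id2')}
        dists = {name: np.linalg.norm((probe - ref) / (np.abs(ref) + 1e-9))
                 for name, ref in references.items()}            # one normalization choice
        print(min(dists, key=dists.get))                         # best-matching identity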

  18. Interpretation of fingerprint image quality features extracted by self-organizing maps

    NASA Astrophysics Data System (ADS)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands development of lightweight methods for operational environment. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as an input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by proposing additionally three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  19. A novel feature extracting method of QRS complex classification for mobile ECG signals

    NASA Astrophysics Data System (ADS)

    Zhu, Lingyun; Wang, Dong; Huang, Xianying; Wang, Yue

    2007-12-01

    The conventional classification parameters of the QRS complex suffer from the larger range of patient activity and the lower signal-to-noise ratio in a mobile cardiac telemonitoring system, and cannot meet the identification needs of the ECG signal. Based on an individual sinus-rhythm template built from mobile ECG signals in a time window, we present semblance indices to extract the classification features of the QRS complex precisely and expeditiously. The relative approximation r2 and the absolute error r3 are used as parameters estimating the semblance between a test QRS complex and the template. The evaluation parameters corresponding to QRS width and type are examined to choose the proper index. The results show that 99.99 percent of QRS complexes in sinus and supraventricular ECG signals can be distinguished through r2, but its average accuracy is only 46.16%. More than 97.84 percent of QRS complexes are identified using r3, but its accuracy for sinus and supraventricular beats is not better than that of r2. Using the width feature alone, only 42.65 percent of QRS complexes are classified correctly, but its accuracy for ventricular beats is superior to r2. To combine the respective strengths of the three parameters, a nonlinear weighted combination of QRS width, r2 and r3 is introduced, and the total classification accuracy reaches 99.48% by combining the indexes.
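
    The sketch below illustrates template-based semblance indices for a QRS complex. The paper does not give the exact definitions of r2 and r3 here, so the formulas used (a relative approximation error and a mean absolute error against the template) are plausible stand-ins, labeled as assumptions.

    ```python
    import numpy as np

    def semblance_indices(qrs, template):
        """Assumed stand-in definitions for the r2 and r3 semblance indices."""
        qrs = np.asarray(qrs, dtype=float)
        template = np.asarray(template, dtype=float)
        r2 = 1.0 - np.sum((qrs - template) ** 2) / np.sum(template ** 2)  # relative approximation
        r3 = np.mean(np.abs(qrs - template))                              # absolute error
        return r2, r3

    # Toy example: an idealized sinus-rhythm template and a noisy test beat.
    t = np.linspace(-1, 1, 80)
    template = np.exp(-(t / 0.15) ** 2)
    test_beat = template + 0.05 * np.random.default_rng(2).standard_normal(t.size)
    print(semblance_indices(test_beat, template))
    ```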

  20. A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan

    2013-02-01

    As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.

  1. Microscopic feature extraction from optical sections of contracting cardiac muscle cells recorded at high speed

    NASA Astrophysics Data System (ADS)

    Roos, Kenneth P.; Lake, David S.; Lubell, Bradford A.

    1991-05-01

    The rapid motion of microscopic features such as the cross-striations of contracting cardiac muscle cells are difficult to capture with conventional RS-170 video systems and image processing approaches. In this report, efforts to extract, enhance and analyze striation data from widefield optical sections of single contracting cells recorded with a charge-coupled device (CCD) video camera modified for high-speed RS-170 compatible operation are described. Each video field from the camera provides four 1/4 height images separated by 4 ms in time for a 240 Hz image acquisition rate. Data are continuously recorded on S-VHS video tape during each experiment. Selected image sequences are digitized field by field and stored in a computer system under automated software control. The four individual images in each video field are separated, geometrically corrected for time base error, and reassembled as a single sequence of images for interpretable visualization. The images are then processed with digital filters and gray scale expansion to preferentially enhance the cross-striations and minimize out of focus features. Regions within each image containing striations are identified and their positions determined and followed during the contraction cycle to obtain individual, regional and cellular sarcomere dynamics. This approach permits the critical evaluation of the magnitude, time course and uniformity of contractile function throughout the volume of a single cell with higher temporal and spatial resolutions than previously possible.

  2. High-speed imaging, acoustic features, and aeroacoustic computations of jet noise from Strombolian (and Vulcanian) explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Sesterhenn, J.; Scarlato, P.; Stampka, K.; Del Bello, E.; Pena Fernandez, J. J.; Gaudin, D.

    2014-05-01

    High-speed imaging of explosive eruptions at Stromboli (Italy), Fuego (Guatemala), and Yasur (Vanuatu) volcanoes allowed visualization of pressure waves from seconds-long explosions. From the explosion jets, waves radiate with variable geometry, timing, and apparent direction and velocity. Both the explosion jets and their wave fields are replicated well by numerical simulations of supersonic jets impulsively released from a pressurized vessel. The scaled acoustic signal from one explosion at Stromboli displays a frequency pattern with an excellent match to those from the simulated jets. We conclude that both the observed waves and the audible sound from the explosions are jet noise, i.e., the typical acoustic field radiating from high-velocity jets. Volcanic jet noise was previously quantified only in the infrasonic emissions from large, sub-Plinian to Plinian eruptions. Our combined approach allows us to define the spatial and temporal evolution of audible jet noise from supersonic jets in small-scale volcanic eruptions.

  3. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes on the ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  4. Extraction of time and frequency features from grip force rates during dexterous manipulation.

    PubMed

    Mojtahedi, Keivan; Fu, Qiushi; Santello, Marco

    2015-05-01

    The time course of grip force from object contact to onset of manipulation has been extensively studied to gain insight into the underlying control mechanisms. Of particular interest to the motor neuroscience and clinical communities is the phenomenon of bell-shaped grip force rate (GFR) that has been interpreted as indicative of feedforward force control. However, this feature has not been assessed quantitatively. Furthermore, the time course of grip force may contain additional features that could provide insight into sensorimotor control processes. In this study, we addressed these questions by validating and applying two computational approaches to extract features from GFR in humans: 1) fitting a Gaussian function to GFR and quantifying the goodness of the fit [root-mean-square error, (RMSE)]; and 2) continuous wavelet transform (CWT), where we assessed the correlation of the GFR signal with a Mexican Hat function. Experiment 1 consisted of a classic pseudorandomized presentation of object mass (light or heavy), where grip forces developed to lift a mass heavier than expected are known to exhibit corrective responses. For Experiment 2, we applied our two techniques to analyze grip force exerted for manipulating an inverted T-shaped object whose center of mass was changed across blocks of consecutive trials. For both experiments, subjects were asked to grasp the object at either predetermined or self-selected grasp locations ("constrained" and "unconstrained" task, respectively). Experiment 1 successfully validated the use of RMSE and CWT as they correctly distinguished trials with versus without force corrective responses. RMSE and CWT also revealed that grip force is characterized by more feedback-driven corrections when grasping at self-selected contact points. Future work will examine the application of our analytical approaches to a broader range of tasks, e.g., assessment of recovery of sensorimotor function following clinical intervention, interlimb
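
    Below is a sketch of the two grip-force-rate (GFR) features described in this record: (1) the RMSE of a Gaussian fit to the GFR trace and (2) the peak correlation of the GFR with a Mexican-hat (Ricker) wavelet. The wavelet width, the simplified Ricker formula and the synthetic trace are illustrative assumptions rather than the authors' exact settings.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(t, a, mu, sigma):
        return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

    def ricker(points, width):
        """Unnormalized Mexican-hat wavelet sampled on `points` samples."""
        t = np.linspace(-1, 1, points)
        x = t / (width / points)
        return (1 - x ** 2) * np.exp(-0.5 * x ** 2)

    def gfr_features(t, gfr, wavelet_width=40):
        p0 = [gfr.max(), t[np.argmax(gfr)], 0.1 * (t[-1] - t[0])]
        popt, _ = curve_fit(gaussian, t, gfr, p0=p0, maxfev=10000)
        rmse = np.sqrt(np.mean((gfr - gaussian(t, *popt)) ** 2))   # goodness of Gaussian fit
        w = ricker(len(gfr), wavelet_width)
        cwt_peak = np.max(np.correlate(gfr, w, mode="same"))       # Mexican-hat correlation
        return rmse, cwt_peak

    # Toy GFR trace: roughly bell-shaped, as expected for feedforward control.
    t = np.linspace(0, 1, 200)
    gfr = gaussian(t, 5.0, 0.4, 0.08) + 0.1 * np.random.default_rng(3).standard_normal(t.size)
    print(gfr_features(t, gfr))
    ```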

  5. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    SciTech Connect

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, while iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregular sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on the top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of details of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
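
    A minimal sketch of the MST-based agglomeration step follows: partitions are graph nodes, edge weights are spectral dissimilarities between adjacent partitions, and cutting MST edges above a threshold yields object-oriented segments. The toy adjacency structure, feature values and threshold are assumptions, not the paper's data.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

    # Mean spectral value of each polygonal partition (toy data).
    means = np.array([0.10, 0.12, 0.11, 0.80, 0.82, 0.85])
    # Adjacency between partitions (pairs of neighbouring polygons).
    adjacency = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]

    n = len(means)
    weights = np.zeros((n, n))
    for i, j in adjacency:
        weights[i, j] = weights[j, i] = abs(means[i] - means[j])   # spectral dissimilarity

    mst = minimum_spanning_tree(csr_matrix(weights)).toarray()
    mst[mst > 0.3] = 0                      # cut MST edges above a dissimilarity threshold

    # The remaining MST components are the agglomerated, object-oriented segments.
    n_segments, labels = connected_components(csr_matrix(mst), directed=False)
    print(n_segments, labels)
    ```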

  6. Functional source separation and hand cortical representation for a brain–computer interface feature extraction

    PubMed Central

    Tecchio, Franca; Porcaro, Camillo; Barbati, Giulia; Zappasodi, Filippo

    2007-01-01

    A brain–computer interface (BCI) can be defined as any system that can track the person's intent which is embedded in his/her brain activity and, from it alone, translate the intention into commands of a computer. Among the brain signal monitoring systems best suited for this challenging task, electroencephalography (EEG) and magnetoencephalography (MEG) are the most realistic, since both are non-invasive, EEG is portable, and MEG could provide more specific information that could later be exploited also through EEG signals. The first two BCI steps require setting up the appropriate experimental protocol while recording the brain signal and then extracting interesting features from the recorded cerebral activity. To provide information useful in these BCI stages, our aim is to provide an overview of a new procedure we recently developed, named functional source separation (FSS). As it derives from blind source separation algorithms, it exploits the most valuable information provided by the electrophysiological techniques, i.e. the waveform signal properties, remaining blind to the biophysical nature of the signal sources. FSS returns the single trial source activity, estimates the time course of a neuronal pool along different experimental states on the basis of a specific functional requirement in a specific time period, and uses simulated annealing as the optimization procedure, allowing the exploitation of non-differentiable functional constraints. Moreover, a minor section is included, devoted to information acquired by MEG in stroke patients, to guide BCI applications aiming at sustaining motor behaviour in these patients. Relevant BCI features – spatial and time-frequency properties – are in fact altered by a stroke in the regions devoted to hand control. Moreover, a method to investigate the relationship between sensory and motor hand cortical network activities is described, providing information useful to develop BCI feedback control systems. This

  7. A data-driven feature extraction framework for predicting the severity of condition of congestive heart failure patients.

    PubMed

    Sideris, Costas; Alshurafa, Nabil; Pourhomayoun, Mohammad; Shahmohammadi, Farhad; Samy, Lauren; Sarrafzadeh, Majid

    2015-01-01

    In this paper, we propose a novel methodology for utilizing disease diagnostic information to predict severity of condition for Congestive Heart Failure (CHF) patients. Our methodology relies on a novel, clustering-based, feature extraction framework using disease diagnostic information. To reduce the dimensionality we identify disease clusters using co-occurrence frequencies. We then utilize these clusters as features to predict patient severity of condition. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP), which contains 7 million discharge records with ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 patients. We compare our cluster-based feature set with another that incorporates the Charlson comorbidity score as a feature and demonstrate an accuracy improvement of up to 14% in the predictability of the severity of condition. PMID:26736808
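
    The sketch below illustrates the general idea of clustering-based feature extraction from diagnosis codes: codes that frequently co-occur across discharge records are grouped into clusters, and each patient is then represented by per-cluster code counts. The toy records, the distance transform and the choice of agglomerative clustering are assumptions (the scikit-learn call assumes a recent version supporting metric="precomputed").

    ```python
    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    codes = ["428.0", "401.9", "250.00", "414.01", "584.9", "585.9"]   # toy ICD-9 codes
    records = [["428.0", "401.9"], ["428.0", "414.01"], ["250.00", "401.9"],
               ["584.9", "585.9"], ["585.9", "428.0"], ["584.9", "250.00"]]

    idx = {c: i for i, c in enumerate(codes)}
    cooc = np.zeros((len(codes), len(codes)))
    for rec in records:                                  # pairwise co-occurrence counts
        for a in rec:
            for b in rec:
                if a != b:
                    cooc[idx[a], idx[b]] += 1

    # Turn co-occurrence into a distance and cluster the codes.
    dist = 1.0 / (1.0 + cooc)
    np.fill_diagonal(dist, 0.0)
    clusterer = AgglomerativeClustering(n_clusters=3, metric="precomputed", linkage="average")
    code_cluster = clusterer.fit_predict(dist)

    def patient_features(rec):
        """Per-cluster counts of the codes present in one discharge record."""
        f = np.zeros(3)
        for c in rec:
            f[code_cluster[idx[c]]] += 1
        return f

    print([patient_features(r) for r in records])
    ```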

  8. Built-up Areas Extraction in High Resolution SAR Imagery based on the method of Multiple Feature Weighted Fusion

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.

    2015-06-01

    Synthetic aperture radar is being applied more and more widely in remote sensing because of its all-time and all-weather operation, and feature extraction research in high-resolution SAR images has become a topic of great interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First, statistical texture features and structural features are extracted, respectively, by the classical gray level co-occurrence matrix method and the variogram function method, with direction information considered in this process. Next, feature weights are calculated according to the Bhattacharyya distance. Then all features are fused by weighting. Finally, the fused image is classified with the K-means classification method, and the built-up areas are extracted after a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images; at the same time, two groups of experiments based on the statistical-texture method and the structural-texture method were carried out respectively. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple experimental area, the detection rate is more than 90%, and in the relatively complex experimental area, the detection rate is also higher than that of the other two methods. In the study area, the results show that this method can effectively and accurately extract built-up areas from high-resolution airborne SAR imagery.
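
    A sketch of the statistical-texture part of such a pipeline is shown below: gray level co-occurrence matrix (GLCM) features are computed block-wise and the blocks are then clustered with K-means. The variogram (structural) features and the Bhattacharyya-distance weighted fusion are not reproduced; the block size, GLCM properties and synthetic image are assumptions.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.cluster import KMeans

    def glcm_features(block, levels=32):
        """Contrast, homogeneity and energy of a quantized image block."""
        q = (block * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return np.hstack([graycoprops(glcm, p).ravel()
                          for p in ("contrast", "homogeneity", "energy")])

    rng = np.random.default_rng(4)
    sar = rng.random((128, 128))                       # stand-in for a SAR amplitude image
    B = 16
    feats = []
    for i in range(0, 128, B):
        for j in range(0, 128, B):
            feats.append(glcm_features(sar[i:i+B, j:j+B]))

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(feats))
    print(labels.reshape(8, 8))                        # block map: built-up vs. non-built-up
    ```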

  9. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    NASA Astrophysics Data System (ADS)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.
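
    The fitness evaluation described in this record can be sketched as follows: a candidate feature extractor is scored by the cross-validation accuracy of an SVM trained on the features it produces. The toy "extractor" (block-mean features) and the synthetic image chips are assumptions, and the GP machinery that actually evolves extractors is not shown.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def block_mean_features(chip, block=8):
        """A stand-in candidate extractor: mean intensity of each block."""
        h, w = chip.shape
        return np.array([chip[i:i+block, j:j+block].mean()
                         for i in range(0, h, block) for j in range(0, w, block)])

    def fitness(extractor, chips, labels, folds=5):
        """Cross-fold validation accuracy of an SVM on the extracted features."""
        X = np.array([extractor(c) for c in chips])
        return cross_val_score(SVC(kernel="rbf"), X, labels, cv=folds).mean()

    rng = np.random.default_rng(13)
    chips = rng.random((60, 32, 32))                    # toy pre-segmented FLIR image chips
    labels = rng.integers(0, 2, size=60)                # buried object present / absent
    print(round(fitness(block_mean_features, chips, labels), 3))
    ```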

  10. Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

    PubMed

    Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan

    2012-02-01

    Oral cancer (OC) is the sixth most common cancer in the world. In India it is the most common malignant neoplasm. Histopathological images have widely been used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancer lesions. However, this technique is limited by subjective interpretations and less accurate diagnosis. The objective of this work is to improve the classification accuracy based on textural features in the development of a computer assisted screening of OSF. The approach introduced here is to grade the histopathological tissue sections into normal, OSF without Dysplasia (OSFWD) and OSF with Dysplasia (OSFD), which would help the oral onco-pathologists to screen the subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as matrix quantized as integers from 0 to 255 for each fundamental color (Red, Green, Blue), resulting in a M×N×3 matrix of integers. Depending on either normal or OSF condition, the image has various granular structures which are self similar patterns at different scales termed "texture". We have extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno Fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), Radial Basis Probabilistic Neural Network (RBPNN) to select the best classifier. Our results show that combination of texture and HOS features coupled with Fuzzy classifier resulted in 95.7% accuracy, sensitivity and specificity of 94.5% and 98.8% respectively. Finally, we have proposed a novel integrated index called Oral Malignancy Index (OMI) using the HOS, LBP, LTE features, to diagnose benign or malignant tissues using just one number. We hope that this OMI can
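
    Below is a minimal sketch of one of the texture descriptors named above (LBP) feeding a simple classifier. Higher Order Spectra and Laws' texture energy are not reproduced, a k-NN classifier (one of the classifiers listed) stands in for the fuzzy classifier, and the images and labels are synthetic placeholders.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.neighbors import KNeighborsClassifier

    P, R = 8, 1                                        # LBP neighbors and radius

    def lbp_histogram(gray_img):
        """Normalized histogram of uniform LBP codes for one grayscale image."""
        img8 = (gray_img * 255).astype(np.uint8)
        lbp = local_binary_pattern(img8, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
        return hist

    rng = np.random.default_rng(5)
    images = rng.random((30, 128, 128))                # stand-ins for H&E tissue images
    labels = rng.integers(0, 3, size=30)               # toy labels: normal / OSFWD / OSFD

    X = np.array([lbp_histogram(im) for im in images])
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
    print(clf.predict(X[:5]))
    ```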

  11. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2011-12-01

    Automatic brain tissue segmentation is a crucial task in the diagnosis and treatment of medical images. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses the modified intraframe coding yielded by H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  12. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2012-01-01

    Automatic brain tissue segmentation is a crucial task in the diagnosis and treatment of medical images. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses the modified intraframe coding yielded by H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  13. Approximation-based common principal component for feature extraction in multi-class brain-computer interfaces.

    PubMed

    Hoang, Tuan; Tran, Dat; Huang, Xu

    2013-01-01

    Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes based on subspace union and covariance matrix similarity do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a used in BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
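
    For context, the classical two-class CSP baseline that ACPC extends can be sketched as below: the spatial filters are generalized eigenvectors of the two class covariance matrices, and the log-variance of the spatially filtered trials forms the feature vector. The trial data are synthetic and ACPC itself is not shown.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def _mean_cov(trials):
        """Average trace-normalized spatial covariance over trials (n_trials, n_ch, n_samp)."""
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)

    def csp_filters(trials_a, trials_b, n_filters=4):
        Ca, Cb = _mean_cov(trials_a), _mean_cov(trials_b)
        # Generalized eigenvalue problem  Ca w = lambda (Ca + Cb) w.
        vals, vecs = eigh(Ca, Ca + Cb)
        order = np.argsort(vals)
        pick = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
        return vecs[:, pick].T

    def csp_features(W, trial):
        z = W @ trial
        var = np.var(z, axis=1)
        return np.log(var / var.sum())

    rng = np.random.default_rng(6)
    trials_a = rng.standard_normal((20, 8, 250))        # class A: 8-channel EEG epochs
    trials_b = 1.5 * rng.standard_normal((20, 8, 250))  # class B with different variance
    W = csp_filters(trials_a, trials_b)
    print(csp_features(W, trials_a[0]))
    ```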

  14. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  15. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  16. Sparsity-enabled signal decomposition using tunable Q-factor wavelet transform for fault feature extraction of gearbox

    NASA Astrophysics Data System (ADS)

    Cai, Gaigai; Chen, Xuefeng; He, Zhengjia

    2013-12-01

    Localized faults in gearboxes tend to result in periodic shocks and thus arouse periodic responses in vibration signals. Feature extraction has always been a key problem for localized fault diagnosis. This paper proposes a new fault feature extraction technique for gearboxes by using sparsity-enabled signal decomposition method. The sparsity-enabled signal decomposition method separates signals based on the oscillatory behavior of the signal rather than the frequency or scale. Thus, the fault feature can be nonlinearly extracted from vibration signals. During the implementation of the proposed method, tunable Q-factor wavelet transform, for which the Q-factor can be easily specified, is adopted to represent vibration signals in a sparse way, and then morphological component analysis (MCA) is employed to estimate and separate the distinct components. The corresponding optimization problem of MCA is solved by the split augmented Lagrangian shrinkage algorithm (SALSA). With the proposed method, vibration signals of the faulty gearbox can be nonlinearly decomposed into high-oscillatory component and low-oscillatory component which is the fault feature of gearboxes. To evaluate the performance of the proposed method, this paper investigates the effect of two parameters pertinent to MCA and SALSA: the Lagrange multiplier and the penalty parameter. The effectiveness of the proposed method is verified by both the simulated and practical gearbox vibration signals. Results show the proposed method outperforms empirical mode decomposition and spectral kurtosis in extracting fault features of gearboxes.

  17. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    PubMed Central

    Topouzelis, Konstantinos N.

    2008-01-01

    This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern, since they damage fragile marine and coastal ecosystems. The amount of pollutant discharged and the associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills in radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques to detect dark formations in SAR images, the features extracted from the detected dark formations, and the most commonly used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  18. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
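
    A sketch of the CDP feature extraction idea follows: smooth the potential by convolution, locate the main local maxima, and assign each one a coefficient built from its amplitude and its distance to the largest maximum, before feeding the coefficient vectors to gradient boosting trees. The coefficient formula, kernel width and synthetic signals are assumptions, not the authors' exact definitions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks
    from sklearn.ensemble import GradientBoostingClassifier

    def cdp_features(signal, kernel_width=9, n_coeffs=5):
        kernel = np.ones(kernel_width) / kernel_width
        smooth = np.convolve(signal, kernel, mode="same")          # denoise by convolution
        peaks, props = find_peaks(smooth, height=0)
        if peaks.size == 0:
            return np.zeros(n_coeffs)
        heights = props["peak_heights"]
        main = peaks[np.argmax(heights)]
        # Assumed coefficient: amplitude scaled down with distance to the main maximum.
        coeffs = heights / (1.0 + np.abs(peaks - main) / len(signal))
        coeffs = np.sort(coeffs)[::-1][:n_coeffs]
        return np.pad(coeffs, (0, n_coeffs - coeffs.size))

    rng = np.random.default_rng(7)
    t = np.linspace(0, 1, 400)

    def make_cdp(width):
        return np.exp(-((t - 0.5) / width) ** 2) + 0.05 * rng.standard_normal(t.size)

    signals = [make_cdp(0.05) for _ in range(20)] + [make_cdp(0.15) for _ in range(20)]
    labels = np.array([0] * 20 + [1] * 20)               # two hypothetical CDP classes

    X = np.array([cdp_features(s) for s in signals])
    clf = GradientBoostingClassifier(random_state=0).fit(X, labels)
    print(clf.score(X, labels))
    ```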

  19. Biometric analysis of the palm vein distribution by means of two different feature extraction techniques

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical identification methods. Furthermore, these patterns can be used for venipuncture in the health field, to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate the skin and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our analysis method is composed of several steps, the first of which is the enhancement of the acquired images using spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of the vein patterns. The overall process is focused on recognizing people through images of their palm-dorsal vein distributions obtained under near-infrared light. This work compares two different feature extraction techniques: moments and veincode. The classification task is achieved using Artificial Neural Networks. Two databases are used for the analysis of the performance of the algorithms: the first is owned by the Hong Kong Polytechnic University and the second is our own database.

  20. Extracting Road Features from Aerial Videos of Small Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Rajamohan, D.; Rajan, K. S.

    2013-09-01

    With major aerospace companies showing interest in certifying UAV systems for civilian airspace, their use in commercial remote sensing applications like traffic monitoring, map refinement, and agricultural data collection is on the rise. But ambitious requirements like real-time geo-referencing of data, support for multiple sensor angles of view, smaller UAV size, and cheaper investment cost have led to challenges in platform stability, sensor noise reduction, and increased onboard processing. Especially in small UAVs, the geo-referencing of the collected data is only as good as the quality of their localization sensors. This drives a need for developing methods that pick up spatial features from the captured video/images and aid in geo-referencing. This paper presents one such method to identify road segments and intersections based on traffic flow, and its accuracy compares well with manual observation. Two test video datasets, one each from moving and stationary platforms, were used. The results obtained show a promising average percentage difference of 7.01% and 2.48% for the road segment extraction process using the moving and stationary platforms respectively. For the intersection identification process, the moving platform shows an accuracy of 75%, whereas the stationary platform data reaches an accuracy of 100%.

  1. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 2: frequency domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    Knowledge of the structure of a flow is significant for the proper conduct of a number of industrial processes. In this case, a description of two-phase flow regimes is possible through time-series analysis, e.g. in the frequency domain. In this article, classical spectral analysis based on the Fourier Transform (FT) and the Short-Time Fourier Transform (STFT) was applied to the analysis of signals obtained for water-air flow using gamma-ray absorption. The presented method is illustrated using data collected in experiments carried out on a laboratory hydraulic installation with a horizontal pipe of 4.5 m length and 30 mm inner diameter, equipped with two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals. Stochastic signals obtained from the detectors for plug, bubble, and transitional plug-bubble flows were considered in this work. The recorded raw signals were analyzed and several features in the frequency domain were extracted using the auto-spectral density function (ADF), the cross-spectral density function (CSDF), and the STFT spectrogram. As a result of a detailed analysis, it was found that the most promising features for recognizing the flow structure are: the maximum value of the CSDF magnitude, the sum of the CSDF magnitudes in a selected frequency range, and the maximum value of the sum of selected amplitudes of the STFT spectrogram.
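
    The frequency-domain features named above can be sketched as follows, using two simulated detector signals: the auto-spectral density (ADF), the cross-spectral density (CSDF) magnitude, and an STFT spectrogram. The simulated signals, sampling rate and frequency band are assumptions, not the experimental data.

    ```python
    import numpy as np
    from scipy.signal import welch, csd, stft

    fs = 200.0
    t = np.arange(0, 30, 1 / fs)
    rng = np.random.default_rng(8)
    delay = int(0.2 * fs)                                # assumed transit time between probes
    upstream = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)
    downstream = np.roll(upstream, delay) + 0.5 * rng.standard_normal(t.size)

    f_a, adf = welch(upstream, fs=fs, nperseg=512)             # auto-spectral density
    f_c, csdf = csd(upstream, downstream, fs=fs, nperseg=512)  # cross-spectral density
    f_s, t_s, Z = stft(upstream, fs=fs, nperseg=256)           # STFT spectrogram

    band = (f_c >= 0.5) & (f_c <= 10.0)
    features = {
        "csdf_max": np.max(np.abs(csdf)),                      # max CSDF magnitude
        "csdf_band_sum": np.sum(np.abs(csdf[band])),           # sum of CSDF magnitudes in band
        "stft_max_col_sum": np.max(np.sum(np.abs(Z), axis=0)), # max column sum of spectrogram
    }
    print(features)
    ```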

  2. Extraction of temporally correlated features from dynamic vision sensors with spike-timing-dependent plasticity.

    PubMed

    Bichler, Olivier; Querlioz, Damien; Thorpe, Simon J; Bourgoin, Jean-Philippe; Gamrat, Christian

    2012-08-01

    A biologically inspired approach to learning temporally correlated patterns from a spiking silicon retina is presented. Spikes are generated from the retina in response to relative changes in illumination at the pixel level and transmitted to a feed-forward spiking neural network. Neurons become sensitive to patterns of pixels with correlated activation times, in a fully unsupervised scheme. This is achieved using a special form of Spike-Timing-Dependent Plasticity which depresses synapses that did not recently contribute to the post-synaptic spike activation, regardless of their activation time. Competitive learning is implemented with lateral inhibition. When tested with real-life data, the system is able to extract complex and overlapping temporally correlated features such as car trajectories on a freeway, after only 10 min of traffic learning. Complete trajectories can be learned with a 98% detection rate using a second layer, still with unsupervised learning, and the system may be used as a car counter. The proposed neural network is extremely robust to noise and it can tolerate a high degree of synaptic and neuronal variability with little impact on performance. Such results show that a simple biologically inspired unsupervised learning scheme is capable of generating selectivity to complex meaningful events on the basis of relatively little sensory experience.

  3. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem aspect. PMID:24451470
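
    The fine-localization feature step can be sketched as below: Kernel PCA reduces received-signal-strength (RSS) fingerprints to a low-dimensional feature space before matching. The synthetic RSS model, kernel, dimensionality and the k-NN matcher are assumptions; the SDC coarse clustering itself is not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(9)
    n_refs, n_aps = 200, 20
    positions = rng.uniform(0, 50, size=(n_refs, 2))            # reference point coordinates (m)
    ap_positions = rng.uniform(0, 50, size=(n_aps, 2))
    # Toy path-loss model for RSS fingerprints (dBm), plus measurement noise.
    rss = -40 - 2.5 * np.linalg.norm(positions[:, None, :] - ap_positions[None, :, :], axis=2)
    rss += rng.normal(0, 2, rss.shape)

    kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3)
    features = kpca.fit_transform(rss)

    # k-NN regression in the low-dimensional feature space stands in for fine matching.
    knn = KNeighborsRegressor(n_neighbors=3).fit(features, positions)
    query = kpca.transform(rss[:1] + rng.normal(0, 2, (1, n_aps)))
    print(knn.predict(query), positions[0])
    ```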

  4. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  5. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  6. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    PubMed

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

    Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data is limited to actuator positions (one per two link finger) and force sensors values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads. PMID:26829804
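
    A minimal sketch of the learning-based half of this approach is given below: a random forest classifies objects from a single-grasp feature vector built from actuator positions and barometric force-sensor values. The sensor counts follow the description above, but the data are synthetic and the parametric property-estimation scheme is not reproduced.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    N_FINGERS, SENSORS_PER_FINGER = 2, 8                 # one actuator and 8 sensors per finger

    def grasp_vector(actuator_pos, force_readings):
        """Concatenate actuator positions with per-finger force-sensor readings."""
        return np.concatenate([actuator_pos, force_readings.ravel()])

    rng = np.random.default_rng(14)
    n_grasps = 120
    X = np.array([grasp_vector(rng.random(N_FINGERS),
                               rng.random((N_FINGERS, SENSORS_PER_FINGER)))
                  for _ in range(n_grasps)])
    y = rng.integers(0, 4, size=n_grasps)                # four hypothetical object classes

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))
    ```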

  7. Modeling ground vehicle acoustic signatures for analysis and synthesis

    SciTech Connect

    Haschke, G.; Stanfield, R.

    1995-07-01

    Security and weapon systems use acoustic sensor signals to classify and identify moving ground vehicles. Developing robust signal processing algorithms for this is expensive, particularly in the presence of acoustic clutter or countermeasures. This paper proposes a parametric ground vehicle acoustic signature model to aid the system designer in understanding which signature features are important, developing corresponding feature extraction algorithms, and generating low-cost, high-fidelity synthetic signatures for testing. The authors have proposed computer-generated acoustic signatures of armored, tracked ground vehicles to deceive acoustic-sensored smart munitions. They have developed quantitative measures of how accurately a synthetic acoustic signature matches those produced by actual vehicles. This paper describes the parameters of the model used to generate these synthetic signatures and suggests methods for extracting these parameters from signatures of valid vehicle encounters. The model incorporates wide-bandwidth and narrow-bandwidth components that are modulated in a pseudo-random fashion to mimic the time dynamics of valid vehicle signatures. Narrow-bandwidth feature extraction techniques estimate frequency, amplitude and phase information contained in a single set of narrow frequency-band harmonics. Wide-bandwidth feature extraction techniques estimate parameters of a correlated-noise-floor model. Finally, the authors propose a method of modeling the time dynamics of the harmonic amplitudes as a means of adding necessary time-varying features to the narrow-bandwidth signal components. The authors present results of applying this modeling technique to acoustic signatures recorded during encounters with one armored, tracked vehicle. Similar modeling techniques can be applied to security systems.
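
    The signature model described here can be sketched as a set of narrow-band harmonics with slowly varying, pseudo-randomly modulated amplitudes plus a correlated (low-pass filtered) noise floor. All numeric parameters below (fundamental frequency, harmonic count, filter cutoff) are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    fs = 4096.0
    t = np.arange(0, 5.0, 1 / fs)
    rng = np.random.default_rng(10)

    f0 = 60.0                                           # assumed engine fundamental (Hz)
    signature = np.zeros_like(t)
    for k in range(1, 9):                               # narrow-band harmonic components
        # Pseudo-random slow amplitude modulation to mimic the time dynamics.
        am = 1.0 + 0.3 * np.interp(t, np.linspace(0, t[-1], 20), rng.standard_normal(20))
        phase = rng.uniform(0, 2 * np.pi)
        signature += (am / k) * np.sin(2 * np.pi * k * f0 * t + phase)

    # Correlated noise floor: white noise shaped by a low-pass filter.
    b, a = butter(4, 400.0 / (fs / 2))
    signature += 0.5 * lfilter(b, a, rng.standard_normal(t.size))

    print(signature[:5])
    ```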

  8. Computer extracted texture features on T2w MRI to predict biochemical recurrence following radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Ginsburg, Shoshana B.; Rusu, Mirabela; Kurhanewicz, John; Madabhushi, Anant

    2014-03-01

    In this study we explore the ability of a novel machine learning approach, in conjunction with computer-extracted features describing prostate cancer morphology on pre-treatment MRI, to predict whether a patient will develop biochemical recurrence within ten years of radiation therapy. Biochemical recurrence, which is characterized by a rise in serum prostate-specific antigen (PSA) of at least 2 ng/mL above the nadir PSA, is associated with increased risk of metastasis and prostate cancer-related mortality. Currently, risk of biochemical recurrence is predicted by the Kattan nomogram, which incorporates several clinical factors to predict the probability of recurrence-free survival following radiation therapy (but has limited prediction accuracy). Semantic attributes on T2w MRI, such as the presence of extracapsular extension and seminal vesicle invasion and surrogate measurements of tumor size, have also been shown to be predictive of biochemical recurrence risk. While the correlation between biochemical recurrence and factors like tumor stage, Gleason grade, and extracapsular spread is well-documented, it is less clear how to predict biochemical recurrence in the absence of extracapsular spread and for small tumors fully contained in the capsule. Computer-extracted texture features, which quantitatively describe tumor micro-architecture and morphology on MRI, have been shown to provide clues about a tumor's aggressiveness. However, while computer-extracted features have been employed for predicting cancer presence and grade, they have not been evaluated in the context of predicting risk of biochemical recurrence. This work seeks to evaluate the role of computer-extracted texture features in predicting risk of biochemical recurrence on a cohort of sixteen patients who underwent pre-treatment 1.5 Tesla (T) T2w MRI. We extract a combination of first-order statistical, gradient, co-occurrence, and Gabor wavelet features from T2w MRI. To identify which of these

  9. Robust Sensing of Approaching Vehicles Relying on Acoustic Cues

    PubMed Central

    Mizumachi, Mitsunori; Kaminuma, Atsunobu; Ono, Nobutaka; Ando, Shigeru

    2014-01-01

    The latest developments in automobile design have allowed them to be equipped with various sensing devices. Multiple sensors such as cameras and radar systems can be simultaneously used for active safety systems in order to overcome blind spots of individual sensors. This paper proposes a novel sensing technique for catching up and tracking an approaching vehicle relying on an acoustic cue. First, it is necessary to extract a robust spatial feature from noisy acoustical observations. In this paper, the spatio-temporal gradient method is employed for the feature extraction. Then, the spatial feature is filtered out through sequential state estimation. A particle filter is employed to cope with a highly non-linear problem. Feasibility of the proposed method has been confirmed with real acoustical observations, which are obtained by microphones outside a cruising vehicle. PMID:24887038

  10. Acoustic biosensors

    PubMed Central

    Fogel, Ronen; Seshia, Ashwin A.

    2016-01-01

    Resonant and acoustic wave devices have been researched for several decades for application in the gravimetric sensing of a variety of biological and chemical analytes. These devices operate by coupling the measurand (e.g. analyte adsorption) as a modulation in the physical properties of the acoustic wave (e.g. resonant frequency, acoustic velocity, dissipation) that can then be correlated with the amount of adsorbed analyte. These devices can also be miniaturized with advantages in terms of cost, size and scalability, as well as potential additional features including integration with microfluidics and electronics, scaled sensitivities associated with smaller dimensions and higher operational frequencies, the ability to multiplex detection across arrays of hundreds of devices embedded in a single chip, increased throughput and the ability to interrogate a wider range of modes including within the same device. Additionally, device fabrication is often compatible with semiconductor volume batch manufacturing techniques enabling cost scalability and a high degree of precision and reproducibility in the manufacturing process. Integration with microfluidics handling also enables suitable sample pre-processing/separation/purification/amplification steps that could improve selectivity and the overall signal-to-noise ratio. Three device types are reviewed here: (i) bulk acoustic wave sensors, (ii) surface acoustic wave sensors, and (iii) micro/nano-electromechanical system (MEMS/NEMS) sensors. PMID:27365040

  11. Acoustic biosensors.

    PubMed

    Fogel, Ronen; Limson, Janice; Seshia, Ashwin A

    2016-06-30

    Resonant and acoustic wave devices have been researched for several decades for application in the gravimetric sensing of a variety of biological and chemical analytes. These devices operate by coupling the measurand (e.g. analyte adsorption) as a modulation in the physical properties of the acoustic wave (e.g. resonant frequency, acoustic velocity, dissipation) that can then be correlated with the amount of adsorbed analyte. These devices can also be miniaturized with advantages in terms of cost, size and scalability, as well as potential additional features including integration with microfluidics and electronics, scaled sensitivities associated with smaller dimensions and higher operational frequencies, the ability to multiplex detection across arrays of hundreds of devices embedded in a single chip, increased throughput and the ability to interrogate a wider range of modes including within the same device. Additionally, device fabrication is often compatible with semiconductor volume batch manufacturing techniques enabling cost scalability and a high degree of precision and reproducibility in the manufacturing process. Integration with microfluidics handling also enables suitable sample pre-processing/separation/purification/amplification steps that could improve selectivity and the overall signal-to-noise ratio. Three device types are reviewed here: (i) bulk acoustic wave sensors, (ii) surface acoustic wave sensors, and (iii) micro/nano-electromechanical system (MEMS/NEMS) sensors. PMID:27365040

  12. Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fernando

    2003-01-01

    In this paper, we use Matlab to model the health monitoring of a system through the information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a change due to a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change, then a pressure sensor associated with the process in the tank should also experience a step change. The step up and step down as well as the sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system is allowed to run for a period equal to three time constants of the main process before changes occur. Then each point of the signal is selected together with a trailing window of previously collected data. Two trailing lengths of data are selected, one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC is removed from each set of data, the data are passed through a window, and spectra are calculated for each set. In order to extract features, the signal power, peak, and spectrum are plotted versus time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of Gaussian-distributed noisy cases.
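
    As a rough illustration of the procedure described above (trailing window, DC removal, windowing, spectrum computation), the sketch below computes per-sample signal power and dominant-frequency features from a sliding trailing window; the window length, sampling rate, and test signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def windowed_features(signal, window_len, fs=1.0):
    """For each sample, take the trailing window of length `window_len`,
    remove the DC component, apply a Hann window, and compute the FFT.
    Returns per-sample signal power and dominant-frequency features."""
    power, peak_freq = [], []
    win = np.hanning(window_len)
    freqs = np.fft.rfftfreq(window_len, d=1.0 / fs)
    for i in range(window_len, len(signal) + 1):
        seg = signal[i - window_len:i]
        seg = (seg - seg.mean()) * win            # remove DC, then window
        spec = np.abs(np.fft.rfft(seg)) ** 2      # power spectrum
        power.append(spec.sum())
        peak_freq.append(freqs[int(np.argmax(spec))])
    return np.array(power), np.array(peak_freq)

# Example: an exponential step-up (RC charging) with a short disturbance.
t = np.arange(0, 30, 0.01)
x = 1 - np.exp(-t / 3.0)
x[1500:1520] += 0.3 * np.exp(-np.arange(20) / 5.0)   # sensor spike
p, f = windowed_features(x, window_len=600, fs=100.0)
```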

  13. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
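
    A minimal sketch of decimated-DWT feature extraction in the spirit of the approach described above, assuming the PyWavelets package (pywt) is available; the wavelet, decomposition level, and number of retained coefficients are illustrative choices, not those used in the report.

```python
import numpy as np
import pywt

def dwt_features(erp, wavelet="db4", level=5, n_keep=32):
    """Decimated DWT of a single-channel ERP epoch; keep the n_keep
    highest-power coefficients (by absolute value) as regression features."""
    coeffs = pywt.wavedec(erp, wavelet, level=level)
    flat = np.concatenate(coeffs)
    idx = np.argsort(np.abs(flat))[::-1][:n_keep]   # highest-power coefficients
    features = np.zeros(n_keep)
    features[:len(idx)] = flat[idx]
    return features

# Example: one simulated 1-second ERP epoch sampled at 256 Hz.
t = np.linspace(0, 1, 256, endpoint=False)
erp = 5e-6 * np.exp(-((t - 0.3) / 0.05) ** 2) + 1e-6 * np.random.randn(256)
print(dwt_features(erp).shape)
```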

  14. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.

  15. Effect of acoustic frequency and power density on the aqueous ultrasonic-assisted extraction of grape pomace (Vitis vinifera L.) - a response surface approach.

    PubMed

    González-Centeno, María Reyes; Knoerzer, Kai; Sabarez, Henry; Simal, Susana; Rosselló, Carmen; Femenia, Antoni

    2014-11-01

    Aqueous ultrasound-assisted extraction (UAE) of grape pomace was investigated by Response Surface Methodology (RSM) to evaluate the effect of acoustic frequency (40, 80, 120 kHz), ultrasonic power density (50, 100, 150 W/L) and extraction time (5, 15, 25 min) on total phenolics, total flavonols and antioxidant capacity. All the process variables showed a significant effect on the aqueous UAE of grape pomace (p<0.05). The Box-Behnken Design (BBD) generated satisfactory mathematical models which accurately explain the behavior of the system, allowing prediction of both the extraction yield of phenolic and flavonol compounds and the antioxidant capacity of the grape pomace extracts. The optimal UAE conditions for all response factors were a frequency of 40 kHz, a power density of 150 W/L and 25 min of extraction time. Under these conditions, the aqueous UAE would achieve a maximum of 32.31 mg GA/100 g fw for total phenolics and 2.04 mg quercetin/100 g fw for total flavonols. Regarding the antioxidant capacity, the maximum predicted values were 53.47 and 43.66 mg Trolox/100 g fw for the CUPRAC and FRAP assays, respectively. Compared with organic-solvent UAE values reported in the literature, the present research obtained from 12% to 38% of the total phenolic yields, but using only water as the extraction solvent and applying lower temperatures and shorter extraction times. To the best of the authors' knowledge, no studies specifically addressing the optimization of both acoustic frequency and power density during aqueous UAE of plant materials have been previously published.
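
    For readers unfamiliar with RSM, the sketch below fits a full second-order (quadratic) response-surface model to three coded factors by ordinary least squares, which is the kind of model a Box-Behnken design supports; the design matrix and response values are synthetic, not data from the study.

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit a full second-order response-surface model
    y = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj) by least squares.
    X is (n_runs, 3): coded levels of frequency, power density, time."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    design = np.column_stack([
        np.ones(len(y)), x1, x2, x3,
        x1**2, x2**2, x3**2,
        x1*x2, x1*x3, x2*x3,
    ])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta

# Illustrative coded Box-Behnken-style runs (-1, 0, +1) and synthetic responses.
X = np.array([[-1,-1,0],[1,-1,0],[-1,1,0],[1,1,0],
              [-1,0,-1],[1,0,-1],[-1,0,1],[1,0,1],
              [0,-1,-1],[0,1,-1],[0,-1,1],[0,1,1],
              [0,0,0],[0,0,0],[0,0,0]], dtype=float)
y = 20 - 3*X[:,0] + 4*X[:,1] + 2*X[:,2] - 1.5*X[:,0]**2
y += np.random.default_rng(2).normal(0, 0.5, len(y))
print(fit_quadratic_rsm(X, y).round(2))
```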

  16. Multiple-input multiple-output (MIMO) analog-to-feature converter chipsets for sub-wavelength acoustic source localization and bearing estimation

    NASA Astrophysics Data System (ADS)

    Chakrabartty, Shantanu

    2010-04-01

    Localization of acoustic sources using miniature microphone arrays poses a significant challenge due to fundamental limitations imposed by the physics of sound propagation. With sub-wavelength distances between the microphones, resolving acute localization cues becomes difficult due to precision artifacts. In this work, we present the design of a miniature microphone-array sensor based on a patented Multiple-input Multiple-output (MIMO) analog-to-feature converter (AFC) chip-set which overcomes the limitations due to precision artifacts. Measured results from fabricated prototypes demonstrate a bearing range of 0 degrees to 90 degrees with a resolution less than 2 degrees. The power dissipation of the MIMO-ADC chip-set for this task was measured to be less than 75 microwatts, making it ideal for portable, battery powered sniper and gunshot detection applications.

  17. Thirty years of underwater acoustic signal processing in China

    NASA Astrophysics Data System (ADS)

    Li, Qihu

    2012-11-01

    Advances in technology and theory in 30 years of underwater acoustic signal processing and its applications in China are presented in this paper. The topics include research work in the field of underwater acoustic signal modeling, acoustic field matching, ocean waveguide and internal wave, the extraction and processing technique for acoustic vector signal information, the space/time correlation characteristics of low frequency acoustic channels, the invariant features of underwater target radiated noise, the transmission technology of underwater voice/image data and its anti-interference technique. Some frontier technologies in sonar design are also discussed, including large aperture towed line array sonar, high resolution synthetic aperture sonar, deep sea siren and deep sea manned subsea vehicle, diver detection sonar and the demonstration project of the national ocean monitoring system in China, etc.

  18. The sidebar template and extraction of invariant feature of calligraphy and painting seal

    NASA Astrophysics Data System (ADS)

    Hu, Zheng-kun; Bao, Hong; Lou, Hai-tao

    2009-07-01

    The paper proposes a novel seal extraction method based on template matching of the characteristics of the external contour of the seal image in Chinese painting and calligraphy. By analyzing the characteristics of the seal edge, we obtain prior knowledge of the seal edge and set up an outline template of the seals; we then design a template matching method that computes the distance difference between the outline template and the seal image edge, which can effectively extract the seal image from Chinese painting and calligraphy. Experimental results show that this method achieves a higher extraction rate than traditional image extraction methods.

  19. DCT domain feature extraction scheme based on motor unit action potential of EMG signal for neuromuscular disease classification.

    PubMed

    Doulah, Abul Barkat Mollah Sayeed Ud; Fattah, Shaikh Anowarul; Zhu, Wei-Ping; Ahmad, M Omair

    2014-01-01

    A feature extraction scheme based on discrete cosine transform (DCT) of electromyography (EMG) signals is proposed for the classification of normal event and a neuromuscular disease, namely the amyotrophic lateral sclerosis. Instead of employing DCT directly on EMG data, it is employed on the motor unit action potentials (MUAPs) extracted from the EMG signal via a template matching-based decomposition technique. Unlike conventional MUAP-based methods, only one MUAP with maximum dynamic range is selected for DCT-based feature extraction. Magnitude and frequency values of a few high-energy DCT coefficients corresponding to the selected MUAP are used as the desired feature which not only reduces computational burden, but also offers better feature quality with high within-class compactness and between-class separation. For the purpose of classification, the K-nearest neighbourhood classifier is employed. Extensive analysis is performed on clinical EMG database and it is found that the proposed method provides a very satisfactory performance in terms of specificity, sensitivity and overall classification accuracy.
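
    A minimal sketch of the DCT-based feature idea described above: take the DCT of a single MUAP template and keep the magnitudes and positions of a few highest-energy coefficients. The MUAP waveform and the number of retained coefficients are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def dct_muap_features(muap, n_coeffs=6):
    """DCT-based features of a single MUAP template: magnitudes and indices
    (frequency positions) of the few highest-energy DCT coefficients."""
    c = dct(muap, type=2, norm="ortho")
    order = np.argsort(np.abs(c))[::-1][:n_coeffs]   # highest-energy coefficients
    return np.concatenate([np.abs(c[order]), order.astype(float)])

# Example: a simulated biphasic MUAP waveform.
t = np.linspace(-1, 1, 128)
muap = np.exp(-(t / 0.15) ** 2) * np.sin(2 * np.pi * 3 * t)
print(dct_muap_features(muap))
```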

  20. DCT domain feature extraction scheme based on motor unit action potential of EMG signal for neuromuscular disease classification

    PubMed Central

    Doulah, Abul Barkat Mollah Sayeed Ud; Zhu, Wei-Ping; Ahmad, M. Omair

    2014-01-01

    A feature extraction scheme based on discrete cosine transform (DCT) of electromyography (EMG) signals is proposed for the classification of normal event and a neuromuscular disease, namely the amyotrophic lateral sclerosis. Instead of employing DCT directly on EMG data, it is employed on the motor unit action potentials (MUAPs) extracted from the EMG signal via a template matching-based decomposition technique. Unlike conventional MUAP-based methods, only one MUAP with maximum dynamic range is selected for DCT-based feature extraction. Magnitude and frequency values of a few high-energy DCT coefficients corresponding to the selected MUAP are used as the desired feature which not only reduces computational burden, but also offers better feature quality with high within-class compactness and between-class separation. For the purpose of classification, the K-nearest neighbourhood classifier is employed. Extensive analysis is performed on clinical EMG database and it is found that the proposed method provides a very satisfactory performance in terms of specificity, sensitivity and overall classification accuracy. PMID:26609372

  1. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors. It is of great value to detect the resulting fault feature automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is put forward based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform such that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced as it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by the maximization of the fault feature ratio, which is a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without parameters being artificially specified. The proposed method is applied to two engineering cases, the signals of which were collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings compared with the Fourier transform, direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.
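
    The fault feature ratio itself is not reproduced here, but the Hilbert demodulation step it builds on can be sketched as an envelope spectrum, which exposes periodic fault impacts as low-frequency peaks; the simulated fault signal below is illustrative only.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(signal, fs):
    """Hilbert demodulation: envelope spectrum used to expose periodic
    bearing-fault impacts hidden in a vibration signal."""
    env = np.abs(hilbert(signal))          # analytic-signal envelope
    env -= env.mean()                      # remove DC before the FFT
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

# Example: 80 Hz fault impacts exciting a 3 kHz resonance, plus noise.
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
impacts = (np.sin(2 * np.pi * 80 * t) > 0.999).astype(float)
ringing = np.exp(-np.arange(200) / 20) * np.sin(2 * np.pi * 3000 * np.arange(200) / fs)
x = np.convolve(impacts, ringing, mode="same")
x += 0.2 * np.random.default_rng(3).normal(size=len(t))
freqs, spec = envelope_spectrum(x, fs)
print(freqs[np.argmax(spec[1:200]) + 1])   # expect a peak near 80 Hz
```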

  2. Graph-based sensor fusion for classification of transient acoustic signals.

    PubMed

    Srinivas, Umamahesh; Nasrabadi, Nasser M; Monga, Vishal

    2015-03-01

    Advances in acoustic sensing have enabled the simultaneous acquisition of multiple measurements of the same physical event via co-located acoustic sensors. We exploit the inherent correlation among such multiple measurements for acoustic signal classification, to identify the launch/impact of munition (i.e., rockets, mortars). Specifically, we propose a probabilistic graphical model framework that can explicitly learn the class conditional correlations between the cepstral features extracted from these different measurements. Additionally, we employ symbolic dynamic filtering-based features, which offer improvements over the traditional cepstral features in terms of robustness to signal distortions. Experiments on real acoustic data sets show that our proposed algorithm outperforms conventional classifiers as well as the recently proposed joint sparsity models for multisensor acoustic classification. Additionally our proposed algorithm is less sensitive to insufficiency in training samples compared to competing approaches. PMID:25014986

  3. Consistent performance measurement of a system to detect masses in mammograms based on blind feature extraction

    PubMed Central

    2013-01-01

    Background: Breast cancer continues to be a leading cause of cancer deaths among women, especially in Western countries. In the last two decades, many methods have been proposed to achieve a robust mammography-based computer aided detection (CAD) system. A CAD system should provide high performance over time and in different clinical situations; that is, the system should be adaptable to different clinical situations and should provide consistent performance. Methods: We tested our system seeking a measure of the guarantee of its consistent performance. The method is based on blind feature extraction by independent component analysis (ICA) and classification by neural network (NN) or SVM classifiers. The test mammograms were from the Digital Database for Screening Mammography (DDSM). This database was constructed collaboratively by four institutions over more than 10 years. We took advantage of this to train our system using the mammograms from each institution separately and then test it on the remaining mammograms. We performed another experiment to compare the results and thus obtain the measure sought. This experiment consists of forming the learning sets with all available prototypes regardless of the institution in which they were generated, thereby obtaining the overall results. Results: The smallest variation, obtained by comparing the results of the testing set in each experiment (performed by training the system using the mammograms from one institution and testing with the remaining) with those of the overall result, considering the success rate at an intermediate decision threshold, was roughly 5%, and the largest variation was roughly 17%. However, considering the area under the ROC curve, the smallest variation was close to 4% and the largest variation was about 6%. Conclusions: Considering the heterogeneity in the datasets used to train and test our system in each case, we think that the variation of performance obtained when the results are
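
    A minimal sketch of the blind feature extraction step (ICA on flattened mammogram ROIs), assuming scikit-learn's FastICA is available; the patch size, component count, and synthetic data are illustrative, and the classification stage (NN/SVM) is omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_features(patches, n_components=10, random_state=0):
    """Blind feature extraction: learn independent components from mammogram
    ROIs (flattened patches) and project each patch onto them. The resulting
    coefficients serve as the feature vector for a downstream NN/SVM classifier."""
    ica = FastICA(n_components=n_components, random_state=random_state, max_iter=1000)
    features = ica.fit_transform(patches)     # (n_patches, n_components)
    return features, ica

# Example with synthetic 16x16 ROIs (each row is a flattened patch).
rng = np.random.default_rng(0)
patches = rng.normal(size=(200, 256))
feats, model = ica_features(patches)
print(feats.shape)   # (200, 10)
```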

  4. Reproducibility and Prognosis of Quantitative Features Extracted from CT Images

    PubMed Central

    Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J

    2014-01-01

    We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
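
    A small sketch of the reproducibility screening described above: Lin's concordance correlation coefficient between test and retest feature values, plus an illustrative form of the dynamic-range measure (the exact normalization used in the paper may differ).

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between test and retest
    measurements of an image feature (used to screen reproducible features)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population (biased) variances
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def dynamic_range(x, y):
    """Illustrative dynamic-range measure: one minus the mean test-retest
    difference normalised by the observed feature range (assumed form)."""
    diff = np.abs(np.asarray(x, float) - np.asarray(y, float))
    span = max(np.max(np.r_[x, y]) - np.min(np.r_[x, y]), 1e-12)
    return 1.0 - diff.mean() / span

test = np.array([5.1, 7.3, 2.2, 9.8, 4.4])
retest = np.array([5.0, 7.6, 2.0, 9.5, 4.7])
print(round(concordance_ccc(test, retest), 3), round(dynamic_range(test, retest), 3))
```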

  5. Testing the Self-Similarity Exponent to Feature Extraction in Motor Imagery Based Brain Computer Interface Systems

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bermúdez, Germán; Sánchez-Granero, Miguel Ángel; García-Laencina, Pedro J.; Fernández-Martínez, Manuel; Serna, José; Roca-Dorda, Joaquín

    2015-12-01

    A Brain Computer Interface (BCI) system is a tool that does not require any muscle action to transmit information. Acquisition, preprocessing, feature extraction (FE), and classification of electroencephalograph (EEG) signals constitute the main steps of a motor imagery BCI. Among them, FE is crucial, since the underlying EEG knowledge must be properly extracted into a feature vector. Linear approaches have been widely applied to FE in BCI, whereas nonlinear tools are less common in the literature. Thus, the main goal of this paper is to check whether Hurst exponent and fractal dimension based estimators are valid indicators for FE in motor imagery BCI. The final results were not as good as expected, which may be because the analyzed EEG signals in these motor imagery tasks were not sufficiently self-similar.
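
    One common Hurst-exponent estimator of the kind evaluated above is the rescaled-range (R/S) method; the sketch below is a generic implementation, not the specific estimators compared in the paper.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a 1-D signal.
    Values near 0.5 indicate no long-range dependence (not self-similar)."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()          # range of cumulative deviations
            s = seg.std()
            if s > 0:
                vals.append(r / s)
        if vals:
            sizes.append(size)
            rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

# White noise should give H close to 0.5.
print(round(hurst_rs(np.random.default_rng(0).normal(size=4096)), 2))
```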

  6. Novel Folded-PCA for improved feature extraction and data reduction with hyperspectral imaging and SAR in remote sensing

    NASA Astrophysics Data System (ADS)

    Zabalza, Jaime; Ren, Jinchang; Yang, Mingqiang; Zhang, Yi; Wang, Jun; Marshall, Stephen; Han, Junwei

    2014-07-01

    As a widely used approach for feature extraction and data reduction, Principal Components Analysis (PCA) suffers from high computational cost, large memory requirement and low efficacy in dealing with large dimensional datasets such as Hyperspectral Imaging (HSI). Consequently, a novel Folded-PCA is proposed, where the spectral vector is folded into a matrix to allow the covariance matrix to be determined more efficiently. With this matrix-based representation, both global and local structures are extracted to provide additional information for data classification. Moreover, both the computational cost and the memory requirement are significantly reduced. Using a Support Vector Machine (SVM) for classification on two well-known HSI datasets and one Synthetic Aperture Radar (SAR) dataset in remote sensing, quantitative results are generated for objective evaluations. Comprehensive results indicate that the proposed Folded-PCA approach outperforms not only the conventional PCA but also the baseline approach in which the whole feature set is used.
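
    A minimal sketch of the folding idea described above: each spectral vector is reshaped into a matrix so that only a small covariance matrix (fold width by fold width) has to be accumulated and eigen-decomposed; the fold width and data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def folded_pca_components(X, fold_width, n_components):
    """Folded-PCA sketch: each spectral vector (length B) is folded into a
    (B // fold_width, fold_width) matrix; the fold_width x fold_width covariance
    is accumulated over all pixels and folds, then eigen-decomposed. This keeps
    the covariance matrix far smaller than the full B x B matrix of plain PCA."""
    n_pixels, n_bands = X.shape
    n_folds = n_bands // fold_width
    used = n_folds * fold_width
    Xc = X[:, :used] - X[:, :used].mean(axis=0)      # mean-centre the spectra
    cov = np.zeros((fold_width, fold_width))
    for pixel in Xc:
        A = pixel.reshape(n_folds, fold_width)       # fold the spectral vector
        cov += A.T @ A
    cov /= (n_pixels * n_folds)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, ::-1][:, :n_components]        # leading eigenvectors

# Example: 1000 pixels with 200 spectral bands, folded into rows of 20 bands.
X = np.random.default_rng(0).normal(size=(1000, 200))
W = folded_pca_components(X, fold_width=20, n_components=5)
print(W.shape)   # (20, 5)
```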

  7. A Novel Feature Extraction Approach Using Window Function Capturing and QPSO-SVM for Enhancing Electronic Nose Performance

    PubMed Central

    Guo, Xiuzhen; Peng, Chao; Zhang, Songlin; Yan, Jia; Duan, Shukai; Wang, Lidan; Jia, Pengfei; Tian, Fengchun

    2015-01-01

    In this paper, a novel feature extraction approach which can be referred to as moving window function capturing (MWFC) has been proposed to analyze signals of an electronic nose (E-nose) used for detecting types of infectious pathogens in rat wounds. Meanwhile, a quantum-behaved particle swarm optimization (QPSO) algorithm is implemented in conjunction with support vector machine (SVM) for realizing a synchronization optimization of the sensor array and SVM model parameters. The results prove the efficacy of the proposed method for E-nose feature extraction, which can lead to a higher classification accuracy rate compared to other established techniques. Meanwhile it is interesting to note that different classification results can be obtained by changing the types, widths or positions of windows. By selecting the optimum window function for the sensor response, the performance of an E-nose can be enhanced. PMID:26131672

  8. Improved tests for global warming trend extraction in ocean acoustic travel-time data. Final technical report

    SciTech Connect

    Bottone, S.; Gray, H.L.; Woodward, W.A.

    1996-04-01

    A possible indication of the existence of global climate warming is the presence of a trend in the travel time of an acoustic signal along several ocean paths over a period of many years. This report describes new, improved tests for testing for linear trend in time series data with correlated residuals. We introduce a bootstrap based procedure to test for trend in this setting which is better adapted to controlling the significance levels. The procedure is applied to acoustic travel time data generated by the MASIG ocean model. It is shown how to generalize the improved method to multivariate, or vector, time series, which, in the ocean acoustics setting, corresponds to travel time data on many ocean paths. An appendix describes the TRENDS software, which enables the user to perform these calculations using a graphical user interface (GUI).

  9. Ultrasound Color Doppler Image Segmentation and Feature Extraction in MCP and Wrist Region in Evaluation of Rheumatoid Arthritis.

    PubMed

    Snekhalatha, U; Muthubhairavi, V; Anburajan, M; Gupta, Neelkanth

    2016-09-01

    The present study focuses on automatically segmenting the blood flow pattern of color Doppler ultrasound in the hand region of rheumatoid arthritis patients and on correlating the extracted statistical features and color Doppler parameters with standard parameters. Thirty patients with rheumatoid arthritis (RA) were examined in this study, giving a total of 300 joints of both hands, i.e., 240 MCP joints and 60 wrists. Ultrasound color Doppler of both hands was obtained for all patients. Automated segmentation of the color Doppler images was performed using a color enhancement scaling based segmentation algorithm. The region of interest was fixed in the MCP joints and wrist of the hand. Features were extracted from the defined ROI of the segmented output image. The color fraction was measured using Mimics software. Standard parameters such as the HAQ score, DAS 28 score, and ESR were obtained for all patients. The color fraction tended to be increased in the wrist and MCP3 joints, which indicates increased blood flow and color Doppler activity as part of inflammation in the hand joints in RA. The ESR correlated significantly with extracted feature parameters such as the mean, standard deviation and entropy in the MCP3 and MCP4 joints and the wrist region. The developed automated color image segmentation algorithm provides a quantitative analysis for the diagnosis and assessment of RA. The correlation between the color Doppler parameters and the standard parameters is of significance for the quantitative analysis of RA in the MCP3 joint and the wrist region.

  10. Effects of charge design features on parameters of acoustic and seismic waves and cratering, for SMR chemical surface explosions

    NASA Astrophysics Data System (ADS)

    Gitterman, Y.

    2012-04-01

    time delays clearly separated for the shot of IMI explosives (characterized by much higher detonation velocity than ANFO). Additionally acoustic records at close distances from WSMR explosions Distant Image (2440 tons of ANFO) and Minor Uncle (2725 tons of ANFO) were used to extend the charge and distance range for the SS delay scaled relationship, that showed consistency with SMR ANFO shots. The developed specific charge design contributed to the success of this unique dual Sayarim explosion experiment, providing the strongest GT0 sources since the establishment of the IMS network, that demonstrated clearly the most favorable westward/ eastward infrasound propagation up to 3400/6250 km according to appropriate summer/winter weather pattern and stratospheric wind directions, respectively, and thus verified empirically common models of infrasound propagation in the atmosphere. The research was supported by the CTBTO, Vienna, and the Israel Ministry of Immigrant Absorption.

  11. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  12. Identification of cancerous gastric cells based on common features extracted from hyperspectral microscopic images

    PubMed Central

    Zhu, Siqi; Su, Kang; Liu, Yumeng; Yin, Hao; Li, Zhen; Huang, Furong; Chen, Zhenqiang; Chen, Weidong; Zhang, Ge; Chen, Yihong

    2015-01-01

    We construct a microscopic hyperspectral imaging system to distinguish between normal and cancerous gastric cells. We study common transmission-spectra features that only emerge when the samples are dyed with hematoxylin and eosin (H&E) stain. Subsequently, we classify the obtained visible-range transmission spectra of the samples into three zones. Distinct features are observed in the spectral responses between the normal and cancerous cell nuclei in each zone, which depend on the pH level of the cell nucleus. Cancerous gastric cells are precisely identified according to these features. The average cancer-cell identification accuracy obtained with a backpropagation algorithm program trained with these features is 95%. PMID:25909000

  13. A procedure for the extraction of airglow features in the presence of strong background radiation

    NASA Astrophysics Data System (ADS)

    Swift, W. R.; Torr, D. G.; Hamilton, C.; Dougani, H.; Torr, M. R.

    1990-09-01

    A technique is developed that can be used to derive the total intensity of band emissions from twilight airglow measurements when the basic spectral signature of the band to be considered is known. The method is designed to automatically extract total band or line intensities of a signal embedded in background radiation several orders of magnitude greater in brightness. It is shown that the technique developed can reliably measure the intensity of both weak and strong band and line emissions in the presence of strong twilight background radiation. The method of extraction is shown as part of a general purpose spectral analysis program written in VAX FORTRAN. This extraction procedure has been used successfully on emissions of Fe I, Ca(+), N2(+) (1N) (0-0) and (0-1), and OH in the near UV; the OI red (630 nm) and green (558 nm) lines in the visible; and the OH Meinel bands and O(+) (2P) 732 nm emission in the near IR.

  14. Classification of Informal Settlements Through the Integration of 2d and 3d Features Extracted from Uav Data

    NASA Astrophysics Data System (ADS)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  15. Designing a robust feature extraction method based on optimum allocation and principal component analysis for epileptic EEG signal classification.

    PubMed

    Siuly, Siuly; Li, Yan

    2015-04-01

    The aim of this study is to design a robust feature extraction method for the classification of multiclass EEG signals to determine valuable features from original epileptic EEG data and to discover an efficient classifier for the features. An optimum allocation based principal component analysis method named as OA_PCA is developed for the feature extraction from epileptic EEG data. As EEG data from different channels are correlated and huge in number, the optimum allocation (OA) scheme is used to discover the most favorable representatives with minimal variability from a large number of EEG data. The principal component analysis (PCA) is applied to construct uncorrelated components and also to reduce the dimensionality of the OA samples for an enhanced recognition. In order to choose a suitable classifier for the OA_PCA feature set, four popular classifiers: least square support vector machine (LS-SVM), naive bayes classifier (NB), k-nearest neighbor algorithm (KNN), and linear discriminant analysis (LDA) are applied and tested. Furthermore, our approaches are also compared with some recent research work. The experimental results show that the LS-SVM_1v1 approach yields 100% of the overall classification accuracy (OCA), improving up to 7.10% over the existing algorithms for the epileptic EEG data. The major finding of this research is that the LS-SVM with the 1v1 system is the best technique for the OA_PCA features in the epileptic EEG signal classification that outperforms all the recent reported existing methods in the literature.

  16. Feature Extraction of PDV Challenge Data Set A with Digital Down Shift (DDS)

    SciTech Connect

    Tunnell, T. W.

    2012-10-18

    This slide-show is about data analysis in photonic Doppler velocimetry. The digital down shift subtracts a specified velocity (frequency) from all components in the Fourier frequency domain and generates both the down shifted in-phase and out-of-phase waveforms so that phase and displacement can be extracted through a continuous unfold of the arctangent.
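
    A rough sketch of the digital down shift as described: mix the analytic (Hilbert-transformed) beat signal with a complex exponential at the chosen shift frequency, then unwrap the arctangent phase of the resulting in-phase/out-of-phase pair; the sampling rate, shift frequency and chirp below are scaled-down illustrative values, not PDV Challenge data.

```python
import numpy as np
from scipy.signal import hilbert

def digital_down_shift(signal, fs, f_shift):
    """Digital down shift (DDS) sketch: shift the spectrum of the beat signal
    down by f_shift and return the in-phase and out-of-phase waveforms plus the
    continuously unwrapped (arctangent) phase."""
    t = np.arange(len(signal)) / fs
    analytic = hilbert(signal) * np.exp(-2j * np.pi * f_shift * t)
    phase = np.unwrap(np.angle(analytic))        # continuous arctangent unfold
    return analytic.real, analytic.imag, phase

# Example: a 100 kHz beat with a slow chirp, sampled at 1 MHz.
fs = 1_000_000
t = np.arange(0, 0.01, 1 / fs)
beat = np.cos(2 * np.pi * (100_000 * t + 2_000 * t ** 2))
i, q, phi = digital_down_shift(beat, fs, f_shift=100_000)
residual_freq = np.gradient(phi, 1 / fs) / (2 * np.pi)   # deviation from 100 kHz
print(residual_freq[:5].round(1))
```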

  17. Use of Landsat-derived temporal profiles for corn-soybean feature extraction and classification

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Carnes, J. G.; Austin, W. W.

    1982-01-01

    A physical model is presented, which has been derived from multitemporal-multispectral data acquired by Landsat satellites, to describe the behavior and new features that are crop specific. A feasibility study over 40 sites was performed to classify the segment pixels into those of corn, soybeans, and others using the new features and a linear classifier. Results agree well with other existing methods, and it is shown that the multitemporal-multispectral scanner data can be transformed into two parameters that are closely related to the target of interest and thus can be used in classification. The approach is less time intensive than other techniques and requires labeling of only pure pixels.

  18. Feature extraction of rolling bearing’s early weak fault based on EEMD and tunable Q-factor wavelet transform

    NASA Astrophysics Data System (ADS)

    Wang, Hongchao; Chen, Jin; Dong, Guangming

    2014-10-01

    When an early weak fault emerges in a rolling bearing, the fault feature is too weak to extract using traditional fault diagnosis methods such as the Fast Fourier Transform (FFT) and envelope demodulation. The tunable Q-factor wavelet transform (TQWT) is an improvement on the traditional single Q-factor wavelet transform, and it is well suited to separating the low Q-factor transient impact component from the high Q-factor sustained oscillation components when a fault emerges in a rolling bearing. However, it is hard to extract the rolling bearing's early weak fault feature perfectly using the TQWT directly. Ensemble empirical mode decomposition (EEMD) is an improvement on empirical mode decomposition (EMD) which not only has the virtue of the self-adaptability of EMD but also overcomes the mode mixing problem of EMD. The original signal of the rolling bearing's early weak fault is decomposed by EEMD and several intrinsic mode functions (IMFs) are obtained. Then the IMF with the biggest kurtosis index value is selected and subsequently handled by the TQWT. At last, the envelope demodulation method is applied to the low Q-factor transient impact component and a satisfactory extraction result is obtained.

  19. Differential phase acoustic microscope for micro-NDE

    NASA Technical Reports Server (NTRS)

    Waters, David D.; Pusateri, T. L.; Huang, S. R.

    1992-01-01

    A differential phase scanning acoustic microscope (DP-SAM) was developed, fabricated, and tested in this project. This includes the acoustic lens and transducers, driving and receiving electronics, scanning stage, scanning software, and display software. This DP-SAM can produce mechanically raster-scanned acoustic microscopic images of differential phase, differential amplitude, or amplitude of the time gated returned echoes of the samples. The differential phase and differential amplitude images provide better image contrast over the conventional amplitude images. A specially designed miniature dual beam lens was used to form two foci to obtain the differential phase and amplitude information of the echoes. High image resolution (1 micron) was achieved by applying high frequency (around 1 GHz) acoustic signals to the samples and placing two foci close to each other (1 micron). Tone burst was used in this system to obtain a good estimation of the phase differences between echoes from the two adjacent foci. The system can also be used to extract the V(z) acoustic signature. Since two acoustic beams and four receiving modes are available, there are 12 possible combinations to produce an image or a V(z) scan. This provides a unique feature of this system that none of the existing acoustic microscopic systems can provide for the micro-nondestructive evaluation applications. The entire system, including the lens, electronics, and scanning control software, has made a competitive industrial product for nondestructive material inspection and evaluation and has attracted interest from existing acoustic microscope manufacturers.

  20. The Research of Feature Extraction Method of Liver Pathological Image Based on Multispatial Mapping and Statistical Properties

    PubMed Central

    Liu, Huiling; Xia, Bingbing; Yi, Dehui

    2016-01-01

    We propose a new feature extraction method of liver pathological image based on multispatial mapping and statistical properties. For liver pathological images of Hematein Eosin staining, the image of R and B channels can reflect the sensitivity of liver pathological images better, while the entropy space and Local Binary Pattern (LBP) space can reflect the texture features of the image better. To obtain the more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. The traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average correction HLAC feature. We calculate the statistical properties and the average gray value of pathological images and then update the current pixel value as the absolute value of the difference between the current pixel gray value and the average gray value, which can be more sensitive to the gray value changes of pathological images. Lastly the HLAC template is used to calculate the features of the updated image. The experiment results show that the improved features of the multispatial mapping have the better classification performance for the liver cancer. PMID:27022407

  1. Application of computer-extracted breast tissue texture features in predicting false-positive recalls from screening mammography

    NASA Astrophysics Data System (ADS)

    Ray, Shonket; Choi, Jae Y.; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2014-03-01

    Mammographic texture features have been shown to have value in breast cancer risk assessment. Previous models have also been developed that use computer-extracted mammographic features of breast tissue complexity to predict the risk of false-positive (FP) recall from breast cancer screening with digital mammography. This work details a novel locally adaptive parenchymal texture analysis algorithm that identifies and extracts mammographic features of local parenchymal tissue complexity potentially relevant for false-positive biopsy prediction. This algorithm has two important aspects: (1) the adaptive nature of automatically determining an optimal number of regions of interest (ROIs) in the image and each ROI's corresponding size based on the parenchymal tissue distribution over the whole breast region and (2) characterizing both the local and global mammographic appearances of the parenchymal tissue that could provide more discriminative information for FP biopsy risk prediction. Preliminary results show that this locally adaptive texture analysis algorithm, in conjunction with logistic regression, can predict the likelihood of false-positive biopsy with an ROC performance value of AUC=0.92 (p<0.001) with a 95% confidence interval [0.77, 0.94]. Significant texture feature predictors (p<0.05) included contrast, sum variance and difference average. Sensitivity for false-positives was 51% at the 100% cancer detection operating point. Although preliminary, clinical implications of using prediction models incorporating these texture features may include the future development of better tools and guidelines regarding personalized breast cancer screening recommendations. Further studies are warranted to prospectively validate our findings in larger screening populations and evaluate their clinical utility.

  2. Automated extraction of absorption features from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Geophysical and Environmental Research Imaging Spectrometer (GERIS) data

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Calvin, Wendy M.; Seznec, Olivier

    1988-01-01

    Automated techniques were developed for the extraction and characterization of absorption features from reflectance spectra. The absorption feature extraction algorithms were successfully tested on laboratory, field, and aircraft imaging spectrometer data. A suite of laboratory spectra of the most common minerals was analyzed and absorption band characteristics tabulated. A prototype expert system was designed, implemented, and successfully tested to allow identification of minerals based on the extracted absorption band characteristics. AVIRIS spectra for a site in the northern Grapevine Mountains, Nevada, have been characterized and the minerals sericite (fine grained muscovite) and dolomite were identified. The minerals kaolinite, alunite, and buddingtonite were identified and mapped for a site at Cuprite, Nevada, using the feature extraction algorithms on the new Geophysical and Environmental Research 64 channel imaging spectrometer (GERIS) data. The feature extraction routines (written in FORTRAN and C) were interfaced to the expert system (written in PROLOG) to allow both efficient processing of numerical data and logical spectrum analysis.

  3. Modeling resident error-making patterns in detection of mammographic masses using computer-extracted image features: preliminary experiments

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora

    2014-03-01

    Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.

  4. Unsupervised clustering analyses of features extraction for a caries computer-assisted diagnosis using dental fluorescence images

    NASA Astrophysics Data System (ADS)

    Bessani, Michel; da Costa, Mardoqueu M.; Lins, Emery C. C. C.; Maciel, Carlos D.

    2014-02-01

    Computer-assisted diagnoses (CAD) are performed by systems with embedded knowledge. These systems work as a second opinion to the physician and use patient data to infer diagnoses for health problems. Caries is the most common oral disease and directly affects both individuals and the society. Here we propose the use of dental fluorescence images as input of a caries computer-assisted diagnosis. We use texture descriptors together with statistical pattern recognition techniques to measure the descriptors performance for the caries classification task. The data set consists of 64 fluorescence images of in vitro healthy and carious teeth including different surfaces and lesions already diagnosed by an expert. The texture feature extraction was performed on fluorescence images using RGB and YCbCr color spaces, which generated 35 different descriptors for each sample. Principal components analysis was performed for the data interpretation and dimensionality reduction. Finally, unsupervised clustering was employed for the analysis of the relation between the output labeling and the diagnosis of the expert. The PCA result showed a high correlation between the extracted features; seven components were sufficient to represent 91.9% of the original feature vectors information. The unsupervised clustering output was compared with the expert classification resulting in an accuracy of 96.88%. The results show the high accuracy of the proposed approach in identifying carious and non-carious teeth. Therefore, the development of a CAD system for caries using such an approach appears to be promising.

  5. Improving the detection of wind fields from LIDAR aerosol backscatter using feature extraction

    NASA Astrophysics Data System (ADS)

    Bickel, Brady R.; Rotthoff, Eric R.; Walters, Gage S.; Kane, Timothy J.; Mayor, Shane D.

    2016-04-01

    The tracking of winds and atmospheric features has many applications, from predicting and analyzing weather patterns in the upper and lower atmosphere to monitoring air movement from pig and chicken farms. Doppler LIDAR systems exist to quantify the underlying wind speeds, but cost of these systems can sometimes be relatively high, and processing limitations exist. The alternative is using an incoherent LIDAR system to analyze aerosol backscatter. Improving the detection and analysis of wind information from aerosol backscatter LIDAR systems will allow for the adoption of these relatively low cost instruments in environments where the size, complexity, and cost of other options are prohibitive. Using data from a simple aerosol backscatter LIDAR system, we attempt to extend the processing capabilities by calculating wind vectors through image correlation techniques to improve the detection of wind features.

  6. An ultra low power feature extraction and classification system for wearable seizure detection.

    PubMed

    Page, Adam; Pramod, Siddharth; Oates, Tim; Mohsenin, Tinoosh

    2015-08-01

    In this paper we explore the use of a variety of machine learning algorithms for designing a reliable and low-power, multi-channel EEG feature extractor and classifier for predicting seizures from electroencephalographic data (scalp EEG). Different machine learning classifiers including k-nearest neighbor, support vector machines, naïve Bayes, logistic regression, and neural networks are explored with the goal of maximizing detection accuracy while minimizing power, area, and latency. The input to each machine learning classifier is a 198 feature vector containing 9 features for each of the 22 EEG channels obtained over 1-second windows. All classifiers were able to obtain F1 scores over 80% and onset sensitivity of 100% when tested on 10 patients. Among five different classifiers that were explored, logistic regression (LR) proved to have minimum hardware complexity while providing average F-1 score of 91%. Both ASIC and FPGA implementations of logistic regression are presented and show the smallest area, power consumption, and the lowest latency when compared to the previous work. PMID:26737931

  7. Fourier-based shape feature extraction technique for computer-aided B-Mode ultrasound diagnosis of breast tumor.

    PubMed

    Lee, Jong-Ha; Seong, Yeong Kyeong; Chang, Chu-Ho; Park, Jinman; Park, Moonho; Woo, Kyoung-Gu; Ko, Eun Young

    2012-01-01

    Early detection of breast tumors is critical in determining the best possible treatment approach. Due to its superiority over mammography in detecting lesions in dense breast tissue, ultrasound imaging has become an important modality in breast tumor detection and classification. This paper discusses novel Fourier-based shape feature extraction techniques that provide enhanced classification accuracy for breast tumors in a computer-aided B-mode ultrasound diagnosis system. To demonstrate the effectiveness of the proposed method, experiments were performed using 4,107 ultrasound images with 2,508 malignancy cases. Experimental results show that the breast tumor classification accuracy of the proposed technique was 15.8%, 5.43%, 17.32%, and 13.86% higher than that of the previous shape features, namely number of protuberances, number of depressions, lobulation index, and dissimilarity, respectively. PMID:23367430

  8. Fourier-based shape feature extraction technique for computer-aided B-Mode ultrasound diagnosis of breast tumor.

    PubMed

    Lee, Jong-Ha; Seong, Yeong Kyeong; Chang, Chu-Ho; Park, Jinman; Park, Moonho; Woo, Kyoung-Gu; Ko, Eun Young

    2012-01-01

    Early detection of breast tumors is critical in determining the best possible treatment approach. Due to its superiority over mammography in detecting lesions in dense breast tissue, ultrasound imaging has become an important modality in breast tumor detection and classification. This paper discusses novel Fourier-based shape feature extraction techniques that provide enhanced classification accuracy for breast tumors in a computer-aided B-mode ultrasound diagnosis system. To demonstrate the effectiveness of the proposed method, experiments were performed using 4,107 ultrasound images with 2,508 malignancy cases. Experimental results show that the breast tumor classification accuracy of the proposed technique was 15.8%, 5.43%, 17.32%, and 13.86% higher than that of the previous shape features, namely number of protuberances, number of depressions, lobulation index, and dissimilarity, respectively.
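
    The paper's specific shape features are not reproduced here, but a generic Fourier shape descriptor of the family these records refer to can be sketched as the FFT of the complex-valued boundary, normalised for translation and scale; the example contours below are synthetic.

```python
import numpy as np

def fourier_shape_descriptors(contour, n_descriptors=16):
    """Fourier descriptors of a closed tumor boundary: the contour points are
    treated as complex numbers, transformed with the FFT, and normalised to be
    invariant to translation (drop the DC term) and scale (divide by |c1|)."""
    z = contour[:, 0] + 1j * contour[:, 1]
    c = np.fft.fft(z)
    c = c[1:n_descriptors + 1]                   # drop DC -> translation invariance
    return np.abs(c) / (np.abs(c[0]) + 1e-12)    # scale invariance

# Example: a smooth elliptical boundary versus a lobulated one.
theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ellipse = np.c_[2 * np.cos(theta), np.sin(theta)]
lobulated = np.c_[(2 + 0.3 * np.cos(8 * theta)) * np.cos(theta),
                  (1 + 0.3 * np.cos(8 * theta)) * np.sin(theta)]
print(fourier_shape_descriptors(ellipse)[:4].round(3))
print(fourier_shape_descriptors(lobulated)[:4].round(3))
```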

  9. Sharp mandibular bone irregularities after lower third molar extraction: Incidence, clinical features and risk factors

    PubMed Central

    Alves-Pereira, Daniela; Valmaseda-Castellón, Eduard; Laskin, Daniel M.; Berini-Aytés, Leonardo; Gay-Escoda, Cosme

    2013-01-01

    Objectives: The purpose of this study was to determine the incidence and clinical symptoms associated with sharp mandibular bone irregularities (SMBI) after lower third molar extraction and to identify possible risk factors for this complication. Study Design: A mixed study design was used. A retrospective cohort study of 1432 lower third molar extractions was done to determine the incidence of SMBI and a retrospective case-control study was done to determine potential demographic and etiologic factors by comparing those patients with postoperative SMBI with controls. Results: Twelve SMBI were found (0.84%). Age was the most important risk factor for this complication. The operated side and the presence of an associated radiolucent image were also significantly related to the development of mandibular bone irregularities. The depth of impaction of the tooth might also be an important factor since erupted or nearly erupted third molars were more frequent in the SMBI group. Conclusions: SMBI are a rare postoperative complication after lower third molar removal. Older patients having left side lower third molars removed are more likely to develop this problem. The treatment should be the removal of the irregularity when the patient is symptomatic. Key words:Third molar, postoperative complication, bone irregularities, age. PMID:23524429

  10. Combining Spectral and Texture Features Using Random Forest Algorithm: Extracting Impervious Surface Area in Wuhan

    NASA Astrophysics Data System (ADS)

    Shao, Zhenfeng; Zhang, Yuan; Zhang, Lei; Song, Yang; Peng, Minjun

    2016-06-01

    Impervious surface area (ISA) is one of the most important indicators of urban environments. At present, based on multi-resolution remote sensing images, numerous approaches have been proposed to extract impervious surface, using statistical estimation, sub-pixel classification and spectral mixture analysis. Through these methods, impervious surface maps can be effectively applied to regional-scale planning and management. However, for large regions, high resolution remote sensing images can provide more detail and are therefore more conducive to environmental monitoring and urban management analysis. Since the purpose of this study is to map impervious surfaces more effectively, three classification algorithms (random forests, decision trees, and artificial neural networks) were tested for their ability to map impervious surface. Random forests outperformed the decision trees and artificial neural networks in precision. Combining spectral indices and texture, random forests were applied to impervious surface extraction with a producer's accuracy of 0.98, a user's accuracy of 0.97, an overall accuracy of 0.98 and a kappa coefficient of 0.97.

  11. Spectral Morphology for Feature Extraction from Multi- and Hyper-spectral Imagery.

    SciTech Connect

    Harvey, N. R.; Porter, R. B.

    2005-01-01

    For accurate and robust analysis of remotely-sensed imagery it is necessary to combine the information from both spectral and spatial domains in a meaningful manner. The two domains are intimately linked: objects in a scene are defined in terms of both their composition and their spatial arrangement, and cannot accurately be described by information from either of these two domains on their own. To date there have been relatively few methods for combining spectral and spatial information concurrently. Most techniques involve separate processing for extracting spatial and spectral information. In this paper we will describe several extensions to traditional morphological operators that can treat spectral and spatial domains concurrently and can be used to extract relationships between these domains in a meaningful way. This includes the investigation and development of suitable vector-ordering metrics and machine-learning-based techniques for optimizing the various choices involved, such as the morphological operator, the structuring element, and the vector-ordering metric. We demonstrate their application to a range of multi- and hyper-spectral image analysis problems.

  12. A Bayes optimal matrix-variate LDA for extraction of spatio-spectral features from EEG signals.

    PubMed

    Mahanta, Mohammad Shahin; Aghaei, Amirhossein S; Plataniotis, Konstantinos N

    2012-01-01

    Classification of mental states from electroencephalogram (EEG) signals is used for many applications in areas such as brain-computer interfacing (BCI). When represented in the frequency domain, the multichannel EEG signal can be considered as two-directional spatio-spectral data of high dimensionality. Extraction of salient features using feature extractors such as the commonly used linear discriminant analysis (LDA) is an essential step for the classification of these signals. However, multichannel EEG is naturally in matrix-variate format, while LDA and other traditional feature extractors are designed for vector-variate input. Consequently, these methods require a prior vectorization of the EEG signals, which ignores the inherent matrix-variate structure in the data and leads to high computational complexity. A matrix-variate formulation of LDA has previously been proposed. However, this heuristic formulation does not provide the Bayes optimality benefits of LDA. The current paper proposes a Bayes optimal matrix-variate formulation of LDA based on a matrix-variate model for the spatio-spectral EEG patterns. The proposed formulation also provides a strategy to select the most significant features among the different rows and columns.

  13. Application of the empirical mode decomposition to the extraction of features from EEG signals for mental task classification.

    PubMed

    Diez, Pablo F; Mut, Vicente; Laciar, Eric; Torres, Abel; Avila, Enrique

    2009-01-01

    In this work, a technique is proposed for the feature extraction of electroencephalographic (EEG) signals for the classification of mental tasks, which is an important part of the development of brain-computer interfaces (BCI). The Empirical Mode Decomposition (EMD) is a method capable of processing nonstationary and nonlinear signals such as the EEG. This technique was applied to the EEG signals of 7 subjects performing 5 mental tasks. For each mode obtained from the EMD and each EEG channel, six features were computed: Root Mean Square (RMS), Variance, Shannon Entropy, Lempel-Ziv Complexity Value, and Central and Maximum Frequencies, yielding a feature vector of 180 components. The Wilks' lambda parameter was applied for the selection of the most important variables, reducing the dimensionality of the feature vector. The classification of mental tasks was performed using Linear Discriminant Analysis (LDA) and Neural Networks (NN). With this method, the average classification over all subjects in the database was 91+/-5% and 87+/-5% using LDA and NN, respectively. It was concluded that the EMD allows better performance in the classification of mental tasks than that obtained with other traditional methods, such as spectral analysis.
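    To make the per-mode feature set concrete, the sketch below computes most of the listed features for a single intrinsic mode function (Lempel-Ziv complexity is omitted for brevity; the entropy convention, the use of a separate EMD package such as PyEMD to obtain the modes, and the synthetic test signal are assumptions):

      import numpy as np

      def mode_features(imf, fs):
          """RMS, variance, Shannon entropy, and central/maximum frequencies of one mode.

          imf: one intrinsic mode function (1-D array), e.g. obtained from an EMD package.
          fs:  sampling frequency in Hz.
          """
          rms = np.sqrt(np.mean(imf ** 2))
          var = np.var(imf)
          hist, _ = np.histogram(imf, bins=32)               # amplitude histogram
          p = hist / hist.sum()
          entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))    # Shannon entropy of the histogram
          spec = np.abs(np.fft.rfft(imf))
          freqs = np.fft.rfftfreq(len(imf), d=1.0 / fs)
          central_f = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)   # spectral centroid
          max_f = freqs[np.argmax(spec)]                               # dominant frequency
          return np.array([rms, var, entropy, central_f, max_f])

      # Toy usage on a synthetic damped oscillation standing in for one EEG mode.
      fs = 256.0
      t = np.arange(0, 2, 1 / fs)
      print(mode_features(np.sin(2 * np.pi * 10 * t) * np.exp(-t), fs))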

  14. Rotor acoustic monitoring system (RAMS): a fatigue crack detection system

    NASA Astrophysics Data System (ADS)

    Schoess, Jeffrey N.

    1996-05-01

    The Rotor Acoustic Monitoring System (RAMS) is an embedded structural health monitoring system to demonstrate the ability to detect rotor head fatigue cracks and provide early warning of propagating fatigue cracks in rotor components of Navy helicopters. The concept definition effort was performed to assess the feasibility of detecting rotor head fatigue cracks using bulk-wave wide-bandwidth acoustic emission technology. A wireless piezo-based transducer system is being designed to capture rotor fatigue data in real time and perform acoustic emission (AE) event detection, feature extraction, and classification. A flight test effort will be performed to characterize rotor acoustic background noise and flight environment characteristics. The long-term payoff of the RAMS technology includes structural integrity verification and leak detection for large industrial tanks and nuclear plant cooling towers, which could be performed using the RAMS AE technology. A summary of the RAMS concept, bench-level AE fatigue testing, and results are presented.

  15. Fatigue damage localization using time-domain features extracted from nonlinear Lamb waves

    NASA Astrophysics Data System (ADS)

    Hong, Ming; Su, Zhongqing; Lu, Ye; Cheng, Li

    2014-03-01

    Nonlinear guided waves are sensitive to small-scale fatigue damage that may hardly be identified by traditional techniques. A characterization method for fatigue damage is established based on nonlinear Lamb waves in conjunction with the use of a piezoelectric sensor network. Theories on nonlinear Lamb waves for damage detection are first introduced briefly. Then, the ineffectiveness of using pure frequency-domain information of nonlinear wave signals for locating damage is discussed. With a revisit to traditional gross-damage localization techniques based on the time of flight, the idea of using temporal signal features of nonlinear Lamb waves to locate fatigue damage is introduced. This process involves a time-frequency analysis that enables the damage-induced nonlinear signal features, which are either undiscernible in the original time history or uninformative in the frequency spectrum, to be revealed. Subsequently, a finite element modeling technique is employed, accounting for various sources of nonlinearities in a fatigued medium. A piezoelectric sensor network is configured to actively generate and acquire probing Lamb waves that involve damage-induced nonlinear features. A probability-based diagnostic imaging algorithm is further proposed, presenting results in diagnostic images intuitively. The approach is experimentally verified on a fatigue-damaged aluminum plate, showing reasonably good accuracy. Compared to existing nonlinear ultrasonics-based inspection techniques, this approach uses a permanently attached sensor network that well accommodates automated online health monitoring; more significantly, it utilizes time-domain information of higher-order harmonics from time-frequency analysis, and demonstrates a great potential for quantitative characterization of small-scale damage with improved localization accuracy.

  16. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains.

    PubMed

    Gonçalves, Ariadne Barbosa; Souza, Junior Silva; Silva, Gercina Gonçalves da; Cereda, Marney Pascoli; Pott, Arnildo; Naka, Marco Hiroshi; Pistori, Hemerson

    2016-01-01

    The classification of pollen species and types is an important task in many areas like forensic palynology, archaeological palynology and melissopalynology. This paper presents the first annotated image dataset for the Brazilian Savannah pollen types that can be used to train and test computer vision based automatic pollen classifiers. A first baseline human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. In order to assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned and tested. The results of these tests are also presented in this paper. PMID:27276196

  17. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains.

    PubMed

    Gonçalves, Ariadne Barbosa; Souza, Junior Silva; Silva, Gercina Gonçalves da; Cereda, Marney Pascoli; Pott, Arnildo; Naka, Marco Hiroshi; Pistori, Hemerson

    2016-01-01

    The classification of pollen species and types is an important task in many areas like forensic palynology, archaeological palynology and melissopalynology. This paper presents the first annotated image dataset for the Brazilian Savannah pollen types that can be used to train and test computer vision based automatic pollen classifiers. A first baseline human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. In order to assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned and tested. The results of these tests are also presented in this paper.

  18. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains

    PubMed Central

    Souza, Junior Silva; da Silva, Gercina Gonçalves

    2016-01-01

    The classification of pollen species and types is an important task in many areas like forensic palynology, archaeological palynology and melissopalynology. This paper presents the first annotated image dataset for the Brazilian Savannah pollen types that can be used to train and test computer vision based automatic pollen classifiers. A first baseline human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. In order to assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned and tested. The results of these tests are also presented in this paper. PMID:27276196

  19. A wavelet transform based feature extraction and classification of cardiac disorder.

    PubMed

    Sumathi, S; Beaulah, H Lilly; Vanithamani, R

    2014-09-01

    This paper presents an intelligent diagnosis system using a hybrid approach based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) model for the classification of electrocardiogram (ECG) signals. The method uses the Symlet wavelet transform to analyze the ECG signals and extract parameters related to dangerous cardiac arrhythmias. These parameters were used as inputs to the ANFIS classifier to distinguish five important types of ECG signals: Normal Sinus Rhythm (NSR), Atrial Fibrillation (AF), Pre-Ventricular Contraction (PVC), Ventricular Fibrillation (VF), and Ventricular Flutter (VFLU) Myocardial Ischemia. The inclusion of ANFIS in the analysis algorithm yields very interesting recognition and classification capabilities across a broad spectrum of biomedical engineering applications. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracy. The results indicate that the proposed ANFIS model offers a potential advantage in classifying ECG signals. A classification accuracy of 98.24% is achieved. PMID:25023652
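    As an illustration of the wavelet feature-extraction stage (a sketch only: the PyWavelets library, the 'sym8' wavelet, the decomposition level and the sub-band energy features are assumptions; the ANFIS classifier is not reproduced), sub-band energies can be derived from a Symlet decomposition as follows:

      import numpy as np
      import pywt  # PyWavelets

      def wavelet_energy_features(signal, wavelet="sym8", level=4):
          """Relative sub-band energies from a Symlet wavelet decomposition."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          return energies / energies.sum()

      # Toy usage on a synthetic quasi-periodic signal standing in for an ECG segment.
      fs = 360.0
      t = np.arange(0, 3, 1 / fs)
      ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
      print(wavelet_energy_features(ecg_like))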

  20. Structural features and in vivo antitussive activity of the water extracted polymer from Glycyrrhiza glabra.

    PubMed

    Saha, Sudipta; Nosál'ová, Gabriella; Ghosh, Debjani; Flešková, Dana; Capek, Peter; Ray, Bimalendu

    2011-05-01

    Antitussive drugs are amongst the most widely used medications worldwide; however, no new class of drugs has been introduced into the market for many years. Herein, we have analyzed the water-extracted polymeric fraction (WE) of Glycyrrhiza glabra. This arabinogalactan protein enriched fraction, ≥ 85% of which is precipitated with Yariv reagent, consisted mainly of 3- and 3,6-linked galactopyranosyl, and 5- and 3,5-linked arabinofuranosyl residues. Peroral administration of this polymer at a dose of 50 mg/kg body weight decreased the number of citric acid induced cough efforts in guinea pigs more effectively than codeine. It did not induce significant changes in specific airway resistance or provoke any observable adverse effects.

  1. Morphological feature extraction for the classification of digital images of cancerous tissues.

    PubMed

    Thiran, J P; Macq, B

    1996-10-01

    This paper presents a new method for automatic recognition of cancerous tissues from an image of a microscopic section. Based on the shape and size analysis of the observed cells, this method provides the physician with nonsubjective numerical values for four criteria of malignancy. The automatic approach is based on mathematical morphology, and more specifically on the use of geodesy. This technique is used first to remove the background noise from the image, and then to segment the nuclei of the cells and analyze their shape, size, and texture. From the values of the extracted criteria, an automatic classification of the image (cancerous or not) is finally performed.

  2. [Tensor Feature Extraction Using Multi-linear Principal Component Analysis for Brain Computer Interface].

    PubMed

    Wang, Jinjia; Yang, Liang

    2015-06-01

    The brain computer interface (BCI) can be used to control external devices directly through electroencephalogram (EEG) information. A multi-linear principal component analysis (MPCA) framework was used to overcome the limitations of processing the tensor form of multichannel EEG signals with traditional principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA). Based on MPCA, we used tensor-to-matrix projection to achieve dimensionality reduction and feature extraction. We then used a Fisher linear classifier to classify the features. Furthermore, we applied this novel method to BCI competition II dataset 4 and BCI competition IV dataset 3 in the experiment. The second-order tensor representation of time-space EEG data and the third-order tensor representation of time-space-frequency EEG data were used. The best results, which were superior to those from other dimensionality reduction methods, were obtained by careful tuning of the parameters P and Q. For the second-order tensor, the highest accuracy rates were 81.0% and 40.1%, and for the third-order tensor, the highest accuracy rates were 76.0% and 43.5%, respectively.

  3. Qualitative Features Extraction from Sensor Data using Short-time Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fernando

    2004-01-01

    The information gathered from sensors is used to determine the health of a sensor. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of the sensor(s) or of the system (or process). The step-up and step-down features, as well as the sensor disturbances, are assumed to be exponential. An RC network is used to model the main process, which is defined by a step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system runs for a period of at least three time constants of the main process every time a process feature occurs (e.g., a step change). The short-time Fourier transform of the signal is taken using the Hamming window. Three window widths are used. The DC value is removed from the windowed data prior to taking the FFT. The resulting three-dimensional spectral plots provide good time-frequency resolution. The results indicate distinct shapes corresponding to each process.

  4. Anthocyanin characterization, total phenolic quantification and antioxidant features of some Chilean edible berry extracts.

    PubMed

    Brito, Anghel; Areche, Carlos; Sepúlveda, Beatriz; Kennelly, Edward J; Simirgiotis, Mario J

    2014-01-01

    The anthocyanin composition and HPLC fingerprints of six small berries endemic to the VIII region of Chile were investigated using high resolution mass analysis for the first time (HR-ToF-ESI-MS). The antioxidant features of the six endemic species were compared, including a variety of blueberry, which is one of the most commercially significant berry crops in Chile. The anthocyanin fingerprints obtained for the fruits were compared and correlated with the antioxidant features measured by the bleaching of the DPPH radical, the ferric reducing antioxidant power (FRAP), the superoxide anion scavenging activity assay (SA), and the total content of phenolics, flavonoids and anthocyanins measured by spectroscopic methods. Thirty-one anthocyanins were identified, and the major ones were quantified by HPLC-DAD, mostly branched 3-O-glycosides of delphinidin, cyanidin, petunidin, peonidin and malvidin. Three phenolic acids (feruloylquinic acid, chlorogenic acid, and neochlorogenic acid) and several flavonols (hyperoside, isoquercitrin, quercetin, rutin, myricetin and isorhamnetin) were also identified. Calafate fruits showed the highest antioxidant activity (2.33 ± 0.21 μg/mL in the DPPH assay), followed by blueberry (3.32 ± 0.18 μg/mL) and arrayán (5.88 ± 0.21 μg/mL). PMID:25072199

  5. Application of the interferometric synthetic aperture radar (IFSAR) correlation file for use in feature extraction

    NASA Astrophysics Data System (ADS)

    Simental, Edmundo; Guthrie, Verner

    2002-11-01

    Fine resolution synthetic aperture radar (SAR) and interferometric synthetic aperture radar (IFSAR) have been widely used for the purpose of creating viable terrain maps. A map is only as good as the information it contains. Therefore, it is a major priority of the mapmakers that the data that go into the process be as complete and accurate as possible. In this paper, we analyze IFSAR correlation/de-correlation data to help extract terrain feature information. The correlation data contain the correlation coefficient between the bottom and top IFSAR radar channels. It is a 32-bit floating-point number. This number is a measure of the absolute complex correlation coefficient between the signals that are received in each channel. The range of these numbers is between zero and unity. Unity indicates 100% correlation and zero indicates no correlation. The correlation is a function of several system parameters including signal-to-noise ratio (SNR), local geometry, and scattering mechanism. These two radar channels are physically close together and the signals are inherently highly correlated. Significant differences are found beyond the fourth decimal place. We have concentrated our analysis on small features that are easily detectable in the correlation/de-correlation data and not so easily detectable in the elevation or magnitude data.

  6. Structured covariance principal component analysis for real-time onsite feature extraction and dimensionality reduction in hyperspectral imaging.

    PubMed

    Zabalza, Jaime; Ren, Jinchang; Ren, Jie; Liu, Zhe; Marshall, Stephen

    2014-07-10

    Presented in a three-dimensional structure called a hypercube, hyperspectral imaging suffers from a large volume of data and high computational cost for data analysis. To overcome such drawbacks, principal component analysis (PCA) has been widely applied for feature extraction and dimensionality reduction. However, a severe bottleneck is how to compute the PCA covariance matrix efficiently and avoid computational difficulties, especially when the spatial dimension of the hypercube is large. In this paper, structured covariance PCA (SC-PCA) is proposed for fast computation of the covariance matrix. In line with how spectral data is acquired in either the push-broom or tunable filter method, different implementation schemes of SC-PCA are presented. As the proposed SC-PCA can determine the covariance matrix from partial covariance matrices in parallel even without prior deduction of the mean vector, it facilitates real-time data analysis while the hypercube is acquired. This has significantly reduced the scale of required memory and also allows efficient onsite feature extraction and data reduction to benefit subsequent tasks in coding and compression, transmission, and analytics of hyperspectral data.
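    The key idea above is that the covariance can be assembled from per-block statistics without first computing the global mean. A minimal sketch of that accumulation (the block layout and band count are assumptions; the parallel scheduling and the push-broom/tunable-filter details of the paper are not reproduced):

      import numpy as np

      def partial_stats(block):
          """Raw second moments for one block of pixels, shape (n_pixels, n_bands),
          e.g. one push-broom line or one spectral frame reshaped to pixel vectors."""
          return block.T @ block, block.sum(axis=0), block.shape[0]

      def combine_covariance(partials):
          """Combine per-block statistics into the global covariance matrix
          without needing the global mean vector in advance."""
          S = sum(p[0] for p in partials)   # sum of outer products
          m = sum(p[1] for p in partials)   # sum of pixel vectors
          n = sum(p[2] for p in partials)   # total pixel count
          mean = m / n
          return S / n - np.outer(mean, mean)

      # Usage sketch: blocks could be processed independently (and in parallel) as acquired.
      rng = np.random.default_rng(0)
      blocks = [rng.normal(size=(500, 30)) for _ in range(8)]   # synthetic blocks, 30 bands
      cov = combine_covariance([partial_stats(b) for b in blocks])
      eigvals, eigvecs = np.linalg.eigh(cov)                    # PCA loadings from the covariance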

  7. Multi-channel EEG signal feature extraction and pattern recognition on horizontal mental imagination task of 1-D cursor movement for brain computer interface.

    PubMed

    Serdar Bascil, M; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2015-06-01

    Brain computer interfaces (BCIs), based on multi-channel electroencephalogram (EEG) signal processing, convert brain signal activity into machine control commands. They provide a new way of communicating with a computer by extracting electroencephalographic activity. This paper deals with the feature extraction and classification of horizontal mental task patterns for 1-D cursor movement from EEG signals. The hemispheric power changes are computed and compared in the alpha and beta frequency bands, and horizontal cursor control is extracted using only mental imagination of cursor movements. In the first stage, features are extracted with the well-known average signal power or power difference (alpha and beta) method. Principal component analysis is used for reducing the feature dimensions. All features are classified and the mental task patterns are recognized by three neural network classifiers (learning vector quantization, a multilayer neural network, and a probabilistic neural network), chosen because they give acceptably good results and have been used successfully in pattern recognition; performance is evaluated via a k-fold cross-validation technique.
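    A rough sketch of the band-power feature stage and the PCA reduction described above (the Welch PSD estimate, the channel count, the exact alpha/beta limits and the synthetic data are assumptions; the neural-network classifiers are not reproduced):

      import numpy as np
      from scipy.signal import welch
      from sklearn.decomposition import PCA

      def band_powers(eeg, fs, bands=((8, 13), (13, 30))):
          """Alpha/beta band power per channel from Welch power spectral densities.

          eeg: array of shape (n_channels, n_samples); returns one feature per (channel, band).
          """
          feats = []
          for ch in eeg:
              f, pxx = welch(ch, fs=fs, nperseg=min(256, len(ch)))
              for lo, hi in bands:
                  mask = (f >= lo) & (f < hi)
                  feats.append(pxx[mask].sum())   # summed PSD bins as a band-power proxy
          return np.asarray(feats)

      # Toy usage: 50 synthetic trials, 4 channels, then PCA to reduce the feature dimension.
      rng = np.random.default_rng(0)
      X = np.vstack([band_powers(rng.normal(size=(4, 1024)), fs=256) for _ in range(50)])
      X_reduced = PCA(n_components=3).fit_transform(X)
      print(X_reduced.shape)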

  8. Acoustic scaling of anisotropic flow in shape-engineered events: implications for extraction of the specific shear viscosity of the quark gluon plasma

    NASA Astrophysics Data System (ADS)

    Lacey, Roy A.; Reynolds, D.; Taranenko, A.; Ajitanand, N. N.; Alexander, J. M.; Liu, Fu-Hu; Gu, Yi; Mwai, A.

    2016-10-01

    It is shown that the acoustic scaling patterns of anisotropic flow for different event shapes at a fixed collision centrality (shape-engineered events), provide robust constraints for the event-by-event fluctuations in the initial-state density distribution from ultrarelativistic heavy ion collisions. The empirical scaling parameters also provide a dual-path method for extracting the specific shear viscosity (η/s)_QGP of the quark-gluon plasma (QGP) produced in these collisions. A calibration of these scaling parameters via detailed viscous hydrodynamical model calculations gives (η/s)_QGP estimates for the plasma produced in collisions of Au + Au (√s_NN = 0.2 TeV) and Pb + Pb (√s_NN = 2.76 TeV). The estimates are insensitive to the initial-state geometry models considered.

  9. Hyperspectral Feature Detection Onboard the Earth Observing One Spacecraft using Superpixel Segmentation and Endmember Extraction

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Bornstein, Benjamin; Bue, Brian D.; Tran, Daniel Q.; Chien, Steve A.; Castano, Rebecca

    2012-01-01

    We present a demonstration of onboard hyperspectral image processing with the potential to reduce mission downlink requirements. The system detects spectral endmembers and then uses them to map units of surface material. This summarizes the content of the scene, reveals spectral anomalies warranting fast response, and reduces data volume by two orders of magnitude. We have integrated this system into the Autonomous Sciencecraft Experiment for operational use onboard the Earth Observing One (EO-1) spacecraft. The system does not require prior knowledge about spectra of interest. We report on a series of trial overflights in which identical spacecraft commands are effective for autonomous spectral discovery and mapping for varied target features, scenes and imaging conditions.

  10. Method for extracting forward acoustic wave components from rotating microphone measurements in the inlets of turbofan engines

    NASA Technical Reports Server (NTRS)

    Cicon, D. E.; Sofrin, T. G.

    1995-01-01

    This report describes a procedure for enhancing the use of the basic rotating microphone system so as to determine the forward propagating mode components of the acoustic field in the inlet duct at the microphone plane in order to predict more accurate far-field radiation patterns. In addition, a modification was developed to obtain, from the same microphone readings, the forward acoustic modes generated at the fan face, which is generally some distance downstream of the microphone plane. Both these procedures employ computer-simulated calibrations of sound propagation in the inlet duct, based upon the current radiation code. These enhancement procedures were applied to previously obtained rotating microphone data for the 17-inch ADP fan. The forward mode components at the microphone plane were obtained and were used to compute corresponding far-field directivities. The second main task of the program involved finding the forward wave modes generated at the fan face in terms of the same total radial mode structure measured at the microphone plane. To obtain satisfactory results with the ADP geometry it was necessary to limit consideration to the propagating modes. Sensitivity studies were also conducted to establish guidelines for use in other fan configurations.

  11. Structural features and antitumor activity of a novel polysaccharide from alkaline extract of Phellinus linteus mycelia.

    PubMed

    Pei, Juan-Juan; Wang, Zhen-Bin; Ma, Hai-Le; Yan, Jing-Kun

    2015-01-22

    A novel high molecular weight polysaccharide (PL-N1) was isolated from alkaline extract of the cultured Phellinus linteus mycelia. The weight average molecular weight (Mw) of PL-N1 was estimated at 343,000kDa. PL-N1 comprised arabinose, xylose, glucose, and galactose in the molar ratio of 4.0:6.7:1.3:1.0. The chemical structure of PL-N1 was investigated by FTIR and NMR spectroscopies and methylation analysis. The results showed that the backbone of PL-N1 comprised (1→4)-linked β-D-xylopyranosyl residues, (1→2)-linked α-D-xylopyranosyl residues, (1→4)-linked α-D-glucopyranosyl residues, (1→5)-linked β-D-arabinofuranosyl residues, (1→4)-linked β-D-xylopyranosyl residues which branched at O-2, and (1→4)-linked β-D-galactopyranosyl residues which branched at O-6. The branches consisted of (1→)-linked α-D-arabinofuranosyl residues. Antitumor activity assay in vitro showed that PL-N1 could inhibit the growth of HepG2 cells to a certain extent in a dose-dependent manner. Thus, PL-N1 may be developed as a potential, natural antitumor agent and functional food.

  12. Gait feature extraction in Parkinson's disease using low-cost accelerometers.

    PubMed

    Stamatakis, Julien; Crémers, Julien; Maquet, Didier; Macq, Benoit; Garraux, Gaëtan

    2011-01-01

    The clinical hallmarks of Parkinson's disease (PD) are movement poverty and slowness (i.e. bradykinesia), muscle rigidity, limb tremor and gait disturbances. Parkinsonian gait includes slowness, shuffling, short steps, freezing of gait (FoG) and/or asymmetries in gait. There are currently no validated clinical instruments or devices that allow a full characterization of gait disturbances in PD. As a step towards this goal, a system based on four accelerometers is proposed to increase the number of parameters that can be extracted to characterize parkinsonian gait disturbances such as FoG or gait asymmetries. After developing the hardware, an algorithm was developed that automatically epochs the signals on a stride-by-stride basis and quantifies, among others, the gait velocity, the stride time, the stance and swing phases, the single and double support phases and the maximum acceleration at toe-off, as validated by visual inspection of video recordings during the task. The results obtained in a PD patient and a healthy volunteer are presented. The FoG detection will be improved using time-frequency analysis, and the system is about to be validated against a state-of-the-art 3D movement analysis system.

  13. Computer-aided diagnosis of interstitial lung disease: a texture feature extraction and classification approach

    NASA Astrophysics Data System (ADS)

    Vargas-Voracek, Rene; McAdams, H. Page; Floyd, Carey E., Jr.

    1998-06-01

    An approach for the classification of normal or abnormal lung parenchyma from selected regions of interest (ROIs) of chest radiographs is presented for computer-aided diagnosis of interstitial lung disease (ILD). The proposed approach uses a feed-forward neural network to classify each ROI based on a set of isotropic texture measures obtained from the joint grey level distribution of pairs of pixels separated by a specific distance. Two hundred ROIs, each 64 x 64 pixels in size (11 x 11 mm), were extracted from digitized chest radiographs for testing. Diagnostic performance was evaluated with the leave-one-out method. Classification of independent ROIs achieved a sensitivity of 90% and a specificity of 84%, with an area under the receiver operating characteristic curve of 0.85. The diagnosis for each patient was correct for all cases when a 'majority vote' criterion for the classification of the corresponding ROIs was applied to issue a normal or ILD patient classification. The proposed approach is a simple, fast, and consistent method for computer-aided diagnosis of ILD with very good performance. Further research will include additional cases, including differential diagnosis among ILD manifestations.
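    The texture measures described (statistics of the joint grey-level distribution of pixel pairs at a given separation) are what grey-level co-occurrence matrices capture. A hedged sketch using scikit-image (the library choice, the quantization to 64 levels, the distances and the averaged properties are assumptions; the feed-forward neural network stage is not reproduced):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_texture_features(roi, distances=(1, 2, 4), levels=64):
          """Co-occurrence texture measures for one ROI, averaged over four directions
          to approximate isotropic pair statistics."""
          roi_q = (roi.astype(float) / max(roi.max(), 1) * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(roi_q, distances=list(distances),
                              angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                              levels=levels, symmetric=True, normed=True)
          props = ["contrast", "homogeneity", "energy", "correlation"]
          return np.hstack([graycoprops(glcm, p).mean(axis=1) for p in props])

      # Toy usage on a random 64 x 64 "ROI".
      rng = np.random.default_rng(0)
      roi = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
      print(glcm_texture_features(roi))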

  14. Analysis of Unresolved Spectral Infrared Signature for the Extraction of Invariant Features

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Payne, T.; Wilhelm, S.; Gregory, S.; Skinner, M.; Rudy, R.; Russell, R.; Brown, J.; Dao, P.

    2010-09-01

    This paper demonstrates a simple analytical technique for extraction of spectral radiance values for the solar panel and body from an unresolved spectral infrared signature of 3-axis stabilized low-earth orbit (LEO) satellites. It uses data collected by The Aerospace Corporation’s Broad-band Array Spectrograph System (BASS) instrument at the Air Force Maui Optical and Supercomputing (AMOS) site. The observation conditions were such that the signatures were due to the emissive phenomenology and contribution of earthshine was negligible. The analysis is based on a two-facet orientation model of the satellite. This model captures the basic, known behavior of the satellite body and its solar panels. One facet points to nadir and the second facet tracks the sun. The facet areas are unknown. Special conditions are determined on the basis of observational geometry that allows separation of the spectral radiance values of the solar panel and body. These values remain unchanged (i.e., are invariant) under steady illumination conditions even if the signature appears different from one observation to another. In addition, they provide information on the individual spectral makeup of the satellite solar panel and body materials.

  15. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy guidance, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging, due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for an automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of the Gabor wavelet frequencies. High precision in detecting the needle voxels leads to a robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of the needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a gain of 40%), where better robustness and confidence were confirmed in the practical experiments.

  16. Acoustic Similarity and Dichotic Listening.

    ERIC Educational Resources Information Center

    Benson, Peter

    1978-01-01

    An experiment tests conjectures that right ear advantage (REA) has an auditory origin in competition or interference between acoustically similar stimuli and that feature-sharing effect (FSE) has its origin in assignment of features of phonetically similar stimuli. No effect on the REA for acoustic similarity, and a clear effect of acoustic…

  17. On the use of wavelet for extracting feature patterns from Multitemporal google earth satellite data sets

    NASA Astrophysics Data System (ADS)

    Lasaponara, R.

    2012-04-01

    The great amount of multispectral VHR satellite imagery, some of it available free of charge in Google Earth, has opened new strategic challenges in the field of remote sensing for archaeological studies. These challenges substantially deal with: (i) exploiting satellite data as much as possible, (ii) setting up effective and reliable automatic and/or semiautomatic data processing strategies, and (iii) integration with other data sources, from documentary resources to traditional ground survey, historical documentation, geophysical prospection, etc. VHR satellites provide high-resolution data which can improve knowledge of past human activities, providing precious qualitative and quantitative information developed to such an extent that they currently share many of the physical characteristics of aerial imagery. This makes them ideal for investigations ranging from a local to a regional scale (see, for example, Lasaponara and Masini 2006a,b, 2007a, 2011; Masini and Lasaponara 2006, 2007; Sparavigna, 2010). Moreover, satellite data are still the only data source for research performed in areas where aerial photography is restricted for military or political reasons. Among the main advantages of using satellite remote sensing compared to traditional field archaeology, herein we briefly focus on the use of wavelet data processing for enhancing Google Earth satellite data, with particular reference to multitemporal datasets. Study areas selected from Southern Italy, the Middle East and South America are presented and discussed. The results obtained point out that automatic image enhancement can be successfully applied as a first step of supervised classification and intelligent data analysis for the semiautomatic identification of features of archaeological interest. Reference: Lasaponara R, Masini N (2006a) On the potential of panchromatic and multispectral Quickbird data for archaeological prospection. Int J Remote Sens 27: 3607-3614. Lasaponara R

  18. On the use of wavelet for extracting feature patterns from Multitemporal google earth satellite data sets

    NASA Astrophysics Data System (ADS)

    Lasaponara, R.

    2012-04-01

    The great amount of multispectral VHR satellite imagery, some of it available free of charge in Google Earth, has opened new strategic challenges in the field of remote sensing for archaeological studies. These challenges substantially deal with: (i) exploiting satellite data as much as possible, (ii) setting up effective and reliable automatic and/or semiautomatic data processing strategies, and (iii) integration with other data sources, from documentary resources to traditional ground survey, historical documentation, geophysical prospection, etc. VHR satellites provide high-resolution data which can improve knowledge of past human activities, providing precious qualitative and quantitative information developed to such an extent that they currently share many of the physical characteristics of aerial imagery. This makes them ideal for investigations ranging from a local to a regional scale (see, for example, Lasaponara and Masini 2006a,b, 2007a, 2011; Masini and Lasaponara 2006, 2007; Sparavigna, 2010). Moreover, satellite data are still the only data source for research performed in areas where aerial photography is restricted for military or political reasons. Among the main advantages of using satellite remote sensing compared to traditional field archaeology, herein we briefly focus on the use of wavelet data processing for enhancing Google Earth satellite data, with particular reference to multitemporal datasets. Study areas selected from Southern Italy, the Middle East and South America are presented and discussed. The results obtained point out that automatic image enhancement can be successfully applied as a first step of supervised classification and intelligent data analysis for the semiautomatic identification of features of archaeological interest. Reference: Lasaponara R, Masini N (2006a) On the potential of panchromatic and multispectral Quickbird data for archaeological prospection. Int J Remote Sens 27: 3607-3614. Lasaponara R

  19. GEOEYE-1 Satellite Stereo-Pair DEM Extraction Using Scale-Invariant Feature Transform on a Parallel Processing Platform

    NASA Astrophysics Data System (ADS)

    Daliakopoulos, Ioannis; Tsanis, Ioannis

    2013-04-01

    A module for Digital Elevation Model (DEM) extraction from Very High Resolution (VHR) satellite stereo-pair imagery was developed. A procedure for parallel processing of cascading image tiles is used to handle the large dataset requirements of VHR satellite imagery. The Scale-Invariant Feature Transform (SIFT) algorithm is used to detect potentially homogeneous features in the members of the stereo-pair. The resulting feature pairs are filtered using the RANdom SAmple Consensus (RANSAC) algorithm with a variable distance threshold. Finally, homogeneous pairs are converted to point cloud ground coordinates for DEM generation. The module is tested with a 0.5 m x 0.5 m GeoEye-1 stereo-pair acquired over an area of 25 km² on the island of Crete, Greece. A sensitivity analysis is performed to determine the optimum module parameterization. The criterion of average point spacing irregularity is introduced to evaluate the quality and assess the effective resolution of the produced DEMs. The resulting 1.5 m x 1.5 m DEM has superior detail over the 2 m and 5 m DEMs used as reference and yields a Root Mean Square Error (RMSE) of about 1 m compared to ground truth measurements.
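    A hedged sketch of the SIFT matching and RANSAC filtering stage described above, using OpenCV (the detector settings, the Lowe ratio test, the fundamental-matrix RANSAC model and the threshold are assumptions; the tiling, geo-referencing and DEM interpolation steps of the module are not reproduced):

      import cv2
      import numpy as np

      def match_stereo_pair(img_left, img_right, ransac_thresh=1.0):
          """SIFT keypoint matching filtered with RANSAC on an epipolar model.

          img_left, img_right: grayscale uint8 images (e.g. corresponding stereo tiles).
          Returns the inlier homologous point coordinates in each image.
          """
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(img_left, None)
          k2, d2 = sift.detectAndCompute(img_right, None)
          matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
          good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
          pts1 = np.float32([k1[m.queryIdx].pt for m in good])
          pts2 = np.float32([k2[m.trainIdx].pt for m in good])
          F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, ransac_thresh, 0.99)
          inliers = mask.ravel().astype(bool)
          return pts1[inliers], pts2[inliers]   # candidate points for ground-coordinate conversion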

  20. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures

    PubMed Central

    Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be skilful and unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems have been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within the category of the previously mentioned ones. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve a system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback.
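    The CSP-plus-SVM scheme mentioned above can be sketched as follows (a minimal, non-adaptive illustration: the covariance normalization, the number of filter pairs, the log-variance features and the synthetic epochs are assumptions; the online adaptation, flexible training and threshold logic of the paper are not reproduced):

      import numpy as np
      from scipy.linalg import eigh
      from sklearn.svm import SVC

      def csp_filters(epochs_a, epochs_b, n_pairs=2):
          """Common spatial pattern filters for two motor-imagery classes.
          epochs_*: arrays of shape (n_trials, n_channels, n_samples)."""
          def avg_cov(epochs):
              return np.mean([x @ x.T / np.trace(x @ x.T) for x in epochs], axis=0)
          Ca, Cb = avg_cov(epochs_a), avg_cov(epochs_b)
          vals, vecs = eigh(Ca, Ca + Cb)                       # generalized eigenvalue problem
          order = np.argsort(vals)
          picks = np.r_[order[:n_pairs], order[-n_pairs:]]     # most discriminative filters
          return vecs[:, picks].T                              # (2*n_pairs, n_channels)

      def csp_features(epochs, W):
          """Normalized log-variance of the spatially filtered epochs."""
          Z = np.einsum("fc,tcs->tfs", W, epochs)
          var = Z.var(axis=2)
          return np.log(var / var.sum(axis=1, keepdims=True))

      # Toy usage with synthetic epochs, followed by the SVM classification stage.
      rng = np.random.default_rng(0)
      ep_a = rng.normal(size=(30, 8, 256))
      ep_b = rng.normal(size=(30, 8, 256))
      W = csp_filters(ep_a, ep_b)
      X = np.vstack([csp_features(ep_a, W), csp_features(ep_b, W)])
      y = np.r_[np.zeros(30), np.ones(30)]
      clf = SVC(kernel="linear").fit(X, y)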

  1. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures

    PubMed Central

    Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be skilful and unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems have been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within the category of the previously mentioned ones. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve a system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback. PMID:27635129

  2. Rapid discrimination and feature extraction of three Chamaecyparis species by static-HS/GC-MS.

    PubMed

    Chen, Ying-Ju; Lin, Chun-Ya; Cheng, Sen-Sung; Chang, Shang-Tzen

    2015-01-28

    This study aimed to develop a rapid and accurate analytical method for discriminating three Chamaecyparis species (C. formosensis, C. obtusa, and C. obtusa var. formosana) that could not be easily distinguished by volatile compounds. A total of 23 leaf samples from three species were analyzed by static-headspace (static-HS) coupled with gas chromatography-mass spectrometry (GC-MS). The static-HS procedure, whose experimental parameters were properly optimized, yielded a high Pearson correlation-based similarity between essential oil and VOC composition (r = 0.555-0.999). Thirty-six major constituents were identified; along with the results of cluster analysis (CA), a large variation in contents among the three different species was observed. Principal component analysis (PCA) methods illustrated graphically the relationships between characteristic components and tree species. It was clearly demonstrated that the static-HS-based procedure enhanced greatly the speed of precise analysis of chemical fingerprint in small sample amounts, thus providing a fast and reliable tool for the prediction of constituent characteristics in essential oil, and also offering good opportunities for studying the role of these feature compounds in chemotaxonomy or ecophysiology. PMID:25590241

  3. Rapid discrimination and feature extraction of three Chamaecyparis species by static-HS/GC-MS.

    PubMed

    Chen, Ying-Ju; Lin, Chun-Ya; Cheng, Sen-Sung; Chang, Shang-Tzen

    2015-01-28

    This study aimed to develop a rapid and accurate analytical method for discriminating three Chamaecyparis species (C. formosensis, C. obtusa, and C. obtusa var. formosana) that could not be easily distinguished by volatile compounds. A total of 23 leaf samples from three species were analyzed by static-headspace (static-HS) coupled with gas chromatography-mass spectrometry (GC-MS). The static-HS procedure, whose experimental parameters were properly optimized, yielded a high Pearson correlation-based similarity between essential oil and VOC composition (r = 0.555-0.999). Thirty-six major constituents were identified; along with the results of cluster analysis (CA), a large variation in contents among the three different species was observed. Principal component analysis (PCA) methods illustrated graphically the relationships between characteristic components and tree species. It was clearly demonstrated that the static-HS-based procedure enhanced greatly the speed of precise analysis of chemical fingerprint in small sample amounts, thus providing a fast and reliable tool for the prediction of constituent characteristics in essential oil, and also offering good opportunities for studying the role of these feature compounds in chemotaxonomy or ecophysiology.

  4. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures.

    PubMed

    Mondini, Valeria; Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be skilful and unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems have been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within the category of the previously mentioned ones. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve a system's "flexibility" and "customizability," namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback.

  5. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures.

    PubMed

    Mondini, Valeria; Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be skilful and unintuitive and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems have been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within the category of the previously mentioned ones. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve a system's "flexibility" and "customizability," namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback. PMID:27635129

  6. Study on image feature extraction and classification for human colorectal cancer using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Wei; Yang, Shan-Yi; Huang, Wei-Cheng; Chiu, Han-Mo; Lu, Chih-Wei

    2011-06-01

    Most colorectal cancers arise from adenomatous polyps. Adenomatous lesions have a well-documented relationship to colorectal cancer in previous studies. Thus, detecting the morphological changes between polyp and tumor can allow early diagnosis of colorectal cancer and simultaneous removal of lesions. Optical coherence tomography (OCT) has several advantages, including high resolution and non-invasive cross-sectional imaging in vivo. In this study, we investigated the relationship between B-scan OCT image features and the histology of malignant human colorectal tissues, as well as between the en-face OCT images and the endoscopic image pattern. The in-vitro experiments were performed with a swept-source optical coherence tomography (SS-OCT) system; the swept source has a center wavelength of 1310 nm and a 160 nm wavelength scanning range, which produced 6 μm axial resolution. In this study, the en-face images were reconstructed by integrating the axial values in the 3D OCT images. The reconstructed en-face images show the same roundish or gyrus-like patterns as the endoscopy images. The pattern of the en-face images relates to the stage of colon cancer. An endoscopic OCT technique would provide three-dimensional imaging and rapidly reconstructed en-face images, which can increase the speed of colon cancer diagnosis. Our results indicate a great potential for early detection of colorectal adenomas using OCT imaging.
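    The en-face reconstruction described (integrating the axial values of the 3D OCT data) is essentially a depth projection. A minimal sketch (the axis ordering and the optional depth window are assumptions):

      import numpy as np

      def enface_projection(volume, z_range=None):
          """En-face image obtained by integrating A-scan intensities along depth.

          volume: 3-D OCT data ordered (x, y, z); z_range optionally restricts the depth window.
          """
          if z_range is not None:
              volume = volume[:, :, z_range[0]:z_range[1]]
          return volume.sum(axis=2)

      # Toy usage on a synthetic volume.
      rng = np.random.default_rng(0)
      enface = enface_projection(rng.random((128, 128, 300)), z_range=(50, 150))
      print(enface.shape)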

  7. Segmentation and feature extraction of cervical spine x-ray images

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1999-05-01

    As part of an R&D project in mixed text/image database design, the National Library of Medicine has archived a collection of 17,000 digitized x-ray images of the cervical and lumbar spine which were collected as part of the second National Health and Nutrition Examination Survey (NHANES II). To make this image data available and usable to a wide audience, we are investigating techniques for indexing the image content by automated or semi-automated means. Indexing of the images by features of interest to researchers in spine disease and structure requires effective segmentation of the vertebral anatomy. This paper describes work in progress toward this segmentation of the cervical spine images into anatomical components of interest, including anatomical landmarks for vertebral location, and segmentation and identification of individual vertebrae. Our work includes developing a reliable method for automatically fixing an anatomy-based coordinate system in the images, and work to adaptively threshold the images, using methods previously applied by researchers in cardioangiography. We describe the motivation for our work and present our current results in both areas.

  8. A Method of Three-Dimensional Recording of Mandibular Movement Based on Two-Dimensional Image Feature Extraction

    PubMed Central

    Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Background and Objective: To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods: A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. A plaster model of a subject's upper and lower dentition was made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint with a detection target marked with intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Comparisons between the coordinate values and the actual values of the 30 intersections on the detection target were then analyzed using paired t-tests. Results: The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate values and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The paired t-test results showed that the differences in the coordinate values of the 30 cross points were not statistically significant (P > 0.05). Conclusions: Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording
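    The core geometric step of such a binocular system, recovering a 3-D target coordinate from its two image positions, can be sketched with OpenCV triangulation (a generic illustration; the projection matrices here stand in for a real stereo calibration and are not the paper's setup):

      import cv2
      import numpy as np

      def triangulate_target(P1, P2, pt1, pt2):
          """Recover a 3-D point from its pixel coordinates in two calibrated cameras.
          P1, P2: 3x4 projection matrices; pt1, pt2: (x, y) image coordinates of the target."""
          X = cv2.triangulatePoints(P1, P2,
                                    np.float32(pt1).reshape(2, 1),
                                    np.float32(pt2).reshape(2, 1))
          return (X[:3] / X[3]).ravel()   # homogeneous to Euclidean coordinates

      # Toy example with two hypothetical cameras offset along x and identity intrinsics.
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)
      P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]]).astype(np.float32)
      point = np.array([0.5, 0.2, 4.0])
      uv1 = P1 @ np.r_[point, 1.0]; uv1 = uv1[:2] / uv1[2]
      uv2 = P2 @ np.r_[point, 1.0]; uv2 = uv2[:2] / uv2[2]
      print(triangulate_target(P1, P2, uv1, uv2))   # approximately [0.5, 0.2, 4.0]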

  9. Acoustic Remote Sensing

    NASA Astrophysics Data System (ADS)

    Dowling, David R.; Sabra, Karim G.

    2015-01-01

    Acoustic waves carry information about their source and collect information about their environment as they propagate. This article reviews how these information-carrying and -collecting features of acoustic waves that travel through fluids can be exploited for remote sensing. In nearly all cases, modern acoustic remote sensing involves array-recorded sounds and array signal processing to recover multidimensional results. The application realm for acoustic remote sensing spans an impressive range of signal frequencies (10⁻² to 10⁷ Hz) and distances (10⁻² to 10⁷ m) and involves biomedical ultrasound imaging, nondestructive evaluation, oil and gas exploration, military systems, and Nuclear Test Ban Treaty monitoring. In the past two decades, approaches have been developed to robustly localize remote sources; remove noise and multipath distortion from recorded signals; and determine the acoustic characteristics of the environment through which the sound waves have traveled, even when the recorded sounds originate from uncooperative sources or are merely ambient noise.

  10. An Integrated Front-End Readout And Feature Extraction System for the BaBar Drift Chamber

    SciTech Connect

    Zhang, Jinlong; /Colorado U.

    2006-08-10

    The BABAR experiment has been operating at SLAC's PEP-II asymmetric B-Factory since 1999. The accelerator has achieved more than three times its original design luminosity of 3 × 10³³ cm⁻² s⁻¹, with plans for an additional factor of three in the next two years. To meet the experiment's performance requirements in the face of significantly higher trigger and background rates, the drift chamber's front-end readout system has been redesigned around the Xilinx Spartan 3 FPGA. The new system implements analysis and feature-extraction of digitized waveforms in the front-end, reducing the data bandwidth required by a factor of four.

  11. Sensors Fusion based Online Mapping and Features Extraction of Mobile Robot in the Road Following and Roundabout

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed A. H.; Mailah, Musa; Yussof, Wan Azhar B.; Hamedon, Zamzuri B.; Yussof, Zulkifli B.; Majeed, Anwar P. P.

    2016-02-01

    A road feature extraction based mapping system using a sensor fusion technique for mobile robot navigation in road environments is presented in this paper. The online mapping of the mobile robot is performed continuously in the road environment to find the road properties that enable the robot to move from a certain start position to a pre-determined goal while discovering and detecting the roundabout. The sensor fusion, involving a laser range finder, camera and odometry installed on a new platform, is used to find the path of the robot and localize it within its environment. The local maps are developed using the camera and laser range finder to recognize road border parameters such as road width, curbs and roundabouts. Results show the capability of the robot, with the proposed algorithms, to effectively identify the road environment and build a local map for road following and roundabout detection.

  12. Automated feature extraction for the classification of human in vivo 13C NMR spectra using statistical pattern recognition and wavelets.

    PubMed

    Tate, A R; Watson, D; Eglen, S; Arvanitis, T N; Thomas, E L; Bell, J D

    1996-06-01

    If magnetic resonance spectroscopy (MRS) is to become a useful tool in clinical medicine, it will be necessary to find reliable methods for analyzing and classifying MRS data. Automated methods are desirable because they can remove user bias and can deal with large amounts of data, allowing the use of all the available information. In this study, techniques for automatically extracting features for the classification of MRS in vivo data are investigated. Among the techniques used were wavelets, principal component analysis, and linear discriminant function analysis. These techniques were tested on a set of 75 in vivo 13C spectra of human adipose tissue from subjects from three different dietary groups (vegan, vegetarian, and omnivore). It was found that it was possible to assign automatically 94% of the vegans and omnivores to their correct dietary groups, without the need for explicit identification or measurement of peaks.

  13. Comparison of sEMG-Based Feature Extraction and Motion Classification Methods for Upper-Limb Movement

    PubMed Central

    Guo, Shuxiang; Pang, Muye; Gao, Baofeng; Hirata, Hideyuki; Ishihara, Hidenori

    2015-01-01

    The surface electromyography (sEMG) technique is proposed for muscle activation detection and intuitive control of prostheses or robot arms. Motion recognition is widely used to map sEMG signals to the target motions. Among the main factors preventing the implementation of this kind of method in real-time applications are the unsatisfactory motion recognition rate and the computation time. The purpose of this paper is to compare eight combinations of four feature extraction methods (Root Mean Square (RMS), Detrended Fluctuation Analysis (DFA), Weight Peaks (WP), and Muscular Model (MM)) and two classifiers (Neural Networks (NN) and Support Vector Machine (SVM)) for the task of mapping sEMG signals to eight upper-limb motions, to clarify the relation between these methods and propose a suitable combination. Seven subjects participated in the experiment and six muscles of the upper limb were selected to record sEMG signals. The experimental results showed that the NN classifier obtained the highest recognition accuracy rate (88.7%) during the training process, while the SVM performed better in real-time experiments (85.9%). For time consumption, SVM took less time than NN during the training process but needed more time for real-time computation. Among the four feature extraction methods, WP had the highest recognition rate in the training process (97.7%), while MM performed best during real-time tests (94.3%). The combination of MM and NN is recommended for strict real-time applications, while a combination of MM and SVM is more suitable when time consumption is not a key requirement. PMID:25894941
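
    Of the feature extraction methods compared above, the root mean square is the simplest to state: the RMS of each channel over consecutive windows forms the feature vector fed to the classifier. The sketch below pairs windowed RMS features with an SVM on synthetic six-channel recordings; shapes, window length, and labels are illustrative assumptions, not the authors' implementation.

      # Sketch: windowed RMS features from multi-channel sEMG classified with an SVM.
      # Shapes, window length, and labels are illustrative, not the study's settings.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      n_trials, n_channels, n_samples = 160, 6, 1000     # six muscles, synthetic data
      emg = rng.normal(size=(n_trials, n_channels, n_samples))
      motions = rng.integers(0, 8, size=n_trials)        # eight upper-limb motions

      def rms_features(trial, win=200):
          """Root mean square of each channel over consecutive windows."""
          n_win = trial.shape[1] // win
          segs = trial[:, :n_win * win].reshape(trial.shape[0], n_win, win)
          return np.sqrt((segs ** 2).mean(axis=2)).ravel()

      X = np.array([rms_features(t) for t in emg])
      X_tr, X_te, y_tr, y_te = train_test_split(X, motions, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))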

  14. Identification of lesion images from gastrointestinal endoscope based on feature extraction of combinational methods with and without learning process.

    PubMed

    Liu, Ding-Yun; Gan, Tao; Rao, Ni-Ni; Xing, Yao-Wen; Zheng, Jie; Li, Sang; Luo, Cheng-Si; Zhou, Zhong-Jun; Wan, Yong-Li

    2016-08-01

    The gastrointestinal endoscopy in this study refers to conventional gastroscopy and wireless capsule endoscopy (WCE). Both of these techniques produce a large number of images in each diagnosis, and detecting lesions in these images by hand is time consuming and inaccurate. This study designed a new computer-aided method to detect lesion images. We initially designed an algorithm named joint diagonalisation principal component analysis (JDPCA), which involves no approximation, iteration, or matrix inversion procedures. Thus, JDPCA has a low computational complexity and is suitable for dimension reduction of gastrointestinal endoscopic images. Then, a novel image feature extraction method was established by combining the machine learning algorithm based on JDPCA with a conventional feature extraction algorithm that requires no learning. Finally, a new computer-aided method is proposed to identify the gastrointestinal endoscopic images containing lesions. Clinical gastroscopic images and WCE images containing lesions of early upper digestive tract cancer and small intestinal bleeding, comprising a total of 1330 images from 291 patients, were used to validate the proposed method. The experimental results show that, for the detection of early oesophageal cancer images, early gastric cancer images, and small intestinal bleeding images, the mean accuracies of the proposed method were 90.75%, 90.75% and 94.34%, with standard deviations (SDs) of 0.0426, 0.0334 and 0.0235, respectively. The areas under the curves (AUCs) were 0.9471, 0.9532 and 0.9776, with SDs of 0.0296, 0.0285 and 0.0172, respectively. Compared with related traditional methods, our method showed better performance. It may therefore provide worthwhile guidance for improving the efficiency and accuracy of gastrointestinal disease diagnosis and shows good prospects for clinical application.

  15. Relative brain signature: a population-based feature extraction procedure to identify functional biomarkers in the brain of alcoholics

    PubMed Central

    Karamzadeh, Nader; Ardeshirpour, Yasaman; Kellman, Matthew; Chowdhry, Fatima; Anderson, Afrouz; Chorlian, David; Wegman, Edward; Gandjbakhche, Amir

    2015-01-01

    Background A novel feature extraction technique, the Relative-Brain-Signature (RBS), which characterizes a subject's relationship to populations with distinctive neuronal activity, is presented. The proposed method transforms a set of electroencephalography (EEG) time series in a high dimensional space to a space of fewer dimensions by projecting the time series onto orthogonal subspaces. Methods We apply our technique to an EEG data set of 77 abstinent alcoholics and 43 control subjects. To characterize each subject's relationship to the alcoholic and control populations, one RBS vector is constructed with respect to the alcoholic population and one with respect to the control population. We used the extracted RBS vectors to identify functional biomarkers over the brain of alcoholics. To achieve this goal, a classification algorithm was used to categorize subjects into alcoholics and controls, which resulted in 78% accuracy. Results and Conclusions Using the results of the classification, regions with distinctive functionality in alcoholic subjects are detected. These affected regions, with respect to their spatial extent, are the frontal, anterior frontal, centro-parietal, parieto-occipital, and occipital lobes. The distribution of these regions over the scalp indicates that the impact of alcohol on the cerebral cortex of alcoholics is spatially diffuse. Our finding suggests that these regions engage more of the right hemisphere than the left hemisphere of the alcoholics' brain. PMID:26221569
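
    The abstract describes the projection step only at a high level. One plausible, loosely sketched reading is to build an orthonormal basis from each population's recordings via a singular value decomposition and keep a subject's per-channel projection energies as the signature vector; the sketch below follows that reading with synthetic data and is not the authors' exact RBS construction.

      # Loose sketch of one reading of the projection step: build an orthonormal basis
      # from each population's recordings via an SVD and keep a subject's per-channel
      # projection energies as the signature. Not the authors' exact RBS construction.
      import numpy as np

      def population_basis(population_eeg, k=10):
          """population_eeg: (n_recordings, n_samples) -> top-k orthonormal components."""
          _, _, Vh = np.linalg.svd(population_eeg, full_matrices=False)
          return Vh[:k]

      def signature(subject_eeg, basis):
          """subject_eeg: (n_channels, n_samples) -> per-channel projection energies."""
          coeffs = subject_eeg @ basis.T                # coefficients in the population subspace
          return (coeffs ** 2).sum(axis=1)

      rng = np.random.default_rng(7)
      alcoholic_pop = rng.normal(size=(77, 256))        # synthetic stand-ins
      control_pop = rng.normal(size=(43, 256))
      subject = rng.normal(size=(64, 256))              # one 64-channel recording

      rbs = np.concatenate([signature(subject, population_basis(alcoholic_pop)),
                            signature(subject, population_basis(control_pop))])
      print(rbs.shape)                                  # feature vector for a downstream classifier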

  16. Quantitative analysis of ex vivo colorectal epithelium using an automated feature extraction algorithm for microendoscopy image data.

    PubMed

    Prieto, Sandra P; Lai, Keith K; Laryea, Jonathan A; Mizell, Jason S; Muldoon, Timothy J

    2016-04-01

    Qualitative screening for colorectal polyps via fiber bundle microendoscopy imaging has shown promising results, with studies reporting high rates of sensitivity and specificity, as well as low interobserver variability with trained clinicians. A quantitative image quality control and image feature extraction algorithm (QFEA) was designed to lessen the burden of training and provide objective data for improved clinical efficacy of this method. After a quantitative image quality control step, QFEA extracts field-of-view area, crypt area, crypt circularity, and crypt number per image. To develop and validate this QFEA, a training set of microendoscopy images was collected from freshly resected porcine colon epithelium. The algorithm was then further validated on ex vivo image data collected from eight human subjects, selected from clinically normal appearing regions distant from grossly visible tumor in surgically resected colorectal tissue. QFEA has proven flexible in application to both mosaics and individual images, and its automated crypt detection sensitivity ranges from 71 to 94% despite intensity and contrast variation within the field of view. It also demonstrates the ability to detect and quantify differences in grossly normal regions among different subjects, suggesting the potential efficacy of this approach in detecting occult regions of dysplasia. PMID:27335893
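
    The per-image quantities named above (crypt count, crypt area, crypt circularity) are straightforward to compute once crypts have been segmented. The sketch below uses scikit-image region properties on a toy image, with a simple Otsu threshold standing in for the paper's quality-control and crypt-detection steps.

      # Sketch of per-image crypt metrics (count, mean area, mean circularity) from a
      # segmented mask; Otsu thresholding stands in for the paper's detection pipeline.
      import numpy as np
      from skimage import filters, measure

      def crypt_metrics(image):
          """Return crypt count, mean area, and mean circularity (4*pi*A / P^2)."""
          mask = image > filters.threshold_otsu(image)
          props = measure.regionprops(measure.label(mask))
          if not props:
              return {"crypt_count": 0, "mean_area": 0.0, "mean_circularity": 0.0}
          areas = np.array([p.area for p in props], dtype=float)
          perims = np.array([p.perimeter for p in props], dtype=float)
          circ = 4.0 * np.pi * areas / np.maximum(perims, 1e-9) ** 2
          return {"crypt_count": len(props),
                  "mean_area": float(areas.mean()),
                  "mean_circularity": float(circ.mean())}

      # Toy image: a few bright blobs standing in for crypts.
      yy, xx = np.ogrid[:200, :200]
      img = sum(np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 8.0 ** 2))
                for cy, cx in [(50, 50), (120, 80), (150, 160)])
      print(crypt_metrics(img))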

  17. Underwater acoustic omnidirectional absorber

    NASA Astrophysics Data System (ADS)

    Naify, Christina J.; Martin, Theodore P.; Layman, Christopher N.; Nicholas, Michael; Thangawng, Abel L.; Calvo, David C.; Orris, Gregory J.

    2014-02-01

    Gradient index media, which are designed by varying local element properties in given geometry, have been utilized to manipulate acoustic waves for a variety of devices. This study presents a cylindrical, two-dimensional acoustic "black hole" design that functions as an omnidirectional absorber for underwater applications. The design features a metamaterial shell that focuses acoustic energy into the shell's core. Multiple scattering theory was used to design layers of rubber cylinders with varying filling fractions to produce a linearly graded sound speed profile through the structure. Measured pressure intensity agreed with predicted results over a range of frequencies within the homogenization limit.

  18. A novel non-linear recursive filter design for extracting high rate pulse features in nuclear medicine imaging and spectroscopy.

    PubMed

    Sajedi, Salar; Kamal Asl, Alireza; Ay, Mohammad R; Farahani, Mohammad H; Rahmim, Arman

    2013-06-01

    Applications in imaging and spectroscopy rely on pulse processing methods for appropriate data generation. Often, the particular method utilized does not highly impact data quality, whereas in some scenarios, such as in the presence of high count rates or high frequency pulses, this issue merits extra consideration. In the present study, a new approach for pulse processing in nuclear medicine imaging and spectroscopy is introduced and evaluated. The new non-linear recursive filter (NLRF) performs nonlinear processing of the input signal and extracts the main pulse characteristics, having the powerful ability to recover pulses that would ordinarily result in pulse pile-up. The filter design defines sampling frequencies lower than the Nyquist frequency. In the literature, for systems involving NaI(Tl) detectors and photomultiplier tubes (PMTs), with a signal bandwidth considered as 15 MHz, the sampling frequency should be at least 30 MHz (the Nyquist rate), whereas in the present work, a sampling rate of 3.3 MHz was shown to yield very promising results. This was obtained by exploiting the known shape feature instead of utilizing a general sampling algorithm. The simulation and experimental results show that the proposed filter enhances count rates in spectroscopy. With this filter, the system behaves almost identically as a general pulse detection system with a dead time considerably reduced to the new sampling time (300 ns). Furthermore, because of its unique feature for determining exact event times, the method could prove very useful in time-of-flight PET imaging.

  19. Early detection and classification of powdery mildew-infected rose leaves using ANFIS based on extracted features of thermal images

    NASA Astrophysics Data System (ADS)

    Jafari, Mehrnoosh; Minaei, Saeid; Safaie, Naser; Torkamani-Azar, Farah

    2016-05-01

    Spatial and temporal changes in surface temperature of infected and non-infected rose plant (Rosa hybrida cv. 'Angelina') leaves were visualized using digital infrared thermography. Infected areas exhibited a presymptomatic decrease in leaf temperature of up to 2.3 °C. In this study, two experiments were conducted: one in the greenhouse (semi-controlled ambient conditions) and the other in a growth chamber (controlled ambient conditions). The effects of drought stress and darkness on the thermal images were also studied. It was found that thermal histograms of the infected leaves closely follow a standard normal distribution: they have a skewness near zero, kurtosis under 3, standard deviation larger than 0.6, and a Maximum Temperature Difference (MTD) greater than 4. For each thermal histogram, the central tendency, variability, and parameters of the best-fitted standard normal and Laplace distributions were estimated. To classify healthy and infected leaves, feature selection was conducted and the extracted thermal features with the largest linguistic hedge values were chosen. Among the features independent of absolute temperature measurement, MTD, SD, skewness, R2l, kurtosis and bn were selected. Then, a neuro-fuzzy classifier was trained to recognize the healthy leaves from the infected ones. The k-means clustering method was utilized to obtain the initial parameters and the fuzzy "if-then" rules. The best estimation rates, 92.55% in training and 92.3% in testing, were achieved with 8 clusters. Results showed that drought stress had an adverse effect on the classification of healthy leaves: more healthy leaves under drought stress were classified as infected, causing the PPV and specificity values to decrease accordingly. Image acquisition in the dark had no significant effect on the classification performance.
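
    The histogram features named above (MTD, SD, skewness, kurtosis) can be computed directly from a leaf-temperature array; the sketch below does so with NumPy and SciPy, and uses the thresholds quoted in the abstract only as a rough illustrative rule, not as the trained neuro-fuzzy classifier.

      # Sketch of the histogram features named above, computed from a leaf-temperature
      # array. The quoted thresholds are used only as a rough illustrative rule, not as
      # the trained neuro-fuzzy classifier.
      import numpy as np
      from scipy import stats

      def thermal_features(leaf_temps):
          """leaf_temps: 1-D array of per-pixel leaf temperatures (degrees C)."""
          t = np.asarray(leaf_temps, dtype=float)
          return {"MTD": float(t.max() - t.min()),                     # maximum temperature difference
                  "SD": float(t.std(ddof=1)),
                  "skewness": float(stats.skew(t)),
                  "kurtosis": float(stats.kurtosis(t, fisher=False))}  # normal distribution -> 3

      def rough_infection_flag(f):
          """Illustrative rule of thumb built from the abstract's reported thresholds only."""
          return f["MTD"] > 4 and f["SD"] > 0.6 and abs(f["skewness"]) < 0.5 and f["kurtosis"] < 3

      temps = np.random.default_rng(2).normal(loc=22.0, scale=1.2, size=5000)
      feats = thermal_features(temps)
      print(feats, rough_infection_flag(feats))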

  20. Comment on "A geometric representation of spectral and temporal vowel features: quantification of vowel overlap in three linguistic varieties" [J. Acoust. Soc. Am. 119, 2334-2350 (2006)].

    PubMed

    Morrison, Geoffrey Stewart

    2008-01-01

    In a recent paper by Wassink [J. Acoust. Soc. Am. 119, 2334-2350 (2006)] the spectral overlap assessment metric (SOAM) was proposed for quantifying the degree of acoustic overlap between vowels. The SOAM does not fully take account of probability densities. An alternative metric is proposed which is based on quadratic discriminant analysis and takes account of probability densities in the form of a posteriori probabilities. Unlike the SOAM, the a posteriori probability-based metric allows for a direct comparison of vowel overlaps calculated using different numbers of dimensions, e.g., three dimensions (F1, F2, and duration) versus two dimensions (F1 and F2).
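
    To illustrate the general idea of an a posteriori probability based overlap measure (though not Morrison's exact metric), the sketch below fits a quadratic discriminant model to synthetic tokens of two vowels and averages the posterior probability assigned to the competing category; because posteriors are comparable across feature sets, the same number can be computed in two or three dimensions.

      # Sketch of an a posteriori probability based overlap measure for two vowel
      # categories using quadratic discriminant analysis; it illustrates the general
      # idea of QDA posteriors, not Morrison's exact metric.
      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

      rng = np.random.default_rng(3)
      # Synthetic tokens of two vowels in (F1, F2, duration) space.
      vowel_a = rng.normal([500.0, 1500.0, 120.0], [60.0, 120.0, 15.0], size=(100, 3))
      vowel_b = rng.normal([550.0, 1400.0, 110.0], [60.0, 120.0, 15.0], size=(100, 3))
      X = np.vstack([vowel_a, vowel_b])
      y = np.repeat([0, 1], 100)

      qda = QuadraticDiscriminantAnalysis().fit(X, y)
      post = qda.predict_proba(X)
      # Overlap: mean posterior probability assigned to the competing category.
      overlap_3d = float(np.mean(post[np.arange(len(y)), 1 - y]))
      print("overlap (F1, F2, duration):", round(overlap_3d, 3))

      # Dropping duration gives a two-dimensional overlap that is directly comparable.
      qda_2d = QuadraticDiscriminantAnalysis().fit(X[:, :2], y)
      post_2d = qda_2d.predict_proba(X[:, :2])
      overlap_2d = float(np.mean(post_2d[np.arange(len(y)), 1 - y]))
      print("overlap (F1, F2):", round(overlap_2d, 3))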

  1. Microfluidic device for acoustic cell lysis

    SciTech Connect

    Branch, Darren W.; Cooley, Erika Jane; Smith, Gennifer Tanabe; James, Conrad D.; McClain, Jaime L.

    2015-08-04

    A microfluidic acoustic-based cell lysing device that can be integrated with on-chip nucleic acid extraction. Using a bulk acoustic wave (BAW) transducer array, acoustic waves can be coupled into microfluidic cartridges resulting in the lysis of cells contained therein by localized acoustic pressure. Cellular materials can then be extracted from the lysed cells. For example, nucleic acids can be extracted from the lysate using silica-based sol-gel filled microchannels, nucleic acid binding magnetic beads, or Nafion-coated electrodes. Integration of cell lysis and nucleic acid extraction on-chip enables a small, portable system that allows for rapid analysis in the field.

  2. Acoustical standards in engineering acoustics

    NASA Astrophysics Data System (ADS)

    Burkhard, Mahlon D.

    2001-05-01

    The Engineering Acoustics Technical Committee is concerned with the evolution and improvement of acoustical techniques and apparatus, and with the promotion of new applications of acoustics. As cited in the Membership Directory and Handbook (2002), the interest areas include transducers and arrays; underwater acoustic systems; acoustical instrumentation and monitoring; applied sonics, promotion of useful effects, information gathering and transmission; audio engineering; acoustic holography and acoustic imaging; acoustic signal processing (equipment and techniques); and ultrasound and infrasound. Evident connections between engineering and standards are the needs for calibration, consistent terminology, uniform presentation of data, reference levels, and design targets for product development. Thus, for the acoustical engineer, standards are a tool for practice, for communication, and for comparison of one's efforts with those of others. Development of many standards depends on knowledge of the way products are put together for the marketplace, and acoustical engineers provide important input to the development of standards. Acoustical engineers and members of the Engineering Acoustics arm of the Society both benefit from and contribute to the acoustical standards of the Acoustical Society.

  3. Finite-difference time-domain analysis on light extraction in a GaN light-emitting diode by empirically capable dielectric nano-features

    NASA Astrophysics Data System (ADS)

    Park, ByeongChan; Noh, Heeso; Yu, Young Moon; Jang, Jae-Won

    2014-11-01

    The enhancement of light extraction in a GaN light-emitting diode (LED) by adding an array of nanomaterials is investigated by means of three-dimensional (3D) finite-difference time-domain (FDTD) simulation experiments. The array of nanomaterials is placed on top of the GaN LED and is used as a light extraction layer. Based on empirically achievable features, the refractive indices of nanomaterials with perfectly spherical (particle) and hemispherical (plano-convex lens) shapes were set to 1.47 [polyethylene glycol (PEG)] and 2.13 [zirconia (ZrO2)]. As a control experiment, a 3D FDTD simulation of a GaN LED with a PEG film deposited on top was also carried out. Different light extraction profiles for GaN LEDs with subwavelength- and over-wavelength-scale nanomaterial layers are observed in the distributions of Poynting vector intensity of the GaN LEDs with the light extraction layer applied. In addition, our results show that the dielectric effect on light extraction is more efficient in a light extraction layer with over-wavelength-scale features. In the case of a zirconia particle array (ϕ = 500 nm) with hexagonal close-packed (hcp) structure on top of a GaN LED, light extraction along the normal axis of the LED surface is about six times larger than for a GaN LED without the extraction layer.

  4. Acoustic Neuroma

    MedlinePlus

    An acoustic neuroma is a benign tumor that develops on the nerve that connects the ear to the brain. The tumor ... press against the brain, becoming life-threatening. Acoustic neuroma can be difficult to diagnose, because the symptoms ...

  5. Automatic 3D segmentation of the kidney in MR images using wavelet feature extraction and probability shape model

    NASA Astrophysics Data System (ADS)

    Akbari, Hamed; Fei, Baowei

    2012-02-01

    Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially when serial MR imaging is performed to evaluate kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images by extracting texture features and statistically matching the geometrical shape of the kidney. A set of wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are trained to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probabilistic kidney model is created using 10 segmented MRI data sets. The model is initially localized based on the intensity profiles in three directions. Weight functions are defined for each labeled voxel for each wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.
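
    As a two-dimensional stand-in for the wavelet-texture labelling step described above, the sketch below computes wavelet sub-band energies for small patches and trains a binary SVM to score patches as kidney or non-kidney; the patch size, wavelet, and toy data are assumptions for illustration only.

      # Two-dimensional stand-in for the wavelet-texture + SVM labelling step: wavelet
      # sub-band energies of a small patch feed a binary kidney / non-kidney classifier.
      # Patch size, wavelet, and the toy patches are assumptions for illustration.
      import numpy as np
      import pywt
      from sklearn.svm import SVC

      def patch_wavelet_energy(patch, wavelet="haar"):
          """Mean energy of each 2-D wavelet sub-band of one patch."""
          cA, (cH, cV, cD) = pywt.dwt2(patch, wavelet)
          return np.array([(c ** 2).mean() for c in (cA, cH, cV, cD)])

      rng = np.random.default_rng(4)
      kidney = [rng.normal(1.0, 0.1, (8, 8)) for _ in range(200)]   # smoother toy patches
      other = [rng.normal(0.0, 0.5, (8, 8)) for _ in range(200)]    # noisier toy patches
      X = np.array([patch_wavelet_energy(p) for p in kidney + other])
      y = np.array([1] * 200 + [0] * 200)

      svm = SVC(probability=True).fit(X, y)
      test_patch = rng.normal(1.0, 0.1, (8, 8))
      print("P(kidney):", svm.predict_proba([patch_wavelet_energy(test_patch)])[0, 1])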

  6. Acoustic Seal

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M. (Inventor)

    2006-01-01

    The invention relates to a sealing device having an acoustic resonator. The acoustic resonator is adapted to create acoustic waveforms to generate a sealing pressure barrier blocking fluid flow from a high pressure area to a lower pressure area. The sealing device permits noncontacting sealing operation. The sealing device may include a resonant-macrosonic-synthesis (RMS) resonator.

  7. Acoustic seal

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M. (Inventor)

    2006-01-01

    The invention relates to a sealing device having an acoustic resonator. The acoustic resonator is adapted to create acoustic waveforms to generate a sealing pressure barrier blocking fluid flow from a high pressure area to a lower pressure area. The sealing device permits noncontacting sealing operation. The sealing device may include a resonant-macrosonic-synthesis (RMS) resonator.

  8. A 181 GOPS AKAZE Accelerator Employing Discrete-Time Cellular Neural Networks for Real-Time Feature Extraction.

    PubMed

    Jiang, Guangli; Liu, Leibo; Zhu, Wenping; Yin, Shouyi; Wei, Shaojun

    2015-01-01

    This paper proposes a real-time feature extraction VLSI architecture for high-resolution images based on the accelerated KAZE algorithm. Firstly, a new system architecture is proposed. It increases the system throughput, provides flexibility in image resolution, and offers trade-offs between speed and scaling robustness. The architecture consists of a two-dimensional pipeline array that fully utilizes computational similarities in octaves. Secondly, a substructure (block-serial discrete-time cellular neural network) that can realize a nonlinear filter is proposed. This structure decreases the memory demand through the removal of data dependency. Thirdly, a hardware-friendly descriptor is introduced in order to overcome the hardware design bottleneck through the polar sample pattern; a simplified method to realize rotation invariance is also presented. Finally, the proposed architecture is designed in TSMC 65 nm CMOS technology. The experimental results show a performance of 127 fps in full HD resolution at 200 MHz frequency. The peak performance reaches 181 GOPS and the throughput is double the speed of other state-of-the-art architectures. PMID:26404305

  9. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on an output-only system identification technique. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately and also estimates the damping exponents. The proposed adaptive filtration method does not include any frequency domain manipulation; consequently, the time domain signal is not affected by frequency domain and inverse transformations.
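
    Of the methods integrated above, the matrix pencil step admits a compact sketch: build a Hankel matrix from the samples, truncate its rank to the signal subspace, and read frequencies and damping rates from the eigenvalues of the resulting pencil. The pencil parameter, mode count, and synthetic test signal below are illustrative choices, not the paper's settings.

      # Sketch of a matrix pencil estimate of frequency and damping for exponentially
      # damped sinusoids. Pencil parameter, mode count, and test signal are illustrative.
      import numpy as np

      def matrix_pencil(x, dt, n_modes):
          """Return (frequencies in Hz, damping rates in 1/s) of the dominant modes."""
          N = len(x)
          L = N // 3                                    # pencil parameter
          Y = np.array([x[i:i + L + 1] for i in range(N - L)])
          Y1, Y2 = Y[:, :-1], Y[:, 1:]
          U, s, Vh = np.linalg.svd(Y1, full_matrices=False)
          r = 2 * n_modes                               # each real mode -> conjugate pole pair
          Y1_pinv = (Vh[:r].conj().T / s[:r]) @ U[:, :r].conj().T   # rank-truncated pseudo-inverse
          z = np.linalg.eigvals(Y1_pinv @ Y2)
          z = z[np.argsort(-np.abs(z))][:r]             # dominant poles of the pencil
          freqs = np.angle(z) / (2 * np.pi * dt)
          damping = np.log(np.abs(z)) / dt              # negative for decaying modes
          keep = freqs > 0                              # keep one pole of each conjugate pair
          return freqs[keep], damping[keep]

      # Synthetic test: 12 Hz decaying at -1.5 1/s plus 40 Hz decaying at -4 1/s.
      dt = 1e-3
      t = np.arange(0, 1, dt)
      x = np.exp(-1.5 * t) * np.cos(2 * np.pi * 12 * t) + 0.5 * np.exp(-4 * t) * np.cos(2 * np.pi * 40 * t)
      print(matrix_pencil(x, dt, n_modes=2))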

  10. A 181 GOPS AKAZE Accelerator Employing Discrete-Time Cellular Neural Networks for Real-Time Feature Extraction

    PubMed Central

    Jiang, Guangli; Liu, Leibo; Zhu, Wenping; Yin, Shouyi; Wei, Shaojun

    2015-01-01

    This paper proposes a real-time feature extraction VLSI architecture for high-resolution images based on the accelerated KAZE algorithm. Firstly, a new system architecture is proposed. It increases the system throughput, provides flexibility in image resolution, and offers trade-offs between speed and scaling robustness. The architecture consists of a two-dimensional pipeline array that fully utilizes computational similarities in octaves. Secondly, a substructure (block-serial discrete-time cellular neural network) that can realize a nonlinear filter is proposed. This structure decreases the memory demand through the removal of data dependency. Thirdly, a hardware-friendly descriptor is introduced in order to overcome the hardware design bottleneck through the polar sample pattern; a simplified method to realize rotation invariance is also presented. Finally, the proposed architecture is designed in TSMC 65 nm CMOS technology. The experimental results show a performance of 127 fps in full HD resolution at 200 MHz frequency. The peak performance reaches 181 GOPS and the throughput is double the speed of other state-of-the-art architectures. PMID:26404305

  11. Identification of error making patterns in lesion detection on digital breast tomosynthesis using computer-extracted image features

    NASA Astrophysics Data System (ADS)

    Wang, Mengyu; Zhang, Jing; Grimm, Lars J.; Ghate, Sujata V.; Walsh, Ruth; Johnson, Karen S.; Lo, Joseph Y.; Mazurowski, Maciej A.

    2016-03-01

    Digital breast tomosynthesis (DBT) can improve lesion visibility by eliminating the issue of overlapping breast tissue present in mammography. However, this new modality likely requires new approaches to training, and the issue of training in DBT is not well explored. We propose a computer-aided educational approach for DBT training. Our hypothesis is that trainees' educational outcomes will improve if they are presented with cases individually selected to address their weaknesses. In this study, we focus on the question of how to select such cases. Specifically, we propose an algorithm that, based on previously acquired reading data, predicts which lesions will be missed by the trainee in future cases (i.e., we focus on false negative errors). A logistic regression classifier was used to predict the likelihood of trainee error, with computer-extracted features used as the predictors. Reader data from 3 expert breast imagers were used to establish the ground truth, and reader data from 5 radiology trainees were used to evaluate the algorithm performance with repeated holdout cross validation. Receiver operating characteristic (ROC) analysis was applied to measure the performance of the proposed individual trainee models. The preliminary experimental results for the 5 trainees showed that the individual trainee models were able to distinguish the lesions that would be detected from those that would be missed, with an average area under the ROC curve of 0.639 (95% CI, 0.580-0.698). The proposed algorithm can be used to identify difficult cases for individual trainees.
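
    The modelling step described above reduces to a standard supervised-learning recipe: a logistic regression over computer-extracted lesion features, scored with ROC analysis under repeated hold-out. The sketch below shows that recipe with synthetic placeholder features and miss labels; it is not the study's data or feature set.

      # Sketch of the per-trainee error model: logistic regression over computer-extracted
      # lesion features, scored with ROC AUC under repeated hold-out. The features and
      # miss labels are synthetic placeholders, not the study's reader data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import StratifiedShuffleSplit

      rng = np.random.default_rng(5)
      n_lesions, n_features = 300, 12
      X = rng.normal(size=(n_lesions, n_features))                   # computer-extracted features
      missed = (X[:, 0] + 0.5 * rng.normal(size=n_lesions)) > 0.8    # True = trainee missed lesion

      aucs = []
      splitter = StratifiedShuffleSplit(n_splits=20, test_size=0.25, random_state=0)
      for train_idx, test_idx in splitter.split(X, missed):
          model = LogisticRegression(max_iter=1000).fit(X[train_idx], missed[train_idx])
          aucs.append(roc_auc_score(missed[test_idx], model.predict_proba(X[test_idx])[:, 1]))
      print("mean AUC over repeated hold-out:", round(float(np.mean(aucs)), 3))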

  12. Acoustic Imaging in Helioseismology

    NASA Astrophysics Data System (ADS)

    Chou, Dean-Yi; Chang, Hsiang-Kuang; Sun, Ming-Tsung; LaBonte, Barry; Chen, Huei-Ru; Yeh, Sheng-Jen; Team, The TON

    1999-04-01

    The time-variant acoustic signal at a point in the solar interior can be constructed from observations at the surface, based on the knowledge of how acoustic waves travel in the Sun: the time-distance relation of the p-modes. The basic principle and properties of this imaging technique are discussed in detail. The helioseismic data used in this study were taken with the Taiwan Oscillation Network (TON). The time series of observed acoustic signals on the solar surface is treated as a phased array. The time-distance relation provides the phase information among the phased array elements. The signal at any location at any time can be reconstructed by summing the observed signal at array elements in phase and with a proper normalization. The time series of the constructed acoustic signal contains information on frequency, phase, and intensity. We use the constructed intensity to obtain three-dimensional acoustic absorption images. The features in the absorption images correlate with the magnetic field in the active region. The vertical extension of absorption features in the active region is smaller in images constructed with shorter wavelengths. This indicates that the vertical resolution of the three-dimensional images depends on the range of modes used in constructing the signal. The actual depths of the absorption features in the active region may be smaller than those shown in the three-dimensional images.
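
    The reconstruction described above is essentially delay-and-sum: each surface trace is shifted by the travel time given by the time-distance relation and the shifted traces are summed with a normalization. The sketch below illustrates that idea with a placeholder time-distance function and synthetic traces; it is not the TON processing code.

      # Sketch of the phased-array reconstruction described above: shift each surface
      # trace by the travel time from a time-distance relation, then sum and normalize.
      # The time-distance function and traces are placeholders, not TON data.
      import numpy as np

      def reconstruct(traces, distances, time_distance, dt):
          """traces: (n_elements, n_samples); distances: element-to-target distances;
          time_distance: callable giving travel time (s) for a given distance."""
          n_el, n_samp = traces.shape
          out = np.zeros(n_samp)
          for trace, d in zip(traces, distances):
              shift = min(int(round(time_distance(d) / dt)), n_samp)  # samples to advance
              out[:n_samp - shift] += trace[shift:]                   # align trace in phase
          return out / n_el                                           # simple normalization

      dt = 1.0
      time_distance = lambda d: 10.0 * np.sqrt(d)        # placeholder relation, not solar
      rng = np.random.default_rng(6)
      traces = rng.normal(size=(32, 512))                # synthetic surface signals
      distances = np.linspace(1.0, 30.0, 32)
      signal_at_target = reconstruct(traces, distances, time_distance, dt)
      print(signal_at_target[:5])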

  13. Satellite mapping and automated feature extraction: Geographic information system-based change detection of the Antarctic coast

    NASA Astrophysics Data System (ADS)

    Kim, Kee-Tae

    Declassified Intelligence Satellite Photograph (DISP) data are important resources for measuring the geometry of the coastline of Antarctica. Using state-of-the-art digital imaging technology and bundle block triangulation based on tie points and control points derived from a RADARSAT-1 Synthetic Aperture Radar (SAR) image mosaic and the Ohio State University (OSU) Antarctic digital elevation model (DEM), the individual DISP images were accurately assembled into a map-quality mosaic of Antarctica as it appeared in 1963. The new map is an important benchmark for gauging the response of the Antarctic coastline to changing climate. Automated coastline extraction algorithm design is the second theme of this dissertation. At the pre-processing stage, adaptive neighborhood filtering was used to remove the film-grain noise while preserving edge features. At the segmentation stage, an adaptive Bayesian approach to image segmentation was used to split the DISP imagery into its homogeneous regions, in which the fuzzy c-means clustering (FCM) technique and a Gibbs random field (GRF) model were introduced to estimate the conditional and prior probability density functions. A Gaussian mixture model was used to estimate reliable initial values for the FCM technique. At the post-processing stage, image object formation and labeling, removal of noisy image objects, and vectorization algorithms were sequentially applied to the segmented images to extract a vector representation of the coastlines. Results were presented that demonstrate the effectiveness of the algorithm in segmenting the DISP data. In the case of cloud cover and low-contrast scenes, manual editing was carried out based on intermediate image processing and visual inspection in comparison with old paper maps. Through a geographic information system (GIS), the derived DISP coastline data were integrated with earlier and later data to assess continental-scale changes in the Antarctic coast. Computing the area of

  14. A UWB Radar Signal Processing Platform for Real-Time Human Respiratory Feature Extraction Based on Four-Segment Linear Waveform Model.

    PubMed

    Hsieh, Chi-Hsuan; Chiu, Yu-Fang; Shen, Yi-Hsiang; Chu, Ta-Shun; Huang, Yuan-Hao

    2016-02-01

    This paper presents an ultra-wideband (UWB) impulse-radio radar signal processing platform used to analyze human respiratory features. Conventional radar systems used in human detection only analyze human respiration rates or the response of a target. However, additional respiratory signal information is available that has not been explored using radar detection. The authors previously proposed a modified raised cosine waveform (MRCW) respiration model and an iterative correlation search algorithm that could acquire additional respiratory features such as the inspiration and expiration speeds, respiration intensity, and respiration holding ratio. To realize real-time respiratory feature extraction by using the proposed UWB signal processing platform, this paper proposes a new four-segment linear waveform (FSLW) respiration model. This model offers a superior fit to the measured respiration signal compared with the MRCW model and decreases the computational complexity of feature extraction. In addition, an early-terminated iterative correlation search algorithm is presented, substantially decreasing the computational complexity and yielding negligible performance degradation. These extracted features can be considered the compressed signals used to decrease the amount of data storage required for use in long-term medical monitoring systems and can also be used in clinical diagnosis. The proposed respiratory feature extraction algorithm was designed and implemented using the proposed UWB radar signal processing platform including a radar front-end chip and an FPGA chip. The proposed radar system can detect human respiration rates at 0.1 to 1 Hz and facilitates the real-time analysis of the respiratory features of each respiration period.
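
    A rough way to see how the four-segment linear waveform model yields the named features is to fit a rise / hold / fall / hold cycle to one respiration period and read the inspiration and expiration speeds, intensity, and holding ratio from the fitted breakpoints. The brute-force breakpoint search and synthetic cycle below are illustrative only and are not the early-terminated iterative correlation search used in the paper.

      # Rough sketch: fit a four-segment (rise / hold / fall / hold) linear waveform to
      # one respiration period and read off the named features. The brute-force breakpoint
      # search and synthetic cycle are illustrative, not the paper's search algorithm.
      import numpy as np
      from itertools import combinations

      def fslw(t, t1, t2, t3, lo, hi):
          """Inspire on [0, t1], hold on [t1, t2], expire on [t2, t3], hold afterwards."""
          y = np.empty_like(t)
          y[t <= t1] = lo + (hi - lo) * t[t <= t1] / t1
          y[(t > t1) & (t <= t2)] = hi
          fall = (t > t2) & (t <= t3)
          y[fall] = hi - (hi - lo) * (t[fall] - t2) / (t3 - t2)
          y[t > t3] = lo
          return y

      def fit_fslw(t, x, n_grid=20):
          """Grid-search the three breakpoints, then derive the respiratory features."""
          lo, hi = x.min(), x.max()
          grid = np.linspace(t[1], t[-2], n_grid)
          best = min(((np.sum((x - fslw(t, *bp, lo, hi)) ** 2), bp)
                      for bp in combinations(grid, 3)), key=lambda e: e[0])
          t1, t2, t3 = best[1]
          period = t[-1]
          return {"inspiration_speed": (hi - lo) / t1,
                  "expiration_speed": (hi - lo) / (t3 - t2),
                  "respiration_intensity": hi - lo,
                  "holding_ratio": ((t2 - t1) + (period - t3)) / period}

      t = np.linspace(0.0, 4.0, 200)                      # one ~4 s respiration cycle
      x = fslw(t, 1.0, 1.6, 3.0, 0.0, 1.0) + 0.02 * np.random.default_rng(9).normal(size=t.size)
      print(fit_fslw(t, x))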

  15. Transition section for acoustic waveguides

    DOEpatents

    Karplus, H.H.B.

    1975-10-28

    A means of facilitating the transmission of acoustic waves with minimal reflection between two regions having different specific acoustic impedances is described comprising a region exhibiting a constant product of cross-sectional area and specific acoustic impedance at each cross-sectional plane along the axis of the transition region. A variety of structures that exhibit this feature is disclosed, the preferred embodiment comprising a nested structure of doubly reentrant cones. This structure is useful for monitoring the operation of nuclear reactors in which random acoustic signals are generated in the course of operation.
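
    The design rule stated above, a constant product of cross-sectional area and specific acoustic impedance at every axial plane, fixes the area profile once the impedance variation is chosen; the short sketch below evaluates that rule for an assumed linear impedance profile with arbitrary example values.

      # Sketch of the stated design rule: choose the cross-sectional area so that
      # area x specific acoustic impedance is constant along the transition. The
      # impedance profile and end values are arbitrary example numbers.
      import numpy as np

      def transition_area(x, A_in, Z_in, Z_profile):
          """Area at each axial position x for a given impedance profile Z(x)."""
          return A_in * Z_in / Z_profile(x)             # conserved product A * Z

      x = np.linspace(0.0, 1.0, 11)                     # normalized axial position
      Z_in, Z_out = 1.5e6, 4.2e5                        # example specific impedances (rayl)
      Z_profile = lambda s: Z_in + (Z_out - Z_in) * s   # assumed linear impedance variation
      A = transition_area(x, A_in=1.0e-3, Z_in=Z_in, Z_profile=Z_profile)
      print(np.round(A * 1e3, 3))                       # areas in units of 10^-3 m^2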

  16. Use of feature extraction techniques for the texture and context information in ERTS imagery: Spectral and textural processing of ERTS imagery. [classification of Kansas land use

    NASA Technical Reports Server (NTRS)

    Haralick, R. H. (Principal Investigator); Bosley, R. J.

    1974-01-01

    The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving fro