Science.gov

Sample records for adaptive feature extraction

  1. Adaptive spectral window sizes for feature extraction from optical spectra

    NASA Astrophysics Data System (ADS)

    Kan, Chih-Wen; Lee, Andy Y.; Pham, Nhi; Nieman, Linda T.; Sokolov, Konstantin; Markey, Mia K.

    2008-02-01

    We propose an approach to adaptively adjust the spectral window size used to extract features from optical spectra. Previous studies have employed spectral features extracted by dividing the spectra into several spectral windows of a fixed width; however, the choice of spectral window size was arbitrary. We hypothesize that by adaptively adjusting the spectral window sizes, the trends in the data will be captured more accurately. Our method was tested on a diffuse reflectance spectroscopy dataset obtained in a study of oblique polarization reflectance spectroscopy of oral mucosa lesions. The diagnostic task is to classify lesions into one of four histopathology groups: normal, benign, mild dysplasia, or severe dysplasia (including carcinoma). Nine features were extracted from each of the spectral windows. We computed the area under the receiver operating characteristic curve (AUC) to select the most discriminatory wavelength intervals. We performed pairwise classifications using linear discriminant analysis (LDA) with leave-one-out cross-validation. The results showed that for discriminating benign lesions from mild or severe dysplasia, the adaptive spectral window size features achieved an AUC of 0.84, whereas a fixed spectral window size of 20 nm achieved an AUC of 0.71 and a single large window containing all wavelengths achieved an AUC of 0.64. The AUCs of all feature combinations were also calculated. These results suggest that the new adaptive spectral window size method effectively extracts features that enable accurate classification of oral mucosa lesions.
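
    A minimal Python sketch of the two steps described above: ranking window features by AUC and scoring a pairwise task with LDA under leave-one-out cross-validation. Function and variable names are illustrative, not from the paper.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      def symmetric_auc(scores, y):
          """Direction-insensitive AUC of one window feature, binary task."""
          a = roc_auc_score(y, scores)
          return max(a, 1.0 - a)

      def select_windows(X, y, k=5):
          """Rank spectral-window features by AUC and keep the top k.
          X: (n_samples, n_windows) feature matrix, y: binary labels."""
          aucs = np.array([symmetric_auc(X[:, j], y) for j in range(X.shape[1])])
          top = np.argsort(aucs)[::-1][:k]
          return top, aucs[top]

      def loo_lda_accuracy(X, y, idx):
          """Pairwise LDA with leave-one-out cross-validation."""
          clf = LinearDiscriminantAnalysis()
          return cross_val_score(clf, X[:, idx], y, cv=LeaveOneOut()).mean()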

  2. Adaptive feature extraction using sparse coding for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Haining; Liu, Chengliang; Huang, Yixiang

    2011-02-01

    In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which has been advocated as an effective mathematical description of the underlying principle by which mammalian sensory systems process information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis, and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficient solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for the vibration analysis. For diagnosing the different fault conditions of the bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals in an attempt to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of the vibration signals; and sparse features are formulated in terms of the activations of atoms. A multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
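
    As a rough illustration of the scheme (class-wise dictionary learning, a merged redundant dictionary, and atom activations as features), here is a Python sketch using scikit-learn; the paper uses shift-invariant sparse coding, which this simplified version does not implement.

      import numpy as np
      from sklearn.decomposition import DictionaryLearning, sparse_encode

      def learn_class_dictionaries(segments_by_class, n_atoms=32, alpha=1.0):
          """Learn one dictionary per fault class from vibration segments,
          then merge all basis functions into a redundant dictionary."""
          parts = []
          for segments in segments_by_class:   # (n_segments, seg_len) each
              dl = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                                      random_state=0)
              dl.fit(segments)
              parts.append(dl.components_)
          return np.vstack(parts)

      def sparse_features(segments, dictionary, alpha=1.0):
          """Sparse codes over the merged dictionary; the magnitudes of the
          atom activations serve as the feature vector."""
          codes = sparse_encode(segments, dictionary,
                                algorithm='lasso_lars', alpha=alpha)
          return np.abs(codes)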

  3. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed: eyelashes and reflections are detected by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which reduces the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which marks the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape adaptive Gabor-wavelet technique improves the recognition rate.

  4. Cascade Classification with Adaptive Feature Extraction for Arrhythmia Detection.

    PubMed

    Park, Juyoung; Kang, Mingon; Gao, Jean; Kim, Younghoon; Kang, Kyungtae

    2017-01-01

    Detecting arrhythmia from ECG data is now feasible on mobile devices, but in this environment it is necessary to trade computational efficiency against accuracy. We propose an adaptive strategy for feature extraction that considers only normalized beat morphology features when running in a resource-constrained environment, but takes account of a wider range of ECG features in a high-performance environment. This process is augmented by a cascaded random forest classifier. Experiments on data from the MIT-BIH Arrhythmia Database showed classification accuracies from 96.59% to 98.51%, which are comparable to state-of-the-art methods.
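
    One plausible reading of the adaptive strategy is a two-stage cascade: a cheap classifier on morphology features answers confident cases, and only uncertain beats fall through to a classifier on the full feature set. This Python sketch is an assumption about the structure, not the authors' exact pipeline.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      class CascadeClassifier:
          def __init__(self, threshold=0.9):
              self.stage1 = RandomForestClassifier(n_estimators=50, random_state=0)
              self.stage2 = RandomForestClassifier(n_estimators=200, random_state=0)
              self.threshold = threshold   # confidence needed to stop at stage 1

          def fit(self, X_morph, X_full, y):
              self.stage1.fit(X_morph, y)   # normalized beat morphology only
              self.stage2.fit(X_full, y)    # wider range of ECG features
              return self

          def predict(self, X_morph, X_full):
              proba = self.stage1.predict_proba(X_morph)
              out = self.stage1.classes_[proba.argmax(axis=1)]
              unsure = proba.max(axis=1) < self.threshold
              if unsure.any():              # escalate low-confidence beats
                  out[unsure] = self.stage2.predict(X_full[unsure])
              return out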

  5. Feature extraction using adaptive multiwavelets and synthetic detection index for rotor fault diagnosis of rotating machinery

    NASA Astrophysics Data System (ADS)

    Lu, Na; Xiao, Zhihuai; Malik, O. P.

    2015-02-01

    State identification to diagnose the condition of rotating machinery is often converted to a classification problem over values of non-dimensional symptom parameters (NSPs). To improve the sensitivity of the NSPs to changes in machine condition, a novel feature extraction method based on adaptive multiwavelets and the synthetic detection index (SDI) is proposed in this paper. Based on the SDI maximization principle, optimal multiwavelets are searched for by genetic algorithms (GAs) in an adaptive multiwavelet library and used for extracting fault features from vibration signals. With the optimal multiwavelets, more sensitive NSPs can be extracted. To examine the effectiveness of the optimal multiwavelets, conventional methods are used for a comparative study. The obtained NSPs are fed into a K-means classifier to diagnose rotor faults. The results show that the proposed method can effectively improve the sensitivity of the NSPs and achieve a higher discrimination rate for rotor fault diagnosis than the conventional methods.

  6. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli.

    PubMed

    Kumar, Neeraj; Mutha, Pratik K

    2016-03-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function-perception-are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the "natural" sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.

  7. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature-frequency band dispersion, which pose a challenging problem for current techniques of bearing fault diagnosis. Moreover, although sparse representation theory has made much progress in the feature extraction of fault information, it still confronts inevitable performance degradation because relatively weak fault information does not have a sufficiently prominent and sparse representation. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem is formulated that can be solved by the block coordinate descent (BCD) method. Additionally, an adaptive structural-clustering sparse dictionary learning technique, which utilizes k-nearest-neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further ensure sufficient sparsity of the feature information. The selection rule for the regularization parameter and the computational complexity are also described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority over state-of-the-art methods in the field is demonstrated on vibration signals from an experimental rig of aircraft engine bearings.

  8. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

    PubMed Central

    Kumar, Neeraj

    2016-01-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  9. Adaptive Redundant Lifting Wavelet Transform Based on Fitting for Fault Feature Extraction of Roller Bearings

    PubMed Central

    Yang, Zijing; Cai, Ligang; Gao, Lixin; Wang, Huaqing

    2012-01-01

    A least-squares method based on data fitting is proposed to construct a new lifting wavelet; combined with a nonlinear scheme and a redundant algorithm, the adaptive redundant lifting transform based on fitting is first presented in this paper. By varying combinations of basis function, sample number, and basis-function dimension, a total of nine wavelets with different characteristics are constructed and respectively used to perform redundant lifting wavelet transforms on the low-frequency approximate signals at each layer. The normalized l^p norms of the new node signals obtained through decomposition are then calculated to adaptively determine the optimal wavelet for each decomposed approximate signal. Next, subsection power spectrum analysis of the original signal is used to choose the node signal for single-branch reconstruction and demodulation. Experimental signals and engineering signals are used to verify the method, and the results show that bearing faults can be diagnosed more effectively by the method presented here than by either spectrum analysis or demodulation analysis. Meanwhile, compared with the symmetrical wavelets constructed with the Lagrange interpolation algorithm, the asymmetrical wavelets constructed by data fitting are more suitable for feature extraction from fault signals of roller bearings. PMID:22666035
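
    The wavelet-selection step lends itself to a short sketch. The exact normalization of the l^p norm is not reproduced in this record, so the sparsity score below (an l^p-to-l^2 ratio, smaller = more impulsive) is an assumption.

      import numpy as np

      def normalized_lp_norm(x, p=0.5):
          """Sparsity score of a node signal; smaller values indicate a
          sparser, more impulsive decomposition."""
          x = np.abs(np.ravel(x))
          return (np.sum(x ** p)) ** (1.0 / p) / (np.linalg.norm(x) + 1e-12)

      def pick_best_wavelet(node_signals):
          """node_signals: dict of wavelet name -> decomposed approximate
          signal; returns the wavelet giving the sparsest node signal."""
          scores = {name: normalized_lp_norm(s) for name, s in node_signals.items()}
          return min(scores, key=scores.get)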

  10. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    PubMed Central

    Subhi Al-batah, Mohammad; Mat Isa, Nor Ashidi; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods to screen for cervical cancer (i.e., the Pap smear and liquid-based cytology (LBC)) are time-consuming, dependent on the skill of the cytopathologist, and thus rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in parallel to produce a model with a multi-input multi-output structure. The system is capable of classifying a cervical cell image into three groups: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove that the AFE algorithm is as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316

  11. Real-time feature extraction of P300 component using adaptive nonlinear principal component analysis

    PubMed Central

    2011-01-01

    Background The electroencephalography (EEG) signal is known to involve the firings of neurons in the brain. The P300 wave is a high potential caused by an event-related stimulus, and the detection of P300s included in measured EEG signals is widely investigated. Detection is difficult because P300s are mixed with other signals generated over a large brain area and their amplitudes are very small owing to distance and resistivity differences in their transmittance. Methods A novel real-time feature extraction method for detecting P300 waves is proposed that combines an adaptive nonlinear principal component analysis (ANPCA) and a multilayer neural network. The measured EEG signals are first filtered using a sixth-order band-pass filter with cut-off frequencies of 1 Hz and 12 Hz. The proposed ANPCA scheme consists of four steps: pre-separation, whitening, separation, and estimation. In the experiment, four different inter-stimulus intervals (ISIs) are utilized: 325 ms, 350 ms, 375 ms, and 400 ms. Results The multi-stage principal component analysis applied at the pre-separation step reduced external noises and artifacts significantly. The adaptive law introduced in the whitening step made the subsequent separation-step algorithm converge quickly. The separation performance index varied from -20 dB to -33 dB due to the randomness of the source signals. The robustness of the ANPCA against background noise was evaluated by comparing its separation performance indices with four algorithms (NPCA, NSS-JD, JADE, and SOBI), among which the ANPCA demonstrated the shortest iteration time, with a performance index of about 0.03. Based on this, it is asserted that the ANPCA algorithm successfully separates mixed source signals. Conclusions The independent components produced from the observed data using the proposed method illustrated that the extracted signals were clearly the P300 components elicited by task
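
    The pre-filtering step translates directly to code. A minimal SciPy sketch (note that SciPy's band-pass design doubles the specified order, so the realization details here are choices of this sketch, not necessarily the paper's):

      from scipy.signal import butter, filtfilt

      def bandpass_eeg(eeg, fs, low=1.0, high=12.0, order=6):
          """Butterworth band-pass at 1-12 Hz applied along the sample axis;
          filtfilt gives zero-phase filtering."""
          b, a = butter(order, [low, high], btype='bandpass', fs=fs)
          return filtfilt(b, a, eeg, axis=-1)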

  12. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures

    PubMed Central

    Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement demands skill, can be unintuitive, and usually requires a varying amount of user training. To boost the training process, a whole class of BCI systems has been proposed, providing feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within this category. Specifically, our adaptive strategy includes a simple scheme based on the common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was demonstrated by online testing on 10 healthy participants. In addition, we describe features implemented to improve the system's “flexibility” and “customizability,” namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback. PMID:27635129
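
    A compact sketch of the CSP-plus-SVM pipeline named above (two-class case, log-variance features); the adaptive retraining loop of the paper is omitted, and parameter choices are illustrative.

      import numpy as np
      from scipy.linalg import eigh
      from sklearn.svm import SVC

      def csp_filters(trials_a, trials_b, n_pairs=3):
          """Spatial filters from two classes of band-passed EEG trials,
          shape (n_trials, n_channels, n_samples) each."""
          def mean_cov(trials):
              return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
          Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
          vals, vecs = eigh(Ca, Ca + Cb)        # generalized eigenproblem
          order = np.argsort(vals)
          keep = np.r_[order[:n_pairs], order[-n_pairs:]]   # extreme filters
          return vecs[:, keep].T

      def log_var_features(trials, W):
          Z = np.einsum('fc,ncs->nfs', W, trials)
          v = Z.var(axis=2)
          return np.log(v / v.sum(axis=1, keepdims=True))

      # clf = SVC(kernel='linear').fit(log_var_features(train, W), y_train)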

  13. Self adaptive multi-scale morphology AVG-Hat filter and its application to fault feature extraction for wheel bearing

    NASA Astrophysics Data System (ADS)

    Deng, Feiyue; Yang, Shaopu; Tang, Guiji; Hao, Rujiang; Zhang, Mingliang

    2017-04-01

    Wheel bearings are essential mechanical components of trains, and fault detection of wheel bearings is of great significance for effectively avoiding economic loss and casualties. However, under operating conditions, detecting and extracting the fault features hidden in the heavy noise of the vibration signal is a challenging task. Therefore, a novel method called the adaptive multi-scale AVG-Hat morphological filter (MF) is proposed to address it. The morphological AVG-Hat operator not only greatly suppresses the interference of strong background noise but also enhances the ability to extract fault features. The improved envelope spectrum sparsity (IESS) is proposed as a new evaluation index to select the optimal filtered signal processed by the multi-scale AVG-Hat MF; it provides a comprehensive evaluation of the intensity of the fault impulses relative to the background noise. The weighted coefficients of the different-scale structural elements (SEs) in the multi-scale MF are adaptively determined by the particle swarm optimization (PSO) algorithm. The effectiveness of the method is validated by analyzing real wheel bearing fault vibration signals (e.g., outer race fault, inner race fault, and rolling element fault). The results show that the proposed method extracts fault features more effectively than the multi-scale combined morphological filter (CMF) and multi-scale morphology gradient filter (MGF) methods.
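
    The record does not spell out the AVG-Hat operator, so the Python sketch below assumes a common "average hat" form (the signal minus the mean of its opening and closing) and fixed weights in place of the PSO-tuned ones.

      import numpy as np
      from scipy.ndimage import grey_closing, grey_opening

      def avg_hat(signal, se_length):
          """Average-hat morphological filter with a flat structuring
          element of the given length (assumed operator definition)."""
          opened = grey_opening(signal, size=se_length)
          closed = grey_closing(signal, size=se_length)
          return signal - 0.5 * (opened + closed)

      def multiscale_avg_hat(signal, scales, weights):
          """Weighted sum over SE scales; in the paper the weights are
          determined adaptively by particle swarm optimization."""
          w = np.asarray(weights, dtype=float)
          w /= w.sum()
          return sum(wi * avg_hat(signal, s) for wi, s in zip(w, scales))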

  14. Finite State Machine with Adaptive Electromyogram (EMG) Feature Extraction to Drive Meal Assistance Robot

    NASA Astrophysics Data System (ADS)

    Zhang, Xiu; Wang, Xingyu; Wang, Bei; Sugi, Takenao; Nakamura, Masatoshi

    Surface electromyogram (EMG) signals from the elbow, wrist, and hand have been widely used as inputs to multifunction prostheses for many years. However, for patients with high-level limb deficiencies, muscle activity in the upper limbs is not strong enough to be used as a control signal. In this paper, EMG from the lower limbs is acquired and applied to drive a meal assistance robot. An onset detection method with an adaptive threshold based on EMG power is proposed to recognize different muscle contractions. Predefined control commands are output by a finite state machine (FSM) and applied to operate the robot. The performance of EMG control is compared with joystick control by both objective and subjective indices. The results show that the FSM provides the user with an easy-to-perform control strategy that successfully operates a robot with complicated control commands using limited muscle motions. The high accuracy and comfort of the EMG-controlled meal assistance robot make it feasible for users with upper-limb motor disabilities.
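
    A toy version of the control chain in Python: short-time EMG power, an adaptive onset threshold tied to the resting level, and a small finite state machine. The command set and threshold rule are illustrative assumptions, not the paper's.

      import numpy as np

      def moving_power(emg, win=64):
          """Short-time EMG power used for onset detection."""
          return np.convolve(emg ** 2, np.ones(win) / win, mode='same')

      def onset_mask(power, rest_power, k=3.0):
          """Adaptive threshold: a multiple of the resting power level."""
          return power > k * rest_power

      # Hypothetical command FSM: each detected burst advances the state.
      TRANSITIONS = {('idle', 'burst'): 'select_dish',
                     ('select_dish', 'burst'): 'scoop',
                     ('scoop', 'burst'): 'idle'}

      def step_fsm(state, burst_started):
          """Advance on the rising edge of a detected contraction."""
          if burst_started:
              return TRANSITIONS.get((state, 'burst'), state)
          return state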

  15. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  16. A feature extraction method of the particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization for Brillouin scattering spectra

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui

    2016-10-01

    A novel particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. First, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm. Based on the particles' current iteration number and fitness value, the algorithm changes the weight coefficient and adjusts the speed with which the search space is traversed, so the local optimization ability is enhanced. Second, a logistic self-mapping chaotic search is carried out via chaos optimization within the particle swarm optimization algorithm, which lets the algorithm escape local optima. The novel algorithm is compared with the finite element analysis-Levenberg-Marquardt algorithm, the particle swarm optimization-Levenberg-Marquardt algorithm, and the plain particle swarm optimization algorithm by varying the linewidth, the signal-to-noise ratio, and the linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to the feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs, and linear weight ratios. Therefore, this algorithm can be applied to distributed optical fiber sensing systems based on Brillouin optical time-domain reflectometry, effectively improving the accuracy of Brillouin frequency shift extraction.
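
    A condensed Python sketch of the two ingredients: an inertia weight that adapts over the run (here simple linear decay; the paper also conditions it on fitness) and a logistic-map chaotic perturbation of the global best to escape local optima. Step sizes and constants are illustrative.

      import numpy as np

      def pso_chaos(fitness, bounds, n=30, iters=200,
                    w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds, dtype=float).T
          x = rng.uniform(lo, hi, (n, len(lo)))
          v = np.zeros_like(x)
          pbest = x.copy()
          pval = np.array([fitness(p) for p in x])
          g, gval = pbest[pval.argmin()].copy(), pval.min()
          z = 0.7                                       # chaotic state
          for t in range(iters):
              w = w_max - (w_max - w_min) * t / iters   # adaptive inertia
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([fitness(p) for p in x])
              better = f < pval
              pbest[better], pval[better] = x[better], f[better]
              if pval.min() < gval:
                  g, gval = pbest[pval.argmin()].copy(), pval.min()
              z = 4.0 * z * (1.0 - z)                   # logistic chaotic search
              cand = np.clip(g + (z - 0.5) * 0.1 * (hi - lo), lo, hi)
              fc = fitness(cand)
              if fc < gval:
                  g, gval = cand.copy(), fc
          return g, gval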

  17. Mono-component feature extraction for mechanical fault diagnosis using modified empirical wavelet transform via data-driven adaptive Fourier spectrum segment

    NASA Astrophysics Data System (ADS)

    Pan, Jun; Chen, Jinglong; Zi, Yanyang; Li, Yueming; He, Zhengjia

    2016-05-01

    Due to the multi-modulation features in most vibration signals, the extraction of embedded fault information from condition monitoring data for mechanical fault diagnosis is still not an easy task. Despite reported achievements, the wavelet transform follows a dyadic partition scheme and does not allow a data-driven frequency partition. The Empirical Wavelet Transform (EWT) instead extracts inherent modulation information by decomposing the signal into mono-components under an orthogonal basis and a non-dyadic partition scheme. However, the predefined way of segmenting the Fourier spectrum, which does not depend on the analyzed signal, may result in inaccurate mono-component identification. In this paper, a modified EWT (MEWT) method with data-driven adaptive Fourier spectrum segmentation is proposed for mechanical fault identification. First, the inner product between the Fourier spectrum of the analyzed signal and a Gaussian function is calculated to obtain a scale representation. Then, adaptive spectrum segmentation is achieved by detecting local minima of the scale representation. Finally, empirical modes are obtained by adaptively merging mono-components based on their envelope-spectrum similarity. The adaptively extracted empirical modes are analyzed for mechanical fault identification. A simulation experiment and two application cases verify the effectiveness of the proposed method, and the results show its outstanding performance.
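
    The segmentation step can be sketched briefly: taking inner products of the spectrum with shifted Gaussians is equivalent to Gaussian smoothing, and segment boundaries are placed at local minima of that scale representation. Parameter choices below are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d
      from scipy.signal import argrelmin

      def adaptive_spectrum_segments(x, sigma=8):
          """Data-driven Fourier spectrum partition for EWT-style filters."""
          mag = np.abs(np.fft.rfft(x))
          scale_rep = gaussian_filter1d(mag, sigma)   # Gaussian inner products
          minima = argrelmin(scale_rep)[0]            # boundary candidates
          edges = np.r_[0, minima, len(mag) - 1]
          return list(zip(edges[:-1], edges[1:]))     # (start, stop) bins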

  18. Recursive Feature Extraction in Graphs

    SciTech Connect

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
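
    In the spirit of the tool's description (base topological counts plus recursive summaries of neighbors' features, CSV in and CSV out), a small Python sketch with networkx; the actual base features of ReFeX are richer than shown here.

      import csv
      import networkx as nx
      import numpy as np

      def recursive_features(G, n_rounds=2):
          feats = {v: [G.degree(v), nx.clustering(G, v)] for v in G}
          for _ in range(n_rounds):
              new = {}
              for v in G:
                  rows = [feats[u] for u in G[v]] or [[0.0] * len(feats[v])]
                  arr = np.asarray(rows, dtype=float)
                  # append mean and sum of neighbors' current features
                  new[v] = feats[v] + list(arr.mean(0)) + list(arr.sum(0))
              feats = new
          return feats

      def write_features(feats, path):
          with open(path, 'w', newline='') as f:
              w = csv.writer(f)
              for v, row in feats.items():
                  w.writerow([v, *row])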

  19. Real time on-chip sequential adaptive principal component analysis for data feature extraction and image compression

    NASA Technical Reports Server (NTRS)

    Duong, T. A.

    2004-01-01

    In this paper, we present a new, simple, and hardware-optimized sequential learning technique for adaptive Principal Component Analysis (PCA), which helps optimize the hardware implementation in VLSI and overcomes the difficulties of traditional gradient descent in learning convergence and hardware implementation.
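
    The record does not give the learning rule, but a classic sequential (sample-by-sample) PCA update that avoids batch gradient descent is Sanger's generalized Hebbian algorithm, sketched here in Python as a stand-in.

      import numpy as np

      def sanger_pca(samples, n_components=3, lr=1e-3, epochs=10, seed=0):
          """Sequential PCA: weights updated one sample at a time, a style
          that maps naturally onto on-chip learning hardware."""
          rng = np.random.default_rng(seed)
          W = rng.normal(scale=0.1, size=(n_components, samples.shape[1]))
          for _ in range(epochs):
              for x in samples:
                  y = W @ x
                  # dW = lr * (y x^T - lower_triangular(y y^T) W)
                  W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
          return W   # rows approximate the leading principal directions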

  20. Feature Extraction Without Edge Detection

    DTIC Science & Technology

    1993-09-01

    Technical Report 1434: Feature Extraction Without Edge Detection. Ronald D. Chaney, MIT Artificial Intelligence Laboratory. The scanned record also cites A.I. Memo 1356, MIT Artificial Intelligence Lab, April 1992, and W. A. Richards, B. Dawson, and D. Whittington on encoding contour shape.

  1. Information based universal feature extraction

    NASA Astrophysics Data System (ADS)

    Amiri, Mohammad; Brause, Rüdiger

    2015-02-01

    In many real-world image-based pattern recognition tasks, the extraction and usage of task-relevant features are the most crucial part of the diagnosis. In the standard approach, features mostly remain task-specific, although humans who perform such tasks always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good start and a performance increase for other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.

  2. Galaxy Classification without Feature Extraction

    NASA Astrophysics Data System (ADS)

    Polsterer, K. L.; Gieseke, F.; Kramer, O.

    2012-09-01

    The automatic classification of galaxies according to the different Hubble types is a widely studied problem in the field of astronomy. The complexity of this task led to projects like Galaxy Zoo, which try to obtain labeled data based on visual inspection by humans. Many automatic classification frameworks are based on artificial neural networks (ANN) in combination with a feature extraction step in the pre-processing phase. These approaches rely on labeled catalogs for training the models. The small size of the typically used training sets, however, limits the generalization performance of the resulting models. In this work, we present a straightforward application of support vector machines (SVM) to this type of classification task. The conducted experiments indicate that using a sufficient number of labeled objects provided by the EFIGI catalog leads to high-quality models. In contrast to standard approaches, no additional feature extraction is required.

  3. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2004-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts, and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  4. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, recirculation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts, and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  5. Wavelet Signal Processing for Transient Feature Extraction

    DTIC Science & Technology

    1992-03-15

    Research was conducted to evaluate the feasibility of applying wavelets and wavelet transform methods to transient signal feature extraction problems... Wavelet transform techniques were developed to extract low-dimensional feature data that allowed a simple classification scheme to easily separate

  6. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

    Written in Interactive Data Language (IDL) as a callable procedure that receives a 16-bit feature classification flag value as an argument. IDL is available from Exelis Visual Information Solutions.

  7. Object localization using adaptive feature selection

    NASA Astrophysics Data System (ADS)

    Hwang, S. Youngkyoo; Kim, Jungbae; Lee, Seongdeok

    2009-01-01

    'Fast and robust' are the most beautiful keywords in computer vision; unfortunately, they are in a trade-off relationship. We present a method to have one's cake and eat it using adaptive feature selection. Our chief insight is to compare reference patterns to query patterns so as to smartly select the more important and useful features for finding the target. The probability that each pixel in the query belongs to the target is calculated from the importance of the features. Our framework has three distinct advantages: 1) it saves computational cost dramatically compared with the conventional approach, making it possible to find the location of an object in real time; 2) it can smartly select robust features of a reference pattern while adapting to a query pattern; 3) it is highly flexible with respect to features: any feature can be used as long as it meets the histogram criteria, so many color-space, texture, motion, and other features fit perfectly.

  8. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as, small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as close contours in the gradient to be segmented.
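
    A brief Python sketch of the rock-extraction idea: a watershed applied to a smoothed gradient magnitude (the gradient underlying the Canny detector), with low-gradient regions as seeds. Parameters are illustrative, not from the patent.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage import measure, segmentation

      def extract_small_rocks(image, sigma=2.0):
          grad = ndi.gaussian_gradient_magnitude(image.astype(float), sigma=sigma)
          seeds, _ = ndi.label(grad < np.percentile(grad, 30))  # flat areas
          labels = segmentation.watershed(grad, seeds)          # close contours
          return measure.regionprops(labels)   # per-region size/shape features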

  9. Feature Extraction Based on Decision Boundaries

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    In this paper, a novel approach to feature extraction for classification is proposed based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly information feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises by noting that usually only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) It predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.

  10. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently in biometric and multimedia information retrieval systems. This technology builds on successive research on audio feature extraction analysis. The probability distribution function (PDF) is a statistical method that is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed that uses the PDF alone as the feature extraction method for speech analysis. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
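
    The method reduces each frame to an empirical amplitude distribution. A minimal Python sketch, assuming fixed-length frames and a normalized histogram as the PDF estimate:

      import numpy as np

      def frame_pdf_features(signal, frame_len=400, n_bins=32, rng=(-1.0, 1.0)):
          """One normalized amplitude histogram (empirical PDF) per frame."""
          n_frames = len(signal) // frame_len
          feats = []
          for i in range(n_frames):
              frame = signal[i * frame_len:(i + 1) * frame_len]
              hist, _ = np.histogram(frame, bins=n_bins, range=rng, density=True)
              feats.append(hist)
          return np.asarray(feats)   # (n_frames, n_bins)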

  11. Electronic Nose Feature Extraction Methods: A Review

    PubMed Central

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-01-01

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power density spectrum (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology. PMID:26540056

  12. Feature extraction for MRI segmentation.

    PubMed

    Velthuizen, R P; Hall, L O; Clarke, L P

    1999-04-01

    Magnetic resonance images (MRIs) of the brain are segmented to measure the efficacy of treatment strategies for brain tumors. To date, no reproducible technique for measuring tumor size is available to the clinician, which hampers progress of the search for good treatment protocols. Many segmentation techniques have been proposed, but the representation (features) of the MRI data has received little attention. A genetic algorithm (GA) search was used to discover a feature set from multi-spectral MRI data. Segmentations were performed using the fuzzy c-means (FCM) clustering technique. Seventeen MRI data sets from five patients were evaluated. The GA feature set produces a more accurate segmentation. The GA fitness function that achieves the best results is the Wilks's lambda statistic when applied to FCM clusters. Compared to linear discriminant analysis, which requires class labels, the same or better accuracy is obtained by the features constructed from a GA search without class labels, allowing fully operator independent segmentation. The GA approach therefore provides a better starting point for the measurement of the response of a brain tumor to treatment.

  13. Local feature point extraction for quantum images

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Lu, Kai; Xu, Kai; Gao, Yinghui; Wilson, Richard

    2015-05-01

    Quantum image processing has been a hot issue in the last decade. However, the lack of quantum feature extraction methods limits quantum image understanding. In this paper, a quantum feature extraction framework is proposed based on the novel enhanced quantum representation of digital images. Based on the design of quantum image addition and subtraction operations and some quantum image transformations, feature points can be extracted by comparing and thresholding the gradients of the pixels. Different methods of computing the pixel gradient and different thresholds can be realized under this quantum framework. The feature points extracted from a quantum image can be used to construct a quantum graph. Our work bridges the gap between quantum image processing and graph analysis based on quantum mechanics.

  14. Texture Analysis and Cartographic Feature Extraction.

    DTIC Science & Technology

    1985-01-01

    Investigations into using various image descriptors as well as developing interactive feature extraction software on the Digital Image Analysis Laboratory...system. Originator-supplied keywords: Ad-Hoc image descriptor; Bayes classifier; Bhattacharyya distance; Clustering; Digital Image Analysis Laboratory

  15. Selective Extraction of Entangled Textures via Adaptive PDE Transform

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2012-01-01

    Texture and feature extraction is an important research area with a wide range of applications in science and technology. Selective extraction of entangled textures is a challenging task due to spatial entanglement, orientation mixing, and high-frequency overlapping. The partial differential equation (PDE) transform is an efficient method for functional mode decomposition. The present work introduces adaptive PDE transform algorithm to appropriately threshold the statistical variance of the local variation of functional modes. The proposed adaptive PDE transform is applied to the selective extraction of entangled textures. Successful separations of human face, clothes, background, natural landscape, text, forest, camouflaged sniper and neuron skeletons have validated the proposed method. PMID:22315584

  16. Selective Extraction of Entangled Textures via Adaptive PDE Transform.

    PubMed

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2012-01-01

    Texture and feature extraction is an important research area with a wide range of applications in science and technology. Selective extraction of entangled textures is a challenging task due to spatial entanglement, orientation mixing, and high-frequency overlapping. The partial differential equation (PDE) transform is an efficient method for functional mode decomposition. The present work introduces adaptive PDE transform algorithm to appropriately threshold the statistical variance of the local variation of functional modes. The proposed adaptive PDE transform is applied to the selective extraction of entangled textures. Successful separations of human face, clothes, background, natural landscape, text, forest, camouflaged sniper and neuron skeletons have validated the proposed method.

  17. Automatic Extraction of Planetary Image Features

    NASA Technical Reports Server (NTRS)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation, and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.

  18. Large datasets: Segmentation, feature extraction, and compression

    SciTech Connect

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output that must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends that may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  19. Extraction of essential features by quantum density

    NASA Astrophysics Data System (ADS)

    Wilinski, Artur

    2016-09-01

    In this paper we consider the problem of feature extraction as an essential and important search of a dataset. The problem concerns the real ownership of the signals and images. The sought features are often difficult to identify because of data complexity and redundancy. We show a method of finding essential feature groups according to the defined issues. To find the hidden attributes we use a special algorithm, DQAL, with the quantum density for the j-th feature from the original data, which indicates the important set of attributes. Finally, small sets of attributes are generated for subsets with different properties of features. They can be used for the construction of a small set of essential features. All figures were made in Matlab 6.

  20. An Adaptive Feature Extractor for Gesture SEMG Recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Chen, Xiang; Zhao, Zhang-Yan; Li, Qiang; Yang, Ji-Hai; Lantz, Vuokko; Wang, Kong-Qiao

    This paper proposes an adaptive feature extraction method for pattern recognition of hand-gesture sEMG to enhance the reusability of myoelectric control. The feature extractor is based on the wavelet packet transform and the Local Discriminant Basis (LDB) algorithm, which select several optimized decomposition subspaces of the original sEMG waveforms produced by hand gesture motions. The square roots of the mean energy of the signal in those subspaces are then calculated to form the feature vector. In the data acquisition experiments, five healthy subjects performed six kinds of hand motions every day for a week. Recognition results for hand gestures based on sEMG signals measured in different use sessions demonstrate that the feature extractor is effective. Our work is valuable for the realization of myoelectric control systems in rehabilitation and other medical applications.
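
    The feature computation (square root of mean energy in selected wavelet-packet subspaces) is easy to sketch with PyWavelets; the LDB subspace selection itself is omitted, and the paths passed in are assumed to come from it.

      import numpy as np
      import pywt

      def wp_energy_features(emg, wavelet='db4', level=4, keep=None):
          """sqrt(mean energy) per wavelet-packet subspace; `keep` lists the
          LDB-selected node paths (all level-`level` nodes by default)."""
          wp = pywt.WaveletPacket(emg, wavelet=wavelet, maxlevel=level)
          paths = keep or [n.path for n in wp.get_level(level, order='freq')]
          return np.array([np.sqrt(np.mean(wp[p].data ** 2)) for p in paths])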

  1. On image matrix based feature extraction algorithms.

    PubMed

    Wang, Liwei; Wang, Xiao; Feng, Jufu

    2006-02-01

    Principal component analysis (PCA) and linear discriminant analysis (LDA) are two important feature extraction methods and have been widely applied in a variety of areas. A limitation of PCA and LDA is that when dealing with image data, the image matrices must first be transformed into vectors, which are usually of very high dimensionality. This causes expensive computational cost and sometimes the singularity problem. Recently, two methods called two-dimensional PCA (2DPCA) and two-dimensional LDA (2DLDA) were proposed to overcome this disadvantage by working directly on 2-D image matrices without a vectorization procedure. 2DPCA and 2DLDA significantly reduce the computational effort and the possibility of singularity in feature extraction. In this paper, we show that these matrix-based 2-D algorithms are equivalent to special cases of image-block-based feature extraction, i.e., partitioning each image into several blocks and performing standard PCA or LDA on the aggregate of all image blocks. These results provide a better understanding of the 2-D feature extraction approaches.
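
    For concreteness, a short 2DPCA sketch in Python operating directly on image matrices; per the equivalence discussed above, the same features could be obtained by standard PCA on the aggregate of image rows.

      import numpy as np

      def twod_pca(images, n_components=5):
          """2DPCA: eigenvectors of the image covariance G built from image
          rows, with no vectorization. images: (n, h, w) array."""
          mean = images.mean(axis=0)
          G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
          _, vecs = np.linalg.eigh(G)                 # ascending eigenvalues
          X = vecs[:, ::-1][:, :n_components]         # top eigenvectors
          return np.stack([A @ X for A in images])    # (n, h, n_components)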

  2. Extraction of linear features on SAR imagery

    NASA Astrophysics Data System (ADS)

    Liu, Junyi; Li, Deren; Mei, Xin

    2006-10-01

    Linear features are usually extracted from SAR imagery by edge detectors derived from the contrast-ratio edge detector with a constant probability of false alarm. The Hough Transform, on the other hand, is an elegant way of extracting global features like curve segments from binary edge images. The Randomized Hough Transform can reduce the computation time and memory usage of the HT drastically, but it leaves a great number of accumulator cells invalid during random sampling. In this paper, we propose a new approach to extract linear features from SAR imagery: an almost automatic algorithm based on edge detection and the Randomized Hough Transform. The improved method makes full use of the directional information of each candidate edge point to solve the invalid-accumulation problem. Applied results are in good agreement with the theoretical study, and the main linear features in SAR imagery have been extracted automatically. The method saves storage space and computational time, which shows its effectiveness and applicability.
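
    The core RHT loop is compact enough to sketch in Python: sample two edge points, solve for the unique (rho, theta) line through them, and accumulate only that one cell. The paper's directional-information refinement is not shown.

      import numpy as np

      def randomized_hough_lines(points, n_samples=5000, rho_res=1.0,
                                 theta_res=np.pi / 180, min_votes=30, seed=0):
          rng = np.random.default_rng(seed)
          pts = np.asarray(points, dtype=float)
          acc = {}
          for _ in range(n_samples):
              (x1, y1), (x2, y2) = pts[rng.choice(len(pts), 2, replace=False)]
              theta = np.arctan2(x2 - x1, y1 - y2)   # normal of the segment
              rho = x1 * np.cos(theta) + y1 * np.sin(theta)
              key = (round(rho / rho_res), round(theta / theta_res))
              acc[key] = acc.get(key, 0) + 1         # only one cell per sample
          return [(r * rho_res, t * theta_res)
                  for (r, t), v in acc.items() if v >= min_votes]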

  3. Speech feature extracting based on DSP

    NASA Astrophysics Data System (ADS)

    Niu, Jingtao; Shi, Zhongke

    2003-09-01

    In this paper, for voiced frames in speech processing, an implementation of LPC prediction-coefficient computation by the Levinson-Durbin algorithm on a DSP-based system is proposed, and an implementation of L. R. Rabiner's fundamental frequency estimation is discussed. At the end of this paper, several new methods of sound feature extraction using only voiced frames are also discussed.
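
    For reference, the Levinson-Durbin recursion for one voiced frame in Python (autocorrelation method; frame windowing and pre-emphasis omitted):

      import numpy as np

      def levinson_durbin(frame, order=10):
          """LPC polynomial a (a[0] = 1) and residual energy for one frame."""
          n = len(frame)
          r = np.correlate(frame, frame, mode='full')[n - 1:n + order]
          a = np.zeros(order + 1)
          a[0], err = 1.0, r[0]
          for i in range(1, order + 1):
              acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
              k = -acc / err                     # reflection coefficient
              prev = a.copy()
              for j in range(1, i):
                  a[j] = prev[j] + k * prev[i - j]
              a[i] = k
              err *= (1.0 - k * k)
          return a, err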

  4. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used in structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of various response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a method for comparing multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be used as an effective and quantifiable technique for selecting appropriate model parameters. In this process, however, one must consider not only the sensitivity of the features being used but also the correlation of the parameters being compared.
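
    The outlier-detection step mentioned above reduces to a few lines; a sketch assuming rows of feature vectors and a reference set (e.g., features from the experimental data):

      import numpy as np

      def mahalanobis_outliers(features, reference, threshold):
          """Distance of each feature vector to the reference feature cloud;
          large distances flag disagreement between model and experiment."""
          mu = reference.mean(axis=0)
          cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
          diff = features - mu
          d = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
          return d, d > threshold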

  5. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) shocks, (2) vortex cores, (3) regions of recirculation, (4) boundary layers, (5) wakes. Three papers and an initial specification for the FX (Fluid eXtraction) tool kit Programmer's Guide were included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.

  6. Impervious surface extraction using coupled spectral-spatial features

    NASA Astrophysics Data System (ADS)

    Yu, Xinju; Shen, Zhanfeng; Cheng, Xi; Xia, Liegang; Luo, Jiancheng

    2016-07-01

    Accurate extraction of urban impervious surface data from high-resolution imagery remains a challenging task because of the spectral heterogeneity of complex urban land-cover types. Since high-resolution imagery simultaneously provides plentiful spectral and spatial features, the accurate extraction of impervious surfaces depends on the effective extraction and integration of multiple spectral-spatial features. Different features have different importance for determining a certain class; traditional multifeature fusion methods that treat all features equally during classification cannot fully exploit the joint effect of multiple features. A fusion method combining distance metric learning (DML) and support vector machines is proposed to find the impervious and pervious subclasses in Chinese ZiYuan-3 (ZY-3) imagery. In the procedure of finding appropriate spectral and spatial feature combinations with DML, an optimized distance metric was obtained adaptively by learning from similarity side-information generated from labeled samples. Compared with the traditional vector stacking method, which uses each feature equally for multifeature fusion, the approach achieves an overall accuracy of 91.6% (4.1% higher) for a suburban dataset and 92.7% (3.4% higher) for a downtown dataset, indicating the effectiveness of the method for accurately extracting urban impervious surface data from ZY-3 imagery.

  7. Distributed feature extraction for event identification.

    SciTech Connect

    Berry, Nina M.; Ko, Teresa H.

    2004-05-01

    An important component of ubiquitous computing is the ability to quickly sense a dynamic environment and learn context awareness in real time. To pervasively capture detailed information about movements, we present a decentralized algorithm for feature extraction within a wireless sensor network. By approaching this problem in a distributed manner, we are able to work within the real constraints of wireless battery power and its effects on processing and network communications. We describe a hardware platform developed for low-power ubiquitous wireless sensing and a distributed feature extraction methodology capable of providing the user with more information about events while reducing power consumption. We demonstrate how collaboration between sensor nodes can provide a means of organizing large networks into information-based clusters.

  8. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired, and many more will become available for analysis in the coming years. Because of the huge amount of data, the images need to be analyzed, preferably by automatic processing techniques. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not yet been addressed. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including watershed segmentation and the generalized Hough Transform. The method has many applications, among them image registration, and can be applied to arbitrary planetary images.

  9. SU-E-J-257: A PCA Model to Predict Adaptive Changes for Head&neck Patients Based On Extraction of Geometric Features From Daily CBCT Datasets

    SciTech Connect

    Chetvertkov, M; Siddiqui, F; Chetty, I; Kim, J; Kumarasiri, A; Liu, C; Gordon, J

    2015-06-15

    Purpose: To use daily cone beam CTs (CBCTs) to develop principal component analysis (PCA) models of anatomical changes in head and neck (H&N) patients and to assess the possibility of using these prospectively in adaptive radiation therapy (ART). Methods: Planning CT (pCT) images of 4 H&N patients were deformed to model several different systematic changes in patient anatomy during the course of radiation therapy (RT). A Pinnacle plugin was used to linearly interpolate the systematic change over the 35-fraction RT course and to generate a set of 35 synthetic CBCTs, each representing the systematic change in patient anatomy at one fraction. Deformation vector fields (DVFs) were acquired between the pCT and the synthetic CBCTs, and random fraction-to-fraction changes were superimposed on the DVFs. A patient-specific PCA model was built using these DVFs containing systematic plus random changes. It was hypothesized that the resulting eigenDVFs (EDVFs) with the largest eigenvalues represent the major anatomical deformations during the course of treatment. Results: For all 4 patients, the PCA model provided different results depending on the type and size of the systematic change in the patient's body. PCA was more successful in capturing the systematic changes early in the treatment course when these were of a larger scale relative to the random fraction-to-fraction changes in the patient's anatomy. For smaller-scale systematic changes, random changes could completely "hide" the systematic change. Conclusion: The leading EDVF from the patient-specific PCA models could tentatively be identified as a major systematic change during treatment if the systematic change is large enough relative to the random fraction-to-fraction changes. Otherwise, the leading EDVF could not represent systematic changes reliably. This work is expected to facilitate development of population-based PCA models that can be used to prospectively identify significant
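
    A minimal sketch of building eigenDVFs by PCA over per-fraction DVFs, assuming each DVF is flattened into one row (the array names and the SVD route are our assumptions, not from the record):

        import numpy as np

        def eigen_dvfs(dvfs, n_modes=3):
            """dvfs: (n_fractions, n_voxels * 3) array of flattened DVFs.
            Returns the mean DVF, the leading eigenDVFs, and eigenvalues."""
            mean_dvf = dvfs.mean(axis=0)
            X = dvfs - mean_dvf
            # rows of Vt are the principal displacement patterns (EDVFs)
            U, S, Vt = np.linalg.svd(X, full_matrices=False)
            eigenvalues = S**2 / (len(dvfs) - 1)
            return mean_dvf, Vt[:n_modes], eigenvalues[:n_modes]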

  10. Adaptive features of aquatic mammals' eye.

    PubMed

    Mass, Alla M; Supin, Alexander Ya

    2007-06-01

    The eye of aquatic mammals demonstrates several adaptations to both underwater and aerial vision. This study offers a review of eye anatomy in four groups of aquatic animals: cetaceans (toothed and baleen whales), pinnipeds (seals, sea lions, and walruses), sirenians (manatees and dugongs), and sea otters. Eye anatomy and optics, retinal laminar morphology, and topography of ganglion cell distribution are discussed with particular reference to aquatic specializations for underwater versus aerial vision. Aquatic mammals display emmetropia (i.e., refraction of light to focus on the retina) while submerged, and most have mechanisms to achieve emmetropia above water to counter the resulting aerial myopia. As underwater vision necessitates adjusting to wide variations in luminosity, iris muscle contractions create species-specific pupil shapes that regulate the amount of light entering the pupil and, in pinnipeds, work in conjunction with a reflective optic tapetum. The retina of aquatic mammals is similar to that of nocturnal terrestrial mammals in containing mainly rod photoreceptors and a minor number of cones (however, residual color vision may take place). A characteristic feature of the cetacean and pinniped retina is the large size of ganglion cells separated by wide intercellular spaces. Studies of topographic distribution of ganglion cells in the retina of cetaceans revealed two areas of ganglion cell concentration (the best-vision areas) located in the temporal and nasal quadrants; pinnipeds, sirenians, and sea otters have only one such area. In general, the visual system of marine mammals demonstrates a high degree of development and several specific features associated with adaptation for vision in both the aquatic and aerial environments.

  11. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for it. This paper presents a definition for secondary flow and one approach for automatically detecting and visualizing it.

  12. Breast image feature learning with adaptive deconvolutional networks

    NASA Astrophysics Data System (ADS)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic features extracted from lesions. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images, with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently proposed unsupervised, generative hierarchical models that decompose images via convolutional sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets from two different modalities (739 full-field digital mammography (FFDM) images and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) to the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local-structure-preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect the learned image relationships. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for the ultrasound image set (1125 cases).

  13. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  14. Iris recognition based on key image feature extraction.

    PubMed

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  15. Concrete Slump Classification using GLCM Feature Extraction

    NASA Astrophysics Data System (ADS)

    Andayani, Relly; Madenda, Syarifudin

    2016-05-01

    Digital image processing technologies have been widely applied to the analysis of concrete structures because of their accuracy and real-time results. The aim of this study is to classify concrete slump using image processing techniques. For this purpose, concrete mixes with a design compressive strength of 30 MPa were prepared with slumps of 0-10 mm, 10-30 mm, 30-60 mm, and 60-180 mm. Images were acquired at high resolution with a Nikon D-7000 camera. In the first step, the RGB images were converted to grayscale and then cropped to 1024 x 1024 pixels. The cropped images were analyzed with an open-source program to extract GLCM features. The results show that for higher slump, contrast decreases while correlation, energy, and homogeneity increase.
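
    A minimal sketch of the GLCM feature computation using scikit-image (the distances, angles, and direction-averaging are our choices; note that scikit-image renamed greycomatrix to graycomatrix in recent releases):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(gray_img):
            """gray_img: 2-D uint8 array, e.g. the 1024 x 1024 crop."""
            glcm = graycomatrix(gray_img, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            # average each property over the four directions
            return {p: graycoprops(glcm, p).mean()
                    for p in ('contrast', 'correlation',
                              'energy', 'homogeneity')}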

  16. Extraction and Classification of Human Gait Features

    NASA Astrophysics Data System (ADS)

    Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee; Abdullah, Junaidi; Komiya, Ryoichi

    In this paper, a new approach is proposed for extracting human gait features from a walking human based on silhouette images. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying a morphological skeleton operation to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottoms of the right and left legs from the body segment skeletons. The joint angles and step size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.

  17. Morphological theory in image feature extraction

    NASA Astrophysics Data System (ADS)

    Gui, Feng; Lin, QiWei

    2003-06-01

    Morphology is a technique based on set theory that can be used for both binary and grayscale image processing. This paper discusses the principle and geometrical meaning of morphological boundary detection for images and analyzes the selection of the structuring element. A comparison between morphological boundary detection and traditional boundary detection methods leads to the conclusion that the morphological method has better compatibility and anti-interference capability. The method was also applied to left ventricular (L.V.) cineangiogram processing. This work aims to build a foundation for the automatic detection of L.V. contours based on the features of L.V. cineangiograms and morphological theory, in support of further study of L.V. wall motion abnormalities, since wall motion abnormalities of the L.V. due to myocardial ischemia caused by coronary atherosclerosis are a significant feature of atherosclerotic coronary heart disease (CHD). An algorithm based on morphology for extracting L.V. contours is developed in this paper.
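
    One standard realization of morphological boundary detection is the morphological gradient (dilation minus erosion); a minimal OpenCV sketch, with the structuring element size as our assumption:

        import cv2

        def morphological_boundary(gray, ksize=3):
            """Highlight boundaries as dilation minus erosion."""
            se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
            return cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, se)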

  18. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3). And methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: shocks; vortex cores; regions of recirculation; boundary layers; wakes.

  19. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences at the heart valves. Applying the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from 5 types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
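
    A minimal sketch of the Shannon energy envelope step (the frame length, sampling rate, and moving-average smoothing are our assumptions; the record's DWT stage is omitted):

        import numpy as np

        def shannon_envelope(x, fs, frame=0.02):
            """Shannon energy envelope of a heart sound signal x."""
            x = x / (np.max(np.abs(x)) + 1e-12)      # normalize to [-1, 1]
            se = -x**2 * np.log(x**2 + 1e-12)        # Shannon energy per sample
            n = max(1, int(frame * fs))
            # moving-average smoothing yields the envelope
            return np.convolve(se, np.ones(n) / n, mode='same')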

  20. Analysis of MABEL data for feature extraction

    NASA Astrophysics Data System (ADS)

    Magruder, L.; Neuenschwander, A. L.; Wharton, M.

    2011-12-01

    MABEL (Multiple Altimeter Beam Experimental Lidar) is a test-bed representation of ICESat-2, with a high repetition rate, low laser pulse energy, and photon-counting detection on an airborne platform. MABEL data can be scaled to simulate ICESat-2 data products and has proven critical for model validation and algorithm development. The recent MABEL flights over White Sands Missile Range (WSMR) in New Mexico have provided especially useful insight into potential processing schemes for this type of data, as well as how to extract specific geophysical or passive optical features. Although the MABEL data has not been precisely geolocated to date, approximate geolocations were derived using interpolated GPS data and aircraft attitude. In addition to providing an indication of the expected signal response over specific types of terrain and targets, the availability of MABEL data has also facilitated preliminary development of new types of noise filtering for photon-counting data products, which will contribute to future capabilities for ICESat-2 data extraction. One particularly useful methodology uses a combination of cluster weighting and neighbor-count weighting. For weighting clustered points, each individual point is tagged with the average distance to its neighbors within an established threshold. Histograms of these mean values are created for both a pure-noise section and a signal-plus-noise section, and a deconvolution of these histograms gives a normal distribution for the signal. A fitted Gaussian is used to calculate a threshold for the average distances. This removes locally sparse points, after which a regular neighborhood-count filter is applied with a larger search radius. The approach works well in high-noise cases and allows for improved signal recovery without being computationally expensive. One specific MABEL nadir channel ground track provided returns from several distinct ground markers that included multiple mounds, an elevated
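
    A minimal sketch of the neighborhood-count filtering stage for photon-counting data (the radius and count threshold are our placeholders; the cluster-weighting and histogram-deconvolution stages are not reproduced):

        import numpy as np
        from scipy.spatial import cKDTree

        def neighbor_count_filter(points, radius, min_neighbors):
            """Keep photon events with enough neighbors within `radius`.
            points: (N, 2) array of (along-track distance, elevation)."""
            tree = cKDTree(points)
            counts = np.array([len(tree.query_ball_point(p, radius)) - 1
                               for p in points])
            return points[counts >= min_neighbors]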

  1. Feature Adaptive Sampling for Scanning Electron Microscopy

    PubMed Central

    Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp

    2016-01-01

    A new method for the image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a-posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131

  2. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.

  3. Historical feature pattern extraction based network attack situation sensing algorithm.

    PubMed

    Zeng, Yong; Liu, Dacheng; Lei, Zhou

    2014-01-01

    The situation sequence contains a series of complicated and multivariate random trends, which are sudden and uncertain and whose underlying principles are difficult for traditional algorithms to recognize and describe. To address these problems, estimating the parameters of very long situation sequences is essential but difficult, so this paper proposes a situation prediction method based on historical feature pattern extraction (HFPE). First, the HFPE algorithm seeks similar indications in the recorded historical situation sequence and weighs the link intensity between an observed indication and its subsequent effect. Then it calculates the probability that a certain effect reappears given the current indication and makes a prediction after weighting. Meanwhile, the HFPE method provides an evolution algorithm to derive the prediction deviation from the perspectives of both pattern and accuracy. This algorithm can continuously improve the adaptability of HFPE through gradual fine-tuning. The method preserves the rules in the sequence as far as possible, does not need data preprocessing, and can continuously track and adapt to variations in the situation sequence.

  4. [Extraction method of the visual graphical feature from biomedical data].

    PubMed

    Li, Jing; Wang, Jinjia; Hong, Wenxue

    2011-10-01

    Vector space transformations such as principal component analysis (PCA), linear discriminant analysis (LDA), independent component analysis (ICA), or kernel-based methods may be applied to features extracted from the field to improve classification performance. In the present study, a barycentre graphical feature extraction method based on the star plot, a graphical representation of multi-dimensional data, was proposed. The question of feature ordering, which affects the star plot representation, was investigated, and a feature ordering method based on an improved genetic algorithm (GA) was proposed. For several biomedical datasets, such as breast cancer and diabetes, the classification error obtained with the barycentre graphical features of the star plot under the GA-based optimal feature order is very promising compared with previously reported classification methods and is superior to that of traditional feature extraction methods.

  5. Adaptive skin segmentation via feature-based face detection

    NASA Astrophysics Data System (ADS)

    Taylor, Michael J.; Morris, Tim

    2014-05-01

    Variations in illumination can have significant effects on the apparent colour of skin, which can be damaging to the efficacy of any colour-based segmentation approach. We attempt to overcome this issue by presenting a new adaptive approach, capable of generating skin colour models at run-time. Our approach adopts a Viola-Jones feature-based face detector, in a moderate-recall, high-precision configuration, to sample faces within an image, with an emphasis on avoiding potentially detrimental false positives. From these samples, we extract a set of pixels that are likely to be from skin regions, filter them according to their relative luma values in an attempt to eliminate typical non-skin facial features (eyes, mouths, nostrils, etc.), and hence establish a set of pixels that we can be confident represent skin. Using this representative set, we train a unimodal Gaussian function to model the skin colour in the given image in the normalised rg colour space - a combination of modelling approach and colour space that benefits us in a number of ways. A generated function can subsequently be applied to every pixel in the given image, and, hence, the probability that any given pixel represents skin can be determined. Segmentation of the skin, therefore, can be as simple as applying a binary threshold to the calculated probabilities. In this paper, we touch upon a number of existing approaches, describe the methods behind our new system, present the results of its application to arbitrary images of people with detectable faces, which we have found to be extremely encouraging, and investigate its potential to be used as part of real-time systems.
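
    A minimal sketch of the unimodal Gaussian skin model in the normalized rg colour space (fitting and per-pixel evaluation; the array names are ours, and the face-detection sampling stage is assumed to have already produced skin_pixels_rgb):

        import numpy as np

        def fit_skin_model(skin_pixels_rgb):
            """skin_pixels_rgb: (N, 3) RGB samples from detected faces."""
            rgb = skin_pixels_rgb.astype(float)
            s = rgb.sum(axis=1, keepdims=True) + 1e-6
            rg = rgb[:, :2] / s                  # normalized r and g channels
            mu = rg.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(rg, rowvar=False))
            return mu, cov_inv

        def skin_probability(image_rgb, mu, cov_inv):
            """Per-pixel (unnormalized) skin likelihood for an RGB image."""
            rgb = image_rgb.reshape(-1, 3).astype(float)
            s = rgb.sum(axis=1, keepdims=True) + 1e-6
            rg = rgb[:, :2] / s
            d = rg - mu
            m = np.einsum('ij,jk,ik->i', d, cov_inv, d)   # squared Mahalanobis
            return np.exp(-0.5 * m).reshape(image_rgb.shape[:2])

    Segmentation then reduces to applying a binary threshold to the returned probability map.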

  6. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACMs) have been extensively applied to image segmentation; however, conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional and drive the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we use an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF, built from local statistical features, with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original methods and achieves desirable results in segmenting infrared images.

  7. Extracting textural features from tactile sensors.

    PubMed

    Edwards, J; Lawry, J; Rossiter, J; Melhuish, C

    2008-09-01

    This paper describes an experiment to quantify texture using an artificial finger equipped with a microphone to detect frictional sound. Using a microphone to record tribological data is a biologically inspired approach that emulates the Pacinian corpuscle. Artificial surfaces were created to constrain the subsequent analysis to specific textures. Recordings of the artificial surfaces were made to create a library of frictional sounds for data analysis. These recordings were mapped to the frequency domain using fast Fourier transforms for direct comparison, manipulation and quantifiable analysis. Numerical features such as modal frequency and average value were calculated to analyze the data and compared with attributes generated from principal component analysis (PCA). It was found that numerical features work well for highly constrained data but cannot classify multiple textural elements. PCA groups textures according to a natural similarity. Classification of the recordings using k nearest neighbors shows a high accuracy for PCA data. Clustering of the PCA data shows that similar discs are grouped together with few classification errors. In contrast, clustering of numerical features produces erroneous classification by splitting discs between clusters. The temperature of the finger is shown to have a direct relation to some of the features and subsequent data in PCA.

  8. Feature Extraction Using an Unsupervised Neural Network

    DTIC Science & Technology

    1991-05-03

    A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing distinguishing features in the data is presented. A statistical framework for the parameter estimation problem associated with this neural network is given and its connection to exploratory projection pursuit methods is established. The network is shown to minimize a loss function (projection index) over a

  9. Feature extraction of arc tracking phenomenon

    NASA Technical Reports Server (NTRS)

    Attia, John Okyere

    1995-01-01

    This document outlines arc tracking signals, covering both data acquisition and signal processing. The objective is to obtain the salient features of the arc tracking phenomenon. As part of the signal processing, the power spectral density is obtained and used in a MATLAB program.
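
    The record's processing was done in MATLAB; an equivalent power spectral density estimate in Python via Welch's method might look like the following sketch (the sampling rate and segment length are our assumptions):

        from scipy.signal import welch

        def arc_psd(arc_signal, fs, nperseg=1024):
            """PSD of a sampled arc-tracking signal via Welch's method."""
            f, pxx = welch(arc_signal, fs=fs, nperseg=nperseg)
            return f, pxx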

  10. Universal Feature Extraction for Traffic Identification of the Target Category

    PubMed Central

    Shen, Jian

    2016-01-01

    Traffic identification of the target category is currently a significant challenge for network monitoring and management. To identify the target category with pertinence, a feature extraction algorithm based on the subset with the highest proportion is presented in this paper. The method is designed to identify any category that is assigned as the target one and is not restricted to a specific category. We divide the process of feature extraction into two stages. In the primary feature extraction stage, the feature subset is extracted from the dataset that has the highest proportion of the target category. In the secondary feature extraction stage, features that can distinguish the target and interfering categories are added to the feature subset. Our theoretical analysis and experimental observations reveal that the proposed algorithm is able to extract fewer features with greater identification ability for the target category. Moreover, the universality of the proposed algorithm is demonstrated by an experiment in which every category in turn is set as the target. PMID:27832103

  11. Heuristical Feature Extraction from LIDAR Data and Their Visualization

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lohani, B.

    2011-09-01

    Extraction of landscape features from LiDAR data has been studied widely in the past few years. These feature extraction methodologies have been focussed on certain types of features only, namely the bare earth model, buildings principally containing planar roofs, trees and roads. In this paper, we present a methodology to process LiDAR data through DBSCAN, a density based clustering method, which extracts natural and man-made clusters. We then develop heuristics to process these clusters and simplify them to be sent to a visualization engine.
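
    A minimal sketch of the DBSCAN clustering stage on LiDAR returns using scikit-learn (eps and min_samples are our placeholders, to be tuned to the point density):

        from sklearn.cluster import DBSCAN

        def cluster_lidar(points, eps=1.5, min_samples=10):
            """points: (N, 3) array of x, y, z LiDAR returns.
            Returns one cluster label per point; -1 marks noise."""
            return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)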

  12. Feature Extraction for Structural Dynamics Model Validation

    SciTech Connect

    Farrar, Charles; Nishio, Mayuko; Hemez, Francois; Stull, Chris; Park, Gyuhae; Cornwell, Phil; Figueiredo, Eloi; Luscher, D. J.; Worden, Keith

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  13. Adaptive feature-specific imaging: a face recognition example.

    PubMed

    Baheti, Pawan K; Neifeld, Mark A

    2008-04-01

    We present an adaptive feature-specific imaging (AFSI) system and consider its application to a face recognition task. The proposed system makes use of previous measurements to adapt the projection basis at each step. Using sequential hypothesis testing, we compare AFSI with static-FSI (SFSI) and with static or adaptive conventional imaging in terms of the number of measurements required to achieve a specified probability of misclassification (Pe). The AFSI system exhibits significant improvement compared to SFSI and conventional imaging at low signal-to-noise ratio (SNR). It is shown that for M = 4 hypotheses and a desired Pe = 10^(-2), AFSI requires 100 times fewer measurements than the adaptive conventional imager at SNR = -20 dB. We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage, resulting in an optimal value of integration time (equivalent to SNR) per measurement.

  14. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  15. Image feature extraction using Gabor-like transform

    NASA Technical Reports Server (NTRS)

    Finegan, Michael K., Jr.; Wee, William G.

    1991-01-01

    Noisy and highly textured images were operated on with a Gabor-like transform. The results were evaluated to see if useful features could be extracted using spatio-temporal operators. The use of spatio-temporal operators allows for extraction of features containing simultaneous frequency and orientation information. This method allows important features, both specific and generic, to be extracted from images. The transformation was applied to industrial inspection imagery, in particular, a NASA space shuttle main engine (SSME) system for offline health monitoring. Preliminary results are given and discussed. Edge features were extracted from one of the test images. Because of the highly textured surface (even after scan line smoothing and median filtering), the Laplacian edge operator yields many spurious edges.
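
    A minimal sketch of extracting Gabor-based texture features with scikit-image (the frequency and orientation grid is our choice; the record's specific spatio-temporal operator is not reproduced):

        import numpy as np
        from skimage.filters import gabor

        def gabor_features(gray, frequencies=(0.1, 0.2), n_orient=4):
            """Mean/std of Gabor magnitude over a frequency-orientation grid."""
            feats = []
            for f in frequencies:
                for theta in np.arange(n_orient) * np.pi / n_orient:
                    real, imag = gabor(gray, frequency=f, theta=theta)
                    mag = np.hypot(real, imag)
                    feats.extend([mag.mean(), mag.std()])
            return np.array(feats)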

  16. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  17. Dissociating conflict adaptation from feature integration: a multiple regression approach.

    PubMed

    Notebaert, Wim; Verguts, Tom

    2007-10-01

    Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on feature repetition and/or integration effects (e.g., B. Hommel, R. W. Proctor, & K.-P. Vu, 2004; U. Mayr, E. Awh, & P. Laurey, 2003). Previous attempts to dissociate feature integration from conflict adaptation focused on a particular subset of the data in which feature transitions were held constant (J. G. Kerns et al., 2004) or in which congruency transitions were held constant (C. Akcay & E. Hazeltine, in press), but this has a number of disadvantages. In this article, the authors present a multiple regression solution for this problem and discuss its possibilities and pitfalls.

  18. EEG signal features extraction based on fractal dimension.

    PubMed

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-01-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance.
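
    The record does not specify which fractal dimension estimator was used; Higuchi's method is one widely used choice for EEG, sketched below (the variable names are ours):

        import numpy as np

        def higuchi_fd(x, kmax=10):
            """Higuchi fractal dimension of a 1-D signal."""
            x = np.asarray(x, dtype=float)
            n = len(x)
            lk = np.empty(kmax)
            for k in range(1, kmax + 1):
                lengths = []
                for m in range(k):
                    idx = np.arange(m, n, k)
                    if len(idx) < 2:
                        continue
                    d = np.abs(np.diff(x[idx])).sum()
                    # Higuchi's curve-length normalization
                    lengths.append(d * (n - 1) / ((len(idx) - 1) * k))
                lk[k - 1] = np.mean(lengths) / k
            # FD is the slope of log L(k) versus log(1/k)
            slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)),
                                  np.log(lk), 1)
            return slope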

  19. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms and selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured by the classification accuracy and the free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.

  20. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification.

    PubMed

    Baali, Hamza; Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J E

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain-computer interfaces (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT)- and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT- and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q-statistic and Hotelling's T^2 statistic of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%.

  1. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    ,

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  2. Feature extraction with LIDAR data and aerial images

    NASA Astrophysics Data System (ADS)

    Mao, Jianhua; Liu, Yanjing; Cheng, Penggen; Li, Xianhua; Zeng, Qihong; Xia, Jing

    2006-10-01

    Raw LIDAR data is an irregularly spaced 3D point cloud containing reflections from bare ground, buildings, vegetation, vehicles, etc., and the first task in analyzing the point cloud is feature extraction. However, the interpretability of a LIDAR point cloud is often limited because no object information is provided, and the complex earth topography and object morphology make it impossible for a single operator to classify the entire point cloud with complete precision. In this paper, a hierarchical method for feature extraction with LIDAR data and aerial images is discussed. The aerial images provide information about object shape and spatial distribution, and hierarchical classification of features makes it easy to apply automatic filters progressively. The experimental results show that, using this method, it was possible to detect more object information and obtain better feature extraction results than with automatic filters alone.

  3. New approach in features extraction for EEG signal detection.

    PubMed

    Guerrero-Mosquera, Carlos; Vazquez, Angel Navia

    2009-01-01

    This paper describes a new approach to feature extraction using time-frequency distributions (TFDs) for detecting epileptic seizures and identifying abnormalities in the electroencephalogram (EEG). In particular, the method extracts features using the Smoothed Pseudo Wigner-Ville distribution combined with the McAulay-Quatieri sinusoidal model and identifies abnormal neural discharges. We propose a new feature based on track length that, combined with energy and frequency features, makes it possible to isolate a continuous energy trace from other oscillations when an epileptic seizure is beginning. We evaluate our approach using data consisting of 16 different seizures from 6 epileptic patients. The results show that our extraction method is a suitable approach for automatic seizure detection and opens the possibility of formulating new criteria to detect and analyze abnormal EEGs.

  4. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    Limited awareness of the deaf and hard-of-hearing community in India widens the communication gap between deaf and hearing people. Sign language has been developed for deaf and hard-of-hearing people to convey messages by generating different sign patterns. The scale invariant feature transform (SIFT) was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.
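
    A minimal sketch of SIFT keypoint and descriptor extraction with OpenCV (SIFT_create is available in opencv-python 4.4+; the file path is a placeholder):

        import cv2

        def extract_sift(image_path):
            gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
            sift = cv2.SIFT_create()
            keypoints, descriptors = sift.detectAndCompute(gray, None)
            return keypoints, descriptors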

  5. Feature extraction for magnetic domain images of magneto-optical recording films using gradient feature segmentation

    NASA Astrophysics Data System (ADS)

    Quanqing, Zhu; Xinsai, Wang; Xuecheng, Zou; Haihua, Li; Xiaofei, Yang

    2002-07-01

    In this paper, we present a method for feature extraction from low-contrast magnetic domain images of magneto-optical recording films. The method is based on the following three steps: first, the Lee filtering method is adopted for pre-filtering and noise reduction; this is followed by gradient feature segmentation, which separates the object area from the background area; finally, the common linking method is adopted and the characteristic parameters of the magnetic domains are calculated. We describe these steps with particular emphasis on the gradient feature segmentation. The results show that this method has advantages over traditional ones for feature extraction from low-contrast images.

  6. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2017-02-01

    Limited awareness of the deaf and hard-of-hearing community in India widens the communication gap between deaf and hearing people. Sign language has been developed for deaf and hard-of-hearing people to convey messages by generating different sign patterns. The scale invariant feature transform (SIFT) was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required for each phase and the number of features extracted for 26 ISL gestures.

  7. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  8. Adaptive Local Spatiotemporal Features from RGB-D Data for One-Shot Learning Gesture Recognition.

    PubMed

    Lin, Jia; Ruan, Xiaogang; Yu, Naigong; Yang, Yee-Hong

    2016-12-17

    Noise and constant empirical motion constraints affect the extraction of distinctive spatiotemporal features from one or a few samples per gesture class. To tackle these problems, an adaptive local spatiotemporal feature (ALSTF) using fused RGB-D data is proposed. First, motion regions of interest (MRoIs) are adaptively extracted using grayscale and depth velocity variance information to greatly reduce the impact of noise. Then, corners are used as keypoints if their depth and their grayscale and depth velocities meet several adaptive local constraints in each MRoI. With further filtering of noise, an accurate and sufficient number of keypoints is obtained within the desired moving body parts (MBPs). Finally, four kinds of multiple descriptors are calculated and combined in extended gradient and motion spaces to represent the appearance and motion features of the gestures. The experimental results on the ChaLearn gesture, CAD-60 and MSRDailyActivity3D datasets demonstrate that the proposed feature achieves higher performance compared with published state-of-the-art approaches under the one-shot learning setting and comparable accuracy under leave-one-out cross validation.

  9. Adaptive Local Spatiotemporal Features from RGB-D Data for One-Shot Learning Gesture Recognition

    PubMed Central

    Lin, Jia; Ruan, Xiaogang; Yu, Naigong; Yang, Yee-Hong

    2016-01-01

    Noise and constant empirical motion constraints affect the extraction of distinctive spatiotemporal features from one or a few samples per gesture class. To tackle these problems, an adaptive local spatiotemporal feature (ALSTF) using fused RGB-D data is proposed. First, motion regions of interest (MRoIs) are adaptively extracted using grayscale and depth velocity variance information to greatly reduce the impact of noise. Then, corners are used as keypoints if their depth and their grayscale and depth velocities meet several adaptive local constraints in each MRoI. With further filtering of noise, an accurate and sufficient number of keypoints is obtained within the desired moving body parts (MBPs). Finally, four kinds of multiple descriptors are calculated and combined in extended gradient and motion spaces to represent the appearance and motion features of the gestures. The experimental results on the ChaLearn gesture, CAD-60 and MSRDailyActivity3D datasets demonstrate that the proposed feature achieves higher performance compared with published state-of-the-art approaches under the one-shot learning setting and comparable accuracy under leave-one-out cross validation. PMID:27999337

  10. Texture feature extraction methods for microcalcification classification in mammograms

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Pourabdollah-Nezhad, Siamak; Rafiee Rad, Farshid

    2000-06-01

    We present development, application, and performance evaluation of three different texture feature extraction methods for classification of benign and malignant microcalcifications in mammograms. The steps of the work accomplished are as follows. (1) A total of 103 regions containing microcalcifications were selected from a mammographic database. (2) For each region, texture features were extracted using three approaches: co-occurrence based method of Haralick; wavelet transformations; and multi-wavelet transformations. (3) For each set of texture features, most discriminating features and their optimal weights were found using a real-valued genetic algorithm (GA) and a training set. For each set of features and weights, a KNN classifier and a malignancy criterion were used to generate the corresponding ROC curve. The malignancy of a given sample was defined as the number of malignant neighbors among its K nearest neighbors. The GA found a population with the largest area under the ROC curve. (4) The best results obtained using each set of features were compared. The best set of features generated areas under the ROC curve ranging from 0.82 to 0.91. The multi-wavelet method outperformed the other two methods, and the wavelet features were superior to the Haralick features. Among the multi-wavelet methods, redundant initialization generated superior results compared to non-redundant initialization. For the best method, a true positive fraction larger than 0.85 and a false positive fraction smaller than 0.1 were obtained.

  11. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2015-01-01

    Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
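
    Since the MFEA builds on image moments, a minimal sketch of the underlying quantity may help; the two-layer bucket logic itself is not reproduced here.

      import numpy as np

      def raw_moment(img, p, q):
          y, x = np.mgrid[:img.shape[0], :img.shape[1]]
          return (img.astype(float) * x**p * y**q).sum()

      def centroid(img):
          # moment-derived feature: intensity centroid, trackable across frames
          m00 = raw_moment(img, 0, 0)
          return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00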

  12. Research on feature data extraction algorithms of printing

    NASA Astrophysics Data System (ADS)

    Sun, Zhihui; Ma, Jianzhuang

    2013-07-01

    Ink-cell images of electric-carving printing taken under complex lighting conditions cannot yield accurate edge information with traditional image processing algorithms, so the feature data cannot be extracted accurately. This paper uses an improved P&M (Perona-Malik) equation to smooth the ink-cell image, an eight-direction edge detector based on the Sobel operator to search for the ink-cell edges, and an edge-tracking algorithm to record the coordinates of the edge points. These algorithms effectively reduce the influence of uneven lighting and accurately extract the feature data of the ink cell.
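
    For reference, a minimal sketch of classic Perona-Malik (P&M) diffusion followed by a Sobel edge magnitude; the paper's improved equation and eight-direction detector are not reproduced.

      import numpy as np
      from scipy import ndimage

      def perona_malik(img, n_iter=10, kappa=30.0, lam=0.2):
          u = img.astype(float)
          g = lambda d: np.exp(-(d / kappa) ** 2)     # diffusion coefficient
          for _ in range(n_iter):
              dN = np.roll(u, -1, 0) - u              # gradients to 4 neighbors
              dS = np.roll(u, 1, 0) - u
              dE = np.roll(u, -1, 1) - u
              dW = np.roll(u, 1, 1) - u
              u = u + lam * (g(dN)*dN + g(dS)*dS + g(dE)*dE + g(dW)*dW)
          return u

      smooth = perona_malik(np.random.rand(64, 64))
      edges = np.hypot(ndimage.sobel(smooth, 0), ndimage.sobel(smooth, 1))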

  13. Automated blood vessel extraction using local features on retinal images

    NASA Astrophysics Data System (ADS)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations among neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are sensitive to image rotation, so the method was improved by also computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features based on 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the first ANN, a Gabor filter, a double-ring filter and a black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC output high (white) values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. In our study, the AUC of ANN2 was 0.960. The result can be used for the quantitative analysis of blood vessels.
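
    A minimal sketch of HLAC feature computation is shown below with a few illustrative displacement masks; the full 105-mask set used in the paper is not enumerated.

      import numpy as np

      OFFSETS = [(0, 0), (0, 1), (1, 0), (1, 1), (-1, 1)]  # sample displacements

      def hlac(img):
          f = img.astype(float)
          feats = [f.sum()]                                  # 0th order
          for off in OFFSETS[1:]:                            # 1st order
              feats.append((f * np.roll(f, off, (0, 1))).sum())
          for i in range(1, len(OFFSETS)):                   # 2nd order pairs
              for j in range(i, len(OFFSETS)):
                  a = np.roll(f, OFFSETS[i], (0, 1))
                  b = np.roll(f, OFFSETS[j], (0, 1))
                  feats.append((f * a * b).sum())
          return np.array(feats)   # shift-invariant feature vector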

  14. Extraction of features from 3D laser scanner cloud data

    NASA Astrophysics Data System (ADS)

    Chan, Vincent H.; Bradley, Colin H.; Vickers, Geoffrey W.

    1997-12-01

    One of the roadblocks on the path to automated reverse engineering has been the extraction of useful data from the copious range data generated by 3-D laser scanning systems. A method to extract the relevant features of a scanned object is presented. A 3-D laser scanner is automatically directed to obtain discrete laser cloud data on each separate patch that constitutes the object's surface. With each set of cloud data treated as a separate entity, primitives are fitted to the data, resulting in a geometric and topologic database. Using a feed-forward neural network, the data are analyzed for geometric combinations that make up machining features such as through-holes and slots. These features are required for the reconstruction of the solid model by a machinist or feature-based CAM algorithms, thus completing the reverse engineering cycle.

  15. Image feature extraction based multiple ant colonies cooperation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on the cooperation of multiple ant colonies. First, a low-resolution version of the input image is created using a Gaussian pyramid algorithm, and two ant colonies are spread on the source image and the low-resolution image, respectively. The colony on the low-resolution image uses phase congruency as its heuristic information, while the colony on the source image uses gradient magnitude. The two colonies cooperate to extract salient image features by sharing the same pheromone matrix. After the optimization process, image features are detected by thresholding the pheromone matrix. Since both gradient magnitude and phase congruency of the input image are used as heuristic information for the ant colonies, our algorithm exhibits more intelligent behavior and is capable of acquiring more complete and meaningful image features than simpler edge detectors.
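
    A minimal single-colony sketch of the pheromone mechanism is given below, using only the gradient-magnitude heuristic; the paper's two-colony cooperation and phase-congruency heuristic are omitted.

      import numpy as np

      def ant_colony_features(grad, n_ants=200, n_steps=300, rho=0.05, seed=0):
          rng = np.random.default_rng(seed)
          H, W = grad.shape
          tau = np.full((H, W), 1e-3)                 # shared pheromone matrix
          pos = np.column_stack([rng.integers(0, H, n_ants),
                                 rng.integers(0, W, n_ants)])
          for _ in range(n_steps):
              for a in range(n_ants):
                  y, x = pos[a]
                  ny = int(np.clip(y + rng.integers(-1, 2), 0, H - 1))
                  nx = int(np.clip(x + rng.integers(-1, 2), 0, W - 1))
                  # accept moves (and deposit) in proportion to the heuristic
                  if rng.random() < grad[ny, nx] / (grad.max() + 1e-9):
                      pos[a] = ny, nx
                      tau[ny, nx] += grad[ny, nx]
              tau *= (1 - rho)                        # evaporation
          return tau > tau.mean() + 2 * tau.std()     # thresholded features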

  16. Feature extraction from multiple data sources using genetic programming.

    SciTech Connect

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.; Eads, D. R.; Galassi, M. C.; Harvey, N. R.; Perkins, S. J.; Porter, R. B.; Theiler, J. P.; Young, A. C.; Bloch, J. J.; David, N. A.; Esch-Mosher, D. M.

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  17. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies that exist among adjacent pixels are intelligently incorporated into the decision-making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class-separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.

  18. Data Feature Extraction for High-Rate 3-Phase Data

    SciTech Connect

    2016-10-18

    This algorithm processes high-rate 3-phase signals to identify the start time of each signal and estimate its envelope as data features. The start time and magnitude of each signal during the steady state are also extracted. The features can be used to detect abnormal signals. This algorithm was developed to analyze Exxeno's 3-phase voltage and current data recorded from refrigeration systems to detect device failure or degradation.
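
    The released details are sparse, but the two named features are standard; a minimal sketch under that assumption uses a Hilbert-transform envelope and a simple fractional threshold for the start time.

      import numpy as np
      from scipy.signal import hilbert

      def start_and_envelope(x, fs, frac=0.1):
          env = np.abs(hilbert(x))                         # amplitude envelope
          start = np.argmax(env > frac * env.max()) / fs   # first crossing (s)
          return start, env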

  19. Validation points generation for LiDAR-extracted hydrologic features

    NASA Astrophysics Data System (ADS)

    Felicen, M. M.; De La Cruz, R. M.; Olfindo, N. T.; Borlongan, N. J. B.; Ebreo, D. J. R.; Perez, A. M. C.

    2016-10-01

    This paper discusses a novel way of generating sampling points of hydrologic features, specifically streams, irrigation network and inland wetlands, that could provide a promising measure of accuracy using combinations of traditional statistical sampling methods. Traditional statistical sampling techniques such as simple random sampling, systematic sampling, stratified sampling and disproportionate random sampling were all designed to generate points in an area where all the cells are classified and subjected to actual field validation. However, these sampling techniques are not applicable when generating points along linear features. This paper presents the Weighted Disproportionate Stratified Systematic Random Sampling (WDSSRS), a tool that combines the systematic and disproportionate stratified random sampling methods in generating points for accuracy computation. This tool makes use of a map series boundary shapefile covering around 27 by 27 kilometers at a scale of 1:50000, and the LiDAR-extracted hydrologic features shapefiles (e.g. wetland polygons and linear features of stream and irrigation network). Using the map sheet shapefile, a 10 x 10 grid is generated, and grid cells with water and non-water features are tagged accordingly. Cells with water features are checked for the presence of intersecting linear features, and the intersections are given higher weights in the selection of validation points. The grid cells with non-intersecting linear features are then evaluated and the remaining points are generated randomly along these features. For grid cells with non-water features, the sample points are generated randomly.
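
    A minimal sketch of the weighting idea only: cells whose linear water features intersect receive higher sampling weight (the factor is illustrative); placement of points along the features themselves is not reproduced.

      import numpy as np

      def sample_cells(has_intersection, n_points, w_intersect=3.0, seed=0):
          rng = np.random.default_rng(seed)
          w = np.where(has_intersection, w_intersect, 1.0).astype(float)
          return rng.choice(len(w), size=n_points, replace=False, p=w / w.sum())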

  20. Actively controlled multiple-sensor system for feature extraction

    NASA Astrophysics Data System (ADS)

    Daily, Michael J.; Silberberg, Teresa M.

    1991-08-01

    Typical vision systems which attempt to extract features from a visual image of the world for the purposes of object recognition and navigation are limited by the use of a single sensor and no active sensor control capability. To overcome limitations and deficiencies of rigid single sensor systems, more and more researchers are investigating actively controlled, multisensor systems. To address these problems, we have developed a self-calibrating system which uses active multiple sensor control to extract features of moving objects. A key problem in such systems is registering the images, that is, finding correspondences between images from cameras of differing focal lengths, lens characteristics, and positions and orientations. The authors first propose a technique which uses correlation of edge magnitudes for continuously calibrating pan and tilt angles of several different cameras relative to a single camera with a wide angle field of view, which encompasses the views of every other sensor. A simulation of a world of planar surfaces, visual sensors, and a robot platform used to test active control for feature extraction is then described. Motion in the field of view of at least one sensor is used to center the moving object for several sensors, which then extract object features such as color, boundary, and velocity from the appropriate sensors. Results are presented from real cameras and from the simulated world.

  1. Transfer Learning for Adaptive Relation Extraction

    DTIC Science & Technology

    2011-09-13

    (Abstract not available; the record contains only table-of-contents fragments: 3.4 Exploiting Tree Structures; 3.4.1 Motivation; 3.6.2 High-order Semi-Markov Features; Appendix A, Algorithms for the Tree Model; A.1 Training.)

  2. Efficient and robust feature extraction by maximum margin criterion.

    PubMed

    Li, Haifeng; Jiang, Tao; Zhang, Keshu

    2006-01-01

    In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. Principal component analysis (PCA) and linear discriminant analysis (LDA) are the two most popular linear dimensionality reduction methods. However, PCA is not very effective for the extraction of the most discriminant features, and LDA is not stable due to the small sample size problem. In this paper, we propose some new (linear and nonlinear) feature extractors based on maximum margin criterion (MMC). Geometrically, feature extractors based on MMC maximize the (average) margin between classes after dimensionality reduction. It is shown that MMC can represent class separability better than PCA. As a connection to LDA, we may also derive LDA from MMC by incorporating some constraints. By using some other constraints, we establish a new linear feature extractor that does not suffer from the small sample size problem, which is known to cause serious stability problems for LDA. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in the paper. Our extensive experiments demonstrate that the new feature extractors are effective, stable, and efficient.
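
    Linear MMC has a compact closed form: project onto the top eigenvectors of S_b - S_w, which maximizes tr(W^T (S_b - S_w) W) without inverting S_w (hence no small-sample instability). A minimal sketch:

      import numpy as np

      def mmc(X, y, d):
          mu = X.mean(axis=0)
          Sb = np.zeros((X.shape[1], X.shape[1]))
          Sw = np.zeros_like(Sb)
          for c in np.unique(y):
              Xc = X[y == c]
              diff = (Xc.mean(axis=0) - mu)[:, None]
              Sb += len(Xc) * diff @ diff.T                           # between-class
              Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0)) # within-class
          w, V = np.linalg.eigh(Sb - Sw)            # symmetric eigenproblem
          return X @ V[:, np.argsort(w)[::-1][:d]]  # top-d MMC features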

  3. Bilinear analysis for kernel selection and nonlinear feature extraction.

    PubMed

    Yang, Shu; Yan, Shuicheng; Zhang, Chao; Tang, Xiaoou

    2007-09-01

    This paper presents a unified criterion, the Fisher + kernel criterion (FKC), for feature extraction and recognition. This new criterion is intended to extract the most discriminant features in different nonlinear spaces and then fuse these features under a unified measurement. Thus, FKC can simultaneously achieve nonlinear discriminant analysis and kernel selection. In addition, we present an efficient algorithm, Fisher + kernel analysis (FKA), which utilizes bilinear analysis to optimize the new criterion. The FKA algorithm can alleviate the ill-posed problem that exists in traditional kernel discriminant analysis (KDA) and usually has no singularity problem. The effectiveness of our proposed algorithm is validated by a series of face-recognition experiments on several different databases.

  4. Feature Extraction and Selection From the Perspective of Explosive Detection

    SciTech Connect

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image that summarize the information content of the image, in the process providing an essential tool for image understanding. In particular, they are useful for classifying images into pre-defined classes or for grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, they can be the temperature measurement (using an infra-red camera) of the area representing the pixel, the X-ray attenuation in a given volume element of a 3-d image, or even the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered features of the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components, the Euler number (the number of connected components less the number of 'holes'), etc. Occupying an intermediate level in the feature hierarchy are texture features, which are typically derived from a group of pixels, often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection (typically a piece of luggage or a briefcase) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-Ray device with provisions for computed tomography (CT) that generate one or more (depending on the number of energy levels used) digitized 3-d images.

  5. Complex Features in Lotka-Volterra Systems with Behavioral Adaptation

    NASA Astrophysics Data System (ADS)

    Tebaldi, Claudio; Lacitignola, Deborah

    Lotka-Volterra systems have played a fundamental role in mathematical modelling in many branches of theoretical biology and have proved to describe, at least qualitatively, the essential features of many phenomena; see for example Murray [Murray 2002]. Furthermore, models of this kind have also been applied successfully in quite different and less mathematically formalized contexts: Goodwin's model of economic growth cycles [Goodwin 1967] and urban dynamics [Dendrinos 1992] are just two of a number of examples. Such systems can certainly be described as complex ones, and indeed the aim of modelling was essentially to clarify mechanisms rather than to provide precise simulations and predictions. With regard to complex systems, we recall that one of their main features, regardless of the specific definition one has in mind, is adaptation, i.e., the ability to adjust.
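
    For reference, a minimal sketch of the classical two-species predator-prey member of this family (the coefficients are illustrative):

      from scipy.integrate import solve_ivp

      def lotka_volterra(t, s, a=1.0, b=0.5, g=1.0, d=0.3):
          x, y = s                        # prey, predator densities
          return [x * (a - b * y),        # dx/dt
                  y * (d * x - g)]        # dy/dt

      sol = solve_ivp(lotka_volterra, (0, 50), [2.0, 1.0], max_step=0.05)

    Behavioral adaptation in the sense discussed above amounts to letting such coefficients depend on the state or on time.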

  6. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized.
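
    For the two-class linear case, the decision-boundary feature matrix reduces to the outer product of the boundary normal; a minimal sketch under that assumption:

      import numpy as np

      def dbfe_linear(X0, X1):
          S = 0.5 * (np.cov(X0.T) + np.cov(X1.T))            # pooled covariance
          n = np.linalg.solve(S, X1.mean(0) - X0.mean(0))    # boundary normal
          n /= np.linalg.norm(n)
          w, V = np.linalg.eigh(np.outer(n, n))              # feature matrix
          return V[:, np.argsort(w)[::-1]]                   # ranked feature vectors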

  7. FeatureExtract—extraction of sequence annotation made easy

    PubMed Central

    Wernersson, Rasmus

    2005-01-01

    Work on a large number of biological problems benefits tremendously from having an easy way to access the annotation of DNA sequence features, such as intron/exon structure, the contents of promoter regions and the location of other genes in upstream and downstream regions. For example, taking the placement of introns within a gene into account can help in a phylogenetic analysis of homologous genes. Designing experiments for investigating UTR regions using PCR or DNA microarrays requires knowledge of known elements in UTR regions and the positions and strandedness of other genes nearby on the chromosome. A wealth of such information is already known and documented in databases such as GenBank and the NCBI Human Genome builds. However, it usually requires significant bioinformatics skills and intimate knowledge of the data format to access this information. Presented here is a highly flexible and easy-to-use tool for extracting feature annotation from GenBank entries. The tool is also useful for extracting datasets corresponding to a particular feature (e.g. promoters). Most importantly, the output data format is highly consistent, easy to handle for the user and easy to parse computationally. The FeatureExtract web server is freely available for both academic and commercial use at . PMID:15980537
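
    The same kind of extraction can be sketched with Biopython rather than the FeatureExtract server itself; "example.gb" is a placeholder filename.

      from Bio import SeqIO

      record = SeqIO.read("example.gb", "genbank")
      for feat in record.features:
          if feat.type in ("exon", "intron", "CDS"):
              sub = feat.extract(record.seq)     # feature subsequence
              print(feat.type, feat.location, str(sub)[:30])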

  8. Nonlinear feature extraction for MMW image classification: a supervised approach

    NASA Astrophysics Data System (ADS)

    Maskall, Guy T.; Webb, Andrew R.

    2002-07-01

    The specular nature of Radar imagery causes problems for ATR as small changes to the configuration of targets can result in significant changes to the resulting target signature. This adds to the challenge of constructing a classifier that is both robust to changes in target configuration and capable of generalizing to previously unseen targets. Here, we describe the application of a nonlinear Radial Basis Function (RBF) transformation to perform feature extraction on millimeter-wave (MMW) imagery of target vehicles. The features extracted were used as inputs to a nearest-neighbor classifier to obtain measures of classification performance. The training of the feature extraction stage was by way of a loss function that quantified the amount of data structure preserved in the transformation to feature space. In this paper we describe a supervised extension to the loss function and explore the value of using the supervised training process over the unsupervised approach and compare with results obtained using a supervised linear technique (Linear Discriminant Analysis --- LDA). The data used were Inverse Synthetic Aperture Radar (ISAR) images of armored vehicles gathered at 94 GHz and were categorized as Armored Personnel Carrier, Main Battle Tank or Air Defense Unit. We find that the form of supervision used in this work is an advantage when the number of features used for classification is low, with the conclusion that the supervision allows information useful for discrimination between classes to be distilled into fewer features. When only one example of each class is used for training purposes, the LDA results are comparable to the RBF results. However, when an additional example is added per class, the RBF results are significantly better than those from LDA. Thus, the RBF technique seems better able to make use of the extra knowledge available to the system about variability between different examples of the same class.

  9. A Spiking Neural Network in sEMG Feature Extraction

    PubMed Central

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. The results showed nearly equal accuracy despite a significant difference in sampling rates. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060

  10. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT) and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's $T^2$ statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
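
    A minimal sketch of an LP-SVD-style transform, under assumptions: LP coefficients fit by Yule-Walker, the impulse response of the all-pole LP filter stacked into a lower-triangular Toeplitz matrix, and its left singular vectors used as the mapping.

      import numpy as np
      from scipy.linalg import toeplitz, solve_toeplitz

      def lp_svd_features(x, order=8, n_feat=4):
          r = np.correlate(x, x, "full")[len(x) - 1:]   # autocorrelation
          a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # LP coeffs
          h = np.zeros(len(x)); h[0] = 1.0
          for n in range(1, len(x)):                    # impulse response of 1/A(z)
              h[n] = sum(a[k] * h[n - 1 - k] for k in range(min(order, n)))
          U, _, _ = np.linalg.svd(toeplitz(h, np.zeros_like(h)),
                                  full_matrices=False)
          return U[:, :n_feat].T @ x                    # transformed features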

  11. Feature extraction on local jet space for texture classification

    NASA Astrophysics Data System (ADS)

    Oliveira, Marcos William da Silva; da Silva, Núbia Rosa; Manzanera, Antoine; Bruno, Odemir Martinez

    2015-12-01

    The proposal of this study is to analyze texture pattern recognition in the local jet space with the aim of improving texture characterization. Local jets decompose the image based on partial derivatives, allowing texture feature extraction to exploit different levels of geometrical structure. Each local jet component highlights a different local pattern, such as flat regions, directional variations and concavity or convexity. Subsequently, a texture descriptor is used to extract features from the 0th-, 1st- and 2nd-derivative components. Four well-known databases (Brodatz, Vistex, Usptex and Outex) and four texture descriptors (Fourier descriptors, Gabor filters, Local Binary Pattern and Local Binary Pattern Variance) were used to validate the idea, showing in most cases an increase in the success rates.
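
    A minimal sketch of the local jet expansion itself, using Gaussian derivative filters; any of the descriptors listed above can then be applied per channel.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def local_jet(img, sigma=2.0):
          orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
          return {o: gaussian_filter(img.astype(float), sigma, order=o)
                  for o in orders}   # L, Lx, Ly, Lxx, Lxy, Lyy channels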

  12. Optimal feature extraction for segmentation of Diesel spray images.

    PubMed

    Payri, Francisco; Pastor, José V; Palomares, Alberto; Juliá, J Enrique

    2004-04-01

    A one-dimensional simplification, based on optimal feature extraction, of the algorithm based on the likelihood-ratio test method (LRT) for segmentation in colored Diesel spray images is presented. If the pixel values of the Diesel spray and the combustion images are represented in RGB space, in most cases they are distributed in an area with a given so-called privileged direction. It is demonstrated that this direction permits optimal feature extraction for one-dimensional segmentation in the Diesel spray images, and some of its advantages compared with more-conventional one-dimensional simplification methods, including considerably reduced computational cost while accuracy is maintained within more than reasonable limits, are presented. The method has been successfully applied to images of Diesel sprays injected at room temperature as well as to images of sprays with evaporation and combustion. It has proved to be valid for several cameras and experimental arrangements.
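
    The privileged direction can be estimated with a one-component PCA over the pixel values; a minimal sketch:

      import numpy as np

      def privileged_projection(rgb_img):
          pix = rgb_img.reshape(-1, 3).astype(float)
          pix -= pix.mean(axis=0)
          _, _, Vt = np.linalg.svd(pix, full_matrices=False)
          return (pix @ Vt[0]).reshape(rgb_img.shape[:2])  # 1-D feature image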

  13. A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data

    PubMed Central

    Hira, Zena M.; Gillies, Duncan F.

    2015-01-01

    We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist and they are being widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition the complicated relations among the different genes make analysis more difficult and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources. PMID:26170834

  14. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20 km of coastline near Duck, North Carolina, which encompassed a variety of foredune forms in close proximity to each other. This abstract will focus on the tools developed for the automated extraction of the morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore as an accompanying abstract. Raw point cloud data can be dense and is often under-utilized due to time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid creating a high resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe, and then beach and berm morphology is extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
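
    A minimal sketch of the dune-toe step on a single cross-shore transect, under the assumption that the toe is taken at the curvature extremum (sign conventions vary with the coordinate setup):

      import numpy as np

      def dune_toe(x, z):
          dz = np.gradient(z, x)
          curv = np.gradient(dz, x) / (1 + dz**2) ** 1.5   # signed curvature
          return x[np.argmax(curv)]                        # toe location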

  15. Blind Domain Adaptation With Augmented Extreme Learning Machine Features.

    PubMed

    Uzair, Muhammad; Mian, Ajmal

    2016-02-11

    In practical applications, the test data often have different distribution from the training data leading to suboptimal visual classification performance. Domain adaptation (DA) addresses this problem by designing classifiers that are robust to mismatched distributions. Existing DA algorithms use the unlabeled test data from target domain during training time in addition to the source domain data. However, target domain data may not always be available for training. We propose a blind DA algorithm that does not require target domain samples for training. For this purpose, we learn a global nonlinear extreme learning machine (ELM) model from the source domain data in an unsupervised fashion. The global ELM model is then used to initialize and learn class specific ELM models from the source domain data. During testing, the target domain features are augmented with the reconstructed features from the global ELM model. The resulting enriched features are then classified using the class specific ELM models based on minimum reconstruction error. Extensive experiments on 16 standard datasets show that despite blind learning, our method outperforms six existing state-of-the-art methods in cross domain visual recognition.
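
    A minimal sketch of the augmentation idea: a random-hidden-layer autoencoder (ELM-AE) is fit on source data, and test features are concatenated with their reconstructions. The class-specific models and minimum-reconstruction-error decision are not reproduced.

      import numpy as np

      def elm_autoencoder(X, n_hidden=256, reg=1e-3, seed=0):
          rng = np.random.default_rng(seed)
          W = rng.standard_normal((X.shape[1], n_hidden))
          H = np.tanh(X @ W)                                 # random hidden layer
          beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
          return W, beta                                     # ridge output weights

      def augment(X, W, beta):
          return np.hstack([X, np.tanh(X @ W) @ beta])       # enriched features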

  16. Features and Ground Automatic Extraction from Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Costantino, D.; Angelini, M. G.

    2011-09-01

    The aim of this research has been to develop and implement an algorithm for the automated extraction of features from LIDAR scenes with varying terrain and coverage types. It applies the third-order moment (skewness) and the fourth-order moment (kurtosis). While the former has been applied to produce an initial filtering and classification of the data, the latter, through the introduction of measurement weights, provided the desired result: a finer and less noisy classification. The processing has been carried out in Matlab, but to reduce processing time, given the large data density, the analysis has been limited to a moving window; the area was therefore divided into subscenes to cover it entirely. The performance of the algorithm confirms its robustness and the quality of the results. Employing effective processing strategies to improve automation is key to the implementation of this algorithm. The results of this work will serve the increased demand for automation in 3D information extraction from large remotely sensed datasets. After obtaining the geometric features from LiDAR data, we intend to complete the research by creating an algorithm to vectorize the features and extract the DTM.
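
    A minimal sketch of skewness-based ground filtering ("skewness balancing"), one common way the third-order moment is used on LIDAR elevations; the kurtosis-weighted refinement is not reproduced.

      import numpy as np
      from scipy.stats import skew

      def skewness_filter(z):
          z = np.sort(np.asarray(z, dtype=float))
          while len(z) > 3 and skew(z) > 0:
              z = z[:-1]                    # peel off the highest elevation
          return z[-1] if len(z) else None  # ground/object separation threshold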

  17. Automated feature extraction for 3-dimensional point clouds

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.

  18. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618
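
    A minimal sketch of the mention-level classification step, with a maximum entropy model (logistic regression) over illustrative pair features; the feature templates here are stand-ins, not the paper's exact set.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      pairs = ["CHEM aspirin BETWEEN caused DIS ulcer",
               "CHEM aspirin BETWEEN treated DIS headache"]
      labels = [1, 0]   # 1 = chemical-induced-disease relation holds

      model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
      model.fit(pairs, labels)
      print(model.predict_proba(["CHEM drugX BETWEEN caused DIS rash"]))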

  19. Chemical-induced disease relation extraction with various linguistic features.

    PubMed

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task on the BioCreative-V tasks. We built a machine learning based system that utilized simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We demoted relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask.

  20. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract the facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This is indispensable fundamental research on race perception, which is essential for the establishment of a human-like race recognition system.

  1. Pattern recognition and feature extraction with an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel

    2016-09-01

    Pattern recognition and localization, along with feature extraction, are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for the recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements of this mapping make the optical implementation an attractive alternative to digital-only methods. Starting from the integral representation of the GHT, it is possible to devise an optical setup where the transformation is obtained and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a rotating pupil mask for orientation variation, implemented on a high-contrast spatial light modulator (SLM). Real-time operation (limited by the frame rate of the device used to capture the GHT) can also be achieved, allowing for the processing of video sequences. Besides, by thresholding the GHT (with the aid of another SLM) and inverse transforming (which is achieved optically in the incoherent system under an appropriate focusing setting), the previously detected features of interest can be extracted.

  2. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    NASA Astrophysics Data System (ADS)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-standing concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project utilizes remote sensing technologies to determine the population in probable danger by mapping and attributing building features using LiDAR datasets and satellite imagery. A free mapping software package named Google Earth Pro (GEP) is used to load these satellite images as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attribution of building features using GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging by GEP depends on either the satellite imagery or the half-meter-resolution orthophotographs obtained during LiDAR acquisition, rather than on the three-meter accuracy of handheld GPS. The attributed building features are overlain on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imagery may be used not only in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imagery.

  3. The optimal extraction of feature algorithm based on KAZE

    NASA Astrophysics Data System (ADS)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    As a novel 2D feature extraction algorithm operating over a nonlinear scale space, KAZE provides a distinctive approach. However, the computation of the nonlinear scale space and the construction of KAZE feature vectors are significantly more expensive than those of SIFT and SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable-conductance diffusion. Changing the parameters can improve the construction of the nonlinear scale space and simplify the image conductivities for each dimension of the space, reducing the computation. Points of interest are then detected at maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the computation of feature vectors is optimized with the wavelet transform, which avoids the second Gaussian smoothing of the original KAZE features and distinctly cuts down the complexity of the algorithm in the vector building and description steps. In this way, the dominant orientation is obtained, similarly to SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in a circular area of radius 6σ, with a sampling step of size σ. Finally, description over a multidimensional patch at the given scale, centered on the point of interest and rotated to align its dominant orientation to a canonical direction, simplifies the feature description by reducing its dimensionality, as in the PCA-SIFT method. Even though the features are somewhat more expensive to compute than SIFT due to the construction of the nonlinear scale space, compared with SURF the results reveal a step forward in detection, description and application performance over previous methods, as the following contrast experiments show.
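
    For orientation, OpenCV ships a stock KAZE implementation exposing the usual detect-and-describe interface (the paper's wavelet-based optimizations are not part of this API); "img.png" is a placeholder filename.

      import cv2

      img = cv2.imread("img.png", cv2.IMREAD_GRAYSCALE)
      assert img is not None, "placeholder image not found"
      kaze = cv2.KAZE_create()                       # builds the nonlinear scale space
      kpts, desc = kaze.detectAndCompute(img, None)  # keypoints + 64-D descriptors
      print(len(kpts), None if desc is None else desc.shape)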

  4. [Bipedalism in birds, a determining feature for their adaptive success].

    PubMed

    Abourachid, Anick

    2006-01-01

    Birds are flying animals, but they are also fundamentally bipeds. The theropod dinosaurs, precursors of the birds, were already cursorial bipeds. Because the body structure was shaped by aerodynamic constraints during evolution, all birds, even those that no longer fly, share a typical avian body plan. The osteological differences between birds are adjustments rather than deep disruptions. Nevertheless, birds are very diversified in their ways of life and habitats. The hind limbs of birds are surprisingly effective in many tasks, such as taking off, landing, swimming and walking. The adaptability of the limb structure to these various tasks requires different mechanical properties or devices, such as shock absorbers during landing or thrusters during take-off. Moreover, almost all birds can walk, even if they have another locomotor specialization such as swimming or flying. Depending on the specialization, the gait features of walking and the kinematic patterns are slightly modified. The functional adaptability of their hind limb structure may be a key to the evolutionary success of birds.

  5. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

    Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring in such materials, as well as classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears different depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped onto a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
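
    A minimal sketch of the two ingredients named above, a Gaussian pyramid and Sobel-based local orientation quantized to multiples of π/4:

      import numpy as np
      from scipy.ndimage import gaussian_filter, sobel, zoom

      def gaussian_pyramid(img, levels=3):
          pyr = [img.astype(float)]
          for _ in range(levels - 1):
              pyr.append(zoom(gaussian_filter(pyr[-1], 1.0), 0.5))
          return pyr

      def dominant_orientation(img):
          gy, gx = sobel(img, 0), sobel(img, 1)
          theta = np.arctan2(gy, gx)                # local edge angle
          return np.round(theta / (np.pi / 4)) % 4  # quantized orientation index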

  6. Launch vehicle payload adapter design with vibration isolation features

    NASA Astrophysics Data System (ADS)

    Thomas, Gareth R.; Fadick, Cynthia M.; Fram, Bryan J.

    2005-05-01

    Payloads, such as satellites or spacecraft, which are mounted on launch vehicles, are subject to severe vibrations during flight. These vibrations are induced by multiple sources that occur between liftoff and the instant of final separation from the launch vehicle. A direct result of the severe vibrations is that fatigue damage and failure can be incurred by sensitive payload components. For this reason a payload adapter has been designed with special emphasis on its vibration isolation characteristics. The design consists of an annular plate that has top and bottom face sheets separated by radial ribs and close-out rings. These components are manufactured from graphite epoxy composites to ensure a high stiffness to weight ratio. The design is tuned to keep the frequency of the axial mode of vibration of the payload on the flexibility of the adapter to a low value. This is the main strategy adopted for isolating the payload from damaging vibrations in the intermediate to higher frequency range (45Hz-200Hz). A design challenge for this type of adapter is to keep the pitch frequency of the payload above a critical value in order to avoid dynamic interactions with the launch vehicle control system. This high frequency requirement conflicts with the low axial mode frequency requirement and this problem is overcome by innovative tuning of the directional stiffnesses of the composite parts. A second design strategy that is utilized to achieve good isolation characteristics is the use of constrained layer damping. This feature is particularly effective at keeping the responses to a minimum for one of the most important dynamic loading mechanisms. This mechanism consists of the almost-tonal vibratory load associated with the resonant burn condition present in any stage powered by a solid rocket motor. The frequency of such a load typically falls in the 45-75Hz range and this phenomenon drives the low frequency design of the adapter. Detailed finite element analysis is

  7. An Improved Approach of Mesh Segmentation to Extract Feature Regions

    PubMed Central

    Gu, Minghui; Duan, Liming; Wang, Maolin; Bai, Yang; Shao, Hui; Wang, Haoyu; Liu, Fenglin

    2015-01-01

    The objective of this paper is to extract concave and convex feature regions by segmenting the surface mesh of a mechanical part whose surface geometry exhibits drastic variations and whose concave and convex features are equally important for modeling. Referring to the original approach based on the minima rule (MR) in cognitive science, we created a revised minima rule (RMR) and present an improved approach based on RMR in this paper. Using a logarithmic function of the minimum curvatures, normalized by their expectation and standard deviation over the vertices of the mesh, we determined solution formulas for the feature vertices according to RMR. Because the threshold parameters in the determined formulas are selected from only a small range, an iterative process was implemented to realize the automatic selection of thresholds. Finally, according to the obtained feature vertices, the feature edges and facets were obtained by growing neighbors. The improved approach overcomes the inherent inadequacies of the original approach for our objective, realizes full automation without parameter setting, and obtains better results compared with the latest conventional approaches. We demonstrate the feasibility and superiority of our approach through experimental comparisons. PMID:26436657
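
    The paper's exact formulas are not reproduced here; the following is only a loose sketch of the shape of the vertex test, assuming per-vertex minimum curvatures kmin are already estimated (the log score and threshold are hypothetical).

      import numpy as np

      def rmr_feature_vertices(kmin, thresh=0.5):
          z = (kmin - kmin.mean()) / kmin.std()      # normalized min curvature
          score = np.sign(z) * np.log1p(np.abs(z))   # log-scaled, sign-preserving
          return score < -thresh                     # candidate concave vertices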

  8. Line drawing extraction from gray level images by feature integration

    NASA Astrophysics Data System (ADS)

    Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.

    1994-10-01

    We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight- line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).

  9. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique for identifying individuals from their traits and characteristics. Iris recognition is one of the most reliable biometric methods: because iris texture and color are fully developed within a year of birth, the iris remains unchanged throughout a person's life, in contrast to fingerprints, which can be altered by accidental damage, dry or oily skin, and dust. Although iris recognition has been studied for more than a decade, few commercial products are available, owing to demanding requirements such as camera resolution, hardware size, equipment cost, and computational complexity. At present, however, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing, and matching. In this paper, we adopt a directional high-low pass filter for feature extraction, and propose a box-counting fractal dimension and an iris code as feature representations. Our approach has been tested on the CASIA iris image database and the results are considered successful.
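
    The box-counting representation mentioned above lends itself to a compact sketch. The following is a minimal, standard box-counting dimension estimate over a binarized iris texture; the function name, box sizes, and guard against empty counts are our own choices, not the authors':

        import numpy as np

        def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32, 64)):
            """Estimate the fractal dimension of a binary image by box counting."""
            counts = []
            for s in sizes:
                # Trim the image so it tiles exactly into s x s boxes.
                h, w = (binary_img.shape[0] // s) * s, (binary_img.shape[1] // s) * s
                trimmed = binary_img[:h, :w]
                # Count boxes containing at least one foreground pixel.
                boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
                counts.append(max(int(boxes.sum()), 1))  # guard against log(0)
            # Slope of log(count) vs log(1/size) estimates the dimension.
            coeffs = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return coeffs[0]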

  10. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting increase in dimensionality. Simple linear discriminant analysis was used as the classifier. On a public breast cancer biomedical dataset, the 10-fold cross-validation (10 CV) classification accuracy exceeded 96%, superior to that of the original features and a traditional feature extraction method.

  11. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    NASA Astrophysics Data System (ADS)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the Internet and related technology has had a major impact on commerce, giving rise to e-commerce. Many e-commerce sites make transactions convenient, and consumers can also post reviews or opinions on the products they purchase. These opinions are useful to both consumers and producers: consumers learn the advantages and disadvantages of particular features of a product, while producers can analyze the strengths and weaknesses of their own products as well as those of competitors. With so many opinions available, a method is needed that lets the reader grasp the point of the opinions as a whole. The idea arises from review summarization, which summarizes overall opinion based on the sentiments and features the reviews contain. In this study, the main domain of focus is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification and a feature extraction algorithm based on dependency analysis, one of the tools of Natural Language Processing (NLP), together with a knowledge-based dictionary for handling implicit features. The end result is a summary of the consumer reviews organized by feature and sentiment. With the proposed method, sentiment classification accuracy is 81.2% on positive test data and 80.2% on negative test data, and feature extraction accuracy reaches 90.3%.
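
    For the sentiment classification step, a minimal Naïve Bayes sketch in the spirit of the paper (but without the dependency analysis or the implicit-feature dictionary; the toy camera reviews below are invented) could look like this:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Toy training data standing in for labeled camera reviews.
        reviews = [
            "the lens is sharp and the battery lasts long",
            "autofocus is fast, great image quality",
            "battery drains quickly and the menu is confusing",
            "blurry photos in low light, very disappointing",
        ]
        labels = ["positive", "positive", "negative", "negative"]

        # Bag-of-words counts feeding a multinomial Naive Bayes classifier.
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(reviews, labels)

        print(model.predict(["the battery is disappointing"]))  # -> ['negative']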

  12. Automatic Multimode Guided Wave Feature Extraction Using Wavelet Fingerprints

    NASA Astrophysics Data System (ADS)

    Bingham, J. P.; Hinders, M. K.

    2010-02-01

    The development of automatic guided wave interpretation for detecting corrosion in aluminum aircraft structural stringers is described. The dynamic wavelet fingerprint technique (DWFP) is used to render the guided wave mode information as two-dimensional binary images. Automatic algorithms then extract DWFP features that correspond to the distorted arrival times of the guided wave modes of interest, which give insight into changes of the structure in the propagation path. To better understand how the guided wave modes propagate through real structures, parallel-processing elastic wave simulations using the elastodynamic finite integration technique (EFIT) have been performed. 3D simulations are used to examine models too complex for analytical solutions; they produce informative visualizations of the guided wave modes in the structures and mimic the output from sensors placed in the simulation space. Using the previously developed mode extraction algorithms, the 3D EFIT results are compared directly to their experimental counterparts.

  13. Features extraction from the electrocatalytic gas sensor responses

    NASA Astrophysics Data System (ADS)

    Kalinowski, Paweł; Woźniak, Łukasz; Stachowiak, Maria; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    One type of gas sensor used for the detection and identification of toxic air pollutants is the electro-catalytic gas sensor. Electro-catalytic sensors, which operate in cyclic voltammetry mode, enable the detection of various gases. Their responses take the form of I-V curves, which contain information about the type and concentration of the measured volatile compound; however, additional analysis is required for efficient recognition of the target gas. Multivariate data analysis and pattern recognition methods have proven to be useful tools for this application, but further work on improving the processing of the sensor responses is required. In this article a method for extracting parameters from electro-catalytic sensor responses is presented. The extracted features enable a significant reduction of data dimension without loss of recognition efficiency for four volatile air pollutants, namely nitrogen dioxide, ammonia, hydrogen sulfide, and sulfur dioxide.

  14. Neural network based feature extraction scheme for heart rate variability

    NASA Astrophysics Data System (ADS)

    Raymond, Ben; Nandagopal, Doraisamy; Mazumdar, Jagan; Taverner, D.

    1995-04-01

    Neural networks are extensively used in solving a wide range of pattern recognition problems in signal processing. The accuracy of pattern recognition depends to a large extent on the quality of the features extracted from the signal. We present a neural network capable of extracting the autoregressive parameters of a cardiac signal known as heart rate variability (HRV). Frequency-specific oscillations in the HRV signal represent heart rate regulatory activity and hence cardiovascular function. Continual monitoring and tracking of the HRV data over a period of time will provide valuable diagnostic information. We give an example of the network applied to a short HRV signal and demonstrate the tracking performance of the network with a single sinusoid embedded in white noise.
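
    The abstract does not reproduce the network architecture; for reference, the AR parameters the network is trained to extract can also be estimated directly by least squares, as in this sketch on a synthetic HRV-like series (the order and signal are our own choices):

        import numpy as np

        def ar_coefficients(x, order):
            """Least-squares estimate of AR(order) parameters of a 1-D series."""
            x = np.asarray(x, dtype=float)
            # Each row holds the `order` past samples that predict x[t].
            X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
            y = x[order:]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coef

        # Synthetic "HRV" series: a slow oscillation plus white noise.
        rng = np.random.default_rng(0)
        t = np.arange(512)
        hrv = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(512)
        print(ar_coefficients(hrv, order=4))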

  15. Feature Extraction from Subband Brain Signals and Its Classification

    NASA Astrophysics Data System (ADS)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers the non-stationarity and independence/uncorrelatedness criteria, along with the asymmetry ratio, of electroencephalogram (EEG) signals, and proposes a hybrid signal preprocessing approach prior to feature extraction. A filter bank based on the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals: it decomposes the raw EEG signals into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics-based ICA/BSS algorithm) provides a separating matrix for each class of the movement imagery. In the subband domain, the whitening matrix and the separating matrix no longer satisfy the orthogonality and orthonormality criteria, respectively. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between the two classes. The alpha/beta-band asymmetry ratio between the separating matrices of the left and right classes provides the condition for selecting an appropriate multiplier. We therefore modify the estimated separating matrix by an appropriate multiplier to obtain the required asymmetry, extending the AMUSE algorithm to the subband domain. The desired subband is then subjected to the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-component sources are subjected to the feature extraction step (power spectral density) followed by linear discriminant analysis (LDA).
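
    The DWT filter-bank step can be sketched with the pywt package; the subband-to-rhythm mapping below assumes a 256 Hz sampling rate and a 4-level decomposition, both illustrative choices rather than the paper's settings:

        import numpy as np
        import pywt

        # Toy EEG-like signal sampled at 256 Hz.
        fs = 256
        t = np.arange(2 * fs) / fs
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 22 * t)

        # A 4-level DWT splits the signal into subbands roughly matching
        # the classical EEG rhythms.
        coeffs = pywt.wavedec(eeg, "db4", level=4)
        for name, c in zip(["A4 (~0-8 Hz)", "D4 (~8-16 Hz)", "D3 (~16-32 Hz)",
                            "D2 (~32-64 Hz)", "D1 (~64-128 Hz)"], coeffs):
            print(name, "coefficients:", len(c))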

  16. Feature Extraction and Analysis of Breast Cancer Specimen

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison. In essence, features of cancerous (invasive) breast tissue are extracted and analyzed against those of normal breast tissue. We also suggest a breast cancer recognition technique based on image processing, and prevention by controlling p53 gene mutation to some extent.

  17. Texture Feature Extraction and Classification for Iris Diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques to iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis, and disease classification. For pre-processing, a two-step iris localization approach is proposed; for pathological feature extraction, a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed; finally, support vector machines are constructed to recognize two typical diseases, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is effective and promising for medical diagnosis and health surveillance in both hospital and public use.

  18. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system for lane detection on marked urban roads and analysis of lane features. The task is to georeference road markings from images obtained with the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects on the road, the algorithm examines these images automatically and rapidly, yielding information on road marks, their surface condition, and their georeferencing. The algorithm detects all road markings and identifies some of them by means of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.
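
    A generic phase-only correlation sketch conveys the idea of locating a road-mark template by keeping only the phase of the cross spectrum; this is a common formulation of the technique, and the paper's POF-based identification may differ in detail:

        import numpy as np

        def phase_only_correlation(image, template):
            """Locate `template` in `image` via phase-only correlation."""
            # Zero-pad the template to the image size.
            padded = np.zeros_like(image, dtype=float)
            padded[: template.shape[0], : template.shape[1]] = template

            F = np.fft.fft2(image)
            G = np.fft.fft2(padded)
            # Phase-only filter: keep only the phase of the cross spectrum.
            cross = F * np.conj(G)
            poc = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
            # The correlation peak marks the template's offset in the image.
            return np.unravel_index(np.argmax(poc), poc.shape)

        # Example: a bright 8x8 block hidden at (30, 40) in a noisy image.
        rng = np.random.default_rng(7)
        img, tpl = rng.random((128, 128)), np.ones((8, 8))
        img[30:38, 40:48] += 2.0
        print(phase_only_correlation(img, tpl))  # should peak near (30, 40)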

  19. A Stable Biologically Motivated Learning Mechanism for Visual Feature Extraction to Handle Facial Categorization

    PubMed Central

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task. PMID:22719892

  20. Visual-adaptation-mechanism based underwater object extraction

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Wang, Huibin; Xu, Lizhong; Shen, Jie

    2014-03-01

    Due to major obstacles originating from strong light absorption and scattering in dynamic underwater environments, underwater optical information acquisition and processing suffer from limited range, non-uniform lighting, low contrast, and diminished colors, making them a bottleneck for marine scientific research and projects. After studying and generalizing the underwater biological visual mechanism, we explore its advantages in light adaptation, which helps animals precisely sense the underwater scene and recognize their prey or enemies. Then, aiming to transfer the significant advantage of the visual adaptation mechanism to underwater computer vision tasks, a novel knowledge-based information weighting fusion model is established for underwater object extraction. With this bionic model, dynamic adaptability is given to the underwater object extraction task, making it more robust to the variability of optical properties in different environments. The capability of the proposed method to adapt to underwater optical environments is shown, and its superior performance for object extraction is demonstrated by comparison experiments.

  1. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. The videos are represented by the classical Bag-of-Words (BoW) model, which has been useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. To further capture strong relational information, we then construct a bipartite graph to model the relationship between words of the different feature sets, and use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results. PMID:27656199

  2. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    PubMed

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not meet normal acquisition conditions. Feature extraction and matching techniques traditionally used in photogrammetry are usually inefficient for these applications, as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for poorly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed in order to improve the performance of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
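
    A minimal tie-point sketch using OpenCV's stock SIFT detector (not the authors' implementation, nor their auto-adaptive A2 SIFT variant; the synthetic blob scene below stands in for a pair of overlapping aerial images):

        import cv2
        import numpy as np

        # Synthetic "aerial" scene: random bright blobs, plus a shifted copy.
        rng = np.random.default_rng(0)
        pts = rng.integers(10, 246, size=(60, 2))
        img1 = np.zeros((256, 256), np.float32)
        img1[pts[:, 0], pts[:, 1]] = 255
        img1 = cv2.GaussianBlur(img1, (0, 0), 2)
        img1 = (img1 / img1.max() * 255).astype(np.uint8)
        img2 = np.roll(img1, (12, 20), axis=(0, 1))

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Lowe's ratio test keeps only distinctive tie-point candidates.
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(len(good), "candidate tie points")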

  4. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from completely understood. These bedforms are a serious threat to navigation security and to anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools for extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms, and 1D and 2D approaches cannot address the wide range of types and complexities of bedforms. In contrast, this work follows a 3D global semi-automatic approach based on a bathymetric TIN. The primitives currently extracted are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples, which heavily overprint the observations. To this end, an anisotropic filter that discards these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  5. Deep PDF parsing to extract features for detecting embedded malware.

    SciTech Connect

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout
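
    A PDFiD-style keyword count is one simple instance of such indicator features. This sketch is our own and far cruder than the instrumented-viewer features described in the report; as the report itself notes, naive byte counting is easily defeated by compression and obfuscation:

        # Keywords whose presence often signals active content in a PDF
        # (a hypothetical, PDFiD-style feature set).
        KEYWORDS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA",
                    b"/Launch", b"/EmbeddedFile", b"/RichMedia"]

        def pdf_keyword_features(path):
            """Count suspicious keywords in the raw bytes of a PDF file."""
            with open(path, "rb") as fh:
                data = fh.read()
            return {k.decode(): data.count(k) for k in KEYWORDS}

        # Example: pdf_keyword_features("sample.pdf") -> {'/JavaScript': 2, ...}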

  6. [A spatial adaptive algorithm for endmember extraction on multispectral remote sensing image].

    PubMed

    Zhu, Chang-Ming; Luo, Jian-Cheng; Shen, Zhan-Feng; Li, Jun-Li; Hu, Xiao-Dong

    2011-10-01

    Because the convex cone analysis (CCA) method can extract only a limited number of endmembers from multispectral imagery, this paper proposed a new endmember extraction method based on spatially adaptive spectral feature analysis of multispectral remote sensing images, using spatial clustering and image slicing. Firstly, to remove spatial and spectral redundancies, the principal component analysis (PCA) algorithm was used to lower the dimensionality of the multispectral data. Secondly, the iterative self-organizing data analysis technique algorithm (ISODATA) was used to cluster the image according to the spectral similarity of pixels. Then, through post-processing of the clusters and combination of small clusters, the whole image was divided into several blocks (tiles). Lastly, according to the landscape complexity of the image blocks and analysis of the scatter diagrams, the number of endmembers can be determined, and the endmembers are extracted with the hourglass algorithm. An endmember extraction experiment on TM multispectral imagery showed that the method can extract endmember spectra from multispectral imagery effectively. Moreover, the method resolves the limitation on the number of endmembers and improves the accuracy of endmember extraction, providing a new way to extract endmembers from multispectral images.

  7. EEMD Independent Extraction for Mixing Features of Rotating Machinery Reconstructed in Phase Space

    PubMed Central

    Ma, Zaichao; Wen, Guangrui; Jiang, Cheng

    2015-01-01

    Empirical Mode Decomposition (EMD), owing to its adaptive decomposition property for non-linear and non-stationary signals, has been widely used in vibration analysis for rotating machinery. However, EMD suffers from mode mixing, which makes it difficult to extract features independently. Although an improved EMD, well known as ensemble EMD (EEMD), has been proposed, mode mixing is alleviated only to a certain degree, and EEMD requires the amplitude of the added noise to be determined. In this paper, we propose Phase Space Ensemble Empirical Mode Decomposition (PSEEMD), which integrates Phase Space Reconstruction (PSR) and Manifold Learning (ML) to modify EEMD. We provide the principle and detailed procedure of PSEEMD, and perform analyses on a simulated signal and an actual vibration signal derived from a rubbing rotor. The results show that PSEEMD is more efficient and convenient than EEMD in extracting the mixed features from the investigated signal and in optimizing the amplitude of the necessary added noise. Additionally, PSEEMD can extract weak features contaminated by a certain amount of noise. PMID:25871723
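
    Of the building blocks named above, only the PSR step is simple enough to sketch from the abstract. A standard time-delay (Takens) embedding, with dimension and delay chosen arbitrarily here, looks like this:

        import numpy as np

        def delay_embed(x, dim, tau):
            """Time-delay (Takens) embedding of a 1-D signal into phase space."""
            n = len(x) - (dim - 1) * tau
            return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

        # Embed a noisy sine into a 3-D phase space with delay 8.
        t = np.linspace(0, 20 * np.pi, 2000)
        x = np.sin(t) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
        trajectory = delay_embed(x, dim=3, tau=8)
        print(trajectory.shape)  # (1984, 3)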

  8. Feature extraction of kernel regress reconstruction for fault diagnosis based on self-organizing manifold learning

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Xu, Guanghua; Liu, Dan

    2013-09-01

    The feature space extracted from vibration signals with various faults is often nonlinear and of high dimension. Nonlinear dimensionality reduction methods, such as manifold learning, are available for extracting low-dimensional embeddings. However, these methods rely on manual intervention and have shortcomings in stability and in suppressing disturbance noise. To extract features automatically, a manifold learning method with self-organizing mapping is introduced for the first time. Under the non-uniform sample distribution reconstructed by the phase space, the expectation maximization (EM) iteration algorithm is used to divide the local neighborhoods adaptively, without manual intervention. After that, the local tangent space alignment (LTSA) algorithm is adopted to compress the high-dimensional phase space into a more truthful low-dimensional representation. Finally, the signal is reconstructed by kernel regression. Several typical cases, including the Lorenz system, an engine fault with a piston pin defect, and a bearing fault with an outer-race defect, are analyzed. Compared with LTSA and the continuous wavelet transform, the results show that the background noise can be fully restrained and the entire periodic repetition of impact components is well separated and identified. A new way to automatically and precisely extract the impulsive components from mechanical signals is proposed.
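
    The LTSA compression step is available off the shelf in scikit-learn; in this sketch, random data stands in for the delay-embedded vibration phase space, and the neighborhood size is fixed arbitrarily rather than divided adaptively by the paper's EM step:

        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        # Random data standing in for a delay-embedded vibration phase space.
        rng = np.random.default_rng(0)
        phase_space = rng.standard_normal((500, 10))

        # LTSA compresses the high-dimensional phase space to 2 dimensions.
        ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
        embedding = ltsa.fit_transform(phase_space)
        print(embedding.shape)  # (500, 2)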

  9. A bio-inspired feature extraction for robust speech recognition.

    PubMed

    Zouhir, Youssef; Ouni, Kaïs

    2014-01-01

    In this paper, a feature extraction method for robust speech recognition in noisy environments is proposed. The proposed method is motivated by a biologically inspired auditory model which simulates outer/middle ear filtering by a low-pass filter and the spectral behaviour of the cochlea by the Gammachirp auditory filterbank (GcFB). The speech recognition performance of our method is tested on speech signals corrupted by real-world noises. The evaluation results show that the proposed method gives better recognition rates than classic techniques such as Perceptual Linear Prediction (PLP), Linear Predictive Coding (LPC), Linear Prediction Cepstral Coefficients (LPCC), and Mel Frequency Cepstral Coefficients (MFCC). The recognition system used is based on Hidden Markov Models with continuous Gaussian mixture densities (HMM-GM).

  10. Extracting autofluorescence spectral features for diagnosis of nasopharyngeal carcinoma

    NASA Astrophysics Data System (ADS)

    Lin, L. S.; Yang, F. W.; Xie, S. S.

    2012-09-01

    The aim of this study is to investigate the autofluorescence spectral characteristics of normal and cancerous nasopharyngeal tissues and to extract potential spectral features for the diagnosis of nasopharyngeal carcinoma (NPC). The autofluorescence excitation-emission matrices (EEM) of 37 normal and 34 cancerous nasopharyngeal tissue samples were recorded in vitro by an FLS920 spectrofluorimeter system. Based on the altered proportions of collagen and NAD(P)H, the integrated fluorescence intensities at 455 ± 10 nm and 380 ± 10 nm under 340 nm excitation were used to calculate ratio values with a two-peak ratio algorithm to diagnose NPC tissues. By applying receiver operating characteristic (ROC) curve analysis, the 340 nm excitation yielded an average sensitivity and specificity of 88.2 and 91.9%, respectively. These results may have practical implications for the diagnosis of NPC.

  11. Crown Features Extraction from Low Altitude AVIRIS Data

    NASA Astrophysics Data System (ADS)

    Ogunjemiyo, S. O.; Roberts, D.; Ustin, S.

    2005-12-01

    Automated tree recognition and crown delineation are computer-assisted procedures for identifying individual trees and segmenting their crown boundaries in digital imagery. The success of these procedures depends on the quality of the image data and the physiognomy of the stand, as evidenced by previous studies, which have all used data with spatial resolution finer than 1 m and an average crown diameter to pixel size ratio greater than 4. In this study we explored the prospect of identifying individual tree species and extracting crown features from low altitude AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) data with a spatial resolution of 4 m. The test site is a Douglas-fir and western hemlock dominated old-growth conifer forest in the Pacific Northwest with an average crown diameter of 12 m, which translates to a crown diameter to pixel size ratio of less than 4, the lowest value yet used in such studies. The analysis was carried out using AVIRIS reflectance imagery in the NIR band centered at the 885 nm wavelength, and required spatial filtering of the reflectance imagery followed by application of a tree identification algorithm based on a maximum filter technique. For every identified tree location a crown polygon was delineated by applying a crown segmentation algorithm; each polygon boundary is a loop connecting the pixels geometrically determined to define the crown boundary. Crown features were extracted from the area covered by the polygons, including crown diameters, average distance between crowns, species spectra, pixel brightness at the identified tree locations, average brightness of pixels enclosed by the crown boundary, and within-crown variation in pixel brightness. Comparison of the results with ground reference data showed a high correlation between the two datasets and highlights the potential of low altitude AVIRIS data to improve forest management practices and estimates of critical
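
    A generic maximum-filter tree-top detector can be sketched as below; the smoothing, window size, and threshold are our own assumptions, and the study's actual spatial filtering differs:

        import numpy as np
        from scipy import ndimage

        def find_tree_tops(nir_band, window=3, threshold=0.2):
            """Flag pixels that are local brightness maxima in a NIR image."""
            # Smooth first so single noisy pixels do not become "trees".
            smoothed = ndimage.uniform_filter(nir_band, size=window)
            # A pixel is a candidate tree top if it equals the local maximum.
            local_max = ndimage.maximum_filter(smoothed, size=window) == smoothed
            return np.argwhere(local_max & (smoothed > threshold))

        # Example on random data standing in for AVIRIS 885 nm reflectance.
        img = np.random.default_rng(2).random((50, 50))
        print(len(find_tree_tops(img, window=5, threshold=0.6)), "candidates")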

  12. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept for a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear the less blurred prints. The de-smearing algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology which uses a 2D STAR mother wavelet that can efficiently locate the fork feature anywhere on a fingerprint in parallel, independent of its scale, shift, and rotation. Such a combined system can achieve data compression high enough to send prints through a binary facsimile machine, which, combined with a tabletop computer, can realize an automatic fingerprint identification system (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given on how to reduce the crime rate by upgrading today's police office technology in light of military ATR expertise.

  13. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements.

  14. A Joint Time-Frequency and Matrix Decomposition Feature Extraction Methodology for Pathological Voice Classification

    NASA Astrophysics Data System (ADS)

    Ghoraani, Behnaz; Krishnan, Sridhar

    2009-12-01

    The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communications. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is the extraction of meaningful and unique features using an adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct the adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech, and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal as normal or pathological. The proposed method is applied to the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database, which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
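
    A sketch of the NMF quantification step, substituting a plain magnitude spectrogram for the paper's adaptive TFD; the component count, window length, and synthetic signal are arbitrary choices of ours:

        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.decomposition import NMF

        # Synthetic voice-like signal: two harmonics plus a little noise.
        fs = 8000
        t = np.arange(fs) / fs
        x = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
        x += 0.05 * np.random.default_rng(3).standard_normal(x.size)

        # Nonnegative TF matrix (spectrogram) decomposed by NMF.
        f, tt, S = spectrogram(x, fs=fs, nperseg=256)
        nmf = NMF(n_components=4, init="nndsvda", max_iter=500)
        W = nmf.fit_transform(S)   # spectral bases (freq x components)
        H = nmf.components_        # activations (components x time)
        print(W.shape, H.shape)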

  15. Cloud Detection Method Based on Feature Extraction in Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Changhui, Y.; Yuan, Y.; Minjing, M.; Menglu, Z.

    2013-05-01

    In remote sensing images, clouds have a great impact on image quality and subsequent image processing, as regions covered by clouds contain little useful information. The detection and recognition of clouds is therefore one of the major problems in the application of remote sensing images. At present there are two categories of cloud detection method. One sets spectral thresholds based on the characteristics of clouds; however, the instability and variability of real clouds make this kind of method complex and weakly adaptable. The other uses image features to identify clouds; since some features of clouds and ground overlap significantly, the detection result depends heavily on the effectiveness of the features. This paper presents a cloud detection method based on feature extraction for remote sensing images. First, effective features are identified through training: the features of the training samples are calculated in the gray, frequency, and texture domains, and statistical analysis of all the features picks out the useful ones to form a feature set. Concretely, the set includes three feature vectors: a gray feature vector consisting of average gray, variance, first-order difference, entropy, and histogram; a frequency feature vector consisting of DCT high-frequency coefficients and wavelet high-frequency coefficients; and a texture feature vector consisting of the hybrid entropy and difference of the gray-gradient co-occurrence matrix and the image fractal dimension. Secondly, a thumbnail is obtained by down-sampling the original image, and its gray, frequency, and texture features are computed. Finally, cloud regions are judged by comparing the actual feature values with the thresholds.
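
    As one example of such texture features, a gray-level co-occurrence matrix can be computed with scikit-image; note the paper uses a gray-gradient co-occurrence matrix, a related but distinct statistic, so this is an illustrative substitute:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        # 8-bit test patch standing in for a remote sensing thumbnail.
        patch = (np.random.default_rng(4).random((64, 64)) * 255).astype(np.uint8)

        # Gray-level co-occurrence matrix at one offset and two angles.
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            print(prop, graycoprops(glcm, prop).ravel())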

  16. Rolling bearing feature frequency extraction using extreme average envelope decomposition

    NASA Astrophysics Data System (ADS)

    Shi, Kunju; Liu, Shulin; Jiang, Chao; Zhang, Hongli

    2016-09-01

    The vibration signal contains a wealth of sensitive information reflecting the running status of the equipment, and properly decomposing the signal to extract the effective information is one of the most important steps toward precise diagnosis. Traditional adaptive signal decomposition methods, such as EMD, suffer from mode mixing, low decomposition accuracy, and related problems. Aiming at those problems, the EAED (extreme average envelope decomposition) method is presented, based on EMD. EAED has three advantages. Firstly, it is computed through a midpoint envelope rather than the separate maximum and minimum envelopes used in EMD, so the average variability of the signal can be described accurately. Secondly, to reduce envelope errors during signal decomposition, a strategy of replacing two envelopes with one envelope is presented. Thirdly, the similar triangle principle is used to calculate the times of the extreme average points accurately, significantly reducing the influence of the sampling frequency on the results. Experimental results show that EAED can gradually separate single frequency components from a complex signal, and can isolate three kinds of typical bearing fault characteristic vibration frequency components with fewer decomposition layers. By replacing quadratic enveloping with a single envelope, EAED isolates the fault characteristic frequency under the condition of fewer decomposition layers, improving the precision of signal decomposition.
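
    EAED itself cannot be reproduced from the abstract; for comparison, the classical Hilbert envelope spectrum below recovers a bearing fault repetition rate from a simulated amplitude-modulated resonance (all frequencies are invented for the example):

        import numpy as np
        from scipy.signal import hilbert

        fs = 12000
        t = np.arange(fs) / fs
        # Simulated outer-race fault: a 3 kHz resonance amplitude-modulated
        # at a 100 Hz fault repetition rate.
        x = (1 + np.cos(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 3000 * t)

        # Envelope spectrum: the fault frequency appears as a peak at 100 Hz.
        envelope = np.abs(hilbert(x))
        spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(len(envelope), d=1 / fs)
        print("peak at", freqs[np.argmax(spectrum)], "Hz")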

  17. Extracting energetically dominant flow features in a complicated fish wake using singular-value decomposition

    NASA Astrophysics Data System (ADS)

    Ting, Shang-Chieh; Yang, Jing-Tang

    2009-04-01

    We developed a method to extract the energetically dominant flow features in a complicated fish wake, applying singular-value decomposition (SVD) to two-dimensional instantaneous fluid velocity, vorticity, and λ2 (vortex-detector) data. We demonstrate the effectiveness and merits of SVD through an example concerning the wake of a fish executing a fast-start turn. The energy imparted to the water by a swimming fish is captured and portrayed through SVD. The analysis and interpretation of complicated fish-wake data are greatly improved, helping to characterize a complicated fish wake more accurately. The velocity vectors and Galilean invariants (i.e., vorticity and λ2) resulting from SVD extraction are significantly helpful in recognizing the energetically dominant large-scale flow features. To obtain successful SVD extractions, we propose useful criteria based on the Froude propulsion efficiency, which is biologically and physically motivated. We also introduce a novel and useful method to deduce the topology of dominant flow motions in an instantaneous fish flow field, based on the combined use of topological critical-point theory and SVD. The concept and approach proposed in this work are useful and adaptable in biomimetic and biomechanical research concerning the fluid dynamics of a self-propelled body.
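
    A standard snapshot-SVD (POD) sketch of the kind described, with random data standing in for the PIV velocity fields:

        import numpy as np

        # Snapshot matrix: each column is one instantaneous velocity field
        # flattened to a vector (random data standing in for PIV frames).
        rng = np.random.default_rng(5)
        n_points, n_snapshots = 2048, 40
        snapshots = rng.standard_normal((n_points, n_snapshots))

        # SVD separates the wake into spatial modes ranked by energy.
        U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=1, keepdims=True),
                                 full_matrices=False)
        energy = s**2 / np.sum(s**2)
        print("energy captured by first 3 modes:", energy[:3].sum())

        # Rank-3 reconstruction keeps only the dominant flow features.
        reconstruction = U[:, :3] * s[:3] @ Vt[:3]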

  18. Automatic archaeological feature extraction from satellite VHR images

    NASA Astrophysics Data System (ADS)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale able to satisfy both intra-site (excavation) and inter-site (survey, environmental research) needs. The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for the reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for the recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI SW module, were used in order to compare the results. These techniques were
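
    The probing by a structuring element described above can be sketched with SciPy's morphological opening (erosion followed by dilation); the element shape, sizes, and toy image are arbitrary choices of ours:

        import numpy as np
        from scipy import ndimage

        # Binary test image standing in for a thresholded VHR satellite scene.
        img = np.zeros((64, 64), dtype=bool)
        img[20:40, 20:40] = True          # a large structure (kept)
        img[5:7, 5:7] = True              # small clutter (removed)

        # Diamond-shaped structuring element; its size would be chosen to
        # match the morphology of the archaeological structures sought.
        selem = ndimage.generate_binary_structure(2, 1)
        selem = ndimage.iterate_structure(selem, 2)

        # Opening removes features smaller than the structuring element.
        opened = ndimage.binary_opening(img, structure=selem)
        print(img.sum(), "->", opened.sum(), "foreground pixels")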

  19. PSO based Gabor wavelet feature extraction and tracking method

    NASA Astrophysics Data System (ADS)

    Sun, Hongguang; Bu, Qian; Zhang, Huijie

    2008-12-01

    This paper studies the 2D Gabor wavelet and its application to target recognition and tracking in gray-level images. New optimization algorithms and technologies for the system's realization are studied and discussed in theory and practice. The Gabor wavelet's translation, orientation, and scale parameters are optimized so that it approximates a local image contour region, and Sobel edge detection is used to obtain the initial position and orientation values for the optimization, improving convergence speed. In the wavelet feature space, we adopt the PSO (particle swarm optimization) algorithm for the search, which ensures reliable convergence to the target, further improves convergence speed, and shortens the feature extraction time. Tests on low-contrast images on a VC++ simulation platform demonstrate the feasibility and effectiveness of the algorithm. The improved Gabor wavelet method is adopted in a tracking framework that achieves stable tracking of moving targets under rotation and affine distortion.
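
    A single 2D Gabor kernel with the orientation/scale-type parameters the paper optimizes can be generated with OpenCV; the parameter values below are arbitrary, and the PSO search itself is not shown:

        import cv2
        import numpy as np

        # A Gabor kernel parameterized by scale (sigma), orientation (theta)
        # and wavelength (lambd); these are the kinds of quantities optimized.
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=np.pi / 4,
                                    lambd=10.0, gamma=0.5, psi=0)

        # Stand-in for a grey-level target image.
        img = (np.random.default_rng(6).random((128, 128)) * 255).astype(np.uint8)
        response = cv2.filter2D(img, cv2.CV_32F, kernel)
        # Strong |response| marks contours aligned with the kernel orientation.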

  20. Information Theoretic Extraction of EEG Features for Monitoring Subject Attention

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2000-01-01

    The goal of this project was to test the applicability of information theoretic learning (a feasibility study) to the development of new brain computer interfaces (BCI). The difficulty of BCI comes from several aspects: (1) effective collection of signals related to cognition; (2) preprocessing of these signals to extract the relevant information; (3) a pattern recognition methodology to reliably detect the signals related to cognitive states. We addressed only the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features for detecting the presence of event-related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition and found that the performance of temporal classifiers is superior to that of static classifiers, but not by much. We conclude that the future of BCI should be sought in alternate approaches to sensing, collecting, and processing the signals created by populations of neurons; toward this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.
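
    For Gaussian class-conditional features, the Bhattacharyya distance evaluated in the project has a closed form; a sketch with invented feature statistics (the means and covariances below are illustrative only):

        import numpy as np

        def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
            """Bhattacharyya distance between two multivariate Gaussians."""
            cov = 0.5 * (cov1 + cov2)
            diff = mu1 - mu2
            term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
            term2 = 0.5 * np.log(np.linalg.det(cov) /
                                 np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
            return term1 + term2

        # Two feature-class distributions (e.g., ERD vs. rest band powers).
        mu_a, mu_b = np.array([1.0, 0.5]), np.array([0.2, 0.1])
        cov_a = np.array([[0.3, 0.05], [0.05, 0.2]])
        cov_b = np.array([[0.25, 0.0], [0.0, 0.3]])
        print(bhattacharyya_gaussian(mu_a, cov_a, mu_b, cov_b))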

  1. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.

  2. Feature extraction and models for speech: An overview

    NASA Astrophysics Data System (ADS)

    Schroeder, Manfred

    2002-11-01

    Modeling of speech has a long history, beginning with Count von Kempelen's 1770 mechanical speaking machine. Even then human vowel production was seen as resulting from a source (the vocal cords) driving a physically separate resonator (the vocal tract). Homer Dudley's 1928 frequency-channel vocoder and many of its descendants are based on the same successful source-filter paradigm. For linguistic studies as well as practical applications in speech recognition, compression, and synthesis (see M. R. Schroeder, Computer Speech), the extant models require the (often difficult) extraction of numerous parameters, such as the fundamental and formant frequencies and various linguistic distinctive features. Some of these difficulties were obviated by the introduction of linear predictive coding (LPC) in 1967, in which the filter part is an all-pole filter, reflecting the fact that for non-nasalized vowels the vocal tract is well approximated by an all-pole transfer function. In the now ubiquitous code-excited linear prediction (CELP), the source part is replaced by a code book which (together with a perceptual error criterion) permits speech compression at very low bit rates and high speech quality for the Internet and cell phones.
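
    The all-pole LPC analysis mentioned above is available off the shelf; a sketch using librosa (assumed installed) on a synthetic vowel-like frame, with the sampling rate and model order as our own choices:

        import librosa
        import numpy as np

        # Synthetic vowel-like frame; real speech would come from
        # librosa.load("speech.wav").
        sr = 16000
        t = np.arange(int(0.03 * sr)) / sr
        frame = np.sin(2 * np.pi * 120 * t) + 0.4 * np.sin(2 * np.pi * 800 * t)

        # Order-12 all-pole (LPC) fit: the roots of this polynomial relate
        # to the formant frequencies of the vocal tract filter.
        a = librosa.lpc(frame.astype(float), order=12)
        roots = np.roots(a)
        angles = np.angle(roots[np.imag(roots) > 0])
        print("candidate formants (Hz):", np.sort(angles * sr / (2 * np.pi)))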

  3. Digital image comparison using feature extraction and luminance matching

    NASA Astrophysics Data System (ADS)

    Bachnak, Ray A.; Steidley, Carl W.; Funtanilla, Jeng

    2005-03-01

    This paper presents the results of comparing two digital images acquired using two different light sources. One of the sources is a 50-W metal halide lamp located in the compartment of an industrial borescope and the other is a 1 W LED placed at the tip of the insertion tube of the borescope. The two images are compared quantitatively and qualitatively using feature extraction and luminance matching approaches. Quantitative methods included the images' histograms, intensity profiles along a line segment, edges, and luminance measurement. Qualitative methods included image registration and linear conformal transformation with eight control points. This transformation is useful when shapes in the input image are unchanged, but the image is distorted by some combination of translation, rotation, and scaling. The gray-level histogram, edge detection, image profile and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by the operator. The paper presents the results and discusses the usefulness and shortcomings of various comparison methods.

  4. Ice images processing interface for automatic features extraction

    NASA Astrophysics Data System (ADS)

    Tardif, Pierre M.

    2001-02-01

    The Canadian Coast Guard has the mandate to maintain the navigability of the St. Lawrence Seaway and must prevent ice jam formation. Radar, sonar sensors, and cameras are used to verify ice movement and keep a record of pertinent data. The cameras are placed along the seaway at strategic locations, and images are processed and saved for future reference. The Ice Images Processing Interface (IIPI) is an integral part of the Ice Integrated System (IIS). This software processes images to extract ice speed, concentration, roughness, and rate of flow. Ice concentration is computed from image segmentation using color models and a priori information; speed is obtained from a region-matching algorithm. Both concentration and speed calculations are complex, since they require a calibration step involving on-site measurements. Color texture features provide an ice roughness estimate. The rate of flow uses the ice thickness, which is estimated from sonar sensors on the river floor. Our paper presents how we modeled and designed the IIPI, the issues involved, and its future. For more reliable results, we suggest that meteorological data be provided, that changes in camera orientation be accounted for, that sun reflections be anticipated, and that more a priori information, such as the radar images available at some sites, be included.

  5. Extracting Feature Points of the Human Body Using the Model of a 3D Human Body

    NASA Astrophysics Data System (ADS)

    Shin, Jeongeun; Ozawa, Shinji

    The purpose of this research is to recognize the 3D shape features of a human body automatically using a 3D laser-scanning machine. To recognize the 3D shape features, we selected 23 feature points of the body and modeled their 3D features. The set of 23 feature points consists of the motion axes of the joints, the main points of the bone structure of a human body. To extract the feature points of an object model, we made 2.5D templates of the neighborhood of each feature point according to the feature points of a standard model of the human body, and the feature points were then extracted by template matching. The extracted feature points can be applied to body measurement, 3D virtual fitting systems for apparel, and similar applications.

  6. Effects of face feature and contour crowding in facial expression adaptation.

    PubMed

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sadness, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. Our first experiment asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect; we found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we went on to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into separate contour and feature effects, suggesting a nonlinear integration between facial features and face contour in face adaptation.

  7. A Hybrid Neural Network and Feature Extraction Technique for Target Recognition.

    DTIC Science & Technology

    Target features are extracted, and the extracted data are evaluated in an artificial neural network to identify a target at a location within the image scene from which the different viewing angles extend.

  8. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and requires no a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed, namely nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. The performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ is much faster, giving it potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.

  9. Neural Detection of Malicious Network Activities Using a New Direct Parsing and Feature Extraction Technique

    DTIC Science & Technology

    2015-09-01

    Master's thesis by Cheng Hong Low (Civilian, ST Aerospace, Singapore; M.Sc., National University of Singapore, 2012), September 2015. Thesis advisor: Phillip Pace.

  10. Credible Set Estimation, Analysis, and Applications in Synthetic Aperture Radar Canonical Feature Extraction

    DTIC Science & Technology

    2015-03-26

    Master's thesis by Andrew C. Rexford, B.S.E.E., 1st Lieutenant, USAF, presented to the Faculty of the Department of Electrical and Computer Engineering, March 2015.

  11. Model-Based Nonrigid Motion Analysis Using Natural Feature Adaptive Mesh

    SciTech Connect

    Zhang, Y.; Goldgof, D.B.; Sarkar, S.; Tsap, L.V.

    2000-04-25

    The success of nonrigid motion analysis using a physical finite element model depends on the mesh that characterizes the object's geometric structure. We suggest a deformable mesh adapted to the natural features of images. The adaptive mesh requires far fewer nodes than the fixed mesh used in our previous work. We demonstrate the higher efficiency of the adaptive mesh in the context of estimating burn scar elasticity relative to normal skin elasticity from an observed 2D image sequence. Our results show that the scar assessment method based on the physical model with a natural feature adaptive mesh can be applied to images that do not have artificial markers.

  12. Efficient integration of spectral features for vehicle tracking utilizing an adaptive sensor

    NASA Astrophysics Data System (ADS)

    Uzkent, Burak; Hoffman, Matthew J.; Vodacek, Anthony

    2015-03-01

    Object tracking in urban environments is an important and challenging problem that is traditionally tackled using visible and near-infrared wavelengths. By incorporating additional data, such as the spectral features of the objects, one can improve the reliability of the identification process. However, the huge increase in data volume created by hyperspectral imaging is usually prohibitive. To overcome this complexity problem, we propose a persistent air-to-ground target tracking system inspired by a state-of-the-art, adaptive, multi-modal sensor. The adaptive sensor is capable of providing panchromatic images as well as the spectra of desired pixels. This addresses the data challenge of hyperspectral tracking by recording spectral data only as needed. Spectral likelihoods are integrated into a data association algorithm in a Bayesian fashion to minimize the likelihood of misidentification. A framework for controlling spectral data collection is developed by incorporating motion segmentation information and prior information from Gaussian Sum filter (GSF) movement predictions over a multi-model forecasting set. An intersection mask of the surveillance area is extracted from OpenStreetMap data and incorporated into the tracking algorithm to perform online refinement of the multiple-model set. The proposed system is tested using challenging and realistic scenarios generated in an adverse environment.

  13. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, a kurtosis index, and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computational complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed from the wavelet coefficients at several characteristic scales, rather than at just one characteristic scale, which improves the accuracy of transient detection. Because noise influences the characteristic wavelet coefficients, an adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals and to the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is well suited to extracting periodic impulsive features from strong background noise.
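
    A compact sketch of the processing chain described above (entropy-guided parameter choice, kurtosis-based scale selection, soft-thresholding), using the complex Morlet wavelet from PyWavelets; the candidate bandwidths, scale grid, and threshold rule are illustrative assumptions, not the paper's fast optimizer:

        import numpy as np
        import pywt
        from scipy.stats import kurtosis

        def soft_threshold(c, t):
            return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

        def extract_transients(sig, fs, scales=None, n_keep=3):
            """Entropy-guided Morlet parameter choice, kurtosis-based scale
            selection, and soft-thresholded reconstruction of transients."""
            if scales is None:
                scales = np.arange(1, 65)
            best = None
            # 1) pick the complex-Morlet bandwidth giving the minimum
            #    Shannon entropy of the scalogram energy distribution
            for name in ('cmor0.5-1.0', 'cmor1.0-1.0', 'cmor2.0-1.0', 'cmor4.0-1.0'):
                C, _ = pywt.cwt(sig, scales, name, sampling_period=1.0 / fs)
                p = np.abs(C) ** 2
                p = p / p.sum()
                H = -np.sum(p * np.log(p + 1e-12))
                if best is None or H < best[0]:
                    best = (H, name, C)
            _, name, coeffs = best
            # 2) keep the scales whose coefficients have the largest kurtosis
            keep = np.argsort(kurtosis(np.abs(coeffs), axis=1))[-n_keep:]
            # 3) soft-threshold the kept coefficients and sum their real
            #    parts as a crude reconstruction of the transient content
            out = np.zeros(len(sig))
            for i in keep:
                c = coeffs[i].real
                out += soft_threshold(c, 2.0 * np.median(np.abs(c)))
            return out, name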

  14. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in recent years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
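
    The abstract does not list the module's API, so rather than guess at PyEEG's function names, here is a self-contained NumPy implementation of one classic EEG feature family such a toolbox typically provides, the Hjorth parameters:

        import numpy as np

        def hjorth_parameters(x):
            """Hjorth activity, mobility, and complexity of a 1-D signal."""
            x = np.asarray(x, dtype=float)
            dx = np.diff(x)
            ddx = np.diff(dx)
            var_x, var_dx, var_ddx = np.var(x), np.var(dx), np.var(ddx)
            activity = var_x
            mobility = np.sqrt(var_dx / var_x)
            complexity = np.sqrt(var_ddx / var_dx) / mobility
            return activity, mobility, complexity

    Feature functions of this shape (a 1-D array in, a few scalars out) compose naturally into the feature vectors that downstream classifiers consume.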

  15. Extracting features from protein sequences to improve deep extreme learning machine for protein fold recognition.

    PubMed

    Ibrahim, Wisam; Abadeh, Mohammad Saniee

    2017-03-27

    Protein fold recognition, the prediction of a protein's three-dimensional structural class, is an important problem in bioinformatics. One of the most challenging tasks in the protein fold recognition problem is the extraction of efficient features from amino-acid sequences to obtain better classifiers. In this paper, we propose six descriptors to extract features from protein sequences. These descriptors are applied in the first stage of a three-stage framework, PCA-DELM-LDA, to extract feature vectors from the amino-acid sequences. Principal Component Analysis (PCA) is used to reduce the number of extracted features. The extracted feature vectors are then used alongside the original features to improve the performance of the Deep Extreme Learning Machine (DELM) in the second stage. Four new features extracted in the second stage are used in the third stage by Linear Discriminant Analysis (LDA) to classify the instances into 27 folds. The proposed framework is implemented on the independent and combined feature sets of the SCOP datasets. The experimental results show that the feature vectors extracted in the first stage improve the performance of the DELM in extracting new useful features in the second stage.

  16. VHDL implementation of feature-extraction algorithm for the PANDA electromagnetic calorimeter

    NASA Astrophysics Data System (ADS)

    Guliyev, E.; Kavatsyuk, M.; Lemmens, P. J. J.; Tambave, G.; Löhner, H.; Panda Collaboration

    2012-02-01

    A simple, efficient, and robust feature-extraction algorithm, developed for the digital front-end electronics of the electromagnetic calorimeter of the PANDA spectrometer at FAIR, Darmstadt, is implemented in VHDL for a commercial 16-bit 100 MHz sampling ADC. The source code is available as an open-source project and is adaptable for other projects and sampling ADCs. Best performance with different types of signal sources can be achieved through flexible parameter selection. The on-line data processing in the FPGA enables the construction of an almost dead-time-free data acquisition system, which is successfully evaluated as a first step towards building a complete trigger-less readout chain. Prototype setups are studied to determine the dead-time of the implemented algorithm, the rate of false triggering, the timing performance, and event correlations.
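
    The VHDL source itself is not shown here; as a language-neutral illustration of what such front-end feature extraction does with ADC samples, the sketch below performs baseline estimation, leading-edge triggering, and amplitude/time-of-peak extraction (the window length and threshold factor are placeholder parameters):

        import numpy as np

        def pulse_features(samples, baseline_len=32, k_sigma=5.0):
            """Extract (amplitude, peak index) pairs from a sampled waveform."""
            s = np.asarray(samples, dtype=float)
            base = s[:baseline_len].mean()        # baseline estimate
            noise = s[:baseline_len].std()
            x = s - base
            thr = k_sigma * noise                 # trigger threshold
            feats, i = [], baseline_len
            while i < len(x):
                if x[i] > thr:                    # leading-edge trigger
                    j = i
                    while j + 1 < len(x) and x[j + 1] >= x[j]:
                        j += 1                    # climb to the local peak
                    feats.append((x[j], j))       # amplitude, time-of-peak
                    while j < len(x) and x[j] > thr:
                        j += 1                    # skip the rest of the pulse
                    i = j
                i += 1
            return feats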

  17. Improving Naive Bayes with Online Feature Selection for Quick Adaptation to Evolving Feature Usefulness

    SciTech Connect

    Pon, R K; Cardenas, A F; Buttler, D J

    2007-09-19

    The definition of what makes an article interesting varies from user to user and continually evolves even for a single user. As a result, for news recommendation systems, useless document features cannot be determined a priori, and all features are usually considered for interestingness classification. Consequently, the presence of currently useless features degrades classification performance [1], particularly over the initial set of news articles being classified. The initial set of documents is critical for a user when considering which particular news recommendation system to adopt. To address these problems, we introduce an improved version of the naive Bayes classifier with online feature selection. We use correlation to determine the utility of each feature and take advantage of the conditional independence assumption used by naive Bayes for online feature selection and classification. The augmented naive Bayes classifier performs 28% better than the traditional naive Bayes classifier in recommending news articles from the Yahoo! RSS feeds.
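
    A minimal sketch of the correlation-based utility idea described above, assuming binary bag-of-words features and binary interest labels; the top-k re-ranking after each labelled article stands in for the paper's online selection, whose exact update rule the abstract does not give:

        import numpy as np

        def feature_utilities(X, y):
            """Pearson correlation magnitude between each binary feature
            column of X (documents x features) and the class labels y."""
            X = np.asarray(X, dtype=float)
            y = np.asarray(y, dtype=float)
            Xc = X - X.mean(axis=0)
            yc = y - y.mean()
            denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12
            return np.abs(Xc.T @ yc) / denom

        def select_online(X_seen, y_seen, k=100):
            """Re-rank features after each new labelled article; only the
            k most correlated features then participate in the naive Bayes
            classification of the next article."""
            u = feature_utilities(X_seen, y_seen)
            return np.argsort(u)[-k:]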

  18. Adaptive remote sensing technology for feature recognition and tracking

    NASA Technical Reports Server (NTRS)

    Wilson, R. G.; Sivertson, W. E., Jr.; Bullock, G. F.

    1979-01-01

    A technology development plan designed to reduce the data load and data-management problems associated with global study and monitoring missions is described, with heavy emphasis placed on developing mission capabilities that eliminate the collection of unnecessary data. Improved data selectivity can be achieved through sensor automation correlated with the real-time needs of data users. The first phase of the plan includes the Feature Identification and Location Experiment (FILE), which is scheduled for the 1980 Shuttle flight. The FILE experiment is described with attention given to technology needs, the development plan, feature recognition and classification, and cloud-snow detection/discrimination. Pointing, tracking, and navigation receive particular consideration. This technology plan is viewed as an alternative to real-time acquisition approaches that are based on extensive onboard format and inventory processing and on reliance upon global-satellite-system navigation data.

  19. Adaptable, high recall, event extraction system with minimal configuration

    PubMed Central

    2015-01-01

    Background Biomedical event extraction has been a major focus of biomedical natural language processing (BioNLP) research since the first BioNLP shared task was held in 2009. Accordingly, a large number of event extraction systems have been developed. Most such systems, however, have been developed for specific tasks and/or incorporated task-specific settings, making their application to new corpora and tasks problematic without modification of the systems themselves. There is thus a need for event extraction systems that can achieve high levels of accuracy when applied to corpora in new domains, without the need for exhaustive tuning or modification, whilst retaining competitive levels of performance. Results We have enhanced our state-of-the-art event extraction system, EventMine, to alleviate the need for task-specific tuning. Task-specific details are specified in a configuration file, while extensive task-specific parameter tuning is avoided through the integration of a weighting method, a covariate shift method, and their combination. The task-specific configuration and weighting method have been employed within the context of two different sub-tasks of BioNLP shared task 2013, i.e., Cancer Genetics (CG) and Pathway Curation (PC), removing the need to modify the system specifically for each task. With minimal task-specific configuration and tuning, EventMine achieved 1st place in the PC task and 2nd place in the CG task, achieving the highest recall for both tasks. The system has been further enhanced following the shared task by incorporating the covariate shift method and entity generalisations based on the task definitions, leading to further performance improvements. Conclusions We have shown that it is possible to apply a state-of-the-art event extraction system to new tasks with high levels of performance, without having to modify the system internally. Both covariate shift and weighting methods are useful in facilitating the production of high recall systems.

  20. Acousto-Optic Technology for Topographic Feature Extraction and Image Analysis.

    DTIC Science & Technology

    1981-03-01

    This report contains all findings of the acousto-optic technology study for feature extraction conducted by Deft Laboratories Inc. for the U.S. Army...topographic feature extraction and image analysis using acousto-optic (A-O) technology. A conclusion of this study was that A-O devices are potentially

  1. Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction

    DTIC Science & Technology

    2014-03-27

    Master's thesis by Matthew P. Crosser, Captain, USAF, presented to the Faculty of the Department of Electrical and Computer Engineering, Air Force Institute of Technology (AFIT-ENG-14-M-21), March 2014. Approved for public release; distribution unlimited.

  2. Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu

    Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction from patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern sets; that is, on how the patterns are distributed in the feature space. As one reason for this, we have pointed out that ICA features are obtained by increasing only their independence, even when class information is available. In this context, we can expect that higher-performance features can be obtained by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to see how well maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives good features with high separability compared with principal component analysis and conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. The results show that better recognition accuracy is obtained using our proposed SICA. Furthermore, we show that pattern features extracted by SICA are better than those extracted by maximizing the Mahalanobis distance alone.

  3. Comparison of half and full-leaf shape feature extraction for leaf classification

    NASA Astrophysics Data System (ADS)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main source of information for leaf features, and most of the current literature on leaf identification utilizes the whole leaf for feature extraction in the identification process. In this paper, a study of half-leaf feature extraction for leaf identification is carried out and the results are compared with those obtained from leaf identification based on full-leaf feature extraction. Identification and classification are based on shape features represented as cosine and sine angles. Six single classifiers from WEKA and seven ensemble methods are used to compare their performance accuracies on this data. The classifiers were trained using 65 leaves to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The results show that half-leaf feature extraction can be used for leaf identification without decreasing the predictive accuracy.

  4. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  5. Feature Extraction from Parallel/Distributed Transient CFD Solutions

    DTIC Science & Technology

    2007-11-02

    visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that makes physical sense.

  6. Image Algebra Application to Image Measurement and Feature Extraction

    NASA Astrophysics Data System (ADS)

    Ritter, Gerhard X.; Wilson, Joseph N.; Davidson, Jennifer L.

    1989-03-01

    It has been well established that the AFATL (Air Force Armament Technical Laboratory) Image Algebra is capable of expressing all image-to-image transformations [1,2] and that it is ideally suited for parallel image transformations [3,4]. In this paper we show how the algebra can also be applied to compactly express image-to-feature transforms, including such sequential image-to-feature transforms as chain coding.

  7. Cognitive, adaptive, and behavioral features in Joubert syndrome.

    PubMed

    Bulgheroni, Sara; D'Arrigo, Stefano; Signorini, Sabrina; Briguglio, Marilena; Di Sabato, Maria Lucia; Casarano, Manuela; Mancini, Francesca; Romani, Marta; Alfieri, Paolo; Battini, Roberta; Zoppello, Marina; Tortorella, Gaetano; Bertini, Enrico; Leuzzi, Vincenzo; Valente, Enza Maria; Riva, Daria

    2016-12-01

    Joubert syndrome (JS) is a recessive neurodevelopmental disorder characterized by a distinctive cerebellar and brainstem malformation recognizable on brain imaging, the so-called molar tooth sign. The full spectrum of cognitive and behavioral phenotypes typical of JS is still far from being elucidated. The aim of this multicentric study was to define the clinical phenotype and neurobehavioral features of a large cohort of subjects with a neuroradiologically confirmed diagnosis of JS. Fifty-four patients aged 10 months to 29 years were enrolled. Each patient underwent a neurological evaluation as well as psychiatric and neuropsychological assessments. Global cognitive functioning was remarkably variable with Full IQ/General Quotient ranging from 32 to 129. Communication skills appeared relatively preserved with respect to both Daily Living and Socialization abilities. The motor domain was the area of greatest vulnerability, with a negative impact on personal care, social, and academic skills. Most children did not show maladaptive behaviors consistent with a psychiatric diagnosis but approximately 40% of them presented emotional and behavioral problems. We conclude that intellectual disability remains a hallmark but cannot be considered a mandatory diagnostic criterion of JS. Despite the high variability in the phenotypic spectrum and the extent of multiorgan involvement, nearly one quarter of JS patients had a favorable long-term outcome with borderline cognitive deficit or even normal cognition. Most of JS population also showed relatively preserved communication skills and overall discrete behavioral functioning in everyday life, independently from the presence and/or level of intellectual disability. © 2016 Wiley Periodicals, Inc.

  8. An adaptive method with integration of multi-wavelet based features for unsupervised classification of SAR images

    NASA Astrophysics Data System (ADS)

    Chamundeeswari, V. V.; Singh, D.; Singh, K.

    2007-12-01

    In single-band and single-polarized synthetic aperture radar (SAR) images, the information is limited to intensity and texture only, and it is very difficult to interpret such SAR images without any a priori information. For unsupervised classification of SAR images, M-band wavelet decomposition is performed on the SAR image, and sub-band selection on the basis of energy levels is applied to improve the classification results, since sparse representation of sub-bands degrades classification performance. Textural features are then obtained from the selected sub-bands and integrated with intensity features. An adaptive neuro-fuzzy algorithm is used to improve computational efficiency by extracting significant features. K-means classification is performed on the extracted features and land features are labeled. This classification algorithm involves user-defined parameters. To remove the user dependency and to obtain the maximum achievable classification accuracy, an algorithm is developed in this paper that optimizes classification accuracy with respect to the parameters involved in the segmentation process. This is very helpful for developing an automated land-cover monitoring system with SAR, where the optimized parameters need to be identified only once and can then be applied to SAR imagery of the same scene obtained year after year. A single-band, single-polarized SAR image is classified into water, urban, and vegetation areas using this method, and an overall classification accuracy in the range of 85.92%-93.70% is obtained by comparison with ground truth data.

  9. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management, and accident discovery. With the increasing amount of WAMI collections and of feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale, or big data, problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks on WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each containing a small subset of WAMI images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually on the assigned slave node. Finally, the feature extraction results are sent to the Hadoop Distributed File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
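
    A minimal Hadoop Streaming mapper in the spirit of the pipeline described above, assuming each input line is a path to one image of a split and that OpenCV is installed on the slave nodes; the 32-bin histogram is a placeholder for a real WAMI feature extractor:

        #!/usr/bin/env python
        """Hadoop Streaming mapper: reads one image path per input line and
        emits '<path> TAB <comma-separated feature vector>'."""
        import sys
        import cv2
        import numpy as np

        for line in sys.stdin:
            path = line.strip()
            if not path:
                continue
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue                    # unreadable split entry; skip it
            hist = cv2.calcHist([img], [0], None, [32], [0, 256]).ravel()
            hist /= hist.sum() + 1e-12      # normalise so features are comparable
            print("%s\t%s" % (path, ",".join("%.6f" % v for v in hist)))

    Launched through the standard streaming jar, the emitted key-value pairs land on HDFS, where a reducer or a follow-on job can aggregate them over the whole collection.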

  10. Adaptive feature selection using v-shaped binary particle swarm optimization

    PubMed Central

    Dong, Hongbin; Zhou, Xiurong

    2017-01-01

    Feature selection is an important preprocessing method in machine learning and data mining. This process can be used not only to reduce the amount of data to be analyzed but also to build models with stronger interpretability based on fewer features. Traditional feature selection methods evaluate the dependency and redundancy of features separately, which leads to a lack of measurement of their combined effect. Moreover, a greedy search considers only the optimization of the current round and thus cannot be a global search. To evaluate the combined effect of different subsets in the entire feature space, an adaptive feature selection method based on V-shaped binary particle swarm optimization is proposed. In this method, the fitness function is constructed using the correlation information entropy. Feature subsets are regarded as individuals in a population, and the feature space is searched using V-shaped binary particle swarm optimization. The above procedure overcomes the hard constraint on the number of features, enables the combined evaluation of each subset as a whole, and improves the search ability of conventional binary particle swarm optimization. The proposed algorithm is an adaptive method with respect to the number of feature subsets. The experimental results show the advantages of optimizing the feature subsets using the V-shaped transfer function and confirm the effectiveness and efficiency of the feature subsets obtained under different classifiers. PMID:28358850
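
    A minimal sketch of the V-shaped update at the heart of the method: the velocity update is the standard PSO rule, and the V-shaped transfer function turns velocity magnitude into a probability of complementing each bit of the feature-selection mask (the |tanh| form is one common V-shaped choice; the paper's fitness function based on correlation information entropy is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(0)

        def v_transfer(v):
            """V-shaped transfer function: velocity to flip probability."""
            return np.abs(np.tanh(v))

        def update_particle(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
            """One binary-PSO step. A bit is COMPLEMENTED with probability
            V(v), which is what distinguishes V-shaped from S-shaped
            transfer functions. x is a 0/1 feature-selection mask."""
            r1, r2 = rng.random(x.size), rng.random(x.size)
            v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
            flip = rng.random(x.size) < v_transfer(v)
            x = np.where(flip, 1 - x, x)
            return x, v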

  11. Application of fuzzy logic to feature extraction from images of agricultural material

    NASA Astrophysics Data System (ADS)

    Thompson, Bruce T.

    1999-11-01

    Imaging technology has extended itself from performing gauging on machined parts, to verifying labeling on consumer products, to quality inspection of a variety of man-made and natural materials. Much of this has been made possible by faster computers and by algorithms used to extract useful information from the image. In applications involving agricultural material, specifically tobacco leaves, the tremendous amount of natural variability in color and texture creates new challenges for image feature extraction. As with many imaging applications, the problem can be expressed as "I see it in the image; how can I get the computer to recognize it?" In this application, the goal is to measure the amount of thick stem pieces in an image of tobacco leaves. By backlighting the leaf, the stems appear dark on a lighter background. The difference between the lightness of leaf and the darkness of stem depends on the orientation of the leaf and the amount of folding. Because of this, any image thresholding approach must be adaptive. Another factor that allows us to distinguish stem from leaf is shape: the stem is long and narrow, while dark folded leaf is larger and more oblate. These criteria, under the image collection limitations, make this a good application for fuzzy logic. Several generalized classification algorithms, such as fuzzy c-means and fuzzy learning vector quantization, are evaluated and compared. In addition, fuzzy thresholding based on image shape and compactness is applied to this application.
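
    Of the algorithms the record evaluates, fuzzy c-means is easy to state compactly; a minimal implementation on 1-D pixel intensities follows (the cluster count and fuzzifier m are the usual knobs, set here to illustrative defaults):

        import numpy as np

        def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
            """Fuzzy c-means on 1-D pixel intensities: returns the
            membership matrix U (n x c) and the cluster centres."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x, dtype=float).ravel()
            U = rng.random((x.size, c))
            U /= U.sum(axis=1, keepdims=True)
            for _ in range(iters):
                Um = U ** m
                centres = (Um * x[:, None]).sum(axis=0) / Um.sum(axis=0)
                d = np.abs(x[:, None] - centres[None, :]) + 1e-12
                inv = d ** (-2.0 / (m - 1.0))      # standard FCM update
                U_new = inv / inv.sum(axis=1, keepdims=True)
                if np.abs(U_new - U).max() < tol:
                    return U_new, centres
                U = U_new
            return U, centres

    Assigning a pixel to "stem" when its membership in the dark cluster exceeds a cutoff makes the threshold adaptive to the lighting of each leaf, which is the property the record asks for.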

  12. Semantic Feature Extraction for Brain CT Image Clustering Using Nonnegative Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Liu, Weixiang; Peng, Fei; Feng, Shu; You, Jiangsheng; Chen, Ziqiang; Wu, Jian; Yuan, Kehong; Ye, Datian

    Brain computed tomography (CT) image based computer-aided diagnosis (CAD) systems are helpful for clinical diagnosis and treatment. However, it is challenging to extract significant features for analysis because CT images come from different people and different CT operators. In this study, we apply nonnegative matrix factorization (NMF) to extract both appearance and histogram based semantic features of images for clustering analysis. Our experimental results on normal and tumor CT images demonstrate that NMF can discover local features for both visual content and histogram based semantics, and the clustering results show that the semantic image features are superior to low-level visual features.
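
    A minimal sketch of histogram-based semantic feature extraction with NMF, using scikit-learn; the histogram representation, component count, and two-cluster K-means are stand-ins for details the abstract does not specify:

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.cluster import KMeans

        def nmf_histogram_features(images, n_components=8, bins=64):
            """Stack per-image grey-level histograms into a nonnegative
            matrix V (images x bins), factor V ~= W H, and cluster the
            images on the rows of W (their encodings over the parts H)."""
            V = np.stack([np.histogram(img, bins=bins, range=(0, 255))[0]
                          for img in images]).astype(float)
            V /= V.sum(axis=1, keepdims=True) + 1e-12
            model = NMF(n_components=n_components, init='nndsvda', max_iter=500)
            W = model.fit_transform(V)      # per-image semantic encoding
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(W)
            return W, model.components_, labels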

  13. Optimized shift-invariant wavelet packet feature extraction for electroencephalographic evoked responses.

    PubMed

    Harris, Arief R; Schwerdtfeger, Karsten; Strauss, Daniel J

    2008-01-01

    Local discriminant bases (LDB) have a major disadvantage: their representation is sensitive to signal translations, so the discriminant features will not be consistent when the same signal is applied with a shift. To overcome this problem, an approximate shift-invariant feature extraction based on local discriminant bases is introduced. This technique is based on an approximate shift-invariant wavelet packet decomposition which integrates a cost function for the decimation decision in each sub-band expansion. The technique gives a consistent best-tree selection in both top-down and bottom-up search methods. It also provides a consistent wavelet shape in a shape-adapted wavelet method to determine the best wavelet library for a particular signal. This method is advantageous especially in electroencephalographic (EEG) measurement, in which there is an inter-individual shift in time of the signals. An application of this method is provided by the discrimination between signals with transcranial magnetic stimulation (TMS) and acoustic-somatosensory stimulation (ASS).

  14. Stent enhancement in digital x-ray fluoroscopy using an adaptive feature enhancement filter

    NASA Astrophysics Data System (ADS)

    Jiang, Yuhao; Zachary, Josey

    2016-03-01

    Fluoroscopic images are characterized by low contrast and high noise; simply lowering the radiation dose would render them unreadable. Feature enhancement filters can reduce patient dose by acquiring images at low dose settings and then digitally restoring them to the original quality. In this study, a stent contrast enhancement filter is developed to selectively improve the contrast of the stent contour without dramatically boosting the image noise, including quantum noise and clinical background noise. Gabor directional filter banks are implemented to detect the edges and orientations of the stent, with a high orientation resolution of 9°. To optimize the use of the information obtained from the Gabor filters, a computerized Monte Carlo simulation followed by an ROC study is used to find the best nonlinear operator. The next stage of the filtering process extracts symmetrical parts of the stent, using global and local symmetry measures. The information gathered from the previous two filter stages is used to generate a stent contour map. The contour map is then scaled and added back to the original image to obtain a contrast-enhanced stent image. We also apply a spatio-temporal channelized Hotelling observer model and other numerical measures to characterize the response of the filters and the contour map, in order to optimize the parameter selections for image quality. The results are compared to those obtained with an adaptive unsharp masking filter developed previously. It is shown that the stent enhancement filter can effectively improve stent detection and differentiation in interventional fluoroscopy.
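
    A minimal sketch of the first filtering stage, assuming scikit-image: a Gabor bank swept in 9° orientation steps, keeping the per-pixel maximum magnitude response and the winning orientation (the frequency value is a placeholder; the paper's nonlinear operator and symmetry stages are not reproduced):

        import numpy as np
        from skimage.filters import gabor

        def gabor_orientation_map(image, frequency=0.15, step_deg=9):
            """Apply a Gabor bank at step_deg orientation increments and
            keep, per pixel, the maximum magnitude response and the index
            of the orientation that produced it."""
            thetas = np.deg2rad(np.arange(0, 180, step_deg))
            best_mag = np.zeros(image.shape)
            best_idx = np.zeros(image.shape, dtype=int)
            for i, t in enumerate(thetas):
                re, im = gabor(image, frequency=frequency, theta=t)
                mag = np.hypot(re, im)
                mask = mag > best_mag
                best_mag[mask] = mag[mask]
                best_idx[mask] = i
            return best_mag, best_idx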

  15. Association Rule Based Feature Extraction for Character Recognition

    NASA Astrophysics Data System (ADS)

    Dua, Sumeet; Singh, Harpreet

    Association rules that represent isomorphisms among data have gained importance in exploratory data analysis because they can find inherent, implicit, and interesting relationships among data. They are also commonly used in data mining to extract the conditions among attribute values that occur together frequently in a dataset [1]. These rules have a wide range of applications, notably in the financial and retail sectors (marketing and sales) and in medicine.

  16. (Almost) Automatic Semantic Feature Extraction from Technical Text

    DTIC Science & Technology

    1994-01-01

    independent manner. The next section will describe an existing NLP system (KUDZU) which has been developed at Mississippi State University...EXISTING KUDZU SYSTEM The research described in this paper is part of a larger ongoing project called the KUDZU (Knowledge Under Development from Zero Understanding) project. This project is aimed at exploring the automation of extraction of information from technical texts. The KUDZU system

  17. Fault feature extraction of gearbox by using overcomplete rational dilation discrete wavelet transform on signals measured from vibration sensors

    NASA Astrophysics Data System (ADS)

    Chen, Binqiang; Zhang, Zhousuo; Sun, Chuang; Li, Bing; Zi, Yanyang; He, Zhengjia

    2012-11-01

    Gearbox fault diagnosis is very important for preventing catastrophic accidents. Vibration signals of gearboxes measured by sensors are useful and dependable as they carry key information related to the mechanical faults in gearboxes. Effective signal processing techniques are in great demand for extracting the fault features contained in the collected gearbox vibration signals. The overcomplete rational dilation discrete wavelet transform (ORDWT) enjoys attractive properties such as better shift-invariance, adjustable time-frequency distributions, and flexible wavelet atoms of tunable oscillation in comparison with the classical dyadic wavelet transform (DWT). Due to these advantages, ORDWT is presented as a versatile tool that can be adapted to the analysis of gearbox fault features of different types, especially the non-stationary and transient characteristics of the signals. Aiming to extract the various types of fault features confronted in gearbox fault diagnosis, a fault feature extraction technique based on ORDWT is proposed in this paper. In the routine of the proposed technique, ORDWT is used as the pre-processing decomposition tool, and a corresponding post-processing method is combined with ORDWT to extract the fault feature of a specific type. For extracting periodic impulses in the signal, an impulse matching algorithm is presented. In this algorithm, ORDWT bases of varied time-frequency distributions and varied oscillatory natures are adopted; moreover, an improved signal impulsiveness measure derived from kurtosis is developed for choosing optimal ORDWT bases that best match the hidden periodic impulses. For demodulation purposes, an improved instantaneous time-frequency spectrum (ITFS), based on the combination of ORDWT and the Hilbert transform, is presented. For signal denoising applications, ORDWT is enhanced by a neighboring-coefficient shrinkage strategy as well as a subband selection step to reveal the buried transient vibration contents. The

  18. Feature Extraction of High-Dimensional Structures for Exploratory Analytics

    DTIC Science & Technology

    2013-04-01

    development of a method to gain insight into HDD, particularly in the application of an analytic strategy to terrorist data. Evaluation datasets include the COIL-20 dataset, a word-features dataset, and a Netflix dataset. Although the manifold learners are

  19. Group Component Analysis for Multiblock Data: Common and Individual Feature Extraction.

    PubMed

    Zhou, Guoxu; Cichocki, Andrzej; Zhang, Yu; Mandic, Danilo P

    2016-11-01

    Real-world data are often acquired as a collection of matrices rather than as a single matrix. Such multiblock data are naturally linked and typically share some common features while at the same time exhibiting their own individual features, reflecting the underlying data generation mechanisms. To exploit the linked nature of the data, we propose a new framework for common and individual feature extraction (CIFE) which identifies and separates the common and individual features of the multiblock data. Two efficient algorithms, termed common orthogonal basis extraction (COBE), are proposed to extract the common basis shared by all data, regardless of whether the number of common components is known beforehand. Feature extraction is then performed on the common and individual subspaces separately, by incorporating dimensionality reduction and blind source separation techniques. Comprehensive experimental results on both synthetic and real-world data demonstrate significant advantages of the proposed CIFE method in comparison with the state-of-the-art.

  20. Arranging the order of feature-extraction operations in pattern classification

    NASA Astrophysics Data System (ADS)

    Hwang, Shu-Yuen; Tsai, Ronlon

    1992-02-01

    The typical process of statistical pattern classification is to first extract features from an object presented in an input image and then, using the Bayesian decision rule, to compute the a posteriori probabilities that the object will be recognized by the system. When the recursive Bayesian decision rule is used in this process, the feature-extraction phase can be interleaved with the classification phase, so that the a posteriori probabilities after adding each feature can be computed one by one. There are two reasons for thinking about which feature should be extracted first and which should go next. First, feature extraction is usually very time consuming; the extraction of any global feature from an object needs time at least on the order of the size of the object. Second, it is often unnecessary to use all features to obtain a final classification; the a posteriori probabilities of some models will become zero after only a few features have been used. The problem is how to arrange the order of feature-extraction operations such that a minimum number of operations suffices for the right classification. This paper presents two information-theoretic heuristics for predicting the performance of feature-extraction operations; the prediction is then used to arrange the order of these operations. The first heuristic is the power of discrimination of each operation. The second heuristic is the power of justification of each operation, used in the special case where some points in the feature space do not belong to any model. Both heuristics are computed from the distributions of the models. Experimental results and a comparison with our previous work are presented.
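
    A minimal sketch of one way to score a "power of discrimination": rank each candidate operation by the mutual information between its feature and the class labels, normalized by the operation's extraction cost, so that cheap and informative features are extracted first (the cost weighting is an assumption; the paper's exact heuristics are computed from the model distributions):

        import numpy as np

        def entropy(v):
            _, counts = np.unique(v, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def mutual_information(feature, labels):
            """I(F;C): the expected reduction in class entropy once the
            feature has been extracted."""
            f, c = np.asarray(feature), np.asarray(labels)
            cond = sum((f == v).mean() * entropy(c[f == v]) for v in np.unique(f))
            return entropy(c) - cond

        def extraction_order(features, labels, costs):
            """Order the operations by discrimination per unit cost."""
            scores = [mutual_information(f, labels) / c
                      for f, c in zip(features, costs)]
            return list(np.argsort(scores)[::-1])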

  1. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  2. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias [Knoxville, TN; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN; Wang, Xiaoling [San Jose, CA

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
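
    Both patent records describe the same processing: amplitude statistics plus a time-frequency analysis, turned into feature vectors. A minimal sketch of such a feature vector follows, with illustrative choices (moment statistics; eight time-averaged STFT bands) where the claims leave the details open:

        import numpy as np
        from scipy.stats import skew, kurtosis
        from scipy.signal import stft

        def biosensor_feature_vector(x, fs):
            """Feature vector from a time-dependent biosensor signal:
            amplitude statistics plus coarse STFT band-energy features."""
            x = np.asarray(x, dtype=float)
            amp = [x.mean(), x.std(), skew(x), kurtosis(x), x.min(), x.max()]
            f, t, Z = stft(x, fs=fs, nperseg=256)
            power = np.abs(Z) ** 2
            # split the spectrum into 8 bands; average each band over time
            bands = np.array_split(power, 8, axis=0)
            tf = [b.mean() for b in bands]
            return np.array(amp + tf)

    Comparing such vectors against those from the control signal is then what yields the toxicity-related parameter the patents describe.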

  3. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference

    NASA Astrophysics Data System (ADS)

    Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi

    Feature extraction for mental fatigue and relaxation states is helpful for understanding the mechanisms of mental fatigue and for finding effective relaxation techniques in sustained work environments. Experimental data on human states are often affected by external and internal factors, which increases the difficulty of extracting common features. The aim of this study is to explore appropriate methods to eliminate individual differences and enhance common features. Mental fatigue and relaxation experiments were executed on 12 subjects. An integrated evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance, and neurophysiological signals, especially EEG signals. With consideration of individual differences, the common features across multiple estimators testify to the effectiveness of relaxation in sustained mental work. Relaxation techniques can be practically applied to prevent the accumulation of mental fatigue and maintain mental health. The proposed feature extraction methods are widely applicable for obtaining common features and relax the restrictions on subject selection and experiment design.

  4. TIN based image segmentation for man-made feature extraction

    NASA Astrophysics Data System (ADS)

    Jiang, Wanshou; Xie, Junfeng

    2005-10-01

    Traditionally, the splitting-and-merging approach to image segmentation is based on a quadtree data structure, which is not convenient for expressing the topography of regions, line segments, and other information. A new framework is discussed in this paper: TIN-based image segmentation and grouping, in which edge information and region information are integrated directly. Firstly, a constrained triangle mesh is constructed from edge segments extracted by EDISON or another algorithm. Then, region growing based on triangles is performed to generate a coarse segmentation. Finally, the regions are further merged using perceptual organization rules.

  5. Semantic Control of Feature Extraction from Natural Scenes

    PubMed Central

    2014-01-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  6. Semantic control of feature extraction from natural scenes.

    PubMed

    Neri, Peter

    2014-02-05

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect.

  7. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors, and it is of great value to detect the resulting fault features automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is put forward based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform such that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced as it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by the maximization of the fault feature ratio, a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without being artificially specified. The proposed method is applied to two engineering cases, with signals collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings compared with the Fourier transform, the direct Hilbert envelope spectrum, different wavelet transforms, and spectral kurtosis.
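
    The fault feature ratio is not given in closed form in the abstract; a hedged reading consistent with "derived from digital Hilbert demodulation analysis" is the share of envelope-spectrum energy concentrated at the fault characteristic frequency and its harmonics:

        import numpy as np
        from scipy.signal import hilbert

        def fault_feature_ratio(x, fs, fault_freq, n_harmonics=3, tol_bins=2):
            """Share of Hilbert envelope-spectrum energy at the fault
            characteristic frequency and its first few harmonics (an
            illustrative interpretation, not the paper's exact formula)."""
            env = np.abs(hilbert(x))
            env = env - env.mean()
            spec = np.abs(np.fft.rfft(env)) ** 2
            freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
            total = spec.sum() + 1e-12
            num = 0.0
            for h in range(1, n_harmonics + 1):
                k = np.argmin(np.abs(freqs - h * fault_freq))
                lo, hi = max(k - tol_bins, 0), k + tol_bins + 1
                num += spec[lo:hi].sum()
            return num / total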

  8. Feature Selection for Natural Language Call Routing Based on Self-Adaptive Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Koromyslova, A.; Semenkina, M.; Sergienko, R.

    2017-02-01

    The text classification problem for natural language call routing is considered in this paper. Seven different term weighting methods were applied. As a dimensionality reduction method, feature selection based on a self-adaptive GA was considered. k-NN, linear SVM, and an ANN were used as classification algorithms. The tasks of the research are the following: to study text classification for natural language call routing with different term weighting methods and classification algorithms, and to investigate the feature selection method based on the self-adaptive GA. The numerical results showed that the most effective term weighting is TRR and the most effective classification algorithm is the ANN. Feature selection with the self-adaptive GA provides an improvement in classification effectiveness and a significant dimensionality reduction with all term weighting methods and all classification algorithms.

  9. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    SciTech Connect

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural, and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to the multi-criteria MST to account for multiple criteria important for an application.
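
    A minimal sketch of the MST-based agglomeration step, assuming the polygonal partitions and their pairwise dissimilarities are already computed: build the minimum spanning tree of the region adjacency graph, cut the edges that exceed the dissimilarity criterion, and read the segments off as connected components:

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

        def mst_segments(n_regions, edges, dissim, cut_threshold):
            """edges: list of (i, j) adjacent-partition pairs; dissim:
            their dissimilarity weights. Returns a segment id per
            partition after cutting over-threshold MST edges."""
            rows = [i for i, _ in edges]
            cols = [j for _, j in edges]
            graph = csr_matrix((dissim, (rows, cols)),
                               shape=(n_regions, n_regions))
            mst = minimum_spanning_tree(graph).tocoo()
            keep = mst.data <= cut_threshold        # dissimilarity criterion
            pruned = csr_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                                shape=(n_regions, n_regions))
            _, labels = connected_components(pruned, directed=False)
            return labels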

  10. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data

    PubMed Central

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-01-01

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions. PMID:26861308

  11. A Novel Feature Extraction Method with Feature Selection to Identify Golgi-Resident Protein Types from Imbalanced Data.

    PubMed

    Yang, Runtao; Zhang, Chengjin; Gao, Rui; Zhang, Lina

    2016-02-06

    The Golgi Apparatus (GA) is a major collection and dispatch station for numerous proteins destined for secretion, plasma membranes and lysosomes. The dysfunction of GA proteins can result in neurodegenerative diseases. Therefore, accurate identification of protein subGolgi localizations may assist in drug development and understanding the mechanisms of the GA involved in various cellular processes. In this paper, a new computational method is proposed for identifying cis-Golgi proteins from trans-Golgi proteins. Based on the concept of Common Spatial Patterns (CSP), a novel feature extraction technique is developed to extract evolutionary information from protein sequences. To deal with the imbalanced benchmark dataset, the Synthetic Minority Over-sampling Technique (SMOTE) is adopted. A feature selection method called Random Forest-Recursive Feature Elimination (RF-RFE) is employed to search the optimal features from the CSP based features and g-gap dipeptide composition. Based on the optimal features, a Random Forest (RF) module is used to distinguish cis-Golgi proteins from trans-Golgi proteins. Through the jackknife cross-validation, the proposed method achieves a promising performance with a sensitivity of 0.889, a specificity of 0.880, an accuracy of 0.885, and a Matthews Correlation Coefficient (MCC) of 0.765, which remarkably outperforms previous methods. Moreover, when tested on a common independent dataset, our method also achieves a significantly improved performance. These results highlight the promising performance of the proposed method to identify Golgi-resident protein types. Furthermore, the CSP based feature extraction method may provide guidelines for protein function predictions.
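
    Of the features named above, the g-gap dipeptide composition is simple enough to state exactly; a minimal implementation follows (here g = 1 means one residue is skipped between the pair members; check this against your convention, since the abstract does not define the offset):

        import numpy as np

        AMINO = 'ACDEFGHIKLMNPQRSTVWY'
        INDEX = {a: i for i, a in enumerate(AMINO)}

        def g_gap_dipeptide_composition(seq, g=1):
            """400-dimensional g-gap dipeptide composition: the normalised
            frequency of each ordered amino-acid pair whose members are
            separated by g residues."""
            v = np.zeros((20, 20))
            for i in range(len(seq) - g - 1):
                a, b = seq[i], seq[i + g + 1]
                if a in INDEX and b in INDEX:
                    v[INDEX[a], INDEX[b]] += 1
            total = v.sum()
            return (v / total).ravel() if total else v.ravel()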

  12. Comparison study of feature extraction methods in structural damage pattern recognition

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Chen, Bo; Swartz, R. Andrew

    2011-04-01

    This paper compares the performance of various feature extraction methods applied to structural sensor measurements acquired in situ from a decommissioned bridge under realistic damage scenarios. Three feature extraction methods are applied to the sensor data to generate feature vectors for normal and damaged structure data patterns. The investigated feature extraction methods include both time-domain and frequency-domain methods. The evaluation of the feature extraction methods is performed by examining distance values among different patterns, distance values among feature vectors within the same pattern, and the pattern recognition success rate. The test data used in the comparison study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case data sets, including undamaged cases and pier settlement cases (of different depths), are used to test the separation of feature vectors among different patterns, and the pattern recognition success rate for different feature extraction methods is reported.

  13. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    PubMed Central

    Wei, Gao; Zongxi, Song

    2017-01-01

    As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and a decision map is proposed to deal with these problems simultaneously. Three decision maps are designed, including a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), to make the results preserve more energy and edge information. The SM contains the local structure feature captured by the Laplacian of a Gaussian (LoG), and the EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse-representation-based method to improve the speed of the algorithm. The proposed approach also improves the quality of the fused results by enhancing the contrast and preserving more structure and energy information from the source images. The experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods. PMID:28321246
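
    The two decision maps can be approximated in a few lines; the sketch below (synthetic images, assumed window size and sigma, SciPy standing in as the toolkit) computes a structure map from a Laplacian-of-Gaussian response and an energy map from the local mean square deviation, then fuses by a combined SEM rule:

        import numpy as np
        from scipy.ndimage import gaussian_laplace, uniform_filter

        def structure_map(img, sigma=2.0):
            """Local structure strength via the Laplacian of Gaussian (LoG)."""
            return np.abs(gaussian_laplace(img.astype(float), sigma))

        def energy_map(img, size=7):
            """Local mean square deviation (variance) as an energy measure."""
            img = img.astype(float)
            mean = uniform_filter(img, size)
            return uniform_filter(img ** 2, size) - mean ** 2

        img_a = np.random.rand(64, 64)   # stand-ins for registered CT / MR slices
        img_b = np.random.rand(64, 64)
        # per pixel, keep the source with the stronger structure-plus-energy response
        sem_a = structure_map(img_a) + energy_map(img_a)
        sem_b = structure_map(img_b) + energy_map(img_b)
        fused = np.where(sem_a >= sem_b, img_a, img_b)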

  14. Knowledge-based topographic feature extraction in medical images

    NASA Astrophysics Data System (ADS)

    Qian, JianZhong; Khair, Mohammad M.

    1995-08-01

    Diagnostic medical imaging often involves variations in patient anatomy, camera mispositioning, or other imperfect imaging conditions. These variations contribute to uncertainty about the shapes and boundaries of objects in images. As a result, image features such as traditional edges sometimes cannot be identified reliably and completely. We describe a knowledge-based system that is able to reason about such uncertainties and use partial and locally ambiguous information to infer the shapes and locations of objects in an image. The system uses directional topographic features (DTFs), such as ridges and valleys, labeled from the underlying intensity surface, to correlate with the intrinsic anatomical information. By using domain-specific knowledge, the reasoning system can deduce significant anatomical landmarks based upon these DTFs, and can cope with uncertainties and fill in missing information. A succession of levels of representation for visual information and an active process of uncertain reasoning about this visual information are employed to reliably achieve the goal of image analysis. These landmarks can then be used in localization of anatomy of interest, image registration, or other clinical processing. The successful application of this system to a large set of planar cardiac images from nuclear medicine studies has demonstrated its efficiency and accuracy.

  15. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    PubMed Central

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier transform is insufficient for time-frequency analysis, and kurtosis is deficient in detecting cyclic transients. These factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) was therefore designed as a more effective measure for detecting cyclic transients. The Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed effective in capturing a more detailed local time-frequency description of the signal and in restricting the frequency aliasing components of the analysis results. In this manuscript, the authors combine the CK with the RSGWPT to propose an improved kurtogram for extracting weak fault features from bearing vibration signals. The analysis of simulated signals and real application cases demonstrates that the proposed method is more accurate and effective in extracting weak fault features. PMID:27649171

  16. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram.

    PubMed

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-09-13

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier transform is insufficient for time-frequency analysis, and kurtosis is deficient in detecting cyclic transients. These factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) was therefore designed as a more effective measure for detecting cyclic transients. The Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed effective in capturing a more detailed local time-frequency description of the signal and in restricting the frequency aliasing components of the analysis results. In this manuscript, the authors combine the CK with the RSGWPT to propose an improved kurtogram for extracting weak fault features from bearing vibration signals. The analysis of simulated signals and real application cases demonstrates that the proposed method is more accurate and effective in extracting weak fault features.
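
    For readers unfamiliar with the measure, one common definition of correlated kurtosis can be coded directly; the sketch below is an illustrative Python rendering (periodic wrap-around at the signal edges is a simplification), not the authors' implementation:

        import numpy as np

        def correlated_kurtosis(y, T, M=1):
            """CK_M(T) = sum_n (prod_{m=0..M} y[n-mT])^2 / (sum_n y[n]^2)^(M+1)."""
            y = np.asarray(y, dtype=float)
            prod = y.copy()
            for m in range(1, M + 1):
                prod = prod * np.roll(y, m * T)   # align copies shifted by the fault period
            return np.sum(prod ** 2) / np.sum(y ** 2) ** (M + 1)

        # a periodic impulse train scores far higher than white noise at the true period
        n = np.arange(4096)
        impulses = (n % 128 == 0).astype(float)
        noise = 0.1 * np.random.randn(4096)
        print(correlated_kurtosis(impulses, T=128), correlated_kurtosis(noise, T=128))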

  17. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox often carry important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in a wavelet basis. With the proposed method, both the impulse time and the period of the transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified on simulated signals as well as practical gearbox vibration signals. A comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.

  18. Recent development of feature extraction and classification multispectral/hyperspectral images: a systematic literature review

    NASA Astrophysics Data System (ADS)

    Setiyoko, A.; Dharma, I. G. W. S.; Haryanto, T.

    2017-01-01

    Multispectral and hyperspectral data acquired from satellite sensors have the ability to detect various objects on the earth, ranging from low-scale to high-scale modeling. These data are increasingly being used to produce geospatial information for rapid analysis by running feature extraction or classification processes. Applying the most suitable model for this data mining is still challenging because there are issues regarding accuracy and computational cost. The aim of this research is to develop a better understanding of object feature extraction and classification applied to satellite images by systematically reviewing related recent research projects. The method used in this research is based on the PRISMA statement. After deriving important points from trusted sources, pixel-based and texture-based feature extraction emerge as promising techniques for further analysis in the recent development of feature extraction and classification.

  19. 3D Feature Point Extraction from LIDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are a proper alternative for use as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the online LiDAR data to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back-propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature and the z-component of the surface normal, which are calculated directly from the LiDAR point cloud. Subsequently, the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
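
    The candidate-detection step is easy to reproduce with OpenCV; the sketch below uses a synthetic range image and assumed detector parameters, and simply marks where the trained network would then accept or reject candidates:

        import cv2
        import numpy as np

        range_img = np.random.rand(64, 512).astype(np.float32)   # stand-in range image
        img8 = cv2.normalize(range_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

        # Shi-Tomasi corner candidates on the 2D range image
        corners = cv2.goodFeaturesToTrack(img8, maxCorners=200,
                                          qualityLevel=0.01, minDistance=5)
        # each candidate would next be scored by the trained network using its 2D
        # patch plus 3D cues (curvature, z-component of the surface normal)
        print(0 if corners is None else len(corners), "candidates")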

  20. Autonomous Time-Frequency Cropping and Feature-Extraction Algorithms for Classification of LPI Radar Modulations

    DTIC Science & Technology

    2006-06-01

    This June 2006 thesis by Eric R. Zilberman develops autonomous time-frequency cropping and feature-extraction algorithms for the classification of low probability of intercept (LPI) radar modulations. Nine LPI radar modulations are described, including FMCW, Frank, P1, P2, P3, P4, T1(n), and T2(n).

  1. Spatio-temporal feature-extraction techniques for isolated gesture recognition in Arabic sign language.

    PubMed

    Shanableh, Tamer; Assaleh, Khaled; Al-Rousan, M

    2007-06-01

    This paper presents various spatio-temporal feature-extraction techniques with applications to online and offline recognition of isolated Arabic Sign Language gestures. The temporal features of a video-based gesture are extracted through forward, backward, and bidirectional predictions. The prediction errors are thresholded and accumulated into one image that represents the motion of the sequence. The motion representation is then followed by spatial-domain feature extractions. As such, the temporal dependencies are eliminated and the whole video sequence is represented by a few coefficients. The linear separability of the extracted features is assessed, and its suitability for both parametric and nonparametric classification techniques is elaborated upon. The proposed feature-extraction scheme was complemented by simple classification techniques, namely, K nearest neighbor (KNN) and Bayesian, i.e., likelihood ratio, classifiers. Experimental results showed classification performance ranging from 97% to 100% recognition rates. To validate our proposed technique, we have conducted a series of experiments using the classical way of classifying data with temporal dependencies, namely, hidden Markov models (HMMs). Experimental results revealed that the proposed feature-extraction scheme combined with simple KNN or Bayesian classification yields comparable results to the classical HMM-based scheme. Moreover, since the proposed scheme compresses the motion information of an image sequence into a single image, it allows for using simple classification techniques where the temporal dimension is eliminated. This is actually advantageous for both computational and storage requirements of the classifier.
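
    In its simplest forward-prediction form, the idea reduces to thresholded frame differences accumulated into one motion image; the Python sketch below uses synthetic frames and an assumed threshold, illustrating the concept rather than the authors' exact predictor:

        import numpy as np

        def motion_image(frames, thresh=25):
            """Accumulate thresholded forward prediction errors over a gesture clip."""
            acc = np.zeros(frames[0].shape, dtype=np.uint8)
            for prev, curr in zip(frames[:-1], frames[1:]):
                err = np.abs(curr.astype(int) - prev.astype(int))
                acc |= (err > thresh).astype(np.uint8)   # binary accumulation
            return acc * 255

        frames = [np.random.randint(0, 256, (120, 160), np.uint8) for _ in range(10)]
        motion = motion_image(frames)   # one image summarizing the whole sequence,
                                        # ready for spatial-domain feature extraction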

  2. One Feature of Adaptive Lesson Study in Thailand: Designing a Learning Unit

    ERIC Educational Resources Information Center

    Inprasitha, Maitree

    2011-01-01

    In Thailand, the Center for Research in Mathematics Education (CRME) has been implementing Japanese Lesson Study (LS) since 2002. An adaptive feature of this implementation was the incorporation of four phases of the Open Approach as a teaching approach within the three steps of the LS process. Four phases of this open approach are: 1) Posing…

  3. Spectral Regression Based Fault Feature Extraction for Bearing Accelerometer Sensor Signals

    PubMed Central

    Xia, Zhanguo; Xia, Shixiong; Wan, Ling; Cai, Shiyu

    2012-01-01

    Bearings are not only the most important elements but also a common source of failures in rotary machinery. Bearing fault prognosis technology has been receiving more and more attention recently, in particular because it plays an increasingly important role in avoiding the occurrence of accidents. Therein, fault feature extraction (FFE) of bearing accelerometer sensor signals is essential to highlight representative features of bearing conditions for machinery fault diagnosis and prognosis. This paper proposes a spectral regression (SR)-based approach for fault feature extraction from original features, including time, frequency and time-frequency domain features of bearing accelerometer sensor signals. SR is a novel regression framework for efficient regularized subspace learning and feature extraction, and it uses the least squares method to obtain the best projection direction, rather than computing the density matrix of features, so it also has an advantage in dimensionality reduction. The effectiveness of the SR-based method is validated experimentally on vibration signal data acquired from bearings. The experimental results indicate that SR can reduce the computation cost and preserve more structure information about different bearing faults and severities, and it is demonstrated that the proposed feature extraction scheme has an advantage over other similar approaches. PMID:23202017

  4. Biometric analysis of the palm vein distribution by means two different techniques of feature extraction

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical means of identification. Furthermore, these patterns can be used for venipuncture in the health field to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate the skin and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our method of analysis is composed of several steps, the first of which is the enhancement of the acquired images, implemented with spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of vein patterns. The above process is focused on recognizing people through images of their palm-dorsal vein distributions obtained under near infrared light. This work compares two different feature extraction techniques, moments and veincode. The classification task is achieved using Artificial Neural Networks. Two databases are used to analyze the performance of the algorithms: the first is owned by the Hong Kong Polytechnic University and the second is our own.
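
    The segmentation stage maps naturally onto OpenCV primitives; in the sketch below the input is synthetic, CLAHE stands in for the paper's spatial-filter enhancement, and all parameter values are assumptions:

        import cv2
        import numpy as np

        nir = np.random.randint(0, 256, (240, 320), np.uint8)   # stand-in NIR palm image
        enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(nir)

        # adaptive thresholding picks out the darker (absorbing) vein pixels
        veins = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                      cv2.THRESH_BINARY_INV, blockSize=15, C=4)

        # mathematical morphology cleans speckle and bridges small gaps
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        veins = cv2.morphologyEx(veins, cv2.MORPH_OPEN, kernel)
        veins = cv2.morphologyEx(veins, cv2.MORPH_CLOSE, kernel)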

  5. A spatial division clustering method and low dimensional feature extraction technique based indoor positioning system.

    PubMed

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-22

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and large error margins; therefore, clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outages of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance in radio map clustering, and also shows better robustness and adaptability with regard to the asymmetric matching problem.

  6. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and large error margins; therefore, clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outages of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing the online matching computational burden, the new positioning system exhibits advantageous performance in radio map clustering, and also shows better robustness and adaptability with regard to the asymmetric matching problem. PMID:24451470
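
    The fine-localization step can be sketched with scikit-learn's KernelPCA; the fingerprint data, kernel choice, and dimensionality below are placeholders:

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rss = np.random.uniform(-90, -30, size=(500, 20))   # 500 fingerprints, 20 APs
        kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3)
        low_dim = kpca.fit_transform(rss)                   # low-dimensional features

        # match an online sample against the radio map in the reduced space
        query = kpca.transform(rss[:1])
        nearest = np.argmin(np.linalg.norm(low_dim - query, axis=1))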

  7. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

    This article presents a novel method for the diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. The application of pattern classification, feature selection, and feature reduction methods to the analysis of normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted. These include carefully selected linear and nonlinear time-domain, wavelet, and entropy features. By examining different feature selection and feature reduction methods such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for the diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.
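
    A compressed sketch of this chain, with PCA standing in for the GA/GP/GDA reduction alternatives and random placeholders instead of real PCG features, might look as follows in scikit-learn:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X = np.random.randn(200, 32)        # 32 extracted time/wavelet/entropy features
        y = np.random.randint(0, 4, 200)    # normal, AS, MS, MR

        # reduce to four informative directions, then classify with an SVM
        clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
        print("CV accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())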

  8. Feature Extraction on Brain Computer Interfaces using Discrete Dyadic Wavelet Transform: Preliminary Results

    NASA Astrophysics Data System (ADS)

    Gareis, I.; Gentiletti, G.; Acevedo, R.; Rufiner, L.

    2011-09-01

    The purpose of this work is to evaluate different feature extraction alternatives for detecting the event-related evoked potential signal in brain computer interfaces, trying to minimize the time employed and the classification error, in terms of the sensitivity and specificity of the method, while looking for alternatives to coherent averaging. In this context, the results obtained by performing the feature extraction with the discrete dyadic wavelet transform using different mother wavelets are presented. For the classification, a single-layer perceptron was used. The results obtained with and without the wavelet decomposition were compared, showing an improvement in the classification rate, specificity, and sensitivity for the feature vectors obtained using some mother wavelets.
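
    Assuming the PyWavelets package as the transform implementation, a minimal version of this feature extractor plus single-layer perceptron could read:

        import numpy as np
        import pywt
        from sklearn.linear_model import Perceptron

        def dwt_features(epoch, wavelet="db4", level=4):
            """Concatenate dyadic wavelet coefficients into one feature vector."""
            return np.concatenate(pywt.wavedec(epoch, wavelet, level=level))

        epochs = np.random.randn(100, 256)        # stand-in single-channel EEG epochs
        X = np.array([dwt_features(e) for e in epochs])
        y = np.random.randint(0, 2, 100)          # target vs. non-target labels
        clf = Perceptron(random_state=0).fit(X, y)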

  9. Extraction of Lesion-Partitioned Features and Retrieval of Contrast-Enhanced Liver Images

    PubMed Central

    Yu, Mei; Feng, Qianjin; Yang, Wei; Gao, Yang; Chen, Wufan

    2012-01-01

    The most critical step in grayscale medical image retrieval systems is feature extraction. Understanding the interrelatedness between the characteristics of lesion images and the corresponding imaging features is crucial for image training, as well as for feature extraction. A feature-extraction algorithm is developed based on the different imaging properties of lesions and on the discrepancy in density between the lesions and their surrounding normal liver tissue in triple-phase contrast-enhanced computed tomographic (CT) scans. The algorithm includes mainly two processes: (1) distance transformation, which is used to divide the lesion into distinct regions and represents the spatial structure distribution, and (2) representation using a bag of visual words (BoW) based on regions. The evaluation of this system based on the proposed feature-extraction algorithm shows excellent retrieval results for three types of liver lesions visible on triple-phase CT scan images. The results of the proposed feature-extraction algorithm show that while single-phase scans achieve average precisions of 81.9%, 80.8%, and 70.2%, dual- and triple-phase scans achieve 86.3% and 88.0%. PMID:22988480
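
    Process (1) is essentially a Euclidean distance transform followed by binning; the sketch below (synthetic mask, three assumed regions) labels rim, middle, and core pixels from which per-region BoW descriptors could then be computed:

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        mask = np.zeros((64, 64), dtype=bool)
        mask[16:48, 16:48] = True                  # stand-in lesion mask
        dist = distance_transform_edt(mask)        # depth of each lesion pixel

        edges = np.linspace(0, dist.max(), 4)      # three concentric regions
        regions = (np.digitize(dist, edges[1:-1]) + 1) * mask
        # regions: 0 = background, 1 = rim, 2 = middle, 3 = core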

  10. Classification of mammographic masses: influence of regions used for feature extraction on the classification performance

    NASA Astrophysics Data System (ADS)

    Wagner, Florian; Wittenberg, Thomas; Elter, Matthias

    2010-03-01

    Computer-assisted diagnosis (CADx) for the characterization of mammographic masses as benign or malignant has a very high potential to help radiologists during the critical process of diagnostic decision making. By default, the characterization of mammographic masses is performed by extracting features from a region of interest (ROI) depicting the mass. To investigate the influence of the region on the classification performance, textural, morphological, frequency-based, as well as moment-based features are calculated in subregions of the ROI, which has been delineated manually by an expert. The investigated subregions are (a) the semi-automatically segmented area which includes only the core of the mass, (b) the outer border region of the mass, and (c) the combination of the outer and the inner border region, referred to as the mass margin. To extract the border region and the margin of a mass, an extended version of the rubber band straightening transform (RBST) was developed. Furthermore, the effectiveness of the features extracted from the RBST-transformed border region and mass margin is compared to the effectiveness of the same features extracted from the untransformed regions. After the feature extraction process, a preferably optimal feature subset is selected for each feature extractor. Classification is done using a k-NN classifier. The classification performance was evaluated using the area Az under the receiver operating characteristic curve. A publicly available mammography database was used as the data set. Results showed that the manually drawn ROI leads to superior classification performance for the morphological feature extractors, that the transformed outer border region and the mass margin are not suitable for moment-based features but yield promising results for textural and frequency-based features, and that the mass margin, which combines the inner and the outer border region, leads to better classification performance compared to the outer border region alone.

  11. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    PubMed Central

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160

  12. A neuro-fuzzy system for extracting environment features based on ultrasonic sensors.

    PubMed

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract in a short time, while the vehicle is moving, features of the environment. Particularly, the approach shown in this paper has been focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case.

  13. Uncertainty analysis of quantitative imaging features extracted from contrast-enhanced CT in lung tumors

    PubMed Central

    Yang, Jinzhong; Zhang, Lifei; Fave, Xenia J.; Fried, David V.; Stingo, Francesco C.; Ng, Chaan S.; Court, Laurence E.

    2016-01-01

    Purpose: To assess the uncertainty of quantitative imaging features extracted from contrast-enhanced computed tomography (CT) scans of lung cancer patients in terms of the dependency on the time after contrast injection and the feature reproducibility between scans. Methods: Eight patients underwent contrast-enhanced CT scans of lung tumors in two sessions 2–7 days apart. Each session included 6 CT scans of the same anatomy taken every 15 seconds, starting 50 seconds after contrast injection. Image features based on the intensity histogram, co-occurrence matrix, neighborhood gray-tone difference matrix, run-length matrix, and geometric shape were extracted from the tumor for each scan. Spearman’s correlation was used to examine the dependency of features on the time after contrast injection, with values over 0.50 considered time-dependent. Concordance correlation coefficients were calculated to examine the reproducibility of each feature between times of scans after contrast injection and between scanning sessions, with values greater than 0.90 considered reproducible. Results: The features were found to have little dependency on the time between the contrast injection and the CT scan. Most features were reproducible between times of scans after contrast injection and between scanning sessions. Some features were more reproducible when they were extracted from a CT scan performed at a longer time after contrast injection. Conclusion: The quantitative imaging features tested here are mostly reproducible and show little dependency on the time after contrast injection. PMID:26745258
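
    The reproducibility criterion used here is Lin's concordance correlation coefficient, which is short enough to state directly in code; the data below are synthetic:

        import numpy as np

        def concordance_ccc(x, y):
            """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean_x - mean_y)^2)."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            cov = np.mean((x - x.mean()) * (y - y.mean()))
            return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

        scan1 = np.random.rand(8)                  # a feature's values, session one
        scan2 = scan1 + 0.01 * np.random.randn(8)  # session two, nearly identical
        print(concordance_ccc(scan1, scan2) > 0.90)   # reproducible by the criterion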

  14. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) (spatial) and 8-band (multispectral) WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow employed includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model, generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file. In the second stage, ortho-rectification was carried out using ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 data was estimated to be 0.25 m using more than 10 well-distributed GCPs. Next, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases, a bare-earth DEM does not represent the true ground elevation. Hence, the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud data were normalized based on the DTM in order to reduce the effect of undulating terrain. We normalized the vegetation point cloud values by subtracting the ground points (DEM) from the LiDAR point cloud. A normalized digital surface model (nDSM) or CHM was calculated from the LiDAR data by subtracting the DEM from the DSM.
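
    The normalization at the heart of this workflow is a raster subtraction; a minimal numpy sketch (synthetic grids, assumed 2 m height threshold) is:

        import numpy as np

        dem = np.random.uniform(10, 12, (100, 100))       # bare-earth elevations (m)
        dsm = dem + np.random.uniform(0, 25, (100, 100))  # first-return surface (m)

        chm = dsm - dem                                   # nDSM / CHM: normalized heights
        above_ground = chm > 2.0                          # candidate trees and buildings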

  15. Design and adaptation of a novel supercritical extraction facility for operation in a glove box for recovery of radioactive elements

    SciTech Connect

    Kumar, V. Suresh; Kumar, R.; Sivaraman, N.; Ravisankar, G.; Vasudeva Rao, P. R.

    2010-09-15

    The design and development of a novel supercritical extraction experimental facility adapted for safe operation in a glove box for the recovery of radioactive elements from waste is described. The apparatus incorporates a high pressure extraction vessel, reciprocating pumps for delivering supercritical fluid and reagent, a back pressure regulator, and a collection chamber. All these components of the system have been specially designed for glove box adaptation and made modular to facilitate their replacement. Confinement of these materials must be ensured in a glove box to protect the operator and prevent contamination to the work area. Since handling of radioactive materials under high pressure (30 MPa) and temperature (up to 333 K) is involved in this process, the apparatus needs elaborate safety features in the design of the equipment, as well as modification of a standard glove box to accommodate the system. As a special safety feature to contain accidental leakage of carbon dioxide from the extraction vessel, a safety vessel has been specially designed and placed inside the glove box. The extraction vessel was enclosed in the safety vessel. The safety vessel was also incorporated with pressure sensing and controlling device.

  16. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction

    PubMed Central

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-01-01

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction from time-domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time-domain signal. Feature vectors are obtained from morphologic features of the time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracy, with a lower computation time than traditional methods: a recognition accuracy of 97.8% can be achieved with a recognition time of below 1 s, making it very suitable for Φ-OTDR online vibration monitoring. PMID:26131671

  17. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction.

    PubMed

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-06-29

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction from time-domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time-domain signal. Feature vectors are obtained from morphologic features of the time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracy, with a lower computation time than traditional methods: a recognition accuracy of 97.8% can be achieved with a recognition time of below 1 s, making it very suitable for Φ-OTDR online vibration monitoring.

  18. Feature extraction from terahertz pulses for classification of RNA data via support vector machines

    NASA Astrophysics Data System (ADS)

    Yin, Xiaoxia; Ng, Brian W.-H.; Fischer, Bernd; Ferguson, Bradley; Mickan, Samuel P.; Abbott, Derek

    2006-12-01

    This study investigates binary and multi-class classification via support vector machines (SVMs). Several groups of two-dimensional features are extracted from frequency orientation components, which results in the effective classification of terahertz (T-ray) pulses for the discrimination of RNA data and various powder samples. For each classification task, a pair of extracted feature vectors from the terahertz signals corresponding to each class is viewed as two coordinates and plotted in the same coordinate system. The current classification method extracts specific features from the Fourier spectrum, without applying an extra feature extractor. This method shows that SVMs can employ conventional feature extraction methods for a T-ray classification task. Moreover, we discuss the challenges faced by this method. A pairwise classification method is applied for the multi-class classification of powder samples. Plots of learning vectors assist in understanding the classification task, exhibiting improved clustering, clear learning margins, and the fewest support vectors. This paper highlights the ability to use a small number of features (2D features) for classification by analyzing the frequency spectrum, which greatly reduces the computational complexity of achieving the preferred classification performance.

  19. Image segmentation using globally optimal growth in three dimensions with an adaptive feature set

    NASA Astrophysics Data System (ADS)

    Taylor, David C.; Barrett, William A.

    1994-09-01

    A globally optimal region growing algorithm for 3D segmentation of anatomical objects is developed. The notion of simple 3D connected component labelling is extended to enable the combination of arbitrary features in the segmentation process. The algorithm uses a hybrid octree-btree structure to segment an object of interest in an ordered fashion. This tree structure overcomes the computational complexity of global optimality in three dimensions. The segmentation process is controlled by a set of active features, which work in concert to extract the object of interest. The cost function used to enforce the order is based on the combination of active features. The characteristics of the data throughout the volume dynamically influence which features are active. A foundation for applying user interaction with the object directly to the feature set is established. The result is a system which analyzes user input and neighborhood data, and optimizes the tools used in the segmentation process accordingly.

  20. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
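
    The scatter-difference idea is compact enough to sketch: discriminant directions are leading eigenvectors of Sb - C*Sw, so no matrix inversion (and hence no singularity issue) arises. The Python below is an illustrative rendering with random data and an assumed balance constant C, not the paper's code:

        import numpy as np

        def msd_directions(X, y, C=1.0, n_dirs=2):
            """Leading eigenvectors of the scatter difference Sb - C*Sw."""
            mean = X.mean(0)
            Sb = np.zeros((X.shape[1], X.shape[1]))
            Sw = np.zeros_like(Sb)
            for c in np.unique(y):
                Xc = X[y == c]
                d = (Xc.mean(0) - mean)[:, None]
                Sb += len(Xc) * d @ d.T                     # between-class scatter
                Sw += (Xc - Xc.mean(0)).T @ (Xc - Xc.mean(0))  # within-class scatter
            vals, vecs = np.linalg.eigh(Sb - C * Sw)        # symmetric eigenproblem
            return vecs[:, np.argsort(vals)[::-1][:n_dirs]]

        X = np.random.randn(60, 10); y = np.repeat([0, 1, 2], 20)
        features = X @ msd_directions(X, y)   # projected discriminant features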

  1. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improvement of hyperspectral images classification. Nonparametric feature extraction methods show better performance compared to parametric ones when distribution of classes is non normal-like. Moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for classification of hyperspectral images. The proposed method has no free parameter and its novelty can be discussed in two parts. First, neighbor samples are specified by using Parzen window idea for determining local mean. Second, two new weighting functions are used. Samples close to class boundaries will have more weight in the between-class scatter matrix formation and samples close to class mean will have more weight in the within-class scatter matrix formation. The experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method has better performance in comparison with some other nonparametric and parametric feature extraction methods.

  2. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor’s pixel frequency and immediate usage of each input pixel for the feature-construction process avoids the dependence on memory-intensive conventional strategies like integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors, applied in the speeded up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobit). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video procession at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated by the practical application of vehicle recognition, achieving the expected high accuracy which is comparable to previous work.

  3. A Novel Feature Selection Strategy for Enhanced Biomedical Event Extraction Using the Turku System

    PubMed Central

    Xia, Jingbo; Fang, Alex Chengyu; Zhang, Xing

    2014-01-01

    Feature selection is of paramount importance for text-mining classifiers with high-dimensional features. The Turku Event Extraction System (TEES) is the best performing tool in the GENIA BioNLP 2009/2011 shared tasks, and it relies heavily on high-dimensional features. This paper describes research which, based on an implementation of an accumulated effect evaluation (AEE) algorithm applying the greedy search strategy, analyses the contribution of every single feature class in TEES with a view to identifying important features and modifying the feature set accordingly. With an updated feature set, a new system is acquired with enhanced performance which achieves an increased F-score of 53.27% up from 51.21% for Task 1 under strict evaluation criteria and 57.24% according to the approximate span and recursive criterion. PMID:24800214

  4. A novel feature selection strategy for enhanced biomedical event extraction using the Turku system.

    PubMed

    Xia, Jingbo; Fang, Alex Chengyu; Zhang, Xing

    2014-01-01

    Feature selection is of paramount importance for text-mining classifiers with high-dimensional features. The Turku Event Extraction System (TEES) is the best performing tool in the GENIA BioNLP 2009/2011 shared tasks, and it relies heavily on high-dimensional features. This paper describes research which, based on an implementation of an accumulated effect evaluation (AEE) algorithm applying the greedy search strategy, analyses the contribution of every single feature class in TEES with a view to identifying important features and modifying the feature set accordingly. With an updated feature set, a new system is acquired with enhanced performance which achieves an increased F-score of 53.27% up from 51.21% for Task 1 under strict evaluation criteria and 57.24% according to the approximate span and recursive criterion.
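
    The accumulated-effect idea can be caricatured as a greedy search over feature classes; in the sketch below a cross-validated logistic regression stands in for a full TEES evaluation run, and all data and class names are synthetic:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        X = np.random.randn(300, 12)
        y = np.random.randint(0, 2, 300)
        classes = {f"class_{i}": [i] for i in range(12)}   # feature class -> columns

        def score(cols):
            return cross_val_score(LogisticRegression(max_iter=500),
                                   X[:, sorted(cols)], y, cv=3, scoring="f1").mean()

        kept = set(range(12))
        best = score(kept)
        for name, cols in classes.items():     # greedily drop classes that do not help
            trial = kept - set(cols)
            if trial and score(trial) >= best:
                kept, best = trial, score(trial)
        print(sorted(kept), round(best, 3))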

  5. Adaptive semisupervised feature selection without graph construction for very-high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Xi; Qi, Jinzi; Chen, Yushi; Hua, Lizhong; Shao, Guofan

    2016-04-01

    Semisupervised feature selection methods can improve classification performance and enhance model comprehensibility with few labeled objects. However, most of the existing methods require graph construction beforehand, and the resulting heavy computational cost may bring about the failure to accurately capture the local geometry of data. To overcome the problem, adaptive semisupervised feature selection (ASFS) is proposed. In ASFS, the goodness of each feature is measured by linear objective functions based on loss functions and probability distribution matrices. By alternatively optimizing model parameters and automatically adjusting the probabilities of boundary objects, ASFS can measure the genuine characteristics of the data and then rank and select features. The experimental results attest to the effectiveness and practicality of the method in comparison with the latest and state-of-the-art methods on a Worldview II image and a Quickbird II image.

  6. [Lithology feature extraction of CASI hyperspectral data based on fractal signal algorithm].

    PubMed

    Tang, Chao; Chen, Jian-Ping; Cui, Jing; Wen, Bo-Tao

    2014-05-01

    Hyperspectral data are characterized by the combination of image and spectrum and by a large data volume; dimension reduction is the main research direction. Band selection and feature extraction are the primary methods used for this objective. In the present article, the authors tested methods applied for lithology feature extraction from hyperspectral data. Based on the self-similarity of hyperspectral data, the authors explored the application of a fractal algorithm to lithology feature extraction from CASI hyperspectral data. The "carpet method" was corrected and then applied to calculate the fractal value of every pixel in the hyperspectral data. The results show that fractal information highlights the exposed bedrock lithology better than the original hyperspectral data. The fractal signal and characteristic scale are influenced by the spectral curve shape, the initial scale selection, and the iteration step. At present, research on the fractal signal of spectral curves is rare, implying the necessity of further quantitative analysis and investigation of its physical implications.

  7. Feature extraction of the first difference of EMG time series for EMG pattern recognition.

    PubMed

    Phinyomark, Angkoon; Quaine, Franck; Charbonnier, Sylvie; Serviere, Christine; Tarpin-Bernard, Franck; Laurillau, Yann

    2014-11-01

    This paper demonstrates the utility of a differencing technique to transform surface EMG signals measured during both static and dynamic contractions such that they become more stationary. The technique was evaluated by three stationarity tests consisting of the variation of two statistical properties, i.e., mean and standard deviation, and the reverse arrangements test. As a result of the proposed technique, the first difference of EMG time series became more stationary compared to the original measured signal. Based on this finding, the performance of time-domain features extracted from raw and transformed EMG was investigated via an EMG classification problem (i.e., eight dynamic motions and four EMG channels) on data from 18 subjects. The results show that the classification accuracies of all features extracted from the transformed signals were higher than features extracted from the original signals for six different classifiers including quadratic discriminant analysis. On average, the proposed differencing technique improved classification accuracies by 2-8%.
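
    The transform itself is a one-liner, after which the usual time-domain features are computed from the differenced series; the sketch below uses synthetic EMG in place of real recordings:

        import numpy as np

        emg = np.random.randn(4, 2000)      # stand-in for 4 channels of raw EMG
        demg = np.diff(emg, axis=1)         # first difference: more stationary series

        mav = np.mean(np.abs(demg), axis=1)                       # mean absolute value
        wl = np.sum(np.abs(np.diff(demg, axis=1)), axis=1)        # waveform length
        zc = np.sum(np.diff(np.sign(demg), axis=1) != 0, axis=1)  # zero crossings
        features = np.concatenate([mav, wl, zc])    # per-channel features for the classifier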

  8. Focal-plane CMOS wavelet feature extraction for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Olyaei, Ashkan; Genov, Roman

    2005-09-01

    Kernel-based pattern recognition paradigms such as support vector machines (SVM) require computationally intensive feature extraction methods for high-performance real-time object detection in video. The CMOS sensory parallel processor architecture presented here computes delta-sigma (ΔΣ)-modulated Haar wavelet transform on the focal plane in real time. The active pixel array is integrated with a bank of column-parallel first-order incremental oversampling analog-to-digital converters (ADCs). Each ADC performs distributed spatial focal-plane sampling and concurrent weighted average quantization. The architecture is benchmarked in SVM face detection on the MIT CBCL data set. At 90% detection rate, first-level Haar wavelet feature extraction yields a 7.9% reduction in the number of false positives when compared to classification with no feature extraction. The architecture yields 1.4 GMACS simulated computational throughput at SVGA imager resolution at 8-bit output depth.

  9. Sparse and low-rank feature extraction for the classification of target's tracking capability

    NASA Astrophysics Data System (ADS)

    Rasti, Behnood; Gudmundsson, Karl S.

    2016-09-01

    A feature extraction-based classification method is proposed in this paper for verifying the target-tracking capability of the human neck. Here, the target moves in predefined trajectory patterns at three difficulty levels. The dataset used for each pattern is obtained from two groups of people, one with whiplash-associated disorder (WAD) and an asymptomatic group, who behave in both sincere and feigned manners. The aim is to distinguish the WAD group from the asymptomatic one and also to discriminate sincere behavior from feigned behavior. Sparse and low-rank feature extraction is proposed to extract the most informative features from training samples, and each sample is then assigned to the class with which it has the highest correlation coefficient. The classification results are improved by fusing the results of the three patterns.

  10. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution to rapidly capture three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, vehicles and so on) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for the extraction of road features for HADMs.

  11. Real-time face detection and lip feature extraction using field-programmable gate arrays.

    PubMed

    Nguyen, Duy; Halupka, David; Aarabi, Parham; Sheikholeslami, Ali

    2006-08-01

    This paper proposes a new technique for face detection and lip feature extraction. A real-time field-programmable gate array (FPGA) implementation of the two proposed techniques is also presented. Face detection is based on a naive Bayes classifier that classifies an edge-extracted representation of an image. Using the edge representation significantly reduces the model's size to only 5184 bytes, which is 2417 times smaller than a comparable statistical modeling technique, while achieving an 86.6% correct detection rate under various lighting conditions. Lip feature extraction uses the contrast around the lip contour to extract the height and width of the mouth, metrics that are useful for speech filtering. The proposed FPGA system occupies only 15050 logic cells, about six times fewer than a current comparable FPGA face detection system.

  12. Hybrid facial image feature extraction and recognition for non-invasive chronic fatigue syndrome diagnosis.

    PubMed

    Chen, Yunhua; Liu, Weijian; Zhang, Ling; Yan, Mingyu; Zeng, Yanjun

    2015-09-01

    Due to the absence of reliable biochemical markers, the diagnosis of chronic fatigue syndrome (CFS) currently relies mainly on clinical symptoms and the experience and skill of the doctors. To improve objectivity and reduce work intensity, a hybrid facial feature is proposed. First, several kinds of appearance features are identified in different facial regions according to the clinical observations of traditional Chinese medicine experts, including vertical striped wrinkles on the forehead, puffiness of the lower eyelid, the skin colour of the cheeks, nose and lips, and the shape of the mouth corner. Afterwards, such features are extracted and systematically combined to form a hybrid feature. We divide the face into several regions based on twelve active appearance model (AAM) feature points and ten straight lines across them. Then, Gabor wavelet filtering, CIELab color components, threshold-based segmentation, and curve fitting are applied to extract features, and the Gabor features are reduced by a manifold-preserving projection method. Finally, an AdaBoost-based score-level fusion of the multi-modal features is performed after classification of each feature. Although the subjects involved in this trial are exclusively Chinese, the method achieves an average accuracy of 89.04% on the training set and 88.32% on the testing set based on K-fold cross-validation. In addition, the method also possesses desirable sensitivity and specificity for CFS prediction.
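
    One ingredient, Gabor wavelet filtering of a facial region, is easy to sketch with OpenCV; the patch below is synthetic and the filter-bank parameters are assumptions:

        import cv2
        import numpy as np

        patch = np.random.randint(0, 256, (64, 128), np.uint8)   # stand-in forehead ROI
        responses = []
        for theta in np.arange(0, np.pi, np.pi / 4):             # four orientations
            # getGaborKernel(ksize, sigma, theta, lambda, gamma)
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
            responses.append(cv2.filter2D(patch, cv2.CV_32F, kern).mean())
        gabor_feature = np.array(responses)   # later reduced and fused with other features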

  13. Real-Time Tracking Framework with Adaptive Features and Constrained Labels

    PubMed Central

    Li, Daqun; Xu, Tingfa; Chen, Shuoyang; Zhang, Jizhou; Jiang, Shenwang

    2016-01-01

    This paper proposes a novel tracking framework with adaptive features and constrained labels (AFCL) to handle illumination variation, occlusion and appearance changes caused by the variation of positions. A novel ensemble classifier, including the Forward-Backward error and the location constraint, is applied to get the precise coordinates of the promising bounding boxes. The Forward-Backward error can enhance the adaptation and accuracy of the binary features, whereas the location constraint can overcome the label noise to a certain degree. We use a combiner which can evaluate the online templates and the outputs of the classifier to accommodate complex situations. Evaluation on the widely used tracking benchmark shows that the proposed framework can significantly improve the tracking accuracy and thus reduce the processing time. The proposed framework has been tested and implemented on an embedded system using TMS320C6416 and Cyclone III kernel processors. The outputs show that achievable and satisfactory results can be obtained. PMID:27618052

  14. Modeling resident error-making patterns in detection of mammographic masses using computer-extracted image features: preliminary experiments

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora

    2014-03-01

    Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.
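
    The trainee model reduces to a logistic regression from computer-extracted image features to error probability, scored by ROC analysis; a minimal scikit-learn sketch with synthetic reader data follows:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        X = np.random.randn(120, 6)         # computer-extracted features per case
        y = np.random.randint(0, 2, 120)    # 1 = trainee erred on this case

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print("AUC:", round(auc, 2))        # above 0.5 means better than chance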

  15. Automation of lidar-based hydrologic feature extraction workflows using GIS

    NASA Astrophysics Data System (ADS)

    Borlongan, Noel Jerome B.; de la Cruz, Roel M.; Olfindo, Nestor T.; Perez, Anjillyn Mae C.

    2016-10-01

    With the advent of LiDAR technology, higher resolution datasets have become available for use in different remote sensing and GIS applications. One significant application of LiDAR datasets in the Philippines is in resource feature extraction. Feature extraction using LiDAR datasets requires complex and repetitive workflows, which can consume considerable researcher time under manual execution and supervision. The Development of the Philippine Hydrologic Dataset for Watersheds from LiDAR Surveys (PHD), a project under the Nationwide Detailed Resources Assessment Using LiDAR (Phil-LiDAR 2) program, created a set of scripts, the PHD Toolkit, to automate the processes and workflows necessary for hydrologic feature extraction, specifically Streams and Drainages, Irrigation Network, and Inland Wetlands, using LiDAR datasets. These scripts are written in Python and can be added to the ArcGIS® environment as a toolbox. The toolkit is currently being used as an aid for researchers in hydrologic feature extraction by simplifying the workflows, reducing human error in providing the inputs, and providing quick and easy-to-use tools for repetitive tasks. This paper discusses the actual implementation of the different workflows developed by Phil-LiDAR 2 Project 4 for Streams, Irrigation Network and Inland Wetlands extraction.
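
    To illustrate the kind of workflow such a toolkit automates, the following is a minimal sketch of a stream-extraction script for the ArcGIS® environment, assuming the arcpy Spatial Analyst extension; the workspace path, layer names and flow-accumulation threshold are hypothetical, not taken from the PHD Toolkit.

        # Hypothetical stream-extraction workflow (arcpy with Spatial Analyst).
        import arcpy
        from arcpy import sa

        arcpy.CheckOutExtension("Spatial")
        arcpy.env.workspace = r"C:\data\phd_workspace.gdb"  # hypothetical path

        dem = sa.Raster("lidar_dtm")             # LiDAR-derived terrain model
        filled = sa.Fill(dem)                    # remove spurious sinks
        flow_dir = sa.FlowDirection(filled)      # D8 flow directions
        flow_acc = sa.FlowAccumulation(flow_dir)

        # Cells draining more than a chosen threshold become stream cells.
        streams = sa.Con(flow_acc > 5000, 1)
        sa.StreamToFeature(streams, flow_dir, "streams_extracted")

        arcpy.CheckInExtension("Spatial")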

  16. Robust Feature Extraction Using Variable Window Function in Autocorrelation Domain for Speech Recognition

    NASA Astrophysics Data System (ADS)

    Lee, Sangho; Ha, Jeonghyun; Hong, Jaekeun

    This paper presents a new feature extraction method for robust speech recognition based on autocorrelation mel frequency cepstral coefficients (AMFCCs) and a variable window. While the AMFCC feature extraction method uses the fixed double-dynamic-range (DDR) Hamming window for the higher-lag autocorrelation coefficients, which are least affected by noise, the proposed method applies a variable window depending on the frame energy and periodicity. The performance of the proposed method is verified on the Aurora-2 task, and the results confirm significantly improved performance under noisy conditions.

  17. Linear feature extraction from radar imagery: SBIR (Small Business Innovative Research), phase 2, option 2

    NASA Astrophysics Data System (ADS)

    Milgram, David L.; Kahn, Philip; Conner, Gary D.; Lawton, Daryl T.

    1988-12-01

    The goal of this effort is to develop and demonstrate prototype processing capabilities for a knowledge-based system to automatically extract and analyze features from Synthetic Aperture Radar (SAR) imagery. This effort constitutes Phase 2 funding through the Defense Small Business Innovative Research (SBIR) Program. Previous work examined the feasibility of and technology issues involved in the development of an automated linear feature extraction system. This final report documents this examination and the technologies involved in automating this image understanding task. In particular, it reports on a major software delivery containing an image processing algorithmic base, a perceptual structures manipulation package, a preliminary hypothesis management framework and an enhanced user interface.

  18. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  19. Automatic Extraction and Coordination of Audit Data and Features for Intrusion and Damage Assessment

    DTIC Science & Technology

    2006-03-31

    Final Project Report, 01/01/03-03/31/06. Title: Automatic Extraction and Coordination of Audit Data and Features for Intrusion and Damage Assessment. Grant number: F49620-03-1-0109. Author: Dr. Nong Ye. From the report summary: "... assessing the damage through automatic extraction and coordination of audit data and features for intrusion and damage assessment. In this project, we ..."

  20. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Astrophysics Data System (ADS)

    Li, Jian

    1994-09-01

    This report presents new algorithms for target detection, feature extraction, and image formation with synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  1. Transient signal analysis based on Levenberg-Marquardt method for fault feature extraction of rotating machines

    NASA Astrophysics Data System (ADS)

    Wang, Shibin; Cai, Gaigai; Zhu, Zhongkui; Huang, Weiguo; Zhang, Xingwu

    2015-03-01

    Localized faults in rotating machines tend to produce shocks and thus excite transient components in vibration signals. An iterative extraction method is proposed for transient signal analysis based on transient modeling and parameter identification by the Levenberg-Marquardt (LM) method, and ultimately for fault feature extraction. In each iteration, a double-side asymmetric transient model is first built from a parametric Morlet wavelet, and the LM method is then used to identify the parameters of the model. As the iterative procedure proceeds, transients are extracted from the vibration signal one by one, and the Wigner-Ville distribution is applied to obtain a time-frequency representation with satisfactory energy concentration but without cross-terms. A simulated signal is used to test the performance of the proposed method in transient extraction, and a comparison study shows that it outperforms ensemble empirical mode decomposition and spectral kurtosis in extracting transient features. Finally, the effectiveness of the proposed method is verified by applications in transient analysis for bearing and gear fault feature extraction.
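
    As an illustration of the parameter-identification step, the following is a simplified sketch that fits a single symmetric Morlet-type transient to a noisy signal with SciPy's Levenberg-Marquardt solver; the model form and parameter values are illustrative stand-ins for the paper's double-side asymmetric model.

        # Fit one Morlet-type transient with the LM solver (simplified,
        # symmetric model; parameters are illustrative).
        import numpy as np
        from scipy.optimize import least_squares

        def transient(t, A, f, t0, tau):
            # Damped oscillation localized around time t0.
            return A * np.exp(-((t - t0) / tau) ** 2) * np.cos(2 * np.pi * f * (t - t0))

        def residuals(p, t, x):
            return transient(t, *p) - x

        t = np.linspace(0, 1, 2000)
        x = transient(t, 1.0, 120.0, 0.3, 0.02) + 0.1 * np.random.randn(t.size)

        p0 = [0.5, 100.0, 0.25, 0.05]                  # rough initial guess
        fit = least_squares(residuals, p0, args=(t, x), method="lm")
        print("identified parameters:", fit.x)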

  2. Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Ponnaluru, Gopi Krishna

    2006-01-01

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique; features are any parameters extracted from the processed measurement data in order to enhance damage detection. The damage feature extraction capability was studied extensively by analyzing various simulation results. Of practical significance in structural health monitoring is the detection of small defects at an early stage, so the changes in the structure's response due to such small defects were determined in order to establish the level of accuracy needed in the experimental methods. Although a fine, extensive sensor network could in principle measure all the data required for detection, placing a large number of sensors on a structure is difficult in practice; therefore, an investigation was conducted using measurements from a coarse sensor network. White and pink noise, which together cover most of the frequency ranges typically encountered in common measuring devices (e.g., accelerometers and strain gauges), were added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and noisy damage parameter values were used to study signal feature reconstruction using wavelets, and wavelet theory successfully enhanced the feature extraction capability.

  3. Feature Extraction for Bearing Prognostics and Health Management (PHM) - A Survey (Preprint)

    DTIC Science & Technology

    2008-05-01

    ... typically consists of core functions such as anomaly detection, fault diagnosis, prognosis, and decision-making. Sensors in the sensing module of a ... used the three models for model-based bearing fault diagnosis, not explicitly for feature extraction purposes. Recent directions for model-based feature ... is another widely used frequency-domain technique for bearing fault diagnosis. Envelope analysis consists of two steps: band-pass filtering and
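
    The envelope analysis referred to in the truncated sentence above is conventionally completed by envelope demodulation followed by inspection of the envelope spectrum; a minimal sketch under that assumption, with a random stand-in signal and assumed band edges:

        # Envelope analysis sketch: band-pass filter, Hilbert envelope,
        # then the envelope spectrum (band edges and data are assumed).
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 12000.0                                 # sampling rate (assumed)
        t = np.arange(0, 1.0, 1 / fs)
        x = np.random.randn(t.size)                  # stand-in vibration signal

        b, a = butter(4, [2000 / (fs / 2), 4000 / (fs / 2)], btype="band")
        x_band = filtfilt(b, a, x)                   # resonance-band content

        envelope = np.abs(hilbert(x_band))           # demodulated envelope
        env_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
        # Peaks of env_spectrum at characteristic fault frequencies flag defects.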

  4. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient: without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology which allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates on data in situ: where it is stored and when it was computed.

  5. Feature extraction and integration underlying perceptual decision making during courtship behavior.

    PubMed

    Clemens, Jan; Ronacher, Bernhard

    2013-07-17

    Traditionally, perceptual decision making is studied in trained animals and carefully controlled tasks. Here, we sought to elucidate the stimulus features, and their combination, underlying a naturalistic behavior: female decision making during acoustic courtship in grasshoppers. Using behavioral data, we developed a model in which stimulus features were extracted from the time-varying stimulus by physiologically plausible models of sensory neurons. This sensory evidence was integrated over the stimulus duration and combined to predict the behavior. We show that decisions were determined by the interaction of an excitatory and a suppressive stimulus feature. The observed increase of the behavioral response with stimulus intensity was the result of an increase of the excitatory feature's gain that was not matched by an equivalent increase of the suppressive feature. Differences in how these two features were combined could explain interindividual variability. In addition, the mapping between the two stimulus features and different parameters of the song led us to re-evaluate the cues underlying acoustic communication. Our framework provided a rich and plausible explanation of behavior in terms of two stimulus cues that were extracted by models of sensory neurons and combined through excitatory-inhibitory interactions. We were thus able to link single-neuron feature selectivity and network computations with decision making in a natural task. This data-driven approach has the potential to advance our understanding of decision making in other systems and can inform the search for the neural correlates of behavior.

  6. A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search

    PubMed Central

    Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang

    2016-01-01

    The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexities is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented by an Application-Specific Integrated Circuit (ASIC) with 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy. PMID:27941631
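
    A minimal sketch of the two-area feature described above (peak search, then the area of each portion), in software rather than hardware and with a toy waveform:

        # Peak-search feature extraction: split a detected spike at its peak
        # and use the area of each portion as a feature.
        import numpy as np

        def spike_area_features(spike):
            peak = int(np.argmax(np.abs(spike)))          # locate the peak sample
            area_pre = np.sum(np.abs(spike[:peak + 1]))   # area up to the peak
            area_post = np.sum(np.abs(spike[peak + 1:]))  # area after the peak
            return area_pre, area_post

        spike = np.exp(-np.linspace(-1, 3, 64) ** 2)      # toy spike waveform
        print(spike_area_features(spike))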

  7. Computer-aided diagnosis of rheumatoid arthritis with optical tomography, Part 1: feature extraction.

    PubMed

    Montejo, Ludguier D; Jia, Jingfei; Kim, Hyun K; Netz, Uwe J; Blaschke, Sabine; Müller, Gerhard A; Hielscher, Andreas H

    2013-07-01

    This is the first part of a two-part paper on the application of computer-aided diagnosis to diffuse optical tomography (DOT). An approach for extracting heuristic features from DOT images and a method for using these features to diagnose rheumatoid arthritis (RA) are presented. Feature extraction is the focus of Part 1, while the utility of five classification algorithms is evaluated in Part 2. The framework is validated on a set of 219 DOT images of proximal interphalangeal (PIP) joints. Overall, 594 features are extracted from the absorption and scattering images of each joint. Three major findings are deduced. First, DOT images of subjects with RA are statistically different (p<0.05) from images of subjects without RA for over 90% of the features investigated. Second, DOT images of subjects with RA that do not have detectable effusion, erosion, or synovitis (as determined by MRI and ultrasound) are statistically indistinguishable from DOT images of subjects with RA that do exhibit effusion, erosion, or synovitis. Thus, this subset of subjects may be diagnosed with RA from DOT images while they would go undetected by reviews of MRI or ultrasound images. Third, scattering coefficient images yield better one-dimensional classifiers. A total of three features yield a Youden index greater than 0.8. These findings suggest that DOT may be capable of distinguishing between PIP joints that are healthy and those affected by RA with or without effusion, erosion, or synovitis.
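
    For reference, the Youden index quoted above is J = sensitivity + specificity - 1; a short sketch computing it for a single image feature treated as a one-dimensional classifier, with entirely synthetic labels and feature values:

        # Youden index of a one-dimensional classifier (synthetic data).
        import numpy as np
        from sklearn.metrics import roc_curve

        y = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])     # toy RA labels
        feature = np.array([0.2, 0.4, 0.3, 0.9, 0.8, 0.7, 0.95, 0.5, 0.6, 0.1])

        fpr, tpr, thresholds = roc_curve(y, feature)
        J = tpr - fpr                                    # Youden index per threshold
        best = np.argmax(J)
        print("max Youden index %.2f at threshold %.2f" % (J[best], thresholds[best]))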

  8. Consistent performance measurement of a system to detect masses in mammograms based on blind feature extraction

    PubMed Central

    2013-01-01

    Background Breast cancer continues to be a leading cause of cancer deaths among women, especially in Western countries. In the last two decades, many methods have been proposed to achieve a robust mammography-based computer aided detection (CAD) system. A CAD system should provide high performance over time and in different clinical situations; that is, the system should be adaptable to different clinical situations and should provide consistent performance. Methods We tested our system seeking a measure of the guarantee of its consistent performance. The method is based on blind feature extraction by independent component analysis (ICA) and classification by neural network (NN) or SVM classifiers. The test mammograms were from the Digital Database for Screening Mammography (DDSM). This database was constructed collaboratively by four institutions over more than 10 years. We took advantage of this to train our system on the mammograms from each institution separately, and then test it on the remaining mammograms. We performed another experiment to compare the results and thus obtain the measure sought: the learning sets were formed with all available prototypes regardless of the institution in which they were generated, thereby obtaining overall results. Results The smallest variation between the testing-set results of each experiment (training the system on the mammograms from one institution and testing on the remaining ones) and the overall results, considering the success rate at an intermediate decision-maker threshold, was roughly 5%, and the largest variation was roughly 17%. If we consider the area under the ROC curve instead, the smallest variation was close to 4% and the largest about 6%. Conclusions Considering the heterogeneity in the datasets used to train and test our system in each case, we think that the variation of performance obtained when the results are
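
    A minimal sketch of the blind feature extraction step, assuming ICA is learned on flattened ROI patches; the patch data here are random stand-ins, not DDSM images:

        # Blind feature extraction with ICA: learn independent basis images
        # from patches, then use mixing coefficients as the feature vector.
        import numpy as np
        from sklearn.decomposition import FastICA

        patches = np.random.rand(1000, 16 * 16)   # stand-in 16x16 ROI patches, flattened
        ica = FastICA(n_components=20, random_state=0)
        sources = ica.fit_transform(patches)      # per-patch independent components

        new_patch = np.random.rand(1, 16 * 16)
        features = ica.transform(new_patch)       # ICA coefficients -> NN/SVM input
        print(features.shape)                     # (1, 20)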

  9. Self-Adaptive MOEA Feature Selection for Classification of Bankruptcy Prediction Data

    PubMed Central

    Gaspar-Cunha, A.; Recio, G.; Costa, L.; Estébanez, C.

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance to creditors and investors when evaluating the likelihood that a company will go bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risk associated with counterparties, or predicting bankruptcy, becomes harder. Evolutionary algorithms have been shown to be an excellent tool for dealing with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises a classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The results obtained showed the utility of using self-adaptation of the classifier. PMID:24707201
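
    To make the two objectives concrete, here is a small sketch of the kind of evaluation function such a multiobjective evolutionary algorithm optimises: the feature count to minimise and a cross-validated accuracy to maximise. The data set and SVM classifier are stand-ins, not the paper's setup.

        # Bi-objective evaluation of one candidate feature mask.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 30))              # stand-in bankruptcy data
        y = rng.integers(0, 2, size=200)

        def objectives(mask, X, y):
            if not mask.any():
                return 0, 0.0                       # degenerate mask: no features
            acc = cross_val_score(SVC(), X[:, mask], y, cv=5).mean()
            return int(mask.sum()), float(acc)      # minimise first, maximise second

        mask = rng.random(30) < 0.5                 # one candidate individual
        print(objectives(mask, X, y))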

  10. Self-adaptive MOEA feature selection for classification of bankruptcy prediction data.

    PubMed

    Gaspar-Cunha, A; Recio, G; Costa, L; Estébanez, C

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance to creditors and investors when evaluating the likelihood that a company will go bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risk associated with counterparties, or predicting bankruptcy, becomes harder. Evolutionary algorithms have been shown to be an excellent tool for dealing with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises a classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The results obtained showed the utility of using self-adaptation of the classifier.

  11. Biological adaptations for functional features of language in the face of cultural evolution.

    PubMed

    Christiansen, Morten H; Reali, Florencia; Chater, Nick

    2011-04-01

    Although there may be no true language universals, it is nonetheless possible to discern several family resemblance patterns across the languages of the world. Recent work on the cultural evolution of language indicates the source of these patterns is unlikely to be an innate universal grammar evolved through biological adaptations for arbitrary linguistic features. Instead, it has been suggested that the patterns of resemblance emerge because language has been shaped by the brain, with individual languages representing different but partially overlapping solutions to the same set of nonlinguistic constraints. Here, we use computational simulations to investigate whether biological adaptation for functional features of language, deriving from cognitive and communicative constraints, may nonetheless be possible alongside rapid cultural evolution. Specifically, we focus on the Baldwin effect as an evolutionary mechanism by which previously learned linguistic features might become innate through natural selection across many generations of language users. The results indicate that cultural evolution of language does not necessarily prevent functional features of language from becoming genetically fixed, thus potentially providing a particularly informative source of constraints on cross-linguistic resemblance patterns.

  12. Application of computer-extracted breast tissue texture features in predicting false-positive recalls from screening mammography

    NASA Astrophysics Data System (ADS)

    Ray, Shonket; Choi, Jae Y.; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2014-03-01

    Mammographic texture features have been shown to have value in breast cancer risk assessment. Models have also been developed that use computer-extracted mammographic features of breast tissue complexity to predict the risk of false-positive (FP) recall from breast cancer screening with digital mammography. This work details a novel locally adaptive parenchymal texture analysis algorithm that identifies and extracts mammographic features of local parenchymal tissue complexity potentially relevant for false-positive biopsy prediction. The algorithm has two important aspects: (1) it adaptively determines an optimal number of regions of interest (ROIs) in the image, and each ROI's size, based on the parenchymal tissue distribution over the whole breast region; and (2) it characterizes both the local and the global mammographic appearance of the parenchymal tissue, which could provide more discriminative information for FP biopsy risk prediction. Preliminary results show that this locally adaptive texture analysis algorithm, in conjunction with logistic regression, can predict the likelihood of false-positive biopsy with an ROC performance of AUC = 0.92 (p < 0.001; 95% confidence interval [0.77, 0.94]). Significant texture feature predictors (p < 0.05) included contrast, sum variance and difference average. Sensitivity for false positives was 51% at the 100% cancer detection operating point. Although preliminary, the clinical implications of prediction models incorporating these texture features may include the future development of better tools and guidelines for personalized breast cancer screening recommendations. Further studies are warranted to prospectively validate our findings in larger screening populations and evaluate their clinical utility.
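
    The named predictors are standard gray-level co-occurrence matrix (GLCM) statistics; a short sketch computing one of them (contrast) for a stand-in ROI with scikit-image:

        # GLCM texture features for one ROI (stand-in data).
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in ROI
        glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)

        contrast = graycoprops(glcm, "contrast").mean()
        # Sum variance and difference average can be computed analogously from
        # the normalized co-occurrence matrix (not provided by graycoprops).
        print("contrast:", contrast)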

  13. Structural damage identification via a combination of blind feature extraction and sparse representation classification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Nagarajaiah, Satish

    2014-03-01

    This paper addresses two problems in structural damage identification: locating damage and assessing damage severity, both of which are incorporated into a classification framework based on the theory of sparse representation (SR) and compressed sensing (CS). The sparsity implied in the classification problem itself is exploited, establishing a sparse representation framework for damage identification. Specifically, the proposed method consists of two steps: feature extraction and classification. In the feature extraction step, the modal features of both the test structure and the reference structure model are first blindly extracted by the unsupervised complexity pursuit (CP) algorithm. Then, in the classification step, expressing the test modal feature as a linear combination of the bases of the over-complete reference feature dictionary (constructed by concatenating all modal features of all candidate damage classes) builds a highly underdetermined linear system of equations with an underlying sparse representation, which can be correctly recovered by ℓ1-minimization; the non-zero entries in the recovered sparse representation directly assign the damage class to which the test structure (feature) belongs. The two-step CP-SR damage identification method alleviates the training process required by traditional pattern-recognition-based methods. In addition, the reference feature dictionary can be kept small by formulating the issues of locating damage and assessing damage extent as a two-stage procedure and by taking advantage of the robustness of the SR framework. Numerical simulations and an experimental study are conducted to verify the developed CP-SR method. The problems of identifying multiple instances of damage and of using limited sensors and partial features, as well as the performance under heavy noise and random excitation, are investigated, and promising results are obtained.
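
    A compact sketch of the classification step under the sparse representation framework: the test feature is coded over a dictionary of class-wise reference features, with an l1-regularised regression standing in for the ℓ1-minimization, and the class is assigned by minimum reconstruction residual. Dimensions and data are toy values.

        # Sparse-representation classification over a class-wise dictionary.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        D = rng.normal(size=(50, 40))            # dictionary: 40 reference features
        labels = np.repeat(np.arange(8), 5)      # 8 damage classes, 5 atoms each
        x = D[:, 12] + 0.01 * rng.normal(size=50)   # test feature (class 2 atom)

        coef = Lasso(alpha=0.01, max_iter=10000).fit(D, x).coef_

        # Assign the class whose atoms best reconstruct the test feature.
        residuals = [np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
                     for c in range(8)]
        print("predicted damage class:", int(np.argmin(residuals)))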

  14. Linearly Supporting Feature Extraction for Automated Estimation of Stellar Atmospheric Parameters

    NASA Astrophysics Data System (ADS)

    Li, Xiangru; Lu, Yu; Comte, Georges; Luo, Ali; Zhao, Yongheng; Wang, Yongjun

    2015-05-01

    We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters T_eff, log g, and [Fe/H]. “Linearly supporting” means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed LARSbs method, a variant of least angle regression (LARS); third, estimate the atmospheric parameters T_eff, log g, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both the wavelength and the frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate T_eff, 62 features for log g, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log T_eff (83 K for T_eff), 0.2345 dex for log g, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log T_eff (32 K for T_eff), 0.0337 dex for log g, and 0.0268 dex for [Fe/H].
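
    As a rough illustration of the feature-detection step, a sketch selecting a fixed number of wavelet-coefficient features with least angle regression and fitting a linear model on them; the data are synthetic stand-ins, and the bootstrap element of LARSbs is omitted:

        # LARS-based feature selection followed by a linear estimator.
        import numpy as np
        from sklearn.linear_model import Lars, LinearRegression

        rng = np.random.default_rng(2)
        coeffs = rng.normal(size=(500, 300))          # WP coefficients per spectrum
        teff = coeffs[:, [10, 50, 200]] @ [3.0, -2.0, 1.5] + 0.1 * rng.normal(size=500)

        lars = Lars(n_nonzero_coefs=23).fit(coeffs, teff)   # 23 features, as for T_eff
        selected = np.flatnonzero(lars.coef_)               # detected feature indices

        model = LinearRegression().fit(coeffs[:, selected], teff)
        print("selected features:", selected[:10], "...")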

  15. Feature extraction for EEG-based brain-computer interfaces by wavelet packet best basis decomposition.

    PubMed

    Yang, Bang-hua; Yan, Guo-zheng; Yan, Rong-guo; Wu, Ting

    2006-12-01

    A method based on wavelet packet best basis decomposition (WPBBD) is investigated for extracting features from the electroencephalogram signals produced during motor imagery tasks in brain-computer interfaces. The method comprises the following three steps. (1) The original signals are decomposed by the wavelet packet transform (WPT), forming a wavelet packet library. (2) The best basis for classification is selected from the library. (3) The subband energies included in the best basis are used as effective features. Three different motor imagery tasks are discriminated using these features. The WPBBD produces a 70.3% classification accuracy, which is 4.2 percentage points higher than that of the existing wavelet packet method.
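
    A sketch of the decomposition and energy-feature steps using PyWavelets; the EEG epoch is a random stand-in, and the best-basis selection step (choosing the basis that best separates the classes) is omitted:

        # Wavelet packet decomposition of an EEG epoch; subband energies as features.
        import numpy as np
        import pywt

        eeg = np.random.randn(1024)                    # stand-in EEG epoch
        wp = pywt.WaveletPacket(eeg, wavelet="db4", maxlevel=4)

        nodes = wp.get_level(4, order="freq")          # terminal subbands
        energies = np.array([np.sum(node.data ** 2) for node in nodes])
        features = energies / energies.sum()           # normalised subband energies
        print(features.round(3))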

  16. Automatic geomorphic feature extraction from lidar in flat and engineered landscapes

    NASA Astrophysics Data System (ADS)

    Passalacqua, P.; Belmont, P.; Foufoula, E.

    2011-12-01

    High resolution topography derived from light detection and ranging (lidar) technology enables detailed geomorphic observations to be made on spatially extensive landforms in a way that was previously not possible. This provides new opportunities to study the spatial organization of landscapes and channel network features, increase the accuracy of environmental transport models and inform decisions for targeting conservation practices. However, with the opportunity of increased resolution topography data over large areas come formidable challenges in terms of automatic geomorphic feature extraction, analysis, and interpretation. This is particularly true in low relief landscapes since the topographic gradients are low and both the landscape and the channel network are often heavily modified by humans. Recently, a comprehensive framework was developed for the automatic extraction of geomorphic features (channel network, channel heads and channel morphology) from high resolution topographic data by combining nonlinear diffusion and geodesic minimization principles. The feature extraction method was packaged in a software called GeoNet (which is publicly available). In this talk, we focus on the application of GeoNet to a variety of landscapes, and, in particular, to flat and engineered landscapes where the method has been recently extended to perform automated channel morphometric analysis (including extraction of cross-sections, detection of bank locations, and identification of geomorphic bankfull water surface elevation) and to differentiate between natural channels and manmade structures (including artificial ditches, roads and bridges across channels).

  17. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596
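
    A pared-down sketch of the frequency-domain part of such a pipeline: discrete-wavelet-transform subband energies per beat feeding an SVM. The beats and labels are random stand-ins, and the kernel-independent component analysis stage and genetic-algorithm tuning of the SVM are omitted.

        # DWT subband-energy features per ECG beat, classified by an SVM.
        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def dwt_features(beat):
            coeffs = pywt.wavedec(beat, "db4", level=4)
            return np.hstack([np.sum(c ** 2) for c in coeffs])   # subband energies

        rng = np.random.default_rng(3)
        beats = rng.normal(size=(300, 250))            # stand-in segmented beats
        labels = rng.integers(0, 5, size=300)          # five beat classes

        X = np.array([dwt_features(b) for b in beats])
        clf = SVC(C=10.0, gamma="scale").fit(X, labels)   # a GA would tune C, gamma
        print(clf.score(X, labels))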

  18. Automated hand thermal image segmentation and feature extraction in the evaluation of rheumatoid arthritis.

    PubMed

    Snekhalatha, U; Anburajan, M; Sowmiya, V; Venkatraman, B; Menaka, M

    2015-04-01

    The aims of the study were (1) to perform automated segmentation of hot spot regions of the hand from thermographs using the k-means algorithm and (2) to test the potential of features extracted from hand thermographs, and of measured skin temperature indices, in the evaluation of rheumatoid arthritis. Thermal image analysis based on skin temperature measurement, the heat distribution index and the thermographic index was performed for rheumatoid arthritis patients and controls. The k-means algorithm was used for image segmentation, and features were extracted from the segmented output image using the gray-level co-occurrence matrix method. In the metacarpo-phalangeal, proximal inter-phalangeal and distal inter-phalangeal regions, the calculated percentage difference in the mean values of skin temperature was found to be higher in rheumatoid arthritis patients (5.3%, 4.9% and 4.8% in the MCP3, PIP3 and DIP3 joints, respectively) than in the normal group. The k-means algorithm applied to the thermal images provided better segmentation results for evaluating the disease. In the total population studied, the measured mean average skin temperature of the MCP3 joint was highly correlated with most of the extracted features of the hand, and the statistical feature parameters correlated significantly with skin surface temperature measurements and the measured temperature indices. Hence, the developed MATLAB-based computer-aided diagnostic tool could be used as a reliable method for diagnosing and analyzing arthritis in hand thermal images.
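
    A minimal sketch of the segmentation step: k-means clustering of pixel temperatures so that the hottest cluster can be analysed as the hot-spot region (image and cluster count are stand-ins):

        # k-means segmentation of a thermogram into temperature clusters.
        import numpy as np
        from sklearn.cluster import KMeans

        thermal = np.random.rand(120, 160)             # stand-in thermal image
        pixels = thermal.reshape(-1, 1)

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
        segmented = km.labels_.reshape(thermal.shape)

        hot_cluster = int(np.argmax(km.cluster_centers_))   # highest mean temperature
        hot_mask = segmented == hot_cluster                 # hot-spot region mask
        print("hot-spot pixels:", int(hot_mask.sum()))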

  19. Computerized lung nodule detection using 3D feature extraction and learning based algorithms.

    PubMed

    Ozekes, Serhat; Osman, Onur

    2010-04-01

    In this paper, a Computer Aided Detection (CAD) system based on three-dimensional (3D) feature extraction is introduced to detect lung nodules. First, an eight-directional search was applied to extract regions of interest (ROIs). Then, 3D feature extraction was performed, which includes 3D connected component labeling, straightness calculation, thickness calculation, determination of the middle slice, vertical and horizontal width calculation, regularity calculation, and calculation of vertical and horizontal black pixel ratios. To make a decision for each ROI, feed-forward neural networks (NN), support vector machines (SVM), naive Bayes (NB) and logistic regression (LR) methods were used. These methods were trained and tested via k-fold cross validation, and the results were compared. To test the performance of the proposed system, 11 cases taken from the Lung Image Database Consortium (LIDC) dataset were used. ROC curves were given for all methods, and 100% detection sensitivity was reached by all except naive Bayes.
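
    A short sketch of the 3D connected-component labeling step: grouping candidate voxels across slices and measuring per-component extents, from which features such as thickness and the two widths can be derived (the voxel mask is a random stand-in):

        # 3D connected components of a candidate-voxel mask with SciPy.
        import numpy as np
        from scipy import ndimage

        mask = np.random.rand(20, 64, 64) > 0.95     # stand-in candidate voxels
        labels, n = ndimage.label(mask)              # 3D connected components
        print(n, "components")

        for obj in ndimage.find_objects(labels)[:5]:
            thickness = obj[0].stop - obj[0].start   # extent in slices
            v_width = obj[1].stop - obj[1].start     # vertical width
            h_width = obj[2].stop - obj[2].start     # horizontal width
            print(thickness, v_width, h_width)       # inputs to regularity features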

  20. Extraction of Subsurface Features from InSAR-Derived Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Xiong, Siting; Muller, Jan-Peter

    2015-05-01

    Microwave remote sensing has the potential to be a beneficial tool for detecting and analysing subsurface features in desert areas, owing to its ability to penetrate hyperarid terrain with extremely low loss and low bulk humidity. Global Digital Elevation Models (DEMs) at resolutions up to 30 m are now publicly available, some of which reveal subsurface features over these hyperarid areas. This study compares the elevations detected by different EO microwave and lidar profilers and demonstrates their effectiveness for extracting subsurface features, compared with those delineated in an ALOS/PALSAR polarisation map. Results show that the SRTM-C DEM agrees closely with ICESat elevations and clearly shows paleoriver features, some of which cannot be observed in ALOS/PALSAR images affected by background backscatter. However, crater-like features are more recognisable in ALOS/PALSAR images than in the SRTM-C DEM.

  1. A Novel Hyperspectral Feature-Extraction Algorithm Based on Waveform Resolution for Raisin Classification.

    PubMed

    Zhao, Yun; Xu, Xing; He, Yong

    2015-12-01

    Near-infrared hyperspectral imaging technology was adopted in this study to discriminate among varieties of raisins produced in the Xinjiang Uygur Autonomous Region, China. Eight varieties of raisins were used in the research, and the wavelengths of the hyperspectral images ranged from 900 to 1700 nm. A novel waveform resolution method is proposed to reduce the hyperspectral data and extract features. The waveform-resolution method compresses the original hyperspectral data for one pixel into five amplitudes, five frequencies, and five phases, for 15 feature values in all. A neural network with three layers (eight neurons in the input layer, three in the hidden layer, and one in the output layer) was established, based on the 15 features, to determine the varieties of raisins. The accuracies of the model for the testing data set, expressed as sensitivity, precision, and specificity, are 93.38%, 81.92%, and 99.06%. These are higher than the accuracies of a model combining a conventional principal component analysis feature-extraction method with a neural network, which has a sensitivity of 82.13%, precision of 82.22%, and specificity of 97.45%. The results indicate that the proposed waveform-resolution feature-extraction method combined with hyperspectral imaging technology is an efficient method for determining varieties of raisins.
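
    A sketch of one plausible reading of such a reduction: keep the five strongest Fourier components of a pixel's spectrum as amplitude/frequency/phase triplets, giving 15 feature values. This is a simplified interpretation, not the paper's exact formulation, and the spectrum is a random stand-in.

        # Reduce a pixel spectrum to 5 (amplitude, frequency, phase) triplets.
        import numpy as np

        spectrum = np.random.rand(256)               # stand-in pixel spectrum
        F = np.fft.rfft(spectrum)
        k = np.argsort(np.abs(F))[-5:]               # 5 dominant component indices

        amplitudes = np.abs(F[k])
        frequencies = k                              # component index as frequency
        phases = np.angle(F[k])
        features = np.hstack([amplitudes, frequencies, phases])   # 15 values
        print(features.shape)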

  2. Feature Extraction from Simulations and Experiments: Preliminary Results Using a Fluid Mix Problem

    SciTech Connect

    Kamath, C; Nguyen, T

    2005-01-04

    Code validation, or comparing the output of computer simulations to experiments, is necessary to determine which simulation is a better approximation to an experiment. It can also be used to determine how the input parameters in a simulation can be modified to yield output that is closer to the experiment. In this report, we discuss our experiences in the use of image processing techniques for extracting features from 2-D simulations and experiments. These features can be used in comparing the output of simulations to experiments, or to other simulations. We first describe the problem domain and the data. We next explain the need for cleaning or denoising the experimental data and discuss the performance of different techniques. Finally, we discuss the features of interest and describe how they can be extracted from the data. The focus in this report is on extracting features from experimental and simulation data for the purpose of code validation; the actual interpretation of these features and their use in code validation is left to the domain experts.

  3. Features extraction of EMG signal using time domain analysis for arm rehabilitation device

    NASA Astrophysics Data System (ADS)

    Jali, Mohd Hafiz; Ibrahim, Iffah Masturah; Sulaima, Mohamad Fani; Bukhari, W. M.; Izzuddin, Tarmizi Ahmad; Nasir, Mohamad Na'im

    2015-05-01

    A rehabilitation device serves as an exoskeleton for people who have lost the function of a limb. An arm rehabilitation device can support the rehabilitation program of those who suffer from arm disability. The device used to facilitate the tasks of the program should improve the electrical activity in the motor unit and minimize the mental effort of the user. Electromyography (EMG) is the technique for analyzing the presence of electrical activity in musculoskeletal systems. In a disabled person, the electrical activity in the muscles fails to contract the muscle for movement. To prevent paralyzed muscles from developing spasticity, the movements should require minimal mental effort. Therefore, the design of the rehabilitation device should be based on analysis of the surface EMG signals of able-bodied people, which can then be implemented in the device. The signals were collected according to the procedure of surface electromyography for non-invasive assessment of muscles (SENIAM). The EMG signal is used to set the movement patterns of the arm rehabilitation device. The filtered EMG signal was processed to extract the time-domain features of Standard Deviation (STD), Mean Absolute Value (MAV) and Root Mean Square (RMS). Extracting these features from the EMG data is important to obtain a reduced feature vector with less error. To determine the best features for any movement, several extraction trials were performed and the features with the smallest errors were selected. The accurate features can be used in future work on real-time rehabilitation control.
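
    The three time-domain features named above are simple to state; a minimal sketch computing them over a sliding window of a (stand-in) filtered EMG signal:

        # STD, MAV and RMS features over sliding windows of an EMG signal.
        import numpy as np

        def emg_features(window):
            mav = np.mean(np.abs(window))            # Mean Absolute Value
            rms = np.sqrt(np.mean(window ** 2))      # Root Mean Square
            std = np.std(window)                     # Standard Deviation
            return mav, rms, std

        emg = np.random.randn(2000)                  # stand-in filtered EMG
        win, step = 200, 100
        feats = [emg_features(emg[i:i + win]) for i in range(0, emg.size - win, step)]
        print(np.array(feats).shape)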

  4. A Generic multi-dimensional feature extraction method using multiobjective genetic programming.

    PubMed

    Zhang, Yang; Rockett, Peter I

    2009-01-01

    In this paper, we present a generic feature extraction method for pattern classification using multiobjective genetic programming. This not only evolves the (near-)optimal set of mappings from a pattern space to a multi-dimensional decision space, but also simultaneously optimizes the dimensionality of that decision space. The presented framework evolves vector-to-vector feature extractors that maximize class separability. We demonstrate the efficacy of our approach by making statistically-founded comparisons with a wide variety of established classifier paradigms over a range of datasets and find that for most of the pairwise comparisons, our evolutionary method delivers statistically smaller misclassification errors. At very worst, our method displays no statistical difference in a few pairwise comparisons with established classifier/dataset combinations; crucially, none of the misclassification results produced by our method is worse than any comparator classifier. Although principally focused on feature extraction, feature selection is also performed as an implicit side effect; we show that both feature extraction and selection are important to the success of our technique. The presented method has the practical consequence of obviating the need to exhaustively evaluate a large family of conventional classifiers when faced with a new pattern recognition problem in order to attain a good classification accuracy.

  5. Age-Related Changes of Adaptive and Neuropsychological Features in Persons with Down Syndrome

    PubMed Central

    Ghezzo, Alessandro; Salvioli, Stefano; Solimando, Maria Caterina; Palmieri, Alice; Chiostergi, Chiara; Scurti, Maria; Lomartire, Laura; Bedetti, Federica; Cocchi, Guido; Follo, Daniela; Pipitone, Emanuela; Rovatti, Paolo; Zamberletti, Jessica; Gomiero, Tiziano; Castellani, Gastone; Franceschi, Claudio

    2014-01-01

    Down Syndrome (DS) is characterised by premature aging and an accelerated decline of cognitive functions in the vast majority of cases. As the life expectancy of DS persons is rapidly increasing, this decline is becoming a dramatic health problem. The aim of this study was to thoroughly evaluate a group of 67 non-demented persons with DS of different ages (11 to 66 years) from a neuropsychological, neuropsychiatric and psychomotor point of view, in order to assess in a cross-sectional study the age-related adaptive and neuropsychological features, and to possibly identify early signs predictive of cognitive decline. The main finding of this study is that both neuropsychological functions and adaptive skills are lower in adult DS persons over 40 years old, compared to younger ones. In particular, language and short memory skills, frontal lobe functions, visuo-spatial abilities and adaptive behaviour appear to be the more affected domains. The main features found in the older, non-demented DS persons evaluated in our study are a growing deficit in verbal comprehension, along with social isolation, loss of interest and greater fatigue in daily tasks. It is proposed that these signs can be alarm bells for incipient dementia, and that neuro-cognitive rehabilitation and psycho-pharmacological interventions must start as early as the fourth decade (or even earlier) in DS persons, i.e. at an age where interventions can have the greatest efficacy. PMID:25419980

  6. Age-related changes of adaptive and neuropsychological features in persons with Down Syndrome.

    PubMed

    Ghezzo, Alessandro; Salvioli, Stefano; Solimando, Maria Caterina; Palmieri, Alice; Chiostergi, Chiara; Scurti, Maria; Lomartire, Laura; Bedetti, Federica; Cocchi, Guido; Follo, Daniela; Pipitone, Emanuela; Rovatti, Paolo; Zamberletti, Jessica; Gomiero, Tiziano; Castellani, Gastone; Franceschi, Claudio

    2014-01-01

    Down Syndrome (DS) is characterised by premature aging and an accelerated decline of cognitive functions in the vast majority of cases. As the life expectancy of DS persons is rapidly increasing, this decline is becoming a dramatic health problem. The aim of this study was to thoroughly evaluate a group of 67 non-demented persons with DS of different ages (11 to 66 years) from a neuropsychological, neuropsychiatric and psychomotor point of view, in order to assess in a cross-sectional study the age-related adaptive and neuropsychological features, and to possibly identify early signs predictive of cognitive decline. The main finding of this study is that both neuropsychological functions and adaptive skills are lower in adult DS persons over 40 years old, compared to younger ones. In particular, language and short memory skills, frontal lobe functions, visuo-spatial abilities and adaptive behaviour appear to be the more affected domains. The main features found in the older, non-demented DS persons evaluated in our study are a growing deficit in verbal comprehension, along with social isolation, loss of interest and greater fatigue in daily tasks. It is proposed that these signs can be alarm bells for incipient dementia, and that neuro-cognitive rehabilitation and psycho-pharmacological interventions must start as early as the fourth decade (or even earlier) in DS persons, i.e. at an age where interventions can have the greatest efficacy.

  7. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain a new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
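
    A minimal sketch of the two-step pipeline: 2-D DWT to obtain the HH subband, Gabor filtering at a few frequencies and orientations, then entropy and uniformity of each filtered image as classifier inputs. The image, wavelet and filter-bank settings are illustrative stand-ins.

        # HH subband via 2-D DWT, then Gabor filtering and texture statistics.
        import numpy as np
        import pywt
        from skimage.filters import gabor

        image = np.random.rand(128, 128)             # stand-in biomedical image
        _, (_, _, hh) = pywt.dwt2(image, "db2")      # HH high-frequency subband

        features = []
        for freq in (0.1, 0.3):
            for theta in (0, np.pi / 4, np.pi / 2):
                real, _ = gabor(hh, frequency=freq, theta=theta)
                hist, _ = np.histogram(real, bins=32)
                p = hist / hist.sum()
                p = p[p > 0]
                features += [-np.sum(p * np.log2(p)),    # entropy
                             np.sum(p ** 2)]             # uniformity (energy)
        print(len(features))                             # feature vector -> SVM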

  8. New feature extraction approach for epileptic EEG signal detection using time-frequency distributions.

    PubMed

    Guerrero-Mosquera, Carlos; Trigueros, Armando Malanda; Franco, Jorge Iriarte; Navia-Vázquez, Angel

    2010-04-01

    This paper describes a new method to identify seizures in electroencephalogram (EEG) signals using feature extraction in time-frequency distributions (TFDs). In particular, the method extracts features from the smoothed pseudo Wigner-Ville distribution using tracks estimated with the McAulay-Quatieri sinusoidal model. The proposed features are the length, frequency, and energy of the principal track. We evaluate the proposed scheme on several datasets, computing sensitivity, specificity, F-score, the receiver operating characteristic (ROC) curve, and percentile bootstrap confidence intervals, and conclude that the proposed scheme generalizes well and is a suitable approach for automatic seizure detection at a moderate cost, also opening the possibility of formulating new criteria to detect, classify or analyze abnormal EEGs.

  9. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images.

    PubMed

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain a new Gabor-filtered image whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction.

  10. Graph theory for feature extraction and classification: a migraine pathology case study.

    PubMed

    Jorge-Hernandez, Fernando; Garcia Chimeno, Yolanda; Garcia-Zapirain, Begonya; Cabrera Zubizarreta, Alberto; Gomez Beldarrain, Maria Angeles; Fernandez-Ruanova, Begonya

    2014-01-01

    Graph theory is widely used as a representational form and characterization of brain connectivity networks, as is machine learning for classifying groups based on the features extracted from images. Many of these studies use different techniques, such as preprocessing, correlations, features or algorithms. This paper proposes an automatic tool that performs a standard process on Magnetic Resonance Imaging (MRI) images. The process includes pre-processing, building a graph per subject with different correlations and atlases, extracting relevant features according to the literature, and finally providing a set of machine learning algorithms which can produce analyzable results for physicians or specialists. In order to verify the process, a set of images from prescription drug abusers and patients with migraine was used. In this way, the proper functioning of the tool was demonstrated, providing success rates of 87% and 92%, depending on the classifier used.
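
    A small sketch of the graph-building and feature-extraction idea, assuming a per-subject correlation matrix thresholded into an adjacency matrix; the threshold and the chosen metrics are illustrative, not the paper's exact configuration:

        # Build a connectivity graph and extract graph-theoretic features.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(4)
        C = rng.random((90, 90))
        C = (C + C.T) / 2                            # stand-in correlation matrix
        A = (C > 0.7).astype(int)                    # threshold into adjacency
        np.fill_diagonal(A, 0)

        G = nx.from_numpy_array(A)
        features = [
            nx.average_clustering(G),                # local segregation
            nx.density(G),                           # overall connectedness
            np.mean([d for _, d in G.degree()]),     # mean degree
        ]
        print(features)                              # inputs to the classifiers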

  11. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis.

    PubMed

    Toulminet, Gwenaëlle; Bertozzi, Massimo; Mousset, Stéphane; Bensrhair, Abdelaziz; Broggi, Alberto

    2006-08-01

    This paper presents a stereo vision system for the detection and distance computation of a preceding vehicle. It is divided into two major steps. Initially, a stereo vision-based algorithm is used to extract relevant three-dimensional (3-D) features in the scene; these features are then investigated further to select those that belong to vertical objects only, and not to the road or background. These 3-D vertical features are used as a starting point for preceding-vehicle detection: using a symmetry operator, a match against a simplified model of a rear vehicle's shape is performed with a monocular vision-based approach that allows the identification of a preceding vehicle. In addition, using the 3-D information previously extracted, an accurate distance computation is performed.

  12. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured under different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image.
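
    A brief sketch of the region-wise, variance-weighted texture description, using the standard local binary pattern from scikit-image as a stand-in for the paper's enhanced LBP and omitting the PCA step; the grid size and parameters are illustrative:

        # Variance-weighted LBP histograms per face sub-region.
        import numpy as np
        from skimage.feature import local_binary_pattern

        face = np.random.rand(96, 96)                # stand-in aligned face image
        blocks = [face[i:i + 24, j:j + 24]           # 4x4 grid of sub-regions
                  for i in range(0, 96, 24) for j in range(0, 96, 24)]

        descriptor = []
        for block in blocks:
            lbp = local_binary_pattern(block, P=8, R=1, method="uniform")
            hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            descriptor.append(np.var(block) * hist)  # variance-weighted region feature
        descriptor = np.hstack(descriptor)
        print(descriptor.shape)                      # final face descriptor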

  13. Unsupervised boundary delineation of spinal neural foramina using a multi-feature and adaptive spectral segmentation.

    PubMed

    He, Xiaoxu; Zhang, Heye; Landis, Mark; Sharma, Manas; Warrington, James; Li, Shuo

    2017-02-01

    As a common disease in the elderly, neural foramina stenosis (NFS) has a significantly negative impact on quality of life due to its symptoms, including pain, disability, fall risk and depression. Accurate boundary delineation is essential to the clinical diagnosis and treatment of NFS. However, the existing clinical routine is extremely tedious and inefficient due to the intensive manual delineation required of physicians. Automated delineation is highly needed but faces big challenges from the complexity and variability of neural foramina images. In this paper, we propose a purely image-driven, unsupervised boundary delineation framework for automated neural foramina boundary delineation. The framework is based on a novel multi-feature and adaptive spectral segmentation (MFASS) algorithm. MFASS first utilizes the combination of region and edge features to generate reliable spectral features with a good separation between neural foramina and their surroundings, and then estimates an optimal separation threshold for each individual image to separate neural foramina from their surroundings. This self-adjusted optimal separation threshold, estimated from the spectral features, successfully accommodates the diverse appearance and shape variations. With robustness from the multi-feature fusion and flexibility from the adaptive estimation of the optimal separation threshold, the proposed MFASS-based framework provides automated and accurate boundary delineation. Validation was performed on 280 neural foramina MR images from 56 clinical subjects, benchmarked against manual boundaries produced by experienced physicians. Results demonstrate that the proposed method achieves high and stable consistency with experienced physicians (Dice: 90.58% ± 2.79%; SMAD: 0.5657 ± 0.1544 mm). The proposed framework therefore enables an efficient and accurate clinical tool for the diagnosis of neural foramina stenosis.

  14. SU-E-J-245: Sensitivity of FDG PET Feature Analysis in Multi-Plane Vs. Single-Plane Extraction

    SciTech Connect

    Harmon, S; Jeraj, R; Galavis, P

    2015-06-15

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as percent difference relative to the average value across reconstructions. Correlations between feature values were compared when using 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs. 3D (Wilcoxon, α < 0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and gray-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R < 0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with |R| > 0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ = 6%) than in 3D (σ < 1%) extraction. Conclusion: Sensitivity

  15. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.

  16. Topology adaptive vessel network skeleton extraction with novel medialness measuring function.

    PubMed

    Zhu, Wen-Bo; Li, Bin; Tian, Lian-Fang; Li, Xiang-Xia; Chen, Qing-Lin

    2015-09-01

    Vessel tree skeleton extraction is widely applied in vascular structure segmentation; however, conventional approaches often suffer from adjacent interferences and poor topological adaptability. To avoid these problems, a robust, topology-adaptive skeleton extraction framework for tree-like structures is proposed in this paper. Specifically, to avoid adjacent interferences, a local message-passing procedure called Gaussian affinity voting (GAV) is proposed to realize adaptive scale-growing of vessel voxels. A medialness measuring function (MMF) based on GAV, namely GAV-MMF, is then constructed to extract medialness patterns robustly. To improve topological adaptability, a level-set graph embedded with GAV-MMF is employed to build initial curve skeletons without any user interaction. Furthermore, GAV-MMF is embedded in stretching open active contours (SOAC) to drive the initial curves to the expected location while maintaining smoothness and continuity. In addition, to provide an accurate and smooth final skeleton tree topology, topological checks and skeleton network reconfiguration are proposed. The continuity and scalability of the method are validated experimentally on synthetic and clinical images of multi-scale vessels. Experimental results show that the proposed method achieves acceptable topological adaptability for skeleton extraction of vessel trees.

  17. [Feature extraction of motor imagery electroencephalography based on time-frequency-space domains].

    PubMed

    Wang, Yueru; Li, Xin; Li, Honghong; Shao, Chengcheng; Ying, Lijuan; Wu, Shuicai

    2014-10-01

    The purpose of a brain-computer interface (BCI) is to build a bridge between brain and computer for disabled persons, to help them communicate with the outside world. Electroencephalography (EEG) has a low signal-to-noise ratio (SNR), and traditional methods for EEG feature extraction suffer from problems such as low classification accuracy, lack of spatial information, and huge numbers of features. To solve these problems, we proposed a new method based on the time, frequency, and space domains. In this study, independent component analysis (ICA) and the wavelet transform were used to extract temporal, spectral, and spatial features from the original EEG signals, and the extracted features were then classified with a method combining a support vector machine (SVM) with a genetic algorithm (GA). The proposed method displayed better classification performance, reaching a mean accuracy of 96% on the Graz datasets from the 2003 BCI Competition. The classification results showed that the proposed three-domain method could effectively overcome the drawbacks of traditional methods based solely on the time-frequency domain when EEG signals are used to characterize brain electrical activity.
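
    As a rough illustration of the three-domain pipeline (spatial sources via ICA, time-frequency energies via the wavelet transform, SVM classification), here is a minimal Python sketch on synthetic trials. The GA-based tuning of the SVM parameters and all dataset-specific details are omitted, and every name and parameter below is an assumption, not the authors' code.

      import numpy as np
      import pywt
      from sklearn.decomposition import FastICA
      from sklearn.svm import SVC

      def trial_features(trial, n_components=4, wavelet="db4", level=4):
          # trial: (n_channels, n_samples); ICA recovers spatial sources,
          # log sub-band energies of each source give time-frequency features
          sources = FastICA(n_components=n_components, random_state=0,
                            max_iter=1000).fit_transform(trial.T).T
          feats = []
          for s in sources:
              for c in pywt.wavedec(s, wavelet, level=level):
                  feats.append(np.log(np.sum(c ** 2) + 1e-12))
          return np.array(feats)

      # hypothetical data: 40 trials, 8 channels, 512 samples, binary labels
      rng = np.random.default_rng(0)
      X = np.array([trial_features(rng.standard_normal((8, 512)))
                    for _ in range(40)])
      y = rng.integers(0, 2, 40)
      clf = SVC(kernel="rbf").fit(X, y)  # GA search over C/gamma omitted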

  18. Bayesian Nonnegative CP Decomposition-based Feature Extraction Algorithm for Drowsiness Detection.

    PubMed

    Qian, Dong; Wang, Bei; Qing, Yun; Zhang, Tao; Zhang, Yu; Wang, Xing; Nakamura, Masatoshi

    2016-10-19

    A daytime short nap involves physiological processes such as alertness, drowsiness and sleep. Studying the relationship between drowsiness and naps based on physiological signals is a good way to better understand the periodic rhythms of physiological states. A model of Bayesian nonnegative CP decomposition (BNCPD) was proposed to extract common multiway features from group-level electroencephalogram (EEG) signals. As an extension of the nonnegative CP decomposition, the BNCPD model involves prior distributions over the factor matrices, while the underlying CP rank can be determined automatically via a Bayesian nonparametric approach. For computational speed, variational inference was applied to approximate the posterior distributions of the unknowns. Extensive simulations on synthetic data illustrated the capability of the model to recover the true CP rank. As a real-world application, the performance of drowsiness detection during daytime short naps using BNCPD-based features was compared with that of other traditional feature extraction methods. Experimental results indicated that the BNCPD model outperformed the other methods in terms of two evaluation metrics and across different parameter settings. Our approach is likely to be a useful tool for automatic CP rank determination and for providing plausible multiway physiological information about individual states.
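
    The Bayesian machinery of BNCPD (priors on the factor matrices, nonparametric rank inference, variational updates) is not available in standard libraries, but the underlying multiway idea can be shown with a plain non-negative CP decomposition from TensorLy, with the rank fixed by hand rather than inferred. The tensor dimensions below are illustrative assumptions.

      import numpy as np
      import tensorly as tl
      from tensorly.decomposition import non_negative_parafac

      # hypothetical group-level EEG tensor: subjects x channels x frequency bins
      rng = np.random.default_rng(0)
      X = tl.tensor(rng.random((12, 16, 30)))

      # plain non-negative CP with a hand-picked rank; BNCPD instead places
      # priors on the factor matrices and infers the rank automatically
      weights, (subj, chan, freq) = non_negative_parafac(X, rank=3, n_iter_max=200)
      print(subj.shape, chan.shape, freq.shape)  # (12, 3) (16, 3) (30, 3)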

  19. Application of Wavelet Analysis in EMG Feature Extraction for Pattern Classification

    NASA Astrophysics Data System (ADS)

    Phinyomark, A.; Limsakul, C.; Phukpattaranont, P.

    2011-01-01

    Analysis of the electromyography (EMG) signal using the wavelet transform is one of the most powerful signal processing approaches and is widely used in EMG recognition systems. In this study, we investigated the usefulness of extracting EMG features from multiple-level wavelet decompositions of the EMG signal. Different levels of various mother wavelets were used to obtain useful resolution components from the EMG signal. The optimal EMG resolution component (sub-signal) was selected and the useful information signal was then reconstructed, eliminating noise and unwanted EMG parts in the process. From the estimated EMG signal, i.e., the effective EMG part, popular features such as mean absolute value and root mean square were extracted in order to improve the quality of class separability. Two evaluation criteria were used: the ratio of a Euclidean distance to a standard deviation, and the scatter graph. The results show that only the EMG features extracted from reconstructed EMG signals of the first-level and second-level detail coefficients yield improved class separability in feature space, which helps ensure that pattern classification accuracy will be as high as possible. The optimal wavelet decomposition was obtained using the seventh-order Daubechies wavelet and a fourth-level wavelet decomposition.
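
    A minimal sketch of the reconstruction-then-feature step with PyWavelets, assuming the db7 wavelet and four-level decomposition the study favors: the signal is rebuilt from the first- and second-level detail coefficients only, and MAV and RMS are computed from the estimate. The function name and synthetic input are illustrative.

      import numpy as np
      import pywt

      def emg_features(signal, wavelet="db7", level=4, keep=("d1", "d2")):
          """Reconstruct the EMG from selected detail levels, then compute
          mean absolute value (MAV) and root mean square (RMS)."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          # coeffs = [A4, D4, D3, D2, D1]; zero everything except the kept details
          names = ["a%d" % level] + ["d%d" % k for k in range(level, 0, -1)]
          kept = [c if n in keep else np.zeros_like(c)
                  for n, c in zip(names, coeffs)]
          est = pywt.waverec(kept, wavelet)[:len(signal)]
          mav = np.mean(np.abs(est))
          rms = np.sqrt(np.mean(est ** 2))
          return mav, rms

      rng = np.random.default_rng(0)
      print(emg_features(rng.standard_normal(1024)))  # hypothetical EMG segment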

  20. Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos

    NASA Astrophysics Data System (ADS)

    Miao, X.; Xie, H.

    2015-12-01

    High-resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features such as melt ponds, submerged ice, water, ice/snow, and pressure ridges is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) a polygon neighbor analysis separates melt ponds from submerged ice based on spatial relationships; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of aerial photos, and their uncertainties are estimated.

  1. Dynamic-Feature Extraction, Attribution and Reconstruction (DEAR) Method for Power System Model Reduction

    SciTech Connect

    Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.

    2014-09-04

    In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving a reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than traditional coherency aggregation methods.
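
    A toy NumPy sketch of the three DEAR steps under simplifying assumptions: SVD extracts dominant temporal features from simulated responses, a greedy match (standing in for the paper's attribution procedure) picks characteristic generators, and least squares reconstructs the rest as linear combinations. The data and selection rule are illustrative, not the paper's.

      import numpy as np

      rng = np.random.default_rng(0)
      # hypothetical measurements: rotor-angle deviations of 20 external
      # generators sampled at 500 instants after a disturbance
      X = rng.standard_normal((20, 500)).cumsum(axis=1)

      # feature extraction: leading singular vectors = dominant dynamics
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      k = 3
      features = Vt[:k]                        # dominant temporal features

      # feature attribution: pick k generators whose responses best match
      # the extracted features (a greedy stand-in for the paper's matching)
      scores = np.abs(X @ features.T)          # (20, k) similarity
      chars = [int(np.argmax(scores[:, j])) for j in range(k)]

      # reconstruction: approximate every generator as a linear combination
      # of the characteristic generators (least squares)
      B = X[chars]                             # (k, 500)
      W = X @ np.linalg.pinv(B)                # (20, k) combination weights
      err = np.linalg.norm(X - W @ B) / np.linalg.norm(X)
      print(chars, round(err, 3))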

  2. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows, and elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also have disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen dataset, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction using LiDAR data.

  3. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and on features within proximity to the roads as input for evaluating and prioritizing new or improved road projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road networks together with estimated road width, average grade along the road, and cross-sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  4. Intelligibility Evaluation of Pathological Speech through Multigranularity Feature Extraction and Optimization

    PubMed Central

    Ma, Lin; Zhang, Mancai

    2017-01-01

    Pathological speech usually refers to speech distortion resulting from illness or other biological insults. The assessment of pathological speech plays an important role in assisting experts, while automatic evaluation of speech intelligibility is difficult because such speech is usually nonstationary and mutational. In this paper, we carry out an independent innovation of feature extraction and reduction, and we describe a multigranularity combined feature scheme which is optimized by a hierarchical visual method. A novel method of generating the feature set based on the S-transform and chaotic analysis is proposed. The set comprises basic acoustic features (BAFS, 430), local spectral characteristics (MSCC, Mel S-transform cepstrum coefficients, 84), and chaotic features (12). Finally, radar charts and the F-score are used to optimize the features by hierarchical visual fusion. The feature set could be reduced from 526 dimensions to 96 dimensions on the NKI-CCRT corpus and to 104 dimensions on the SVD corpus. The experimental results show that the new features, classified by a support vector machine (SVM), have the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus and 78.7% on the SVD corpus. The proposed method thus proves to be effective and reliable for pathological speech intelligibility evaluation. PMID:28194222

  5. Intelligibility Evaluation of Pathological Speech through Multigranularity Feature Extraction and Optimization.

    PubMed

    Fang, Chunying; Li, Haifeng; Ma, Lin; Zhang, Mancai

    2017-01-01

    Pathological speech usually refers to speech distortion resulting from illness or other biological insults. The assessment of pathological speech plays an important role in assisting experts, while automatic evaluation of speech intelligibility is difficult because such speech is usually nonstationary and mutational. In this paper, we carry out an independent innovation of feature extraction and reduction, and we describe a multigranularity combined feature scheme which is optimized by a hierarchical visual method. A novel method of generating the feature set based on the S-transform and chaotic analysis is proposed. The set comprises basic acoustic features (BAFS, 430), local spectral characteristics (MSCC, Mel S-transform cepstrum coefficients, 84), and chaotic features (12). Finally, radar charts and the F-score are used to optimize the features by hierarchical visual fusion. The feature set could be reduced from 526 dimensions to 96 dimensions on the NKI-CCRT corpus and to 104 dimensions on the SVD corpus. The experimental results show that the new features, classified by a support vector machine (SVM), have the best performance, with a recognition rate of 84.4% on the NKI-CCRT corpus and 78.7% on the SVD corpus. The proposed method thus proves to be effective and reliable for pathological speech intelligibility evaluation.

  6. A Hough Transform Technique for Extracting Lead Features from Sea Ice Imagery

    DTIC Science & Technology

    1989-07-11

    [Garbled OCR of a DTIC report form; only fragments are recoverable. The report describes a Hough transform technique for the semi-automated extraction of lead orientation and spacing, used to compile lead statistics from sea ice imagery.]

  7. High Resolution Urban Feature Extraction for Global Population Mapping using High Performance Computing

    SciTech Connect

    Vijayaraj, Veeraraghavan; Bright, Eddie A; Bhaduri, Budhendra L

    2007-01-01

    The advent of high spatial resolution satellite imagery like QuickBird (0.6 meter) and IKONOS (1 meter) has provided a new data source for high resolution urban land cover mapping. Extracting accurate urban regions from high resolution images has many applications and is essential to the population mapping efforts of Oak Ridge National Laboratory's (ORNL) LandScan population distribution program. This paper discusses an automated parallel algorithm, implemented in a high performance computing environment, that extracts urban regions from high resolution images using texture and spectral features.

  8. The effects of compressive sensing on extracted features from tri-axial swallowing accelerometry signals

    NASA Astrophysics Data System (ADS)

    Sejdić, Ervin; Movahedi, Faezeh; Zhang, Zhenwei; Kurosu, Atsuko; Coyle, James L.

    2016-05-01

    Acquiring swallowing accelerometry signals using a compressive sensing scheme may be a desirable approach for monitoring swallowing safety over longer periods of time. However, it needs to be ensured that signal characteristics can be recovered accurately from the compressed samples. In this paper, we considered this issue by examining the effects of the number of acquired compressed samples on the calculated swallowing accelerometry signal features. We used tri-axial swallowing accelerometry signals acquired from seventeen stroke patients (106 swallows in total). From the acquired signals, we extracted typically considered signal features from the time, frequency and time-frequency domains. Next, we compared these features between the original signals (sampled using traditional sampling schemes) and compressively sampled signals. Our results show that we can obtain accurate estimates of signal features even when using only a third of the original samples.
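
    To make the recovery question concrete, the sketch below builds a signal that is sparse in the DCT domain, keeps a random third of its samples, reconstructs with orthogonal matching pursuit (scikit-learn), and compares an RMS feature before and after. This is a generic compressed-sensing illustration, not the authors' processing chain.

      import numpy as np
      from scipy.fft import idct
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n = 512
      # hypothetical signal, sparse in the DCT domain (stand-in for a swallow)
      alpha = np.zeros(n)
      alpha[rng.choice(n, 12, replace=False)] = rng.standard_normal(12)
      x = idct(alpha, norm="ortho")

      m = n // 3                                   # keep only a third of the samples
      rows = rng.choice(n, m, replace=False)
      Psi = idct(np.eye(n), axis=0, norm="ortho")  # DCT synthesis basis
      y, A = x[rows], Psi[rows]                    # compressed measurements

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=12, fit_intercept=False)
      x_hat = Psi @ omp.fit(A, y).coef_            # sparse recovery

      rms = lambda s: np.sqrt(np.mean(s ** 2))
      print(rms(x), rms(x_hat))                    # the feature survives compression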

  9. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording the physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, and multiscale entropy, is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is proven by emotion recognition results.

  10. Zone Based Hybrid Feature Extraction Algorithm for Handwritten Numeral Recognition of South Indian Scripts

    NASA Astrophysics Data System (ADS)

    Rajashekararadhya, S. V.; Ranjan, P. Vanaja

    India is a multi-lingual, multi-script country, where eighteen official scripts are accepted and over a hundred regional languages are spoken. In this paper we propose a zone-based hybrid feature extraction scheme for the recognition of off-line handwritten numerals of south Indian scripts. The character centroid is computed and the image (character/numeral) is divided into n equal zones. The average distance and average angle from the character centroid to the pixels present in each zone are computed (two features), and the zone centroid is also computed (two features). This procedure is repeated sequentially for all the zones/grids/boxes present in the numeral image; if a particular zone is empty, its value in the feature vector is set to zero. Finally, 4*n such features are extracted, as sketched below. A nearest neighbor classifier is used for subsequent classification and recognition. We obtained recognition rates of 97.55%, 94%, 92.5% and 95.2% for Kannada, Telugu, Tamil and Malayalam numerals, respectively.
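
    A minimal NumPy sketch of the zone-based features just described, assuming a binary numeral image and a square grid; all names are illustrative.

      import numpy as np

      def zone_features(img, grid=4):
          """img: binary numeral image (1 = ink). Returns 4*n features for the
          n = grid*grid zones: mean distance and mean angle of zone pixels from
          the character centroid, plus each zone centroid (row, col)."""
          ys, xs = np.nonzero(img)
          cy, cx = ys.mean(), xs.mean()            # character centroid
          h, w = img.shape
          feats = []
          for i in range(grid):
              for j in range(grid):
                  zone = img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
                  zy, zx = np.nonzero(zone)
                  if zy.size == 0:                 # empty zone -> zeros
                      feats += [0.0, 0.0, 0.0, 0.0]
                      continue
                  py, px = zy + i*h//grid, zx + j*w//grid
                  d = np.hypot(py - cy, px - cx)   # distances to centroid
                  a = np.arctan2(py - cy, px - cx) # angles to centroid
                  feats += [d.mean(), a.mean(), zy.mean(), zx.mean()]
          return np.array(feats)                   # length 4 * grid**2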

  11. The Motions and Morphologies of cloud features on Neptune: continued monitoring with Keck Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Martin, S. C.; de Pater, I.; Gibbard, S. G.; Macintosh, B. A.; Roe, H. G.; Max, C. E.

    2002-09-01

    We present near-infrared images taken in the H band (1.4-1.8 microns) using the newly commissioned NIRC2 at the W. M. Keck II telescope as part of a continuing program to monitor the atmospheric dynamics of Neptune using adaptive optics. These images, with a resolution of 0.06 arcseconds, reveal five infrared-bright groups of features. Two groups of features (30-40 deg N and 20-50 deg S) are confined in latitude but span all longitudes, creating bands around the planet. Small cloud morphology and relative motions in the wide southern band (20-50 deg S) identify apparent cloud shearing events and differences in relative speeds within latitude bands. One localized group of features (30 deg N) shows interesting morphologies with marked departures from lines of latitude. Another localized group of south polar features (70 deg S) shows changes in morphology from a teardrop to a train of clouds to an arc of features during three years of observations. The final group of features is spatially diffuse and spans many latitude lines but is tightly confined in longitude. This research was supported in part by the STC Program of the National Science Foundation under Agreement No. AST-9876783, and in part under the auspices of the US Department of Energy at Lawrence Livermore National Laboratory, Univ. of Calif. under contract No. W-7405-Eng-48. Data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W.M. Keck Foundation.

  12. Product spectrum matrix feature extraction and recognition of radar deception jamming

    NASA Astrophysics Data System (ADS)

    Tian, Xiao; Tang, Bin; Gui, Guan

    2013-12-01

    A deception jamming recognition algorithm is proposed based on the product spectrum matrix. First, the product spectrum in each pulse repetition interval (PRI) is calculated, and the product spectra over frequency and slow time are arranged into a two-dimensional matrix. Second, non-negative matrix factorisation (NMF) is used to extract the features, and the separability of the characteristic parameters is further analysed by the F-ratio. Finally, the best features are selected to recognise the deception jamming. The experimental results show that the average recognition accuracy of the proposed algorithm is higher than 90% when the SNR is greater than 6 dB.
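
    A hedged sketch of the NMF feature-extraction step with scikit-learn, treating each flattened product-spectrum matrix as a non-negative sample vector; the matrix sizes, rank, and data are illustrative assumptions.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      # hypothetical product-spectrum matrices: 60 jamming samples, each a
      # flattened 32x16 frequency/slow-time magnitude matrix (non-negative)
      X = rng.random((60, 32 * 16))

      nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
      features = nmf.fit_transform(X)   # (60, 8): per-sample feature vectors
      parts = nmf.components_           # (8, 512): learned spectral parts
      print(features.shape, parts.shape)

    An F-ratio step would then score each feature column by between-class versus within-class variance to select the most separable features for recognition.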

  13. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh-scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence intervals of approximately ±0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were therefore compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional

  14. Topology-based Simplification for Feature Extraction from 3D Scalar Fields

    SciTech Connect

    Gyulassy, A; Natarajan, V; Pascucci, V; Bremer, P; Hamann, B

    2005-10-13

    This paper describes a topological approach for simplifying continuous functions defined on volumetric domains. We present a combinatorial algorithm that simplifies the Morse-Smale complex by repeated application of two atomic operations that remove pairs of critical points. The Morse-Smale complex is a topological data structure that provides a compact representation of gradient flows between critical points of a function. Critical points paired by the Morse-Smale complex identify topological features and their importance. The simplification procedure leaves important critical points untouched and is therefore useful for extracting desirable features. We also present a visualization of the simplified topology.

  15. Feature-Motivated Simplified Adaptive PCNN-Based Medical Image Fusion Algorithm in NSST Domain.

    PubMed

    Ganasala, Padma; Kumar, Vinod

    2016-02-01

    Multimodality medical image fusion plays a vital role in diagnosis, treatment planning, and follow-up studies of various diseases. It provides a composite image containing critical information from the source images required for better localization and definition of different organs and lesions. In state-of-the-art image fusion methods based on the nonsubsampled shearlet transform (NSST) and pulse-coupled neural network (PCNN), authors have used the normalized coefficient value to motivate the PCNN processing of both low-frequency (LF) and high-frequency (HF) sub-bands. This makes the fused image blurred and decreases its contrast. The main objective of this work is to design an image fusion method that gives a fused image with better contrast and more detail information, suitable for clinical use. We propose a novel image fusion method utilizing a feature-motivated adaptive PCNN in the NSST domain for fusion of anatomical images. The basic PCNN model is simplified, and an adaptive linking strength is used. Different features are used to motivate the PCNN processing of the LF and HF sub-bands. The proposed method is extended to the fusion of a functional image with an anatomical image in the improved nonlinear intensity hue and saturation (INIHS) color model. Extensive fusion experiments have been performed on CT-MRI and SPECT-MRI datasets. Visual and quantitative analysis of the experimental results proved that the proposed method provides satisfactory fusion outcomes compared to other image fusion methods.

  16. Complex Biological Event Extraction from Full Text using Signatures of Linguistic and Semantic Features

    SciTech Connect

    McGrath, Liam R.; Domico, Kelly O.; Corley, Courtney D.; Webb-Robertson, Bobbie-Jo M.

    2011-06-24

    Building on technical advances from the BioNLP 2009 Shared Task Challenge, the 2011 challenge sets forth to generalize techniques to other complex biological event extraction tasks. In this paper, we present the implementation and evaluation of a signature-based machine-learning technique to predict events from full texts of infectious disease documents. Specifically, our approach uses novel signatures composed of traditional linguistic features and semantic knowledge to predict event triggers and their candidate arguments. Using a leave-one-out analysis, we report the contribution of linguistic and shallow semantic features to trigger prediction and candidate argument extraction. Lastly, we examine evaluations and posit causes for errors in the infectious disease track subtasks.

  17. Radiomics: extracting more information from medical images using advanced feature analysis.

    PubMed

    Lambin, Philippe; Rios-Velazquez, Emmanuel; Leijenaar, Ralph; Carvalho, Sara; van Stiphout, Ruud G P M; Granton, Patrick; Zegers, Catharina M L; Gillies, Robert; Boellard, Ronald; Dekker, André; Aerts, Hugo J W L

    2012-03-01

    Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy-based molecular assays but gives huge potential to medical imaging, which can capture intra-tumoural heterogeneity non-invasively. During the past decades, medical imaging innovations, with new hardware, new imaging agents and standardised protocols, have allowed the field to move towards quantitative imaging. The development of automated and reproducible analysis methodologies to extract more information from image-based features is therefore also a requirement. Radiomics--the high-throughput extraction of large amounts of image features from radiographic images--addresses this problem and is one of the approaches that hold great promise but need further validation in multi-centric settings and in the laboratory.

  18. Cartographic feature extraction with integrated SIR-B and Landsat TM images

    NASA Technical Reports Server (NTRS)

    Welch, R.; Ehlers, Manfred

    1988-01-01

    A digital cartographic multisensor image database of excellent geometry and improved resolution was created by registering SIR-B images to a rectified Landsat TM reference image and applying intensity-hue-saturation enhancement techniques. When evaluated against geodetic control, RMSE(XY) values of approximately ±20 m were noted for the composite SIR-B/TM images. The completeness of cartographic features extracted from the composite images exceeded those obtained from separate SIR-B and TM image data sets by approximately 10 and 25 percent, respectively, indicating that the composite images may prove suitable for planimetric mapping at a scale of 1:100,000 or smaller. At present, the most effective method for extracting cartographic information involves digitizing features directly from the image processing display screen.

  19. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image processing task which has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, operating on images from video sequences, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eye and nose images separately, and a multi-layer perceptron classifier was then used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes part, 98.16% for the nose part, and 97.25% for the whole face).

  20. Constructing New Biorthogonal Wavelet Type which Matched for Extracting the Iris Image Features

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.; Suhardjo; Susanto, Adhi

    2013-04-01

    Previous research has sought to obtain new types of wavelets. For iris recognition using orthogonal or biorthogonal wavelets, the Haar filter had been found the most suitable for recognizing the iris image. However, a new wavelet should be designed to find the best match for extracting iris image features, so that it can easily be applied for identification, recognition, or authentication purposes. In this research, a new biorthogonal wavelet was designed based on the properties of the Haar filter and Haar's orthogonality conditions. As a result, a new biorthogonal 5/7 filter-type wavelet was obtained which performs better than other wavelet types, including Haar, at extracting iris image features, as measured by mean-squared error (MSE) and Euclidean distance.

  1. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IoT) is a kind of intelligent network which can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, research is done into visual feature extraction and the establishment of visual tags for the human face based on the ORL face database. First, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt the support vector machine (SVM) for classification and face recognition; finally we establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can display the visual tags of objects conveniently.
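
    The ORL database used in the entry ships with scikit-learn as the Olivetti faces, so the PCA-plus-SVM stage can be sketched end to end; the split and hyperparameters below are assumptions, and the tag-assignment step is only indicated in a comment.

      from sklearn.datasets import fetch_olivetti_faces
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # the ORL face database is distributed with scikit-learn as Olivetti faces
      faces = fetch_olivetti_faces()
      Xtr, Xte, ytr, yte = train_test_split(
          faces.data, faces.target, test_size=0.25,
          stratify=faces.target, random_state=0)

      # PCA eigenfaces for feature extraction, SVM for recognition; the
      # recognized identity would then index the stored visual tag
      model = make_pipeline(PCA(n_components=60, whiten=True, random_state=0),
                            SVC(kernel="rbf", C=10))
      model.fit(Xtr, ytr)
      print("recognition accuracy: %.3f" % model.score(Xte, yte))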

  2. Geomorphological feature extraction from a digital elevation model through fuzzy knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Argialas, Demetre P.; Tzotsos, Angelos

    2003-03-01

    The objective of this research was the investigation of advanced image analysis methods for geomorphological mapping. The methods employed included multiresolution segmentation of the GTOPO30 Digital Elevation Model (DEM) and fuzzy knowledge-based classification of the segmented DEM into three geomorphological classes: mountain ranges, piedmonts and basins. The study area was a segment of the Basin and Range Physiographic Province in Nevada, USA. The implementation was made in eCognition. In particular, the segmentation of GTOPO30 resulted in primitive objects. The knowledge-based classification of the primitive objects, based on their elevation and shape parameters, resulted in the extraction of the geomorphological features. The resulting boundaries were found satisfactory in comparison to those of previous studies. It is concluded that geomorphological feature extraction can be carried out through fuzzy knowledge-based classification as implemented in eCognition.

  3. Special object extraction from medieval books using superpixels and bag-of-features

    NASA Astrophysics Data System (ADS)

    Yang, Ying; Rushmeier, Holly

    2017-01-01

    We propose a method to extract special objects in images of medieval books, which generally represent, for example, figures and capital letters. Instead of working on the single-pixel level, we consider superpixels as the basic classification units for improved time efficiency. More specifically, we classify superpixels into different categories/objects by using a bag-of-features approach, where a superpixel category classifier is trained with the local features of the superpixels of the training images. With the trained classifier, we are able to assign the category labels to the superpixels of a historical document image under test. Finally, special objects can easily be identified and extracted after analyzing the categorization results. Experimental results demonstrate that, as compared to the state-of-the-art algorithms, our method provides comparable performance for some historical books but greatly outperforms them in terms of generality and computational time.

  4. Adaptive quasi-Newton algorithm for source extraction via CCA approach.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian; Feng, Da-Zheng

    2014-04-01

    This paper addresses the problem of adaptive source extraction via the canonical correlation analysis (CCA) approach. Based on Liu's analysis of the CCA approach, we propose a new criterion for source extraction, which is proved to be equivalent to the CCA criterion. Then, a fast and efficient online algorithm using quasi-Newton iteration is developed. The stability of the algorithm is also analyzed using Lyapunov's method, which shows that the proposed algorithm asymptotically converges to the global minimum of the criterion. Simulation results are presented to support our theoretical analysis and demonstrate the merits of the proposed algorithm in terms of convergence speed and success rate of source extraction.

  5. Automatic geomorphic feature extraction from lidar in flat and engineered landscapes

    NASA Astrophysics Data System (ADS)

    Passalacqua, Paola; Belmont, Patrick; Foufoula-Georgiou, Efi

    2012-03-01

    High-resolution topographic data derived from light detection and ranging (lidar) technology enables detailed geomorphic observations to be made on spatially extensive areas in a way that was previously not possible. Availability of this data provides new opportunities to study the spatial organization of landscapes and channel network features, increase the accuracy of environmental transport models, and inform decisions for targeting conservation practices. However, with the opportunity of increased resolution topographic data come formidable challenges in terms of automatic geomorphic feature extraction, analysis, and interpretation. Low-relief landscapes are particularly challenging because topographic gradients are low, and in many places both the landscape and the channel network have been heavily modified by humans. This is especially true for agricultural landscapes, which dominate the midwestern United States. The goal of this work is to address several issues related to feature extraction in flat lands by using GeoNet, a recently developed method based on nonlinear multiscale filtering and geodesic optimization for automatic extraction of geomorphic features (channel heads and channel networks) from high-resolution topographic data. Here we test the ability of GeoNet to extract channel networks in flat and human-impacted landscapes using 3 m lidar data for the Le Sueur River Basin, a 2880 km2 subbasin of the Minnesota River Basin. We propose a curvature analysis to differentiate between channels and manmade structures that are not part of the river network, such as roads and bridges. We document that Laplacian curvature more effectively distinguishes channels in flat, human-impacted landscapes compared with geometric curvature. In addition, we develop a method for performing automated channel morphometric analysis including extraction of cross sections, detection of bank locations, and identification of geomorphic bankfull water surface elevation. Using
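
    The Laplacian-versus-geometric curvature point can be illustrated with a minimal stand-in for GeoNet's filtering stage: smooth the DEM, take the Laplacian (SciPy), and flag high-curvature cells as channel candidates. The DEM here is synthetic and the threshold is an arbitrary assumption.

      import numpy as np
      from scipy.ndimage import gaussian_filter, laplace

      def laplacian_curvature(dem, sigma=2.0):
          """Laplacian curvature of a gridded DEM after Gaussian smoothing;
          positive values flag concave (channelized) cells. A minimal stand-in
          for GeoNet's nonlinear multiscale filtering stage."""
          smooth = gaussian_filter(dem.astype(float), sigma)
          return laplace(smooth)

      # hypothetical lidar DEM tile (gently sloping noisy surface)
      rng = np.random.default_rng(0)
      dem = rng.random((200, 200)).cumsum(axis=0) * 0.01
      curv = laplacian_curvature(dem)
      channels = curv > np.quantile(curv, 0.95)  # candidate channel cells
      print(channels.sum())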

  6. Effects of LiDAR Derived DEM Resolution on Hydrographic Feature Extraction

    NASA Astrophysics Data System (ADS)

    Yang, P.; Ames, D. P.; Glenn, N. F.; Anderson, D.

    2010-12-01

    This paper examines the effect of LiDAR-derived digital elevation model (DEM) resolution on digitally extracted stream networks with respect to known stream channel locations. Two study sites, Reynolds Creek Experimental Watershed (RCEW) and Dry Creek Experimental Watershed (DCEW), which represent terrain characteristics of lower and intermediate elevation mountainous watersheds in the Intermountain West, were selected as study areas for this research. DEMs reflecting the bare earth surface were created from the LiDAR observations at a series of raster cell sizes (from 1 m to 60 m) using spatial interpolation techniques. The effect of DEM resolution on the resulting hydrographic feature (specifically stream channel) derivation was studied. Stream length, watershed area, and sinuosity were explored at each of the raster cell sizes. Also, deviation from the known channel location, estimated as the root mean square error (RMSE) between the surveyed channel location and the extracted channel, was computed for each of the DEMs and extracted stream networks. As expected, the results indicate that the DEM-based hydrographic extraction process provides more detailed hydrographic features at finer resolutions. RMSE between the known and modeled channel locations generally increased with larger DEM cell sizes, with a greater effect in the larger RCEW. Sensitivity analyses on sinuosity demonstrated that the shape of streams obtained from LiDAR data matched the reference data best at an intermediate cell size, in the range of 5 to 10 m, rather than at the highest resolution, likely due to the original point spacing, terrain characteristics, and LiDAR noise. More importantly, the absolute sinuosity deviation displayed its smallest value at a cell size of 10 m in both experimental watersheds, which suggests that the optimal cell size for LiDAR-derived DEMs used for hydrographic feature extraction is 10 m.

  7. Using the erroneous data clustering to improve the feature extraction weights of original image algorithms

    NASA Astrophysics Data System (ADS)

    Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao

    2017-02-01

    Much data mining adopts the form of the Artificial Neural Network (ANN) to solve problems, but training an ANN involves many issues, such as the number of labeled samples, training time and performance, the number of hidden layers, and the transfer function. If the compared results are not as expected, it cannot be known clearly which dimension causes the deviation. The main reason is that an ANN is trained by modifying weights toward the correct output; the training does not improve the original feature extraction algorithm applied to the image. To address these problems, this paper puts forward a method to assist the image data analysis of an ANN. Normally, a parameter is set as the value used to extract the feature vector when processing an image, which we regard as a weight. The experiment uses the values extracted from Speeded Up Robust Features (SURF) feature points as the basis for training; SURF can extract different feature points according to the extracted values. We perform initial semi-supervised clustering on these values and use Modified Fuzzy K-Nearest Neighbors (MFKNN) for training and classification. The matching of unknown images is not a one-to-one complete comparison but compares only group centroids, mainly for efficiency and speed; the retrieved results are then observed and analyzed. The method mainly uses the nature of image feature points for clustering and classification, assigning values to groups with high error rates to produce new feature points that are fed into the input layer of the ANN for training; finally, a comparative analysis is made with a Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network

  8. Performance Comparison of Feature Extraction Algorithms for Target Detection and Classification

    DTIC Science & Technology

    2013-01-01

    [Garbled DTIC record mixing citation fragments with the abstract; only fragments are recoverable. The recoverable abstract text states that a feature extraction algorithm, symbolic dynamic filtering (SDF), is investigated for target detection and classification using unmanned ground sensors (UGS). Authors: Soheil Bahrampour, Asok Ray, Soumalya Sarkar, Thyagaraju Damarla, Nasser M. Nasrabadi.]

  9. Multi range spectral feature fitting for hyperspectral imagery in extracting oilseed rape planting area

    NASA Astrophysics Data System (ADS)

    Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin

    2013-12-01

    Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, SFF does not always secure higher accuracy in extracting image information. Multi range spectral feature fitting (MRSFF), from the ENVI software, allows the user to focus on the spectral features of interest to yield better performance; the spectral wavelength ranges and their corresponding weights must therefore be determined. The purpose of this article is to demonstrate the performance of MRSFF in extracting the oilseed rape planting area. A practical method for defining the weight values, the variance coefficient weight method, was proposed as the criterion. Oilseed rape field canopy spectra covering the whole growth stage were collected prior to investigating their phenological variation, and oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples for analyzing the oilseed rape field. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated with respect to field-measured spectra from the closest date. Using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the entirety of the spectral profile. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a superior result compared to conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). MRSFF yielded a robust, convincing result and may therefore further the use of hyperspectral imagery in precision agriculture.

  10. Identification and Classification of OFDM Based Signals Using Preamble Correlation and Cyclostationary Feature Extraction

    DTIC Science & Technology

    2009-09-01

    [Garbled DTIC record for a September 2009 thesis on the identification and classification of OFDM-based signals using preamble correlation and cyclostationary feature extraction, by Steven R. Schnur. Recoverable fragments note the opportunities provided by rapidly advancing wireless communication networks and define the base element of the 802.16 frame, the physical slot, with duration t_ps = 4/f_s, where f_s is the sampling frequency.]

  11. Structural feature extraction protocol for classifying reversible membrane binding protein domains.

    PubMed

    Källberg, Morten; Lu, Hui

    2009-01-01

    Machine learning based classification protocols for automated function annotation of protein structures have in many instances proven superior to simpler sequence based procedures. Here we present an automated method for extracting features from protein structures by construction of surface patches to be used in such protocols. The utility of the developed patch-growing procedure is exemplified by its ability to identify reversible membrane binding domains from the C1, C2, and PH families.

  12. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Ahmad, Rana Fayyaz; Badruddin, Nasreen; Kamel, Nidal; Hussain, Muhammad; Chooi, Weng-Tink

    2015-03-01

    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to EEG signals and the relative wavelet energy is calculated from the detail coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for classification. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task (Raven's Advanced Progressive Matrices test) and (2) EEG signals recorded in a resting condition with eyes open. The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity and precision. Accuracy above 98% was achieved by the support vector machine, multi-layer perceptron and K-nearest neighbor classifiers with the approximation (A4) and detail coefficients (D4), which represent the frequency ranges of 0.53-3.06 Hz and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task with a high accuracy rate.
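
    The relative wavelet energy feature is simple to sketch with PyWavelets: the energy of each sub-band divided by the total energy across all sub-bands. The wavelet, level, and input below are assumptions for illustration.

      import numpy as np
      import pywt

      def relative_wavelet_energy(eeg, wavelet="db4", level=4):
          """Energy of each sub-band (A4, D4..D1) divided by total energy,
          following the entry's relative-wavelet-energy feature."""
          coeffs = pywt.wavedec(eeg, wavelet, level=level)
          energies = np.array([np.sum(c ** 2) for c in coeffs])
          return energies / energies.sum()

      rng = np.random.default_rng(0)
      print(relative_wavelet_energy(rng.standard_normal(1280)))  # sums to 1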

  13. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    NASA Astrophysics Data System (ADS)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images is investigated. Mitosis detection is a very expensive and time-consuming process, and the development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells, so it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor is applied with different spatial window sizes in the RGB and L*a*b* color spaces, so that spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. The extracted features are compared across various sample sizes by support vector machines using k-fold cross validation. The results show that the separation accuracy for mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
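
    A minimal sketch of Haralick-style co-occurrence features for one spatial window using scikit-image's gray-level co-occurrence matrix; the window size, quantization levels, and property list are illustrative assumptions, and the paper's RGB/L*a*b* channel handling is not reproduced.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def haralick_patch_features(patch, levels=32):
          """Co-occurrence statistics for one grayscale patch, e.g. a spatial
          window around a candidate mitotic pixel."""
          q = (patch / patch.max() * (levels - 1)).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          props = ["contrast", "homogeneity", "energy", "correlation"]
          return np.array([graycoprops(glcm, p).mean() for p in props])

      rng = np.random.default_rng(0)
      patch = rng.integers(0, 256, (31, 31)).astype(float)  # hypothetical window
      print(haralick_patch_features(patch))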

  14. Exploration of Genetic Programming Optimal Parameters for Feature Extraction from Remote Sensed Imagery

    NASA Astrophysics Data System (ADS)

    Gao, P.; Shetty, S.; Momm, H. G.

    2014-11-01

    Evolutionary computation is used for improved information extraction from high-resolution satellite imagery. The utilization of evolutionary computation is based on stochastic selection of input parameters, often defined in a trial-and-error approach. However, exploration of optimal input parameters can yield improved candidate solutions while requiring reduced computational resources. In this study, the design and implementation of a system that investigates the optimal input parameters were researched for the problem of feature extraction from remotely sensed imagery. The two primary assessment criteria were the highest fitness value and the overall computational time. The parameters explored include the population size and the percentage and order of mutation and crossover. The proposed system has two major subsystems: (i) data preparation, the generation of random candidate solutions; and (ii) data processing, an evolutionary process based on genetic programming, which is used to spectrally distinguish the features of interest from the remaining image background of remotely sensed imagery. The results demonstrate that the optimal generation number is around 1500 and that the optimal percentages of mutation and crossover range from 35% to 40% and from 5% to 0%, respectively. Based on our findings, the sequence that yielded better results was mutation before crossover. These findings are conducive to improving the efficacy of utilizing genetic programming for feature extraction from remotely sensed imagery.

  15. Automatic Road Area Extraction from Printed Maps Based on Linear Feature Detection

    NASA Astrophysics Data System (ADS)

    Callier, Sebastien; Saito, Hideo

    Raster maps are widely available in everyday life and can contain a huge amount of information of any kind, using labels, pictograms, or color codes, for example. However, it is not an easy task to extract roads from those maps because of these overlapping features. In this paper, we focus on an automated method that extracts roads by using linear feature detection to search for seed points with a high probability of belonging to roads. These linear features are lines of pixels of homogeneous color in each direction around each pixel. The seeds are then expanded before choosing to keep or discard the extracted element. Because this method is not mainly based on color segmentation, it is also suitable for handwritten maps, for example. The experimental results demonstrate that in most cases our method gives results similar to the usual methods without needing any prior data or user input, although it does require some knowledge of the target maps; it also works with handwritten maps drawn following some basic rules, where the usual methods fail.
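
    The seed-point idea (a pixel is a good road candidate when runs of near-homogeneous color pass through it) can be sketched as follows; the run length and color tolerance are assumed parameters and image-boundary handling is omitted for brevity:

```python
import numpy as np

DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals

def linear_feature_score(img, r, c, half_len=8, tol=15.0):
    """Number of directions whose pixel run through (r, c) is color-homogeneous."""
    score = 0
    for dr, dc in DIRECTIONS:
        run = np.asarray([img[r + k * dr, c + k * dc]
                          for k in range(-half_len, half_len + 1)], dtype=float)
        if run.std(axis=0).max() < tol:   # homogeneous in every color channel
            score += 1
    return score  # a high score marks a good road seed candidate
```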

  16. Extracting Features from an Electrical Signal of a Non-Intrusive Load Monitoring System

    NASA Astrophysics Data System (ADS)

    Figueiredo, Marisa B.; de Almeida, Ana; Ribeiro, Bernardete; Martins, António

    Improving energy efficiency by monitoring household electrical consumption is of significant importance given present-day climate change concerns. A solution to the electrical consumption management problem is the use of a non-intrusive load monitoring (NILM) system. This system captures the signals from the aggregate consumption, extracts the features from these signals, and classifies the extracted features in order to identify the switched-on appliances. Effective device identification (ID) requires a signature to be assigned to each appliance. Moreover, to specify an ID for each device, signal processing techniques are needed to extract the relevant features. This paper describes a technique for steady-state recognition in a digital electrical signal as the first stage in the implementation of an innovative NILM. The final goal is to develop an intelligent system for identifying the appliances by automated learning. The proposed approach is based on the ratio between rectangular areas defined by the signal samples. The computational experiments show the method's effectiveness for accurate steady-state identification in the electrical input signals.
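
    A minimal sketch of steady-state segmentation on an aggregate power signal. Note this uses a simple change-tolerance rule as a stand-in for the paper's rectangular-area ratio, and the run length and tolerance are assumptions:

```python
import numpy as np

def steady_state_segments(power, min_len=20, tol=5.0):
    """Indices (start, end) of runs where power stays within +/- tol of the run start."""
    segments, start = [], 0
    for i in range(1, len(power)):
        if abs(power[i] - power[start]) > tol:   # step edge closes the current run
            if i - start >= min_len:
                segments.append((start, i))
            start = i
    if len(power) - start >= min_len:
        segments.append((start, len(power)))
    return segments

# Differences between consecutive steady-state levels give candidate appliance signatures.
```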

  17. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections.

    PubMed

    Dhou, Salam; Motai, Yuichi; Hugo, Geoffrey D

    2013-02-01

    Accounting for respiratory motion during imaging can help improve targeting precision in radiation therapy. We propose local intensity feature tracking (LIFT), a novel markerless breath-phase sorting method for cone beam computed tomography (CBCT) scan images. The contributions of this study are twofold. First, LIFT extracts the respiratory signal from CBCT projections of the thorax depending only on tissue feature points that exhibit respiration. Second, the extracted respiratory signal is shown to correlate with standard respiration signals. LIFT extracts feature points in the first CBCT projection of a sequence and tracks those points in consecutive projections, forming trajectories. Clustering is applied to select trajectories showing an oscillating behavior similar to the breathing motion. Those "breathing" trajectories are used in a 3-D reconstruction approach to recover the 3-D motion of the lung, which represents the respiratory signal. Experiments were conducted on datasets exhibiting regular and irregular breathing patterns. Results showed that the LIFT-based respiratory signal correlates with the diaphragm-position-based signal with an average phase shift of 1.68 projections, as well as with the internal-marker-based signal with an average phase shift of 1.78 projections. LIFT was able to detect the respiratory signal in all projections of all datasets.
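
    The extract-and-track stage maps naturally onto standard corner detection plus pyramidal Lucas-Kanade tracking; a sketch with OpenCV, where the detector settings are assumptions and lost-point bookkeeping is omitted:

```python
import cv2
import numpy as np

def track_feature_trajectories(projections):
    """projections: list of 8-bit grayscale CBCT projections of equal size."""
    pts = cv2.goodFeaturesToTrack(projections[0], maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    trajectories = [pts.reshape(-1, 2)]
    prev = projections[0]
    for frame in projections[1:]:
        pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        trajectories.append(pts.reshape(-1, 2))
        prev = frame
    return np.stack(trajectories)  # shape: (n_projections, n_points, 2)

# Trajectories with large periodic vertical motion are kept as "breathing" tracks.
```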

  18. A parametric feature extraction and classification strategy for brain-computer interfacing.

    PubMed

    Burke, Dave P; Kelly, Simon P; de Chazal, Philip; Reilly, Richard B; Finucane, Ciarán

    2005-03-01

    Parametric modeling strategies are explored in conjunction with linear discriminant analysis for use in an electroencephalogram (EEG)-based brain-computer interface (BCI). A left/right self-paced typing exercise is analyzed by extending the usual autoregressive (AR) model for EEG feature extraction with an AR with exogenous input (ARX) model for combined filtering and feature extraction. The ensemble-averaged Bereitschaftspotential (an event-related potential preceding the onset of movement) forms the exogenous signal input to the ARX model. Based on trials with six subjects, the ARX case of modeling both the signal and noise was found to be considerably more effective than modeling the noise alone (common in BCI systems), with the AR method yielding a classification accuracy of 52.8+/-4.8% and the ARX method an accuracy of 79.1+/-3.9% across subjects. The results suggest a role for ARX-based feature extraction in BCIs based on evoked and event-related potentials.
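
    The AR half of the pipeline is easy to reproduce. A sketch using statsmodels, with an assumed model order of 6 (the abstract does not state the order), followed by LDA classification:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ar_features(epoch, order=6):
    """AR coefficients of one single-channel EEG epoch as a feature vector."""
    fit = AutoReg(epoch, lags=order).fit()
    return fit.params[1:]  # drop the intercept term

# epochs: iterable of 1-D arrays; labels: left/right class per epoch
# X = np.vstack([ar_features(e) for e in epochs])
# clf = LinearDiscriminantAnalysis().fit(X, labels)
```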

  19. Extracting product features and opinion words using pattern knowledge in customer reviews.

    PubMed

    Htay, Su Su; Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, and it motivates the development of automatic opinion mining applications. Therefore, efficient methods and techniques are needed to extract opinions from reviews. In this paper, we propose a novel method to find opinion words or phrases for each feature in customer reviews in an efficient way. Our focus is on obtaining patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product.
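
    A minimal sketch of the part-of-speech pattern idea with NLTK; the two adjacency patterns below are a small assumed subset of the pattern knowledge described:

```python
import nltk  # requires the punkt and averaged_perceptron_tagger resources

def feature_opinion_pairs(review):
    """Extract (product-feature noun, opinion adjective) candidate pairs."""
    tagged = nltk.pos_tag(nltk.word_tokenize(review))
    pairs = []
    for (w1, t1), (w2, t2) in zip(tagged, tagged[1:]):
        if t1.startswith("JJ") and t2.startswith("NN"):
            pairs.append((w2, w1))   # "great battery" -> (battery, great)
        elif t1.startswith("NN") and t2.startswith("JJ"):
            pairs.append((w1, w2))   # noun followed by adjective, simplified
    return pairs

# feature_opinion_pairs("The camera takes great pictures") -> [("pictures", "great")]
```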

  20. Extracting Product Features and Opinion Words Using Pattern Knowledge in Customer Reviews

    PubMed Central

    Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Opinion extraction from customer reviews is becoming an interesting area of research, and it motivates the development of automatic opinion mining applications. Therefore, efficient methods and techniques are needed to extract opinions from reviews. In this paper, we propose a novel method to find opinion words or phrases for each feature in customer reviews in an efficient way. Our focus is on obtaining patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product. PMID:24459430

  1. Fine-Grain Feature Extraction from Malware's Scan Behavior Based on Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Eto, Masashi; Sonoda, Kotaro; Inoue, Daisuke; Yoshioka, Katsunari; Nakao, Koji

    Network monitoring systems that detect and analyze malicious activities, as well as respond against them, are becoming increasingly important. As malware such as worms, viruses, and bots can inflict significant damage on both infrastructure and end users, technologies for identifying such propagating malware are in great demand. In large-scale darknet monitoring operations, we can see that malware exhibits various kinds of scan patterns in choosing destination IP addresses. Since many of those oscillations seem to have a natural periodicity, as if they were signal waveforms, we applied a spectrum analysis methodology to extract malware features. With a focus on such scan patterns, this paper proposes a novel concept of malware feature extraction and a distinct analysis method named “SPectrum Analysis for Distinction and Extraction of malware features (SPADE)”. Through several evaluations using real scan traffic, we show that SPADE has the significant advantage of recognizing the similarities and dissimilarities between the same and different types of malware.
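
    The core idea, treating a malware scan pattern as a waveform and examining its spectrum, can be sketched with a plain FFT of per-bin packet counts; the bin width is an assumption:

```python
import numpy as np

def scan_power_spectrum(arrival_times, bin_sec=1.0):
    """Power spectrum of a scan-packet arrival series observed on a darknet."""
    t = np.asarray(arrival_times, dtype=float)
    t -= t.min()
    counts, _ = np.histogram(t, bins=np.arange(0.0, t.max() + bin_sec, bin_sec))
    counts = counts - counts.mean()            # remove the DC component
    power = np.abs(np.fft.rfft(counts)) ** 2
    freqs = np.fft.rfftfreq(len(counts), d=bin_sec)
    return freqs, power  # dominant peaks characterize the scan periodicity
```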

  2. Extraction of enclosure culture area from SPOT-5 image based on texture feature

    NASA Astrophysics Data System (ADS)

    Tang, Wei; Zhao, Shuhe; Ma, Ronghua; Wang, Chunhong; Zhang, Shouxuan; Li, Xinliang

    2007-06-01

    The east Taihu Lake region is characterized by high-density and large areas of enclosure culture, which tend to cause eutrophication of the lake and worsen the quality of its water. This paper takes a (380×380) area of the east Taihu Lake image as an example and discusses an extraction method combining the texture features of a high-resolution image with spectral information. First, we choose the best band combination of 1, 3, 4 according to the principles of maximal entropy combination and the OIF index. After applying band operations and a principal component analysis (PCA) transformation, we achieve dimensionality reduction and data compression. Subsequently, the textures of the first principal component image are analyzed using Gray Level Co-occurrence Matrices (GLCM), yielding the contrast, entropy and mean statistics. The mean statistic is fixed as the optimal index, and appropriate conditional extraction thresholds are determined. Finally, decision trees are established to extract the enclosure culture area. Combining the spectral information with the spatial texture features, we obtain a satisfactory extraction result and provide a technical reference for a wide-area survey of enclosure culture.
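
    The PCA step, which compresses the selected bands into the single image on which the GLCM textures are computed, can be sketched as follows:

```python
import numpy as np
from sklearn.decomposition import PCA

def first_principal_component(bands):
    """bands: (rows, cols, n_bands) array -> first principal component as an image."""
    rows, cols, n = bands.shape
    flat = bands.reshape(-1, n).astype(float)
    pc1 = PCA(n_components=1).fit_transform(flat)
    return pc1.reshape(rows, cols)  # input image for the GLCM texture statistics
```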

  3. Alertness Modulates Conflict Adaptation and Feature Integration in an Opposite Way

    PubMed Central

    Chen, Jia; Huang, Xiting; Chen, Antao

    2013-01-01

    Previous studies show that the congruency sequence effect can result from both the conflict adaptation effect (CAE) and the feature integration effect, which can be observed as the repetition priming effect (RPE) and the feature overlap effect (FOE) depending on the experimental conditions. Evidence from neuroimaging studies suggests a close correlation between the neural mechanisms of alertness-related modulations and the congruency sequence effect. However, little is known about whether and how alertness mediates the congruency sequence effect. In Experiment 1, the Attentional Networks Test (ANT) and a modified flanker task were used to evaluate whether the alertness of the attentional functions correlates with the CAE and RPE. In Experiment 2, the ANT and another modified flanker task were used to investigate whether alertness correlates with the CAE and FOE. In Experiment 1, correlational analysis revealed a significant positive correlation between alertness and the CAE, and a negative correlation between alertness and the RPE; a significant negative correlation also existed between the CAE and RPE. In Experiment 2, we found a marginally significant negative correlation between the CAE and the RPE, but the correlations between alertness and the FOE, and between the CAE and the FOE, were not significant. These results suggest that alertness modulates conflict adaptation and feature integration in opposite ways. Participants in the high-alertness group may tend to use a top-down cognitive processing strategy, whereas participants in the low-alertness group tend to use a bottom-up processing strategy. PMID:24250824

  4. Temporal Feature Extraction from DCE-MRI to Identify Poorly Perfused Subvolumes of Tumors Related to Outcomes of Radiation Therapy in Head and Neck Cancer

    PubMed Central

    You, Daekeun; Aryal, Madhava; Samuels, Stuart E.; Eisbruch, Avraham; Cao, Yue

    2017-01-01

    This study aimed to develop an automated model to extract temporal features from DCE-MRI in head-and-neck (HN) cancers to localize significant tumor subvolumes having low blood volume (LBV) for predicting local and regional failure after chemoradiation therapy. Temporal features were extracted from time-intensity curves to build a classification model for differentiating voxels with LBV from those with high BV. A support vector machine (SVM) was trained on the extracted features for voxel classification. Subvolumes with LBV were then assembled from the classified LBV voxels. The model was trained and validated on independent datasets created from 456,873 DCE curves. The resulting subvolumes were compared to ones derived by a two-step method via pharmacokinetic modeling of blood volume, and evaluated for classification accuracy and volumetric similarity by the Dice similarity coefficient (DSC). The proposed model achieved an average voxel-level classification accuracy of 82% and a DSC of 0.72. The model also showed tolerance to different DCE-MRI acquisition parameters. The model could be directly used for outcome prediction and therapy assessment in radiation therapy of HN cancers, or even support boost target definition in adaptive clinical trials with further validation. The model is fully automatable, extendable, and scalable for extracting temporal features of DCE-MRI in other tumors. PMID:28111634
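
    The abstract does not enumerate the temporal features, so the sketch below uses common time-intensity-curve descriptors (peak enhancement, time to peak, wash-in and wash-out slopes, area under the curve) as assumed stand-ins, with an SVM on top:

```python
import numpy as np
from sklearn.svm import SVC

def curve_features(intensity, t):
    """Simple temporal descriptors of one voxel's DCE time-intensity curve."""
    k = int(np.argmax(intensity))
    wash_in = (intensity[k] - intensity[0]) / max(t[k] - t[0], 1e-9)
    wash_out = (intensity[-1] - intensity[k]) / max(t[-1] - t[k], 1e-9)
    auc = np.trapz(intensity, t)
    return np.array([intensity[k], t[k], wash_in, wash_out, auc])

# X = np.vstack([curve_features(c, t) for c in curves]); y: 1 = low blood volume
# svm = SVC(kernel="rbf").fit(X, y)   # voxel classifier; LBV subvolume = positive voxels
```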

  5. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received considerable attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the thresholds manually. The main idea of this paper is to identify stable extrema with a machine learning algorithm. First, we use the ASIFT approach, coupled with illumination changes and blur, to generate multi-view simulated images, which make up the simulated image set of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known; compared with the traditional matching process, which relies on the unstable RANSAC method to obtain the affine transformation, this approach is more stable and accurate. Second, we calculate the stability value of the feature points from the image set and its affine transformations. We then obtain the feature properties of each feature point, such as DoG features, scale, and edge point density. These two form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training is performed with Rank-SVM, yielding a weight vector. In use, based on the feature properties of each point and the weight vector obtained by training, we compute a ranking score for each feature point that reflects its stability, and sort the feature points accordingly. In conclusion, we compared our algorithm with the original SIFT detector; under different viewpoint changes, blurs, and illuminations, the experimental results show that our algorithm is more effective.

  6. DBSCAN-based ROI extracted from SAR images and the discrimination of multi-feature ROI

    NASA Astrophysics Data System (ADS)

    He, Xin Yi; Zhao, Bo; Tan, Shu Run; Zhou, Xiao Yang; Jiang, Zhong Jin; Cui, Tie Jun

    2009-10-01

    The purpose of this paper is to extract regions of interest (ROIs) from coarsely detected synthetic aperture radar (SAR) images and discriminate whether each ROI contains a target, so as to eliminate false alarms and prepare for target recognition. Automatic target clustering is one of the most difficult tasks in a SAR-image automatic target recognition system. Density-based spatial clustering of applications with noise (DBSCAN) relies on a density-based notion of clusters and is designed to discover clusters of arbitrary shape. DBSCAN is applied here to SAR image processing for the first time, and it has many attractive properties: only two relatively insensitive parameters (neighborhood radius and minimum number of points) are needed; clusters of arbitrary shape, which suit the coarsely detected SAR images, can be discovered; and computation time and memory can be reduced. In the multi-feature ROI discrimination scheme, we extract several target features comprising geometric features, such as the area discriminator and a Radon-transform-based target profile discriminator; distribution characteristics, such as the EFF discriminator; and the EM scattering property, such as the PPR discriminator. The synthesized judgment effectively eliminates the false alarms.
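
    The clustering stage maps directly onto scikit-learn's DBSCAN applied to the coordinates of the coarsely detected pixels; the eps and min_samples values below are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_rois(detection_mask, eps=3.0, min_samples=10):
    """Group coarse-detection pixels of a SAR image into arbitrary-shape ROIs."""
    coords = np.column_stack(np.nonzero(detection_mask))          # (n_pixels, 2)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    return [coords[labels == k] for k in np.unique(labels) if k != -1]  # -1 = noise
```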

  7. Feature extraction from 3D lidar point clouds using image processing methods

    NASA Astrophysics Data System (ADS)

    Zhu, Ling; Shortridge, Ashton; Lusch, David; Shi, Ruoming

    2011-10-01

    Airborne LiDAR data have become cost-effective to produce at local and regional scales across the United States and internationally. These data are typically collected and processed into surface data products by contractors for state and local communities. Current algorithms for advanced processing of LiDAR point cloud data are normally implemented in specialized, expensive software that is not available to many users, and these users are therefore unable to experiment with the LiDAR point cloud data directly to extract desired feature classes. The objective of this research is to identify and assess automated, readily implementable GIS procedures to extract features like buildings, vegetated areas, parking lots and roads from LiDAR data using standard image processing tools, as such tools are relatively mature with many effective classification methods. The final procedure adopted employs four distinct stages. First, interpolation is used to transfer the 3D points to a high-resolution raster; raster grids of both height and intensity are generated. Second, multiple raster maps - a normalized digital surface model (nDSM), difference of returns, slope, and the LiDAR intensity map - are conflated to generate a multi-channel image. Third, a feature space of this image is created. Finally, supervised classification on the feature space is implemented. The approach is demonstrated on both a conceptual model and a complex real-world case study, and its strengths and limitations are addressed.
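
    The first stage, interpolating the 3D points to a high-resolution raster, can be sketched with SciPy's scattered-data interpolation; the grid resolution and the linear method are assumptions:

```python
import numpy as np
from scipy.interpolate import griddata

def rasterize(xy, values, res=1.0):
    """Interpolate a scattered LiDAR attribute (height or intensity) onto a grid."""
    xi = np.arange(xy[:, 0].min(), xy[:, 0].max(), res)
    yi = np.arange(xy[:, 1].min(), xy[:, 1].max(), res)
    gx, gy = np.meshgrid(xi, yi)
    return griddata(xy, values, (gx, gy), method="linear")

# Stacking rasters (nDSM, difference of returns, slope, intensity) yields the
# multi-channel image on which the supervised classification is run.
```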

  8. EEG artifact elimination by extraction of ICA-component features using image processing algorithms.

    PubMed

    Radüntz, T; Scouten, J; Hochmuth, O; Meffert, B

    2015-03-30

    Artifact rejection is a central issue when dealing with electroencephalogram recordings. Although independent component analysis (ICA) separates data into linearly independent components (ICs), the classification of these components as artifact or EEG signal still requires visual inspection by experts. In this paper, we achieve automated artifact elimination using linear discriminant analysis (LDA) for classification of feature vectors extracted from ICA components via image processing algorithms. We compare the performance of this automated classifier to visual classification by experts and identify range filtering as a feature extraction method with great potential for automated IC artifact recognition (accuracy rate 88%). We obtain almost the same level of recognition performance for geometric features and local binary pattern (LBP) features. Compared to existing automated solutions, the proposed method has two main advantages: first, it does not depend on direct recording of artifact signals, which then, e.g., have to be subtracted from the contaminated EEG; second, it is not limited to a specific number or type of artifact. In summary, the present method is an automatic, reliable, real-time capable and practical tool that reduces the time-intensive manual selection of ICs for artifact removal. The results are very promising despite the relatively small channel resolution of 25 electrodes.
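
    A skeleton of the pipeline with scikit-learn, using FastICA as a stand-in for the ICA algorithm (the abstract does not specify which implementation) and random placeholders for the image-derived IC feature vectors:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
eeg = rng.standard_normal((5000, 25))          # placeholder: samples x 25 channels

ica = FastICA(n_components=25, random_state=0)
sources = ica.fit_transform(eeg)               # one column per independent component

# In the paper, feature vectors come from image-processing operators (e.g. range
# filtering) applied to each IC; random placeholders stand in for them here.
features = rng.standard_normal((25, 10))
labels = np.zeros(25, dtype=int)
labels[:5] = 1                                 # pretend the first five ICs are artifacts
clf = LinearDiscriminantAnalysis().fit(features, labels)
```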

  9. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    NASA Astrophysics Data System (ADS)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout to improve the detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks that use manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural networks.

  10. Extraction of features from sleep EEG for Bayesian assessment of brain development

    PubMed Central

    2017-01-01

    Brain development can be evaluated by experts analysing age-related patterns in sleep electroencephalograms (EEG). Natural variations in the patterns, noise, and artefacts affect the evaluation accuracy as well as experts' agreement. Knowledge of the predictive posterior distribution allows experts to estimate confidence intervals within which decisions are distributed. The Bayesian approach to probabilistic inference has provided accurate estimates of intervals of interest. In this paper we propose a new feature extraction technique for Bayesian assessment and estimation of the predictive distribution in the case of newborn brain development assessment. The new EEG features are verified within the Bayesian framework on a large EEG dataset including 1,100 recordings made from newborns in 10 age groups. The proposed features are highly correlated with brain maturation and their use increases the assessment accuracy. PMID:28323852

  11. Handwritten Chinese character recognition based on supervised competitive learning neural network and block-based relative fuzzy feature extraction

    NASA Astrophysics Data System (ADS)

    Sun, Limin; Wu, Shuanhu

    2005-02-01

    Offline handwritten Chinese character recognition is still a difficult problem because of large stroke variations, writing anomalies, and the difficulty of obtaining stroke order information. Generally, offline handwritten Chinese character recognition can be divided into two procedures: feature extraction to capture handwritten character information, and feature classification for character recognition. In this paper, we propose a new Chinese character recognition algorithm. In the feature extraction part, we adopt an elastic mesh division method for extracting block features and their relative fuzzy features, which utilize the relations between different strokes and the distribution probability of a stroke in its neighboring sub-blocks. In the recognition part, we construct a classifier based on a supervised competitive learning algorithm to train a competitive learning neural network with the extracted feature set. Experimental results show that the performance of our algorithm is encouraging and comparable to other algorithms.

  12. Concordance of computer-extracted image features with BI-RADS descriptors for mammographic mass margin

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Paramagul, Chintana; Nees, Alexis; Helvie, Mark; Shi, Jiazheng

    2008-03-01

    The purpose of this study was to develop and evaluate computer-extracted features for characterizing mammographic mass margins according to the BI-RADS spiculated and circumscribed categories. The mass was automatically segmented using an active contour model. A spiculation measure for a pixel on the mass boundary was defined by using the angular difference between the image gradient vector and the normal to the mass, averaged over pixels in a spiculation search region. For the circumscribed margin feature, the angular difference between the principal eigenvector of the Hessian matrix and the normal to the mass was estimated in a band of pixels centered at each point on the boundary, and the feature was extracted from the resulting profile along the boundary. Three MQSA radiologists provided BI-RADS margin ratings for a dataset of 198 regions of interest containing breast masses. The features were evaluated with respect to the individual radiologists' characterizations using receiver operating characteristic (ROC) analysis, as well as with respect to the majority rule, in which a mass was labeled as spiculated (circumscribed) if it was characterized as such by 2 or 3 radiologists, and non-spiculated (non-circumscribed) otherwise. We also investigated the performance of the features for consensus masses, defined as those labeled as spiculated (circumscribed) or non-spiculated (non-circumscribed) by all three radiologists. When masses were labeled according to radiologists R1, R2, and R3 individually, the spiculation feature had an area under the ROC curve (Az) of 0.90+/-0.04, 0.90+/-0.03, and 0.88+/-0.03, respectively, while the circumscribed margin feature had an Az value of 0.77+/-0.04, 0.74+/-0.04, and 0.80+/-0.03, respectively. When masses were labeled according to the majority rule, the Az values for the spiculation and circumscribed margin features were 0.92+/-0.03 and 0.80+/-0.03, respectively. When only the consensus masses were considered, the Az

  13. Extraction of spatial features in hyperspectral images based on the analysis of differential attribute profiles

    NASA Astrophysics Data System (ADS)

    Falco, Nicola; Benediktsson, Jon A.; Bruzzone, Lorenzo

    2013-10-01

    The new generation of hyperspectral sensors can provide images with high spectral and spatial resolution. Recent improvements in mathematical morphology have produced techniques such as Attribute Profiles (APs) and Extended Attribute Profiles (EAPs) that can effectively model the spatial information in remote sensing images. The main drawbacks of these techniques are the selection of the optimal range of values for the family of criteria adopted at each filter step, and the high dimensionality of the profiles, which results in a very large number of features and thereby provokes the Hughes phenomenon. In this work, we focus on addressing the dimensionality issue, which leads to high intrinsic information redundancy, by proposing a novel strategy for extracting spatial information from hyperspectral images based on the analysis of Differential Attribute Profiles (DAPs). A DAP is generated by computing the derivative of the AP; it shows at each level the residual between two adjacent levels of the AP. By analyzing the multilevel behavior of the DAP, it is possible to extract geometrical features corresponding to the structures within the scene at different scales. Our proposed approach consists of two steps: 1) a homogeneity measurement is used to identify the level L at which a given pixel belongs to a region with a physical meaning; 2) the geometrical information of the extracted regions is fused into a single map according to the previously identified level L. The process is repeated for different attributes, building a reduced EAP whose dimensionality is much lower than that of the original EAP. Experiments carried out on the hyperspectral dataset of the Pavia University area show the effectiveness of the proposed method in extracting spatial features related to the physical structures present in the scene, achieving higher classification accuracy than the ones reported in the state-of-the-art literature.

  14. Fault feature extraction and enhancement of rolling element bearing in varying speed condition

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Zhang, W.; Qin, Z. Y.; Chu, F. L.

    2016-08-01

    In engineering applications, load variability usually varies the shaft speed, which degrades the efficacy of diagnostic methods based on the constant-speed hypothesis. Therefore, the investigation of diagnostic methods suitable for varying speed conditions is significant for bearing fault diagnosis. In this paper, a novel fault feature extraction and enhancement procedure is proposed that combines iterative envelope analysis with a low-pass filtering operation. First, based on an analytical model of the collected vibration signal, the envelope signal is theoretically calculated and the iterative envelope analysis is improved for the varying speed condition. Then, a feature enhancement procedure is performed by applying a low-pass filter to the temporal envelope obtained by the iterative envelope analysis. Finally, the temporal envelope signal is transformed to the angular domain by computed order tracking, and the fault feature is extracted from the squared envelope spectrum. Simulations and experiments were used to validate the efficacy of the theoretical analysis and the proposed procedure. It is shown that computed order tracking should be applied to the envelope of the signal in order to avoid energy spreading and amplitude distortion. Compared with feature enhancement performed by the fast kurtogram and the corresponding optimal band-pass filtering, the proposed method can efficiently extract the fault signature in the varying speed condition with less amplitude attenuation. Furthermore, since it does not involve center frequency estimation, the proposed method is more concise for engineering applications.
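
    The enhancement step, low-pass filtering the temporal envelope before order tracking, can be sketched with SciPy; here the Hilbert transform stands in for the paper's iterative envelope analysis, and the cutoff frequency is an assumed value:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def enhanced_envelope(vibration, fs, cutoff_hz=200.0, order=4):
    """Analytic-signal envelope followed by zero-phase low-pass filtering."""
    envelope = np.abs(hilbert(vibration))
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, envelope)

# The filtered envelope is then resampled to the angular domain (computed order
# tracking) and the fault signature read from its squared envelope spectrum.
```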

  15. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by localized faults carry important measurement information for bearing fault diagnosis. Thus it is crucial to extract the transients from bearing vibration or acoustic signals, which are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on sparse representation in the time-frequency (TF) domain. The approach is realized by presenting a new method, called local TF template matching. In this method, TF atoms are constructed from the TF distribution (TFD) of Morlet wavelet bases, and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms, as well as an effective template matching path on the TF plane. In each iteration, local TF templates are correlated with the TFD of the analyzed signal along the IF ridge tube to identify the optimum parameters of the transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms with the phase of the raw signal. Local TF template matching builds an effective TF matching-based sparse representation approach with the merit of respecting the native pulse waveform structure of transients. The effectiveness of the proposed method is verified on practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.

  16. Speech recognition in reverberant and noisy environments employing multiple feature extractors and i-vector speaker adaptation

    NASA Astrophysics Data System (ADS)

    Alam, Md Jahangir; Gupta, Vishwa; Kenny, Patrick; Dumouchel, Pierre

    2015-12-01

    The REVERB challenge provides a common framework for the evaluation of feature extraction techniques in the presence of both reverberation and additive background noise. State-of-the-art speech recognition systems perform well in controlled environments, but their performance degrades in realistic acoustical conditions, especially in real as well as simulated reverberant environments. In this contribution, we utilize multiple feature extractors, including the conventional mel filterbank, multi-taper spectrum estimation-based mel filterbank, robust mel and compressive gammachirp filterbank, iterative deconvolution-based dereverberated mel filterbank, and maximum likelihood inverse filtering-based dereverberated mel-frequency cepstral coefficient features, for speech recognition with multi-condition training data. In order to improve speech recognition performance, we combine their results using ROVER (Recognizer Output Voting Error Reduction). For the two- and eight-channel tasks, to benefit from the multi-channel data, we also use ROVER, instead of a multi-microphone signal processing method, to reduce the word error rate by selecting the best scoring word at each channel. As in previous work, we also apply i-vector-based speaker adaptation, which was found to be effective. In speech recognition, speaker adaptation tries to reduce the mismatch between the training and test speakers. Speech recognition experiments are conducted on the REVERB challenge 2014 corpora using the Kaldi recognizer. In our experiments, we use both utterance-based batch processing and full batch processing. In the single-channel task, full batch processing reduced the word error rate (WER) from 10.0 to 9.3% on SimData as compared to utterance-based batch processing. Using full batch processing, we obtained an average WER of 9.0 and 23.4% on the SimData and RealData, respectively, for the two-channel task, whereas for the eight-channel task on the SimData and RealData, the average WERs found were 8

  17. Feature extraction techniques using multivariate analysis for identification of lung cancer volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Thriumani, Reena; Zakaria, Ammar; Hashim, Yumi Zuhanis Has-Yun; Helmy, Khaled Mohamed; Omar, Mohammad Iqbal; Jeffree, Amanina; Adom, Abdul Hamid; Shakaff, Ali Yeon Md; Kamarudin, Latifah Munirah

    2017-03-01

    In this experiment, three different cell cultures (A549, WI38VA13 and MCF7) and a blank medium (without cells) as a control were used. An electronic nose (E-Nose) was used to sniff the headspace of the cultured cells and the data were recorded. After data pre-processing, two different feature sets were extracted, taking into consideration both steady-state and transient information. The extracted data were then processed by a multivariate analysis, linear discriminant analysis (LDA), to visualize the clustering information in the multi-sensor space. A probabilistic neural network (PNN) classifier was used to test the performance of the E-Nose in determining the volatile organic compounds (VOCs) of the lung cancer cell line. The LDA data projection was able to differentiate effectively between the lung cancer cell samples and the other samples (breast cancer, normal cells and blank medium). The features extracted from the steady-state response reached a 100% classification rate, while the transient response, with the aid of LDA dimensionality reduction, also produced 100% classification performance using the PNN classifier with a spread value of 0.1. The results also show that the E-Nose, with the aid of multivariate analysis, is a promising technique to be applied to real patients in further work and can be an alternative to current lung cancer diagnostic methods.
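
    A probabilistic neural network is compact enough to sketch directly. The minimal NumPy version below classifies one sample by the largest class-averaged Gaussian kernel activation, using the spread value quoted in the abstract:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, spread=0.1):
    """PNN decision: class whose training points give the largest mean RBF activation."""
    best_cls, best_score = None, -np.inf
    for cls in np.unique(y_train):
        d2 = np.sum((X_train[y_train == cls] - x) ** 2, axis=1)
        score = np.mean(np.exp(-d2 / (2.0 * spread ** 2)))
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls

# X_train would hold LDA-projected E-Nose features; classes: A549 / WI38VA13 / MCF7 / blank.
```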

  18. Chemical name extraction based on automatic training data generation and rich feature set.

    PubMed

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier to this task is the difficulty of obtaining a sizable, good-quality dataset to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.

  19. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

    Nonnegative Tucker3 decomposition (NTD) has attracted much attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration conditions that occur in mechanical fault diagnosis. To decompose a large-scale tensor and extract available bispectrum features, a method conjugating the Choi-Williams kernel function with a Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from O(n^N lg n) in 3D space to O(R1R2 n lg n) on 1D vectors due to the low-rank form of the Tucker-product convolution. Meanwhile, a simultaneous updating algorithm is given to overcome the overfitting, slow convergence and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more regular feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method than by the other methods in bispectrum feature extraction, and a legible fault expression can also be produced by the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF reaches 81.66 dB against 15.17 dB for beta-divergence-based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s of hierarchical alternating least squares based on NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computational efficiency, but can also be used to extract more regular and sparser bispectrum features of gearbox faults.

  20. Unique Features of Fish Immune Repertoires: Particularities of Adaptive Immunity Within the Largest Group of Vertebrates.

    PubMed

    Magadan, Susana; Sunyer, Oriol J; Boudinot, Pierre

    2015-01-01

    Fishes (i.e., teleost fishes) are the largest group of vertebrates. Although their immune system is based on the fundamental receptors, pathways, and cell types found in all groups of vertebrates, fishes show a diversity of particular features that challenge some classical concepts of immunology. In this chapter, we discuss the particularities of fish immune repertoires from a comparative perspective. We examine how allelic exclusion can be achieved when multiple Ig loci are present, how isotypic diversity and functional specificity impact clonal complexity, how loss of the MHC class II molecules affects the cooperation between T and B cells, and how deep sequencing technologies bring new insights about somatic hypermutation in the absence of germinal centers. The unique coexistence of two distinct B-cell lineages respectively specialized in systemic and mucosal responses is also discussed. Finally, we try to show that the diverse adaptations of immune repertoires in teleosts can help in understanding how somatic adaptive mechanisms of immunity evolved in parallel in different lineages across vertebrates.

  1. Unique Features of Fish Immune Repertoires: Particularities of Adaptive Immunity Within the Largest Group of Vertebrates

    PubMed Central

    Sunyer, Oriol J.

    2016-01-01

    Fishes (i.e., teleost fishes) are the largest group of vertebrates. Although their immune system is based on the fundamental receptors, pathways, and cell types found in all groups of vertebrates, fishes show a diversity of particular features that challenge some classical concepts of immunology. In this chapter, we discuss the particularities of fish immune repertoires from a comparative perspective. We examine how allelic exclusion can be achieved when multiple Ig loci are present, how isotypic diversity and functional specificity impact clonal complexity, how loss of the MHC class II molecules affects the cooperation between T and B cells, and how deep sequencing technologies bring new insights about somatic hypermutation in the absence of germinal centers. The unique coexistence of two distinct B-cell lineages respectively specialized in systemic and mucosal responses is also discussed. Finally, we try to show that the diverse adaptations of immune repertoires in teleosts can help in understanding how somatic adaptive mechanisms of immunity evolved in parallel in different lineages across vertebrates. PMID:26537384

  2. Solid waste bin level detection using gray level co-occurrence matrix feature extraction approach.

    PubMed

    Arebey, Maher; Hannan, M A; Begum, R A; Basri, Hassan

    2012-08-15

    This paper presents solid waste bin level detection and classification using gray level co-occurrence matrix (GLCM) feature extraction methods. GLCM parameters, such as displacement d, quantization G, and the number of textural features, are investigated to determine the best parameter values for the bin images. The parameter values and number of texture features are used to form the GLCM database. The most appropriate features collected from the GLCM are then used as inputs to the multi-layer perceptron (MLP) and K-nearest neighbor (KNN) classifiers for bin image classification and grading. The classification and grading performance for the DB1, DB2 and DB3 feature sets was evaluated with both MLP and KNN classifiers. The results demonstrated that the KNN classifier, at k = 3, d = 1 and maximum G values, performs better than the MLP classifier with the same database. Based on the results, this method has the potential to be used in solid waste bin level classification and grading to provide a robust solution for solid waste bin level detection, monitoring and management.

  3. Detecting abnormality in optic nerve head images using a feature extraction analysis.

    PubMed

    Zhu, Haogang; Poostchi, Ali; Vernon, Stephen A; Crabb, David P

    2014-07-01

    Imaging and evaluation of the optic nerve head (ONH) plays an essential part in the detection and clinical management of glaucoma. The morphological characteristics of ONHs vary greatly from person to person and this variability means it is difficult to quantify them in a standardized way. We developed and evaluated a feature extraction approach using shift-invariant wavelet packet and kernel principal component analysis to quantify the shape features in ONH images acquired by scanning laser ophthalmoscopy (Heidelberg Retina Tomograph [HRT]). The methods were developed and tested on 1996 eyes from three different clinical centers. A shape abnormality score (SAS) was developed from extracted features using a Gaussian process to identify glaucomatous abnormality. SAS can be used as a diagnostic index to quantify the overall likelihood of ONH abnormality. Maps showing areas of likely abnormality within the ONH were also derived. Diagnostic performance of the technique, as estimated by ROC analysis, was significantly better than the classification tools currently used in the HRT software - the technique offers the additional advantage of working with all images and is fully automated.
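
    The kernel-PCA stage can be sketched with scikit-learn; the RBF kernel, the component count, and the random placeholder standing in for the wavelet-packet feature vectors are all assumptions:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.standard_normal((1996, 128))    # placeholder: one wavelet-packet vector per eye

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3)
shape_features = kpca.fit_transform(X)  # inputs to the Gaussian-process abnormality score
```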

  4. Lumbar Ultrasound Image Feature Extraction and Classification with Support Vector Machine.

    PubMed

    Yu, Shuang; Tan, Kok Kiong; Sng, Ban Leong; Li, Shengjin; Sia, Alex Tiong Heng

    2015-10-01

    Needle entry site localization remains a challenge for procedures that involve lumbar puncture, for example, epidural anesthesia. To solve this problem, we have developed an image classification algorithm that can automatically identify the bone/interspinous region in ultrasound images obtained from the lumbar spine of pregnant patients in the transverse plane. The proposed algorithm consists of feature extraction, feature selection and machine learning procedures. A set of features, including matching values, positions and the appearance of black pixels within pre-defined windows along the midline, were extracted from the ultrasound images using template matching and midline detection methods. A support vector machine was then used to classify the bone images and interspinous images. The support vector machine model was trained with 1,040 images from 26 pregnant subjects and tested on 800 images from a separate set of 20 pregnant patients. A success rate of 95.0% on the training set and 93.2% on the test set was achieved with the proposed method. The trained support vector machine model was further tested on 46 off-line collected videos, and successfully identified the proper needle insertion site (interspinous region) in 45 of the cases. Therefore, the proposed method is able to process ultrasound images of the lumbar spine in an automatic manner, so as to facilitate the anesthetists' work of identifying the needle entry site.
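
    The matching-value features come from standard normalized template matching; a sketch with OpenCV, where the template source and the midline handling are simplified away:

```python
import cv2
import numpy as np

def matching_features(us_image, template):
    """Best normalized correlation with a bone/interspinous template and its position."""
    res = cv2.matchTemplate(us_image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    return np.array([max_val, max_loc[0], max_loc[1]])

# Feature vectors from several templates/windows feed the SVM bone-vs-interspinous classifier.
```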

  5. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
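
    The block DCT with zigzag coefficient selection is straightforward to reproduce; the number of retained coefficients below is an assumption:

```python
import numpy as np
from scipy.fft import dctn

def zigzag(n):
    """JPEG-style zigzag ordering of the (row, col) indices of an n x n block."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def palmprint_features(palm, block=8, keep=16):
    """Concatenate the first `keep` zigzag DCT-II coefficients of each block."""
    order = zigzag(block)[:keep]
    feats = []
    for r in range(0, palm.shape[0] - block + 1, block):
        for c in range(0, palm.shape[1] - block + 1, block):
            d = dctn(palm[r:r + block, c:c + block].astype(float), norm="ortho")
            feats.extend(d[i, j] for i, j in order)
    return np.asarray(feats)
```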

  6. Wood Texture Features Extraction by Using GLCM Combined With Various Edge Detection Methods

    NASA Astrophysics Data System (ADS)

    Fahrurozi, A.; Madenda, S.; Ernastuti; Kerami, D.

    2016-06-01

    Textures in an image can be distinguished manually by eye; however, this is sometimes difficult when the textures are quite similar. Wood is a natural material that forms a unique texture, and experts can assess the quality of wood based on the texture observed in certain parts of it. In this study, texture features were extracted from wood images so that the characteristics of wood can be identified digitally by computer. Feature extraction was carried out using Gray Level Co-occurrence Matrices (GLCM) built on images produced by several edge detection methods applied to the wood image. The edge detection methods used include Roberts, Sobel, Prewitt, Canny and Laplacian of Gaussian. The wood images were taken in the LE2i laboratory, Université de Bourgogne, from wood samples in France that were grouped by experts according to quality and divided into four quality types. The resulting statistics illustrate the distribution of texture feature values for each wood type, compared according to the edge operator used and the selected GLCM parameters.

  7. Consistent Feature Extraction From Vector Fields: Combinatorial Representations and Analysis Under Local Reference Frames

    SciTech Connect

    Bhatia, Harsh

    2015-05-01

    This dissertation presents research on addressing some of the contemporary challenges in the analysis of vector fields—an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single “correct” reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, thus creating serious physical consequences. This work develops new reference frames that enable extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of “correctness” of features for certain goals by revealing the phenomena of importance from the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (timeindependent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residing numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency as the most fundamental characteristic of computational analysis that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis along with the uncertainty

  8. Image copy-move forgery detection based on sped-up robust features descriptor and adaptive minimal-maximal suppression

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Sun, Xingming; Xin, Xiangyang; Hu, Weifeng; Wu, Youxin

    2015-11-01

    Region duplication is a simple and effective operation for creating digital image forgeries, where a continuous portion of pixels in an image is copied and pasted to a different location in the same image. Many prior copy-move forgery detection methods suffer from an inability to detect a duplicated region that has been subjected to various geometric transformations. A keypoint-based approach is proposed to detect copy-move forgery in an image. Our method starts by extracting keypoints with a fast Hessian detector. Then an adaptive minimal-maximal suppression (AMMS) strategy is developed for distributing the keypoints evenly throughout the image. By using AMMS and a sped-up robust feature descriptor, the proposed method is able to deal with the problem of insufficient keypoints in almost uniform areas. Finally, the geometric transformation performed in cloning is recovered by maximum likelihood estimation of the homography. Experimental results show the efficacy of this technique in detecting copy-move forgeries and estimating the geometric transformation parameters. Compared with the state of the art, our approach obtains a higher true positive rate and a lower false positive rate.
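
    A sketch of the keypoint pipeline: since SURF is patent-encumbered in stock OpenCV, ORB stands in for the fast-Hessian/SURF stage, self-matches are skipped explicitly, and RANSAC replaces the paper's maximum-likelihood homography estimation:

```python
import cv2
import numpy as np

def clone_transform(gray):
    """Match an image against itself and recover the cloning homography."""
    orb = cv2.ORB_create(nfeatures=2000)
    kps, desc = orb.detectAndCompute(gray, None)
    knn = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(desc, desc, k=3)
    good = [trio[1] for trio in knn                    # trio[0] is the self-match
            if len(trio) == 3 and trio[1].distance < 0.75 * trio[2].distance]
    src = np.float32([kps[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kps[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inliers  # many spatially coherent inliers indicate a duplicated region
```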

  9. Robo-Psychophysics: Extracting Behaviorally Relevant Features from the Output of Sensors on a Prosthetic Finger.

    PubMed

    Delhaye, Benoit P; Schluter, Erik W; Bensmaia, Sliman J

    2016-01-01

    Efforts are underway to restore sensorimotor function in amputees and tetraplegic patients using anthropomorphic robotic hands. For this approach to be clinically viable, sensory signals from the hand must be relayed back to the patient. To convey tactile feedback necessary for object manipulation, behaviorally relevant information must be extracted in real time from the output of sensors on the prosthesis. In the present study, we recorded the sensor output from a state-of-the-art bionic finger during the presentation of different tactile stimuli, including punctate indentations and scanned textures. Furthermore, the parameters of stimulus delivery (location, speed, direction, indentation depth, and surface texture) were systematically varied. We developed simple decoders to extract behaviorally relevant variables from the sensor output and assessed the degree to which these algorithms could reliably extract these different types of sensory information across different conditions of stimulus delivery. We then compared the performance of the decoders to that of humans in analogous psychophysical experiments. We show that straightforward decoders can extract behaviorally relevant features accurately from the sensor output and most of them outperform humans.

  10. Adaptive descriptor based on the geometric consistency of local image features: application to flower image classification

    NASA Astrophysics Data System (ADS)

    Najjar, Asma; Zagrouba, Ezzeddine

    2016-09-01

    Geometric consistency is usually considered a postprocessing step to filter matched sets of local features in order to discard outliers. In this work, it is used to propose an adaptive feature that describes the geometric dispersion of keypoints. It is based on a distribution computed by a nonparametric estimator, so that no assumption is made about the data. We investigate and discuss the invariance properties of our descriptor under the most common two- and three-dimensional transformations. Then, we apply it to flower recognition. The classification is performed using the precomputed kernel of a support vector machine classifier; indeed, a similarity computing framework that uses the Kullback-Leibler divergence is presented. Furthermore, a customized layout for each flower image is designed to describe and compare separately the boundary and the central area of flowers. Experiments on the Oxford flower-17 dataset demonstrate the efficiency of our method in terms of classification accuracy and computational complexity. The limits of our descriptor are also discussed on a 10-class subset of the Oxford flower-102 dataset.
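
    The descriptor is a nonparametric density of keypoint positions compared via the Kullback-Leibler divergence; a sketch with SciPy, where the evaluation grid is an assumed discretization:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.special import rel_entr

def dispersion_descriptor(keypoints_xy, grid_xy):
    """Kernel density of keypoint coordinates (n x 2) evaluated on a fixed grid (m x 2)."""
    density = gaussian_kde(keypoints_xy.T)(grid_xy.T)
    return density / density.sum()

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two discretized descriptors."""
    return float(np.sum(rel_entr(p + eps, q + eps)))

# exp(-KL), or a symmetrized variant, can serve as a precomputed SVM kernel entry.
```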

  11. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high-resolution information about trees, street features, and pole-like objects along streets or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contain a large diversity of tree species. The MLS data consist of high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The segmentation method comprises the following steps. First, the ground points are determined. Second, cylinders are fitted in a vertical slice 1-1.5 m above ground, which is used to determine the potential location of each single tree trunk and other cylinder-like objects. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted object are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base height, crown width, crown length, trunk diameter, and tree volume. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying trees into single species. MLS data used in this project had been measured in the framework of
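
    The trunk-detection step can be pictured as a least-squares circle fit to a horizontal slice of points, with residuals separating trunk-like clusters from other objects. This is only a sketch of the cylinder-fitting idea: the slice extraction, the algebraic (Kasa) fitting variant, and the 3 cm threshold are assumptions.

```python
# Sketch: fit a circle (horizontal cylinder cross-section) to points from a
# 1-1.5 m slice above ground and use residuals to judge how trunk-like the
# cluster is. Slice extraction is assumed to be done upstream.
import numpy as np

def fit_circle(xy):
    """Least-squares (Kasa) circle fit: returns center (cx, cy) and radius r."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

slice_xy = np.random.rand(150, 2)               # stand-in for a point slice
cx, cy, r = fit_circle(slice_xy)
residuals = np.abs(np.hypot(slice_xy[:, 0] - cx,
                            slice_xy[:, 1] - cy) - r)
is_trunk = residuals.std() < 0.03                # hypothetical 3 cm threshold
```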

  12. SHERPA: an image segmentation and outline feature extraction tool for diatoms and other objects

    PubMed Central

    2014-01-01

    Background: Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring, and paleo-environmental studies. In these applications, large numbers of frustules usually need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they have not become widespread in diatom analysis. While methodological reports are available for a wide variety of methods for image segmentation, diatom identification, and feature extraction, no single implementation exists that combines a subset of these into a readily applicable workflow accessible to diatomists. Results: The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation through object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and support for high-throughput analyses with minimal manual intervention. Conclusions: Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling, manual quality control and

  13. The tactile speed aftereffect depends on the speed of adapting motion across the skin rather than other spatiotemporal features.

    PubMed

    McIntyre, Sarah; Seizova-Cajic, Tatjana; Holcombe, Alex O

    2016-03-01

    After prolonged exposure to a surface moving across the skin, this felt movement appears slower, a phenomenon known as the tactile speed aftereffect (tSAE). We asked which feature of the adapting motion drives the tSAE: speed, the spacing between texture elements, or the frequency with which they cross the skin. After adapting to a ridged moving surface with one hand, participants compared the speed of test stimuli on adapted and unadapted hands. We used surfaces with different spatial periods (SPs; 3, 6, 12 mm) that produced adapting motion with different combinations of adapting speed (20, 40, 80 mm/s) and temporal frequency (TF; 3.4, 6.7, 13.4 ridges/s). The primary determinant of tSAE magnitude was speed of the adapting motion, not SP or TF. This suggests that adaptation occurs centrally, after speed has been computed from SP and TF, and/or that it reflects a speed cue independent of those features in the first place (e.g., indentation force). In a second experiment, we investigated the properties of the neural code for speed. Speed tuning predicts that adaptation should be greatest for speeds at or near the adapting speed. However, the tSAE was always stronger when the adapting stimulus was faster (242 mm/s) than the test (30-143 mm/s) compared with when the adapting and test speeds were matched. These results give no indication of speed tuning and instead suggest that adaptation occurs at a level where an intensive code dominates. In an intensive code, the faster the stimulus, the more the neurons fire.

  14. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection

    SciTech Connect

    Park, Sang Hyun; Gao, Yaozong; Shi, Yinghuan; Shen, Dinggang

    2014-11-01

    Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to their limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation method. Methods: The authors formulate the editing problem as a semisupervised learning problem which can utilize a priori knowledge from training data as well as the valuable information from user interactions. Specifically, within a region of interest near the given user interactions, appropriate training labels that are well matched with the user interactions are locally searched from a training set. By voting from the selected training labels, confident prostate and background voxels, as well as unconfident voxels, can be estimated. To reflect the informative relationship between voxels, location-adaptive features are selected from the confident voxels by using a regression forest and the Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced in the semisupervised learning algorithm, and the labels of unconfident voxels are predicted by the regularized semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to
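
    A minimal sketch of the Fisher-separation scoring step; the regression-forest stage is omitted, and the sizes and names here are illustrative, not the authors' implementation.

```python
# Sketch: score location-adaptive features with the Fisher separation
# criterion, keeping features that best separate confident prostate vs.
# background voxels. Feature extraction itself is assumed done upstream.
import numpy as np

def fisher_score(feat, labels):
    """(mean gap)^2 / (sum of variances), computed per feature column."""
    a, b = feat[labels == 1], feat[labels == 0]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0) + 1e-12
    return num / den

feats = np.random.rand(1000, 40)        # 1000 confident voxels, 40 features
labels = np.random.randint(0, 2, 1000)  # 1 = prostate, 0 = background
keep = np.argsort(fisher_score(feats, labels))[::-1][:10]  # top-10 features
```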

  15. Uranium extremophily is an adaptive, rather than intrinsic, feature for extremely thermoacidophilic Metallosphaera species

    PubMed Central

    Mukherjee, Arpan; Wheaton, Garrett H.; Blum, Paul H.; Kelly, Robert M.

    2012-01-01

    Thermoacidophilic archaea are found in heavy metal-rich environments, and, in some cases, these microorganisms are causative agents of metal mobilization through cellular processes related to their bioenergetics. Given the nature of their habitats, these microorganisms must deal with the potentially toxic effect of heavy metals. Here, we show that two thermoacidophilic Metallosphaera species with nearly identical (99.99%) genomes differed significantly in their sensitivity and reactivity to uranium (U). Metallosphaera prunae, isolated from a smoldering heap on a uranium mine in Thüringen, Germany, could be viewed as a “spontaneous mutant” of Metallosphaera sedula, an isolate from Pisciarelli Solfatara near Naples. Metallosphaera prunae tolerated triuranium octaoxide (U3O8) and soluble uranium [U(VI)] to a much greater extent than M. sedula. Within 15 min following exposure to “U(VI) shock,” M. sedula, and not M. prunae, exhibited transcriptomic features associated with severe stress response. Furthermore, within 15 min post-U(VI) shock, M. prunae, and not M. sedula, showed evidence of substantial degradation of cellular RNA, suggesting that transcriptional and translational processes were aborted as a dynamic mechanism for resisting U toxicity; by 60 min post-U(VI) shock, RNA integrity in M. prunae recovered, and known modes for heavy metal resistance were activated. In addition, M. sedula rapidly oxidized solid U3O8 to soluble U(VI) for bioenergetic purposes, a chemolithoautotrophic feature not previously reported. M. prunae, however, did not solubilize solid U3O8 to any significant extent, thereby not exacerbating U(VI) toxicity. These results point to uranium extremophily as an adaptive, rather than intrinsic, feature for Metallosphaera species, driven by environmental factors. PMID:23010932

  16. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products stored in the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs, and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource: a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  17. Fault detection and classification in chemical processes based on neural networks with feature extraction.

    PubMed

    Zhou, Yifeng; Hahn, Juergen; Mannan, M Sam

    2003-10-01

    Feed-forward neural networks are investigated here for fault diagnosis in chemical processes, especially batch processes. The use of the neural model's prediction error as the residual for diagnosing sensor and component faults is analyzed. To reduce the training time required for the neural process model, an input feature extraction step is implemented for the neural model. An additional radial basis function neural classifier is developed to isolate faults from the generated residuals, and results are presented to demonstrate satisfactory detection and isolation of faults using this approach.

  18. Image feature extraction with various wavelet functions in a photorefractive joint transform correlator

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Cartwright, C. M.; Ding, M. S.; Gillespie, W. A.

    2000-11-01

    The wavelet transform has found many uses in the field of optics. We present an experimental realization of employing various wavelet filters in the object space of a photorefractive joint transform correlator to realize image feature extraction. The Haar wavelet, the Roberts gradient, and the Mexican-hat wavelet are employed in the experiment. Because of its good optical properties, the photorefractive crystal Bi12SiO20 is used as the dynamic holographic medium in the Fourier plane. Both the scene and the reference were detour-phase encoded on a liquid crystal television in the input plane. Computer simulations, experimental results, and analysis are presented.

  19. Feature extraction from time domain acoustic signatures of weapons systems fire

    NASA Astrophysics Data System (ADS)

    Yang, Christine; Goldman, Geoffrey H.

    2014-06-01

    The U.S. Army is interested in developing algorithms to classify weapons systems fire based on their acoustic signatures. To support this effort, an algorithm was developed to extract features from acoustic signatures of weapons systems fire, and it was applied to over 1300 signatures. The algorithm filtered the data using standard techniques, then estimated the amplitude and time of the first five peaks and troughs and the location of the zero crossing in the waveform. The results were stored in Excel spreadsheets and are being used to develop and test acoustic classifier algorithms.
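
    The peak/trough extraction step can be sketched with SciPy's find_peaks; the sample rate, the sign-change zero-crossing rule, and the synthetic signal are assumptions.

```python
# Sketch: extract first-five peak/trough amplitudes and times plus the first
# zero crossing from a filtered acoustic waveform (filtering assumed done).
import numpy as np
from scipy.signal import find_peaks

fs = 10_000.0                       # hypothetical sample rate, Hz
sig = np.random.randn(4096)         # stand-in for a filtered signature

peaks, _ = find_peaks(sig)
troughs, _ = find_peaks(-sig)

features = {
    "peak_t": peaks[:5] / fs,     "peak_a": sig[peaks[:5]],
    "trough_t": troughs[:5] / fs, "trough_a": sig[troughs[:5]],
}
# zero crossing: first index where the sign of the signal flips
zc = np.nonzero(np.diff(np.signbit(sig)))[0]
features["zero_cross_t"] = zc[0] / fs if zc.size else None
```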

  20. LiDAR DTMs and anthropogenic feature extraction: testing the feasibility of geomorphometric parameters in floodplains

    NASA Astrophysics Data System (ADS)

    Sofia, G.; Tarolli, P.; Dalla Fontana, G.

    2012-04-01

    Geomorphic parameters derived from high-resolution topography have proven reliable for practical applications, and the use of statistical operators as thresholds for these parameters has shown high reliability for feature extraction in mountainous environments. The goal of this research is to test whether these morphological indicators and objective thresholds are also feasible in floodplains, where features assume different characteristics and other artificial disturbances might be present. In this work, three different geomorphic parameters are tested and applied at different scales on a LiDAR DTM of a typical alluvial plain area in northeastern Italy. The box-plot is applied to identify the threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the final results. The effectiveness of the different geomorphic parameters is analyzed by comparing automatically derived features with surveyed ones. The results highlight the capability of high-resolution topography, geomorphic indicators, and statistical thresholds for anthropogenic feature extraction and characterization in a floodplain context.
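
    The box-plot thresholding idea can be sketched as an interquartile-range rule applied to a geomorphic parameter map; the parameter choice, the 1.5×IQR whisker, and the synthetic grid are assumptions.

```python
# Sketch: box-plot-derived objective threshold on a geomorphic parameter
# (e.g., curvature) for flagging anthropogenic features; the grid values
# stand in for a parameter map computed from the LiDAR DTM.
import numpy as np

curvature = np.random.randn(500, 500)          # hypothetical parameter map

q1, q3 = np.percentile(curvature, [25, 75])
iqr = q3 - q1
upper = q3 + 1.5 * iqr                          # classic box-plot whisker

feature_mask = curvature > upper                # candidate anthropogenic cells
print(f"flagged {feature_mask.mean():.2%} of cells above {upper:.3f}")
```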

  1. Signal feature extraction by multi-scale PCA and its application to respiratory sound classification.

    PubMed

    Xie, Shengkun; Jin, Feng; Krishnan, Sridhar; Sattar, Farook

    2012-07-01

    Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system through the presence of adventitious sounds. Although many studies have addressed the problem of pathological RS classification, only a limited number have focused on multi-scale analysis. This paper proposes a new signal classification scheme for various types of RS based on multi-scale principal component analysis, used as a signal enhancement and feature extraction method to capture the major variability of the signal's Fourier power spectra. Since the RS signals are classified in a high-dimensional feature subspace, a new classification method, called empirical classification, is developed for further dimension reduction in the classification step; it has been shown to be more robust than, and to outperform, other simple classifiers. An overall accuracy of 98.34% on 689 real RS recording segments shows the promising performance of the presented method.

  2. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    The gearbox is one of the most vulnerable subsystems in a wind turbine; its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are prevalent nowadays. However, vibration signals are always contaminated by noise originating from data acquisition errors, structural geometric errors, operational errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance the fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective for classifying and identifying different gear faults.
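
    A minimal sketch of the LLE feature-extraction stage, assuming the synchronous-averaging step has already produced enhanced time-frequency maps; sizes and neighbor counts are illustrative.

```python
# Sketch: nonlinear feature extraction on enhanced time-frequency maps with
# Locally Linear Embedding; each flattened TF map is one sample.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

tf_maps = np.random.rand(120, 64 * 64)   # 120 segments, flattened 64x64 TFDs

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
features = lle.fit_transform(tf_maps)    # low-dimensional fault features
print(features.shape)                    # (120, 3), ready for a classifier
```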

  3. Feature extraction and band selection methods for hyperspectral imagery applied for identifying defects

    NASA Astrophysics Data System (ADS)

    Cheng, Xuemei; Yang, Tao; Chen, Yud-Ren; Chen, Xin

    2005-11-01

    An important task in hyperspectral data processing is to reduce the redundancy of the spectral and spatial information without losing valuable details needed for the subsequent detection, discrimination, and classification processes. Band selection and combination not only serves as the first step of hyperspectral data processing, leading to a significant decrease in computational complexity in the subsequent procedures, but also acts as a research tool for determining optimal spectral requirements for different online applications. In order to uniquely characterize the materials of interest, criteria for optimal band selection were defined. An integrated PCA and Fisher linear discriminant (FLD) method was developed based on these criteria and used for hyperspectral feature band selection and combination. This method was compared with other feature extraction and selection methods when applied to detecting apple defects, and the performance of each method was evaluated and compared based on the detection results.

  4. Adaptive iteration method for star centroid extraction under highly dynamic conditions

    NASA Astrophysics Data System (ADS)

    Gao, Yushan; Qin, Shiqiao; Wang, Xingshu

    2016-10-01

    Star centroiding accuracy decreases significantly when a star sensor works under highly dynamic conditions or when star images are corrupted by severe noise, reducing the output attitude precision. Herein, an adaptive iteration method is proposed to solve this problem. First, initial star centroids are predicted by the traditional method; then, based on the initially reported star centroids and the angular velocities of the star sensor, adaptive centroiding windows are generated to cover the star area, and an iterative method optimizing the location of the centroiding window is used to obtain the final star spot extraction results. Simulation results show that, compared with the traditional star image restoration method and the Iteratively Weighted Center of Gravity method, the proposed AWI algorithm maintains higher extraction accuracy as rotation velocities or noise levels increase.

  5. A fuzzy rule base system for object-based feature extraction and classification

    NASA Astrophysics Data System (ADS)

    Jin, Xiaoying; Paswaters, Scott

    2007-04-01

    In this paper, we present a fuzzy rule base system for object-based feature extraction and classification on remote sensing imagery. First, object primitives are generated from the segmentation step. Object primitives are defined as individual regions with a set of attributes computed on the regions; the computed attributes include spectral, texture, and shape measurements. Crisp rules are very intuitive to users. They are usually represented as "GT (greater than)", "LT (less than)", and "IB (in between)" with numerical values. Features can be generated manually by querying the attributes with these crisp rules and monitoring the resulting selected object primitives. However, the attributes of different features usually overlap; the information is inexact and not suitable for traditional binary on/off decisions. Here a fuzzy rule base system is built to better model the uncertainty inherent in the data and in vague human knowledge. Rather than representing attributes in linguistic terms like "Small", "Medium", and "Large", we propose a new method for automatic fuzzification of the traditional crisp concepts "GT", "LT", and "IB". Two sets of membership functions are defined to model these concepts: one based on piecewise linear functions, the other on S-type membership functions. A novel concept, "fuzzy tolerance", is proposed to control the degree of fuzziness of each rule. Experimental results on classifying and extracting features such as water, buildings, trees, fields, and urban areas have shown that this newly designed fuzzy rule base system is intuitive and allows users to easily generate fuzzy rules.
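
    A sketch of what such fuzzification might look like, using S-type membership functions with a tolerance parameter standing in for the paper's "fuzzy tolerance"; the functional form and the numeric values are assumptions.

```python
# Sketch: fuzzified versions of the crisp "GT"/"LT"/"IB" rules using S-type
# membership functions; `tol` controls the degree of fuzziness around each
# threshold, in the spirit of the paper's "fuzzy tolerance".
import numpy as np

def s_curve(x, a, b):
    """Smooth 0 -> 1 transition between a and b (standard S-function)."""
    y = np.clip((x - a) / (b - a), 0.0, 1.0)
    return np.where(y < 0.5, 2 * y**2, 1 - 2 * (1 - y) ** 2)

def gt(x, t, tol):            # fuzzy "greater than t"
    return s_curve(x, t - tol, t + tol)

def lt(x, t, tol):            # fuzzy "less than t"
    return 1.0 - gt(x, t, tol)

def ib(x, lo, hi, tol):       # fuzzy "in between lo and hi"
    return np.minimum(gt(x, lo, tol), lt(x, hi, tol))

area = np.array([80.0, 120.0, 200.0])    # hypothetical region attribute
print(ib(area, 100.0, 180.0, tol=20.0))  # membership of each region
```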

  6. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    Since palmprints are captured using non-contact devices, image blur is inevitably generated because of defocus, which degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. Theoretical analysis of the blur shows that stable features exist in the image across different degrees of blurring. Then, a VO decomposition model is used to obtain the structure and texture layers of the blurred palmprint images; the structure layer is stable for different degrees of blurring (a theoretical conclusion verified later by experiment). Next, an algorithm based on a weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity between palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales, and the WRHOG method proves to be a robust way of distinguishing blurred palmprints. The recognition results obtained using the proposed method on two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition.

  7. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in erroneous position estimation. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose attempts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy, and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high-resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's imprecise but approximately known orientation parameters, accurate feature matching techniques can be applied as a pre-processing step to minimise the MM platform's three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and dataset demands different approaches, two independent workflows are being developed in parallel. Both workflows, still under development, will be presented together with preliminary results. The workflows comprise three steps: feature extraction, feature matching, and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of

  8. Learning object location predictors with boosting and grammar-guided feature extraction

    SciTech Connect

    Eads, Damian Ryan; Rosten, Edward; Helmbold, David

    2009-01-01

    The authors present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions behind these results. First, the authors introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset; this is specified with a rule-based generative grammar crafted by a human expert. Second, they learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, they perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high-quality set of (x, y) locations. Lastly, they carefully define three common problems in object detection and two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. The authors demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.

  9. Surface roughness extraction based on Markov random field model in wavelet feature domain

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Lei, Li-qiao

    2014-12-01

    Based on computer texture analysis, a new non-contact surface roughness measurement technique is proposed. The method is inspired by the non-redundant directional selectivity and highly discriminative nature of the wavelet representation and by the capability of the Markov random field (MRF) model to capture statistical regularities. Surface roughness information contained in the texture features is extracted from an MRF stochastic model of textures in the wavelet feature domain; the model captures significant intrascale and interscale statistical dependencies between wavelet coefficients. To investigate the relationship between the texture features and the surface roughness Ra, a simple research setup, consisting of a charge-coupled device (CCD) camera without a lens and a diode laser, was established, and laser speckle texture patterns were acquired from standard grinding surfaces. The results show that the surface roughness Ra has a good monotonic relationship with the texture features of the laser speckle pattern. If the measuring system is calibrated beforehand with surface roughness standard samples, the actual surface roughness Ra can be deduced for surfaces of the same material ground under the same manufacturing conditions.

  10. Lamb wave feature extraction using discrete wavelet transformation and Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Ghodsi, Mojtaba; Ziaiefar, Hamidreza; Amiryan, Milad; Honarvar, Farhang; Hojjat, Yousef; Mahmoudi, Mehdi; Al-Yahmadi, Amur; Bahadur, Issam

    2016-04-01

    In this research, a new method is presented for extracting suitable features for recognizing and classifying defect types with guided ultrasonic waves. After suitable preprocessing, the method extracts the base frequency band from the received signals by discrete wavelet transform and discrete Fourier transform. This frequency band can be used as a distinctive feature of the ultrasonic signals for different defects. Principal Component Analysis refines this feature and reduces redundant data, improving classification. In this study, ultrasonic testing with the A0 Lamb-wave mode is used to reduce the complexity of the problem. The defects under analysis include corrosion, cracks, and local thickness reduction, the last caused by electro-discharge machining (EDM). Classification with an optimized neural network shows that the presented method can differentiate the defects with 95% precision, demonstrating that it is a robust and efficient method. Moreover, comparing the extracted features for corrosion and local thickness reduction, and their classification results, shows that the previously common practice of modeling corrosion by local thickness reduction is not appropriate: the signals received from the two defects differ from each other.
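
    A rough sketch of the DWT-plus-PCA chain described above; the wavelet family, decomposition level, spectrum choice, and component count are assumptions (the pywt and scikit-learn libraries stand in for whatever tooling the authors used).

```python
# Sketch: isolate a base frequency band from received Lamb-wave signals with
# a discrete wavelet transform, then compress the band's spectrum with PCA.
import numpy as np
import pywt
from sklearn.decomposition import PCA

signals = np.random.randn(60, 2048)       # 60 received A0-mode waveforms

def band_feature(sig):
    coeffs = pywt.wavedec(sig, "db4", level=4)
    approx = coeffs[0]                    # low-frequency base band
    return np.abs(np.fft.rfft(approx))    # its Fourier magnitude spectrum

X = np.array([band_feature(s) for s in signals])
features = PCA(n_components=8).fit_transform(X)   # inputs for the classifier
```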

  11. Extraction of medically interpretable features for classification of malignancy in breast thermography.

    PubMed

    Madhu, Himanshu; Kakileti, Siva Teja; Venkataramani, Krithika; Jabbireddy, Susmija

    2016-08-01

    Thermography, with high-resolution cameras, is being re-investigated as a possible breast cancer screening imaging modality, as it does not have the harmful radiation effects of mammography. This paper focuses on automatic extraction of medically interpretable non-vascular thermal features. We design these features to differentiate malignancy from various non-malignant conditions, including hormone-sensitive tissues and certain benign conditions that have an increased thermal response. These features increase the specificity of breast cancer screening, a long-known problem in thermographic screening, while retaining high sensitivity. The features are also agnostic to different cameras and resolutions (to an extent). On a dataset of around 78 subjects with cancer and 187 subjects without cancer, some of whom have benign diseases and conditions with thermal responses, we obtain around 99% specificity at 100% sensitivity. This indicates a potential breakthrough in thermographic screening for breast cancer, and shows promise for undertaking a comparison to mammography with larger numbers of subjects and more data variation.

  12. Investigation of Methods for Extracting Features Related to Motor Imagery and Resting States in EEG-Based BCI System

    NASA Astrophysics Data System (ADS)

    Susila, I. Putu; Kanoh, Shin'ichiro; Miyamoto, Ko-Ichiro; Yoshinobu, Tatsuo

    Methods for extracting features of motor imagery from single-channel bipolar EEG were evaluated. The EEG power spectra used as feature vectors were calculated with a filter bank, FFT, and an AR model, and were then classified by linear discriminant analysis (LDA) to discriminate motor imagery and resting states. The extraction method using the AR model gave the best result, with an average true positive rate of 83% (σ = 7%). Furthermore, when principal component analysis (PCA) was applied to the feature vectors, their dimension could be reduced without decreasing the accuracy of discrimination.
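
    A minimal sketch of the pipeline, with simple FFT band powers standing in for the AR-model spectra that performed best in the study; the sampling rate, epoch length, and band edges are assumptions.

```python
# Sketch: FFT band-power features from single-channel bipolar EEG classified
# with LDA to separate motor imagery from rest.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 256                                   # hypothetical sampling rate, Hz
epochs = np.random.randn(100, fs * 2)      # 100 two-second epochs
labels = np.random.randint(0, 2, 100)      # motor imagery vs. rest

def band_powers(epoch, bands=((8, 13), (13, 30))):   # mu and beta bands
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

X = np.log(np.array([band_powers(e) for e in epochs]))
lda = LinearDiscriminantAnalysis().fit(X, labels)
print("training accuracy:", lda.score(X, labels))
```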

  13. Multiple feature extraction and classification of electroencephalograph signal for Alzheimer's with spectrum and bispectrum

    NASA Astrophysics Data System (ADS)

    Wang, Ruofan; Wang, Jiang; Li, Shunan; Yu, Haitao; Deng, Bin; Wei, Xile

    2015-01-01

    In this paper, we have combined experimental neurophysiologic recording and statistical analysis to investigate the nonlinear characteristic and the cognitive function of the brain. Spectrum and bispectrum analyses are proposed to extract multiple effective features of electroencephalograph (EEG) signals from Alzheimer's disease (AD) patients and further applied to distinguish AD patients from the normal controls. Spectral analysis based on autoregressive Burg method is first used to quantify the power distribution of EEG series in the frequency domain. Compared to the control group, the relative power spectral density of AD group is significantly higher in the theta frequency band, while lower in the alpha frequency bands. In addition, median frequency of spectrum is decreased, and spectral entropy ratio of these two frequency bands undergoes drastic changes at the P3 electrode in the central-parietal brain region, implying that the electrophysiological behavior in AD brain is much slower and less irregular. In order to explore the nonlinear high order information, bispectral analysis which measures the complexity of phase-coupling is further applied to P3 electrode in the whole frequency band. It is demonstrated that less bispectral peaks appear and the amplitudes of peaks fall, suggesting a decrease of non-Gaussianity and nonlinearity of EEG in ADs. Notably, the application of this method to five brain regions shows higher concentration of the weighted center of bispectrum and lower complexity reflecting phase-coupling by bispectral entropy. Based on spectrum and bispectrum analyses, six efficient features are extracted and then applied to discriminate AD from the normal in the five brain regions. The classification results indicate that all these features could differentiate AD patients from the normal controls with a maximum accuracy of 90.2%. Particularly, different brain regions are sensitive to different features. Moreover, the optimal combination of

  14. A feature extraction method based on information theory for fault diagnosis of reciprocating machinery.

    PubMed

    Wang, Huaqing; Chen, Peng

    2009-01-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to.

  15. A Feature Extraction Method for Vibration Signal of Bearing Incipient Degradation

    NASA Astrophysics Data System (ADS)

    Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli; Guo, Liang; Li, Dan; Wen, Juan

    2016-06-01

    Detection of incipient degradation demands extracting sensitive features accurately when the signal-to-noise ratio (SNR) is very poor, as it is in most industrial environments. Vibration signals of rolling bearings are widely used for bearing fault diagnosis. In this paper, we propose a feature extraction method that combines Blind Source Separation (BSS) and Spectral Kurtosis (SK) to separate independent noise sources. Normal and incipient-fault signals from vibration tests of rolling bearings are processed. We studied 16 groups of vibration signals of incipient degradation (all of which display an increase in kurtosis) after they were processed by a BSS filter. Compared with conventional kurtosis, theoretical studies of SK trends show that SK levels vary with frequency, and experimental studies show that the SK trends of measured bearing vibration signals vary with the amount and level of fault-induced impulses in both the vibration and noise signals. We found that the peak values of SK increase when vibration signals of incipient faults are processed by a BSS filter; this pre-processing makes SK more sensitive to impulses caused by the performance degradation of bearings.

  16. Memory-efficient architecture for hysteresis thresholding and object feature extraction.

    PubMed

    Najjar, Mayssaa A; Karlapudi, Swetha; Bayoumi, Magdy A

    2011-12-01

    Hysteresis thresholding is a method that offers enhanced object detection. Due to its recursive nature, it is time consuming and requires substantial memory resources, which makes it unsuitable for streaming processors with limited memory. We propose two versions of a memory-efficient and fast architecture for hysteresis thresholding: a high-accuracy pixel-based architecture, and a faster block-based one at the expense of some loss in accuracy. Both designs couple thresholding with connected component analysis and feature extraction in a single pass over the image. Unlike queue-based techniques, the proposed scheme treats candidate pixels almost as foreground until objects are complete; a decision is then made to keep or discard these pixels. This allows processing on the fly, avoiding additional passes for handling candidate pixels and extracting object features. Moreover, labels are reused, so only one row of compact labels is buffered. Both architectures were implemented in MATLAB and VHDL. Simulation results on a set of real and synthetic images show that execution speed increases on average by up to 24× for the pixel-based design and 52× for the block-based design compared with state-of-the-art techniques, while memory requirements are drastically reduced, by about 99%.

  17. Urban Area Extent Extraction in Spaceborne HR and VHR Data Using Multi-Resolution Features

    PubMed Central

    Iannelli, Gianni Cristian; Lisini, Gianni; Dell'Acqua, Fabio; Feitosa, Raul Queiroz; da Costa, Gilson Alexandre Ostwald Pedro; Gamba, Paolo

    2014-01-01

    Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an “urban area” is. The models of urban areas listed in technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, “urban area” extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered as an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO) data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR) data prove the usefulness and flexibility of the framework. PMID:25271564

  18. Time-frequency manifold sparse reconstruction: A novel method for bearing fault feature extraction

    NASA Astrophysics Data System (ADS)

    Ding, Xiaoxi; He, Qingbo

    2016-12-01

    In this paper, a novel transient signal reconstruction method, called time-frequency manifold (TFM) sparse reconstruction, is proposed for bearing fault feature extraction. The method introduces image sparse reconstruction into the TFM analysis framework. Owing to the excellent denoising performance of the TFM, a more effective time-frequency (TF) dictionary can be learned from the TFM signature by image sparse decomposition based on orthogonal matching pursuit (OMP). The TF distribution (TFD) of the raw signal in a reconstructed phase space is then re-expressed as the sum of the learned TF atoms multiplied by the corresponding coefficients. Finally, the one-dimensional signal is recovered by the inverse process of TF analysis (TFA), with the amplitude information of the raw signal well reconstructed. The proposed technique combines the merits of the TFM in denoising and of atomic decomposition in image sparse reconstruction, and this combination makes it possible to express the nonlinear signal processing results explicitly in theory. The effectiveness of the proposed TFM sparse reconstruction method is verified by experimental analysis of bearing fault feature extraction.
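
    The OMP-based sparse decomposition step can be sketched as follows; the random dictionary stands in for the TF dictionary learned from the TFM signature, and all sizes are illustrative.

```python
# Sketch: sparse decomposition via orthogonal matching pursuit. A random
# unit-norm dictionary mimics the learned TF dictionary.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 512))          # 512 TF atoms of dimension 256
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms

x = D[:, [3, 40, 99]] @ np.array([1.5, -0.8, 0.6])   # 3-sparse test signal

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3,
                                fit_intercept=False).fit(D, x)
x_hat = D @ omp.coef_                         # sparse reconstruction
print("reconstruction error:", np.linalg.norm(x - x_hat))
```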

  19. Bag of Events: An Efficient Probability-Based Feature Extraction Method for AER Image Sensors.

    PubMed

    Peng, Xi; Zhao, Bo; Yan, Rui; Tang, Huajin; Yi, Zhang

    2016-03-18

    Address event representation (AER) image sensors represent the visual information as a sequence of events that denotes the luminance changes of the scene. In this paper, we introduce a feature extraction method for AER image sensors based on the probability theory, namely, bag of events (BOE). The proposed approach represents each object as the joint probability distribution of the concurrent events, and each event corresponds to a unique activated pixel of the AER sensor. The advantages of BOE include: 1) it is a statistical learning method and has a good interpretability in mathematics; 2) BOE can significantly reduce the effort to tune parameters for different data sets, because it only has one hyperparameter and is robust to the value of the parameter; 3) BOE is an online learning algorithm, which does not require the training data to be collected in advance; 4) BOE can achieve competitive results in real time for feature extraction (>275 frames/s and >120,000 events/s); and 5) the implementation complexity of BOE only involves some basic operations, e.g., addition and multiplication. This guarantees the hardware friendliness of our method. The experimental results on three popular AER databases (i.e., MNIST-dynamic vision sensor, Poker Card, and Posture) show that our method is remarkably faster than two recently proposed AER categorization systems while preserving a good classification accuracy.
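
    A toy sketch of the bag-of-events idea, reducing an event stream to a joint probability distribution over activated pixel addresses; the sensor size and the synthetic event stream are assumptions.

```python
# Sketch: a bag-of-events style descriptor, normalizing per-pixel event
# counts into a joint probability distribution over pixel addresses.
import numpy as np

W = H = 32                                          # hypothetical sensor size
events = np.random.randint(0, 32, size=(5000, 2))   # stand-in (x, y) stream

hist = np.zeros((W, H))
for x, y in events:
    hist[x, y] += 1                 # count events at each activated pixel
boe = hist / hist.sum()             # joint probability distribution feature
```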

  1. Singular value decomposition based feature extraction technique for physiological signal analysis.

    PubMed

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is a popular technique for quantifying the complexity of physiological signals, and many studies use it to detect changes in physiological conditions in the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract features of physiological signals, and a support vector machine (SVM) is used to classify the different physiological states. On a test data set from the PhysioNet archive, using SVD to extract features attained a classification accuracy of 89.157%, higher than that obtained using MSE values (71.084%). The results show that the proposed analysis procedure is effective and appropriate for distinguishing different physiological states; this promising result could serve as a reference for doctors diagnosing congestive heart failure (CHF).
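
    One plausible reading of the SVD feature extraction, sketched under assumptions: the singular values of a delay-embedding (trajectory) matrix serve as the feature vector for an SVM; the embedding depth and feature count are illustrative.

```python
# Sketch: top singular values of a Hankel-style embedding of each signal
# as features for an SVM, replacing the MSE features.
import numpy as np
from sklearn.svm import SVC

def svd_features(sig, rows=20, k=8):
    """Top-k singular values of a trajectory matrix built from the signal."""
    cols = len(sig) - rows + 1
    H = np.lib.stride_tricks.sliding_window_view(sig, rows)[:cols].T
    return np.linalg.svd(H, compute_uv=False)[:k]

signals = np.random.randn(40, 1000)      # stand-in physiological records
y = np.random.randint(0, 2, 40)          # e.g., CHF vs. healthy

X = np.array([svd_features(s) for s in signals])
clf = SVC().fit(X, y)
```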

  2. Fault feature extraction of rolling bearing based on an improved cyclical spectrum density method

    NASA Astrophysics Data System (ADS)

    Li, Min; Yang, Jianhong; Wang, Xiaojing

    2015-11-01

    The traditional cyclical spectrum density (CSD) method is widely used to analyze fault signals of rolling bearings. All modulation frequencies are demodulated in the cyclic frequency spectrum; consequently, recognizing the bearing fault type is difficult. Therefore, a new CSD method based on kurtosis (CSDK) is proposed. The kurtosis value of each cyclic frequency is used to measure the modulation capability of that cyclic frequency: when the kurtosis value is large, the modulation capability is strong. The kurtosis value is thus used as a weight coefficient to accumulate all cyclic frequencies and extract fault features. Compared with the traditional method, CSDK can reduce the interference of harmonic frequencies in the fault frequency, which makes fault characteristics distinct from background noise. To validate the effectiveness of the method, experiments were performed on a simulated signal, on a fault signal from a bearing outer race in a test bed, and on a signal gathered from a bearing of a blast-furnace belt cylinder. Experimental results show that CSDK outperforms the resonance demodulation method and the traditional CSD in extracting fault features and recognizing degradation trends. The proposed method provides a new solution to fault diagnosis in bearings.

  3. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction and thereby achieve high-precision autonomous navigation. First, combining the stable character of Earth's ultraviolet radiance with atmospheric radiative transfer modeling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses a proper observation band. Then a fast improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which both eliminates noise efficiently and extracts Earth ultraviolet limb features accurately. The Earth's centroid location in the simulated images is then estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate that the proposed method achieves sub-pixel Earth centroid location estimation and substantially enhances autonomous celestial navigation precision.

  4. Kernel regression based feature extraction for 3D MR image denoising.

    PubMed

    López-Rubio, Ezequiel; Florentín-Núñez, María Nieves

    2011-08-01

    Kernel regression is a non-parametric estimation technique which has been successfully applied to image denoising and enhancement in recent times. Magnetic resonance 3D image denoising has two features that distinguish it from other typical image denoising applications, namely the tridimensional structure of the images and the nature of the noise, which is Rician rather than Gaussian or impulsive. Here we propose a principled way to adapt the general kernel regression framework to this particular problem. Our noise removal system is rooted on a zeroth order 3D kernel regression, which computes a weighted average of the pixels over a regression window. We propose to obtain the weights from the similarities among small sized feature vectors associated to each pixel. In turn, these features come from a second order 3D kernel regression estimation of the original image values and gradient vectors. By considering directional information in the weight computation, this approach substantially enhances the performance of the filter. Moreover, Rician noise level is automatically estimated without any need of human intervention, i.e. our method is fully automated. Experimental results over synthetic and real images demonstrate that our proposal achieves good performance with respect to the other MRI denoising filters being compared.
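
    A bare-bones sketch of zeroth-order kernel regression denoising: each voxel becomes a weighted average over a small window, with weights decaying in feature distance. Raw intensities stand in for the paper's second-order regression features, the Rician noise model is ignored, and the window size and bandwidth are assumptions.

```python
# Sketch: zeroth-order kernel regression over a 3-D volume; weights come
# from a Gaussian kernel on intensity differences between shifted copies.
import numpy as np

def kr_denoise(vol, search=1, h=0.3):
    """Weighted average of voxels in a (2*search+1)^3 window."""
    pad = np.pad(vol, search, mode="reflect")
    out = np.zeros_like(vol, dtype=float)
    norm = np.zeros_like(vol, dtype=float)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                sh = pad[search + dz:search + dz + vol.shape[0],
                         search + dy:search + dy + vol.shape[1],
                         search + dx:search + dx + vol.shape[2]]
                w = np.exp(-((sh - vol) ** 2) / (2 * h * h))
                out += w * sh
                norm += w
    return out / norm

noisy = np.random.rand(32, 32, 32)   # stand-in for a noisy MR volume
clean = kr_denoise(noisy)
```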

  5. Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data

    SciTech Connect

    Danny L. Anderson

    2012-05-01

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM; direct use of LiDAR point clouds could improve the efficiency and accuracy of hydrographic feature extraction. The direct delineation method developed herein, termed “mDn”, is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, uses the LiDAR data points within each sector to determine an average slope, and selects the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation using TauDEM and with other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matches the reference, the mDn delineation yielded a sinuosity higher than either the TauDEM result or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
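
    The mDn flow-direction idea can be sketched as follows, with the sector count, search radius, and synthetic points as assumptions.

```python
# Sketch of the mDn idea: from a start point, split surrounding LiDAR points
# into angular sectors, average the downward slope per sector, and step in
# the steepest-descent sector.
import numpy as np

def mdn_direction(pts, start, n_sectors=8, radius=5.0):
    """pts: (N,3) LiDAR points; start: (3,) current position.
    Returns the azimuth (radians) of the steepest average downward slope."""
    d = pts[:, :2] - start[:2]
    dist = np.hypot(d[:, 0], d[:, 1])
    near = (dist > 0) & (dist < radius)
    ang = np.arctan2(d[near, 1], d[near, 0])
    slope = (pts[near, 2] - start[2]) / dist[near]   # negative = downhill
    sector = ((ang + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    means = [slope[sector == s].mean() if np.any(sector == s) else np.inf
             for s in range(n_sectors)]
    best = int(np.argmin(means))                     # steepest-descent sector
    return (best + 0.5) * 2 * np.pi / n_sectors - np.pi

pts = np.random.rand(1000, 3) * [100, 100, 10]      # stand-in point cloud
print(mdn_direction(pts, pts[0]))
```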

  6. Adaboost face detector based on Joint Integral Histogram and Genetic Algorithms for feature extraction process.

    PubMed

    Jammoussi, Ameni Yangui; Ghribi, Sameh Fakhfakh; Masmoudi, Dorra Sellami

    2014-01-01

    Recently, many classes of objects have become efficiently detectable by way of machine learning techniques. In practice, boosting techniques are among the most widely used machine learning methods, mainly due to the low false positive rate of the cascade structure, which can be trained for different classes of objects. They are especially used for face detection, the most popular sub-problem within object detection. The challenges of an Adaboost-based face detector include selecting the most relevant features, which are treated as weak classifiers, from a large feature set. In many scenarios, however, selecting features by lowering classification error leads to computational complexity and excessive memory use. In this work, we propose a new method to train an effective detector by discarding redundant weak classifiers while achieving the predetermined learning objective. To achieve this, on the one hand, we modify AdaBoost training so that the feature selection process is no longer based on the weak learner's training error, by incorporating a Genetic Algorithm (GA) into the training process. On the other hand, we make use of the Joint Integral Histogram in order to extract more powerful features. Experiments on human faces show that our proposed method requires a smaller number of weak classifiers than the conventional learning algorithm, resulting in higher learning and faster classification rates. Our method thus significantly outperforms state-of-the-art cascade methods in terms of detection rate and false positive rate, and especially in reducing the number of weak classifiers per stage.

  7. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    PubMed

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  8. Affective Video Retrieval: Violence Detection in Hollywood Movies by Large-Scale Segmental Feature Extraction

    PubMed Central

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology “out of the lab” to real-world, diverse data. In this contribution, we address the problem of finding “disturbing” scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis. PMID:24391704

  9. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    PubMed Central

    2011-01-01

    Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods The FHR signals were recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset of 90 randomly selected records of 20 minutes duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 as the testing set. The standard deviations of the EMD components were input as features to the SVM to classify the FHR samples. Results For the training set, a five-fold cross validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was .923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was .684. Conclusions Based on the overall performance of the system, the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712
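
    A rough sketch of the feature pipeline described above, assuming the PyEMD package (published as EMD-signal on PyPI) for the decomposition and synthetic traces in place of real FHR recordings; labels and signal parameters are toy assumptions.

```python
import numpy as np
from PyEMD import EMD                      # assumed: pip install EMD-signal
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def emd_std_features(signal, n_imfs=5):
    """Decompose a signal with EMD and use the standard deviations
    of the first few IMFs as the feature vector (as in the paper)."""
    imfs = EMD()(signal)
    feats = imfs[:n_imfs].std(axis=1)
    # Pad if fewer IMFs were found than requested.
    return np.pad(feats, (0, n_imfs - len(feats)))

rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.25)                 # 4 Hz sampling, 1-minute toy traces
X, y = [], []
for label in (0, 1):                       # 0: 'normal', 1: 'at risk' (toy)
    for _ in range(30):
        f = 0.05 if label == 0 else 0.15   # toy variability difference
        s = 140 + 10 * np.sin(2 * np.pi * f * t) + rng.normal(0, 2, t.size)
        X.append(emd_std_features(s))
        y.append(label)

clf = SVC(kernel="rbf")
print("5-fold CV accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())
```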

  10. A Distinguishing Arterial Pulse Waves Approach by Using Image Processing and Feature Extraction Technique.

    PubMed

    Chen, Hsing-Chung; Kuo, Shyi-Shiun; Sun, Shen-Ching; Chang, Chia-Hui

    2016-10-01

    Traditional Chinese Medicine (TCM) is based on five main diagnostic methods: inspection, auscultation, olfaction, inquiry, and palpation. The most important is palpation, also called pulse diagnosis, in which the doctor measures the wrist artery pulse with the fingers to assess the patient's state of health. In this paper, pulse classification is carried out using a specialized pulse-measuring instrument. The measured pulse waves (MPWs) were segmented into arterial pulse wave curves (APWCs) by an image processing method. The slopes and periods among four specific points on the APWC were taken as the pulse features. Three algorithms are proposed that extract these features from the APWCs and compare, for each algorithm individually, the differences between the extracted features and the average feature matrix. The results show that the method proposed in this study is superior to and more accurate than previous studies. The proposed method could save doctors a large amount of time, increase accuracy, and decrease data volume.

  11. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 1: time domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    The paper presents an application of the gamma-absorption method to the study of gas-liquid two-phase flow in a horizontal pipeline. In tests on a laboratory installation, two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals were used. The experimental set-up allows recording of stochastic signals which describe the instantaneous content of the stream in a particular cross-section of the flow mixture. Analysis of these signals by statistical methods allows the mean velocity of the gas phase to be determined, while selected features of the signals provided by the absorption set can be applied to recognition of the flow structure. In this work three structures of air-water flow were considered: plug, bubble, and transitional plug-bubble. The recorded raw signals were analyzed in the time domain and several features were extracted. It was found that the mean, standard deviation, root mean square (RMS), variance, and 4th moment are the most useful signal features for recognizing the structure of the flow.
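
    The time-domain features named above are straightforward to compute; a minimal numpy sketch follows (the 4th moment is taken here as the raw fourth central moment, which is one plausible reading, and the test signal is a toy stand-in for a plug-flow recording).

```python
import numpy as np

def time_domain_features(x):
    """Mean, standard deviation, RMS, variance and 4th central moment
    of a recorded gamma-absorption signal."""
    x = np.asarray(x, dtype=float)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "rms": np.sqrt(np.mean(x ** 2)),
        "variance": x.var(),
        "moment4": np.mean((x - x.mean()) ** 4),
    }

# Toy stand-in for a plug-flow signal: slow square-ish wave plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
signal = (np.sin(2 * np.pi * 0.5 * t) > 0).astype(float) + rng.normal(0, 0.05, t.size)
print(time_domain_features(signal))
```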

  12. Computation identifies structural features that govern neuronal firing properties in slowly adapting touch receptors.

    PubMed

    Lesniak, Daine R; Marshall, Kara L; Wellnitz, Scott A; Jenkins, Blair A; Baba, Yoshichika; Rasband, Matthew N; Gerling, Gregory J; Lumpkin, Ellen A

    2014-01-01

    Touch is encoded by cutaneous sensory neurons with diverse morphologies and physiological outputs. How neuronal architecture influences response properties is unknown. To elucidate the origin of firing patterns in branched mechanoreceptors, we combined neuroanatomy, electrophysiology and computation to analyze mouse slowly adapting type I (SAI) afferents. These vertebrate touch receptors, which innervate Merkel cells, encode shape and texture. SAI afferents displayed a high degree of variability in touch-evoked firing and peripheral anatomy. The functional consequence of differences in anatomical architecture was tested by constructing network models representing sequential steps of mechanosensory encoding: skin displacement at touch receptors, mechanotransduction and action-potential initiation. A systematic survey of arbor configurations predicted that the arrangement of mechanotransduction sites at heminodes is a key structural feature that accounts in part for an afferent's firing properties. These findings identify an anatomical correlate and plausible mechanism to explain the driver effect first described by Adrian and Zotterman. DOI: http://dx.doi.org/10.7554/eLife.01488.001.

  13. Features of CPB: a Poisson-Boltzmann solver that uses an adaptive Cartesian grid.

    PubMed

    Fenley, Marcia O; Harris, Robert C; Mackoy, Travis; Boschitsch, Alexander H

    2015-02-05

    The capabilities of an adaptive Cartesian grid (ACG)-based Poisson-Boltzmann (PB) solver (CPB) are demonstrated. CPB solves various PB equations with an ACG, built from a hierarchical octree decomposition of the computational domain. This procedure decreases the number of points required, thereby reducing computational demands. Inside the molecule, CPB solves for the reaction-field component (ϕrf ) of the electrostatic potential (ϕ), eliminating the charge-induced singularities in ϕ. CPB can also use a least-squares reconstruction method to improve estimates of ϕ at the molecular surface. All surfaces, which include solvent excluded, Gaussians, and others, are created analytically, eliminating errors associated with triangulated surfaces. These features allow CPB to produce detailed surface maps of ϕ and compute polar solvation and binding free energies for large biomolecular assemblies, such as ribosomes and viruses, with reduced computational demands compared to other Poisson-Boltzmann equation solvers. The reader is referred to http://www.continuum-dynamics.com/solution-mm.html for how to obtain the CPB software.

  14. Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2009-01-01

    Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. This technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics used to characterize complex cells/objects derived from color image analysis. Some of the features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); centerX (x-coordinate of the intensity-weighted center-of-mass of an entire object or multi-object blob); centerY (y-coordinate of the intensity-weighted center-of-mass of an entire object or multi-object blob); Circumference (a measure of circumference that takes into account whether neighboring pixels are diagonal, which is a longer distance than horizontally or vertically joined pixels); Elongation (a measure of particle elongation given as a number between 0 and 1; if equal to 1, the particle bounding box is square, and as the elongation decreases from 1, the particle becomes more elongated); Ext_vector (extremal vector); Major Axis (the length of the major axis of the smallest ellipse encompassing an object); Minor Axis (the length of the minor axis of the smallest ellipse encompassing an object); Partial (indicates if the particle extends beyond the field of view); Perimeter Points (points that make up a particle perimeter); Roundness ((4(pi) x area)/perimeter^2, a measure of object roundness, or compactness, given as a value between 0 and 1; the greater the ratio, the rounder the object); Thin in center (determines if an object becomes thin in the center, i.e. figure-eight-shaped); Theta (orientation of the major axis); Smoothness; and color metrics for each component (red, green, blue
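
    As an illustration, two of the listed metrics are easy to reproduce from a binary object mask; a small sketch follows (the perimeter estimate here is a simple boundary-pixel count, cruder than the diagonal-aware circumference described above, and the elongation is a plain bounding-box ratio).

```python
import numpy as np

def roundness_and_elongation(mask):
    """Roundness = 4*pi*area / perimeter**2 and a bounding-box
    elongation (short side / long side) for a binary object mask."""
    area = mask.sum()
    # Boundary pixels: object pixels with at least one background 4-neighbour.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    elongation = min(h, w) / max(h, w)     # 1.0 means a square bounding box
    return 4 * np.pi * area / perimeter ** 2, elongation

# A filled disc should have roundness near 1 and no elongation.
yy, xx = np.mgrid[:64, :64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 20 ** 2
print(roundness_and_elongation(disc))
```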

  15. Entropy-based adaptive nuclear texture features are independent prognostic markers in a total population of uterine sarcomas.

    PubMed

    Nielsen, Birgitte; Hveem, Tarjei Sveinsgjerd; Kildal, Wanja; Abeler, Vera M; Kristensen, Gunnar B; Albregtsen, Fritz; Danielsen, Håvard E

    2015-04-01

    Nuclear texture analysis measures the spatial arrangement of the pixel gray levels in a digitized microscopic nuclear image and is a promising quantitative tool for prognosis of cancer. The aim of this study was to evaluate the prognostic value of entropy-based adaptive nuclear texture features in a total population of 354 uterine sarcomas. Isolated nuclei (monolayers) were prepared from 50 µm tissue sections and stained with Feulgen-Schiff. Local gray level entropy was measured within small windows of each nuclear image and stored in gray level entropy matrices, and two superior adaptive texture features were calculated from each matrix. The 5-year crude survival was significantly higher (P < 0.001) for patients with high texture feature values (72%) than for patients with low feature values (36%). When combining DNA ploidy classification (diploid/nondiploid) and texture (high/low feature value), the patients could be stratified into three risk groups with 5-year crude survival of 77, 57, and 34% (Hazard Ratios (HR) of 1, 2.3, and 4.1, P < 0.001). Entropy-based adaptive nuclear texture was an independent prognostic marker for crude survival in multivariate analysis including relevant clinicopathological features (HR = 2.1, P = 0.001), and should therefore be considered as a potential prognostic marker in uterine sarcomas.
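
    A simplified numpy sketch of the local gray level entropy measurement described above; the window size, gray-level binning, and the single summary feature are illustrative assumptions, and the paper's superior adaptive features computed from the entropy matrices are not reproduced here.

```python
import numpy as np

def local_entropy_map(img, win=9, levels=16):
    """Shannon entropy of the gray-level histogram inside each win x win
    window, computed at every interior pixel (a gray level entropy matrix)."""
    q = (img.astype(float) / 256 * levels).astype(int)   # quantize 8-bit input
    h = win // 2
    out = np.zeros((img.shape[0] - 2 * h, img.shape[1] - 2 * h))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = q[i:i + win, j:j + win]
            p = np.bincount(window.ravel(), minlength=levels) / window.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

rng = np.random.default_rng(0)
nucleus = rng.integers(0, 256, size=(64, 64))   # toy stand-in nuclear image
ent = local_entropy_map(nucleus, win=9)
print("mean local entropy:", ent.mean())        # one possible summary feature
```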

  16. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing the performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of the breast area, five different sub-regions were acquired from each mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification, was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from the whole breast area only; (2) a classifier using bilateral asymmetry features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  17. A Feature-adaptive Subdivision Method for Real-time 3D Reconstruction of Repeated Topology Surfaces

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Sun, Honghai

    2017-03-01

    It is well known that rendering large numbers of triangles with GPU hardware tessellation has made great progress. However, due to the fixed nature of the GPU pipeline, many off-line methods that perform well cannot meet on-line requirements. In this paper, an optimized feature-adaptive subdivision method is proposed that is better suited to reconstructing surfaces with repeated cusps or creases. An octree primitive is established in irregular regions that share the same sharp vertices or creases, allowing neighboring geometry information to be found quickly. Because the octree primitive and the feature region share the same topology, the octree feature points can match arbitrary vertices in the feature region precisely. Meanwhile, the patches are re-encoded in the octree primitive using a breadth-first strategy, yielding a meta-table that allows real-time reconstruction by the GPU hardware tessellation unit. Only one feature region needs to be calculated under the octree primitive; other regions with the same repeated feature generate their own meta-tables directly, greatly reducing the reconstruction time for this step. For meshes having a large number of repeated topological features, our algorithm improves the subdivision time by 17.575% and increases the average frame drawing time by 0.2373 ms compared to traditional FAS (Feature-adaptive Subdivision), while the model can be reconstructed in a watertight manner.

  18. Extraction of Airport Features from High Resolution Satellite Imagery for Design and Risk Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, Chris; Qiu, You-Liang; Jensen, John R.; Schill, Steven R.; Floyd, Mike

    2001-01-01

    The LPA Group, consisting of 17 offices located throughout the eastern and central United States, is an architectural, engineering and planning firm specializing in the development of Airports, Roads and Bridges. The primary focus of this ARC project is to assist their aviation specialists who work in the areas of Airport Planning, Airfield Design, Landside Design, Terminal Building Planning and Design, and various other construction services. The LPA Group wanted to test the utility of high-resolution commercial satellite imagery for extracting airport elevation features in the glide path areas surrounding the Columbia Metropolitan Airport. By incorporating remote sensing techniques into their airport planning process, LPA wanted to investigate whether it is possible to save time and money while achieving accuracy equivalent to that of traditional planning methods. The Affiliate Research Center (ARC) at the University of South Carolina investigated the use of remotely sensed imagery for the extraction of feature elevations in the glide path zone. A stereo pair of IKONOS panchromatic satellite images, with a spatial resolution of 1 x 1 m, was used to determine elevations of aviation obstructions such as buildings, trees, towers and fence-lines. A validation dataset was provided by the LPA Group to assess the accuracy of the measurements derived from the IKONOS imagery. The initial goal of this project was to test the utility of IKONOS imagery in feature extraction using ERDAS Stereo Analyst. This goal was never achieved due to problems with ERDAS software support of the IKONOS sensor model and the unavailability of imperative sensor model information from Space Imaging. The obstacles encountered pertaining to ERDAS Stereo Analyst and IKONOS imagery are reviewed in more detail later in this report. As a result of the technical difficulties with Stereo Analyst, ERDAS OrthoBASE was used to derive aviation

  19. A Joint Feature Extraction and Data Compression Method for Low Bit Rate Transmission in Distributed Acoustic Sensor Environments

    DTIC Science & Technology

    2004-12-01

    target classification. In this Phase I research, a subband-based joint detection, feature extraction, data compression/encoding system for low bit... well as data compression/encoding without incurring degradation in the overall performance. New methods for formation of the optimal sparse sensor arrays...

  20. Protein Function Prediction using Text-based Features extracted from the Biomedical Literature: The CAFA Challenge

    PubMed Central

    2013-01-01

    Background Advances in sequencing technology over the past decade have resulted in an abundance of sequenced proteins whose function is yet unknown. As such, computational systems that can automatically predict and annotate protein function are in demand. Most computational systems use features derived from protein sequence or protein structure to predict function. In an earlier work, we demonstrated the utility of biomedical literature as a source of text features for predicting protein subcellular location. We have also shown that the combination of text-based and sequence-based prediction improves the performance of location predictors. Following up on this work, for the Critical Assessment of Function Annotations (CAFA) Challenge, we developed a text-based system that aims to predict molecular function and biological process (using Gene Ontology terms) for unannotated proteins. In this paper, we present the preliminary work and evaluation that we performed for our system, as part of the CAFA challenge. Results We have developed a preliminary system that represents proteins using text-based features and predicts protein function using a k-nearest neighbour classifier (Text-KNN). We selected text features for our classifier by extracting key terms from biomedical abstracts based on their statistical properties. The system was trained and tested using 5-fold cross-validation over a dataset of 36,536 proteins. System performance was measured using the standard measures of precision, recall, F-measure and overall accuracy. The performance of our system was compared to two baseline classifiers: one that assigns function based solely on the prior distribution of protein function (Base-Prior) and one that assigns function based on sequence similarity (Base-Seq). The overall prediction accuracy of Text-KNN, Base-Prior, and Base-Seq for molecular function classes are 62%, 43%, and 58% while the overall accuracy for biological process classes are 17%, 11%, and 28
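
    A minimal stand-in for the Text-KNN idea with scikit-learn, using TF-IDF weighting in place of the paper's statistical key-term selection; the abstracts and GO-style labels below are toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Toy abstracts standing in for biomedical literature about two protein classes.
docs = [
    "kinase activity phosphorylates serine residues in signal transduction",
    "atp binding kinase domain catalyzes phosphorylation",
    "membrane transporter moves glucose across the lipid bilayer",
    "solute carrier transporter mediates ion transport across membranes",
]
labels = ["kinase activity", "kinase activity",
          "transporter activity", "transporter activity"]  # toy function terms

text_knn = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
text_knn.fit(docs, labels)
print(text_knn.predict(["this enzyme phosphorylates its substrate using atp"]))
```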

  1. [Study on simplification of extraction kinetics model and adaptability of total flavonoids model of Scutellariae radix].

    PubMed

    Chen, Yang; Zhang, Jin; Ni, Jian; Dong, Xiao-Xu; Xu, Meng-Jie; Dou, Hao-Ran; Shen, Ming-Rui; Yang, Bo-Di; Fu, Jing

    2014-01-01

    Because of the irregular shapes of Chinese herbal pieces, we simplified the previously derived general extraction kinetic model for TCMs and folded the particle diameters of Chinese herbs, which are difficult to determine, into the final parameter "a". Removing the direct determination of particle diameters increases the accuracy of the model, expands its scope of application, and brings it closer to actual production conditions. Finally, a simplified model was established, with its corresponding experimental methods and data processing methods determined. With total flavonoids in Scutellariae Radix as the determination index, we studied the adaptability of the model for total flavonoids extracted from Scutellariae Radix by the water decoction method. The results showed a good linear correlation among the natural logarithm of the mass concentration of total flavonoids in Scutellariae Radix, the time, and the natural logarithm of the solvent multiple. Through calculation and fitting, a kinetic model of extracting total flavonoids from Scutellariae Radix by the water decoction method was established and verified, with a good degree of fit and deviation within the range of industrial production requirements. This indicates that the model established by this method has good adaptability.

  2. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving continues to be an important research topic. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The present study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The algorithms investigated rely on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, all accepted as spatial methods. In the experiments, a database was built containing hundreds of medical images of the brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory; however, the Gabor wavelet was observed to be the most effective and accurate method.
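
    A short sketch of GLCM-based feature extraction with scikit-image; the function names follow recent releases (older versions spell them greycomatrix/greycoprops), and the input image, offsets, and chosen properties are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greyco...' in old skimage

def glcm_features(img, distances=(1,), angles=(0, np.pi / 2)):
    """Contrast, homogeneity, energy and correlation from the gray level
    co-occurrence matrix, averaged over the given offsets."""
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

rng = np.random.default_rng(0)
toy_scan = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in image
print(glcm_features(toy_scan))
```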

  3. Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut

    NASA Astrophysics Data System (ADS)

    Oesau, Sven; Lafarge, Florent; Alliez, Pierre

    2014-04-01

    We present a method for automatic reconstruction of permanent structures, such as walls, floors and ceilings, given a raw point cloud of an indoor scene. The main idea behind our approach is a graph-cut formulation to solve an inside/outside labeling of a space partitioning. We first partition the space in order to align the reconstructed models with permanent structures. The horizontal structures are located through analysis of the vertical point distribution, while vertical wall structures are detected through feature preserving multi-scale line fitting, followed by clustering in a Hough transform space. The final surface is extracted through a graph-cut formulation that trades faithfulness to measurement data for geometric complexity. A series of experiments show watertight surface meshes reconstructed from point clouds measured on multi-level buildings.

  4. [Study on the method of feature extraction for brain-computer interface using discriminative common vector].

    PubMed

    Wang, Jinjia; Hu, Bei

    2013-02-01

    Discriminative common vector (DCV) is an effective method originally proposed for the small-sample-size problem in face recognition; the same problem arises in brain-computer interfaces (BCI). Applying linear discriminant analysis (LDA) directly can produce errors because the within-class scatter matrix of the data is singular. In our studies, we used the DCV method, deriving common vectors from the within-class scatter matrix of the data of all classes, and then applied eigenvalue decomposition to the common vectors to obtain the final projection vectors. We then used kernel discriminative common vector (KDCV) with different kernels. Three data sets were used in the experiments: the BCI Competition I data set, Competition II data set IV, and a data set collected by ourselves. The experimental results of 93%, 77% and 97% showed that this feature extraction method performs well for the classification of imagery data in BCI.

  5. Feature extraction and recognition for rolling element bearing fault utilizing short-time Fourier transform and non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Gao, Huizhong; Liang, Lin; Chen, Xiaoguang; Xu, Guanghua

    2015-01-01

    Due to the non-stationary characteristics of vibration signals acquired from rolling element bearing faults, time-frequency analysis is often applied to describe the local information of these unstable signals. However, the resulting high-dimensional feature matrix is difficult to classify directly because its dimensionality is too large for many classifiers. This paper combines time-frequency distributions (TFD) with non-negative matrix factorization (NMF) and proposes a novel TFD matrix factorization method to enhance the representation and identification of bearing faults. In this method, the TFD of a vibration signal is first computed with the short-time Fourier transform (STFT) to describe the localized faults. Then, a supervised NMF mapping is adopted to extract the fault features from the TFD. Meanwhile, the fault samples can be clustered and recognized automatically by using the clustering property of NMF. The proposed method takes advantage of NMF's parts-based representation and adaptive clustering, and the localized fault features of interest can be extracted as well. To evaluate the performance of the proposed method, experiments on nine kinds of bearing faults were performed on a test bench. The proposed method can effectively identify the fault severity and different fault types. Moreover, in comparison with an artificial neural network (ANN), NMF yields 99.3% mean accuracy, which is much superior. This research presents a simple and practical solution to the fault diagnosis problem of rolling element bearings in high-dimensional feature spaces.
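
    A rough sketch of the STFT-plus-NMF step with scipy and scikit-learn; note this is plain unsupervised NMF on a single synthetic spectrogram, not the supervised NMF mapping used in the paper, and the signal model is an illustrative bearing-impulse proxy.

```python
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

fs = 12000                                   # toy sampling rate
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1 / fs)
# Toy bearing-like signal: periodic impulses exciting a 3 kHz resonance.
impulses = (np.arange(t.size) % (fs // 100) == 0).astype(float)
ringing = np.exp(-np.arange(200) / 30) * np.sin(2 * np.pi * 3000 * np.arange(200) / fs)
signal = np.convolve(impulses, ringing)[:t.size] + 0.1 * rng.normal(size=t.size)

f, tt, Z = stft(signal, fs=fs, nperseg=256)  # time-frequency distribution
tfd = np.abs(Z)                              # non-negative magnitude spectrogram

nmf = NMF(n_components=4, init="nndsvd", max_iter=500)
W = nmf.fit_transform(tfd)                   # spectral basis vectors
H = nmf.components_                          # time activations (fault features)
print("basis:", W.shape, "activations:", H.shape)
```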

  6. A novel feature extraction methodology for region classification in lidar data

    NASA Astrophysics Data System (ADS)

    Varney, Nina M.; Asari, Vijayan K.; Sargent, Garrett C.

    2016-10-01

    LiDAR is a remote sensing method used to produce precise point clouds with millions of geo-spatially located 3D data points. The challenge comes when trying to accurately and efficiently segment and classify objects, especially in instances of occlusion and where objects are in close local proximity. The goal of this paper is to propose a more accurate and efficient way of performing segmentation and extracting features of objects in point clouds. Normal Octree Region Merging (NORM) is a segmentation technique based on surface normal similarities, and it subdivides the object points into clusters. The idea behind the surface normal calculation is that, for a given neighborhood around each point, the normal of the plane which best fits that set of points can be taken as the surface normal at that point. Next, an octree-based segmentation approach is applied by dividing the entire scene into eight bins, 2 x 2 x 2 in the X, Y, and Z directions. For each of these bins, the variance of all the elevation angles of the surface normals within that bin is calculated, and if the variance exceeds a certain threshold, the bin is divided into eight more bins. This process is repeated until the entire scene consists of different sized bins, all containing surface normals with elevation variances below the given threshold. However, the octree-based segmentation process produces obvious over-segmentation of most objects. To correct for this, a region merging approach is applied, which works much like the well-known automatic seeded region growing technique, except that a histogram signature is used to measure similarity instead of height. Each cluster generated from the NORM segmentation is then run through a Shape-based Eigen Local Feature (SELF) algorithm, where the focus is on calculating normalized
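
    The plane-fit surface normal described above can be obtained from the direction of least variance of the neighborhood; a compact numpy sketch with toy points (the neighborhood size and plane are illustrative).

```python
import numpy as np

def surface_normal(neighborhood):
    """Normal of the best-fitting plane through a (k, 3) array of points:
    the right-singular vector of the centered points with the smallest
    singular value, i.e. the direction of least variance."""
    centered = neighborhood - neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

# Toy neighborhood: noisy points on the plane z = 0.2x + 0.1y.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0, 0.01, 50)
pts = np.column_stack([xy, z])
n = surface_normal(pts)
print("normal:", n / np.sign(n[2]))  # orient upward; expect ~(-0.2, -0.1, 1) normalized
```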

  7. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, B.D.; Brothers, L.L.; Barnhardt, W.A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m3 of continental shelf erosion, but few numerical analyses of their morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km2 of bathymetry collected in the Belfast Bay, Maine USA pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter ratio field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. Results enable quantitative comparison of pockmarks in fields worldwide, as well as of similar concave features such as impact craters, dolines, or salt pools.

  8. Extracting drug-drug interactions from literature using a rich feature-based linear kernel approach

    PubMed Central

    Kim, Sun; Yeganova, Lana; Wilbur, W. John

    2015-01-01

    Identifying unknown drug interactions is of great benefit in the early detection of adverse drug reactions. Despite existence of several resources for drug-drug interaction (DDI) information, the wealth of such information is buried in a body of unstructured medical text which is growing exponentially. This calls for developing text mining techniques for identifying DDIs. The state-of-the-art DDI extraction methods use Support Vector Machines (SVMs) with non-linear composite kernels to explore diverse contexts in literature. While computationally less expensive, linear kernel-based systems have not achieved a comparable performance in DDI extraction tasks. In this work, we propose an efficient and scalable system using a linear kernel to identify DDI information. The proposed approach consists of two steps: identifying DDIs and assigning one of four different DDI types to the predicted drug pairs. We demonstrate that when equipped with a rich set of lexical and syntactic features, a linear SVM classifier is able to achieve a competitive performance in detecting DDIs. In addition, the one-against-one strategy proves vital for addressing an imbalance issue in DDI type classification. Applied to the DDIExtraction 2013 corpus, our system achieves an F1 score of 0.670, as compared to 0.651 and 0.609 reported by the top two participating teams in the DDIExtraction 2013 challenge, both based on non-linear kernel methods. PMID:25796456

  9. Feature extraction and classifcation in surface grading application using multivariate statistical projection models

    NASA Astrophysics Data System (ADS)

    Prats-Montalbán, José M.; López, Fernando; Valiente, José M.; Ferrer, Alberto

    2007-01-01

    In this paper we present an innovative way to simultaneously perform feature extraction and classification for the quality control issue of surface grading by applying two well known multivariate statistical projection tools (SIMCA and PLS-DA). These tools have been applied to compress the color texture data describing the visual appearance of surfaces (soft color texture descriptors) and to directly perform classification using statistics and predictions computed from the extracted projection models. Experiments have been carried out using an extensive image database of ceramic tiles (VxC TSG). This image database is comprised of 14 different models, 42 surface classes and 960 pieces. A factorial experimental design has been carried out to evaluate all the combinations of several factors affecting the accuracy rate. Factors include tile model, color representation scheme (CIE Lab, CIE Luv and RGB) and compression/classification approach (SIMCA and PLS-DA). In addition, a logistic regression model is fitted from the experiments to compute accuracy estimates and study the factors' effects. The results show that PLS-DA performs better than SIMCA, achieving a mean accuracy rate of 98.95%. These results outperform those obtained in a previous work where the soft color texture descriptors in combination with the CIE Lab color space and the k-NN classifier achieved 97.36% accuracy.

  10. Exact feature extraction using finite rate of innovation principles with an application to image super-resolution.

    PubMed

    Baboulaz, Loïc; Dragotti, Pier Luigi

    2009-02-01

    The accurate registration of multiview images is of central importance in many advanced image processing applications. Image super-resolution, for example, is a typical application where the quality of the super-resolved image degrades as registration errors increase. Popular registration methods are often based on features extracted from the acquired images. The accuracy of the registration is in this case directly related to the number of extracted features and to the precision with which the features are located: images are best registered when many features are found with good precision. However, in low-resolution images, only a few features can be extracted and often with poor precision. By taking a sampling perspective, we propose in this paper new methods for extracting features in low-resolution images in order to develop efficient registration techniques. We consider, in particular, the sampling theory of signals with finite rate of innovation and show that some features of interest for registration can be retrieved perfectly in this framework, thus allowing an exact registration. We also demonstrate through simulations that the sampling model which enables the use of finite rate of innovation principles is well suited for modeling the acquisition of images by a camera. Simulations of image registration and image super-resolution of artificially sampled images are first presented, analyzed and compared to traditional techniques. We finally present favorable experimental results of super-resolution of real images acquired by a digital camera available on the market.

  11. Adaptive reproducing kernel particle method for extraction of the cortical surface.

    PubMed

    Xu, Meihe; Thompson, Paul M; Toga, Arthur W

    2006-06-01

    We propose a novel adaptive approach based on the Reproducing Kernel Particle Method (RKPM) to extract the cortical surfaces of the brain from three-dimensional (3-D) magnetic resonance images (MRIs). To formulate the discrete equations of the deformable model, a flexible particle shape function is employed in the Galerkin approximation of the weak form of the equilibrium equations. The proposed support generation method ensures that the supports of all particles cover the entire computational domain. The deformable model is adaptively adjusted by dilating the shape function and by inserting or merging particles in high curvature regions or regions stopped by the target boundary. The shape function of the particle with a dilation parameter is adaptively constructed in response to particle insertion or merging. The proposed method offers flexibility in representing highly convolved structures and in refining the deformable models. Self-intersection of the surface during evolution is prevented by tracing backward along the gradient descent direction from the crest interface of the distance field, which is computed by fast marching. These operations involve a significant computational cost. The initial model for the deformable surface is simple and requires no prior knowledge of the segmented structure. No specific template is required, e.g., an average cortical surface obtained from many subjects. The extracted cortical surface efficiently localizes the depths of the cerebral sulci, unlike some other active surface approaches that penalize regions of high curvature. Comparisons with manually segmented landmark data are provided to demonstrate the high accuracy of the proposed method. We also compare the proposed method to the finite element method, and to a commonly used cortical surface extraction approach, the CRUISE method. We also show that the independence of the shape functions of the RKPM from the underlying mesh enhances the convergence speed of the deformable

  12. Investigation of automated feature extraction techniques for applications in cancer detection from multispectral histopathology images

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Levenson, Richard M.; Rimm, David L.

    2003-05-01

    Recent developments in imaging technology mean that it is now possible to obtain high-resolution histological image data at multiple wavelengths. This allows pathologists to image specimens over a full spectrum, thereby revealing (often subtle) distinctions between different types of tissue. With this type of data, the spectral content of the specimens, combined with quantitative spatial feature characterization may make it possible not only to identify the presence of an abnormality, but also to classify it accurately. However, such are the quantities and complexities of these data, that without new automated techniques to assist in the data analysis, the information contained in the data will remain inaccessible to those who need it. We investigate the application of a recently developed system for the automated analysis of multi-/hyper-spectral satellite image data to the problem of cancer detection from multispectral histopathology image data. The system provides a means for a human expert to provide training data simply by highlighting regions in an image using a computer mouse. Application of these feature extraction techniques to examples of both training and out-of-training-sample data demonstrate that these, as yet unoptimized, techniques already show promise in the discrimination between benign and malignant cells from a variety of samples.

  13. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a highly unique pattern that can be used for biometric recognition. Texture analysis methods can identify such texture in an image; one such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used here are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five mentioned wavelets was performed and a comparative analysis was conducted, from which some conclusions were drawn. Several steps are involved. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The features obtained are energy values. The next step is recognition using normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, tests were conducted using energy compaction for all five wavelet types above. As a result, the highest recognition rate is achieved using Haar; moreover, for coefficient cutting with C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate, i.e. the significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
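
    A brief sketch of wavelet energy features with PyWavelets, assuming a pre-segmented, normalized iris image as input; the decomposition level and the toy input are illustrative. Classification would then compare such vectors by normalized Euclidean distance, as above.

```python
import numpy as np
import pywt

def wavelet_energy_features(img, wavelet="haar", level=3):
    """Energy (sum of squared coefficients) of each wavelet subband,
    normalized by total energy, as the iris feature vector."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
    energies = [np.sum(coeffs[0] ** 2)]                 # approximation subband
    for (cH, cV, cD) in coeffs[1:]:                     # detail subbands
        energies += [np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]
    energies = np.array(energies)
    return energies / energies.sum()

rng = np.random.default_rng(0)
iris_strip = rng.random((64, 256))      # toy stand-in for an unwrapped iris
for w in ("haar", "db5", "coif3", "sym4", "bior2.4"):
    print(w, np.round(wavelet_energy_features(iris_strip, w), 4)[:4])
```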

  14. Interpretation of fingerprint image quality features extracted by self-organizing maps

    NASA Astrophysics Data System (ADS)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands the development of lightweight methods for operational environments. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by additionally proposing three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  15. Understanding the effects of pre-processing on extracted signal features from gait accelerometry signals.

    PubMed

    Millecamps, Alexandre; Lowry, Kristin A; Brach, Jennifer S; Perera, Subashan; Redfern, Mark S; Sejdić, Ervin

    2015-07-01

    Gait accelerometry is an important approach for gait assessment. Previous contributions have adopted various pre-processing approaches for gait accelerometry signals, but none have thoroughly investigated the effects of such pre-processing operations on the obtained results. Therefore, this paper investigated the influence of pre-processing operations on signal features extracted from gait accelerometry signals. These signals were collected from 35 participants aged over 65 years: 14 of them were healthy controls (HC), 10 had Parkinson's disease (PD) and 11 had peripheral neuropathy (PN). The participants walked on a treadmill at preferred speed. Signal features in time, frequency and time-frequency domains were computed for both raw and pre-processed signals. The pre-processing stage consisted of applying tilt correction and denoising operations to acquired signals. We first examined the effects of these operations separately, followed by the investigation of their joint effects. Several important observations were made based on the obtained results. First, the denoising operation alone had almost no effects in comparison to the trends observed in the raw data. Second, the tilt correction affected the reported results to a certain degree, which could lead to a better discrimination between groups. Third, the combination of the two pre-processing operations yielded similar trends as the tilt correction alone. These results indicated that while gait accelerometry is a valuable approach for the gait assessment, one has to carefully adopt any pre-processing steps as they alter the observed findings.

  16. GNAR-GARCH model and its application in feature extraction for rolling bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Jiaxin; Xu, Feiyun; Huang, Kai; Huang, Ren

    2017-09-01

    Given its simplicity of modeling and sensitivity to condition variations, the time series model is widely used in feature extraction to realize fault classification and diagnosis. However, the nonlinear and nonstationary characteristics common in fault signals of rolling bearings bring challenges to the diagnosis. In this paper, a hybrid model combining a general expression for linear and nonlinear autoregressive (GNAR) model with a generalized autoregressive conditional heteroscedasticity (GARCH) model, i.e., GNAR-GARCH, is proposed and applied to rolling bearing fault diagnosis. An exact expression of the GNAR-GARCH model is given. The maximum likelihood method is used for parameter estimation, and a modified Akaike Information Criterion is adopted for structure identification of the GNAR-GARCH model. The main advantage of this novel model over other models is that the combination makes it suitable for nonlinear and nonstationary signals, which is verified with statistical tests comparing the different time series models. Finally, the GNAR-GARCH model is applied to fault diagnosis by modeling mechanical vibration signals, including both simulated and real data. With the estimated parameters taken as feature vectors, the k-nearest neighbor algorithm is utilized to classify the fault status. The results show that the GNAR-GARCH model exhibits higher accuracy and better performance than other models.
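
    GNAR-GARCH itself is a custom model, but the overall recipe (fit a conditional-heteroscedasticity time series model per signal, take the estimated parameters as the feature vector, classify with k-NN) can be sketched with a standard AR-GARCH as a stand-in, assuming the third-party arch package; the signal model and labels are toy data.

```python
import numpy as np
from arch import arch_model                      # assumed: pip install arch
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def ar_garch_features(x, lags=2):
    """Fit an AR(2)-GARCH(1,1) model and return its estimated
    parameters as a feature vector (stand-in for GNAR-GARCH)."""
    am = arch_model(x, mean="AR", lags=lags, vol="GARCH", p=1, q=1)
    res = am.fit(disp="off")
    return np.asarray(res.params)

rng = np.random.default_rng(0)
X, y = [], []
for label, noise in ((0, 0.5), (1, 2.0)):        # toy 'healthy' vs 'faulty'
    for _ in range(10):
        e = rng.normal(0, noise, 500)
        x = np.zeros(500)
        for i in range(2, 500):                  # simple AR(2) vibration proxy
            x[i] = 0.6 * x[i - 1] - 0.3 * x[i - 2] + e[i]
        X.append(ar_garch_features(100 * x))     # arch prefers scaled data
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      stratify=y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=3).fit(Xtr, ytr)
print("test accuracy:", knn.score(Xte, yte))
```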

  17. Extracting topological features from dynamical measures in networks of Kuramoto oscillators

    NASA Astrophysics Data System (ADS)

    Prignano, Luce; Díaz-Guilera, Albert

    2012-03-01

    The Kuramoto model for an ensemble of coupled oscillators provides a paradigmatic example of nonequilibrium transitions between an incoherent and a synchronized state. Here we analyze populations of almost identical oscillators in arbitrary interaction networks. Our aim is to extract topological features of the connectivity pattern from purely dynamical measures based on the fact that in a heterogeneous network the global dynamics is not only affected by the distribution of the natural frequencies but also by the location of the different values. In order to perform a quantitative study we focused on a very simple frequency distribution considering that all the frequencies are equal but one, that of the pacemaker node. We then analyze the dynamical behavior of the system at the transition point and slightly above it as well as very far from the critical point, when it is in a highly incoherent state. The gathered topological information ranges from local features, such as the single-node connectivity, to the hierarchical structure of functional clusters and even to the entire adjacency matrix.
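
    The setting described, all oscillators identical except a single pacemaker, is easy to reproduce; a minimal Euler-integration sketch of the Kuramoto dynamics dθi/dt = ωi + K Σj Aij sin(θj − θi), with an illustrative ring network and coupling strength.

```python
import numpy as np

def kuramoto(A, omega, K=0.5, dt=0.01, steps=5000, seed=0):
    """Euler-integrate the Kuramoto dynamics on adjacency matrix A
    and return the phase history."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(omega))
    hist = np.empty((steps, len(omega)))
    for t in range(steps):
        diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
        theta = theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))
        hist[t] = theta
    return hist

# Ring network of 20 identical oscillators plus one pacemaker (node 0).
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
omega = np.zeros(n)
omega[0] = 1.0                                          # the pacemaker node
hist = kuramoto(A, omega)
r = np.abs(np.exp(1j * hist[-500:]).mean(axis=1)).mean()  # order parameter
print("order parameter r =", round(r, 3))
```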

  18. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Still images and videos are the most commonly used formats. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  19. Automated object extraction from remote sensor image based on adaptive thresholding technique

    NASA Astrophysics Data System (ADS)

    Zhao, Tongzhou; Ma, Shuaijun; Li, Jin; Ming, Hui; Luo, Xiaobo

    2009-10-01

    Detection and extraction of dim, small moving objects in infrared image sequences is an interesting research area. A system for detecting dim, small moving targets in IR image sequences is presented, and a new high-performance algorithm for extracting small moving targets from infrared image sequences containing cloud clutter is proposed. This method achieves better detection precision than some other methods, and the computation can be carried out by two independent units. The novelty of the algorithm is that it applies adaptive thresholding to the small moving targets in both the spatial and the temporal domain. Experimental results show that the presented algorithm achieves high detection precision.
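
    A small sketch of the spatial half of such a detector: an adaptive threshold of local mean plus k times local standard deviation, computed with scipy's uniform filter. The window size, the factor k, and the toy clutter frame are assumptions; the temporal-domain stage and the paper's specific algorithm are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold_detect(frame, win=15, k=4.0):
    """Flag pixels exceeding local_mean + k * local_std, an adaptive
    threshold that tracks slowly varying cloud-clutter background."""
    mean = uniform_filter(frame, win)
    sq_mean = uniform_filter(frame ** 2, win)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0))
    return frame > mean + k * std

# Toy IR frame: smooth clutter background plus one dim 2x2 target.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:128, :128]
frame = 50 + 10 * np.sin(yy / 20.0) + rng.normal(0, 1.0, (128, 128))
frame[60:62, 80:82] += 8                      # dim small target
det = adaptive_threshold_detect(frame)
print("detections at:", np.argwhere(det)[:5])
```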

  20. Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor

    PubMed Central

    Brandli, Christian; Mantel, Thomas A.; Hutter, Marco; Höpflinger, Markus A.; Berner, Raphael; Siegwart, Roland; Delbruck, Tobi

    2014-01-01

    Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions up to pulsing frequencies of 500 Hz were achieved using a line laser of 3 mW at a distance of 45 cm using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm. PMID:24478619

  1. Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor.

    PubMed

    Brandli, Christian; Mantel, Thomas A; Hutter, Marco; Höpflinger, Markus A; Berner, Raphael; Siegwart, Roland; Delbruck, Tobi

    2013-01-01

    Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions up to pulsing frequencies of 500 Hz were achieved using a line laser of 3 mW at a distance of 45 cm using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm.

  2. Extraction of time and frequency features from grip force rates during dexterous manipulation.

    PubMed

    Mojtahedi, Keivan; Fu, Qiushi; Santello, Marco

    2015-05-01

    The time course of grip force from object contact to onset of manipulation has been extensively studied to gain insight into the underlying control mechanisms. Of particular interest to the motor neuroscience and clinical communities is the phenomenon of a bell-shaped grip force rate (GFR) that has been interpreted as indicative of feedforward force control. However, this feature has not been assessed quantitatively. Furthermore, the time course of grip force may contain additional features that could provide insight into sensorimotor control processes. In this study, we addressed these questions by validating and applying two computational approaches to extract features from GFR in humans: 1) fitting a Gaussian function to GFR and quantifying the goodness of the fit (root-mean-square error, RMSE); and 2) continuous wavelet transform (CWT), where we assessed the correlation of the GFR signal with a Mexican hat function. Experiment 1 consisted of a classic pseudorandomized presentation of object mass (light or heavy), where grip forces developed to lift a mass heavier than expected are known to exhibit corrective responses. For Experiment 2, we applied our two techniques to analyze grip force exerted for manipulating an inverted T-shaped object whose center of mass was changed across blocks of consecutive trials. For both experiments, subjects were asked to grasp the object at either predetermined or self-selected grasp locations ("constrained" and "unconstrained" tasks, respectively). Experiment 1 successfully validated the use of RMSE and CWT, as they correctly distinguished trials with versus without corrective force responses. RMSE and CWT also revealed that grip force is characterized by more feedback-driven corrections when grasping at self-selected contact points. Future work will examine the application of our analytical approaches to a broader range of tasks, e.g., assessment of recovery of sensorimotor function following clinical intervention, interlimb
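
    A compact sketch of approach (1): fit a Gaussian to a grip force rate trace with scipy and quantify the goodness of fit by RMSE. The traces below are synthetic; a bell-shaped, feedforward-like trial should yield a small RMSE, while a trial with a corrective bump yields a larger one.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))

def gfr_rmse(t, gfr):
    """Fit a Gaussian to a grip force rate trace and return the RMSE."""
    p0 = (gfr.max(), t[np.argmax(gfr)], (t[-1] - t[0]) / 6)   # rough init
    popt, _ = curve_fit(gaussian, t, gfr, p0=p0)
    return float(np.sqrt(np.mean((gfr - gaussian(t, *popt)) ** 2)))

t = np.linspace(0, 1, 200)
smooth = gaussian(t, 10, 0.4, 0.08)                        # bell-shaped GFR
corrected = smooth + gaussian(t, 4, 0.65, 0.05)            # corrective bump
print("smooth trial RMSE:   ", round(gfr_rmse(t, smooth), 3))
print("corrected trial RMSE:", round(gfr_rmse(t, corrected), 3))
```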

  3. Adaptation to second order stimulus features by electrosensory neurons causes ambiguity

    PubMed Central

    Zhang, Zhubo D.; Chacron, Maurice J.

    2016-01-01

    Understanding the coding strategies used to process sensory input remains a central problem in neuroscience. Growing evidence suggests that sensory systems process natural stimuli efficiently by ensuring a close match between neural tuning and stimulus statistics through adaptation. However, adaptation causes ambiguity as the same response can be elicited by different stimuli. The mechanisms by which the brain resolves ambiguity remain poorly understood. Here we investigated adaptation in electrosensory pyramidal neurons within different parallel maps in the weakly electric fish Apteronotus leptorhynchus. In response to step increases in stimulus variance, we found that pyramidal neurons within the lateral segment (LS) displayed strong scale invariant adaptation whereas those within the centromedial segment (CMS) instead displayed weaker degrees of scale invariant adaptation. Signal detection analysis revealed that strong adaptation in LS neurons significantly reduced stimulus discriminability. In contrast, weaker adaptation displayed by CMS neurons led to significantly lesser impairment of discriminability. Thus, while LS neurons display adaptation that is matched to natural scene statistics, thereby optimizing information transmission, CMS neurons instead display weaker adaptation and would instead provide information about the context in which these statistics occur. We propose that such a scheme is necessary for decoding by higher brain structures. PMID:27349635

  4. Adaptation to second order stimulus features by electrosensory neurons causes ambiguity.

    PubMed

    Zhang, Zhubo D; Chacron, Maurice J

    2016-06-28

    Understanding the coding strategies used to process sensory input remains a central problem in neuroscience. Growing evidence suggests that sensory systems process natural stimuli efficiently by ensuring a close match between neural tuning and stimulus statistics through adaptation. However, adaptation causes ambiguity as the same response can be elicited by different stimuli. The mechanisms by which the brain resolves ambiguity remain poorly understood. Here we investigated adaptation in electrosensory pyramidal neurons within different parallel maps in the weakly electric fish Apteronotus leptorhynchus. In response to step increases in stimulus variance, we found that pyramidal neurons within the lateral segment (LS) displayed strong scale invariant adaptation whereas those within the centromedial segment (CMS) instead displayed weaker degrees of scale invariant adaptation. Signal detection analysis revealed that strong adaptation in LS neurons significantly reduced stimulus discriminability. In contrast, weaker adaptation displayed by CMS neurons led to significantly lesser impairment of discriminability. Thus, while LS neurons display adaptation that is matched to natural scene statistics, thereby optimizing information transmission, CMS neurons instead display weaker adaptation and would instead provide information about the context in which these statistics occur. We propose that such a scheme is necessary for decoding by higher brain structures.

  5. Efficient 3D texture feature extraction from CT images for computer-aided diagnosis of pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Han, Fangfang; Wang, Huafeng; Song, Bowen; Zhang, Guopeng; Lu, Hongbing; Moore, William; Liang, Zhengrong; Zhao, Hong

    2014-03-01

    Texture features from chest CT images have become an important and efficient factor in computer-aided diagnosis (CADx) for assessing the malignancy of pulmonary nodules. In this paper, we focus on extracting as few efficient texture features as needed, which can be combined with other classical features (e.g., size, shape, growth rate) to assist lung nodule diagnosis. Based on a typical texture feature calculation algorithm, namely Haralick features derived from gray-tone spatial-dependence matrices, we calculated two-dimensional (2D) and three-dimensional (3D) Haralick features from the CT images of 905 nodules. All of the CT images were downloaded from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), the largest public chest CT database. The 3D Haralick feature model, computed over thirteen directions, captures more information from the relationships between neighboring voxels across slices than the 2D features computed over only four directions. After comparing the efficiency of the 2D and 3D Haralick features for nodule diagnosis, principal component analysis (PCA) was used to reduce the features to as few as needed. To achieve an objective assessment of the texture features, a support vector machine classifier was trained and tested repeatedly one hundred times, and the statistical results of the classification experiments were described by an average receiver operating characteristic (ROC) curve. The mean area under the ROC curves (0.8776) in our experiments shows that the two extracted 3D Haralick projected features have the potential to assist the classification of benign and malignant nodules.
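
    As a rough illustration of this pipeline, the sketch below computes GLCM-based Haralick-style features with scikit-image (2D only, standing in for the paper's 13-direction 3D features), reduces them with PCA, and scores an SVM by cross-validated ROC AUC. The random patches and labels are placeholders, so the AUC should hover around chance.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def haralick_2d(patch, props=("contrast", "correlation", "energy", "homogeneity")):
    """2D Haralick-style features from a GLCM over four directions."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # Average each property over the four directions.
    return np.hstack([graycoprops(glcm, p).mean() for p in props])

rng = np.random.default_rng(0)
# Random stand-ins for benign/malignant nodule patches (8-bit CT crops).
X = np.array([haralick_2d(rng.integers(0, 256, (32, 32), dtype=np.uint8))
              for _ in range(100)])
y = rng.integers(0, 2, 100)

# PCA keeps a few informative components; the SVM is scored by ROC AUC.
clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC())
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"mean ROC AUC on random data: {auc:.2f} (about 0.5 expected)")
```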

  6. ActiveTutor: Towards More Adaptive Features in an E-Learning Framework

    ERIC Educational Resources Information Center

    Fournier, Jean-Pierre; Sansonnet, Jean-Paul

    2008-01-01

    Purpose: This paper aims to sketch the emerging notion of auto-adaptive software when applied to e-learning software. Design/methodology/approach: The study and the implementation of the auto-adaptive architecture are based on the operational framework "ActiveTutor" that is used for teaching the topic of computer science programming in first-grade…

  7. Pelvis feature extraction and classification of Cardiff body match rig base measurements for input into a knowledge-based system.

    PubMed

    Partlow, Adam; Gibson, Colin; Kulon, Janusz; Wilson, Ian; Wilcox, Steven

    2012-11-01

    The purpose of this paper is to determine whether it is possible to use an automated measurement tool to clinically classify clients who are wheelchair users with severe musculoskeletal deformities, replacing the current process which relies upon clinical engineers with advanced knowledge and skills. Clients' body shapes were captured using the Cardiff Body Match (CBM) Rig developed by the Rehabilitation Engineering Unit (REU) at Rookwood Hospital in Cardiff. A bespoke feature extraction algorithm was developed that estimates the position of external landmarks on clients' pelvises so that useful measurements can be obtained. The outputs of the feature extraction algorithms were compared to CBM measurements where the positions of the client's pelvis landmarks were known. The results show that using the extracted features facilitated classification. Qualitative analysis showed that the estimated positions of the landmark points were close enough to their actual positions to be useful to clinicians undertaking clinical assessments.

  8. A MapReduce scheme for image feature extraction and its application to man-made object detection

    NASA Astrophysics Data System (ADS)

    Cai, Fei; Chen, Honghui

    2013-07-01

    A fundamental challenge in image engineering is how to locate objects of interest in high-resolution images with efficient detection performance. Several approaches to man-made object detection have been proposed, but the majority are not truly time-saving and suffer from low detection precision. To address this issue, we propose a novel approach for man-made object detection in aerial images that combines texture feature extraction and clustering with a MapReduce scheme for large-scale image analysis, which supports compute-intensive feature extraction tasks in a highly parallel way. Comprehensive experiments show that the parallel framework saves substantial time during feature extraction while achieving satisfactory object detection performance.
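
    A minimal sketch of the map/reduce split described above, with Python's multiprocessing standing in for a Hadoop-style MapReduce cluster: the map step extracts simple texture statistics per image tile in parallel, and the reduce step clusters the per-tile feature vectors. Tile data, the feature choices, and the tiny k-means are all illustrative.

```python
import numpy as np
from multiprocessing import Pool

def map_extract(tile):
    """Map step: simple texture statistics for one image tile."""
    return np.array([tile.mean(), tile.std(),
                     np.abs(np.diff(tile, axis=0)).mean()])

def reduce_cluster(features, k=2, iters=10):
    """Reduce step: a tiny k-means over the per-tile feature vectors."""
    X = np.vstack(features)
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    tiles = [rng.random((64, 64)) for _ in range(16)]  # tiles of one aerial image
    with Pool() as pool:                 # map phase runs in parallel
        feats = pool.map(map_extract, tiles)
    print(reduce_cluster(feats))         # reduce phase groups tiles by texture
```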

  9. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2011-12-01

    Automatic brain tissue segmentation is a crucial task in diagnosis and treatment based on medical images. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding from H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  11. Segmentation and feature extraction of cervical spine x-ray images

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1999-05-01

    As part of an R&D project in mixed text/image database design, the National Library of Medicine has archived a collection of 17,000 digitized x-ray images of the cervical and lumbar spine which were collected as part of the second National Health and Nutrition Examination Survey (NHANES II). To make this image data available and usable to a wide audience, we are investigating techniques for indexing the image content by automated or semi-automated means. Indexing of the images by features of interest to researchers in spine disease and structure requires effective segmentation of the vertebral anatomy. This paper describes work in progress toward this segmentation of the cervical spine images into anatomical components of interest, including anatomical landmarks for vertebral location, and segmentation and identification of individual vertebrae. Our work includes developing a reliable method for automatically fixing an anatomy-based coordinate system in the images, and work to adaptively threshold the images, using methods previously applied by researchers in cardioangiography. We describe the motivation for our work and present our current results in both areas.

  12. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    SciTech Connect

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
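
    A loose sketch of the two ICHE stages on synthetic grayscale slides: center each image's intensity histogram on a common target, then apply CLAHE (scikit-image's equalize_adapthist standing in for the authors' modified variant). The target centroid, clip limit, and beta-distributed test images are assumptions for illustration.

```python
import numpy as np
from skimage import exposure

def iche_like(img, target_centroid=0.5, clip_limit=0.01):
    """Sketch of intensity centering + adaptive histogram equalization.

    1) Shift the image's mean intensity (histogram centroid) to a common
       target so differently stained slides start from the same point.
    2) Apply CLAHE as a stand-in for the authors' modified variant.
    """
    img = img.astype(float)
    img = np.clip(img + (target_centroid - img.mean()), 0.0, 1.0)
    return exposure.equalize_adapthist(img, clip_limit=clip_limit)

rng = np.random.default_rng(0)
slide_a = rng.beta(2, 5, (128, 128))   # dim slide
slide_b = rng.beta(5, 2, (128, 128))   # bright slide
norm_a, norm_b = iche_like(slide_a), iche_like(slide_b)
print(f"centroids after normalization: {norm_a.mean():.2f}, {norm_b.mean():.2f}")
```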

  13. Knuckle-walking anteater: a convergence test of adaptation for purported knuckle-walking features of African Hominidae.

    PubMed

    Orr, Caley M

    2005-11-01

    Appeals to synapomorphic features of the wrist and hand in African apes, early hominins, and modern humans as evidence of knuckle-walking ancestry for the hominin lineage rely on accurate interpretations of those features as adaptations to knuckle-walking locomotion. Because Gorilla, Pan, and Homo share a relatively close common ancestor, the interpretation of such features is confounded somewhat by phylogeny. The study presented here examines the evolution of a similar locomotor regime in New World anteaters (order Xenarthra, family Myrmecophagidae) and uses the terrestrial giant anteater (Myrmecophaga tridactyla) as a convergence test of adaptation for purported knuckle-walking features of the Hominidae. During the stance phase of locomotion, Myrmecophaga transmits loads through flexed digits and a vertical manus, with hyperextension occurring at the metacarpophalangeal joints of the weight-bearing rays. This differs from the locomotion of smaller, arboreal anteaters of outgroup genera Tamandua and Cyclopes that employ extended wrist postures during above-branch quadrupedality. A number of features shared by Myrmecophaga and Pan and Gorilla facilitate load transmission or limit extension, thereby stabilizing the wrist and hand during knuckle-walking, and distinguish these taxa from their respective outgroups. These traits are a distally extended dorsal ridge of the distal radius, proximal expansion of the nonarticular surface of the dorsal capitate, a pronounced articular ridge on the dorsal aspects of the load-bearing metacarpal heads, and metacarpal heads that are wider dorsally than volarly. Only the proximal expansion of the nonarticular area of the dorsal capitate distinguishes knuckle-walkers from digitigrade cercopithecids, but features shared with digitigrade primates might be adaptive to the use of a vertical manus of some sort in the stance phase of terrestrial locomotion. The appearance of capitate nonarticular expansion and the dorsal ridge of the

  14. Diagnostic efficacy of computer extracted image features in optical coherence tomography of the precancerous cervix

    PubMed Central

    Kang, Wei; Qi, Xin; Tresser, Nancy J.; Kareta, Margarita; Belinson, Jerome L.; Rollins, Andrew M.

    2011-01-01

    Purpose: To determine the diagnostic efficacy of optical coherence tomography (OCT) to identify cervical intraepithelial neoplasia (CIN) grade 2 or higher by computer-aided diagnosis (CADx). Methods: OCT has been investigated as a screening/diagnostic tool in the management of preinvasive and early invasive cancers of the uterine cervix. In this study, an automated algorithm was developed to extract OCT image features and identify CIN 2 or higher. First, the cervical epithelium was detected by a combined watershed and active contour method. Second, four features were calculated: the thickness of the epithelium and its standard deviation, and the contrast between the epithelium and the stroma and its standard deviation. Finally, linear discriminant analysis was applied to classify images into two categories: normal/inflammation/CIN 1 and CIN 2/CIN 3. The algorithm was applied to 152 images (74 patients) obtained from an international study. Results: The numbers of normal/inflammatory/CIN 1/CIN 2/CIN 3 images are 74, 29, 14, 24, and 11, respectively. Tenfold cross-validation predicted the algorithm achieved a sensitivity of 51% (95% CI: 36%–67%) and a specificity of 92% (95% CI: 86%–96%) with an empirical two-category prior probability estimated from the data set. Receiver operating characteristic analysis yielded an area under the curve of 0.86. Conclusions: The diagnostic efficacy of CADx in OCT imaging to differentiate high-grade CIN from normal/low-grade CIN is demonstrated. The high specificity of OCT with CADx suggests further investigation as an effective secondary screening tool when combined with a highly sensitive primary screening tool. PMID:21361180
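
    The classification stage reduces to LDA on a four-dimensional feature vector, which the sketch below reproduces on placeholder data (random stand-ins for the thickness and contrast features, with labels loosely tied to two of them); the cross-validated AUC is computed the same way the study reports it.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 152
# Stand-ins for the four features per OCT image: epithelial thickness,
# its standard deviation, epithelium/stroma contrast, and its std. dev.
X = rng.normal(size=(n, 4))
# Placeholder labels (CIN 2+ vs. normal/low grade) tied to two features.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n)) > 0

lda = LinearDiscriminantAnalysis()
scores = cross_val_predict(lda, X, y, cv=10, method="decision_function")
print(f"tenfold cross-validated AUC: {roc_auc_score(y, scores):.2f}")
```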

  15. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  16. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

    This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost and improve the efficiency in the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, the termination condition of iteration based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm, which adjusts the parameters of the termination condition constantly in the process of decomposition to avoid noise. Third, composite dictionaries are enriched with the modulation dictionary, which is one of the important structural characteristics of gear fault signals. Meanwhile, the termination condition of iteration settings, sub-feature dictionary selections and operation efficiency between CD-MaMP and CD-SaMP are discussed, aiming at gear simulation vibration signals with noise. The simulation sensor-based vibration signal results show that the termination condition of iteration based on the attenuation coefficient enhances decomposition sparsity greatly and achieves a good effect of noise reduction. Furthermore, the modulation dictionary achieves a better matching effect compared to the Fourier dictionary, and CD-SaMP has a great advantage of sparsity and efficiency compared with the CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
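
    The core of the single-atom scheme is ordinary matching pursuit: each iteration matches one atom to the residual, with a stopping rule playing the role of the attenuation-based termination condition. The sketch below uses a toy dictionary of unit-norm cosine atoms rather than the paper's composite/modulation dictionaries.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10, tol=1e-3):
    """Single-atom matching pursuit: each step picks the one atom most
    correlated with the residual (cf. the CD-SaMP idea of matching a
    single atom per iteration instead of a multi-atom group)."""
    residual = signal.copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
        if np.linalg.norm(residual) < tol:   # attenuation-style stop rule
            break
    return coeffs, residual

# Toy dictionary: unit-norm cosine atoms of increasing frequency.
n, m = 256, 64
t = np.arange(n)
D = np.column_stack([np.cos(2 * np.pi * f * t / n) for f in range(1, m + 1)])
D /= np.linalg.norm(D, axis=0)

rng = np.random.default_rng(0)
x = 2.0 * D[:, 4] - 1.5 * D[:, 20] + 0.05 * rng.normal(size=n)
c, r = matching_pursuit(x, D)
print("active atoms:", np.nonzero(np.abs(c) > 0.5)[0])  # expect [4 20]
```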

  17. Feature extraction and wall motion classification of 2D stress echocardiography with support vector machines

    NASA Astrophysics Data System (ADS)

    Chykeyuk, Kiryl; Clifton, David A.; Noble, J. Alison

    2011-03-01

    Stress echocardiography is a common clinical procedure for diagnosing heart disease. Clinically, diagnosis of heart wall motion depends mostly on visual assessment, which is highly subjective and operator-dependent. Introduction of automated methods for heart function assessment has the potential to minimise the variance in operator assessment. Automated wall motion analysis consists of two main steps: (i) segmentation of heart wall borders, and (ii) classification of heart function as either "normal" or "abnormal" based on the segmentation. This paper considers automated classification of rest and stress echocardiography. Most previous approaches to the classification of heart function have considered rest or stress data separately, and have only considered features extracted from the two main frames (corresponding to end-of-diastole and end-of-systole). One previous attempt [1] has been made to combine information from rest and stress sequences utilising a Hidden Markov Model (HMM), which has proven to be the best performing approach to date. Here, we propose a novel alternative feature selection approach using combined information from rest and stress sequences for motion classification of stress echocardiography, utilising a support vector machine (SVM) classifier. We describe how the proposed SVM-based method overcomes difficulties that occur with HMM classification. Overall accuracy with the new method for global wall motion classification using datasets from 173 patients is 92.47%, and the accuracy of local wall motion classification is 87.20%, showing that the proposed method outperforms the current state-of-the-art HMM-based approach (for which global and local classification accuracy is 82.15% and 78.33%, respectively).

  18. Extraction of morphological features from biological models and cells by Fourier analysis of static light scatter measurements

    SciTech Connect

    Burger, D.E.; Jett, J.H.; Mullaney, P.F.

    1982-03-01

    Models of biological cells of varying geometric complexity were used to generate data to test a method of extracting geometric features from light scatter distributions. Measurements of the dynamic range and angular distribution of intensity and light scatter from these models were compared to the distributions predicted by a complete theory of light scatter (Mie) and by diffraction theory (Fraunhofer). An approximation to the Fraunhofer theory provides a means of obtaining size and shape features from the data by spectrum analysis. Experimental verification using nucleated erythrocytes as the biological material shows the potential application of this method for the extraction of important size and shape parameters from light scatter data.

  19. Flutter signal extracting technique based on FOG and self-adaptive sparse representation algorithm

    NASA Astrophysics Data System (ADS)

    Lei, Jian; Meng, Xiangtao; Xiang, Zheng

    2016-10-01

    Due to the various moving parts inside a spacecraft, its structure undergoes minor angular vibrations in orbit, which blur the images formed by the space camera. Image compensation techniques are therefore required to eliminate or alleviate the effect of this movement on image formation, and precise measurement of the flutter angle is necessary. Owing to advantages such as high sensitivity, broad bandwidth, simple structure and the absence of internal mechanical moving parts, a fiber optic gyro (FOG) is adopted in this study to measure the minor angular vibration, and the movement leading to image degradation is then obtained by calculation. The idea of the movement-information extraction algorithm based on self-adaptive sparse representation is to use an arctangent function approximating the L0 norm to construct an unconstrained sparse reconstruction model for the noisy signal, and then to solve the model with a method based on the steepest descent and BFGS algorithms to estimate the sparse signal. Exploiting the principle that random noise cannot be represented by a linear combination of dictionary elements, the useful signal and random noise are separated effectively. Because the main interference of minor angular vibration with the image formation of a space camera is random noise, the sparse representation algorithm can extract the useful information to a large extent and serves as a fitting preprocessing method for image restoration. The self-adaptive sparse representation algorithm presented in this paper is used to process the measured minor-angular-vibration signal of the FOG used by a certain spacecraft. Component analysis of the processing results shows that the algorithm extracts the micro angular vibration signal of the FOG precisely and effectively, achieving a precision of 0.1".
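
    The reconstruction model described here can be sketched directly: a least-squares data term plus an arctangent penalty as a smooth surrogate for the L0 norm, minimized with BFGS (SciPy's implementation standing in for the paper's steepest-descent/BFGS hybrid). The dictionary, sparsity level, and all constants below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 64, 128                                  # measurements, dictionary atoms
A = rng.normal(size=(n, m)) / np.sqrt(n)        # toy dictionary
x_true = np.zeros(m)
x_true[[7, 40, 99]] = [1.5, -2.0, 1.0]          # sparse "useful" signal
y = A @ x_true + 0.01 * rng.normal(size=n)      # noisy observation

eps, lam = 0.01, 0.05

def objective(x):
    # Data fit + arctangent penalty: arctan(x^2/eps) is ~0 at zero and
    # saturates for large |x|, smoothly approximating the L0 norm.
    return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(np.arctan(x ** 2 / eps))

def gradient(x):
    return -A.T @ (y - A @ x) + lam * 2 * x * eps / (eps ** 2 + x ** 4)

res = minimize(objective, np.zeros(m), jac=gradient, method="BFGS")
print("largest recovered coefficients at:",
      np.sort(np.argsort(np.abs(res.x))[-3:]))  # ideally [7 40 99]
```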

  20. Sparsity-enabled signal decomposition using tunable Q-factor wavelet transform for fault feature extraction of gearbox

    NASA Astrophysics Data System (ADS)

    Cai, Gaigai; Chen, Xuefeng; He, Zhengjia

    2013-12-01

    Localized faults in gearboxes tend to result in periodic shocks and thus arouse periodic responses in vibration signals. Feature extraction has always been a key problem for localized fault diagnosis. This paper proposes a new fault feature extraction technique for gearboxes by using sparsity-enabled signal decomposition method. The sparsity-enabled signal decomposition method separates signals based on the oscillatory behavior of the signal rather than the frequency or scale. Thus, the fault feature can be nonlinearly extracted from vibration signals. During the implementation of the proposed method, tunable Q-factor wavelet transform, for which the Q-factor can be easily specified, is adopted to represent vibration signals in a sparse way, and then morphological component analysis (MCA) is employed to estimate and separate the distinct components. The corresponding optimization problem of MCA is solved by the split augmented Lagrangian shrinkage algorithm (SALSA). With the proposed method, vibration signals of the faulty gearbox can be nonlinearly decomposed into high-oscillatory component and low-oscillatory component which is the fault feature of gearboxes. To evaluate the performance of the proposed method, this paper investigates the effect of two parameters pertinent to MCA and SALSA: the Lagrange multiplier and the penalty parameter. The effectiveness of the proposed method is verified by both the simulated and practical gearbox vibration signals. Results show the proposed method outperforms empirical mode decomposition and spectral kurtosis in extracting fault features of gearboxes.

  1. Sensor-Based Auto-Focusing System Using Multi-Scale Feature Extraction and Phase Correlation Matching

    PubMed Central

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-01-01

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems. PMID:25763645

  2. Sensor-based auto-focusing system using multi-scale feature extraction and phase correlation matching.

    PubMed

    Jang, Jinbeum; Yoo, Yoonjong; Kim, Jongheon; Paik, Joonki

    2015-03-10

    This paper presents a novel auto-focusing system based on a CMOS sensor containing pixels with different phases. Robust extraction of features in a severely defocused image is the fundamental problem of a phase-difference auto-focusing system. In order to solve this problem, a multi-resolution feature extraction algorithm is proposed. Given the extracted features, the proposed auto-focusing system can provide the ideal focusing position using phase correlation matching. The proposed auto-focusing (AF) algorithm consists of four steps: (i) acquisition of left and right images using AF points in the region-of-interest; (ii) feature extraction in the left image under low illumination and out-of-focus blur; (iii) the generation of two feature images using the phase difference between the left and right images; and (iv) estimation of the phase shifting vector using phase correlation matching. Since the proposed system accurately estimates the phase difference in the out-of-focus blurred image under low illumination, it can provide faster, more robust auto focusing than existing systems.
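
    Phase correlation matching itself is compact enough to sketch: normalizing the cross-power spectrum of two signals discards amplitude and leaves only the phase ramp, whose inverse FFT is an impulse at the displacement. The 1-D stand-in below (a random signal and its circular shift, in place of the left/right phase-difference images) is purely illustrative.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the shift s such that a = roll(b, s), via phase
    correlation: normalize the cross-power spectrum so only phase
    (i.e., shift) information remains, then locate the impulse."""
    FA, FB = np.fft.fft(a), np.fft.fft(b)
    R = FA * np.conj(FB)
    R /= np.abs(R) + 1e-12               # keep phase only
    impulse = np.fft.ifft(R).real
    shift = int(np.argmax(impulse))
    return shift if shift <= len(a) // 2 else shift - len(a)

rng = np.random.default_rng(0)
left = rng.normal(size=256)
right = np.roll(left, 9)                 # simulated phase-difference pair
print("estimated shift:", phase_correlation_shift(right, left))  # -> 9
```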

  3. A new automated spectral feature extraction method and its application in spectral classification and defective spectra recovery

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Guo, Ping; Luo, A.-Li

    2017-03-01

    Spectral feature extraction is a crucial procedure in automated spectral analysis. This procedure starts from the spectral data and produces informative and non-redundant features, facilitating the subsequent automated processing and analysis with machine-learning and data-mining techniques. In this paper, we present a new automated feature extraction method for astronomical spectra, with application in spectral classification and defective spectra recovery. The basic idea of our approach is to train a deep neural network to extract features of spectra with different levels of abstraction in different layers. The deep neural network is trained with a fast layer-wise learning algorithm in an analytical way without any iterative optimization procedure. We evaluate the performance of the proposed scheme on real-world spectral data. The results demonstrate that our method is superior regarding its comprehensive performance, and the computational cost is significantly lower than that for other methods. The proposed method can be regarded as a new valid alternative general-purpose feature extraction method for various tasks in spectral data analysis.

  4. Antioxidant capacity of leaf extracts from two Stevia rebaudiana Bertoni varieties adapted to cultivation in Mexico.

    PubMed

    Ruiz Ruiz, Jorge Carlos; Moguel Ordoñez, Yolanda Beatriz; Matus Basto, Ángel; Segura Campos, Maira Rubi

    2014-09-12

    The recent introduction of the cultivation of Stevia rebaudiana Bertoni in Mexico has generated interest in its potential use as a non-caloric sweetener, but other properties of this plant require further study. Extracts from two varieties of S. rebaudiana Bertoni adapted to cultivation in Mexico were screened for their content of selected phytochemicals and for antioxidant properties. Total pigment, total phenolic, and flavonoid contents of the extracts ranged between 17.7-24.3 mg/g, 28.7-28.4 mg/g, and 39.3-36.7 mg/g, respectively. The variety "Criolla" exhibited higher contents of pigments and flavonoids. Trolox equivalent antioxidant capacity ranged between 618.5-623.7 mM/mg and the DPPH decolorization assay ranged between 86.4-84.3%; no significant differences were observed between varieties. Inhibition of β-carotene bleaching ranged between 62.3-77.9%, with higher activity in the variety "Criolla". Reducing power ranged between 85.2-86% and the chelating activity ranged between 57.3-59.4% for Cu²⁺ and between 52.2-54.4% for Fe²⁺; no significant differences were observed between varieties. In conclusion, the results of this study showed that polar compounds obtained during the extraction, such as chlorophylls, carotenoids, phenolic compounds and flavonoids, contribute to the measured antioxidative activity. The leaves of S. rebaudiana Bertoni could be used not only as a source of non-caloric sweeteners but also of naturally occurring antioxidants.

  5. Pathologic stratification of operable lung adenocarcinoma using radiomics features extracted from dual energy CT images

    PubMed Central

    Lee, Ho Yun; Sohn, Insuk; Kim, Hye Seung; Son, Ji Ye; Kwon, O Jung; Choi, Joon Young; Lee, Kyung Soo; Shim, Young Mog

    2017-01-01

    Purpose To evaluate the usefulness of surrogate biomarkers as predictors of histopathologic tumor grade and aggressiveness using radiomics data from dual-energy computed tomography (DECT), with the ultimate goal of accomplishing stratification of early-stage lung adenocarcinoma for optimal treatment. Results Pathologic grade was divided into grades 1, 2, and 3. Multinomial logistic regression analysis revealed i-uniformity and 97.5th percentile CT attenuation value as independent significant factors to stratify grade 2 or 3 from grade 1. The AUC value calculated from leave-one-out cross-validation procedure for discriminating grades 1, 2, and 3 was 0.9307 (95% CI: 0.8514–1), 0.8610 (95% CI: 0.7547–0.9672), and 0.8394 (95% CI: 0.7045–0.9743), respectively. Materials and Methods A total of 80 patients with 91 clinically and radiologically suspected stage I or II lung adenocarcinoma were prospectively enrolled. All patients underwent DECT and F-18-fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT, followed by surgery. Quantitative CT and PET imaging characteristics were evaluated using a radiomics approach. Significant features for a tumor aggressiveness prediction model were extracted and used to calculate diagnostic performance for predicting all pathologic grades. Conclusions Quantitative radiomics values from DECT imaging metrics can help predict pathologic aggressiveness of lung adenocarcinoma. PMID:27880938

  6. Image Outlier Detection and Feature Extraction via L1-Norm-Based 2D Probabilistic PCA.

    PubMed

    Ju, Fujiao; Sun, Yanfeng; Gao, Junbin; Hu, Yongli; Yin, Baocai

    2015-12-01

    This paper introduces an L1-norm-based probabilistic principal component analysis model on 2D data (L1-2DPPCA) based on the assumption of the Laplacian noise model. The Laplacian or L1 density function can be expressed as a superposition of an infinite number of Gaussian distributions. Under this expression, a Bayesian inference can be established based on the variational expectation maximization approach. All the key parameters in the probabilistic model can be learned by the proposed variational algorithm. It has experimentally been demonstrated that the newly introduced hidden variables in the superposition can serve as an effective indicator for data outliers. Experiments on some publicly available databases show that the performance of L1-2DPPCA has largely been improved after identifying and removing sample outliers, resulting in more accurate image reconstruction than the existing PCA-based methods. The performance of feature extraction of the proposed method generally outperforms other existing algorithms in terms of reconstruction errors and classification accuracy.

  7. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    PubMed

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

    Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data are limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.

  8. Extraction and evaluation of gas-flow-dependent features from dynamic measurements of gas sensors array

    NASA Astrophysics Data System (ADS)

    Kalinowski, Paweł; Woźniak, Łukasz; Jasiński, Grzegorz; Jasiński, Piotr

    2016-11-01

    Gas analyzers based on gas sensors are devices that enable recognition of various kinds of volatile compounds. They have been continuously developed and investigated for over three decades; however, limitations remain that slow down the adoption of these devices in many applications. The main drawbacks are the lack of selectivity and sensitivity and the limited long-term stability of these devices, caused by drift of the utilized sensors. This implies the necessity of investigations not only into the construction of gas sensors, but also into measurement procedures and methods of analyzing sensor responses that compensate for the sensors' limitations. One field of investigation covers dynamic measurements of sensor or sensor-array responses using flow modulation techniques. Different gas delivery patterns make it possible to extract unique features that improve the stability and selectivity of gas detection systems. In this article, three flow modulation techniques are presented, together with a proposed method for evaluating their usefulness and robustness in systems for detecting environmental pollutants. The results of dynamic measurements of a commercially available TGS sensor array in the presence of nitrogen dioxide and ammonia are shown.

  9. A new feature extraction method for signal classification applied to cord dorsum potential detection

    NASA Astrophysics Data System (ADS)

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
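
    The procedure sketches naturally in a few lines: smooth by convolution, locate the main local maxima, turn each into a coefficient from its amplitude and distance to the dominant maximum, and feed the coefficients to gradient-boosted trees. The kernel width, number of coefficients, and the random stand-in signals and labels below are all assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import GradientBoostingClassifier

def cdp_coefficients(signal, kernel_width=15, n_coeffs=3):
    """Feature extraction in the spirit of the method above: smooth by
    convolution, then score each main local maximum by its amplitude
    and distance to the dominant maximum."""
    kernel = np.ones(kernel_width) / kernel_width
    smooth = np.convolve(signal, kernel, mode="same")
    peaks, props = find_peaks(smooth, height=0)
    heights = props["peak_heights"]
    main = peaks[np.argmax(heights)]
    order = np.argsort(heights)[::-1][:n_coeffs]
    feats = [heights[i] / (1.0 + abs(peaks[i] - main)) for i in order]
    feats += [0.0] * (n_coeffs - len(feats))      # pad if few peaks found
    return np.array(feats)

rng = np.random.default_rng(0)
X = np.array([cdp_coefficients(rng.normal(size=500)) for _ in range(80)])
y = rng.integers(0, 2, 80)                        # placeholder CDP labels
clf = GradientBoostingClassifier().fit(X, y)
print("training accuracy on random data:", clf.score(X, y))
```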

  10. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 2: frequency domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    Knowledge of the structure of a flow is significant for the proper conduct of a number of industrial processes, and two-phase flow regimes can be described using time-series analysis, e.g., in the frequency domain. In this article, classical spectral analysis based on the Fourier transform (FT) and the short-time Fourier transform (STFT) was applied to signals obtained for water-air flow using gamma-ray absorption. The presented method is illustrated using data collected in experiments carried out on a laboratory hydraulic installation with a horizontal pipe of 4.5 m length and 30 mm inner diameter, equipped with two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals. Stochastic signals obtained from the detectors for plug, bubble, and transitional plug-bubble flows were considered in this work. The recorded raw signals were analyzed and several features in the frequency domain were extracted using the autospectral density function (ADF), the cross-spectral density function (CSDF), and the STFT spectrogram. A detailed analysis found that the most promising features for recognizing the flow structure are: the maximum value of the CSDF magnitude, the sum of the CSDF magnitudes over a selected frequency range, and the maximum value of the sum of selected STFT spectrogram amplitudes.
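
    These three frequency-domain features map directly onto SciPy's spectral estimators, as in the sketch below. The simulated detector signals (a shared low-frequency modulation with independent noise, standing in for the two scintillation probes), the band limits, and the STFT window length are assumptions.

```python
import numpy as np
from scipy.signal import csd, stft

fs = 1000.0
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(0)
# Two detector signals: a shared slug-flow-like 3 Hz modulation plus
# independent noise, with a transport delay between the probes.
common = np.sin(2 * np.pi * 3 * t)
x = common + 0.5 * rng.normal(size=t.size)
y = np.roll(common, 40) + 0.5 * rng.normal(size=t.size)

f, Pxy = csd(x, y, fs=fs, nperseg=1024)          # cross-spectral density
mag = np.abs(Pxy)
band = (f >= 1) & (f <= 10)                      # selected frequency range
feat_max = mag.max()                             # max CSDF magnitude
feat_sum = mag[band].sum()                       # band-limited CSDF sum
f_s, t_s, Z = stft(x, fs=fs, nperseg=256)
feat_stft = np.abs(Z).sum(axis=1).max()          # max summed STFT amplitude
print(f"max|CSDF|={feat_max:.3g}, band sum={feat_sum:.3g}, "
      f"STFT feature={feat_stft:.3g}")
```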

  11. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  12. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.

  13. Texture feature extraction based on wavelet transform and gray-level co-occurrence matrices applied to osteosarcoma diagnosis.

    PubMed

    Hu, Shan; Xu, Chao; Guan, Weiqiao; Tang, Yong; Liu, Yana

    2014-01-01

    Osteosarcoma is the most common malignant bone tumor among children and adolescents. In this study, image texture analysis was used to extract texture features from bone CR images to evaluate the recognition rate of osteosarcoma. To obtain the optimal set of features, Sym4 and Db4 wavelet transforms and gray-level co-occurrence matrices (GLCM) were applied to the images, with statistical methods being used to maximize the feature selection. To evaluate the performance of these methods, a support vector machine algorithm was used. The experimental results demonstrated that the Sym4 wavelet had a higher classification accuracy (93.44%) than the Db4 wavelet with respect to osteosarcoma occurrence in the epiphysis, whereas the Db4 wavelet had a higher classification accuracy (96.25%) for osteosarcoma occurrence in the diaphysis. Accuracy, sensitivity, specificity and ROC curve results obtained using the wavelets were all higher than those obtained using features derived from the GLCM method. It is concluded that a set of texture features can be extracted from the wavelets and used in computer-aided osteosarcoma diagnosis systems. In addition, this study confirms that multi-resolution analysis is a useful tool for texture feature extraction during bone CR image processing.

  14. Computer extracted texture features on T2w MRI to predict biochemical recurrence following radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Ginsburg, Shoshana B.; Rusu, Mirabela; Kurhanewicz, John; Madabhushi, Anant

    2014-03-01

    In this study we explore the ability of a novel machine learning approach, in conjunction with computer-extracted features describing prostate cancer morphology on pre-treatment MRI, to predict whether a patient will develop biochemical recurrence within ten years of radiation therapy. Biochemical recurrence, which is characterized by a rise in serum prostate-specific antigen (PSA) of at least 2 ng/mL above the nadir PSA, is associated with increased risk of metastasis and prostate cancer-related mortality. Currently, risk of biochemical recurrence is predicted by the Kattan nomogram, which incorporates several clinical factors to predict the probability of recurrence-free survival following radiation therapy (but has limited prediction accuracy). Semantic attributes on T2w MRI, such as the presence of extracapsular extension and seminal vesicle invasion and surrogate measurements of tumor size, have also been shown to be predictive of biochemical recurrence risk. While the correlations between biochemical recurrence and factors like tumor stage, Gleason grade, and extracapsular spread are well-documented, it is less clear how to predict biochemical recurrence in the absence of extracapsular spread and for small tumors fully contained in the capsule. Computer-extracted texture features, which quantitatively describe tumor micro-architecture and morphology on MRI, have been shown to provide clues about a tumor's aggressiveness. However, while computer-extracted features have been employed for predicting cancer presence and grade, they have not been evaluated in the context of predicting risk of biochemical recurrence. This work seeks to evaluate the role of computer-extracted texture features in predicting risk of biochemical recurrence on a cohort of sixteen patients who underwent pre-treatment 1.5 Tesla (T) T2w MRI. We extract a combination of first-order statistical, gradient, co-occurrence, and Gabor wavelet features from T2w MRI. To identify which of these

  15. [The research on separating and extracting overlapping spectral feature lines in LIBS using damped least squares method].

    PubMed

    Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo

    2015-02-01

    In recent years, the technology of laser-induced breakdown spectroscopy (LIBS) has developed rapidly. As a new kind of material composition detection technology, LIBS can simultaneously detect multiple elements quickly and simply, without any complex sample preparation, and can realize field, in-situ detection of the composition of the sample to be tested. This technology is very promising in many fields. It is very important to separate, fit and extract spectral feature lines in LIBS, which is the cornerstone of spectral feature recognition and of subsequent research on inverting element concentrations. In order to realize effective separation, fitting and extraction of spectral feature lines in LIBS, the initial parameters for spectral line fitting were analyzed and determined before iteration. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which overlapped with another line (Fe I: 427.176 nm), was separated from it and extracted using the damped least squares method. Based on Gauss-Newton iteration, the damped least squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback information after each iteration, in order to prevent the iteration from diverging and to make sure that it converges quickly. The damped least squares method helps to obtain better results when separating, fitting and extracting spectral feature lines and gives more accurate intensity values for these lines: the spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and then the intensity values of the corresponding spectral lines were obtained using the damped least squares method and the least squares method separately. The calibration curves were plotted, which showed the relationship between spectral
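
    Damped least squares is essentially the Levenberg-Marquardt algorithm, which SciPy exposes directly. The sketch below deconvolves two synthetic overlapping lines at the Cr I 427.480 nm and Fe I 427.176 nm positions named above; the Gaussian profiles, noise level, and starting guesses are simplifying assumptions (LIBS lines are often better modeled as Lorentzian or Voigt).

```python
import numpy as np
from scipy.optimize import least_squares

def two_gaussians(p, x):
    a1, c1, w1, a2, c2, w2 = p
    return (a1 * np.exp(-((x - c1) / w1) ** 2) +
            a2 * np.exp(-((x - c2) / w2) ** 2))

# Synthetic overlapping lines near Cr I 427.480 nm and Fe I 427.176 nm.
x = np.linspace(426.8, 427.9, 400)
true = [1.0, 427.480, 0.08, 0.7, 427.176, 0.08]
rng = np.random.default_rng(0)
spectrum = two_gaussians(true, x) + 0.02 * rng.normal(size=x.size)

# Levenberg-Marquardt = damped least squares: the damping factor blends
# gradient-descent and Gauss-Newton steps and is adapted each iteration.
fit = least_squares(lambda p: two_gaussians(p, x) - spectrum,
                    x0=[0.8, 427.45, 0.1, 0.5, 427.2, 0.1], method="lm")
a_cr, c_cr = fit.x[0], fit.x[1]
print(f"extracted Cr I line: center {c_cr:.3f} nm, intensity {a_cr:.2f}")
```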

  16. Fault feature extraction of planet gear in wind turbine gearbox based on spectral kurtosis and time wavelet energy spectrum

    NASA Astrophysics Data System (ADS)

    Kong, Yun; Wang, Tianyang; Li, Zheng; Chu, Fulei

    2017-01-01

    Planetary transmissions play a vital role in wind turbine drivetrains, and their fault diagnosis has been an important and challenging issue. Owing to the complicated and coupled vibration sources, time-variant vibration transfer path, and heavy background noise masking effect, the vibration signal of a planet gear in a wind turbine gearbox exhibits several unique characteristics: complex frequency components, low signal-to-noise ratio, and weak fault features. In this sense, the periodic impulsive components induced by a localized defect are hard to extract, and fault detection for planet gears in wind turbines remains a challenging research task. Aiming to extract the fault feature of the planet gear effectively, we propose a novel feature extraction method based on spectral kurtosis and the time wavelet energy spectrum (SK-TWES). Firstly, the spectral kurtosis (SK) and kurtogram of the raw vibration signals are computed and used to select the optimal filtering parameters for the subsequent band-pass filtering. Then, band-pass filtering is applied to extrude the periodic transient impulses, using the optimal frequency band in which the corresponding SK value is maximal. Finally, time wavelet energy spectrum analysis is performed on the filtered signal, selecting the Morlet wavelet as the mother wavelet because of its high similarity to the impulsive components. Experimental signals collected from a wind turbine gearbox test rig demonstrate that the proposed method is effective for feature extraction and fault diagnosis of a planet gear with a localized defect.
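
    A compressed sketch of the SK-TWES chain on a simulated signal: an STFT-based impulsiveness score stands in for the full kurtogram when picking the filter band, a Butterworth band-pass isolates the resonance, and a hand-rolled real Morlet kernel supplies the time wavelet energy. The simulated impulse train, band half-width, and kernel length are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, stft

fs = 12000
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(0)
# Simulated planet-gear signal: periodic impulses exciting a 3 kHz
# resonance, buried in heavy background noise.
impulses = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 0.05) < 0.002)
signal = impulses + 0.5 * rng.normal(size=t.size)

# 1) A spectral-kurtosis-like score from the STFT picks the most
#    impulsive frequency band (stand-in for the full kurtogram).
f, _, Z = stft(signal, fs=fs, nperseg=256)
P = np.abs(Z) ** 2
sk = P.var(axis=1) / P.mean(axis=1) ** 2   # impulsiveness per frequency bin
fc = f[np.argmax(sk[1:-1]) + 1]            # skip DC and Nyquist bins

# 2) Band-pass filter around the selected center frequency.
lo, hi = max(fc - 500, 1) / (fs / 2), min(fc + 500, fs / 2 - 1) / (fs / 2)
b, a = butter(4, [lo, hi], btype="band")
filtered = filtfilt(b, a, signal)

# 3) Time wavelet energy with a real Morlet kernel of matched frequency.
tw = np.arange(-0.005, 0.005, 1 / fs)
morlet = np.exp(-((tw * fc / 2) ** 2)) * np.cos(2 * np.pi * fc * tw)
energy = np.convolve(filtered, morlet, mode="same") ** 2
print(f"selected band center: {fc:.0f} Hz (resonance simulated at 3000 Hz)")
```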

  17. Length-adaptive graph search for automatic segmentation of pathological features in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Keller, Brenton; Cunefare, David; Grewal, Dilraj S.; Mahmoud, Tamer H.; Izatt, Joseph A.; Farsiu, Sina

    2016-07-01

    We introduce a metric in graph search and demonstrate its application for segmenting retinal optical coherence tomography (OCT) images of macular pathology. Our proposed "adjusted mean arc length" (AMAL) metric is an adaptation of the lowest mean arc length search technique for automated OCT segmentation. We compare this method to Dijkstra's shortest path algorithm, which we utilized previously in our popular graph theory and dynamic programming segmentation technique. As an illustrative example, we show that AMAL-based length-adaptive segmentation outperforms the shortest path in delineating the retina/vitreous boundary of patients with full-thickness macular holes when compared with expert manual grading.

  18. Artificial immune system based on adaptive clonal selection for feature selection and parameters optimisation of support vector machines

    NASA Astrophysics Data System (ADS)

    Sadat Hashemipour, Maryam; Soleimani, Seyed Ali

    2016-01-01

    Artificial immune system (AIS) algorithms based on the clonal selection method can be defined as soft computing methods, inspired by the theoretical immune system, for solving science and engineering problems. The support vector machine (SVM) is a popular pattern classification method with many diverse applications. Kernel parameter setting in the SVM trai