Science.gov

Sample records for adaptive feature extraction

  1. Adaptive feature extraction expert

    SciTech Connect

    Yuschik, M.

    1983-01-01

    The identification of discriminatory features places an upper bound on the recognition rate of any automatic speech recognition (ASR) system. One way to structure the extraction of features is to construct an expert system which applies a set of rules to identify particular properties of the speech patterns. However, these patterns vary for an individual speaker and from speaker to speaker, so another expert is actually needed to learn the new variations. The author investigates the problem by using sets of discriminatory features suggested by a feature generation expert, improving the selectivity of these features with a training expert, and finally developing a minimal spanning feature set with a statistical selection expert. 12 references.

  2. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step only detects noisy parts of the iris region, and features extracted from them are excluded in the matching step. However, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous work. This paper investigates the effect of the shape adaptive wavelet transform and the shape adaptive Gabor-wavelet transform for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  3. Adaptive spectral window sizes for extraction of diagnostic features from optical spectra

    NASA Astrophysics Data System (ADS)

    Kan, Chih-Wen; Lee, Andy Y.; Nieman, Linda T.; Sokolov, Konstantin; Markey, Mia K.

    2010-07-01

    We present an approach to adaptively adjust the spectral window sizes for feature extraction from optical spectra. Previous studies extracted features from spectral windows of a fixed width. In our algorithm, piecewise linear regression is used to adaptively adjust the window sizes: each window is grown to the maximum size that still admits a reasonable linear fit to the spectrum. This adaptive windowing technique ensures signal linearity within the defined windows; hence, it retains more diagnostic information while using fewer windows. The method was tested on a data set of diffuse reflectance spectra of oral mucosa lesions. Eight features were extracted from each window. We performed classifications using linear discriminant analysis with cross-validation. Using windowing techniques results in better classification performance than not using windowing. The area under the receiver-operating-characteristic curve for the windowing techniques was greater than for a nonwindowing technique, both for normal versus mild dysplasia (MD) plus severe high-grade dysplasia or carcinoma (SD) (MD+SD) and for benign versus MD+SD. Although adaptive and fixed-size windowing perform similarly, adaptive windowing uses significantly fewer windows than fixed-size windowing (8 versus 16 windows per spectrum). Because adaptive windows retain most diagnostic information while reducing the number of windows needed for feature extraction, our results suggest that the method isolates unique diagnostic features in optical spectra.
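
    The windowing step lends itself to a compact sketch. A minimal illustration of the grow-until-nonlinear idea, assuming hypothetical r2_min and min_size parameters (the paper's exact goodness-of-fit criterion may differ):

    ```python
    import numpy as np

    def adaptive_windows(wavelengths, spectrum, r2_min=0.99, min_size=4):
        """Split a spectrum into maximal windows that are well fit by a line."""
        windows, start = [], 0
        n = len(spectrum)
        while start < n - 1:
            end = min(start + min_size, n)
            # Grow the window while a straight line still explains the segment.
            while end < n:
                x, y = wavelengths[start:end + 1], spectrum[start:end + 1]
                slope, intercept = np.polyfit(x, y, 1)
                resid = y - (slope * x + intercept)
                ss_tot = np.sum((y - y.mean()) ** 2)
                r2 = 1.0 - np.sum(resid ** 2) / ss_tot if ss_tot > 0 else 1.0
                if r2 < r2_min:
                    break
                end += 1
            windows.append((start, end))   # half-open index range [start, end)
            start = end
        return windows
    ```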

  4. Feature extraction using adaptive multiwavelets and synthetic detection index for rotor fault diagnosis of rotating machinery

    NASA Astrophysics Data System (ADS)

    Lu, Na; Xiao, Zhihuai; Malik, O. P.

    2015-02-01

    State identification to diagnose the condition of rotating machinery is often converted to a classification problem over values of non-dimensional symptom parameters (NSPs). To improve the sensitivity of the NSPs to changes in machine condition, a novel feature extraction method based on adaptive multiwavelets and the synthetic detection index (SDI) is proposed in this paper. Based on the SDI maximization principle, optimal multiwavelets are found by genetic algorithms (GAs) searching an adaptive multiwavelets library and are used for extracting fault features from vibration signals. With the optimal multiwavelets, more sensitive NSPs can be extracted. To examine the effectiveness of the optimal multiwavelets, conventional methods are used for a comparative study. The obtained NSPs are fed into a K-means classifier to diagnose rotor faults. The results show that the proposed method can effectively improve the sensitivity of the NSPs and achieve a higher discrimination rate for rotor fault diagnosis than the conventional methods.

  5. Human action classification using adaptive key frame interval for feature extraction

    NASA Astrophysics Data System (ADS)

    Lertniphonphan, Kanokphan; Aramvith, Supavadee; Chalidabhongse, Thanarat H.

    2016-01-01

    Human action classification based on adaptive key frame interval (AKFI) feature extraction is presented. Since human movement periods differ, this work considers the action intervals that contain the most intensive and compact motion information. We specify the AKFI by analyzing the amount of motion over time. A key frame is defined as a local minimum of interframe motion, computed by frame differencing between consecutive frames. Once key frames are detected, the features within a segmented period are encoded by an adaptive motion history image and a key pose history image. The action representation consists of the local orientation histogram of the features during the AKFI. Experimental results on the Weizmann, KTH, and UT-Interaction datasets demonstrate that the features can effectively classify actions and can classify irregular cases of walking better than other well-known algorithms.
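
    The key-frame rule (local minima of inter-frame motion) is easy to prototype. A minimal sketch, assuming grayscale frames as 2-D numpy arrays; the paper's smoothing and segmentation details are omitted:

    ```python
    import numpy as np

    def detect_key_frames(frames):
        """Key frames = local minima of inter-frame motion energy."""
        # Amount of motion between consecutive frames (frame differencing).
        motion = np.array([
            np.abs(frames[i + 1].astype(float) - frames[i].astype(float)).sum()
            for i in range(len(frames) - 1)
        ])
        # A frame is a key frame when its motion is a local minimum.
        keys = [i for i in range(1, len(motion) - 1)
                if motion[i] <= motion[i - 1] and motion[i] <= motion[i + 1]]
        return keys, motion
    ```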

  6. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and dispersion of the impulsive feature frequency band, which pose a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in feature extraction of fault information, the theory also confronts inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, going beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved by the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which combines k-nearest-neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further ensure sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority over the state-of-the-art method in the field is demonstrated on vibration signals from an experimental aircraft engine bearing rig.

  7. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli

    PubMed Central

    Kumar, Neeraj

    2016-01-01

    The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function—perception—are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, then subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions. PMID:26823516

  8. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    PubMed Central

    Al-batah, Mohammad Subhi; Mat Isa, Nor Ashidi; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., the Pap smear and liquid-based cytology (LBC)) for cervical cancer screening are time-consuming, depend on the skill of the cytopathologist, and are thus rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, an automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called the multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in parallel to produce a model with a multi-input multi-output structure. The system is capable of classifying cervical cell images into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The experimental results show that the AFE algorithm is as effective as manual extraction by human experts, while the proposed MANFIS produces good classification performance with 94.2% accuracy. PMID:24707316

  9. Adaptive feature extraction techniques for subpixel target detections in hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Yuen, Peter W. T.; Bishop, Gary J.

    2004-12-01

    Most target detection algorithms employed in hyperspectral remote sensing rely on a measurable difference between the spectral signature of the target and the background. Matched filter techniques that use a set of library spectra as filters for target detection are often found to be unsatisfactory because of material variability and atmospheric effects in field data. The aim of this paper is to report an algorithm which extracts features directly from the scene to act as matched filters for target detection. Methods based upon spectral unmixing using geometric simplex volume maximisation (SVM) and independent component analysis (ICA) were employed to generate features of the scene. Target-like and background-like features are then differentiated, and automatically selected, from the endmember set of the unmixing result according to their statistics. Anomalies are then detected from the selected endmember set, and their corresponding spectral characteristics are subsequently extracted from the scene, serving as a bank of matched filters for detection. This method, given the acronym SAFED, has a number of advantages for target detection compared to previous techniques which use the orthogonal subspace of the background features. This paper reports the detection capability of the new technique using an example simulated hyperspectral scene. Similar results using hyperspectral military data show high detection accuracy with negligible false alarms. Further potential applications of this technique for false alarm rate (FAR) reduction via multiple approach fusion (MAF), and as a means of thresholding the anomaly detection technique, are outlined.

  10. EEG-Based BCI System Using Adaptive Features Extraction and Classification Procedures.

    PubMed

    Mondini, Valeria; Mangia, Anna Lisa; Cappello, Angelo

    2016-01-01

    Motor imagery is a common control strategy in EEG-based brain-computer interfaces (BCIs). However, voluntary control of sensorimotor (SMR) rhythms by imagining a movement can be unintuitive and demanding, and usually requires a varying amount of user training. To speed up the training process, a whole class of BCI systems has been proposed that provides feedback as early as possible while continuously adapting the underlying classifier model. The present work describes a cue-paced, EEG-based BCI system using motor imagery that falls within this category. Specifically, our adaptive strategy includes a simple scheme based on a common spatial pattern (CSP) method and support vector machine (SVM) classification. The system's efficacy was proved by online testing on 10 healthy participants. In addition, we suggest some features we implemented to improve the system's "flexibility" and "customizability," namely, (i) a flexible training session, (ii) an unbalancing in the training conditions, and (iii) the use of adaptive thresholds when giving feedback. PMID:27635129
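
    The CSP-plus-SVM pipeline named in the abstract can be condensed to a few lines. A sketch, assuming two classes of band-pass-filtered EEG trials shaped (trials, channels, samples); the adaptive retraining and threshold logic of the paper are omitted:

    ```python
    import numpy as np
    from scipy.linalg import eigh
    from sklearn.svm import SVC

    def csp_filters(trials_a, trials_b, n_pairs=3):
        """Common spatial patterns via a generalized eigendecomposition."""
        def mean_cov(trials):
            return np.mean([np.cov(t) for t in trials], axis=0)
        ca, cb = mean_cov(trials_a), mean_cov(trials_b)
        # Solve ca w = lambda (ca + cb) w; the extreme eigenvectors maximize
        # the variance ratio between the two classes.
        vals, vecs = eigh(ca, ca + cb)
        order = np.argsort(vals)
        picks = np.r_[order[:n_pairs], order[-n_pairs:]]
        return vecs[:, picks].T                       # (2*n_pairs, channels)

    def log_var_features(trials, w):
        """Log-variance of spatially filtered signals, the usual CSP feature."""
        return np.array([np.log(np.var(w @ t, axis=1)) for t in trials])

    # Hypothetical usage: X_a, X_b are the motor-imagery trials of each class.
    # w = csp_filters(X_a, X_b)
    # clf = SVC(kernel="linear").fit(
    #     np.vstack([log_var_features(X_a, w), log_var_features(X_b, w)]),
    #     np.r_[np.zeros(len(X_a)), np.ones(len(X_b))])
    ```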

  11. Vertical Feature Mask Feature Classification Flag Extraction

    Atmospheric Science Data Center

    2013-03-28

      This routine demonstrates extraction of the individual fields encoded in a CALIPSO Lidar Level 2 Vertical Feature Mask feature classification flag value. It is written in Interactive Data Language (IDL).
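
      The ASDC routine itself is written in IDL, but the underlying operation is plain bit masking. A Python sketch of the same idea, assuming the documented CALIPSO convention that the feature type occupies the three least-significant bits of each 16-bit flag value (consult the product documentation for the authoritative layout):

      ```python
      import numpy as np

      # Hypothetical 16-bit feature classification flag values.
      flags = np.array([26125, 27661, 1], dtype=np.uint16)

      feature_type = flags & 0b111         # bits 1-3: feature type (e.g., cloud)
      type_qa      = (flags >> 3) & 0b11   # bits 4-5: feature type QA
      phase        = (flags >> 5) & 0b11   # bits 6-7: ice/water phase

      print(feature_type, type_qa, phase)
      ```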

  12. A new adaptive algorithm for automated feature extraction in exponentially damped signals for health monitoring of smart structures

    NASA Astrophysics Data System (ADS)

    Qarib, Hossein; Adeli, Hojjat

    2015-12-01

    In this paper, the authors introduce a new adaptive signal processing technique for feature extraction and parameter estimation in noisy, exponentially damped signals. The iterative three-stage method is based on the adroit integration of the strengths of parametric and nonparametric methods such as the multiple signal classification, matrix pencil, and empirical mode decomposition algorithms. The first stage is a new adaptive filtration or noise-removal scheme. The second stage is a hybrid parametric-nonparametric signal parameter estimation technique based on output-only system identification. The third stage is optimization of the estimated parameters using a combination of the primal-dual path-following interior point algorithm and a genetic algorithm. The methodology is evaluated using a synthetic signal and a signal obtained experimentally from transverse vibrations of a steel cantilever beam. The method is successful in estimating the frequencies accurately, and it also estimates the damping exponents. The proposed adaptive filtration method does not include any frequency-domain manipulation; consequently, the time-domain signal is not affected by frequency-domain and inverse transformations.

  13. Finite State Machine with Adaptive Electromyogram (EMG) Feature Extraction to Drive Meal Assistance Robot

    NASA Astrophysics Data System (ADS)

    Zhang, Xiu; Wang, Xingyu; Wang, Bei; Sugi, Takenao; Nakamura, Masatoshi

    Surface electromyogram (EMG) signals from the elbow, wrist and hand have been widely used as inputs to multifunction prostheses for many years. However, for patients with high-level limb deficiencies, muscle activities in the upper limbs are not strong enough to be used as control signals. In this paper, EMG from the lower limbs is acquired and applied to drive a meal assistance robot. An onset detection method with an adaptive threshold based on EMG power is proposed to recognize different muscle contractions. Predefined control commands are output by a finite state machine (FSM) and applied to operate the robot. The performance of EMG control is compared with joystick control using both objective and subjective indices. The results show that the FSM provides the user with an easy-to-perform control strategy, which successfully operates robots with complicated control commands from a limited set of muscle motions. The high accuracy and comfort of the EMG-controlled meal assistance robot make it feasible for users with upper-limb motor disabilities.
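
    Both ingredients, power-based onset detection and the FSM, are simple to outline. A sketch with hypothetical thresholds, states, and transitions (the paper's actual command set differs):

    ```python
    import numpy as np

    def detect_onsets(emg, fs, win=0.1, k=3.0, baseline_s=1.0):
        """Onset detection with an adaptive threshold on short-time EMG power.

        The threshold adapts to the recording as mean + k * std of the
        baseline power; the paper's exact scheme may differ.
        """
        n = int(win * fs)
        power = np.convolve(emg ** 2, np.ones(n) / n, mode="same")
        base = power[: int(baseline_s * fs)]         # assume a quiet start
        thresh = base.mean() + k * base.std()
        return power > thresh                        # boolean activity mask

    # A toy finite state machine mapping muscle events to robot commands.
    # States and transitions here are illustrative, not the paper's actual ones.
    TRANSITIONS = {
        ("idle", "short_burst"): "select_dish",
        ("select_dish", "short_burst"): "select_dish",   # cycle through dishes
        ("select_dish", "long_burst"): "scoop_and_feed",
        ("scoop_and_feed", "short_burst"): "idle",
    }

    def step(state, event):
        return TRANSITIONS.get((state, event), state)    # ignore unknown events
    ```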

  14. Feature extraction through LOCOCODE.

    PubMed

    Hochreiter, S; Schmidhuber, J

    1999-04-01

    Low-complexity coding and decoding (LOCOCODE) is a novel approach to sensory coding and unsupervised learning. Unlike previous methods, it explicitly takes into account the information-theoretic complexity of the code generator. It computes lococodes that convey information about the input data and can be computed and decoded by low-complexity mappings. We implement LOCOCODE by training autoassociators with flat minimum search, a recent, general method for discovering low-complexity neural nets. It turns out that this approach can unmix an unknown number of independent data sources by extracting a minimal number of low-complexity features necessary for representing the data. Experiments show that unlike codes obtained with standard autoencoders, lococodes are based on feature detectors, never unstructured, usually sparse, and sometimes factorial or local (depending on statistical properties of the data). Although LOCOCODE is not explicitly designed to enforce sparse or factorial codes, it extracts optimal codes for difficult versions of the "bars" benchmark problem, whereas independent component analysis (ICA) and principal component analysis (PCA) do not. It produces familiar, biologically plausible feature detectors when applied to real-world images and codes with fewer bits per pixel than ICA and PCA. Unlike ICA, it does not need to know the number of independent sources. As a preprocessor for a vowel recognition benchmark problem, it sets the stage for excellent classification performance. Our results reveal an interesting, previously ignored connection between two important fields: regularizer research and ICA-related research. They may represent a first step toward unification of regularization and unsupervised learning.

  15. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  16. A feature extraction method of the particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization for Brillouin scattering spectra

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui

    2016-10-01

    A novel particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. Firstly, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm. Based on the current iteration number and the fitness values of the particles, the algorithm changes the weight coefficient and adjusts the speed at which the search space is traversed, so the local optimization ability is enhanced. Secondly, a logical self-mapping chaotic search is carried out using chaos optimization within the particle swarm optimization algorithm, which makes the algorithm jump out of local optima. The novel algorithm is compared with the finite element analysis-Levenberg-Marquardt algorithm, the particle swarm optimization-Levenberg-Marquardt algorithm, and the basic particle swarm optimization algorithm by changing the linewidth, the signal-to-noise ratio, and the linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs, and linear weight ratios. Therefore, this algorithm can be applied to distributed optical fiber sensing systems based on Brillouin optical time-domain reflection, effectively improving the accuracy of Brillouin frequency shift extraction.
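
    A compact sketch of the two ideas, an inertia weight that adapts to iteration count and fitness plus a logistic-map chaotic restart on stagnation, applied to a toy Brillouin-like spectrum fit. The constants and update rules here are illustrative, not the paper's exact ones:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pso_adaptive(f, bounds, n=30, iters=200, w_max=0.9, w_min=0.4):
        lo, hi = bounds
        x = rng.uniform(lo, hi, n)
        v = np.zeros(n)
        p_best = x.copy()
        p_val = np.array([f(xi) for xi in x])
        g = p_best[p_val.argmin()]
        stall, last_best = 0, p_val.min()
        for t in range(iters):
            fit = np.array([f(xi) for xi in x])
            improved = fit < p_val
            p_best[improved], p_val[improved] = x[improved], fit[improved]
            g = p_best[p_val.argmin()]
            # Adaptive inertia: decreases with iterations, lower still for
            # particles already better than the swarm average.
            w = w_max - (w_max - w_min) * t / iters
            w = np.where(fit <= fit.mean(), w * 0.75, w)
            r1, r2 = rng.random(n), rng.random(n)
            v = w * v + 2.0 * r1 * (p_best - x) + 2.0 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            # Chaotic restart when the global best stagnates.
            if p_val.min() >= last_best - 1e-12:
                stall += 1
            else:
                stall, last_best = 0, p_val.min()
            if stall >= 20:
                z = rng.random(n)
                for _ in range(10):
                    z = 4.0 * z * (1.0 - z)      # logistic map, chaotic regime
                x = lo + z * (hi - lo)
                stall = 0
        return g, p_val.min()

    # Example: recover the centre of a Lorentzian-like curve (a stand-in for a
    # Brillouin gain spectrum) by minimizing squared error in the shift.
    freq = np.linspace(10.6, 11.0, 400)
    spec = 1.0 / (1.0 + ((freq - 10.82) / 0.02) ** 2)
    err = lambda s: np.sum((1.0 / (1.0 + ((freq - s) / 0.02) ** 2) - spec) ** 2)
    print(pso_adaptive(err, (10.6, 11.0)))
    ```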

  1. Recursive Feature Extraction in Graphs

    SciTech Connect

    2014-08-14

    ReFeX extracts recursive topological features from graph data. The input is a graph as a CSV file and the output is a CSV file containing feature values for each node in the graph. The features are based on topological counts in the neighborhood of each node, as well as recursive summaries of neighbors' features.
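
    A toy illustration of the recursive aggregation, using networkx and mean/sum summaries (ReFeX itself uses a richer base feature set and prunes correlated features):

    ```python
    import networkx as nx
    import numpy as np

    def recursive_features(g, rounds=2):
        """Base topological counts per node, then recursive neighbor summaries."""
        nodes = list(g.nodes)
        # Base features: degree and local clustering coefficient.
        feats = {v: [g.degree(v), nx.clustering(g, v)] for v in nodes}
        for _ in range(rounds):
            new = {}
            for v in nodes:
                nbrs = list(g.neighbors(v))
                if nbrs:
                    block = np.array([feats[u] for u in nbrs], dtype=float)
                    summary = list(block.mean(axis=0)) + list(block.sum(axis=0))
                else:
                    summary = [0.0] * (2 * len(feats[v]))
                new[v] = feats[v] + summary   # append mean and sum summaries
            feats = new
        return feats

    # Usage on a small graph; feature vectors grow 3x per round (mean + sum).
    print(recursive_features(nx.karate_club_graph())[0][:6])
    ```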

  2. Contour inflections are adaptable features.

    PubMed

    Bell, Jason; Sampasivam, Sinthujaa; McGovern, David P; Meso, Andrew Isaac; Kingdom, Frederick A A

    2014-06-03

    An object's shape is a strong cue for visual recognition. Most models of shape coding emphasize the role of oriented lines and curves for coding an object's shape. Yet inflection points, which occur at the junction of two oppositely signed curves, are ubiquitous features in natural scenes and carry important information about the shape of an object. Using a visual aftereffect in which the perceived shape of a contour is changed following prolonged viewing of a slightly different-shaped contour, we demonstrate a specific aftereffect for a contour inflection. Control conditions show that this aftereffect cannot be explained by adaptation to either the component curves or to the local orientation at the point of inflection. Further, we show that the aftereffect transfers weakly to a compound curve without an inflection, ruling out a general compound curvature detector as an explanation of our findings. We assume however that there are adaptable mechanisms for coding other specific forms of compound curves. Taken together, our findings provide evidence that the human visual system contains specific mechanisms for coding contour inflections, further highlighting their role in shape and object coding.

  3. Features: Real-Time Adaptive Feature and Document Learning for Web Search.

    ERIC Educational Resources Information Center

    Chen, Zhixiang; Meng, Xiannong; Fowler, Richard H.; Zhu, Binhai

    2001-01-01

    Describes Features, an intelligent Web search engine that is able to perform real-time adaptive feature (i.e., keyword) and document learning. Explains how Features learns from users' document relevance feedback and automatically extracts and suggests indexing keywords relevant to a search query, and learns from users' keyword relevance feedback…

  4. Automated Extraction of Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne (Technical Monitor); Haimes, Robert

    2005-01-01

    Computational Fluid Dynamics (CFD) simulations are routinely performed as part of the design process of most fluid handling devices. In order to efficiently and effectively use the results of a CFD simulation, visualization tools are often used. These tools are used in all stages of the CFD simulation, including pre-processing, interim-processing, and post-processing, to interpret the results. Each of these stages requires visualization tools that allow one to examine the geometry of the device, as well as the partial or final results of the simulation. An engineer will typically generate a series of contour and vector plots to better understand the physics of how the fluid is interacting with the physical device. Of particular interest is detecting features such as shocks, re-circulation zones, and vortices (which will highlight areas of stress and loss). As the demand for CFD analyses continues to increase, the need for automated feature extraction capabilities has become vital. In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snapshot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments). Methods must be developed to abstract the feature of interest and display it in a manner that physically makes sense.

  5. Online Feature Extraction Algorithms for Data Streams

    NASA Astrophysics Data System (ADS)

    Ozawa, Seiichi

    Along with the development of network technology and high-performance small devices such as surveillance cameras and smart phones, various kinds of multimodal information (text, images, sound, etc.) are captured in real time and shared among systems through networks. Such information is given to a system as a stream of data. In a person identification system based on face recognition, for example, image frames of a face are captured by a video camera and given to the system for identification. Those face images are considered a stream of data. Therefore, in order to identify a person more accurately under realistic environments, a high-performance feature extraction method for streaming data, one that can autonomously adapt to changes in the data distribution, is needed. In this review paper, we discuss recent trends in online feature extraction for streaming data. A variety of feature extraction methods for streaming data have been proposed recently; due to space limitations, we focus here on incremental principal component analysis.
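
    Incremental PCA updates its basis from mini-batches instead of holding the full dataset in memory. A brief sketch using scikit-learn's IncrementalPCA, one concrete member of the family this review surveys; the data here are random stand-ins:

    ```python
    import numpy as np
    from sklearn.decomposition import IncrementalPCA

    rng = np.random.default_rng(0)
    ipca = IncrementalPCA(n_components=8)

    # Feed the model mini-batches as they arrive from the stream.
    for _ in range(50):
        batch = rng.normal(size=(100, 64))   # e.g., 100 flattened face patches
        ipca.partial_fit(batch)

    # Project a newly captured sample onto the current 8-D feature space.
    features = ipca.transform(rng.normal(size=(1, 64)))
    print(features.shape)                    # (1, 8)
    ```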

  6. Automatic extraction of planetary image features

    NASA Technical Reports Server (NTRS)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.

  7. Feature Extraction Based on Decision Boundaries

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    In this paper, a novel approach to feature extraction for classification is proposed based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly informative feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises from noting that usually only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.

  8. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in robotics. It has also recently been used in biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The probability distribution function (PDF) is a statistical method which is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as the feature extraction method itself, for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
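
    A minimal sketch of the idea, assuming fixed-length frames of a speech signal and a normalized histogram as the empirical PDF (bin count and frame length are hypothetical):

    ```python
    import numpy as np

    def pdf_features(signal, frame_len=400, bins=32):
        """Empirical PDF (normalized histogram) of each frame as a feature vector."""
        n_frames = len(signal) // frame_len
        frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
        edges = np.linspace(signal.min(), signal.max(), bins + 1)
        return np.stack([np.histogram(f, bins=edges, density=True)[0]
                         for f in frames])
    ```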

  9. Guidance in feature extraction to resolve uncertainty

    NASA Astrophysics Data System (ADS)

    Kovalerchuk, Boris; Kovalerchuk, Michael; Streltsov, Simon; Best, Matthew

    2013-05-01

    Automated Feature Extraction (AFE) plays a critical role in image understanding. Often the imagery analysts extract features better than AFE algorithms do, because analysts use additional information. The extraction and processing of this information can be more complex than the original AFE task, and that leads to the "complexity trap". This can happen when the shadow from the buildings guides the extraction of buildings and roads. This work proposes an AFE algorithm to extract roads and trails by using the GMTI/GPS tracking information and older inaccurate maps of roads and trails as AFE guides.

  10. Waveform feature extraction based on tauberian approximation.

    PubMed

    De Figueiredo, R J; Hu, C L

    1982-02-01

    A technique is presented for feature extraction of a waveform y based on its Tauberian approximation, that is, on the approximation of y by a linear combination of appropriately delayed versions of a single basis function x, i.e., y(t) = Σ_{i=1}^{M} a_i x(t − τ_i), where the coefficients a_i and the delays τ_i are adjustable parameters. Considerations in the choice or design of the basis function x are given. The parameters a_i and τ_i, i = 1, ..., M, are retrieved by application of a suitably adapted version of Prony's method to the Fourier transform of the above approximation of y. A subset of the parameters a_i and τ_i, i = 1, ..., M, is used to construct the feature vector, the value of which can be used in a classification algorithm. Application of this technique to the classification of wide-bandwidth radar return signatures is presented. Computer simulations proved successful and are also discussed.
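
    The forward model is easy to state in code. A small sketch that synthesizes y from a basis function with given coefficients and integer sample delays; retrieving a_i and τ_i via Prony's method on the Fourier transform is the harder inverse problem and is omitted:

    ```python
    import numpy as np

    def tauberian_synthesize(x, coeffs, delays, n_out):
        """y[t] = sum_i a_i * x[t - tau_i] for integer sample delays tau_i."""
        y = np.zeros(n_out)
        for a, tau in zip(coeffs, delays):
            end = min(n_out, tau + len(x))
            if tau < n_out:
                y[tau:end] += a * x[: end - tau]
        return y

    # Example: two delayed, scaled copies of a short Gaussian pulse.
    t = np.arange(64)
    pulse = np.exp(-0.5 * ((t - 8) / 2.0) ** 2)[:17]
    y = tauberian_synthesize(pulse, coeffs=[1.0, -0.6], delays=[10, 40], n_out=128)
    ```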

  11. Electronic Nose Feature Extraction Methods: A Review

    PubMed Central

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-01-01

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power density spectrum (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology. PMID:26540056

  12. ECG Feature Extraction using Time Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Nair, Mahesh A.

    The proposed algorithm is a novel method for the feature extraction of ECG beats based on wavelet transforms. A combination of two well-accepted methods, the Pan-Tompkins algorithm and wavelet decomposition, the system is implemented with the help of MATLAB. The focus of this work is to implement an algorithm that can extract the features of ECG beats with high accuracy. The performance of the system is evaluated in a pilot study using the MIT-BIH Arrhythmia database.
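
    A condensed Python analogue of the pipeline, assuming scipy and PyWavelets: band-pass and detect R peaks (a much-simplified stand-in for the full Pan-Tompkins thresholding), then describe each beat by its wavelet decomposition coefficients:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks
    import pywt

    def beat_features(ecg, fs=360, wavelet="db4", level=4):
        # Band-pass 5-15 Hz, the QRS-emphasizing band used by Pan-Tompkins.
        b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
        filt = filtfilt(b, a, ecg)
        # Simplified R-peak detection on the squared derivative.
        energy = np.gradient(filt) ** 2
        peaks, _ = find_peaks(energy, height=energy.mean() + 2 * energy.std(),
                              distance=int(0.25 * fs))
        feats = []
        half = int(0.3 * fs)               # 300 ms window around each R peak
        for p in peaks:
            if half <= p < len(ecg) - half:
                beat = ecg[p - half: p + half]
                coeffs = pywt.wavedec(beat, wavelet, level=level)
                feats.append(np.concatenate(coeffs))
        return peaks, np.array(feats)
    ```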

  13. Error margin analysis for feature gene extraction

    PubMed Central

    2010-01-01

    Background: Feature gene extraction is a fundamental issue in microarray-based biomarker discovery. It is normally treated as an optimization problem of finding the best predictive feature genes that can effectively and stably discriminate distinct types of disease conditions, e.g. tumors and normals. Since gene microarray data normally involve thousands of genes and tens or hundreds of samples, the gene extraction process may fall into local optima if the gene set is optimized according to the maximization of classification accuracy of the classifier built from it. Results: In this paper, we propose a novel gene extraction method based on error margin analysis to optimize the feature genes. The proposed algorithm has been tested on one synthetic dataset and two real microarray datasets, and compared with five existing gene extraction algorithms on each dataset. On the synthetic dataset, the results show that the feature set extracted by our algorithm is the closest to the actual gene set. For the two real datasets, our algorithm is superior in terms of balancing the size and the validation accuracy of the resultant gene set when compared to other algorithms. Conclusion: Because of its distinct features, the error margin analysis method can stably extract the relevant feature genes from microarray data for high-performance classification. PMID:20459827

  14. Facial Feature Extraction Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Viet

    Facial feature extraction is one of the most important processes in face recognition, expression recognition and face detection. The aims of facial feature extraction are eye location, the shape of the eyes, eyebrows, mouth, head boundary, face boundary, chin and so on. The purpose of this paper is to develop an automatic facial feature extraction system, which is able to identify the eye location, the detailed shape of the eyes and mouth, the chin and the inner boundary from facial images. This system not only extracts the location information of the eyes, but also estimates four important points in each eye, which helps us to rebuild the eye shape. For the mouth shape, mouth extraction gives us the mouth location, the two corners of the mouth, and the top and bottom lips. From the inner boundary and the chin, we obtain the face boundary. Based on wavelet features, we can reduce the noise in the input image and detect edge information. In order to extract the eyes, mouth and inner boundary, we combine wavelet features and facial characteristics to design algorithms for finding the midpoint, the eyes' coordinates, four important points of each eye, the mouth's coordinates, four important points of the mouth, the chin coordinate and then the inner boundary. The developed system is tested on Yale Faces and Pedagogy students' faces.

  15. Selective Extraction of Entangled Textures via Adaptive PDE Transform.

    PubMed

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2012-01-01

    Texture and feature extraction is an important research area with a wide range of applications in science and technology. Selective extraction of entangled textures is a challenging task due to spatial entanglement, orientation mixing, and high-frequency overlapping. The partial differential equation (PDE) transform is an efficient method for functional mode decomposition. The present work introduces an adaptive PDE transform algorithm that appropriately thresholds the statistical variance of the local variation of functional modes. The proposed adaptive PDE transform is applied to the selective extraction of entangled textures. Successful separations of human faces, clothes, background, natural landscape, text, forest, camouflaged sniper and neuron skeletons have validated the proposed method.

  16. Large datasets: Segmentation, feature extraction, and compression

    SciTech Connect

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  17. Adaptive feature annotation for large video sensor networks

    NASA Astrophysics Data System (ADS)

    Cai, Yang; Bunn, Andrew; Liang, Peter; Yang, Bing

    2013-10-01

    We present an adaptive feature extraction and annotation algorithm for articulating traffic events from surveillance cameras. We use approximate median filter for moving object detection, motion energy image and convex hull for lane detection, and adaptive proportion models for vehicle classification. It is found that our approach outperforms three-dimensional modeling and scale-independent feature transformation algorithms in terms of robustness. The multiresolution-based video codec algorithm enables a quality-of-service-aware video streaming according to the data traffic. Furthermore, our empirical data shows that it is feasible to use the metadata to facilitate the real-time communication between an infrastructure and a vehicle for safer and more efficient traffic control.
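
    The approximate median filter mentioned above amounts to a one-line background update: nudge each background pixel one step toward the current frame, so the background converges to the temporal median. A sketch under that assumption, with a hypothetical foreground threshold:

    ```python
    import numpy as np

    def update_background(bg, frame, delta=1.0):
        """Approximate median background update; bg and frame are float arrays."""
        return bg + delta * np.sign(frame - bg)   # move 1 step toward the frame

    def moving_mask(bg, frame, thresh=25.0):
        """Foreground = pixels that differ enough from the background model."""
        return np.abs(frame - bg) > thresh
    ```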

  18. Feature extraction for structural dynamics model validation

    SciTech Connect

    Hemez, Francois; Farrar, Charles; Park, Gyuhae; Nishio, Mayuko; Worden, Keith; Takeda, Nobuo

    2010-11-08

    This study focuses on defining and comparing response features that can be used for structural dynamics model validation studies. Features extracted from dynamic responses obtained analytically or experimentally, such as basic signal statistics, frequency spectra, and estimated time-series models, can be used to compare characteristics of structural system dynamics. By comparing response features extracted from experimental data and numerical outputs, validation and uncertainty quantification of a numerical model containing uncertain parameters can be realized. In this study, the applicability of some response features to model validation is first discussed using measured data from a simple test-bed structure and the associated numerical simulations of these experiments. Issues that must be considered include sensitivity, dimensionality, the type of response, and the presence or absence of measurement noise in the response. Furthermore, we illustrate a method for comparing multivariate feature vectors for statistical model validation. Results show that the outlier detection technique using the Mahalanobis distance metric can be an effective and quantifiable technique for selecting appropriate model parameters. However, in this process, one must consider not only the sensitivity of the features being used, but also the correlation of the parameters being compared.
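
    The Mahalanobis-distance outlier check is compact to write down. A sketch, assuming rows of reference are feature vectors from acceptable model runs and candidate is the feature vector under test; the threshold is left to the analyst:

    ```python
    import numpy as np

    def mahalanobis_outlier(reference, candidate, threshold):
        """Distance of `candidate` from the cloud of `reference` feature vectors."""
        mu = reference.mean(axis=0)
        cov = np.cov(reference, rowvar=False)
        d = candidate - mu
        dist = float(np.sqrt(d @ np.linalg.solve(cov, d)))
        return dist, dist > threshold    # flag as outlier if beyond threshold
    ```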

  19. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Lovely, David

    1999-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools like iso-surfaces, cuts and streamlines were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one "snap-shot" of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: (1) shocks, (2) vortex cores, (3) regions of recirculation, (4) boundary layers, (5) wakes. Three papers and an initial specification for the FX (Fluid eXtraction toolkit) Programmer's Guide were included. The papers, submitted to the AIAA Computational Fluid Dynamics Conference, are entitled: (1) Using Residence Time for the Extraction of Recirculation Regions, (2) Shock Detection from Computational Fluid Dynamics Results, and (3) On the Velocity Gradient Tensor and Fluid Feature Extraction.

  20. SU-E-J-257: A PCA Model to Predict Adaptive Changes for Head&neck Patients Based On Extraction of Geometric Features From Daily CBCT Datasets

    SciTech Connect

    Chetvertkov, M; Siddiqui, F; Chetty, I; Kim, J; Kumarasiri, A; Liu, C; Gordon, J

    2015-06-15

    Purpose: To use daily cone beam CTs (CBCTs) to develop principal component analysis (PCA) models of anatomical changes in head and neck (H&N) patients and to assess the possibility of using these prospectively in adaptive radiation therapy (ART). Methods: Planning CT (pCT) images of 4 H&N patients were deformed to model several different systematic changes in patient anatomy during the course of radiation therapy (RT). A Pinnacle plugin was used to linearly interpolate the systematic change over the 35-fraction RT course and to generate a set of 35 synthetic CBCTs, each representing the systematic change in patient anatomy at one fraction. Deformation vector fields (DVFs) were acquired between the pCT and the synthetic CBCTs, and random fraction-to-fraction changes were superimposed on the DVFs. A patient-specific PCA model was built using these DVFs containing systematic plus random changes. It was hypothesized that the resulting eigenDVFs (EDVFs) with the largest eigenvalues represent the major anatomical deformations during the course of treatment. Results: For all 4 patients, the PCA model provided different results depending on the type and size of the systematic change in the patient's body. PCA was more successful in capturing the systematic changes early in the treatment course when these were of a larger scale with respect to the random fraction-to-fraction changes in the patient's anatomy. For smaller-scale systematic changes, random changes could completely "hide" the systematic change. Conclusion: The leading EDVF from the patient-specific PCA models could tentatively be identified as a major systematic change during treatment if the systematic change is large enough with respect to random fraction-to-fraction changes. Otherwise, the leading EDVF could not represent systematic changes reliably. This work is expected to facilitate development of population-based PCA models that can be used to prospectively identify significant

  1. Automatic Feature Extraction from Planetary Images

    NASA Technical Reports Server (NTRS)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired, and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and can be applied to arbitrary planetary images.

  2. PCA feature extraction for change detection in multidimensional unlabeled data.

    PubMed

    Kuncheva, Ludmila I; Faithfull, William J

    2014-01-01

    When classifiers are deployed in real-world applications, it is assumed that the distribution of the incoming data matches the distribution of the data used to train the classifier. This assumption is often incorrect, which necessitates some form of change detection or adaptive classification. While there has been a lot of work on change detection based on the classification error monitored over the course of the operation of the classifier, finding changes in multidimensional unlabeled data is still a challenge. Here, we propose to apply principal component analysis (PCA) for feature extraction prior to the change detection. Supported by a theoretical example, we argue that the components with the lowest variance should be retained as the extracted features because they are more likely to be affected by a change. We chose a recently proposed semiparametric log-likelihood change detection criterion that is sensitive to changes in both mean and variance of the multidimensional distribution. An experiment with 35 datasets and an illustration with a simple video segmentation demonstrate the advantage of using extracted features compared to raw data. Further analysis shows that feature extraction through PCA is beneficial, specifically for data with multiple balanced classes.
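
    The recipe is short in code. A sketch that retains the lowest-variance principal components of a reference window and compares variance along them in a new window, a simpler stand-in for the semiparametric log-likelihood criterion the paper adopts:

    ```python
    import numpy as np

    def minor_subspace(reference, k=3):
        """Principal directions with the LOWEST variance on the reference data."""
        mu = reference.mean(axis=0)
        cov = np.cov(reference - mu, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
        return mu, vecs[:, :k], vals[:k]

    def change_score(window, mu, w, ref_vals):
        """Ratio of variances along the minor components; ~1 means no change."""
        proj = (window - mu) @ w
        return proj.var(axis=0) / ref_vals

    # Hypothetical usage:
    # mu, w, ref_vals = minor_subspace(train_window)
    # if np.any(change_score(new_window, mu, w, ref_vals) > 3.0): ...
    ```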

  3. Automated Extraction of Secondary Flow Features

    NASA Technical Reports Server (NTRS)

    Dorney, Suzanne M.; Haimes, Robert

    2005-01-01

    The use of Computational Fluid Dynamics (CFD) has become standard practice in the design and development of the major components used for air and space propulsion. To aid in the post-processing and analysis phase of CFD, many researchers now use automated feature extraction utilities. These tools can be used to detect the existence of such features as shocks, vortex cores, and separation and re-attachment lines. The existence of secondary flow is another feature of significant importance to CFD engineers. Although the concept of secondary flow is relatively well understood, there is no commonly accepted mathematical definition for it. This paper presents a definition for secondary flow and one approach for automatically detecting and visualizing it.

  4. Breast image feature learning with adaptive deconvolutional networks

    NASA Astrophysics Data System (ADS)

    Jamieson, Andrew R.; Drukker, Karen; Giger, Maryellen L.

    2012-03-01

    Feature extraction is a critical component of medical image analysis. Many computer-aided diagnosis approaches employ hand-designed, heuristic features extracted from lesions. An alternative approach is to learn features directly from images. In this preliminary study, we explored the use of Adaptive Deconvolutional Networks (ADN) for learning high-level features in diagnostic breast mass lesion images, with potential application to computer-aided diagnosis (CADx) and content-based image retrieval (CBIR). ADNs (Zeiler et al., 2011) are recently proposed unsupervised, generative hierarchical models that decompose images via convolutional sparse coding and max pooling. We trained the ADNs to learn multiple layers of representation for two breast image data sets from two different modalities (739 full-field digital mammography (FFDM) and 2393 ultrasound images). Feature map calculations were accelerated by use of GPUs. Following Zeiler et al., we applied the Spatial Pyramid Matching (SPM) kernel (Lazebnik et al., 2006) on the inferred feature maps and combined this with a linear support vector machine (SVM) classifier for the task of binary classification between cancer and non-cancer breast mass lesions. Non-linear, local-structure-preserving dimension reduction, Elastic Embedding (Carreira-Perpiñán, 2010), was then used to visualize the SPM kernel output in 2D and qualitatively inspect the learned image relationships. Performance was found to be competitive with current CADx schemes that use human-designed features, e.g., achieving a 0.632+ bootstrap AUC (by case) of 0.83 [0.78, 0.89] for an ultrasound image set (1125 cases).

  5. Munitions related feature extraction from LIDAR data.

    SciTech Connect

    Roberts, Barry L.

    2010-06-01

    The characterization of former military munitions ranges is critical in the identification of areas likely to contain residual unexploded ordnance (UXO). Although these ranges are large, often covering tens of thousands of acres, the actual target areas represent only a small fraction of the sites. The challenge is that many of these sites do not have records indicating locations of former target areas. The identification of target areas is critical in the characterization and remediation of these sites. The Strategic Environmental Research and Development Program (SERDP) and Environmental Security Technology Certification Program (ESTCP) of the DoD have been developing and implementing techniques for the efficient characterization of large munitions ranges. As part of this process, high-resolution LIDAR terrain data sets have been collected over several former ranges. These data sets have been shown to contain information relating to former munitions usage at these ranges, specifically terrain cratering due to high-explosives detonations. The location and relative intensity of crater features can provide information critical in reconstructing the usage history of a range, and indicate areas most likely to contain UXO. We have developed an automated procedure using an adaptation of the Circular Hough Transform for the identification of crater features in LIDAR terrain data. The Circular Hough Transform is highly adept at finding circular features (craters) in noisy terrain data sets. This technique can find features of a specific radius, providing a means of filtering features based on expected scale and of spatially characterizing the identified feature. This method of automated crater identification has been applied to several former munitions ranges with positive results.
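
    A compact sketch of circular-feature detection with a Circular Hough Transform, using OpenCV. In practice the input would be a rendering of the LIDAR terrain; the noise image and radius bounds below are placeholders so the sketch runs:

      import cv2
      import numpy as np

      # A noise image stands in for an 8-bit hillshade rendering of the
      # LIDAR terrain grid so the sketch is runnable.
      relief = (np.random.rand(256, 256) * 255).astype(np.uint8)
      relief = cv2.medianBlur(relief, 5)          # suppress terrain noise

      # Search only the crater scale of interest via minRadius/maxRadius.
      circles = cv2.HoughCircles(
          relief, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
          param1=100, param2=30, minRadius=5, maxRadius=40)

      if circles is not None:
          for x, y, r in np.round(circles[0]).astype(int):
              cv2.circle(relief, (x, y), r, 255, 1)   # mark detected crater rims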

  6. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    2000-01-01

    In the past, feature extraction and identification were interesting concepts, but not required in understanding the physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of a great deal of interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense.

  7. Correlation metric for generalized feature extraction.

    PubMed

    Fu, Yun; Yan, Shuicheng; Huang, Thomas S

    2008-12-01

    Beyond conventional linear and kernel-based feature extraction, we propose in this paper a generalized feature extraction formulation based on the so-called Graph Embedding framework. Two novel correlation-metric-based algorithms are presented based on this formulation. Correlation Embedding Analysis (CEA), which incorporates both correlational mapping and discriminating analysis, boosts the discriminating power by mapping data from a high-dimensional hypersphere onto another low-dimensional hypersphere and preserving the intrinsic neighbor relations with local graph modeling. Correlational Principal Component Analysis (CPCA) generalizes the conventional Principal Component Analysis (PCA) algorithm to the case of data distributed on a high-dimensional hypersphere. Their advantages stem from two facts: 1) they are tailored to normalized data, which are often the outputs of the data preprocessing step, and 2) they are directly designed with the correlation metric, which is shown to be generally better than Euclidean distance for classification purposes. Extensive comparisons with existing algorithms on visual classification experiments demonstrate the effectiveness of the proposed algorithms. PMID:18988954
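
    Since CPCA is described as PCA tailored to data on a hypersphere, one naive approximation is to project samples onto the unit sphere before the eigen-decomposition. This simplification is our assumption, not the authors' exact algorithm:

      import numpy as np
      from sklearn.decomposition import PCA

      def cpca_like(X, n_components=10):
          """PCA on length-normalized samples (unit hypersphere), so component
          directions reflect correlation rather than raw Euclidean scale."""
          Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
          return PCA(n_components=n_components).fit(Xn)

      model = cpca_like(np.random.randn(300, 64))
      print(model.components_.shape)   # (10, 64)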

  8. Classification trees with neural network feature extraction.

    PubMed

    Guo, H; Gelfand, S B

    1992-01-01

    The use of small multilayer nets at the decision nodes of a binary classification tree to extract nonlinear features is proposed. The nets are trained and the tree is grown using a gradient-type learning algorithm in the multiclass case. The method improves on standard classification tree design methods in that it generally produces trees with lower error rates and fewer nodes. It also reduces the problems associated with training large unstructured nets and transfers the problem of selecting the size of the net to the simpler problem of finding a tree of the right size. An efficient tree pruning algorithm is proposed for this purpose. Trees constructed with this method and with the CART method are compared on a waveform recognition problem and a handwritten character recognition problem. The approach demonstrates a significant decrease in error rate and tree size. It also yields comparable error rates and shorter training times than a large multilayer net trained with backpropagation on the same problems.

  9. Most information feature extraction (MIFE) approach for face recognition

    NASA Astrophysics Data System (ADS)

    Zhao, Jiali; Ren, Haibing; Wang, Haitao; Kee, Seokcheol

    2005-03-01

    We present a MIFE (Most Information Feature Extraction) approach, which extracts as much information as possible for the face classification task. In the MIFE approach, a facial image is separated into sub-regions, and each sub-region makes an individual contribution to face recognition. Specifically, each sub-region is subjected to a sub-region-based adaptive gamma (SadaGamma) correction or sub-region-based histogram equalization (SHE) in order to account for different illuminations and expressions. Experimental results show that the proposed SadaGamma/SHE correction provides an efficient de-lighting solution for face recognition. Together, MIFE and SadaGamma/SHE correction achieve a lower error rate in face recognition under different illuminations and expressions.
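
    The paper does not spell out the SadaGamma formula; the sketch below applies one common adaptive-gamma heuristic (gamma chosen so each block's mean maps toward mid-gray) independently per sub-region, as an assumed stand-in:

      import numpy as np

      def subregion_adaptive_gamma(img, grid=(4, 4), eps=1e-6):
          """Per-block gamma correction for a float image in [0, 1]."""
          out = img.copy()
          h, w = img.shape
          bh, bw = h // grid[0], w // grid[1]
          for i in range(grid[0]):
              for j in range(grid[1]):
                  block = img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
                  mean = float(block.mean())
                  # Choose gamma so that mean**gamma is about 0.5 (mid-gray).
                  gamma = np.log(0.5) / np.log(max(mean, eps))
                  out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = block ** gamma
          return out

      corrected = subregion_adaptive_gamma(np.random.rand(128, 128))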

  10. Automated Fluid Feature Extraction from Transient Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert

    1998-01-01

    In the past, feature extraction and identification were interesting concepts, but not required to understand the underlying physics of a steady flow field. This is because the results of the more traditional tools, like iso-surfaces, cuts and streamlines, were more interactive and easily abstracted so they could be represented to the investigator. These tools worked and properly conveyed the collected information at the expense of much interaction. For unsteady flow-fields, the investigator does not have the luxury of spending time scanning only one 'snap-shot' of the simulation. Automated assistance is required in pointing out areas of potential interest contained within the flow. This must not require a heavy compute burden (the visualization should not significantly slow down the solution procedure for co-processing environments like pV3), and methods must be developed to abstract the feature and display it in a manner that physically makes sense. The following is a list of the important physical phenomena found in transient (and steady-state) fluid flow: shocks; vortex cores; regions of recirculation; boundary layers; wakes.

  11. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck problem, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences at the heart valves. Adopting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from 5 types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
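
    A small sketch of the Shannon-envelope step (the DWT denoising stage is omitted); the smoothing window length is an assumption:

      import numpy as np

      def shannon_envelope(x, win=200):
          """Shannon energy envelope of a heart-sound signal.

          Shannon energy -x^2*log(x^2) emphasizes medium-intensity
          components such as murmurs over both noise and loud peaks.
          """
          x = x / (np.max(np.abs(x)) + 1e-12)    # normalize to [-1, 1]
          se = -x**2 * np.log(x**2 + 1e-12)      # Shannon energy per sample
          kernel = np.ones(win) / win            # moving-average smoothing
          return np.convolve(se, kernel, mode="same")

      env = shannon_envelope(np.random.randn(8000))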

  12. Embedded prediction in feature extraction: application to single-trial EEG discrimination.

    PubMed

    Hsu, Wei-Yen

    2013-01-01

    In this study, an analysis system embedding neuro-fuzzy prediction in feature extraction is proposed for brain-computer interface (BCI) applications. Wavelet-fractal features combined with neuro-fuzzy predictions are applied for feature extraction in motor imagery (MI) discrimination. The features are extracted from electroencephalography (EEG) signals recorded from participants performing left and right MI. Time-series predictions are performed by training 2 adaptive neuro-fuzzy inference systems (ANFIS) for the left and right MI data, respectively. Features are then calculated from the difference in the multi-resolution fractal feature vector (MFFV) between the predicted and actual signals through a window of EEG signals. Finally, a support vector machine is used for classification. The performance of the proposed method is estimated in comparison with the linear adaptive autoregressive (AAR) model and AAR time-series prediction on 6 participants from 2 data sets. The results indicate that the proposed method is promising in MI classification. PMID:23248335

  13. Structural features determining thermal adaptation of esterases.

    PubMed

    Kovacic, Filip; Mandrysch, Agathe; Poojari, Chetan; Strodel, Birgit; Jaeger, Karl-Erich

    2016-02-01

    The adaptation of microorganisms to extreme living temperatures requires the evolution of enzymes with a high catalytic efficiency under these conditions. Such extremophilic enzymes represent valuable tools to study the relationship between protein stability, dynamics and function. Nevertheless, the multiple effects of temperature on the structure and function of enzymes are still poorly understood at the molecular level. Our analysis of four homologous esterases isolated from bacteria living at temperatures ranging from 10°C to 70°C suggested an adaptation route for the modulation of protein thermal properties through the optimization of local flexibility at the protein surface. While the biochemical properties of the recombinant esterases are conserved, their thermal properties have evolved to resemble those of the respective bacterial habitats. Molecular dynamics simulations at temperatures around the optimal temperatures for enzyme catalysis revealed temperature-dependent flexibility of four surface-exposed loops. While the flexibility of some loops increased with raising temperature and decreased with lowering temperature, as expected for loops contributing to protein stability, other loops showed an increase in flexibility upon both lowering and raising the temperature. Preserved flexibility in these regions seems to be important for proper enzyme function. The structural differences of these four loops, distant from the active site, are substantially larger than for the overall protein structure, indicating that amino acid exchanges within these loops occurred more frequently, thereby allowing the bacteria to tune atomic interactions for different temperature requirements without interfering with the overall enzyme function.

  14. Feature Adaptive Sampling for Scanning Electron Microscopy

    PubMed Central

    Dahmen, Tim; Engstler, Michael; Pauly, Christoph; Trampert, Patrick; de Jonge, Niels; Mücklich, Frank; Slusallek, Philipp

    2016-01-01

    A new method for the image acquisition in scanning electron microscopy (SEM) was introduced. The method used adaptively increased pixel-dwell times to improve the signal-to-noise ratio (SNR) in areas of high detail. In areas of low detail, the electron dose was reduced on a per pixel basis, and a posteriori image processing techniques were applied to remove the resulting noise. The technique was realized by scanning the sample twice. The first, quick scan used small pixel-dwell times to generate a first, noisy image using a low electron dose. This image was analyzed automatically, and a software algorithm generated a sparse pattern of regions of the image that require additional sampling. A second scan generated a sparse image of only these regions, but using a highly increased electron dose. By applying a selective low-pass filter and combining both datasets, a single image was generated. The resulting image exhibited a factor of ≈3 better SNR than an image acquired with uniform sampling on a Cartesian grid and the same total acquisition time. This result implies that the required electron dose (or acquisition time) for the adaptive scanning method is a factor of ten lower than for uniform scanning. PMID:27150131

  15. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.

  16. 3D Feature Extraction for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Silver, Deborah

    1996-01-01

    Visualization techniques provide tools that help scientists identify observed phenomena in scientific simulation. To be useful, these tools must allow the user to extract regions, classify and visualize them, abstract them for simplified representations, and track their evolution. Object Segmentation provides a technique to extract and quantify regions of interest within these massive datasets. This article explores basic algorithms to extract coherent amorphous regions from two-dimensional and three-dimensional scalar unstructured grids. The techniques are applied to datasets from Computational Fluid Dynamics and those from Finite Element Analysis.

  17. Feature extraction from Doppler ultrasound signals for automated diagnostic systems.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2005-11-01

    This paper presented the assessment of feature extraction methods used in automated diagnosis of arterial diseases. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Different feature extraction methods were used to obtain feature vectors from ophthalmic and internal carotid arterial Doppler signals. In addition, the problem of selecting relevant features among those available for the classification of Doppler signals was dealt with. Multilayer perceptron neural networks (MLPNNs) with different inputs (feature vectors) were used for diagnosis of ophthalmic and internal carotid arterial diseases. The assessment of feature extraction methods was performed by taking into consideration the performances of the MLPNNs, which were evaluated by the convergence rates (number of training epochs) and the total classification accuracies. Finally, some conclusions were drawn concerning the efficiency of the discrete wavelet transform as a feature extraction method used for the diagnosis of ophthalmic and internal carotid arterial diseases. PMID:16278106

  18. Extracting textural features from tactile sensors.

    PubMed

    Edwards, J; Lawry, J; Rossiter, J; Melhuish, C

    2008-09-01

    This paper describes an experiment to quantify texture using an artificial finger equipped with a microphone to detect frictional sound. Using a microphone to record tribological data is a biologically inspired approach that emulates the Pacinian corpuscle. Artificial surfaces were created to constrain the subsequent analysis to specific textures. Recordings of the artificial surfaces were made to create a library of frictional sounds for data analysis. These recordings were mapped to the frequency domain using fast Fourier transforms for direct comparison, manipulation and quantifiable analysis. Numerical features such as modal frequency and average value were calculated to analyze the data and compared with attributes generated from principal component analysis (PCA). It was found that numerical features work well for highly constrained data but cannot classify multiple textural elements. PCA groups textures according to a natural similarity. Classification of the recordings using k nearest neighbors shows a high accuracy for PCA data. Clustering of the PCA data shows that similar discs are grouped together with few classification errors. In contrast, clustering of numerical features produces erroneous classification by splitting discs between clusters. The temperature of the finger is shown to have a direct relation to some of the features and subsequent data in PCA. PMID:18583731
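
    A sketch of the numerical features mentioned above (modal frequency and average spectral value) computed from a frictional-sound recording; the sampling rate is a placeholder:

      import numpy as np

      def texture_features(sound, fs=44100):
          """Frequency-domain features from a friction recording."""
          spectrum = np.abs(np.fft.rfft(sound))
          freqs = np.fft.rfftfreq(len(sound), d=1.0 / fs)
          modal_freq = freqs[np.argmax(spectrum)]   # dominant vibration frequency
          avg_value = spectrum.mean()               # overall spectral energy level
          return modal_freq, avg_value

      print(texture_features(np.random.randn(44100)))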

  19. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.

  20. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  1. Direct extraction of topographic features from gray scale character images

    SciTech Connect

    Seong-Whan Lee; Young Joon Kim

    1994-12-31

    Optical character recognition (OCR) has traditionally been applied to binary-valued imagery, although text is usually scanned and stored in gray scale. However, binarization of a multivalued image may remove important topological information from characters and introduce noise into the character background. In order to avoid this problem, it is indispensable to develop a method which can minimize the information loss due to binarization by extracting features directly from gray scale character images. In this paper, we propose a new method for the direct extraction of topographic features from gray scale character images. By comparing the proposed method with Wang and Pavlidis's method, we found that the proposed method enhances the performance of topographic feature extraction by computing the directions of principal curvature efficiently, and prevents the extraction of unnecessary features. We also show that the proposed method is very effective for gray scale skeletonization compared to Levi and Montanari's method.
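
    For illustration, the principal-curvature computation at the core of such topographic feature extraction can be sketched from the image Hessian; this generic version is our assumption, not the paper's exact procedure:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def principal_curvatures(img, sigma=1.5):
          """Eigenvalues of the smoothed image Hessian at every pixel."""
          smoothed = gaussian_filter(img.astype(float), sigma)
          gy, gx = np.gradient(smoothed)
          gyy, gyx = np.gradient(gy)
          gxy, gxx = np.gradient(gx)
          # Closed-form eigenvalues of the 2x2 symmetric Hessian per pixel.
          tr, det = gxx + gyy, gxx * gyy - gxy * gyx
          disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0))
          return tr / 2 + disc, tr / 2 - disc   # k1 >= k2

      k1, k2 = principal_curvatures(np.random.rand(64, 64))
      valleys = k1 > np.percentile(k1, 90)   # candidate stroke (valley) pixels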

  2. Model Based Analysis of Face Images for Facial Feature Extraction

    NASA Astrophysics Data System (ADS)

    Riaz, Zahid; Mayer, Christoph; Beetz, Michael; Radig, Bernd

    This paper describes a comprehensive approach to extracting a common feature set from image sequences. We use simple features which are easily extracted from a 3D wireframe model and efficiently used for different applications on a benchmark database. The features' versatility is tested on facial expression recognition, face recognition and gender classification. We experimented with different combinations of the features and found reasonable results with a combined-features approach containing structural, textural and temporal variations. The idea is to fit a model to human face images and extract shape and texture information. We parametrize this extracted information from the image sequences using the active appearance model (AAM) approach. We further compute temporal parameters using optical flow to consider local feature variations. Finally, we combine these parameters to form a feature vector for all the images in our database. These features are then fed to a binary decision tree (BDT) and a Bayesian Network (BN) for classification. We evaluated our results on image sequences of the Cohn-Kanade Facial Expression Database (CKFED). The proposed system produced very promising recognition rates for our applications with the same set of features and classifiers. The system is also real-time capable and automatic.

  3. A Harmonic Linear Dynamical System for Prominent ECG Feature Extraction

    PubMed Central

    Nguyen Thi, Ngoc Anh; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. For the clustering results to be efficient, the prominent features extracted by preprocessing analysis on multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The discovery of comprehensible and interpretable features by the proposed feature extraction methodology effectively supports the accuracy and reliability of the clustering results. In particular, the empirical evaluation results demonstrate improved clustering performance compared to the previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series. PMID:24719648

  4. A harmonic linear dynamical system for prominent ECG feature extraction.

    PubMed

    Thi, Ngoc Anh Nguyen; Yang, Hyung-Jeong; Kim, SunHee; Do, Luu Ngoc

    2014-01-01

    Unsupervised mining of electrocardiography (ECG) time series is a crucial task in biomedical applications. For the clustering results to be efficient, the prominent features extracted by preprocessing analysis on multiple ECG time series need to be investigated. In this paper, a Harmonic Linear Dynamical System is applied to discover vital prominent features by mining the evolving hidden dynamics and correlations in ECG time series. The discovery of comprehensible and interpretable features by the proposed feature extraction methodology effectively supports the accuracy and reliability of the clustering results. In particular, the empirical evaluation results demonstrate improved clustering performance compared to the previous mainstream feature extraction approaches for ECG time series clustering tasks. Furthermore, the experimental results on real-world datasets show scalability, with computation time linear in the duration of the time series.

  5. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  6. EEG signal features extraction based on fractal dimension.

    PubMed

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-08-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance. PMID:26737209
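
    The paper's two novel indices are not specified here; as a representative fractal-dimension feature for EEG, the following sketches Higuchi's classic estimator:

      import numpy as np

      def higuchi_fd(x, kmax=8):
          """Higuchi fractal dimension of a 1-D signal (a standard EEG feature)."""
          n = len(x)
          logL = []
          for k in range(1, kmax + 1):
              lk = []
              for m in range(k):
                  idx = np.arange(m, n, k)
                  if len(idx) < 2:
                      continue
                  # Curve length at scale k, offset m, with Higuchi's normalization.
                  lm = (np.abs(np.diff(x[idx])).sum() * (n - 1)
                        / ((len(idx) - 1) * k * k))
                  lk.append(lm)
              logL.append(np.log(np.mean(lk)))
          logk = np.log(1.0 / np.arange(1, kmax + 1))
          # The fractal dimension is the slope of log-length versus log(1/k).
          return np.polyfit(logk, logL, 1)[0]

      print(higuchi_fd(np.cumsum(np.random.randn(1000))))  # ~1.5 for a random walk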

  7. Feature Extraction and Selection Strategies for Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
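
    A toy sketch of the extraction-plus-classification chain (PCA features feeding an SVM); the data shapes are placeholders for the ROI chips:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # Hypothetical stand-ins: 200 flattened ROI chips, binary target labels.
      rois = np.random.rand(200, 32 * 32)
      labels = np.random.randint(0, 2, size=200)

      clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
      clf.fit(rois[:150], labels[:150])
      print("accuracy:", clf.score(rois[150:], labels[150:]))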

  8. Image feature meaning for automatic key-frame extraction

    NASA Astrophysics Data System (ADS)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, requested in several applications, have directed a number of research efforts toward automatic video analysis techniques. The processes for automatic video analysis are based on the recognition of shots (short sequences of contiguous frames that describe the same scene) and of key frames representing the salient content of the shot. Since effective shot boundary detection techniques exist in the literature, in this paper we focus our attention on key frame extraction techniques, to identify the low-level visual features of the frames that best represent the shot content. To evaluate the features' performance, key frames automatically extracted using these features are compared to human operators' video annotations.

  9. Automated feature extraction and classification from image sources

    USGS Publications Warehouse

    U.S. Geological Survey

    1995-01-01

    The U.S. Department of the Interior, U.S. Geological Survey (USGS), and Unisys Corporation have completed a cooperative research and development agreement (CRADA) to explore automated feature extraction and classification from image sources. The CRADA helped the USGS define the spectral and spatial resolution characteristics of airborne and satellite imaging sensors necessary to meet base cartographic and land use and land cover feature classification requirements and help develop future automated geographic and cartographic data production capabilities. The USGS is seeking a new commercial partner to continue automated feature extraction and classification research and development.

  10. Distinctive Feature Extraction for Indian Sign Language (ISL) Gesture using Scale Invariant Feature Transform (SIFT)

    NASA Astrophysics Data System (ADS)

    Patil, Sandeep Baburao; Sinha, G. R.

    2016-07-01

    In India, low awareness of deaf and hard-of-hearing people widens the communication gap with the deaf and hard-of-hearing community. Sign language is commonly developed for deaf and hard-of-hearing people to convey their messages by generating different sign patterns. The scale invariant feature transform was introduced by David Lowe to perform reliable matching between different images of the same object. This paper implements the various phases of the scale invariant feature transform to extract distinctive features from Indian sign language gestures. The experimental results show the time required by each phase and the number of features extracted for 26 ISL gestures.
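
    A minimal OpenCV sketch of the SIFT phases (keypoint detection plus descriptor extraction) applied to a gesture image; a noise image stands in for an actual ISL photograph so the sketch runs:

      import cv2
      import numpy as np

      # Placeholder for a grayscale ISL gesture image.
      gesture = (np.random.rand(240, 320) * 255).astype(np.uint8)

      sift = cv2.SIFT_create()                    # DoG scale space + keypoints
      keypoints, descriptors = sift.detectAndCompute(gesture, None)

      # One 128-D descriptor per keypoint; the count varies per gesture.
      print(len(keypoints), None if descriptors is None else descriptors.shape)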

  11. Big data extraction with adaptive wavelet analysis (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Qu, Hongya; Chen, Genda; Ni, Yiqing

    2015-04-01

    Nondestructive evaluation and sensing technology have been increasingly applied to characterize material properties and detect local damage in structures. More often than not, they generate images or data strings in which it is difficult to see any physical features without novel data extraction techniques. In the literature, popular data analysis techniques include the Short-Time Fourier Transform, the Wavelet Transform, and the Hilbert Transform, chosen for time efficiency and adaptive recognition. In this study, a new data analysis technique is proposed and developed by introducing an adaptive central frequency into the continuous Morlet wavelet transform, so that both high frequency and time resolution can be maintained in a time-frequency window of interest. The new technique is referred to as Adaptive Wavelet Analysis (AWA). This paper is organized in several sections. In the first section, the finite time-frequency resolution limitations of the traditional wavelet transform are introduced. Such limitations greatly distort transformed signals whose frequency varies significantly with time. In the second section, the Short Time Wavelet Transform (STWT), analogous to the Short Time Fourier Transform (STFT), is defined and developed to overcome this shortcoming of the traditional wavelet transform. In the third section, by utilizing the STWT and a time-variant central frequency of the Morlet wavelet, AWA adapts the time-frequency resolution to the signal variation over time. Finally, the advantage of the proposed AWA is demonstrated in Section 4 with a ground penetrating radar (GPR) image from a bridge deck, an analytical chirp signal with a large sinusoidal frequency change over time, and the train-induced acceleration responses of the Tsing Ma Suspension Bridge in Hong Kong, China. The performance of the proposed AWA is compared with the STFT and the traditional wavelet transform.

  12. Fast SIFT design for real-time visual feature extraction.

    PubMed

    Chiu, Liang-Chi; Chang, Tian-Sheuan; Chen, Jiun-Yen; Chang, Nelson Yen-Chung

    2013-08-01

    Visual feature extraction with scale invariant feature transform (SIFT) is widely used for object recognition. However, its real-time implementation suffers from long latency, heavy computation, and high memory storage because of its frame level computation with iterated Gaussian blur operations. Thus, this paper proposes a layer parallel SIFT (LPSIFT) with integral image, and its parallel hardware design with an on-the-fly feature extraction flow for real-time application needs. Compared with the original SIFT algorithm, the proposed approach reduces the computational amount by 90% and memory usage by 95%. The final implementation uses 580-K gate count with 90-nm CMOS technology, and offers 6000 feature points/frame for VGA images at 30 frames/s and ∼ 2000 feature points/frame for 1920 × 1080 images at 30 frames/s at the clock rate of 100 MHz. PMID:23743775
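
    The integral-image idea exploited by LPSIFT can be sketched in a few lines: once the cumulative table is built, any box sum (and hence a box-filter approximation to Gaussian blur) costs four lookups. This standalone sketch is ours, not the paper's hardware design:

      import numpy as np

      def integral_image(img):
          """Summed-area table with a zero row/column prepended."""
          ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
          return np.pad(ii, ((1, 0), (1, 0)))

      def box_sum(ii, r0, c0, r1, c1):
          """Sum of img[r0:r1, c0:c1] from four table lookups."""
          return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

      img = np.arange(16.0).reshape(4, 4)
      ii = integral_image(img)
      assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()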

  13. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  14. [Determination of Soluble Solid Content in Strawberry Using Hyperspectral Imaging Combined with Feature Extraction Methods].

    PubMed

    Ding, Xi-bin; Zhang, Chu; Liu, Fei; Song, Xing-lin; Kong, Wen-wen; He, Yong

    2015-04-01

    Hyperspectral imaging combined with feature extraction methods was applied to determine soluble solids content (SSC) in mature, unblemished strawberries. Hyperspectral images of 154 strawberries covering the spectral range of 874-1,734 nm were captured, the spectral data were extracted from the hyperspectral images, and the spectra of 941-1,612 nm were preprocessed by moving average (MA). Nineteen samples were identified as outliers by the residual method, and the remaining 135 samples were divided into a calibration set (n = 90) and a prediction set (n = 45). Successive projections algorithm (SPA), genetic algorithm partial least squares (GAPLS) combined with SPA, weighted regression coefficients (Bw) and competitive adaptive reweighted sampling (CARS) were applied to select 14, 17, 24 and 25 effective wavelengths, respectively. Principal component analysis (PCA) and wavelet transform (WT) were applied to extract feature information with 20 and 58 features, respectively. PLS models were built based on the full spectra, the effective wavelengths and the features, respectively. All PLS models obtained good results. The PLS models using the full spectra and the features extracted by WT obtained the best results, with correlation coefficients of calibration (r(c)) and prediction (r(p)) over 0.9. The overall results indicated that hyperspectral imaging combined with feature extraction methods could be used for detection of SSC in strawberry. PMID:26197594
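
    A brief sketch of a PLS calibration on preprocessed spectra with scikit-learn; the spectra and SSC values below are synthetic placeholders:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # Placeholder data: 90 calibration spectra x 500 wavelengths, SSC targets.
      X_cal = np.random.rand(90, 500)
      y_cal = 6 + 4 * np.random.rand(90)     # e.g. soluble solids in degrees Brix

      pls = PLSRegression(n_components=10)
      pls.fit(X_cal, y_cal)

      X_pred = np.random.rand(45, 500)
      ssc_hat = pls.predict(X_pred).ravel()  # predicted SSC for the test set
      print(ssc_hat[:5])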

  15. Automated blood vessel extraction using local features on retinal images

    NASA Astrophysics Data System (ADS)

    Hatanaka, Yuji; Samo, Kazuki; Tajima, Mikiya; Ogohara, Kazunori; Muramatsu, Chisako; Okumura, Susumu; Fujita, Hiroshi

    2016-03-01

    An automated blood vessel extraction method using high-order local autocorrelation (HLAC) features on retinal images is presented. Although many blood vessel extraction methods based on contrast have been proposed, a technique based on the relations of neighboring pixels has not been published. HLAC features are shift-invariant; therefore, we applied HLAC features to retinal images. However, HLAC features are weak for rotated images, so the method was improved by also computing HLAC features on a polar-transformed image. The blood vessels were classified using an artificial neural network (ANN) with HLAC features from 105 mask patterns as input. To improve performance, a second ANN (ANN2) was constructed using the green component of the color retinal image and the four output values of the ANN, a Gabor filter, a double-ring filter and the black-top-hat transformation. The retinal images used in this study were obtained from the "Digital Retinal Images for Vessel Extraction" (DRIVE) database. The ANN using HLAC features output clearly white values in the blood vessel regions and could also extract blood vessels with low contrast. The outputs were evaluated using the area under the curve (AUC) based on receiver operating characteristic (ROC) analysis. The AUC of ANN2 was 0.960 in our study. The result can be used for the quantitative analysis of blood vessels.

  16. On-line object feature extraction for multispectral scene representation

    NASA Technical Reports Server (NTRS)

    Ghassemian, Hassan; Landgrebe, David

    1988-01-01

    A new on-line unsupervised object-feature extraction method is presented that reduces the complexity and costs associated with the analysis of multispectral image data and with data transmission, storage, archival and distribution. The ambiguity in the object detection process can be reduced if the spatial dependencies which exist among adjacent pixels are intelligently incorporated into the decision making process. A unity relation that must exist among the pixels of an object is defined. The Automatic Multispectral Image Compaction Algorithm (AMICA) uses the within-object pixel-feature gradient vector as valuable contextual information to construct the object's features, which preserve the class separability information within the data. For on-line object extraction, the path-hypothesis and the basic mathematical tools for its realization are introduced in terms of a specific similarity measure and adjacency relation. AMICA is applied to several sets of real image data, and the performance and reliability of the features are evaluated.

  17. Image feature extraction based multiple ant colonies cooperation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on the cooperation of multiple ant colonies. First, a low-resolution version of the input image is created using a Gaussian pyramid algorithm, and two ant colonies are spread on the source image and the low-resolution image, respectively. The ant colony on the low-resolution image uses phase congruency as its inspiration information, while the ant colony on the source image uses gradient magnitude. The two colonies cooperate to extract salient image features by sharing the same pheromone matrix. After the optimization process, image features are detected by thresholding the pheromone matrix. Since the gradient magnitude and phase congruency of the input image are used as inspiration information for the ant colonies, our algorithm shows higher intelligence and is capable of acquiring more complete and meaningful image features than other, simpler edge detectors.

  18. Feature extraction from multiple data sources using genetic programming.

    SciTech Connect

    Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.; Eads, D. R.; Galassi, M. C.; Harvey, N. R.; Perkins, S. J.; Porter, R. B.; Theiler, J. P.; Young, A. C.; Bloch, J. J.; David, N. A.; Esch-Mosher, D. M.

    2002-01-01

    Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often conducts when combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate the evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.

  19. Features extraction in anterior and posterior cruciate ligaments analysis.

    PubMed

    Zarychta, P

    2015-12-01

    The main aim of this research is finding the feature vectors of the anterior and posterior cruciate ligaments (ACL and PCL). These feature vectors have to clearly define the ligament structure and make the ligaments easier to diagnose. The feature vectors are obtained by analysis of both the anterior and posterior cruciate ligaments, performed after the extraction process of both ligaments. In the first stage, a region of interest (ROI) including the cruciate ligaments is outlined in order to reduce the area of analysis. In this case, the fuzzy C-means algorithm with a median modification, which helps to reduce blurred edges, has been implemented. After finding the ROI, the fuzzy connectedness procedure is performed. This procedure permits extraction of the anterior and posterior cruciate ligament structures. In the last stage, on the basis of the extracted structures, 3-dimensional models of the anterior and posterior cruciate ligaments are built and the feature vectors created. This methodology has been implemented in MATLAB and tested on clinical T1-weighted magnetic resonance imaging (MRI) slices of the knee joint. The 3D display is based on the Visualization Toolkit (VTK).
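
    A compact NumPy sketch of the standard fuzzy C-means iteration underlying the ROI step (without the median modification the paper adds to reduce blurred edges):

      import numpy as np

      def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
          """Standard FCM: returns cluster centers and membership matrix U."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
              # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
              U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2/(m-1)), axis=2)
          return centers, U

      X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 4])
      centers, U = fuzzy_cmeans(X)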

  1. Genetic programming approach to extracting features from remotely sensed imagery

    SciTech Connect

    Theiler, J. P.; Perkins, S. J.; Harvey, N. R.; Szymanski, J. J.; Brumby, Steven P.

    2001-01-01

    Multi-instrument data sets present an interesting challenge to feature extraction algorithm developers. Beyond the immediate problems of spatial co-registration, the remote sensing scientist must explore a complex algorithm space in which both spatial and spectral signatures may be required to identify a feature of interest. We describe a genetic programming/supervised classifier software system, called Genie, which evolves and combines spatio-spectral image processing tools for remotely sensed imagery. We describe our representation of candidate image processing pipelines, and discuss our set of primitive image operators. Our primary application has been in the field of geospatial feature extraction, including wildfire scars and general land-cover classes, using publicly available multi-spectral imagery (MSI) and hyper-spectral imagery (HSI). Here, we demonstrate our system on Landsat 7 Enhanced Thematic Mapper (ETM+) MSI. We exhibit an evolved pipeline, and discuss its operation and performance.

  2. Feature Extraction and Selection From the Perspective of Explosive Detection

    SciTech Connect

    Sengupta, S K

    2009-09-01

    Features are extractable measurements from a sample image summarizing the information content in an image and in the process providing an essential tool in image understanding. In particular, they are useful for image classification into pre-defined classes or grouping a set of image samples (also called clustering) into clusters with similar within-cluster characteristics as defined by such features. At the lowest level, features may be the intensity levels of a pixel in an image. The intensity levels of the pixels in an image may be derived from a variety of sources. For example, a level can be the temperature measurement (using an infra-red camera) of the area representing the pixel or the X-ray attenuation in a given volume element of a 3-d image, or it may even represent the dielectric differential in a given volume element obtained from an MIR image. At a higher level, geometric descriptors of objects of interest in a scene may also be considered as features in the image. Examples of such features are: area, perimeter, aspect ratio and other shape features, or topological features like the number of connected components, the Euler number (the number of connected components less the number of 'holes'), etc. Occupying an intermediate level in the feature hierarchy are texture features, which are typically derived from a group of pixels, often in a suitably defined neighborhood of a pixel. These texture features are useful not only in classification but also in the segmentation of an image into different objects/regions of interest. At the present state of our investigation, we are engaged in the task of finding a set of features associated with an object under inspection (typically a piece of luggage or a briefcase) that will enable us to detect and characterize an explosive inside, when present. Our tool of inspection is an X-Ray device with provisions for computed tomography (CT) that generates one or more (depending on the number of energy levels used) digitized 3-d images.

  3. Feature extraction and classification algorithms for high dimensional data

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David

    1993-01-01

    Feature extraction and classification algorithms for high dimensional data are investigated. Developments with regard to sensors for Earth observation are moving in the direction of providing much higher dimensional multispectral imagery than is now possible. In analyzing such high dimensional data, processing time becomes an important factor. With large increases in dimensionality and the number of classes, processing time will increase significantly. To address this problem, a multistage classification scheme is proposed which reduces the processing time substantially by eliminating unlikely classes from further consideration at each stage. Several truncation criteria are developed and the relationship between thresholds and the error caused by the truncation is investigated. Next, an approach to feature extraction for classification is proposed based directly on the decision boundaries. It is shown that all the features needed for classification can be extracted from decision boundaries. A characteristic of the proposed method arises by noting that only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is introduced. The proposed feature extraction algorithm has several desirable properties: it predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem, and it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal means or equal covariances as some previous algorithms do. In addition, the decision boundary feature extraction algorithm can be used both for parametric and non-parametric classifiers. Finally, some problems encountered in analyzing high dimensional data are studied and possible solutions are proposed. First, the increased importance of the second order statistics in analyzing high dimensional data is recognized.

  4. Adaptive enhancement method of infrared image based on scene feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    All objects emit radiation in amounts related to their temperature and their ability to emit radiation. An infrared image directly shows this otherwise invisible infrared radiation. Because of these advantages, infrared imaging technology is applied in many kinds of fields. But compared with visible images, the disadvantages of infrared images are obvious: low luminance, low contrast, and an inconspicuous difference between target and background. The aim of infrared image enhancement is to improve the interpretability or perception of information in an infrared image for human viewers, or to provide 'better' input for other automated image processing techniques. Most adaptive algorithms for image enhancement are based mainly on the gray-scale distribution of the infrared image and are not associated with the actual scene features of the image, so the enhancement is poorly targeted and the result is not well suited to infrared surveillance applications. In this paper we have developed a scene-feature-based algorithm to enhance the contrast of infrared images adaptively. First, after analyzing the scene features of different infrared images, we chose feasible parameters to describe an infrared image. Second, we constructed a new histogram distribution based on the chosen parameters using a Gaussian function. Finally, the infrared image is enhanced by constructing this new form of histogram. Experimental results show that the algorithm has better performance than the other methods mentioned in this paper for infrared scene images.
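
    The Gaussian-shaped target histogram can be illustrated with standard histogram specification; the parameters below are illustrative, not the paper's scene-derived values:

```python
# Hedged sketch: enhance an 8-bit infrared image by histogram specification
# toward a Gaussian-shaped target histogram (mu and sigma are assumptions).
import numpy as np

def gaussian_hist_spec(img, mu=128.0, sigma=40.0):
    levels = np.arange(256)
    src_hist = np.bincount(img.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    tgt_hist = np.exp(-0.5 * ((levels - mu) / sigma) ** 2)   # Gaussian shape
    tgt_cdf = np.cumsum(tgt_hist) / tgt_hist.sum()
    # map each gray level to the target level with the nearest CDF value
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return mapping[img]

rng = np.random.default_rng(1)
ir = rng.gamma(2.0, 20.0, (128, 128)).clip(0, 255).astype(np.uint8)  # toy IR image
enhanced = gaussian_hist_spec(ir)
print(ir.std(), enhanced.std())   # contrast is stretched toward the target
```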

  5. A Spiking Neural Network in sEMG Feature Extraction.

    PubMed

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-01-01

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition was assigned as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. Results showed roughly equal accuracy despite a significant difference in sampling rates. The proposed algorithm was successfully tested for mobile robot control. PMID:26540060
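
    A toy sketch in the spirit of the paper's extractor: a leaky integrate-and-fire layer with mutual inhibition whose mean firing rates serve as the feature vector. All parameters and the rectified-envelope input are our assumptions:

```python
# Hedged sketch: LIF spiking layer with mutual (lateral) inhibition used as
# an sEMG feature extractor; mean firing rates are the features.
import numpy as np

def lif_features(drive, dt=1e-3, tau=0.02, v_th=1.0, w_inh=0.3):
    """drive: (T, N) input current per neuron; returns N firing-rate features."""
    T, N = drive.shape
    v = np.zeros(N)
    spikes = np.zeros((T, N))
    for t in range(T):
        v += dt / tau * (drive[t] - v)        # leaky integration
        fired = v >= v_th
        spikes[t, fired] = 1.0
        v[fired] = 0.0                        # reset after spiking
        v[~fired] -= w_inh * fired.sum()      # inhibition from spiking neighbors
        v = np.clip(v, 0.0, None)
    return spikes.mean(axis=0) / dt           # spikes per second per neuron

rng = np.random.default_rng(0)
emg = rng.normal(0, 1, (2000, 4))             # toy 4-channel sEMG
drive = 2.0 * np.abs(emg)                     # rectified envelope as input current
print(lif_features(drive).round(1))
```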

  6. A Review of Feature Selection and Feature Extraction Methods Applied on Microarray Data

    PubMed Central

    Hira, Zena M.; Gillies, Duncan F.

    2015-01-01

    We summarise various ways of performing dimensionality reduction on high-dimensional microarray data. Many different feature selection and feature extraction methods exist, and they are widely used. All these methods aim to remove redundant and irrelevant features so that classification of new instances will be more accurate. A popular source of data is microarrays, a biological platform for gathering gene expressions. Analysing microarrays can be difficult due to the size of the data they provide. In addition, the complicated relations among the different genes make analysis more difficult, and removing excess features can improve the quality of the results. We present some of the most popular methods for selecting significant features and provide a comparison between them. Their advantages and disadvantages are outlined in order to provide a clearer idea of when to use each one of them for saving computational time and resources. PMID:26170834
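
    The selection-versus-extraction distinction can be made concrete with scikit-learn; synthetic data stands in for a microarray, and the specific utilities are our choice, not methods ranked in the review:

```python
# Hedged sketch contrasting feature selection (keep a subset of genes) with
# feature extraction (build new composite features) on synthetic data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=100, n_features=2000, n_informative=20,
                           random_state=0)               # samples x "genes"

X_sel = SelectKBest(f_classif, k=50).fit_transform(X, y)  # selection: 50 genes
X_ext = PCA(n_components=50).fit_transform(X)             # extraction: 50 PCs
print(X_sel.shape, X_ext.shape)                           # both (100, 50)
```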

  7. A Transform-Based Feature Extraction Approach for Motor Imagery Tasks Classification

    PubMed Central

    Khorshidtalab, Aida; Mesbah, Mostefa; Salami, Momoh J. E.

    2015-01-01

    In this paper, we present a new motor imagery classification method in the context of electroencephalography (EEG)-based brain–computer interface (BCI). This method uses a signal-dependent orthogonal transform, referred to as linear prediction singular value decomposition (LP-SVD), for feature extraction. The transform defines the mapping as the left singular vectors of the LP coefficient filter impulse response matrix. Using a logistic tree-based model classifier, the extracted features are classified into one of four motor imagery movements. The proposed approach was first benchmarked against two related state-of-the-art feature extraction approaches, namely, discrete cosine transform (DCT)- and adaptive autoregressive (AAR)-based methods. By achieving an accuracy of 67.35%, the LP-SVD approach outperformed the other approaches by large margins (25% compared with DCT- and 6% compared with AAR-based methods). To further improve the discriminatory capability of the extracted features and reduce the computational complexity, we enlarged the extracted feature subset by incorporating two extra features, namely, the Q- and Hotelling's $T^{2}$ statistics of the transformed EEG, and introduced a new EEG channel selection method. The performance of the EEG classification based on the expanded feature set and channel selection method was compared with that of a number of state-of-the-art classification methods previously reported with the BCI IIIa competition data set. Our method came second with an average accuracy of 81.38%. PMID:27170898
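
    A rough sketch of the LP-SVD transform as described: fit LP coefficients, build the impulse response matrix of the all-pole LP filter, and project the signal onto its leading left singular vectors. The LP order, matrix sizes, and autocorrelation-method fit are our assumptions:

```python
# Hedged sketch of the LP-SVD idea (our reconstruction from the abstract).
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

def lp_coeffs(x, order=8):
    # Autocorrelation method: solve the Yule-Walker equations.
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    return solve_toeplitz((r[:-1], r[:-1]), r[1:])

def lp_svd_features(x, order=8, n_imp=64, n_feat=4):
    a = lp_coeffs(x, order)
    h = np.zeros(n_imp)                    # impulse response of 1/A(z)
    h[0] = 1.0
    for n in range(1, n_imp):
        h[n] = sum(a[k] * h[n - 1 - k] for k in range(min(order, n)))
    H = toeplitz(h, np.zeros(order))       # impulse response (convolution) matrix
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    return U[:, :n_feat].T @ x[:n_imp]     # project onto left singular vectors

rng = np.random.default_rng(0)
eeg = np.sin(0.2 * np.arange(512)) + 0.5 * rng.normal(size=512)  # toy channel
print(lp_svd_features(eeg).round(3))
```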

  8. Eddy current pulsed phase thermography and feature extraction

    NASA Astrophysics Data System (ADS)

    He, Yunze; Tian, GuiYun; Pan, Mengchun; Chen, Dixiang

    2013-08-01

    This letter proposes an eddy current pulsed phase thermography technique combining eddy current excitation, infrared imaging, and phase analysis. One steel sample is selected as the material under test to avoid the influence of skin depth; it provides subsurface defects at different depths. The experimental results show that the proposed method can eliminate non-uniform heating and improve defect detectability. Several features are extracted from differential phase spectra, and preliminary linear relationships are built to measure the depth of these subsurface defects.
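
    Phase analysis of thermal transients is typically a per-pixel FFT; a hedged sketch on a synthetic cooling sequence (not the letter's experimental setup) shows why phase images largely cancel non-uniform heating:

```python
# Hedged sketch of pulsed phase thermography: FFT each pixel's thermal
# transient and keep the phase of a low-frequency bin. Phase is invariant
# to per-pixel multiplicative gain, so non-uniform heating drops out.
import numpy as np

def phasegram(frames, bin_index=1):
    """frames: (T, H, W) thermal sequence -> (H, W) phase image."""
    spectrum = np.fft.fft(frames, axis=0)    # FFT along time for each pixel
    return np.angle(spectrum[bin_index])

T, H, W = 128, 32, 32
t = np.arange(1, T + 1, dtype=float)[:, None, None]
gain = 1.0 + 0.3 * np.random.default_rng(0).random((1, H, W))  # uneven heating
frames = gain * t ** -0.5                                      # cooling transients
frames[:, 10:16, 10:16] *= 1 + 0.1 * np.exp(-t / 40.0)         # a "defect" region
phase = phasegram(frames)
print(phase[12, 12] - phase[0, 0])   # defect shows up despite the gain pattern
```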

  9. Automated Feature Extraction of Foredune Morphology from Terrestrial Lidar Data

    NASA Astrophysics Data System (ADS)

    Spore, N.; Brodie, K. L.; Swann, C.

    2014-12-01

    Foredune morphology is often described in storm impact prediction models using the elevation of the dune crest and dune toe and compared with maximum runup elevations to categorize the storm impact and predicted responses. However, these parameters do not account for other foredune features that may make them more or less erodible, such as alongshore variations in morphology, vegetation coverage, or compaction. The goal of this work is to identify other descriptive features that can be extracted from terrestrial lidar data that may affect the rate of dune erosion under wave attack. Daily, mobile-terrestrial lidar surveys were conducted during a 6-day nor'easter (Hs = 4 m in 6 m water depth) along 20 km of coastline near Duck, North Carolina, which encompassed a variety of foredune forms in close proximity to each other. This abstract focuses on the tools developed for the automated extraction of morphological features from terrestrial lidar data, while the response of the dune will be presented by Brodie and Spore in an accompanying abstract. Raw point cloud data can be dense and are often under-utilized due to the time and personnel constraints required for analysis, since many algorithms are not fully automated. In our approach, the point cloud is first projected into a local coordinate system aligned with the coastline, and then bare earth points are interpolated onto a rectilinear 0.5 m grid, creating a high-resolution digital elevation model. The surface is analyzed by identifying features along each cross-shore transect. Surface curvature is used to identify the position of the dune toe; beach and berm morphology is then extracted shoreward of the dune toe, and foredune morphology is extracted landward of the dune toe. Changes in, and magnitudes of, cross-shore slope, curvature, and surface roughness are used to describe the foredune face, and each cross-shore transect is then classified using its pre-storm morphology for storm-response analysis.
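
    One extracted feature, the dune toe, can be illustrated as a curvature extremum along a single cross-shore elevation transect; the real workflow operates on the gridded lidar DEM with additional classification logic:

```python
# Hedged sketch: locate a dune-toe candidate on one cross-shore transect as
# the point of maximum (concave-up) curvature. Profile and spacing are toys.
import numpy as np

def dune_toe_index(z, dx=0.5):
    """z: elevations along a cross-shore transect (seaward -> landward)."""
    dz = np.gradient(z, dx)
    d2z = np.gradient(dz, dx)
    curvature = d2z / (1.0 + dz ** 2) ** 1.5
    return int(np.argmax(curvature))        # strongest concave-up bend

x = np.arange(0, 60, 0.5)
z = 0.02 * x + 4.0 / (1 + np.exp(-(x - 40)))   # gentle beach slope + dune face
print(dune_toe_index(z) * 0.5, "m from the seaward end")
```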

  10. Motion feature extraction scheme for content-based video retrieval

    NASA Astrophysics Data System (ADS)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes a scheme for extracting global motion and object trajectory in a video shot for content-based video retrieval. Motion is the key feature representing the temporal information in videos, and it is more objective and consistent compared to other features such as color, texture, etc. Efficient motion feature extraction is an important step in content-based video retrieval. Some approaches have been taken to extract camera motion and motion activity in video sequences. When dealing with the problem of object tracking, algorithms are usually built on the basis of a known object region in the frames. In this paper, a whole picture of the motion information in the video shot is obtained by analyzing the motion of background and foreground respectively and automatically. A 6-parameter affine model is used as the motion model of the background, and a fast and robust global motion estimation algorithm is developed to estimate its parameters. The object region is obtained by means of global motion compensation between two consecutive frames. The center of the object region is then calculated and tracked to get the object motion trajectory in the video sequence. Global motion and object trajectory are described with the MPEG-7 parametric motion and motion trajectory descriptors, and valid similarity measures are defined for the two descriptors. Experimental results indicate that the proposed scheme is reliable and efficient.
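
    Estimating a 6-parameter affine global-motion model reduces to linear least squares once point correspondences are available; a minimal sketch (the paper's estimator adds robustness and speed optimizations):

```python
# Hedged sketch: least-squares fit of the affine model
#   x' = a1*x + a2*y + a3,   y' = a4*x + a5*y + a6
# from point correspondences.
import numpy as np

def fit_affine(src, dst):
    """src, dst: (N, 2) corresponding points; returns (a1..a6)."""
    x, y = src[:, 0], src[:, 1]
    A = np.zeros((2 * len(src), 6))
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = x, y, 1.0
    A[1::2, 3], A[1::2, 4], A[1::2, 5] = x, y, 1.0
    b = dst.reshape(-1)                       # interleaved x', y'
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (50, 2))
true = np.array([1.02, 0.01, 2.0, -0.01, 1.02, -1.0])   # slight zoom + pan
dst = np.c_[src @ true[:2] + true[2], src @ true[3:5] + true[5]]
print(fit_affine(src, dst + rng.normal(0, 0.1, dst.shape)).round(3))
```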

  11. Feature extraction and integration for the quantification of PMFL data

    NASA Astrophysics Data System (ADS)

    Wilson, John W.; Kaba, Muma; Tian, Gui Yun; Licciardi, Steven

    2010-06-01

    If the vast networks of aging iron and steel, oil, gas and water pipelines are to be kept in operation, efficient and accurate pipeline inspection techniques are needed. Magnetic flux leakage (MFL) systems are widely used for ferromagnetic pipeline inspection and although MFL offers reasonable defect detection capabilities, characterisation of defects can be problematic and time consuming. The newly developed pulsed magnetic flux leakage (PMFL) system offers an inspection technique which equals the defect detection capabilities of traditional MFL, but also provides an opportunity to automatically extract defect characterisation information through analysis of the transient sections of the measured signals. In this paper internal and external defects in rolled steel water pipes are examined using PMFL, and feature extraction and integration techniques are explored to both provide defect depth information and to discriminate between internal and external defects. Feature combinations are recommended for defect characterisation and the paper concludes that PMFL can provide enhanced defect characterisation capabilities for flux leakage based inspection systems using feature extraction and integration.

  12. Automatic localization and feature extraction of white blood cells

    NASA Astrophysics Data System (ADS)

    Kovalev, Vassili A.; Grigoriev, Andrei Y.; Ahn, Hyo-Sok; Myshkin, Nickolai K.

    1995-05-01

    The paper presents a method for automatic localization and feature extraction of white blood cells (WBCs) in color images, aimed at an efficient automated WBC counting system based on image analysis and recognition. Nucleus blob extraction consists of five steps: (1) nucleus pixel labeling; (2) filtration of the nucleus pixel template; (3) segmentation and extraction of nucleus blobs by region growing; (4) removal of blobs of no interest; and (5) marking of external and internal blob borders and hole pixels. The detection of nucleus pixels is based on the intensity of the G image plane and the balance between G and B intensity. Localized nucleus segments are grouped into a cell nucleus by a hierarchical merging procedure according to their area, shape and spatial occurrence. Cytoplasm segmentation based on pixel intensity and color parameters is found to be unreliable; we overcome this problem by using an edge improving technique. WBC templates are then calculated and additional cell feature sets are constructed for recognition. The cell feature sets describe the principal geometric and color properties of each type of WBC. Finally, we evaluate the recognition accuracy of the developed algorithm, which proves to be highly reliable and fast.

  13. Automated feature extraction for 3-dimensional point clouds

    NASA Astrophysics Data System (ADS)

    Magruder, Lori A.; Leigh, Holly W.; Soderlund, Alexander; Clymer, Bradley; Baer, Jessica; Neuenschwander, Amy L.

    2016-05-01

    Light detection and ranging (LIDAR) technology offers the capability to rapidly capture high-resolution, 3-dimensional surface data with centimeter-level accuracy for a large variety of applications. Due to the foliage-penetrating properties of LIDAR systems, these geospatial data sets can detect ground surfaces beneath trees, enabling the production of high-fidelity bare earth elevation models. Precise characterization of the ground surface allows for identification of terrain and non-terrain points within the point cloud, and facilitates further discernment between natural and man-made objects based solely on structural aspects and relative neighboring parameterizations. A framework is presented here for automated extraction of natural and man-made features that does not rely on coincident ortho-imagery or point RGB attributes. The TEXAS (Terrain EXtraction And Segmentation) algorithm is used first to generate a bare earth surface from a lidar survey, which is then used to classify points as terrain or non-terrain. Further classifications are assigned at the point level by leveraging local spatial information. Similarly classed points are then clustered together into regions to identify individual features. Descriptions of the spatial attributes of each region are generated, resulting in the identification of individual tree locations, forest extents, building footprints, and 3-dimensional building shapes, among others. Results of the fully-automated feature extraction algorithm are then compared to ground truth to assess completeness and accuracy of the methodology.

  14. Dual-pass feature extraction on human vessel images.

    PubMed

    Hernandez, W; Grimm, S; Andriantsimiavona, R

    2014-06-01

    We present a novel algorithm for the extraction of cavity features in images of human vessels. Fat deposits in the inner wall of such structures introduce artifacts and regions in the captured images that invalidate the usual assumption of an elliptical model, which makes the process of extracting the central passage effectively more difficult. Our approach was designed to cope with these challenges and extract the required image features in a fully automated, accurate, and efficient way using two stages: the first determines a bounding segmentation mask that prevents major leakages from pixels of the cavity area, using a circular region fill that operates as a paint brush followed by Principal Component Analysis with auto-correction; the second extracts a precise cavity enclosure using a micro-dilation filter and an edge-walking scheme. The accuracy of the algorithm has been tested using 30 computed tomography angiography scans of the lower part of the body containing different degrees of inner wall distortion. The results were compared to manual annotations from a specialist, resulting in sensitivity around 98%, false positive rate around 8%, and positive predictive value around 93%. The average execution time was 24 and 18 ms on two types of commodity hardware over sections of 15 cm of length (approx. 1 ms per contour), which makes it more than suitable for use in interactive software applications. Reproducibility tests were also carried out with synthetic images, showing no variation in the computed diameters against the theoretical measure. PMID:24197278

  15. Chemical-induced disease relation extraction with various linguistic features

    PubMed Central

    Gu, Jinghang; Qian, Longhua; Zhou, Guodong

    2016-01-01

    Understanding the relations between chemicals and diseases is crucial in various biomedical tasks such as new drug discoveries and new therapy developments. While manually mining these relations from the biomedical literature is costly and time-consuming, such a procedure is often difficult to keep up-to-date. To address these issues, the BioCreative-V community proposed a challenging task of automatic extraction of chemical-induced disease (CID) relations in order to benefit biocuration. This article describes our work on the CID relation extraction task of the BioCreative-V challenge. We built a machine learning based system that utilizes simple yet effective linguistic features to extract relations with maximum entropy models. In addition to leveraging various features, the hypernym relations between entity concepts derived from the Medical Subject Headings (MeSH)-controlled vocabulary were also employed during both training and testing stages to obtain more accurate classification models and better extraction performance, respectively. We reduced relation extraction between entities in documents to relation extraction between entity mentions. In our system, pairs of chemical and disease mentions at both intra- and inter-sentence levels were first constructed as relation instances for training and testing, then two classification models at both levels were trained from the training examples and applied to the testing examples. Finally, we merged the classification results from mention level to document level to acquire the final relations between chemicals and diseases. Our system achieved promising F-scores of 60.4% on the development dataset and 58.3% on the test dataset using gold-standard entity annotations, respectively. Database URL: https://github.com/JHnlp/BC5CIDTask PMID:27052618

  16. A flexible data-driven comorbidity feature extraction framework.

    PubMed

    Sideris, Costas; Pourhomayoun, Mohammad; Kalantarian, Haik; Sarrafzadeh, Majid

    2016-06-01

    Disease and symptom diagnostic codes are a valuable resource for classifying and predicting patient outcomes. In this paper, we propose a novel methodology for utilizing disease diagnostic information in a predictive machine learning framework. Our methodology relies on a novel, clustering-based feature extraction framework using disease diagnostic information. To reduce the data dimensionality, we identify disease clusters using co-occurrence statistics. We optimize the number of generated clusters in the training set and then utilize these clusters as features to predict patient severity of condition and patient readmission risk. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP), which contains 7 million hospital discharge records and ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 Congestive Heart Failure (CHF) patients and the UCI 130-US diabetes dataset that includes admissions from 69,980 diabetic patients. We compare our cluster-based feature set with commonly used comorbidity frameworks including Charlson's index, Elixhauser's comorbidities and their variations. The proposed approach was shown to have significant gains of 10.7-22.1% in predictive accuracy for CHF severity of condition prediction and 4.65-5.75% in diabetes readmission prediction. PMID:27127895
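
    The co-occurrence-based clustering step might look like the following sketch; the toy ICD codes, the cosine-style similarity, and hierarchical clustering are our assumptions, not the paper's exact procedure:

```python
# Hedged sketch: cluster diagnosis codes by co-occurrence across admissions;
# cluster membership indicators can then serve as compact features.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

admissions = [{"I50", "N18", "E11"}, {"I50", "I10"}, {"E11", "I10"},
              {"I50", "N18"}, {"E11", "N18"}]            # toy ICD code sets
codes = sorted(set().union(*admissions))
idx = {c: i for i, c in enumerate(codes)}

X = np.zeros((len(admissions), len(codes)))
for r, adm in enumerate(admissions):
    for c in adm:
        X[r, idx[c]] = 1.0
co = X.T @ X                                             # co-occurrence counts
sim = co / np.sqrt(np.outer(co.diagonal(), co.diagonal()))  # cosine-like
dist = squareform(1.0 - sim, checks=False)               # condensed distances
clusters = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(dict(zip(codes, clusters)))                        # cluster id per code
```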

  17. Extracted facial feature of racial closely related faces

    NASA Astrophysics Data System (ADS)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. As a result of the psychological experiments, it can be suggested that race perception is an ability that can be learned. Eyes and eyebrows are the main points of attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. As a result, it can be suggested that racial features rely on detailed texture rather than shape. This is indispensable fundamental research on race perception, which is essential for the establishment of a human-like race recognition system.

  1. Semantic feature extraction for interior environment understanding and retrieval

    NASA Astrophysics Data System (ADS)

    Lei, Zhibin; Liang, Yufeng

    1998-12-01

    In this paper, we propose a novel system of semantic feature extraction and retrieval for interior design and decoration application. The system, V2ID (Virtual Visual Interior Design), uses colored texture and spatial edge layout to obtain simple information about the global room environment. We address the domain-specific segmentation problem in our application and present techniques for obtaining semantic features from a room environment. We also discuss heuristics for making use of these features (color, texture, edge layout, and shape) to retrieve objects from an existing database. The final resynthesized room environment, with the original scene and objects from the database, is created for the purpose of animation and virtual walk-through.

  2. Harnessing Satellite Imageries in Feature Extraction Using Google Earth Pro

    NASA Astrophysics Data System (ADS)

    Fernandez, Sim Joseph; Milano, Alan

    2016-07-01

    Climate change has been a long-time concern worldwide. Impending flooding, for one, is among its unwanted consequences. The Phil-LiDAR 1 project of the Department of Science and Technology (DOST), Republic of the Philippines, has developed an early warning system for flood hazards. The project utilizes remote sensing technologies to determine the population in probable danger by mapping and attributing building features using LiDAR datasets and satellite imageries. A free mapping software named Google Earth Pro (GEP) is used to load these satellite imageries as base maps. Geotagging of building features has so far been done with handheld Global Positioning System (GPS) units. Alternatively, mapping and attributing building features using GEP saves a substantial amount of resources such as manpower, time and budget. Accuracy-wise, geotagging by GEP depends on either the satellite imageries or the half-meter-resolution orthophotograph images obtained during LiDAR acquisition, not on the three-meter-accuracy GPS. The attributed building features are overlain on the flood hazard map of Phil-LiDAR 1 in order to determine the exposed population. The building features obtained from satellite imageries may be used not only in flood exposure assessment but also in assessing other hazards, among a number of other uses. Several other features may also be extracted from the satellite imageries.

  3. Magnetic field feature extraction and selection for indoor location estimation.

    PubMed

    Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F

    2014-01-01

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, as is the case of the natural magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios. PMID:24955944

  4. Launch vehicle payload adapter design with vibration isolation features

    NASA Astrophysics Data System (ADS)

    Thomas, Gareth R.; Fadick, Cynthia M.; Fram, Bryan J.

    2005-05-01

    Payloads, such as satellites or spacecraft, which are mounted on launch vehicles, are subject to severe vibrations during flight. These vibrations are induced by multiple sources between liftoff and the instant of final separation from the launch vehicle. A direct result of the severe vibrations is that sensitive payload components can incur fatigue damage and failure. For this reason a payload adapter has been designed with special emphasis on its vibration isolation characteristics. The design consists of an annular plate with top and bottom face sheets separated by radial ribs and close-out rings. These components are manufactured from graphite epoxy composites to ensure a high stiffness-to-weight ratio. The design is tuned to keep the frequency of the axial mode of vibration of the payload on the flexibility of the adapter at a low value. This is the main strategy adopted for isolating the payload from damaging vibrations in the intermediate to higher frequency range (45 Hz-200 Hz). A design challenge for this type of adapter is to keep the pitch frequency of the payload above a critical value in order to avoid dynamic interactions with the launch vehicle control system. This high-frequency requirement conflicts with the low axial mode frequency requirement, and the problem is overcome by innovative tuning of the directional stiffnesses of the composite parts. A second design strategy used to achieve good isolation characteristics is constrained layer damping. This feature is particularly effective at keeping responses to a minimum for one of the most important dynamic loading mechanisms: the almost-tonal vibratory load associated with the resonant burn condition present in any stage powered by a solid rocket motor. The frequency of such a load typically falls in the 45-75 Hz range, and this phenomenon drives the low-frequency design of the adapter. Detailed finite element analysis is

  5. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface.

    PubMed

    Matsubara, Takamitsu; Morimoto, Jun

    2013-08-01

    In this study, we propose a multiuser myoelectric interface that can easily adapt to novel users. When a user performs different motions (e.g., grasping and pinching), different electromyography (EMG) signals are measured. When different users perform the same motion (e.g., grasping), different EMG signals are also measured. Therefore, designing a myoelectric interface that can be used by multiple users to perform multiple motions is difficult. To cope with this problem, we propose for EMG signals a bilinear model that is composed of two linear factors: 1) user dependent and 2) motion dependent. By decomposing the EMG signals into these two factors, the extracted motion-dependent factors can be used as user-independent features. We can construct a motion classifier on the extracted feature space to develop the multiuser interface. For novel users, the proposed adaptation method estimates the user-dependent factor through only a few interactions. The bilinear EMG model with the estimated user-dependent factor can extract the user-independent features from the novel user data. We applied our proposed method to a recognition task of five hand gestures for robotic hand control using four-channel EMG signals measured from subject forearms. Our method resulted in 73% accuracy, which was statistically significantly different from the accuracy of standard nonmultiuser interfaces, as the result of a two-sample t-test at a significance level of 1%.

  6. Automatic extraction of initial moving object based on advanced feature and video analysis

    NASA Astrophysics Data System (ADS)

    Liu, Mao-Ying; Dai, Qiong-Hai; Liu, Xiao-Dong; Er, Gui-Hua

    2005-07-01

    Traditionally, video segmentation extracts objects using low-level features such as color, texture, edge, motion, and optical flow. This paper proposes that the connectivity of object motion is an advanced feature of a video moving object, because it can reflect the semantic meaning of the object to some extent, and it can be fully represented on a cumulated difference image, which is the combination of a certain number of interframe difference images. Based on this principle, a novel system is designed to extract the initial moving object automatically. The system includes three key innovations: 1) The system operates on the cumulated difference image, which makes the object more prominent than the background noise. Object extraction is based on the connectivity of object motion, which guarantees the integrity of the extracted object while eliminating big background regions that cannot be removed by conventional change detection methods, for example, intense-noise regions and shadow regions that are not connected tightly to the object. 2) Video sequence analysis is performed ahead of video segmentation, and proper object extraction methods are adopted according to the characteristics of background noise and object motion. 3) An adaptive threshold is automatically determined on the cumulated difference image after acute noise is removed. The threshold determined here is more reasonable, and with it most noise can be eliminated while small-motion regions of the object are preserved. Results show that this system can extract objects in different kinds of sequences automatically, promptly and properly, making it very suitable for real-time video applications.
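
    A compact sketch of the cumulated-difference idea follows; the adaptive threshold and largest-connected-component rule here are simplifications of the paper's connectivity analysis:

```python
# Hedged sketch: sum several interframe differences, threshold adaptively,
# and keep the largest connected region as the initial moving object.
import numpy as np
from scipy import ndimage

def initial_object_mask(frames, k=2.5):
    """frames: (T, H, W) grayscale sequence -> boolean object mask."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    cum = diffs.sum(axis=0)                       # cumulated difference image
    thr = cum.mean() + k * cum.std()              # adaptive threshold (our rule)
    labels, n = ndimage.label(cum > thr)
    if n == 0:
        return np.zeros(cum.shape, bool)
    sizes = ndimage.sum(np.ones_like(labels), labels, np.arange(1, n + 1))
    return labels == (1 + np.argmax(sizes))       # largest connected motion region

T, H, W = 12, 64, 64
frames = np.random.default_rng(0).normal(0, 2, (T, H, W))
for t in range(T):                                # a block moving rightward
    frames[t, 28:36, 10 + 2 * t:18 + 2 * t] += 40
print(initial_object_mask(frames).sum(), "object pixels")
```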

  7. An Improved Approach of Mesh Segmentation to Extract Feature Regions.

    PubMed

    Gu, Minghui; Duan, Liming; Wang, Maolin; Bai, Yang; Shao, Hui; Wang, Haoyu; Liu, Fenglin

    2015-01-01

    The objective of this paper is to extract concave and convex feature regions by segmenting the surface mesh of a mechanical part whose surface geometry exhibits drastic variations and whose concave and convex features are equally important when modeling. Referring to the original approach based on the minima rule (MR) in cognitive science, we created a revised minima rule (RMR) and present an improved approach based on RMR. Using a logarithmic function of the minimum curvatures, normalized by their expectation and standard deviation over the vertices of the mesh, we determined the solution formulas for the feature vertices according to RMR. Because only a small range of threshold parameters needs to be selected in the determined formulas, an iterative process was implemented to realize the automatic selection of thresholds. Finally, according to the obtained feature vertices, the feature edges and facets were obtained by growing neighbors. The improved approach overcomes the inherent inadequacies of the original approach for our objective, realizes full automation without setting parameters, and obtains better results compared with the latest conventional approaches. We demonstrated the feasibility and superiority of our approach through experimental comparisons. PMID:26436657

  8. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique used to identify individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. As iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, unlike fingerprints, which can be altered by several factors including accidental damage, dry or oily skin, and dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its demanding requirements, such as camera resolution, hardware size, expensive equipment and computational complexity. However, at the present time, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing, and matching. In this paper, we adopted a directional high-low pass filter for feature extraction, and a box-counting fractal dimension and iris code are proposed as feature representations. Our approach has been tested on the CASIA iris image database and the results are considered successful.
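
    The box-counting fractal dimension used as a feature representation can be sketched as follows; the toy binarized texture and power-of-two box sizes are our assumptions:

```python
# Hedged sketch: box-counting fractal dimension of a binary texture map.
import numpy as np

def box_counting_dimension(mask):
    """mask: square binary image whose side is a power of two."""
    sizes, counts = [], []
    s = mask.shape[0] // 2
    while s >= 1:
        # count boxes of side s containing at least one foreground pixel
        view = mask.reshape(mask.shape[0] // s, s, mask.shape[1] // s, s)
        counts.append(view.any(axis=(1, 3)).sum())
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope    # slope of log N(s) vs log(1/s) is the dimension estimate

texture = np.random.default_rng(0).random((128, 128)) > 0.7  # toy binary texture
print(box_counting_dimension(texture))
```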

  9. Linear unmixing of hyperspectral signals via wavelet feature extraction

    NASA Astrophysics Data System (ADS)

    Li, Jiang

    A pixel in remotely sensed hyperspectral imagery is typically a mixture of multiple electromagnetic radiances from various ground cover materials. Spectral unmixing is a quantitative analysis procedure used to recognize constituent ground cover materials (or endmembers) and obtain their mixing proportions (or abundances) from a mixed pixel. The abundances are typically estimated using the least squares estimation (LSE) method based on the linear mixture model (LMM). This dissertation provides a complete investigation of how the use of appropriate features can improve the LSE of endmember abundances from remotely sensed hyperspectral signals. The dissertation shows how features based on signal classification approaches, such as the discrete wavelet transform (DWT), outperform features based on conventional signal representation methods for dimensionality reduction, such as principal component analysis (PCA), for the LSE of endmember abundances. Both experimental and theoretical analyses are reported. A DWT-based linear unmixing system is designed specifically for abundance estimation: it utilizes the DWT as a pre-processing step for feature extraction and, based on the DWT-based features, applies constrained LSE for abundance estimation. Experimental results show that the use of DWT-based features reduces the abundance estimation deviation by 30-50% on average, compared to the use of original hyperspectral signals or conventional PCA-based features. Based on the LMM and the LSE method, a series of theoretical analyses are derived to reveal the fundamental reasons why the use of appropriate features, such as DWT-based features, can improve the LSE of endmember abundances. Under reasonable assumptions, the dissertation derives a generalized mathematical relationship between the abundance estimation error and the endmember separability. It is proven that the abundance estimation error can be reduced through increasing the endmember separability.
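
    A minimal sketch of the pipeline: DWT approximation coefficients as features, then constrained least squares for abundances, with non-negativity enforced via NNLS (the dissertation's full constraint set may differ; endmembers and noise level below are toys):

```python
# Hedged sketch: wavelet features + non-negative least squares unmixing.
import numpy as np
import pywt
from scipy.optimize import nnls

def dwt_features(spectrum, wavelet="db4", level=3):
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    return coeffs[0]                          # low-pass approximation as features

bands = 128
x = np.linspace(0, 1, bands)
endmembers = np.stack([np.exp(-((x - c) / 0.15) ** 2) for c in (0.3, 0.5, 0.8)])
true_abund = np.array([0.5, 0.3, 0.2])
pixel = true_abund @ endmembers + 0.01 * np.random.default_rng(0).normal(size=bands)

E = np.stack([dwt_features(e) for e in endmembers]).T   # features x endmembers
y = dwt_features(pixel)
abund, _ = nnls(E, y)                          # non-negative abundance estimate
print((abund / abund.sum()).round(3))          # approx recovered proportions
```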

  10. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    PubMed

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra is proposed in this study. At the same time, an improved differential evolution (DE) feature selection method is proposed to address the resulting high dimensionality. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and of a traditional feature extraction method. PMID:25868233

  11. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    NASA Astrophysics Data System (ADS)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and related technology has had a major impact, providing a new kind of business called e-commerce. Many e-commerce sites provide convenient transactions, and consumers can also provide reviews or opinions on the products they purchase. These opinions can be used by both consumers and producers: consumers can learn the advantages and disadvantages of particular features of a product, while producers can analyse the strengths and weaknesses of their own products as well as competitors' products. With so many opinions, a method is needed so that a reader can grasp the point of the opinions as a whole. The idea emerged from review summarization, which summarizes the overall opinion based on the sentiment and features it contains. In this study, the main focus is the digital camera domain. This research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of a product; 3) identifying whether an opinion is positive or negative; 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification; a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP); and a knowledge-based dictionary, which is useful for handling implicit features. The end result of the research is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, sentiment classification accuracy is 81.2% for positive test data and 80.2% for negative test data, and feature extraction accuracy reaches 90.3%.
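
    The sentiment classification stage can be sketched with a standard Naive Bayes text pipeline; the dependency-based feature extraction and implicit-feature dictionary of the paper are not reproduced here, and the tiny review set is illustrative:

```python
# Hedged sketch: Naive Bayes sentiment classification of short camera reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["great zoom and sharp pictures", "battery life is terrible",
           "autofocus is fast and accurate", "poor build quality, broke fast"]
labels = ["pos", "neg", "pos", "neg"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(reviews, labels)
print(clf.predict(["the zoom is great but battery is poor"]))
```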

  12. Extract relevant features from DEM for groundwater potential mapping

    NASA Astrophysics Data System (ADS)

    Liu, T.; Yan, H.; Zhai, L.

    2015-06-01

    Multi-criteria evaluation (MCE) methods have been widely applied in groundwater potential mapping research, but in data-scarce areas they encounter many problems due to limited data. The Digital Elevation Model (DEM) is a digital representation of the topography and has many applications in various fields. Previous research has shown that much information relevant to groundwater potential mapping (such as geological features, terrain features, hydrological features, etc.) can be extracted from DEM data, which makes using DEM data for groundwater potential mapping feasible. In this research, DEM data, one of the most widely used and most easily accessed data types in GIS, was used to extract information for groundwater potential mapping in the Battle River basin in Alberta, Canada. First, five determining factors for groundwater potential mapping were put forward based on previous studies (lineaments and lineament density, drainage networks and their density, topographic wetness index (TWI), relief, and convergence index (CI)). Extraction methods for the five determining factors from the DEM were put forward, and thematic maps were produced accordingly. A cumulative effects matrix was used for weight assignment, and a multi-criteria evaluation process was carried out with ArcGIS software to delineate the groundwater potential map. The final groundwater potential map was divided into five categories, viz., non-potential, poor, moderate, good, and excellent zones. Eventually, the success rate curve was drawn and the area under the curve (AUC) was computed for validation. The validation result showed that the success rate of the model was 79%, confirming the method's feasibility. The method affords a new way to approach groundwater management research in areas suffering from data scarcity, and also broadens the application area of DEM data.
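
    The topographic wetness index, one of the five factors, is TWI = ln(a / tan(beta)). A sketch on a toy DEM, taking flow accumulation as given (real workflows derive it with D8 or D-infinity flow routing):

```python
# Hedged sketch: topographic wetness index from a DEM-derived slope and a
# supplied flow-accumulation grid; the toy DEM and accumulation are ours.
import numpy as np

def twi(dem, flow_acc, cell=30.0, eps=1e-6):
    dzdy, dzdx = np.gradient(dem, cell)          # elevation gradients
    tan_beta = np.hypot(dzdx, dzdy)              # slope as rise/run
    a = (flow_acc + 1.0) * cell                  # specific catchment area proxy
    return np.log(a / (tan_beta + eps))

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:50, 0:50]
dem = 100.0 - 0.5 * yy + rng.normal(0.0, 0.2, (50, 50))  # south-sloping toy DEM
flow_acc = yy.astype(float)                      # toy: accumulation grows downslope
print(twi(dem, flow_acc)[25, :5].round(2))
```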

  13. Feature Extraction from Subband Brain Signals and Its Classification

    NASA Astrophysics Data System (ADS)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers both non-stationarity and independence/uncorrelatedness criteria, along with an asymmetry ratio, for electroencephalogram (EEG) signals, and proposes a hybrid signal preprocessing approach applied before feature extraction. A filter bank implementation of the discrete wavelet transform (DWT) is used to exploit the non-stationary characteristics of the EEG signals: it decomposes the raw EEG signals into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order statistics based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the orthogonality and orthonormality criteria on the whitening matrix and separating matrix, respectively, no longer hold. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between the two classes. The alpha/beta band asymmetry ratio between the separating matrices of the left and right classes provides the condition for selecting an appropriate multiplier, so we modify the estimated separating matrix by an appropriate multiplier in order to obtain the required asymmetry, and we extend the AMUSE algorithm to the subband domain. The desired subband is then processed with the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-component sources are further subjected to feature extraction (power spectral density) followed by linear discriminant analysis (LDA).
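
    For reference, a compact sketch of the AMUSE algorithm itself (whiten, then eigendecompose a symmetrized time-lagged covariance); the paper's subband modification and asymmetry multiplier are not included:

```python
# Hedged sketch of AMUSE, a second-order-statistics BSS algorithm.
import numpy as np

def amuse(X, lag=1):
    """X: (channels, samples) zero-mean signals -> separating matrix W."""
    C0 = np.cov(X)
    d, E = np.linalg.eigh(C0)
    Q = E @ np.diag(1.0 / np.sqrt(d)) @ E.T        # whitening matrix
    Z = Q @ X
    C1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C1 = 0.5 * (C1 + C1.T)                          # symmetrize lagged covariance
    _, V = np.linalg.eigh(C1)
    return V.T @ Q                                  # separating matrix W

rng = np.random.default_rng(0)
t = np.arange(5000)
S = np.stack([np.sin(0.02 * t), np.sign(np.sin(0.05 * t))])   # two toy sources
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S + 0.01 * rng.normal(size=(2, 5000))
X -= X.mean(axis=1, keepdims=True)
Y = amuse(X) @ X      # recovered sources, up to permutation and scaling
```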

  14. Feature Extraction and Analysis of Breast Cancer Specimen

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Debnath; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    In this paper, we propose a method to identify abnormal growth of cells in breast tissue and to suggest further pathological tests if necessary. We compare normal breast tissue with malignant invasive breast tissue through a series of image processing steps. Normal ductal epithelial cells and ductal/lobular invasive carcinogenic cells are also considered for comparison. In effect, features of cancerous (invasive) breast tissue are extracted and analysed against normal breast tissue. We also suggest a breast cancer recognition technique based on image processing, and discuss prevention by controlling p53 gene mutation to some extent.

  15. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two dimensional (2D) mel and Mellin-cepstrum computation from popcorn kernel images. Cepstral features that were extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41% and the recognition rate for damaged kernels was found to be 89.43%.
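
    A plain 2-D cepstrum, the common core of the mel- and Mellin-cepstrum features, is the inverse FFT of the log magnitude spectrum; a minimal sketch (the paper's variants add frequency warping before this step):

```python
# Hedged sketch: 2-D cepstrum of an image patch; low-quefrency coefficients
# can serve as texture features for an SVM.
import numpy as np

def cepstrum_2d(img, eps=1e-8):
    spectrum = np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.log(np.abs(spectrum) + eps)))

patch = np.random.default_rng(0).random((64, 64))   # toy kernel image patch
features = cepstrum_2d(patch)[:8, :8].ravel()       # keep low-quefrency block
print(features.shape)
```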

  16. Visual-adaptation-mechanism based underwater object extraction

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Wang, Huibin; Xu, Lizhong; Shen, Jie

    2014-03-01

    Due to the major obstacles originating from the strong light absorption and scattering in a dynamic underwater environment, underwater optical information acquisition and processing suffer from effects such as limited range, non-uniform lighting, low contrast, and diminished colors, making them a bottleneck for marine scientific research and projects. After studying and generalizing the underwater biological visual mechanism, we explore its advantages in light adaptation, which helps animals precisely sense the underwater scene and recognize their prey or enemies. Then, aiming to transfer the significant advantage of the visual adaptation mechanism to underwater computer vision tasks, a novel knowledge-based information weighting fusion model is established for underwater object extraction. With this bionic model, dynamic adaptability is given to the underwater object extraction task, making it more robust to the variability of optical properties in different environments. The capability of the proposed method to adapt to underwater optical environments is shown, and its superior performance for object extraction is demonstrated by comparison experiments.

  17. Feature Space Mapping as a universal adaptive system

    NASA Astrophysics Data System (ADS)

    Duch, Włodzisław; Diercksen, Geerd H. F.

    1995-06-01

    The most popular realizations of adaptive systems are based on neural network types of algorithms, in particular feedforward multilayered perceptrons trained by backpropagation of error procedures. In this paper an alternative approach based on multidimensional separable localized functions centered at the data clusters is proposed. In comparison with neural networks that use delocalized transfer functions, this approach allows for full control of the basins of attraction of all stationary points. Slow learning procedures are replaced by the explicit construction of the landscape function, followed by the optimization of adjustable parameters using gradient techniques or genetic algorithms. Retrieving information does not require searches in multidimensional subspaces but is factorized into a series of one-dimensional searches. Feature Space Mapping is applicable to learning not only from facts but also from general laws, and may be treated as a fuzzy expert system (neurofuzzy system). The number of nodes (fuzzy rules) grows as the network creates new nodes for novel data, but the search time is sublinear in the number of rules or data clusters stored. Such a system may work as a universal classifier, approximator and reasoning system. Examples of applications to the identification of spectra (classification), intelligent databases (association) and the analysis of simple electrical circuits (expert system type) are given.

  1. Road marking features extraction using the VIAPIX® system

    NASA Astrophysics Data System (ADS)

    Kaddah, W.; Ouerhani, Y.; Alfalou, A.; Desthieux, M.; Brosseau, C.; Gutierrez, C.

    2016-07-01

    Precise extraction of road marking features is a critical task for autonomous urban driving, augmented driver assistance, and robotics technologies. In this study, we consider an autonomous system that performs lane detection on marked urban roads and analyzes lane features. The task is to georeference road markings from images obtained using the VIAPIX® system. Based on inverse perspective mapping and color segmentation to detect all white objects existing on the road, the present algorithm enables us to examine these images automatically and rapidly and to obtain information on road marks, their surface conditions, and their georeferencing. The algorithm detects all road markings and identifies some of them by making use of a phase-only correlation filter (POF). We illustrate the algorithm and its robustness by applying it to a variety of relevant scenarios.

  2. Texture Feature Extraction and Classification for Iris Diagnosis

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Li, Naimin

    Applying computer-aided techniques to iris image processing, and combining occidental iridology with traditional Chinese medicine, is a challenging research area in digital image processing and artificial intelligence. This paper proposes an iridology model that consists of iris image pre-processing, texture feature analysis and disease classification. For pre-processing, a two-step iris localization approach is proposed; a 2-D Gabor filter based texture analysis and a texture fractal dimension estimation method are proposed for pathological feature extraction; and finally, support vector machines are constructed to recognize two typical conditions, alimentary canal disease and nervous system disease. Experimental results show that the proposed iridology diagnosis model is quite effective and promising for medical diagnosis and health surveillance, for both hospital and public use.

  3. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. We then use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results. PMID:27656199
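
    The bipartite-graph step can be approximated with off-the-shelf tools. The sketch below, assuming a synthetic co-occurrence matrix between a static-feature vocabulary (rows) and a motion-feature vocabulary (columns), uses scikit-learn's spectral co-clustering as a stand-in for the k-way bipartite partition described above.

    import numpy as np
    from sklearn.cluster import SpectralCoclustering

    rng = np.random.default_rng(0)
    # Rows: 200 static-feature words; columns: 150 motion-feature words.
    cooc = rng.poisson(1.0, size=(200, 150)).astype(float) + 1e-3

    model = SpectralCoclustering(n_clusters=10, random_state=0).fit(cooc)
    static_word_cluster = model.row_labels_     # new "relational" codeword ids
    motion_word_cluster = model.column_labels_
    print(np.bincount(static_word_cluster))     # codeword sizes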

  4. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    PubMed

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not meet normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performance of the SIFT operator has been compared with that provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed in order to improve the performance of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336
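
    For reference, tie-point extraction with a standard (not auto-adaptive) SIFT implementation looks roughly as follows; this sketch uses OpenCV rather than the authors' A² SIFT code, and a blurred-noise image pair stands in for real aerial imagery.

    import cv2
    import numpy as np

    # Synthetic textured scene and a shifted crop stand in for an image pair.
    rng = np.random.default_rng(0)
    scene = cv2.GaussianBlur((rng.random((400, 400)) * 255).astype(np.uint8), (0, 0), 2)
    img1, img2 = scene[:350, :350], scene[30:380, 30:380]

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Lowe's ratio test keeps only distinctive matches (candidate tie points).
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    tie_points = [m for m, n in matches if m.distance < 0.8 * n.distance]
    print(len(tie_points), "candidate tie points")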

  5. Nonlinear feature extraction using kernel principal component analysis with non-negative pre-image.

    PubMed

    Kallas, Maya; Honeine, Paul; Richard, Cedric; Amoud, Hassan; Francis, Clovis

    2010-01-01

    The inherent physical characteristics of many real-life phenomena, including biological and physiological aspects, require adapted nonlinear tools. Moreover, the additive nature of some situations involves solutions expressed as positive combinations of data. In this paper, we propose a nonlinear feature extraction method with a non-negativity constraint. To this end, kernel principal component analysis is considered to define the most relevant features in the reproducing kernel Hilbert space. These features are the nonlinear principal components with high-order correlations between input variables. A pre-image technique is required to get back to the input space. Under the non-negativity constraint, we show that one can solve the pre-image problem efficiently using a simple iterative scheme. Furthermore, the constrained solution contributes to the stability of the algorithm. Experimental results on event-related potentials (ERP) illustrate the efficiency of the proposed method.
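
    A minimal sketch of the approach, assuming a Gaussian kernel, uncentered kernel PCA (centering is omitted for brevity) and a projected fixed-point iteration as a simplified stand-in for the paper's constrained pre-image scheme:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.abs(rng.normal(size=(100, 5)))      # non-negative training data
    gamma = 0.5

    def k(A, b):                               # Gaussian kernel against one point
        return np.exp(-gamma * ((A - b) ** 2).sum(1))

    # Uncentered kernel PCA via eigendecomposition of the Gram matrix.
    K = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))
    w, V = np.linalg.eigh(K)
    alpha = V[:, -3:] / np.sqrt(w[-3:])        # top-3 component expansions

    def nonneg_preimage(x0, n_iter=50):
        beta = alpha @ (alpha.T @ k(X, x0))    # expansion weights of the projection
        x = x0.copy()
        for _ in range(n_iter):
            wgt = beta * k(X, x)
            # Fixed-point update, clipped to the non-negative orthant.
            x = np.maximum(wgt @ X / (wgt.sum() + 1e-12), 0.0)
        return x

    print(nonneg_preimage(X[0]).round(3))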

  6. Automatic feature extraction in neural network noniterative learning

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Lun J.

    1997-04-01

    It is proved analytically that, whenever the input-output mapping of a one-layered, hard-limited perceptron satisfies a positive, linear independency (PLI) condition, the connection matrix A meeting this mapping can be obtained noniteratively, in one step, from an algebraic matrix equation containing an N × M input matrix U. Each column of U is a given standard pattern vector, and there are M standard patterns to be classified. It is also proved analytically that sorting out all nonsingular sub-matrices Uk in U can be used as an automatic feature extraction process in this noniterative-learning system. This paper reports the theoretical derivation, the design, and experiments of a superfast-learning, optimally robust, neural network pattern recognition system utilizing this novel feature extraction process. An unedited video movie showing the speed of learning and the robustness in recognition of this novel pattern recognition system is demonstrated live. Comparison to other neural network pattern recognition systems is discussed.

  7. Wavelet based feature extraction and visualization in hyperspectral tissue characterization

    PubMed Central

    Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes

    2014-01-01

    Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally effective, and can be implemented in real-time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed, and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a significant correlation (p < 0.02) between the chosen tissue parameters and the selected wavelet components. The tissue parameters could be mapped using a subset of the calculated components due to redundancy in spectral information. Vessel structures are well visualized. Wavelet analysis appears to be a promising tool for extraction of spectral features in skin. Future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition. PMID:25574437
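
    The coefficient-selection step can be sketched as follows with PyWavelets: decompose each reflectance spectrum, then rank wavelet components by their correlation with a tissue parameter. The spectra and the oxygenation values below are synthetic stand-ins.

    import numpy as np
    import pywt
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    oxygenation = rng.uniform(0.4, 1.0, size=60)          # stand-in tissue parameter
    wavelength = np.linspace(450, 750, 256)
    spectra = (np.exp(-((wavelength - 560) / 30) ** 2)[None, :] * oxygenation[:, None]
               + 0.01 * rng.normal(size=(60, 256)))

    # Decompose each spectrum and stack all coefficients into one vector.
    coeffs = np.array([np.concatenate(pywt.wavedec(s, "db4", level=4)) for s in spectra])
    r = np.array([pearsonr(coeffs[:, j], oxygenation)[0] for j in range(coeffs.shape[1])])
    best = np.argsort(-np.abs(r))[:5]
    print("most correlated wavelet components:", best, r[best].round(2))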

  8. Extraction of sandy bedforms features through geodesic morphometry

    NASA Astrophysics Data System (ADS)

    Debese, Nathalie; Jacq, Jean-José; Garlan, Thierry

    2016-09-01

    State-of-the-art echosounders reveal fine-scale details of mobile sandy bedforms, which are commonly found on continental shelves. At present, their dynamics are still far from being completely understood. These bedforms are a serious threat to navigation safety and to anthropic structures and activities, placing emphasis on research breakthroughs. Bedform geometries and their dynamics are closely linked; therefore, one approach is to develop semi-automatic tools aimed at extracting their structural features from bathymetric datasets. Current approaches mimic manual processes or rely on morphological simplification of bedforms. Such 1D and 2D approaches cannot address the wide range of both types and complexities of bedforms. In contrast, this work follows a 3D global semi-automatic approach based on a bathymetric TIN. The currently extracted primitives are the salient ridge and valley lines of the sand structures, i.e., waves and mega-ripples. The main difficulty is eliminating the ripples, which are found to heavily overprint any observations. To this end, an anisotropic filter that is able to discard these structures while still enhancing the wave ridges is proposed. The second part of the work addresses the semi-automatic interactive extraction and 3D augmented display of the main line structures. The proposed protocol also allows geoscientists to interactively insert topological constraints.

  9. Deep PDF parsing to extract features for detecting embedded malware.

    SciTech Connect

    Munson, Miles Arthur; Cross, Jesse S.

    2011-09-01

    The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study, we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware. We will call these indicators features throughout

  10. Feature extraction of kernel regress reconstruction for fault diagnosis based on self-organizing manifold learning

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Xu, Guanghua; Liu, Dan

    2013-09-01

    The feature space extracted from vibration signals with various faults is often nonlinear and of high dimension. Nonlinear dimensionality reduction methods, such as manifold learning, are available for extracting low-dimensional embeddings. However, these methods all depend on manual intervention and have shortcomings in stability and in suppressing disturbance noise. To extract features automatically, a manifold learning method with self-organizing mapping is introduced for the first time. Under the non-uniform sample distribution reconstructed in phase space, an expectation maximization (EM) iteration algorithm is used to divide the local neighborhoods adaptively, without manual intervention. After that, the local tangent space alignment (LTSA) algorithm is adopted to compress the high-dimensional phase space into a more truthful low-dimensional representation. Finally, the signal is reconstructed by kernel regression. Several typical cases, including the Lorenz system, an engine fault with a piston pin defect, and a bearing fault with an outer-race defect, are analyzed. Compared with LTSA and the continuous wavelet transform, the results show that the background noise can be fully restrained and the entire periodic repetition of impact components is well separated and identified. A new way to automatically and precisely extract impulsive components from mechanical signals is proposed.
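
    A hedged sketch of the dimensionality-reduction step: local tangent space alignment applied to a delay-embedded signal, using scikit-learn's LTSA implementation in place of the authors' self-organizing neighborhood division; the signal is a synthetic stand-in.

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    signal = np.sin(0.05 * t) + 0.3 * rng.normal(size=t.size)   # stand-in signal

    # Phase-space reconstruction by delay embedding (dimension 10, delay 5).
    dim, tau = 10, 5
    idx = np.arange(len(signal) - dim * tau)
    embedded = np.stack([signal[idx + i * tau] for i in range(dim)], axis=1)

    ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa")
    low_dim = ltsa.fit_transform(embedded[:500])   # subsampled for speed
    print(low_dim.shape)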

  11. Transmission line icing prediction based on DWT feature extraction

    NASA Astrophysics Data System (ADS)

    Ma, T. N.; Niu, D. X.; Huang, Y. L.

    2016-08-01

    Transmission line icing prediction is a precondition for safe network operation as well as an important basis for the prevention of freezing disasters. In order to improve the prediction accuracy of icing, a transmission line icing prediction model based on discrete wavelet transform (DWT) feature extraction was built. In this method, a group of high- and low-frequency signals was obtained by DWT decomposition and then fitted and predicted using a partial least squares regression model (PLS) and a wavelet least squares support vector machine (w-LSSVM). Finally, the final icing prediction was obtained by adding the predicted values of the high- and low-frequency signals. The results showed that the method is effective and feasible for the prediction of transmission line icing.
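
    A minimal sketch of the hybrid scheme, with scikit-learn's SVR standing in for the paper's w-LSSVM and a one-level PyWavelets DWT: each component series is forecast from its own lagged values, and the component forecasts are summed as described above (a faithful signal reconstruction would use the inverse DWT). The series is synthetic.

    import numpy as np
    import pywt
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 512)) + 0.2 * rng.normal(size=512)  # stand-in

    def lagged(x, n_lags=8):
        X = np.stack([x[i:len(x) - n_lags + i] for i in range(n_lags)], axis=1)
        return X, x[n_lags:]

    approx, detail = pywt.dwt(series, "db4")       # low- and high-frequency parts
    forecasts = []
    for comp, model in [(approx, PLSRegression(n_components=4)), (detail, SVR())]:
        X, y = lagged(comp)
        model.fit(X[:-1], y[:-1])                  # hold out the last step
        forecasts.append(float(np.ravel(model.predict(X[-1:]))[0]))
    print("summed component forecasts:", sum(forecasts))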

  12. A Study of Feature Extraction Using Divergence Analysis of Texture Features

    NASA Technical Reports Server (NTRS)

    Hallada, W. A.; Bly, B. G.; Boyd, R. K.; Cox, S.

    1982-01-01

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution remotely sensed imagery (10 meters) is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment (inertia), correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee, using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories: forest, new residential, old residential, and industrial, for each variation in texture parameters.
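
    The SGTD matrix and the derived measures correspond to the gray-level co-occurrence matrix of scikit-image; a minimal sketch for one moving window, with the gray-level count, distance and angles as the varied parameters:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    window = rng.integers(0, 16, size=(33, 33), dtype=np.uint8)  # one moving window

    glcm = graycomatrix(window, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=16, symmetric=True, normed=True)
    for prop in ("contrast", "correlation", "energy", "homogeneity"):
        print(prop, graycoprops(glcm, prop).mean().round(3))  # mean over angles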

  13. STATISTICAL BASED NON-LINEAR MODEL UPDATING USING FEATURE EXTRACTION

    SciTech Connect

    Schultz, J.F.; Hemez, F.M.

    2000-10-01

    This research presents a new method to improve analytical model fidelity for non-linear systems. The approach investigates several mechanisms to assist the analyst in updating an analytical model based on experimental data and statistical analysis of parameter effects. The first is a new approach to data reduction called feature extraction. This is an expansion of the update metrics to include specific phenomena or characteristics of the response that are critical to model application; it extends the classical linear updating paradigm of utilizing the eigen-parameters or FRFs to include such devices as peak acceleration, time of arrival or standard deviation of model error. The next expansion of the updating process is the inclusion of statistically based parameter analysis to quantify the effects of uncertain or significant-effect parameters in the construction of a meta-model. This provides indicators of the statistical variation associated with parameters as well as confidence intervals on the coefficients of the resulting meta-model. Also included in this method is an investigation of linear parameter effect screening using a partial factorial variable array for simulation, intended to aid the analyst in eliminating from the investigation the parameters that do not have a significant variation effect on the feature metric. Finally, the ability of the model to replicate the measured response variation is examined.

  14. Texture features analysis for coastline extraction in remotely sensed images

    NASA Astrophysics Data System (ADS)

    De Laurentiis, Raimondo; Dellepiane, Silvana G.; Bo, Giancarlo

    2002-01-01

    The accurate knowledge of the shoreline position is of fundamental importance in several applications, such as cartography and ship positioning. Moreover, the coastline can be seen as a relevant parameter for monitoring coastal zone morphology, as it allows the retrieval of a much more precise digital elevation model of the entire coastal area. The study that has been carried out focuses on the development of a reliable technique for the detection of coastlines in remotely sensed images. An innovative approach based on the concepts of fuzzy connectivity and texture feature extraction has been developed for locating the shoreline. The system has been tested on several kinds of images, such as SPOT and LANDSAT, and the results obtained are good. Moreover, the algorithm has been tested on a sample of a SAR interferogram. The breakthrough consists in the fact that coastline detection is seen as an important feature in the framework of digital elevation model (DEM) retrieval. In particular, the coast can be seen as a boundary line beyond which all data (those representing the sea) are not significant. The processing for the digital elevation model can then be refined by considering only the in-land data.

  15. Pomegranate peel and peel extracts: chemistry and food features.

    PubMed

    Akhtar, Saeed; Ismail, Tariq; Fraternale, Daniele; Sestili, Piero

    2015-05-01

    The present review focuses on the nutritional, functional and anti-infective properties of pomegranate (Punica granatum L.) peel (PoP) and peel extract (PoPx) and on their applications as food additives, functional food ingredients or biologically active components in nutraceutical preparations. Due to their well-known ethnomedical relevance and chemical features, the biomolecules available in PoP and PoPx have been proposed, for instance, as substitutes of synthetic food additives, as nutraceuticals and chemopreventive agents. However, because of their astringency and anti-nutritional properties, PoP and PoPx are not yet considered as ingredients of choice in food systems. Indeed, considering the prospects related to both their health promoting activity and chemical features, the nutritional and nutraceutical potential of PoP and PoPx seems to be still underestimated. The present review meticulously covers the wide range of actual and possible applications (food preservatives, stabilizers, supplements, prebiotics and quality enhancers) of PoP and PoPx components in various food products. Given the overall properties of PoP and PoPx, further investigations in toxicological and sensory aspects of PoP and PoPx should be encouraged to fully exploit the health promoting and technical/economic potential of these waste materials as food supplements. PMID:25529700

  16. Extraction of Molecular Features through Exome to Transcriptome Alignment

    PubMed Central

    Mudvari, Prakriti; Kowsari, Kamran; Cole, Charles; Mazumder, Raja; Horvath, Anelia

    2014-01-01

    Integrative Next Generation Sequencing (NGS) DNA and RNA analyses have very recently become feasible, and the studies published to date have discovered critical disease-implicated pathways and diagnostic and therapeutic targets. A growing number of exomes, genomes and transcriptomes from the same individual are quickly accumulating, providing unique venues for mechanistic and regulatory feature analysis and, at the same time, requiring new exploration strategies. In this study, we have integrated variation and expression information from four NGS datasets from the same individual: normal and tumor breast exomes and transcriptomes. Focusing on SNP-centered variant allelic prevalence, we illustrate analytical algorithms that can be applied to extract or validate potential regulatory elements, such as expression or growth advantage, imprinting, loss of heterozygosity (LOH), somatic changes, and RNA editing. In addition, we point to some critical elements that might bias the output and recommend alternative measures to maximize the confidence of findings. The need for such strategies is especially recognized within the growing appreciation of the concept of systems biology: integrative exploration of genome and transcriptome features reveals mechanistic and regulatory insights that reach far beyond a linear addition of the individual datasets. PMID:24791251

  17. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept for a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear the less blurred prints. The de-smear algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology that uses a 2D STAR mother wavelet which can efficiently locate the fork feature anywhere on the fingerprint in parallel, independent of its scale, shift, and rotation. Such a combined system can achieve high data compression for transmission through a binary facsimile machine and, when combined with a tabletop computer, can realize an automatic fingerprint identification system (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given on how to reduce the crime rate through an upgrade of today's police office technology in the light of military expertise in ATR.

  18. A Joint Time-Frequency and Matrix Decomposition Feature Extraction Methodology for Pathological Voice Classification

    NASA Astrophysics Data System (ADS)

    Ghoraani, Behnaz; Krishnan, Sridhar

    2009-12-01

    The number of people affected by speech problems is increasing as the modern world places increasing demands on the human voice via mobile telephones, voice recognition software, and interpersonal verbal communication. In this paper, we propose a novel methodology for automatic pattern classification of pathological voices. The main contribution of this paper is the extraction of meaningful and unique features using an adaptive time-frequency distribution (TFD) and nonnegative matrix factorization (NMF). We construct the adaptive TFD as an effective signal analysis domain to dynamically track the nonstationarity in the speech and utilize NMF as a matrix decomposition (MD) technique to quantify the constructed TFD. The proposed method extracts meaningful and unique features from the joint TFD of the speech, and automatically identifies and measures the abnormality of the signal. Depending on the abnormality measure of each signal, we classify the signal as normal or pathological. The proposed method is applied to the Massachusetts Eye and Ear Infirmary (MEEI) voice disorders database, which consists of 161 pathological and 51 normal speakers, and an overall classification accuracy of 98.6% was achieved.
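
    A hedged sketch of the decomposition step: an ordinary spectrogram (standing in for the adaptive TFD of the paper) is factorized with scikit-learn's NMF, and the factor matrices are pooled into a feature vector; the one-second test signal is synthetic.

    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    fs = 8000
    t = np.arange(fs) / fs
    voice = np.sin(2 * np.pi * 150 * t) * (1 + 0.3 * rng.normal(size=fs))  # toy signal

    f, tt, S = spectrogram(voice, fs=fs, nperseg=256)   # non-negative TFD stand-in
    model = NMF(n_components=4, init="nndsvda", max_iter=500)
    W = model.fit_transform(S)          # spectral bases (frequency atoms)
    H = model.components_               # temporal activations
    features = np.concatenate([W.mean(axis=0), H.mean(axis=1)])
    print(features.round(3))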

  1. Feature extraction for change analysis in SAR time series

    NASA Astrophysics Data System (ADS)

    Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan

    2015-10-01

    In remote sensing, change detection represents a broad field of research. If time series data are available, change detection can be used for monitoring applications. These applications require regular image acquisitions at an identical time of day over a defined period. Among remote sensing sensors, radar is especially well suited to applications requiring regularity, since it is independent of most weather and atmospheric influences. Furthermore, regarding image acquisitions, the time of day plays no role due to the independence from daylight. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high resolution radar images suitable for the analysis of dense built-up areas. In a former study, we presented the change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of detected changes in the time series. This categorization is motivated by the fact that it is of limited value to state only where and when a specific area has changed; at least as important is the statement about what has caused the change. The focus is set on the analysis of so-called high activity areas (HAA), representing areas changing at least four times over the investigated period. As a first step in categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating at this object-based blob level, several features are extracted, comprising shape-based, radiometric, statistical and morphological values and one context feature based on a segmentation of the HAAs. This segmentation builds on morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water and unclassified. A specific HA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments. In combination, also surrounding GIS information

  2. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    SciTech Connect

    Paganelli, Chiara; Peroni, Marta

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  3. Adaptive-filter/feature-orthogonalization processing string for optimal LLRT mine classification in side-scan sonar imagery

    NASA Astrophysics Data System (ADS)

    Aridgides, Tom; Libera, Peter; Fernandez, Manuel F.; Dobeck, Gerald J.

    1996-05-01

    An automatic, robust, adaptive clutter suppression, mine detection and classification processing string has been developed and applied to side-scan sonar imagery data. The overall processing string includes data pre-processing, adaptive clutter filtering (ACF), 2D normalization, detection, feature extraction, and classification processing blocks. The data pre-processing block contains automatic gain control and data decimation processing. The ACF technique designs a 2D adaptive range-crossrange linear FIR filter which is optimal in the Least Squares sense, simultaneously suppressing the background clutter while preserving an average peak target signature (normalized shape) computed a priori using training set data. A multiple reference ACF algorithm version was utilized to account for multiple target shapes (due to different mine types, multiple target aspect angles, etc.). The detection block consists of thresholding, clustering of exceedances and limiting their number, and a secondary thresholding process. Following feature extraction, the classification block applies a novel transformation to the data, which orthogonalizes the features and enables an efficient application of the optimal log-likelihood-ratio-test (LLRT) classification rule. The utility of the overall processing string was demonstrated with two side-scan sonar data sets. The ACF/feature orthogonalization based LLRT mine classification processing string provided average probability of correct mine classification and false alarm rate performance similar to that obtained when utilizing an expert sonar operator.

  4. Computation as an emergent feature of adaptive synchronization

    NASA Astrophysics Data System (ADS)

    Zanin, M.; Papo, D.; Sendiña-Nadal, I.; Boccaletti, S.

    2011-12-01

    We report on the spontaneous emergence of computation from adaptive synchronization of networked dynamical systems. The fundamentals are nonlinear elements, interacting in a directed graph via a coupling that adapts itself to the synchronization level between two input signals. These units can emulate different Boolean logics, and perform any computational task in a Turing sense, each specific operation being associated with a given network's motif. The resilience of the computation against noise is proven, and the general applicability is demonstrated with regard to periodic and chaotic oscillators, and excitable systems mimicking neural dynamics.

  5. Dissociating Conflict Adaptation from Feature Integration: A Multiple Regression Approach

    ERIC Educational Resources Information Center

    Notebaert, Wim; Verguts, Tom

    2007-01-01

    Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on…

  6. Effects of face feature and contour crowding in facial expression adaptation.

    PubMed

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sad perception, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we went on to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding reduced the facial expression aftereffect (FEA) significantly. However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features. This suggests a nonlinear integration between facial features and face contour in face adaptation.

  7. Automated segmentation and feature extraction of product inspection items

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-03-01

    X-ray film and linescan images of pistachio nuts on conveyor trays for product inspection are considered. The final objective is the categorization of pistachios into good, blemished and infested nuts. A crucial step before classification is the separation of touching products and the extraction of features essential for classification. This paper addresses new detection and segmentation algorithms to isolate touching or overlapping items. These algorithms employ a new filter, a new watershed algorithm, and morphological processing to produce nutmeat-only images. Tests on a large database of x-ray film and real-time x-ray linescan images of around 2900 small, medium and large nuts showed excellent segmentation results. A new technique to detect and segment dark regions in nutmeat images is also presented and tested on approximately 300 x-ray film and approximately 300 real-time linescan x-ray images with 95-97 percent detection and correct segmentation. New algorithms are described that determine nutmeat fill ratio and locate splits in nutmeat. The techniques formulated in this paper are of general use in many different product inspection and computer vision problems.
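
    The separation of touching items is commonly done with a distance-transform watershed; a minimal generic sketch with scikit-image (not the authors' new filter and watershed variant) on two synthetic overlapping disks:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Two overlapping synthetic "nuts" in a binary image.
    yy, xx = np.ogrid[:80, :80]
    img = ((yy - 40) ** 2 + (xx - 30) ** 2 < 18 ** 2) | \
          ((yy - 40) ** 2 + (xx - 52) ** 2 < 18 ** 2)

    dist = ndi.distance_transform_edt(img)
    peaks = peak_local_max(dist, min_distance=10, num_peaks=2, labels=img.astype(int))
    markers = np.zeros(img.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-dist, markers, mask=img)
    print(np.unique(labels))            # 0 = background, 1 and 2 = separated items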

  8. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding and the periodic variation of load distribution and impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is out of the load zone. The non-stationary characteristic and the impulse missing phenomenon reduce the effectiveness of the commonly used demodulation method for rolling element bearing fault diagnosis. Based on sparse representation theory, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are directly identified from the fault signal by a correlation filtering method. This leads to a high similarity between atoms and defect-induced impulses, and also a sharp reduction in the redundancy of the dictionary. To improve the matching accuracy and the calculation speed of sparse coefficient solving, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearings under large rolling element sliding and low signal-to-noise ratio conditions.
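
    A minimal sketch of the dictionary and the sparse coding step: atoms are unit impulse responses of damped second-order systems on a coarse grid of natural frequencies and damping ratios (the paper instead identifies them by correlation filtering), and scikit-learn's orthogonal matching pursuit stands in for the segment-wise matching pursuit:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    fs, n = 12000, 600
    t = np.arange(n) / fs

    def atom(fn, zeta):
        # Unit impulse response of a damped second-order system, normalized.
        h = np.exp(-zeta * 2 * np.pi * fn * t) * np.sin(2 * np.pi * fn * t)
        return h / np.linalg.norm(h)

    freqs = np.linspace(500, 4000, 40)              # coarse grid (illustration only)
    zetas = [0.02, 0.05, 0.1]
    D = np.stack([atom(f0, z) for f0 in freqs for z in zetas], axis=1)

    rng = np.random.default_rng(0)
    signal = atom(2000.0, 0.05) + 0.2 * rng.normal(size=n)   # one noisy impulse

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(D, signal)
    print("active atoms:", np.flatnonzero(omp.coef_))
    impulse_estimate = D @ omp.coef_                 # denoised impulse reconstruction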

  9. Information Theoretic Extraction of EEG Features for Monitoring Subject Attention

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2000-01-01

    The goal of this project was to test the applicability of information theoretic learning (a feasibility study) to the development of new brain computer interfaces (BCI). The difficulty of BCI comes from several aspects: (1) the effective collection of signals related to cognition; (2) the preprocessing of these signals to extract the relevant information; (3) the pattern recognition methodology to reliably detect the signals related to cognitive states. We addressed only the last two aspects in this research. We started by evaluating an information theoretic measure of distance (Bhattacharyya distance) for BCI performance, with good predictive results. We also compared several features to detect the presence of event-related desynchronization (ERD) and synchronization (ERS), and concluded that, at least for now, bandpass filtering is the best compromise between simplicity and performance. Finally, we implemented several classifiers for temporal pattern recognition. We found that the performance of temporal classifiers is superior to static classifiers, but not by much. We conclude by stating that the future of BCI should be found in alternate approaches to sense, collect and process the signals created by populations of neurons. Towards this goal, cross-disciplinary teams of neuroscientists and engineers should be funded to approach BCIs from a much more principled viewpoint.

  10. Extraction of text-related features for condensing image documents

    NASA Astrophysics Data System (ADS)

    Bloomberg, Dan S.; Chen, Francine R.

    1996-03-01

    A system has been built that selects excerpts from a scanned document for presentation as a summary, without using character recognition. The method relies on the idea that the most significant sentences in a document contain words that are both specific to the document and have a relatively high frequency of occurrence within it. Accordingly, and entirely within the image domain, each page image is deskewed and the text regions are found and extracted as a set of textblocks. Blocks with font size near the median for the document are selected and then placed in reading order. The textlines and words are segmented, and the words are placed into equivalence classes of similar shape. The sentences are identified by finding baselines for each line of text and analyzing the size and location of the connected components relative to the baseline. Scores can then be given to each word, depending on its shape and frequency of occurrence, and to each sentence, depending on the scores for the words in the sentence. Other salient features, such as textblocks that have a large font or are likely to contain an abstract, can also be used to select image parts that are likely to be thematically relevant. The method has been applied to a variety of documents, including articles scanned from magazines and technical journals.

  11. Automated Tract Extraction via Atlas Based Adaptive Clustering

    PubMed Central

    Tunç, Birkan; Parker, William A.; Ingalhalikar, Madhura; Verma, Ragini

    2014-01-01

    Advancements in imaging protocols such as the high angular resolution diffusion-weighted imaging (HARDI) and in tractography techniques are expected to cause an increase in the tract-based analyses. Statistical analyses over white matter tracts can contribute greatly towards understanding structural mechanisms of the brain since tracts are representative of the connectivity pathways. The main challenge with tract-based studies is the extraction of the tracts of interest in a consistent and comparable manner over a large group of individuals without drawing the inclusion and exclusion regions of interest. In this work, we design a framework for automated extraction of white matter tracts. The framework introduces three main components, namely a connectivity based fiber representation, a fiber clustering atlas, and a clustering approach called Adaptive Clustering. The fiber representation relies on the connectivity signatures of fibers to establish an easy correspondence between different subjects. A group-wise clustering of these fibers that are represented by the connectivity signatures is then used to generate a fiber bundle atlas. Finally, Adaptive Clustering incorporates the previously generated clustering atlas as a prior, to cluster the fibers of a new subject automatically. Experiments on the HARDI scans of healthy individuals acquired repeatedly, demonstrate the applicability, the reliability and the repeatability of our approach in extracting white matter tracts. By alleviating the seed region selection or the inclusion/exclusion ROI drawing requirements that are usually handled by trained radiologists, the proposed framework expands the range of possible clinical applications and establishes the ability to perform tract-based analyses with large samples. PMID:25134977

  12. Efficient integration of spectral features for vehicle tracking utilizing an adaptive sensor

    NASA Astrophysics Data System (ADS)

    Uzkent, Burak; Hoffman, Matthew J.; Vodacek, Anthony

    2015-03-01

    Object tracking in urban environments is an important and challenging problem that is traditionally tackled using visible and near infrared wavelengths. By adding extended data such as the spectral features of the objects, one can improve the reliability of the identification process. However, the huge increase in data created by hyperspectral imaging is usually prohibitive. To overcome this complexity problem, we propose a persistent air-to-ground target tracking system inspired by a state-of-the-art, adaptive, multi-modal sensor. The adaptive sensor is capable of providing panchromatic images as well as the spectra of desired pixels. This addresses the data challenge of hyperspectral tracking by recording spectral data only as needed. Spectral likelihoods are integrated into a data association algorithm in a Bayesian fashion to minimize the likelihood of misidentification. A framework for controlling spectral data collection is developed by incorporating motion segmentation information and prior information from movement predictions of a Gaussian Sum filter (GSF) over a multi-model forecasting set. An intersection mask of the surveillance area is extracted from OpenStreetMap data and incorporated into the tracking algorithm to perform online refinement of the multiple-model set. The proposed system is tested using challenging and realistic scenarios generated in an adverse environment.

  13. Improving Naive Bayes with Online Feature Selection for Quick Adaptation to Evolving Feature Usefulness

    SciTech Connect

    Pon, R K; Cardenas, A F; Buttler, D J

    2007-09-19

    The definition of what makes an article interesting varies from user to user and continually evolves, even for a single user. As a result, for news recommendation systems, useless document features cannot be determined a priori, and all features are usually considered for interestingness classification. Consequently, the presence of currently useless features degrades classification performance [1], particularly over the initial set of news articles being classified. The initial set of documents is critical for a user when considering which particular news recommendation system to adopt. To address these problems, we introduce an improved version of the naive Bayes classifier with online feature selection. We use correlation to determine the utility of each feature and take advantage of the conditional independence assumption used by naive Bayes for online feature selection and classification. The augmented naive Bayes classifier performs 28% better than the traditional naive Bayes classifier in recommending news articles from the Yahoo! RSS feeds.
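
    A simplified stand-in for the idea, not the authors' exact estimator: per-feature utility is tracked as a running correlation between word counts and the interestingness label, low-utility features are masked out, and a multinomial naive Bayes model is updated online; the article stream is simulated.

    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    n_features = 1000
    stats = {"n": 0, "sx": np.zeros(n_features), "sx2": np.zeros(n_features),
             "sy": 0.0, "sy2": 0.0, "sxy": np.zeros(n_features)}
    nb = MultinomialNB()

    def update_and_mask(x, y, thresh=0.05):
        # Update running sums and return the mask of currently useful features.
        s = stats
        s["n"] += 1; s["sx"] += x; s["sx2"] += x * x
        s["sy"] += y; s["sy2"] += y * y; s["sxy"] += x * y
        n = s["n"]
        cov = s["sxy"] / n - (s["sx"] / n) * (s["sy"] / n)
        var = np.maximum((s["sx2"] / n - (s["sx"] / n) ** 2)
                         * (s["sy2"] / n - (s["sy"] / n) ** 2), 1e-12)
        return np.abs(cov / np.sqrt(var)) > thresh

    rng = np.random.default_rng(0)
    for step in range(200):                  # simulated article stream
        x = rng.poisson(0.1, n_features).astype(float)
        y = int(x[:10].sum() > 1)            # interest driven by 10 "real" words
        mask = update_and_mask(x, y)
        nb.partial_fit((x * mask)[None, :], [y], classes=[0, 1])
    print("features kept at the end:", int(mask.sum()))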

  14. [Classification technique for hyperspectral image based on subspace of bands feature extraction and LS-SVM].

    PubMed

    Gao, Heng-zhen; Wan, Jian-wei; Zhu, Zhen-zhen; Wang, Li-bao; Nian, Yong-jian

    2011-05-01

    The present paper proposes a novel hyperspectral image classification algorithm based on LS-SVM (least squares support vector machine). The LS-SVM uses features extracted from subspaces of bands (SOB), with the maximum noise fraction (MNF) method adopted as the feature extraction method. The spectral correlations of the hyperspectral image are used to divide the feature space into several SOBs, and MNF is then used to extract characteristic features from each SOB. The extracted features are combined into the feature vector for classification. In this way, strong band correlations are avoided and spectral redundancy is reduced. The LS-SVM classifier, which replaces the inequality constraints in the SVM by equality constraints, is adopted, reducing the computational cost and improving the learning performance. The proposed method optimizes spectral information by feature extraction and reduces spectral noise, improving classifier performance. Experimental results show the superiority of the proposed algorithm.
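
    A loose sketch of the band-grouping idea under stated substitutions: contiguous band subspaces are cut where neighboring-band correlation drops, PCA stands in for the MNF transform, and scikit-learn's SVC stands in for the LS-SVM; all data are synthetic.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 60))            # 300 pixels x 60 synthetic bands
    y = rng.integers(0, 3, size=300)          # three stand-in classes

    # Cut contiguous band subspaces where neighboring-band correlation drops.
    corr = np.array([np.corrcoef(X[:, i], X[:, i + 1])[0, 1] for i in range(59)])
    cuts = [0] + [i + 1 for i in np.flatnonzero(corr < 0.1)] + [60]

    features = np.hstack([PCA(n_components=min(3, hi - lo)).fit_transform(X[:, lo:hi])
                          for lo, hi in zip(cuts[:-1], cuts[1:])])
    print(cross_val_score(SVC(), features, y, cv=5).mean())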

  15. Automatic adaptive parameterization in local phase feature-based bone segmentation in ultrasound.

    PubMed

    Hacihaliloglu, Ilker; Abugharbieh, Rafeef; Hodgson, Antony J; Rohling, Robert N

    2011-10-01

    Intensity-invariant local phase features based on Log-Gabor filters have recently been shown to produce highly accurate localizations of bone surfaces from three-dimensional (3-D) ultrasound. A key challenge, however, remains in the proper selection of filter parameters, whose values have so far been chosen empirically and kept fixed for a given image. Since Log-Gabor filter responses change widely when the filter parameters are varied, the actual parameter selection can significantly affect the quality of the extracted features. This article presents a novel method for contextual parameter selection that autonomously adapts to image content. Our technique automatically selects the scale, bandwidth and orientation parameters of the Log-Gabor filters to optimize local phase symmetry. The proposed approach incorporates principal curvature computed from the Hessian matrix and directional filter banks in a phase scale-space framework. Evaluations performed on carefully designed in vitro experiments demonstrate a 35% improvement in the accuracy of bone surface localization compared with empirically set parameterization results. Results from a pilot in vivo study on human subjects, scanned in the operating room, show similar improvements.
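
    For concreteness, the following sketch constructs a 2-D Log-Gabor filter in the frequency domain; the wavelength (scale), sigma_f (bandwidth) and theta0 (orientation) arguments are exactly the parameters that the adaptive method above selects from image content.

    import numpy as np

    def log_gabor_2d(shape, wavelength, sigma_f=0.55, theta0=0.0, sigma_theta=0.4):
        rows, cols = shape
        fy = np.fft.fftfreq(rows)[:, None]
        fx = np.fft.fftfreq(cols)[None, :]
        radius = np.hypot(fx, fy)
        radius[0, 0] = 1.0                    # avoid log(0) at DC
        f0 = 1.0 / wavelength
        radial = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_f) ** 2))
        radial[0, 0] = 0.0                    # zero DC response
        dtheta = np.arctan2(fy, fx) - theta0
        dtheta = np.arctan2(np.sin(dtheta), np.cos(dtheta))   # wrap to [-pi, pi]
        return radial * np.exp(-dtheta ** 2 / (2 * sigma_theta ** 2))

    G = log_gabor_2d((256, 256), wavelength=16)
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))
    response = np.fft.ifft2(np.fft.fft2(img) * G)   # complex even/odd filter pair
    print(np.abs(response).mean().round(4))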

  16. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using an optimized Morlet wavelet transform, a kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computational complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed from the wavelet coefficients at several characteristic scales, rather than at just one characteristic scale, so as to improve the accuracy of transient detection. Because of the noise influence on the characteristic wavelet coefficients, an adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals and to the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is well suited to extracting periodic impulsive features from strong background noise.
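
    A minimal sketch of the pipeline using PyWavelets and SciPy: Morlet CWT, kurtosis-based selection of several characteristic scales, and soft-thresholding of the selected coefficients; the Shannon-entropy optimization of the wavelet shape parameter is omitted, and a fixed "morl" wavelet and an ad hoc threshold are used instead.

    import numpy as np
    import pywt
    from scipy.stats import kurtosis

    rng = np.random.default_rng(0)
    fs, n = 12000, 4096
    t = np.arange(n) / fs
    impacts = np.sin(2 * np.pi * 3000 * t) * (np.mod(t, 0.01) < 0.001)
    signal = impacts + 0.5 * rng.normal(size=n)      # transient train in noise

    scales = np.arange(2, 64)
    coeffs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

    k = kurtosis(coeffs, axis=1)                     # impulsiveness per scale
    top = np.argsort(-k)[:5]                         # several characteristic scales
    den = [pywt.threshold(coeffs[i], 0.3 * np.abs(coeffs[i]).max(), "soft")
           for i in top]
    transient = np.sum(den, axis=0)                  # reconstructed transient part
    print("selected scales:", scales[top])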

  17. PyEEG: an open source Python module for EEG/MEG feature extraction.

    PubMed

    Bao, Forrest Sheng; Liu, Xin; Zhang, Christina

    2011-01-01

    Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in recent years. Extracting features is a key component in the analysis of EEG signals. In our previous work, we implemented many EEG feature extraction functions in the Python programming language. As Python gains more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.
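
    One representative EEG feature of the kind PyEEG exposes as a ready-made function is the relative spectral band power; a self-contained NumPy sketch (not PyEEG's own API):

    import numpy as np

    def band_power_ratio(x, fs, bands=((0.5, 4), (4, 8), (8, 13), (13, 30))):
        freqs = np.fft.rfftfreq(len(x), d=1 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2
        power = np.array([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
        return power / power.sum()        # delta/theta/alpha/beta ratios

    rng = np.random.default_rng(0)
    fs = 256
    t = np.arange(10 * fs) / fs
    eeg = rng.normal(size=t.size) + np.sin(2 * np.pi * 10 * t)   # strong alpha
    print(band_power_ratio(eeg, fs).round(3))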

  18. Autistic features, personality, and adaptive behavior in males with the fragile X syndrome and no autism.

    PubMed

    Kerby, D S; Dawson, B L

    1994-01-01

    Nine males with mental retardation due to fragile X syndrome were compared to 9 males with mental retardation of other etiology. Subjects were compared on measures of personality, autistic features, and adaptive behavior. Results suggested that males with fragile X syndrome have a distinct psychological profile. In terms of DSM-III-R definitions, they had more autistic features, more schizoid features, and more schizotypal features. On measures of temperament, these males were more shy, more socially withdrawn, less energetic, and more emotional. The two groups did not differ with respect to adaptive behavior skills.

  19. Estimation and Extraction of Radar Signal Features Using Modified B Distribution and Particle Filters

    NASA Astrophysics Data System (ADS)

    Mikluc, Davorin; Bujaković, Dimitrije; Andrić, Milenko; Simić, Slobodan

    2016-09-01

    This research analyses the application of particle filters to estimating and extracting features of the time-frequency energy distribution of radar signals. The time-frequency representation is calculated using the modified B distribution, where the estimation process model represents one time bin. An adaptive criterion for the calculation of the particle weighting coefficients is proposed, whose main parameters are the frequency integral squared error and the estimated maximum of the mean power spectral density per time bin. The analysis of the suggested estimation approach has been performed on a generated signal in the absence of any noise, and subsequently on modelled and recorded real radar signals. The advantage of the suggested method lies in solving the problem of interrupted estimates of instantaneous frequencies, which appears when these estimates are determined from the maximum of the energy distribution, as in the case of intersecting frequency components in a multicomponent signal.
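
    A hedged sketch of the idea: a bootstrap particle filter tracks the instantaneous-frequency ridge of a time-frequency representation one time bin at a time, with particle weights driven by the local spectral energy; a plain spectrogram stands in for the modified B distribution, and the random-walk process model and noise levels are ad hoc.

    import numpy as np
    from scipy.signal import spectrogram

    rng = np.random.default_rng(0)
    fs = 1000
    t = np.arange(2 * fs) / fs
    chirp = np.sin(2 * np.pi * (100 * t + 50 * t ** 2))    # 100 -> 300 Hz sweep
    x = chirp + 0.5 * rng.normal(size=t.size)

    f, tt, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)

    n_particles = 300
    particles = rng.uniform(f[0], f[-1], n_particles)      # frequency hypotheses
    estimates = []
    for col in S.T:                                        # one time bin per update
        # Weight each particle by spectral energy near its frequency hypothesis.
        idx = np.clip(np.searchsorted(f, particles), 0, len(f) - 1)
        w = col[idx] + 1e-12
        w /= w.sum()
        estimates.append(float(particles @ w))             # posterior-mean IF
        # Resample and diffuse (random-walk process model).
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        particles = np.clip(particles + rng.normal(0, 5.0, n_particles), f[0], f[-1])

    print(estimates[:5], "...", estimates[-5:])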

  20. Computerized-Adaptive and Self-Adapted Music-Listening Tests: Psychometric Features and Motivational Benefits.

    ERIC Educational Resources Information Center

    Vispoel, Walter P.; Coffman, Don D.

    1994-01-01

    Computerized-adaptive (CAT) and self-adapted (SAT) music listening tests were compared for efficiency, reliability, validity, and motivational benefits with 53 junior high school students. Results demonstrate trade-offs, with greater potential motivational benefits for SAT and greater efficiency for CAT. SAT elicited more favorable responses from…

  1. Feature edge extraction from 3D triangular meshes using a thinning algorithm

    NASA Astrophysics Data System (ADS)

    Nomura, Masaru; Hamada, Nozomu

    2001-11-01

    Highly detailed geometric models, which are represented as dense triangular meshes, are becoming popular in computer graphics. Since such 3D meshes often carry a huge amount of information, efficient methods are required to treat them in 3D mesh processing tasks such as surface simplification, subdivision surfaces, curved-surface approximation and morphing. In these applications, features of 3D meshes, such as feature vertices and feature edges, are often extracted in a preprocessing step. This study treats an automatic extraction method for feature edges. To realize the feature edge extraction method, we first introduce a concavity and convexity evaluation value. The histogram of this evaluation value is then used to separate the feature edge region. We apply a thinning algorithm of the kind used in 2D binary image processing. It is shown that the proposed method can extract appropriate feature edges from 3D meshes.

  2. Study of Facial Features Combination Using a Novel Adaptive Fuzzy Integral Fusion Model

    NASA Astrophysics Data System (ADS)

    Ardakani, M. Mahdi Ghazaei; Shokouhi, Shahriar Baradaran

    A new adaptive model based on fuzzy integrals is presented and used to combine three well-known methods, Eigenface, Fisherface and SOMface, for face classification. After training the competence estimation functions, the adaptive mechanism enables our system to filter out unsure judgments of classifiers for a specific input. Comparison with classical and non-adaptive approaches demonstrates the superiority of this model. We also examine how these features contribute to the combined result and whether together they can establish a more robust feature.

  3. Extraction of shoreline features by neural nets and image processing

    SciTech Connect

    Ryan, T.W.; Sementilli, P.J.; Yuen, P.; Hunt, B.R.

    1991-07-01

    This paper demonstrates the capability of using neural networks as a tool for delineation of shorelines. The neural nets used are multilayer perceptrons, i.e., feed-forward nets with one or more layers of nodes between the input and output nodes. The back-propagation learning algorithm is used as the adaptation rule. 24 refs.

  4. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k-nearest-neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  5. Comparison of half and full-leaf shape feature extraction for leaf classification

    NASA Astrophysics Data System (ADS)

    Sainin, Mohd Shamrie; Ahmad, Faudziah; Alfred, Rayner

    2016-08-01

    Shape is the main source of information for leaf features, and most of the current literature on leaf identification utilizes the whole leaf for feature extraction in the identification process. In this paper, a study of half-leaf feature extraction for leaf identification is carried out, and the results are compared with those obtained from full-leaf feature extraction. Identification and classification are based on shape features represented as cosine and sine angles. Six single classifiers from WEKA and seven ensemble methods are used to compare their performance accuracies on this data. The classifiers were trained using 65 leaves to classify 5 different species from a preliminary collection of Malaysian medicinal plants. The results show that half-leaf feature extraction can be used for leaf identification without decreasing the predictive accuracy.

  6. Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging

    NASA Astrophysics Data System (ADS)

    He, Wenda; Juette, Arne; Denton, Erica R. E.; Zwiggelaar, Reyer

    2015-03-01

    Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition, ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated into non-image-based risk prediction models (e.g. the Gail and the Tyrer-Cuzick models), because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique was employed to achieve mammographic segmentation, in which two improvements were made: 1) consistent segmentation by incorporating an optimal centroid initialisation step, and 2) a significantly reduced number of missegmentations through an adaptive cluster merging technique. A set of full-field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue-specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, and results indicated a 26% improvement in segmented images of good quality when compared with the standard clustering approach. This in turn can be found useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  7. Stent enhancement in digital x-ray fluoroscopy using an adaptive feature enhancement filter

    NASA Astrophysics Data System (ADS)

    Jiang, Yuhao; Zachary, Josey

    2016-03-01

    Fluoroscopic images are characterized by low contrast and high noise. Simply lowering the radiation dose would render the images unreadable. Feature enhancement filters can reduce patient dose by acquiring images at low dose settings and then digitally restoring them to their original quality. In this study, a stent contrast enhancement filter is developed to selectively improve the contrast of the stent contour without dramatically boosting the image noise, including quantum noise and clinical background noise. Gabor directional filter banks are implemented to detect the edges and orientations of the stent, with a high orientation resolution of 9°. To optimize the use of the information obtained from the Gabor filters, a computerized Monte Carlo simulation followed by an ROC study is used to find the best nonlinear operator. The next stage of the filtering process extracts symmetrical parts of the stent, using global and local symmetry measures. The information gathered from the previous two filter stages is used to generate a stent contour map. The contour map is then scaled and added back to the original image to obtain a contrast-enhanced stent image. We also apply a spatio-temporal channelized Hotelling observer model and other numerical measures to characterize the response of the filters and contour map, in order to optimize the selection of parameters for image quality. The results are compared to those filtered by a previously developed adaptive unsharp masking filter. It is shown that the stent enhancement filter can effectively improve stent detection and differentiation in interventional fluoroscopy.
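
    The orientation-selective first stage can be approximated with a Gabor filter bank sampled every 9 degrees, as sketched below with scikit-image; the filter frequency is an assumed placeholder, and the nonlinear combination and symmetry stages described above are not reproduced.

      import numpy as np
      from skimage.filters import gabor

      def gabor_edge_map(img, frequency=0.2, step_deg=9):
          # maximum Gabor magnitude over orientations sampled every step_deg degrees
          resp = []
          for theta in np.deg2rad(np.arange(0, 180, step_deg)):
              re, im = gabor(img, frequency=frequency, theta=theta)
              resp.append(np.hypot(re, im))
          resp = np.stack(resp)
          return resp.max(axis=0), resp.argmax(axis=0)  # magnitude, orientation index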

  8. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale or big data problems. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each containing a small subset of WAMI images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually on the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments on feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
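
    The split-and-distribute pattern maps naturally onto Hadoop Streaming, where any executable can act as the mapper. The sketch below is an assumed minimal mapper, not the CAWE code; extract_features is a hypothetical stub standing in for the per-image extractor.

      #!/usr/bin/env python
      # mapper.py: each input line names one image in the current split;
      # emit "path<TAB>feature_vector" records for aggregation in HDFS.
      import sys, json

      def extract_features(path):
          # hypothetical stub: replace with the real per-image extractor
          return [0.0]

      for line in sys.stdin:
          path = line.strip()
          if path:
              print(path + "\t" + json.dumps(extract_features(path)))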

  9. Comparison of Wavelet-Based and HHT-Based Feature Extraction Methods for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, X.-M.; Hsu, P.-H.

    2012-07-01

    Hyperspectral images, which contain rich and fine spectral information, can be used to identify surface objects and improve land use/cover classification accuracy. Due to the high dimensionality of hyperspectral data, traditional statistics-based classifiers cannot be directly applied to such images with limited training samples, a problem referred to as the "curse of dimensionality". The commonly used remedy is dimensionality reduction, and feature extraction is the approach most frequently used to reduce the dimensionality of hyperspectral images. There are two types of feature extraction methods: those based on the statistical properties of the data and those based on time-frequency analysis. In this study, time-frequency analysis methods are used to extract the features for hyperspectral image classification. It has been shown that wavelet-based feature extraction provides an effective tool for spectral feature extraction. On the other hand, the Hilbert-Huang transform (HHT), a relatively new time-frequency analysis tool, has been widely used in nonlinear and nonstationary data analysis. In this study, the wavelet transform and HHT are applied to the hyperspectral data for physical spectral analysis. We can thereby obtain a small number of salient features, reduce the dimensionality of hyperspectral images and preserve classification accuracy. An AVIRIS data set is used to test the performance of the proposed HHT-based feature extraction methods; the results are then compared with wavelet-based feature extraction. According to the experimental results, HHT-based feature extraction methods are effective tools, and the results are similar to those of wavelet-based feature extraction methods.
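
    For intuition, a minimal wavelet-based extractor for one pixel's spectrum might look like the following (PyWavelets); the db4 wavelet, the decomposition level and the log-energy summary are assumed choices, not the paper's exact configuration.

      import numpy as np
      import pywt

      def wavelet_features(spectrum, wavelet='db4', level=4):
          # reduce a per-pixel spectrum to one log-energy per subband
          coeffs = pywt.wavedec(spectrum, wavelet, level=level)
          return np.array([np.log1p(np.sum(c ** 2)) for c in coeffs])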

  10. PROCESSING OF SCANNED IMAGERY FOR CARTOGRAPHIC FEATURE EXTRACTION.

    USGS Publications Warehouse

    Benjamin, Susan P.; Gaydos, Leonard

    1984-01-01

    Digital cartographic data are usually captured by manually digitizing a map or an interpreted photograph or by automatically scanning a map. Both techniques first require manual photointerpretation to describe features of interest. A new approach, bypassing the laborious photointerpretation phase, is being explored using direct digital image analysis. Aerial photographs are scanned and color separated to create raster data. These are then enhanced and classified using several techniques to identify roads and buildings. Finally, the raster representation of these features is refined and vectorized. 11 refs.

  11. Extraction of terrain features from digital elevation models

    USGS Publications Warehouse

    Price, Curtis V.; Wolock, David M.; Ayers, Mark A.

    1989-01-01

    Digital elevation models (DEMs) are being used to determine variable inputs for hydrologic models in the Delaware River basin. Recently developed software for analysis of DEMs has been applied to watershed and streamline delineation. The results compare favorably with similar delineations taken from topographic maps. Additionally, output from this software has been used to extract other hydrologic information from the DEM, including flow direction, channel location, and an index describing the slope and shape of a watershed.
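
    A common primitive behind such terrain analysis is the D8 flow-direction rule: each cell drains to its steepest downslope neighbor. The sketch below is a straightforward, unoptimized NumPy version assuming unit cell spacing; it is illustrative and not the software used in the study.

      import numpy as np

      def d8_flow_direction(dem):
          # index (0..7) of the steepest downslope neighbor per interior cell;
          # -1 where no neighbor is lower (pit or flat)
          offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
          dist = np.array([np.hypot(di, dj) for di, dj in offs])
          rows, cols = dem.shape
          out = -np.ones((rows, cols), dtype=int)
          for i in range(1, rows - 1):
              for j in range(1, cols - 1):
                  drops = [(dem[i, j] - dem[i + di, j + dj]) / d
                           for (di, dj), d in zip(offs, dist)]
                  k = int(np.argmax(drops))
                  if drops[k] > 0:
                      out[i, j] = k
          return out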

  12. Forest classification using extracted PolSAR features from Compact Polarimetry data

    NASA Astrophysics Data System (ADS)

    Aghabalaei, Amir; Maghsoudi, Yasser; Ebadi, Hamid

    2016-05-01

    This study investigates the ability of Polarimetric Synthetic Aperture RADAR (PolSAR) features extracted from Compact Polarimetry (CP) data for forest classification. The CP is a new mode recently proposed for Dual Polarimetry (DP) imaging systems. It has several important advantages in comparison with the Full Polarimetry (FP) mode, such as reductions in the complexity, cost, mass, and data rate of a SAR system. Two strategies are employed for PolSAR feature extraction. In the first strategy, the features are extracted using the 2 × 2 covariance matrices of CP modes simulated from RADARSAT-2 C-band FP mode data. In the second strategy, they are extracted using the 3 × 3 covariance matrices reconstructed from the CP modes, called Pseudo Quad (PQ) modes. In each strategy, the extracted PolSAR features are combined, optimal features are selected by a Genetic Algorithm (GA), and then a Support Vector Machine (SVM) classifier is applied. Finally, the results are compared with the FP mode. Results of this study show that the PolSAR features extracted from the π/4 CP mode, as well as combining the PolSAR features extracted from CP or PQ modes, provide a better overall accuracy in forest classification.

  13. Bispectrum-based feature extraction technique for devising a practical brain-computer interface

    NASA Astrophysics Data System (ADS)

    Shahid, Shahjahan; Prasad, Girijesh

    2011-04-01

    The extraction of distinctly separable features from electroencephalogram (EEG) signals is one of the main challenges in designing a brain-computer interface (BCI). Existing feature extraction techniques for a BCI are mostly developed based on traditional signal processing techniques, assuming that the signal is Gaussian and has linear characteristics. But motor imagery (MI)-related EEG signals are highly non-Gaussian and non-stationary, and have nonlinear dynamic characteristics. This paper proposes an advanced, robust yet simple feature extraction technique for an MI-related BCI. The technique uses one of the higher order statistics methods, the bispectrum, and extracts the features of nonlinear interactions over several frequency components in MI-related EEG signals. Along with a linear discriminant analysis classifier, the proposed technique has been used to design an MI-based BCI. Three performance measures, classification accuracy, mutual information and Cohen's kappa, have been evaluated and compared with a BCI using a contemporary power spectral density-based feature extraction technique. It is observed that the proposed technique extracts nearly recording-session-independent distinct features, resulting in significantly higher and more consistent MI task detection accuracy and Cohen's kappa. It is therefore concluded that bispectrum-based feature extraction is a promising technique for detecting different brain states.
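
    The core quantity is the bispectrum B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)], which vanishes for Gaussian signals and therefore exposes nonlinear frequency interactions. A direct segment-averaged estimator is sketched below; the segmentation, window and normalization are assumed textbook choices, not the authors' exact estimator.

      import numpy as np

      def bispectrum(x, nfft=128):
          # direct (FFT-and-average) bispectrum estimate over signal segments
          segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
          B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
          win = np.hanning(nfft)
          for s in segs:
              X = np.fft.fft(s * win)
              for f1 in range(nfft // 2):
                  for f2 in range(nfft // 2):
                      B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
          return np.abs(B) / max(len(segs), 1)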

  14. Feature extraction and segmentation in medical images by statistical optimization and point operation approaches

    NASA Astrophysics Data System (ADS)

    Yang, Shuyu; King, Philip; Corona, Enrique; Wilson, Mark P.; Aydin, Kaan; Mitra, Sunanda; Soliz, Peter; Nutter, Brian S.; Kwon, Young H.

    2003-05-01

    Feature extraction is a critical preprocessing step, which influences the outcome of the entire process of developing significant metrics for medical image evaluation. The purpose of this paper is firstly to compare the effect of an optimized statistical feature extraction methodology to a well-designed combination of point operations for feature extraction at the preprocessing stage of retinal images for developing useful diagnostic metrics for retinal diseases such as glaucoma and diabetic retinopathy. Segmentation of the extracted features allows us to investigate the effect of occlusion induced by these features on generating stereo disparity mapping and 3-D visualization of the optic cup/disc. Segmentation of blood vessels in the retina also has significant application in generating precise vessel diameter metrics in vascular diseases such as hypertension and diabetic retinopathy for monitoring the progression of retinal diseases.

  15. Pattern representation in feature extraction and classifier design: matrix versus vector.

    PubMed

    Wang, Zhe; Chen, Songcan; Liu, Jun; Zhang, Daoqiang

    2008-05-01

    The matrix, as an extended pattern representation to the vector, has proven to be effective in feature extraction. However, the subsequent classifier following the matrix-pattern-oriented feature extraction is generally still based on the vector pattern representation (namely, MatFE + VecCD), where it has been demonstrated that the effectiveness in classification just attributes to the matrix representation in feature extraction. This paper looks at the possibility of applying the matrix pattern representation to both feature extraction and classifier design. To this end, we propose a so-called fully matrixized approach, i.e., the matrix-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (MatFE + MatCD). To more comprehensively validate MatFE + MatCD, we further consider all the possible combinations of feature extraction (FE) and classifier design (CD) on the basis of patterns represented by matrix and vector respectively, i.e., MatFE + MatCD, MatFE + VecCD, just the matrix-pattern-oriented classifier design (MatCD), the vector-pattern-oriented feature extraction followed by the matrix-pattern-oriented classifier design (VecFE + MatCD), the vector-pattern-oriented feature extraction followed by the vector-pattern-oriented classifier design (VecFE + VecCD) and just the vector-pattern-oriented classifier design (VecCD). The experiments on these combinations have shown the following: 1) the designed fully matrixized approach (MatFE + MatCD) has an effective and efficient performance on those patterns with prior structural knowledge, such as images; and 2) the matrix gives us an alternative feasible pattern representation in feature extraction and classifier design, and meanwhile provides a necessary validation for the "ugly duckling" and "no free lunch" theorems.

  16. Biosensor method and system based on feature vector extraction

    SciTech Connect

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  17. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  18. Semantic Control of Feature Extraction from Natural Scenes

    PubMed Central

    2014-01-01

    In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. PMID:24501376

  19. A Model for Extracting Personal Features of an Electroencephalogram and Its Evaluation Method

    NASA Astrophysics Data System (ADS)

    Ito, Shin-Ichi; Mitsukura, Yasue; Fukumi, Minoru

    This paper introduces a model for extracting features of an electroencephalogram (EEG) and a method for evaluating the model. In general, it is known that an EEG contains personal features. However, extraction of these personal features has not been reported. The analyzed frequency components of an EEG can be classified into components that contain a significant number of features and those that do not. From the viewpoint of these feature differences, we propose a model for extracting features of the EEG. The model assumes a latent structure and employs factor analysis by considering the model error as personal error. We consider the EEG feature to be the first factor loading, which is calculated by eigenvalue decomposition. Furthermore, we use a k-nearest neighbor (kNN) algorithm for evaluating the proposed model and the extracted EEG features. In general, the distance metric used is the Euclidean distance. We believe that the appropriate distance metric depends on the characteristics of the extracted EEG feature and on the subject. Therefore, depending on the subject, we use one of three distance metrics: Euclidean distance, cosine distance, and correlation coefficient. Finally, in order to show the effectiveness of the proposed model, we perform a computer simulation using real EEG data.
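
    A minimal sketch of the evaluation step, with the distance metric switched per subject as described; the majority-vote rule below is a generic kNN choice and an assumption, not the paper's exact procedure.

      import numpy as np

      def knn_predict(train_x, train_y, q, k=3, metric='euclidean'):
          if metric == 'euclidean':
              d = np.linalg.norm(train_x - q, axis=1)
          elif metric == 'cosine':
              d = 1 - (train_x @ q) / (np.linalg.norm(train_x, axis=1)
                                       * np.linalg.norm(q) + 1e-12)
          else:   # 'correlation': one minus Pearson r against the query
              d = np.array([1 - np.corrcoef(row, q)[0, 1] for row in train_x])
          nearest = np.argsort(d)[:k]
          labels, counts = np.unique(train_y[nearest], return_counts=True)
          return labels[np.argmax(counts)]   # majority vote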

  20. Feature Extraction for Mental Fatigue and Relaxation States Based on Systematic Evaluation Considering Individual Difference

    NASA Astrophysics Data System (ADS)

    Chen, Lanlan; Sugi, Takenao; Shirakawa, Shuichiro; Zou, Junzhong; Nakamura, Masatoshi

    Feature extraction for mental fatigue and relaxation states is helpful for understanding the mechanisms of mental fatigue and for finding effective relaxation techniques in sustained work environments. Experimental data on human states are often affected by external and internal factors, which increase the difficulty of extracting common features. The aim of this study is to explore appropriate methods to eliminate individual differences and enhance common features. Mental fatigue and relaxation experiments were executed on 12 subjects. An integrated evaluation system is proposed, which consists of subjective evaluation (visual analogue scale), calculation performance and neurophysiological signals, especially EEG signals. With consideration of individual differences, the common features of multiple estimators confirm the effectiveness of relaxation in sustained mental work. The relaxation technique can be practically applied to prevent the accumulation of mental fatigue and to maintain mental health. The proposed feature extraction methods are widely applicable for obtaining common features and relax the restrictions on subject selection and experiment design.

  1. The influence of negative stimulus features on conflict adaptation: evidence from fluency of processing

    PubMed Central

    Fritz, Julia; Fischer, Rico; Dreisbach, Gesine

    2015-01-01

    Cognitive control enables adaptive behavior in a dynamically changing environment. In this context, one prominent adaptation effect is the sequential conflict adjustment, i.e., the observation of reduced response interference on trials following conflict trials. Increasing evidence suggests that such response conflicts are registered as aversive signals. So far, however, the functional role of this aversive signal for conflict adaptation to occur has not been put to test directly. In two experiments, the affective valence of conflict stimuli was manipulated by fluency of processing (stimulus contrast). Experiment 1 used a flanker interference task, Experiment 2 a color-word Stroop task. In both experiments, conflict adaptation effects were only present in fluent, but absent in disfluent trials. Results thus speak against the simple idea that any aversive stimulus feature is suited to promote specific conflict adjustments. Two alternative but not mutually exclusive accounts, namely resource competition and adaptation-by-motivation, will be discussed. PMID:25767453

  2. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    NASA Astrophysics Data System (ADS)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors. It is of great value to detect the resulting fault feature automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is put forward based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform such that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced as it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by the maximization of the fault feature ratio, which is a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without parameters being artificially specified. The proposed method is applied to two engineering cases, signals of which were collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings compared with the Fourier transform, direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.
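
    The Hilbert demodulation step can be sketched as below; the fault_feature_ratio shown is only an assumed form, energy near the expected fault-frequency harmonics over total envelope-spectrum energy, and is not the paper's exact definition.

      import numpy as np
      from scipy.signal import hilbert

      def envelope_spectrum(x, fs):
          # amplitude envelope and its spectrum: periodic impacts show up as
          # lines at the fault frequency and its harmonics
          env = np.abs(hilbert(x))
          env = env - env.mean()
          spec = np.abs(np.fft.rfft(env))
          freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
          return freqs, spec

      def fault_feature_ratio(freqs, spec, f_fault, n_harm=3, tol=1.0):
          # assumed form: harmonic-band energy over total envelope-spectrum energy
          num = sum(spec[np.abs(freqs - m * f_fault) < tol].sum()
                    for m in range(1, n_harm + 1))
          return num / (spec.sum() + 1e-12)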

  3. Texture feature extraction and analysis for polyp differentiation via computed tomography colonography

    PubMed Central

    Hu, Yifan; Song, Bowen; Han, Hao; Pickhardt, Perry J.; Zhu, Wei; Duan, Chaijie; Zhang, Hao; Barish, Matthew A.; Lascarides, Chris E.

    2016-01-01

    Image textures in computed tomography colonography (CTC) have great potential for differentiating non-neoplastic from neoplastic polyps and thus can advance the current CTC detection-only paradigm to a new level toward optimal polyp management to prevent deadly colorectal cancer. However, image textures are frequently compromised by noise smoothing and other error-correction operations in most CT image reconstructions. Furthermore, because of polyp orientation variation in patient space, texture features extracted in that space can vary accordingly, resulting in variable results. To address these issues, this study proposes an adaptive approach to extract and analyze texture features for polyp differentiation. Firstly, derivative operations are performed on the CT intensity image to amplify the textures, e.g. in the 1st-order derivative (gradient) and 2nd-order derivative (curvature) images, with adequate noise control. Then the Haralick co-occurrence matrix (CM) is used to calculate texture measures along each of the 13 directions (defined by the 1st- and 2nd-order image voxel neighbors) through the polyp volume in the intensity, gradient and curvature images. Instead of taking the mean and range of each CM measure over the 13 directions as the so-called Haralick texture features, the Karhunen-Loeve transform is performed to map the 13 directions into an orthogonal coordinate system, where all the CM measures are projected onto the new coordinates so that the resulting texture features are less dependent on the polyp spatial orientation variation. While the ideas for amplifying textures and stabilizing spatial variation are simple, their impacts are significant for the task of differentiating non-neoplastic from neoplastic polyps, as demonstrated by experiments using 384 polyp datasets, of which 52 are non-neoplastic polyps and the rest are neoplastic polyps. By the merit of area under the curve of receiver operating characteristic, the innovative ideas

  4. Hierarchical image feature extraction by an irregular pyramid of polygonal partitions

    SciTech Connect

    Skurikhin, Alexei N

    2008-01-01

    We present an algorithmic framework for hierarchical image segmentation and feature extraction. We build a successive fine-to-coarse hierarchy of irregular polygonal partitions of the original image. This multiscale hierarchy forms the basis for object-oriented image analysis. The framework incorporates the Gestalt principles of visual perception, such as proximity and closure, and exploits spectral and textural similarities of polygonal partitions, iteratively grouping them until dissimilarity criteria are exceeded. Seed polygons are built upon a triangular mesh composed of irregularly sized triangles, whose spatial arrangement is adapted to the image content. This is achieved by building the triangular mesh on top of detected spectral discontinuities (such as edges), which form a network of constraints for the Delaunay triangulation. The image is then represented as a spatial network in the form of a graph with vertices corresponding to the polygonal partitions and edges reflecting their relations. The iterative agglomeration of partitions into object-oriented segments is formulated as Minimum Spanning Tree (MST) construction. An important characteristic of the approach is that the agglomeration of polygonal partitions is constrained by the detected edges; thus the shapes of agglomerated partitions are more likely to correspond to the outlines of real-world objects. The constructed partitions and their spatial relations are characterized using spectral, textural and structural features based on proximity graphs. The framework allows searching for object-oriented features of interest across multiple levels of detail of the built hierarchy and can be generalized to a multi-criteria MST to account for multiple criteria important for an application.

  5. Geometric feature extraction by a multimarked point process.

    PubMed

    Lafarge, Florent; Gimel'farb, Georgy; Descombes, Xavier

    2010-09-01

    This paper presents a new stochastic marked point process for describing images in terms of a finite library of geometric objects. Image analysis based on conventional marked point processes has already produced convincing results, but at the expense of parameter tuning, computing time, and model specificity. Our more general multimarked point process has a simpler parametric setting, yields notably shorter computing times, and can be applied to a variety of applications. Both linear and areal primitives extracted from a library of geometric objects are matched to a given image using a probabilistic Gibbs model, and a Jump-Diffusion process is performed to search for the optimal object configuration. Experiments with remotely sensed images and natural textures show that the proposed approach has good potential. We conclude with a discussion about the insertion of more complex object interactions in the model by studying the compromise between model complexity and efficiency.

  6. Comparison study of feature extraction methods in structural damage pattern recognition

    NASA Astrophysics Data System (ADS)

    Liu, Wenjia; Chen, Bo; Swartz, R. Andrew

    2011-04-01

    This paper compares the performance of various feature extraction methods applied to structural sensor measurements acquired in situ from a decommissioned bridge under realistic damage scenarios. Three feature extraction methods are applied to the sensor data to generate feature vectors for normal and damaged structure data patterns. The investigated feature extraction methods include both time-domain and frequency-domain methods. The evaluation of the feature extraction methods is performed by examining distance values among different patterns, distance values among feature vectors within the same pattern, and pattern recognition success rate. The test data used in the comparison study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case data sets, including undamaged cases and pier settlement cases (of different depths), are used to test the separation of feature vectors among different patterns, and the pattern recognition success rate for different feature extraction methods is reported.

  7. A Review of Feature Extraction Software for Microarray Gene Expression Data

    PubMed Central

    Tan, Ching Siang; Ting, Wai Soon; Mohamad, Mohd Saberi; Chan, Weng Howe; Deris, Safaai; Ali Shah, Zuraini

    2014-01-01

    When gene expression data are too large to be processed, they are transformed into a reduced representation set of genes. Transforming large-scale gene expression data into a set of genes is called feature extraction. If the genes extracted are carefully chosen, this gene set can extract the relevant information from the large-scale gene expression data, allowing further analysis by using this reduced representation instead of the full size data. In this paper, we review numerous software applications that can be used for feature extraction. The software reviewed is mainly for Principal Component Analysis (PCA), Independent Component Analysis (ICA), Partial Least Squares (PLS), and Local Linear Embedding (LLE). A summary and sources of the software are provided in the last section for each feature extraction method. PMID:25250315

  8. Sparse representation of transients in wavelet basis and its application in gearbox fault feature extraction

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Cai, Gaigai; Zhu, Z. K.; Shen, Changqing; Huang, Weiguo; Shang, Li

    2015-05-01

    Vibration signals from a defective gearbox are often associated with important measurement information useful for gearbox fault diagnosis. The extraction of transient features from the vibration signals has always been a key issue for detecting the localized fault. In this paper, a new transient feature extraction technique is proposed for gearbox fault diagnosis based on sparse representation in wavelet basis. With the proposed method, both the impulse time and the period of transients can be effectively identified, and thus the transient features can be extracted. The effectiveness of the proposed method is verified by the simulated signals as well as the practical gearbox vibration signals. Comparison study shows that the proposed method outperforms empirical mode decomposition (EMD) in transient feature extraction.

  9. Weak Fault Feature Extraction of Rolling Bearings Based on an Improved Kurtogram

    PubMed Central

    Chen, Xianglong; Feng, Fuzhou; Zhang, Bingzhi

    2016-01-01

    Kurtograms have been verified to be an efficient tool in bearing fault detection and diagnosis because of their superiority in extracting transient features. However, the short-time Fourier Transform is insufficient in time-frequency analysis and kurtosis is deficient in detecting cyclic transients. Those factors weaken the performance of the original kurtogram in extracting weak fault features. Correlated Kurtosis (CK) is then designed, as a more effective solution, in detecting cyclic transients. Redundant Second Generation Wavelet Packet Transform (RSGWPT) is deemed to be effective in capturing more detailed local time-frequency description of the signal, and restricting the frequency aliasing components of the analysis results. The authors in this manuscript, combining the CK with the RSGWPT, propose an improved kurtogram to extract weak fault features from bearing vibration signals. The analysis of simulation signals and real application cases demonstrate that the proposed method is relatively more accurate and effective in extracting weak fault features. PMID:27649171
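
    For reference, Correlated Kurtosis with period T (in samples) follows the standard deconvolution-literature definition CK_M(T) = sum_n (prod_{m=0..M} y[n - mT])^2 / (sum_n y[n]^2)^(M+1); the sketch below implements that definition, which may differ in detail from the paper's variant.

      import numpy as np

      def correlated_kurtosis(y, T, M=1):
          # emphasizes impulses that repeat with period T samples
          y = np.asarray(y, dtype=float)
          prod = y.copy()
          for m in range(1, M + 1):
              shifted = np.zeros_like(y)
              shifted[m * T:] = y[:-m * T]
              prod = prod * shifted
          return np.sum(prod ** 2) / (np.sum(y ** 2) ** (M + 1))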

  10. 3D Feature Point Extraction from LiDAR Data Using a Neural Network

    NASA Astrophysics Data System (ADS)

    Feng, Y.; Schlichting, A.; Brenner, C.

    2016-06-01

    Accurate positioning of vehicles plays an important role in autonomous driving. In our previous research on landmark-based positioning, poles were extracted both from reference data and online sensor data, which were then matched to improve the positioning accuracy of the vehicles. However, there are environments which contain only a limited number of poles. 3D feature points are one of the proper alternatives to be used as landmarks. They can be assumed to be present in the environment, independent of certain object classes. To match the LiDAR data online to another LiDAR-derived reference dataset, the extraction of 3D feature points is an essential step. In this paper, we address the problem of 3D feature point extraction from LiDAR datasets. Instead of hand-crafting a 3D feature point extractor, we propose to train it using a neural network. In this approach, a set of candidates for the 3D feature points is first detected by the Shi-Tomasi corner detector on the range images of the LiDAR point cloud. Using a back-propagation algorithm for the training, the artificial neural network is capable of predicting feature points from these corner candidates. The training considers not only the shape of each corner candidate on the 2D range images, but also 3D features such as the curvature value and the z component of the surface normal, which are calculated directly from the LiDAR point cloud. Subsequently, the extracted feature points on the 2D range images are retrieved in the 3D scene. The 3D feature points extracted by this approach are generally distinctive in 3D space. Our test shows that the proposed method is capable of providing a sufficient number of repeatable 3D feature points for the matching task. The feature points extracted by this approach have great potential to be used as landmarks for better localization of vehicles.
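
    The corner-candidate stage corresponds to OpenCV's Shi-Tomasi detector; the sketch below is an assumed minimal version (the file name, detector parameters and 8-bit scaling are placeholders), producing the candidates a trained network would then accept or reject.

      import cv2

      # range image of the LiDAR point cloud, scaled to 8-bit grayscale
      range_img = cv2.imread('range_image.png', cv2.IMREAD_GRAYSCALE)
      corners = cv2.goodFeaturesToTrack(range_img, maxCorners=500,
                                        qualityLevel=0.01, minDistance=5)
      # (u, v) pixel candidates to be filtered by the trained neural network
      candidates = corners.reshape(-1, 2) if corners is not None else []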

  11. Mean-shift tracking algorithm based on adaptive fusion of multi-feature

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Xiao, Yanghui; Wang, Ende; Feng, Junhui

    2015-10-01

    The classic mean-shift tracking algorithm has achieved success in the field of computer vision because of its speed and efficiency. However, the classic mean-shift tracking algorithm can fail in complicated conditions, for example when parts of the target are occluded, when there is little color difference between the target and the background, or under sudden changes of illumination. In order to solve these problems, an improved algorithm is proposed based on the mean-shift tracking algorithm and adaptive fusion of features. Color, edges and corners of the target are used to describe the target in the feature space, and a method for measuring the discriminability of the various features is presented to make feature selection adaptive. The improved mean-shift tracking algorithm is then introduced based on the fusion of these features. Because the mean-shift tracking algorithm with a single color feature is vulnerable to sudden changes of illumination, we eliminate such effects by fusing an affine illumination model with the color feature space, which ensures the correctness and stability of target tracking in that condition. Using a group of videos to test the proposed algorithm, the results show that the tracking correctness and stability of this algorithm are better than those of the mean-shift tracking algorithm with a single feature space. Furthermore, the proposed algorithm is more robust than the classic algorithm under occlusion, target-background similarity, or illumination change.

  12. One Feature of Adaptive Lesson Study in Thailand: Designing a Learning Unit

    ERIC Educational Resources Information Center

    Inprasitha, Maitree

    2011-01-01

    In Thailand, the Center for Research in Mathematics Education (CRME) has been implementing Japanese Lesson Study (LS) since 2002. An adaptive feature of this implementation was the incorporation of four phases of the Open Approach as a teaching approach within the three steps of the LS process. Four phases of this open approach are: 1) Posing…

  13. Biometric analysis of the palm vein distribution by means of two different techniques of feature extraction

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical means of identification. Furthermore, these patterns can be used for venipuncture in health fields to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate the skin and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our method of analysis is composed of several steps, the first of which is the enhancement of the acquired images, implemented by spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of vein patterns. The process is focused on recognizing people through images of their palm-dorsal vein distributions obtained in the near infrared. This work compares two different techniques of feature extraction: moments and veincode. The classification task is achieved using Artificial Neural Networks. Two databases are used to analyze the performance of the algorithms. The first database is owned by the Hong Kong Polytechnic University and the second one is our own database.

  14. A Spatial Division Clustering Method and Low Dimensional Feature Extraction Technique Based Indoor Positioning System

    PubMed Central

    Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao

    2014-01-01

    Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem aspect. PMID:24451470
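
    A hedged sketch of the fine-positioning idea, KPCA projection followed by weighted nearest-neighbor regression in the reduced space; the placeholder fingerprints, kernel parameters and inverse-distance weighting are assumptions, not the paper's tuned system.

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rss = np.random.rand(200, 30)      # placeholder fingerprints (points x APs)
      coords = np.random.rand(200, 2)    # known positions of the reference points
      kpca = KernelPCA(n_components=5, kernel='rbf', gamma=0.1).fit(rss)
      db = kpca.transform(rss)

      def locate(sample, k=3):
          # weighted k-nearest-neighbor position estimate in KPCA space
          q = kpca.transform(sample.reshape(1, -1))
          d = np.linalg.norm(db - q, axis=1)
          idx = np.argsort(d)[:k]
          w = 1.0 / (d[idx] + 1e-9)
          return (coords[idx] * w[:, None]).sum(axis=0) / w.sum()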

  15. A comparison of different feature extraction methods for diagnosis of valvular heart diseases using PCG signals.

    PubMed

    Rouhani, M; Abdoli, R

    2012-01-01

    This article presents a novel method for diagnosis of valvular heart disease (VHD) based on phonocardiography (PCG) signals. Application of pattern classification and feature selection and reduction methods in analysing normal and pathological heart sounds was investigated. After signal preprocessing using independent component analysis (ICA), 32 features are extracted. These include carefully selected linear and nonlinear time-domain, wavelet and entropy features. By examining different feature selection and feature reduction methods such as principal component analysis (PCA), genetic algorithms (GA), genetic programming (GP) and generalized discriminant analysis (GDA), the four most informative features are extracted. Furthermore, support vector machine (SVM) and neural network classifiers are compared for diagnosis of pathological heart sounds. Three valvular heart diseases are considered: aortic stenosis (AS), mitral stenosis (MS) and mitral regurgitation (MR). An overall accuracy of 99.47% was achieved by the proposed algorithm.

  16. Automatic facial expression recognition based on features extracted from tracking of facial landmarks

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Lee, Joonwhoan

    2014-01-01

    In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking are performed using an elastic bunch graph matching algorithm. The facial expression recognition is performed based on features extracted from the tracking of not only individual landmarks, but also pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that our proposed set of features produces better results, because it utilizes time-varying graph information as well as the motion of individual facial landmarks.

  17. Biometric person authentication method using features extracted from pen holding style

    NASA Astrophysics Data System (ADS)

    Hashimoto, Yuuki; Muramatsu, Daigo; Ogata, Hiroyuki

    2010-04-01

    The manner of holding a pen is distinctive among people. Therefore, pen holding style is useful for person authentication. In this paper, we propose a biometric person authentication method using features extracted from images of pen holding style. Images of the pen holding style are captured by a camera, and several features are extracted from the captured images. These features are compared with a reference dataset to calculate dissimilarity scores, and these scores are combined for verification using a three-layer perceptron. Preliminary experiments were performed by using a private database. The proposed system yielded an equal error rate (EER) of 2.6%.

  18. Invariant feature extraction for color image mosaic by graphics card processing

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Chen, Lin; Li, Deren

    2009-10-01

    Image mosaicking can be widely used in remote measurement, battlefield reconnaissance and panoramic image demonstration. In this project, we find a general method for video (or image sequence) mosaicking using techniques such as invariant feature extraction, GPU processing, multi-color feature selection, and the RANSAC algorithm for homography matching. In order to match the image sequence automatically without the influence of rotation, scale and contrast transforms, local invariant feature descriptors are extracted by the graphics card unit. The GPU mosaic algorithm performs very well compared to the slow CPU version of the mosaic program, at little time cost.

  1. A Neuro-Fuzzy System for Extracting Environment Features Based on Ultrasonic Sensors

    PubMed Central

    Marichal, Graciliano Nicolás; Hernández, Angela; Acosta, Leopoldo; González, Evelio José

    2009-01-01

    In this paper, a method to extract features of the environment based on ultrasonic sensors is presented. A 3D model of a set of sonar systems and a workplace has been developed. The target of this approach is to extract features of the environment in a short time while the vehicle is moving. In particular, the approach shown in this paper is focused on determining walls and corners, which are very common environment features. In order to prove the viability of the devised approach, a 3D simulated environment has been built. A Neuro-Fuzzy strategy has been used in order to extract environment features from this simulated model. Several trials have been carried out, obtaining satisfactory results in this context. After that, some experimental tests have been conducted using a real vehicle with a set of sonar systems. The obtained results reveal the satisfactory generalization properties of the approach in this case. PMID:22303160

  2. [Identification of special quality eggs with NIR spectroscopy technology based on symbol entropy feature extraction method].

    PubMed

    Zhao, Yong; Hong, Wen-Xue

    2011-11-01

    Fast, nondestructive, and accurate identification of special-quality eggs is an urgent problem. This paper proposes a new feature extraction method based on symbolic entropy for identifying near-infrared spectra of special-quality eggs. The authors selected normal eggs, free-range eggs, selenium-enriched eggs, and zinc-enriched eggs as research objects and measured their near-infrared diffuse reflectance spectra in the range of 12 000-4 000 cm(-1). Raw spectra were symbolized with a symbolic aggregate approximation algorithm, and symbolic entropy was extracted as the feature vector. An error-correcting output codes multiclass support vector machine classifier was designed to identify the spectra. The symbolic entropy feature is robust to parameter changes, and the highest recognition rate reaches 100%. The results show that near-infrared identification of special-quality eggs is feasible and that symbolic entropy can serve as a new feature extraction method for near-infrared spectra.
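
    As a rough sketch of the idea, assuming a SAX-style symbolization (the paper's exact aggregation parameters are not given here), the snippet below discretizes a spectrum and returns the Shannon entropy of its symbol distribution:

      import numpy as np

      def symbolic_entropy(spectrum, word_len=64, n_symbols=8):
          # PAA step: average the spectrum over word_len contiguous segments
          segments = np.array_split(np.asarray(spectrum, dtype=float), word_len)
          paa = np.array([seg.mean() for seg in segments])
          # Z-normalize, then map segment means to n_symbols equiprobable symbols
          paa = (paa - paa.mean()) / (paa.std() + 1e-12)
          edges = np.quantile(paa, np.linspace(0.0, 1.0, n_symbols + 1)[1:-1])
          symbols = np.digitize(paa, edges)
          # Shannon entropy of the symbol histogram is the scalar feature
          _, counts = np.unique(symbols, return_counts=True)
          p = counts / counts.sum()
          return float(-np.sum(p * np.log2(p)))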

  3. Feature Extraction for BCIs Based on Electromagnetic Source Localization and Multiclass Filter Bank Common Spatial Patterns.

    PubMed

    Zaitcev, Aleksandr; Cook, Greg; Wei Liu; Paley, Martyn; Milne, Elizabeth

    2015-08-01

    Brain-Computer Interfaces (BCIs) provide means for communication and control without muscular movement and, therefore, can offer significant clinical benefits. Electrical brain activity recorded by electroencephalography (EEG) can be interpreted into software commands by various classification algorithms according to the descriptive features of the signal. In this paper we propose a novel EEG BCI feature extraction method employing EEG source reconstruction and Filter Bank Common Spatial Patterns (FBCSP) based on Joint Approximate Diagonalization (JAD). The proposed method is evaluated on a commonly used reference EEG dataset, yielding an average classification accuracy of 77.1 ± 10.1%. It is shown that FBCSP feature extraction applied to reconstructed source components outperforms conventional CSP and FBCSP feature extraction applied to signals in the sensor domain.
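
    For orientation, a minimal two-class CSP sketch is given below; it is not the paper's JAD-based multiclass FBCSP, and all shapes and names are assumptions. Spatial filters come from a generalized eigendecomposition of class covariance matrices, and the log-variance of the filtered trial is the feature:

      import numpy as np
      from scipy.linalg import eigh

      def csp_filters(trials_a, trials_b, n_pairs=3):
          # trials_*: (n_trials, n_channels, n_samples) band-pass filtered EEG epochs
          mean_cov = lambda T: np.mean([np.cov(t) for t in T], axis=0)
          Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
          # Solve Ca w = lambda (Ca + Cb) w; the extreme eigenvalues give the
          # most discriminative spatial filters for the two classes
          vals, vecs = eigh(Ca, Ca + Cb)
          pick = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
          return vecs[:, pick].T

      def log_var_features(trial, W):
          z = W @ trial                      # spatially filtered trial
          v = z.var(axis=1)
          return np.log(v / v.sum())         # classic normalized log-variance feature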

  4. Design and adaptation of a novel supercritical extraction facility for operation in a glove box for recovery of radioactive elements

    SciTech Connect

    Kumar, V. Suresh; Kumar, R.; Sivaraman, N.; Ravisankar, G.; Vasudeva Rao, P. R.

    2010-09-15

    The design and development of a novel supercritical extraction experimental facility adapted for safe operation in a glove box for the recovery of radioactive elements from waste is described. The apparatus incorporates a high pressure extraction vessel, reciprocating pumps for delivering supercritical fluid and reagent, a back pressure regulator, and a collection chamber. All these components have been specially designed for glove box adaptation and made modular to facilitate their replacement. Confinement of the radioactive materials must be ensured in a glove box to protect the operator and prevent contamination of the work area. Since this process involves handling radioactive materials under high pressure (30 MPa) and temperature (up to 333 K), the apparatus needs elaborate safety features in the design of the equipment, as well as modification of a standard glove box to accommodate the system. As a special safety feature to contain accidental leakage of carbon dioxide from the extraction vessel, a safety vessel was designed and placed inside the glove box; the extraction vessel is enclosed in this safety vessel, which also incorporates pressure sensing and control devices.

  5. Recognition of a Phase-Sensitivity OTDR Sensing System Based on Morphologic Feature Extraction

    PubMed Central

    Sun, Qian; Feng, Hao; Yan, Xueying; Zeng, Zhoumo

    2015-01-01

    This paper proposes a novel feature extraction method for intrusion event recognition within a phase-sensitive optical time-domain reflectometer (Φ-OTDR) sensing system. Feature extraction of time domain signals in these systems is time-consuming and may lead to inaccuracies due to noise disturbances. The recognition accuracy and speed of current systems cannot meet the requirements of Φ-OTDR online vibration monitoring systems. In the method proposed in this paper, the time-space domain signal is used for feature extraction instead of the time domain signal. Feature vectors are obtained from morphologic features of time-space domain signals. A scatter matrix is calculated for the feature selection. Experiments show that the feature extraction method proposed in this paper can greatly improve recognition accuracy, with a lower computation time than traditional methods: a recognition accuracy of 97.8% can be achieved with a recognition time below 1 s, making it very suitable for Φ-OTDR system online vibration monitoring. PMID:26131671

  6. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which should be preferred. To help improve the situation, we present the results of a study evaluating the efficiency of different wavelet transform feature extraction methods for brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers, Support Vector Machine (SVM), K-Nearest Neighbor, and a Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM yield the highest classification accuracy, demonstrating that wavelet transform features are informative in this application.
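
    As a sketch of what one DWT branch of such a feature pool might look like (using the PyWavelets package; the wavelet, level, and pooled statistics are assumptions):

      import numpy as np
      import pywt

      def dwt_texture_features(image, wavelet="db4", level=3):
          # 2-D discrete wavelet decomposition of a (grayscale) MRI slice
          coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
          feats = []
          for detail_level in coeffs[1:]:          # skip approximation coefficients
              for band in detail_level:            # horizontal, vertical, diagonal
                  feats += [band.mean(), band.std(),
                            np.mean(np.abs(band)), np.mean(band ** 2)]
          return np.array(feats)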

  7. Nonparametric feature extraction for classification of hyperspectral images with limited training samples

    NASA Astrophysics Data System (ADS)

    Kianisarkaleh, Azadeh; Ghassemian, Hassan

    2016-09-01

    Feature extraction plays a crucial role in improving hyperspectral image classification. Nonparametric feature extraction methods show better performance than parametric ones when the class distributions are not normal-like. Moreover, they can extract more features than parametric methods do. In this paper, a new nonparametric linear feature extraction method is introduced for classification of hyperspectral images. The proposed method has no free parameter, and its novelty can be discussed in two parts. First, neighbor samples are specified by using the Parzen window idea for determining the local mean. Second, two new weighting functions are used: samples close to class boundaries receive more weight in the between-class scatter matrix formation, and samples close to the class mean receive more weight in the within-class scatter matrix formation. The experimental results on three real hyperspectral data sets, Indian Pines, Salinas and Pavia University, demonstrate that the proposed method performs better than several other nonparametric and parametric feature extraction methods.

  8. [Quantitative analysis of thiram by surface-enhanced Raman spectroscopy combined with feature extraction algorithms].

    PubMed

    Zhang, Bao-hua; Jiang, Yong-cheng; Sha, Wen; Zhang, Xian-yi; Cui, Zhi-feng

    2015-02-01

    Three feature extraction algorithms, principal component analysis (PCA), the discrete cosine transform (DCT), and non-negative matrix factorization (NMF), were used to extract the main information of the spectral data in order to weaken the influence of spectral fluctuations on subsequent quantitative analysis, based on SERS spectra of the pesticide thiram. The extracted components were then combined with a linear regression algorithm, partial least squares regression (PLSR), and a non-linear regression algorithm, support vector machine regression (SVR), to develop quantitative analysis models. Finally, the effect of the different feature extraction algorithms on each kind of regression algorithm was evaluated using 5-fold cross-validation. The experiments demonstrate that the results of SVR are better than those of PLSR, owing to the non-linear relationship between SERS intensity and analyte concentration. Further, the feature extraction algorithms significantly improve the results regardless of the regression algorithm, mainly because they extract the main information of the source spectra and eliminate fluctuations. Additionally, PCA performs best with the linear regression model and NMF is best with the non-linear model, and the predictive error can be reduced nearly three-fold in the best case. The root mean square error of cross-validation of the best regression model (NMF+SVR) is 0.0455 μmol·L(-1) (10(-6) mol·L(-1)), which attains the national detection limit for thiram, so this study provides a novel method for fast detection of thiram. In conclusion, the study provides experimental guidance for selecting feature extraction algorithms in SERS analysis, and some of the findings may also help in processing other kinds of spectra.
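
    The pipeline can be pictured with a short scikit-learn sketch, assuming hypothetical arrays X (spectra) and y (concentrations); the component counts and SVR settings are placeholders, not the paper's values:

      from sklearn.decomposition import PCA, NMF
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVR

      # X: (n_samples, n_wavenumbers) SERS spectra; y: thiram concentrations
      def cv_rmse(X, y, extractor):
          model = make_pipeline(extractor, SVR(kernel="rbf", C=10.0))
          scores = cross_val_score(model, X, y, cv=5,
                                   scoring="neg_root_mean_squared_error")
          return -scores.mean()

      # e.g. cv_rmse(X, y, PCA(n_components=10))
      #      cv_rmse(X, y, NMF(n_components=10, max_iter=1000))  # requires X >= 0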

  9. Real-Time Tracking Framework with Adaptive Features and Constrained Labels

    PubMed Central

    Li, Daqun; Xu, Tingfa; Chen, Shuoyang; Zhang, Jizhou; Jiang, Shenwang

    2016-01-01

    This paper proposes a novel tracking framework with adaptive features and constrained labels (AFCL) to handle illumination variation, occlusion and appearance changes caused by the variation of positions. The novel ensemble classifier, including the Forward–Backward error and the location constraint is applied, to get the precise coordinates of the promising bounding boxes. The Forward–Backward error can enhance the adaptation and accuracy of the binary features, whereas the location constraint can overcome the label noise to a certain degree. We use the combiner which can evaluate the online templates and the outputs of the classifier to accommodate the complex situation. Evaluation of the widely used tracking benchmark shows that the proposed framework can significantly improve the tracking accuracy, and thus reduce the processing time. The proposed framework has been tested and implemented on the embedded system using TMS320C6416 and Cyclone Ⅲ kernel processors. The outputs show that achievable and satisfying results can be obtained. PMID:27618052

  10. Airborne LIDAR and high resolution satellite data for rapid 3D feature extraction

    NASA Astrophysics Data System (ADS)

    Jawak, S. D.; Panditrao, S. N.; Luis, A. J.

    2014-11-01

    This work uses the canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file, and ortho-rectification was carried out in ERDAS LPS by incorporating well-distributed GCPs. The root mean square error (RMSE) for the WV-2 imagery was estimated to be 0.25 m using more than 10 well-distributed GCPs. In the next stage, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases a bare-earth DEM does not represent true ground elevation, so the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud was normalized against the DTM to reduce the effect of undulating terrain: the ground points (DEM) were subtracted from the vegetation point cloud values. A normalized digital surface model (nDSM), or CHM, was calculated from the LiDAR data by subtracting the DEM from the DSM.
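
    The central CHM step reduces to a per-cell raster difference. A minimal sketch, assuming co-registered DSM and DEM grids and an illustrative 2 m canopy threshold:

      import numpy as np

      def canopy_height_model(dsm, dem, min_height=2.0):
          # Per-cell difference between the first-return surface and bare earth
          chm = np.asarray(dsm, dtype=float) - np.asarray(dem, dtype=float)
          chm = np.clip(chm, 0.0, None)      # clamp small negative artifacts
          canopy_mask = chm >= min_height    # suppress low clutter before crown delineation
          return chm, canopy_mask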

  11. A Relation Extraction Framework for Biomedical Text Using Hybrid Feature Set

    PubMed Central

    Muzaffar, Abdul Wahab; Azam, Farooque; Qamar, Usman

    2015-01-01

    The information extraction from unstructured text segments is a complex task. Although manual information extraction often produces the best results, it is harder to manage biomedical data extraction manually because of the exponential increase in data size. Thus, there is a need for automatic tools and techniques for information extraction in biomedical text mining. Relation extraction is a significant area under biomedical information extraction that has gained much importance in the last two decades. A lot of work has been done on biomedical relation extraction focusing on rule-based and machine learning techniques. In the last decade, the focus has changed to hybrid approaches showing better results. This research presents a hybrid feature set for classification of relations between biomedical entities. The main contribution of this research is in the semantic feature set, where verb phrases are ranked using the Unified Medical Language System (UMLS) and a ranking algorithm. Support Vector Machine and Naïve Bayes, two effective machine learning techniques, are used to classify these relations. Our approach has been validated on the standard biomedical text corpus obtained from MEDLINE 2001. In conclusion, our framework outperforms all state-of-the-art approaches used for relation extraction on the same corpus. PMID:26347797

  12. Using Mobile Laser Scanning Data for Features Extraction of High Accuracy Driving Maps

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Liu, Yuan; Liang, Fuxun; Dong, Zhen

    2016-06-01

    High Accuracy Driving Maps (HADMs) are the core component of Intelligent Drive Assistant Systems (IDAS), which can effectively reduce traffic accidents due to human error and provide more comfortable driving experiences. Vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. This paper proposes a novel method to extract road features (e.g., road surfaces, road boundaries, road markings, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and vehicles) for HADMs in highway environments. Quantitative evaluations show that the proposed algorithm attains an average precision of 90.6% and an average recall of 91.2% in extracting road features. The results demonstrate the efficiency and feasibility of the proposed method for extracting road features for HADMs.

  13. Feature extraction of rolling bearing’s early weak fault based on EEMD and tunable Q-factor wavelet transform

    NASA Astrophysics Data System (ADS)

    Wang, Hongchao; Chen, Jin; Dong, Guangming

    2014-10-01

    When an early weak fault emerges in a rolling bearing, the fault feature is too weak to extract using traditional fault diagnosis methods such as the Fast Fourier Transform (FFT) and envelope demodulation. The tunable Q-factor wavelet transform (TQWT) is an improvement on the traditional single-Q-factor wavelet transform, and it is well suited to separating the low-Q-factor transient impact component from the high-Q-factor sustained oscillation components when a fault emerges in a rolling bearing. However, it is hard to extract the bearing's early weak fault feature using the TQWT directly. Ensemble empirical mode decomposition (EEMD) is an improvement on empirical mode decomposition (EMD) that retains the self-adaptability of EMD while overcoming its mode-mixing problem. The original signal of the early weak fault is decomposed by EEMD into several intrinsic mode functions (IMFs). The IMF with the largest kurtosis is then selected and processed by the TQWT. Finally, envelope demodulation is applied to the low-Q-factor transient impact component, yielding a satisfactory extraction result.
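
    A compact sketch of the IMF-selection and demodulation steps is given below; the EEMD and TQWT stages themselves are assumed to be supplied by other code, and the function names are illustrative:

      import numpy as np
      from scipy.signal import hilbert
      from scipy.stats import kurtosis

      def most_impulsive_imf(imfs):
          # The transient impacts of a local fault inflate kurtosis
          return imfs[int(np.argmax([kurtosis(imf) for imf in imfs]))]

      def envelope_spectrum(x, fs):
          env = np.abs(hilbert(x - np.mean(x)))           # demodulated envelope
          spec = np.abs(np.fft.rfft(env - np.mean(env)))
          freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
          return freqs, spec   # peaks near the fault characteristic frequency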

  14. Visualizing and Tracking Evolving Features in 3D Unstructured and Adaptive Datasets

    SciTech Connect

    Silver, D.; Zabusky, N.

    2002-08-01

    The massive amounts of time-varying datasets being generated demand new visualization and quantification techniques. Visualization alone is not sufficient; without proper measurement information and computations, real science cannot be done. Our focus in this work was to combine visualization with quantification of the data to allow for advanced querying and searching. As part of this proposal, we have developed a feature extraction and tracking methodology that allows researchers to identify features of interest and follow their evolution over time. The implementation is distributed and operates on data in situ: where it is stored and when it was computed.

  15. Modeling resident error-making patterns in detection of mammographic masses using computer-extracted image features: preliminary experiments

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora

    2014-03-01

    Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.
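
    A minimal sketch of such a trainee model, under assumed inputs: a logistic regression maps computer-extracted case features to the probability that a given trainee misreads the case, evaluated by cross-validated AUC:

      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict

      # X: (n_cases, n_image_features) computer-extracted features
      # y: 1 where this trainee misread the case, 0 otherwise (hypothetical labels)
      def trainee_error_auc(X, y):
          model = LogisticRegression(max_iter=1000)
          p = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
          return roc_auc_score(y, p)   # > 0.5 means error is predictable beyond chance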

  16. Combining Feature Extraction Methods to Assist the Diagnosis of Alzheimer's Disease.

    PubMed

    Segovia, F; Górriz, J M; Ramírez, J; Phillips, C; For The Alzheimer's Disease Neuroimaging Initiative

    2016-01-01

    Neuroimaging data such as (18)F-FDG PET are widely used to assist the diagnosis of Alzheimer's disease (AD). Looking for regions with hypoperfusion/hypometabolism, clinicians may predict or corroborate the diagnosis of the patients. Modern computer aided diagnosis (CAD) systems based on the statistical analysis of whole neuroimages are more accurate than classical systems based on quantifying the uptake of some predefined regions of interest (ROIs). In addition, these new systems allow determining new ROIs and take advantage of the huge amount of information comprised in neuroimaging data. A major branch of modern CAD systems for AD is based on multivariate techniques, which analyse a neuroimage as a whole, considering not only the voxel intensities but also the relations among them. In order to deal with the vast dimensionality of the data, a number of feature extraction methods have been successfully applied. In this work, we propose a CAD system based on the combination of several feature extraction techniques. First, some commonly used feature extraction methods based on the analysis of variance (such as principal component analysis), on the factorization of the data (such as non-negative matrix factorization) and on classical magnitudes (such as Haralick features) were simultaneously applied to the original data. These feature sets were then combined by means of two different combination approaches: i) using a single classifier and a multiple kernel learning approach and ii) using an ensemble of classifiers and selecting the final decision by majority voting. The proposed approach was evaluated using a labelled neuroimaging database along with a cross validation scheme. In conclusion, the proposed CAD system performed better than approaches using only one feature extraction technique. We also provide a fair comparison (using the same database) of the selected feature extraction methods. PMID:26567734
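
    The second combination approach (ensemble with majority voting) can be pictured with the following sketch; the extractor/classifier pairs passed in are placeholders rather than the paper's exact configuration:

      import numpy as np

      def majority_vote(extractors, classifiers, X_tr, y_tr, X_te):
          # One (extractor, classifier) pair per feature set; labels must be
          # non-negative integers for np.bincount
          votes = []
          for ext, clf in zip(extractors, classifiers):
              F_tr = ext.fit_transform(X_tr)
              F_te = ext.transform(X_te)
              votes.append(clf.fit(F_tr, y_tr).predict(F_te))
          votes = np.asarray(votes)                    # (n_models, n_test)
          return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

      # e.g. majority_vote([PCA(n_components=20), NMF(n_components=20)],
      #                    [SVC(), SVC()], X_tr, y_tr, X_te)  -- hypothetical usage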

  1. miRNAfe: A comprehensive tool for feature extraction in microRNA prediction.

    PubMed

    Yones, Cristian A; Stegmayer, Georgina; Kamenetzky, Laura; Milone, Diego H

    2015-12-01

    miRNAfe is a comprehensive tool to extract features from RNA sequences. It is freely available as a web service, allowing a single access point to almost all state-of-the-art feature extraction methods used today in a variety of works from different authors. It has a very simple user interface, where the user only needs to load a file containing the input sequences and select the features to extract. As a result, the user obtains a text file with the extracted features, which can be used to analyze the sequences or as input to miRNA prediction software. The tool can calculate up to 80 features, many of which are multidimensional arrays. In order to simplify the web interface, the features have been divided into six pre-defined groups, each one providing information about: primary sequence, secondary structure, thermodynamic stability, statistical stability, conservation between genomes of different species and substring analysis of the sequences. Additionally, pre-trained classifiers are provided for prediction in different species. All algorithms to extract the features have been validated by comparing the results with those obtained from the original authors' software. The source code is freely available for academic use under GPL license at http://sourceforge.net/projects/sourcesinc/files/mirnafe/0.90/. A user-friendly access is provided as web interface at http://fich.unl.edu.ar/sinc/web-demo/mirnafe/. A more configurable web interface can be accessed at http://fich.unl.edu.ar/sinc/web-demo/mirnafe-full/. PMID:26499212

  2. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  3. Extraction, modelling, and use of linear features for restitution of airborne hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lee, Changno; Bethel, James S.

    This paper presents an approach for the restitution of airborne hyperspectral imagery with linear features. The approach consisted of semi-automatic line extraction and mathematical modelling of the linear features. First, the line was approximately determined manually and refined using dynamic programming. The extracted lines could then be used as control data with the ground information of the lines, or as constraints with simple assumption for the ground information of the line. The experimental results are presented numerically in tables of RMS residuals of check points as well as visually in ortho-rectified images.

  4. Self-Adaptive MOEA Feature Selection for Classification of Bankruptcy Prediction Data

    PubMed Central

    Gaspar-Cunha, A.; Recio, G.; Costa, L.; Estébanez, C.

    2014-01-01

    Bankruptcy prediction is a vast area of finance and accounting whose importance lies in its relevance for creditors and investors in evaluating the likelihood that a company will go bankrupt. As companies become complex, they develop sophisticated schemes to hide their real situation. In turn, estimating the credit risk associated with counterparties, or predicting bankruptcy, becomes harder. Evolutionary algorithms have shown to be an excellent tool to deal with complex problems in finance and economics where a large number of irrelevant features are involved. This paper provides a methodology for feature selection in the classification of bankruptcy data sets using an evolutionary multiobjective approach that simultaneously minimises the number of features and maximises the classifier quality measure (e.g., accuracy). The proposed methodology makes use of self-adaptation by applying the feature selection algorithm while simultaneously optimising the parameters of the classifier used. The methodology was applied to four different sets of data. The obtained results showed the utility of using self-adaptation of the classifier. PMID:24707201

  5. Enhancement of the Feature Extraction Capability in Global Damage Detection Using Wavelet Theory

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Ponnaluru, Gopi Krishna

    2006-01-01

    The main objective of this study is to assess the specific capabilities of the defect energy parameter technique for global damage detection developed by Saleeb and coworkers. Feature extraction is the most important capability in any damage-detection technique; features are any parameters extracted from the processed measurement data in order to enhance damage detection. The damage feature extraction capability was studied extensively by analyzing various simulation results. The practical significance in structural health monitoring is that detection of small-size defects at early stages is always desirable. The amount of change in the structure's response due to these small defects was determined to show the level of accuracy needed in the experimental methods. A fine, extensive sensor network could in principle measure all the data required for detection, but placing a large number of sensors on a structure is difficult; therefore, an investigation was conducted using measurements from a coarse sensor network. White and pink noise, which cover most of the frequency ranges typically encountered in common measuring devices (e.g., accelerometers and strain gauges), were added to the displacements to investigate the effect of noisy measurements on the detection technique. The noisy displacements and noisy damage parameter values were used to study signal feature reconstruction using wavelets. The enhancement of the feature extraction capability was successfully achieved by wavelet theory.

  6. Application of multi-scale feature extraction to surface defect classification of hot-rolled steels

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Ai, Yong-hao; Wu, Xiu-yong

    2013-01-01

    Feature extraction is essential to the classification of surface defect images. The defects of hot-rolled steels distribute in different directions. Therefore, the methods of multi-scale geometric analysis (MGA) were employed to decompose the image into several directional subbands at several scales. Then, the statistical features of each subband were calculated to produce a high-dimensional feature vector, which was reduced to a lower-dimensional vector by graph embedding algorithms. Finally, support vector machine (SVM) was used for defect classification. The multi-scale feature extraction method was implemented via curvelet transform and kernel locality preserving projections (KLPP). Experiment results show that the proposed method is effective for classifying the surface defects of hot-rolled steels and the total classification rate is up to 97.33%.

  7. An Investigation of Place and Voice Features Using fMRI-Adaptation.

    PubMed

    Lawyer, Laurel; Corina, David

    2014-01-01

    A widely accepted view of speech perception holds that in order to comprehend language, the variable acoustic signal must be parsed into a set of abstract linguistic representations. However, the neural basis of early phonological processing, including the nature of featural encoding of speech, is still poorly understood. In part, progress in this domain has been constrained by the difficulty inherent in extricating the influence of acoustic modulations from those which can be ascribed to the abstract, featural content of the stimuli. A further concern is that group averaging techniques may obscure subtle individual differences in cortical regions involved in early language processing. In this paper we present the results of an fMRI-adaptation experiment which finds evidence of areas in the superior and medial temporal lobes which respond selectively to changes in the major feature categories of voicing and place of articulation. We present both single-subject and group-averaged analyses. PMID:24187438

  8. Computer-aided diagnosis of rheumatoid arthritis with optical tomography, Part 1: feature extraction

    PubMed Central

    Jia, Jingfei; Kim, Hyun K.; Netz, Uwe J.; Blaschke, Sabine; Müller, Gerhard A.

    2013-01-01

    This is the first part of a two-part paper on the application of computer-aided diagnosis to diffuse optical tomography (DOT). An approach for extracting heuristic features from DOT images and a method for using these features to diagnose rheumatoid arthritis (RA) are presented. Feature extraction is the focus of Part 1, while the utility of five classification algorithms is evaluated in Part 2. The framework is validated on a set of 219 DOT images of proximal interphalangeal (PIP) joints. Overall, 594 features are extracted from the absorption and scattering images of each joint. Three major findings are deduced. First, DOT images of subjects with RA are statistically different (p<0.05) from images of subjects without RA for over 90% of the features investigated. Second, DOT images of subjects with RA that do not have detectable effusion, erosion, or synovitis (as determined by MRI and ultrasound) are statistically indistinguishable from DOT images of subjects with RA that do exhibit effusion, erosion, or synovitis. Thus, this subset of subjects may be diagnosed with RA from DOT images while they would go undetected by reviews of MRI or ultrasound images. Third, scattering coefficient images yield better one-dimensional classifiers. A total of three features yield a Youden index greater than 0.8. These findings suggest that DOT may be capable of distinguishing between PIP joints that are healthy and those affected by RA with or without effusion, erosion, or synovitis. PMID:23856915

  9. Consistent performance measurement of a system to detect masses in mammograms based on blind feature extraction

    PubMed Central

    2013-01-01

    Background Breast cancer continues to be a leading cause of cancer deaths among women, especially in Western countries. In the last two decades, many methods have been proposed to achieve a robust mammography-based computer aided detection (CAD) system. A CAD system should provide high performance over time and in different clinical situations; that is, the system should be adaptable to different clinical situations and should provide consistent performance. Methods We tested our system seeking a measure of the guarantee of its consistent performance. The method is based on blind feature extraction by independent component analysis (ICA) and classification by neural network (NN) or SVM classifiers. The test mammograms were from the Digital Database for Screening Mammography (DDSM). This database was constructed collaboratively by four institutions over more than 10 years. We took advantage of this to train our system using the mammograms from each institution separately, and then testing it on the remaining mammograms. For comparison, we performed another experiment in which the learning sets were formed from all available prototypes regardless of the institution in which they were generated, yielding the overall results. Results The smallest variation between the testing-set results of each experiment (training on one institution and testing on the rest) and the overall result, considering the success rate at an intermediate decision threshold, was roughly 5%, and the largest variation was roughly 17%. Considering the area under the ROC curve, the smallest variation was close to 4% and the largest about 6%. Conclusions Considering the heterogeneity in the datasets used to train and test our system in each case, we think that the variation of performance obtained when the results are

  10. Application of computer-extracted breast tissue texture features in predicting false-positive recalls from screening mammography

    NASA Astrophysics Data System (ADS)

    Ray, Shonket; Choi, Jae Y.; Keller, Brad M.; Chen, Jinbo; Conant, Emily F.; Kontos, Despina

    2014-03-01

    Mammographic texture features have been shown to have value in breast cancer risk assessment. Previous models have also been developed that use computer-extracted mammographic features of breast tissue complexity to predict the risk of false-positive (FP) recall from breast cancer screening with digital mammography. This work details a novel locally adaptive parenchymal texture analysis algorithm that identifies and extracts mammographic features of local parenchymal tissue complexity potentially relevant for false-positive biopsy prediction. This algorithm has two important aspects: (1) the adaptive nature of automatically determining an optimal number of regions of interest (ROIs) in the image and each ROI's corresponding size based on the parenchymal tissue distribution over the whole breast region and (2) characterizing both the local and global mammographic appearances of the parenchymal tissue, which could provide more discriminative information for FP biopsy risk prediction. Preliminary results show that this locally adaptive texture analysis algorithm, in conjunction with logistic regression, can predict the likelihood of false-positive biopsy with an ROC performance value of AUC=0.92 (p<0.001) with a 95% confidence interval [0.77, 0.94]. Significant texture feature predictors (p<0.05) included contrast, sum variance and difference average. Sensitivity for false-positives was 51% at the 100% cancer detection operating point. Although preliminary, clinical implications of using prediction models incorporating these texture features may include the future development of better tools and guidelines regarding personalized breast cancer screening recommendations. Further studies are warranted to prospectively validate our findings in larger screening populations and evaluate their clinical utility.

  11. Age-Related Changes of Adaptive and Neuropsychological Features in Persons with Down Syndrome

    PubMed Central

    Ghezzo, Alessandro; Salvioli, Stefano; Solimando, Maria Caterina; Palmieri, Alice; Chiostergi, Chiara; Scurti, Maria; Lomartire, Laura; Bedetti, Federica; Cocchi, Guido; Follo, Daniela; Pipitone, Emanuela; Rovatti, Paolo; Zamberletti, Jessica; Gomiero, Tiziano; Castellani, Gastone; Franceschi, Claudio

    2014-01-01

    Down Syndrome (DS) is characterised by premature aging and an accelerated decline of cognitive functions in the vast majority of cases. As the life expectancy of DS persons is rapidly increasing, this decline is becoming a dramatic health problem. The aim of this study was to thoroughly evaluate a group of 67 non-demented persons with DS of different ages (11 to 66 years), from a neuropsychological, neuropsychiatric and psychomotor point of view in order to evaluate in a cross-sectional study the age-related adaptive and neuropsychological features, and to possibly identify early signs predictive of cognitive decline. The main finding of this study is that both neuropsychological functions and adaptive skills are lower in adult DS persons over 40 years old, compared to younger ones. In particular, language and short memory skills, frontal lobe functions, visuo-spatial abilities and adaptive behaviour appear to be the more affected domains. A growing deficit in verbal comprehension, along with social isolation, loss of interest and greater fatigue in daily tasks, are the main features found in older, non demented DS persons evaluated in our study. It is proposed that these signs can be alarm bells for incipient dementia, and that neuro-cognitive rehabilitation and psycho-pharmacological interventions must start as soon as the fourth decade (or even earlier) in DS persons, i.e. at an age where interventions can have the greatest efficacy. PMID:25419980

  12. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image.

  13. Shape-based and texture-based feature extraction for classification of microcalcifications in mammograms

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Pourabdollah-Nezhad, Siamak; Rafiee Rad, Farshid

    2001-07-01

    This paper presents and compares two image processing methods for differentiating benign from malignant microcalcifications in mammograms. The gold standard method for differentiating benign from malignant microcalcifications is biopsy, which is invasive. The goal of the proposed methods is to reduce the rate of biopsies with negative results. In the first method, we extract 17 shape features from each mammogram. These features are related to the shapes of individual microcalcifications or of their clusters. In the second method, we extract 44 texture features from each mammogram using the co-occurrence method of Haralick. Next, we select the best features from each set using a genetic algorithm, to maximize the area under the ROC curve. This curve is created using a k-nearest neighbor (kNN) classifier and a malignancy criterion. Finally, we evaluate the methods by comparing the ROCs with the greatest areas obtained using each method. We applied the proposed methods, with different values of k in the kNN classifier, to 74 malignant and 29 benign microcalcification clusters. Truth for each mammogram was established based on the biopsy results. We found the greatest area under the ROC curve for each set of features used in each method. For shape features this area was 0.82 (k = 7), and for Haralick features it was 0.72 (k = 9).
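
    The texture branch can be sketched with scikit-image's co-occurrence utilities; the distances, angles, and property list below are assumptions (the paper computes 44 Haralick features):

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops

      def glcm_features(roi):
          # roi: 2-D uint8 patch around a microcalcification cluster
          glcm = graycomatrix(roi, distances=[1, 2],
                              angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                              levels=256, symmetric=True, normed=True)
          props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
          return np.hstack([graycoprops(glcm, p).ravel() for p in props])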

  14. [Research on the methods for electroencephalogram feature extraction based on blind source separation].

    PubMed

    Wang, Jiang; Zhang, Huiyuan; Wang, Lei; Xu, Guizhi

    2014-12-01

    In the present investigation, we studied four methods of blind source separation/independent component analysis (BSS/ICA): AMUSE, SOBI, JADE, and FastICA. We performed feature extraction of electroencephalogram (EEG) signals in a brain-computer interface (BCI) for classifying spontaneous mental activities, comprising four mental tasks: imagination of left hand, right hand, foot, and tongue movement. Different methods of extracting physiological components were studied and achieved good performance. Three combined methods based on SOBI and FastICA were then proposed for extracting EEG features of motor imagery. The results showed that combining SOBI and FastICA could not only reduce various artifacts and noise but also localize useful sources and improve the accuracy of the BCI. This should support further study of the physiological mechanisms of motor imagery. PMID:25868229
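
    A minimal FastICA sketch with scikit-learn is shown below (shapes and parameters are assumptions; SOBI, AMUSE, and JADE are not in scikit-learn and would come from specialized toolboxes):

      import numpy as np
      from sklearn.decomposition import FastICA

      def ica_sources(eeg, n_components=None):
          # eeg: (n_channels, n_samples); scikit-learn expects (n_samples, n_features)
          ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
          sources = ica.fit_transform(eeg.T).T    # estimated component activations
          patterns = ica.mixing_                  # one spatial pattern per component
          return sources, patterns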

  15. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.

  16. [Research on non-rigid medical image registration algorithm based on SIFT feature extraction].

    PubMed

    Wang, Anna; Lu, Dan; Wang, Zhe; Fang, Zhizhen

    2010-08-01

    For non-rigid registration of medical images, this paper presents a practical feature-point matching algorithm: an image registration algorithm based on the scale-invariant feature transform (SIFT). The algorithm exploits image features that are invariant to translation, rotation, and affine transformation in scale space to extract the image feature points. A bidirectional matching algorithm is chosen to establish the matching relations between the images, improving the accuracy of registration. On this basis, an affine transform is chosen to complete the non-rigid registration, and a normalized mutual information measure and a PSO optimization algorithm are used to optimize the registration process. The experimental results show that the method achieves better registration results than a method based on mutual information alone.
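
    A compact OpenCV sketch of the SIFT and bidirectional-matching stage is given below; cross-checked brute-force matching stands in for the paper's bidirectional matching, and the affine/RANSAC step is only one piece of the full registration pipeline:

      import cv2
      import numpy as np

      def sift_affine_register(fixed, moving):
          # fixed, moving: 8-bit grayscale images
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(fixed, None)
          k2, d2 = sift.detectAndCompute(moving, None)
          # crossCheck keeps only mutually best matches (bidirectional matching)
          matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
          src = np.float32([k2[m.trainIdx].pt for m in matches])
          dst = np.float32([k1[m.queryIdx].pt for m in matches])
          A, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)  # RANSAC rejects outliers
          h, w = fixed.shape[:2]
          return cv2.warpAffine(moving, A, (w, h))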

  17. A Feature Extraction Method for Fault Classification of Rolling Bearing based on PCA

    NASA Astrophysics Data System (ADS)

    Wang, Fengtao; Sun, Jian; Yan, Dawen; Zhang, Shenghua; Cui, Liming; Xu, Yong

    2015-07-01

    This paper discusses fault feature selection using principal component analysis (PCA) for bearing fault classification. Multiple features selected from the time-frequency domain parameters of vibration signals are analyzed. First, time-domain statistical features, such as root mean square and kurtosis, are calculated; meanwhile, frequency statistical features are extracted from the frequency spectrum obtained by Fourier and Hilbert transformation. Then PCA is used to reduce the dimension of the feature vectors drawn from the raw vibration signals, which improves the real-time performance and accuracy of fault diagnosis. Finally, a fuzzy C-means (FCM) model is established to implement the diagnosis of rolling bearing faults. Practical rolling bearing experiment data are used to verify the effectiveness of the proposed method.
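
    A minimal sketch of the feature-then-PCA stage, with a small assumed feature set (the paper's full time-frequency feature list is longer):

      import numpy as np
      from sklearn.decomposition import PCA

      def vibration_features(x):
          x = np.asarray(x, dtype=float)
          rms = np.sqrt(np.mean(x ** 2))                              # time domain
          kurt = np.mean((x - x.mean()) ** 4) / (x.var() ** 2 + 1e-12)
          spec = np.abs(np.fft.rfft(x))
          centroid = np.sum(np.fft.rfftfreq(len(x)) * spec) / spec.sum()  # frequency domain
          return np.array([rms, kurt, np.ptp(x), centroid])

      # F: (n_signals, n_features) stacked feature vectors -> scores for FCM clustering
      def reduce_features(F, n_components=2):
          return PCA(n_components=n_components).fit_transform(F)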

  18. Multi-scale Analysis of High Resolution Topography: Feature Extraction and Identification of Landscape Characteristic Scales

    NASA Astrophysics Data System (ADS)

    Passalacqua, P.; Sangireddy, H.; Stark, C. P.

    2015-12-01

    With the advent of digital terrain data, detailed information on terrain characteristics and on scale and location of geomorphic features is available over extended areas. Our ability to observe landscapes and quantify topographic patterns has greatly improved, including the estimation of fluxes of mass and energy across landscapes. Challenges still remain in the analysis of high resolution topography data; the presence of features such as roads, for example, challenges classic methods for feature extraction and large data volumes require computationally efficient extraction and analysis methods. Moreover, opportunities exist to define new robust metrics of landscape characterization for landscape comparison and model validation. In this presentation we cover recent research in multi-scale and objective analysis of high resolution topography data. We show how the analysis of the probability density function of topographic attributes such as slope, curvature, and topographic index contains useful information for feature localization and extraction. The analysis of how the distributions change across scales, quantified by the behavior of modal values and interquartile range, allows the identification of landscape characteristic scales, such as terrain roughness. The methods are introduced on synthetic signals in one and two dimensions and then applied to a variety of landscapes of different characteristics. Validation of the methods includes the analysis of modeled landscapes where the noise distribution is known and features of interest easily measured.

  1. Feature Extraction from Simulations and Experiments: Preliminary Results Using a Fluid Mix Problem

    SciTech Connect

    Kamath, C; Nguyen, T

    2005-01-04

    Code validation, or comparing the output of computer simulations to experiments, is necessary to determine which simulation is a better approximation to an experiment. It can also be used to determine how the input parameters in a simulation can be modified to yield output that is closer to the experiment. In this report, we discuss our experiences in the use of image processing techniques for extracting features from 2-D simulations and experiments. These features can be used in comparing the output of simulations to experiments, or to other simulations. We first describe the problem domain and the data. We next explain the need for cleaning or denoising the experimental data and discuss the performance of different techniques. Finally, we discuss the features of interest and describe how they can be extracted from the data. The focus in this report is on extracting features from experimental and simulation data for the purpose of code validation; the actual interpretation of these features and their use in code validation is left to the domain experts.

  2. A Novel Hyperspectral Feature-Extraction Algorithm Based on Waveform Resolution for Raisin Classification.

    PubMed

    Zhao, Yun; Xu, Xing; He, Yong

    2015-12-01

    Near-infrared hyperspectral imaging technology was adopted in this study to discriminate among varieties of raisins produced in Xinjiang Uygur Autonomous Region, China. Eight varieties of raisins were used in the research, and the wavelengths of the hyperspectral images ranged from 900 to 1700 nm. A novel waveform resolution method is proposed to reduce the hyperspectral data and extract the features. The waveform-resolution method compresses the original hyperspectral data for one pixel into five amplitudes, five frequencies, and five phases, for 15 feature values in all. A neural network was established with three layers (eight neurons in the input layer, three in the hidden layer, and one in the output layer) based on the 15 features to determine the raisin variety. The accuracies of the model for the testing data set, expressed as sensitivity, precision, and specificity, are 93.38%, 81.92%, and 99.06%. This is higher than the accuracy of a model using the conventional principal component analysis feature-extraction method combined with a neural network, which has a sensitivity of 82.13%, a precision of 82.22%, and a specificity of 97.45%. The results indicate that the proposed waveform-resolution feature-extracting method combined with hyperspectral imaging technology is an efficient method for determining varieties of raisins. PMID:26555391
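
    One plausible reading of the waveform-resolution idea, sketched below under the assumption that the five amplitudes, frequencies, and phases come from the five strongest Fourier components of a pixel's spectral curve; the paper's exact decomposition may differ.

    ```python
    import numpy as np

    def waveform_features(spectrum, k=5):
        """Amplitude, frequency index, and phase of the k strongest Fourier
        components of one pixel's spectral curve (15 values for k=5)."""
        F = np.fft.rfft(spectrum - spectrum.mean())
        idx = np.argsort(np.abs(F))[::-1][:k]      # k dominant components
        amp = np.abs(F[idx]) / len(spectrum)
        freq = idx.astype(float)                   # in cycles over the band
        phase = np.angle(F[idx])
        return np.concatenate([amp, freq, phase])

    pixel = np.random.rand(256)                    # stand-in 900-1700 nm reflectance curve
    feat = waveform_features(pixel)                # 15-element vector fed to the network
    ```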

  3. Features extraction of EMG signal using time domain analysis for arm rehabilitation device

    NASA Astrophysics Data System (ADS)

    Jali, Mohd Hafiz; Ibrahim, Iffah Masturah; Sulaima, Mohamad Fani; Bukhari, W. M.; Izzuddin, Tarmizi Ahmad; Nasir, Mohamad Na'im

    2015-05-01

    A rehabilitation device serves as an exoskeleton for people who have lost the use of a limb, and an arm rehabilitation device may support the rehabilitation program of those suffering from arm disability. The device used to facilitate the program's tasks should improve the electrical activity in the motor unit while minimizing the mental effort of the user. Electromyography (EMG) is the technique used to analyze the presence of electrical activity in musculoskeletal systems. In a disabled person, the electrical activity in the muscles fails to produce the contractions needed for movement, and to keep paralyzed muscles from developing spasticity, movements should demand minimal mental effort. Therefore, the rehabilitation device should be based on an analysis of the surface EMG signals of able-bodied subjects. The signals are collected according to the procedures of surface electromyography for non-invasive assessment of muscles (SENIAM) and used to set the movement patterns of the arm rehabilitation device. The filtered EMG signal was reduced to the time-domain features of standard deviation (STD), mean absolute value (MAV), and root mean square (RMS). This extraction step is important for obtaining a reduced feature vector that represents the signal with little error. To determine the best features for each movement, several extraction trials were run and the features with the smallest errors were selected. The resulting features can be used in future work on real-time rehabilitation control.
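
    The three time-domain features named here are simple to compute; a minimal windowed sketch in Python (window and step sizes are illustrative assumptions):

    ```python
    import numpy as np

    def emg_features(x, win=256, step=128):
        """STD, MAV and RMS over sliding windows of a filtered EMG channel."""
        feats = []
        for start in range(0, len(x) - win + 1, step):
            w = x[start:start + win]
            feats.append([w.std(),                       # standard deviation
                          np.mean(np.abs(w)),            # mean absolute value
                          np.sqrt(np.mean(w ** 2))])     # root mean square
        return np.asarray(feats)

    emg = np.random.randn(5000)                          # stand-in for a SENIAM-compliant recording
    F = emg_features(emg)                                # one (STD, MAV, RMS) row per window
    ```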

  4. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image, as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain a new Gabor-filtered image, whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms and on retina and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
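
    A hedged sketch of the two-step pipeline using pywt and scikit-image; the filter-bank frequencies, orientations, and histogram binning are assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt
    from skimage.filters import gabor
    from sklearn.svm import SVC

    def hh_gabor_features(img, freqs=(0.1, 0.2, 0.3),
                          thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
        _, (_, _, hh) = pywt.dwt2(img, "db1")            # HH high-frequency subband
        feats = []
        for f in freqs:
            for t in thetas:
                real, imag = gabor(hh, frequency=f, theta=t)
                mag = np.hypot(real, imag)
                p, _ = np.histogram(mag, bins=64)
                p = p / p.sum()
                p_nz = p[p > 0]
                feats.append(-(p_nz * np.log2(p_nz)).sum())  # entropy
                feats.append((p ** 2).sum())                 # uniformity (energy)
        return np.asarray(feats)

    imgs = np.random.rand(40, 128, 128)                  # stand-in image patches
    y = np.random.randint(0, 2, 40)                      # binary labels
    X = np.array([hh_gabor_features(im) for im in imgs])
    clf = SVC(kernel="rbf").fit(X, y)                    # binary SVM classifier
    ```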

  5. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis.

    PubMed

    Toulminet, Gwenaëlle; Bertozzi, Massimo; Mousset, Stéphane; Bensrhair, Abdelaziz; Broggi, Alberto

    2006-08-01

    This paper presents a stereo vision system for the detection and distance computation of a preceding vehicle. It is divided into two major steps. Initially, a stereo vision-based algorithm is used to extract relevant three-dimensional (3-D) features in the scene; these features are investigated further to select the ones that belong to vertical objects only, and not to the road or background. These 3-D vertical features are then used as a starting point for preceding-vehicle detection: using a symmetry operator, a match against a simplified model of a rear vehicle's shape is performed with a monocular vision-based approach that allows the identification of a preceding vehicle. In addition, using the 3-D information previously extracted, an accurate distance computation is performed.

  6. Extraction of ABCD rule features from skin lesions images with smartphone.

    PubMed

    Rosado, Luís; Castro, Rui; Ferreira, Liliana; Ferreira, Márcia

    2012-01-01

    One of the greatest challenges in dermatology today is the early detection of melanoma, since the success rates of curing this type of cancer are very high if it is detected during the early stages of its development. The main objective of the work presented in this paper is to create a prototype of a patient-oriented system for skin lesion analysis using a smartphone. This work aims at implementing a self-monitoring system that collects, processes, and stores information on skin lesions through the automatic extraction of specific visual features. The selection of the features was based on the ABCD rule, which comprises four visual criteria considered highly relevant for the detection of malignant melanoma. The algorithms used to extract these features are briefly described, and the results achieved on images taken with the smartphone camera are discussed.

  7. Purification and feature extraction of shaft orbits for diagnosing large rotating machinery

    NASA Astrophysics Data System (ADS)

    Shi, D. F.; Wang, W. J.; Unsworth, P. J.; Qu, L. S.

    2005-01-01

    Vibration-based diagnosis has been employed as a powerful tool for maintaining the operating efficiency and safety of large rotating machinery. However, due to some inherent shortcomings, traditional vibration signal processing techniques are not accurate enough to extract the features of malfunctions. In this paper, a high-resolution spectrum is first proposed to calculate the amplitude, frequency, and phase details of sinusoidal harmonic and sub-harmonic vibration in large rotating machinery. Secondly, on the basis of the high-resolution spectrum, a purified shaft orbit is reconstructed to remove the interference terms. The moment and curve features, which are invariant to translation, scaling, and rotation of the shaft orbit, are introduced to extract features from the purified vibration orbit. This novel scheme is shown to be very effective and reliable in diagnosing several types of malfunctions in gas turbines and compressors under operating conditions as well as in the run-up stages.

  8. The extraction and use of facial features in low bit-rate visual communication.

    PubMed

    Pearson, D

    1992-01-29

    A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.

  9. SU-E-J-245: Sensitivity of FDG PET Feature Analysis in Multi-Plane Vs. Single-Plane Extraction

    SciTech Connect

    Harmon, S; Jeraj, R; Galavis, P

    2015-06-15

    Purpose: Sensitivity of PET-derived texture features to reconstruction methods has been reported for features extracted from axial planes; however, studies often utilize three-dimensional techniques. This work aims to quantify the impact of multi-plane (3D) vs. single-plane (2D) feature extraction on radiomics-based analysis, including sensitivity to reconstruction parameters and potential loss of spatial information. Methods: Twenty-three patients with solid tumors underwent [18F]FDG PET/CT scans under identical protocols. PET data were reconstructed using five sets of reconstruction parameters. Tumors were segmented using an automatic, in-house algorithm robust to reconstruction variations. 50 texture features were extracted using two methods: 2D patches along axial planes and 3D patches. For each method, sensitivity of features to reconstruction parameters was calculated as the percent difference relative to the average value across reconstructions. Correlations between feature values were compared between 2D and 3D extraction. Results: 21/50 features showed significantly different sensitivity to reconstruction parameters when extracted in 2D vs. 3D (Wilcoxon, α<0.05), assessed by the overall range of variation, Range_var (%). Eleven showed greater sensitivity to reconstruction in 2D extraction, primarily first-order and co-occurrence features (average Range_var increase 83%). The remaining ten showed higher variation in 3D extraction (average Range_var increase 27%), mainly co-occurrence and grey-level run-length features. Correlation between feature values extracted in 2D and in 3D was poor (R<0.5) for 12/50 features, including eight co-occurrence features. Feature-to-feature correlations in 2D were marginally higher than in 3D, with ∣R∣>0.8 in 16% and 13% of all feature combinations, respectively. Larger sensitivity to reconstruction parameters was seen for inter-feature correlation in 2D (σ=6%) than in 3D (σ<1%) extraction. Conclusion: Sensitivity

  10. Topology adaptive vessel network skeleton extraction with novel medialness measuring function.

    PubMed

    Zhu, Wen-Bo; Li, Bin; Tian, Lian-Fang; Li, Xiang-Xia; Chen, Qing-Lin

    2015-09-01

    Vessel tree skeleton extraction is widely applied in vascular structure segmentation; however, conventional approaches often suffer from adjacent interferences and poor topological adaptability. To avoid these problems, a robust, topology-adaptive, tree-like structure skeleton extraction framework is proposed in this paper. Specifically, to avoid adjacent interferences, a local message passing procedure called Gaussian affinity voting (GAV) is proposed to realize adaptive scale-growing of vessel voxels. Then the medialness measuring function (MMF) based on GAV, namely GAV-MMF, is constructed to extract medialness patterns robustly. To improve topological adaptability, a level-set graph embedded with GAV-MMF is employed to build initial curve skeletons without any user interaction. Furthermore, the GAV-MMF is embedded in stretching open active contours (SOAC) to drive the initial curves to the expected locations while maintaining smoothness and continuity. In addition, to provide an accurate and smooth final skeleton tree topology, topological checks and skeleton network reconfiguration are proposed. The continuity and scalability of the method are validated experimentally on synthetic and clinical images of multi-scale vessels. Experimental results show that the proposed method achieves acceptable topological adaptability for skeleton extraction of vessel trees. PMID:26134626

  13. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    PubMed

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database. PMID:25680220
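
    The sparse-plus-low-rank split of the weight matrix can be illustrated with a generic alternating proximal scheme; this is a sketch of the decomposition idea only, not the paper's fast first-order optimizer.

    ```python
    import numpy as np

    def soft(X, t):
        """Element-wise soft-thresholding (proximal operator of the l1 norm)."""
        return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

    def sparse_lowrank_split(W, lam_s=0.1, lam_l=0.5, n_iter=200):
        """Alternating minimization of ||W - S - L||_F^2/2 + lam_s*||S||_1 + lam_l*||L||_*.
        S captures subject-specific feature weights, L the shared low-rank subspace."""
        S = np.zeros_like(W)
        L = np.zeros_like(W)
        for _ in range(n_iter):
            S = soft(W - L, lam_s)                       # proximal step for the l1 term
            U, sv, Vt = np.linalg.svd(W - S, full_matrices=False)
            L = (U * soft(sv, lam_l)) @ Vt               # singular value thresholding
        return S, L

    W = np.random.randn(30, 50)                          # stand-in: per-task feature weights
    S, L = sparse_lowrank_split(W)
    ```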

  14. Local rigid registration for multimodal texture feature extraction from medical images

    NASA Astrophysics Data System (ADS)

    Steger, Sebastian

    2011-03-01

    The joint extraction of texture features from medical images of different modalities requires accurate image registration at the target structures. In many cases rigid registration of the entire images does not achieve the desired accuracy, whereas deformable registration is too complex and may result in undesired deformations. This paper presents a novel region-of-interest alignment approach based on local rigid registration, enabling image fusion for multimodal texture feature extraction. First, rigid registration of the entire images is performed to obtain an initial guess. Then small cubic regions around the target structure are clipped from all images and individually rigidly registered. The approach was applied to extract texture features from clinically acquired CT and MR images of lymph nodes in the oropharynx for an oral cancer recurrence prediction framework. Visual inspection showed that in all of the 30 cases at least a subtle misalignment was perceivable in the globally rigidly aligned images. After applying the presented approach, the alignment of the target structure improved significantly in 19 cases; in 12 cases no alignment mismatch whatsoever was perceptible, without requiring the complexity of deformable registration and without deforming the target structure. Further investigation showed that if the resolutions of the individual modalities differ significantly, partial volume effects occur, diminishing the significance of the multimodal features even for perfectly aligned images.

  15. Object-Based Arctic Sea Ice Feature Extraction through High Spatial Resolution Aerial photos

    NASA Astrophysics Data System (ADS)

    Miao, X.; Xie, H.

    2015-12-01

    High resolution aerial photographs used to detect and classify sea ice features can provide accurate physical parameters to refine, validate, and improve climate models. However, manually delineating sea ice features such as melt ponds, submerged ice, water, ice/snow, and pressure ridges is time-consuming and labor-intensive. An object-based classification algorithm is developed to automatically and efficiently extract sea ice features from aerial photographs taken during the Chinese National Arctic Research Expedition in summer 2010 (CHINARE 2010) in the marginal ice zone (MIZ) near the Alaska coast. The algorithm includes four steps: (1) image segmentation groups neighboring pixels into objects based on the similarity of spectral and textural information; (2) a random forest classifier distinguishes four general classes: water, general submerged ice (GSI, including melt ponds and submerged ice), shadow, and ice/snow; (3) a polygon neighbor analysis separates melt ponds and submerged ice based on spatial relationships; and (4) pressure ridge features are extracted from shadow based on local illumination geometry. A producer's accuracy of 90.8% and a user's accuracy of 91.8% are achieved for melt pond detection, and shadow shows a user's accuracy of 88.9% and a producer's accuracy of 91.4%. Finally, pond density, pond fraction, ice floes, mean ice concentration, average ridge height, ridge profile, and ridge frequency are extracted from batch processing of the aerial photos, and their uncertainties are estimated.

  16. Dynamic-Feature Extraction, Attribution and Reconstruction (DEAR) Method for Power System Model Reduction

    SciTech Connect

    Wang, Shaobu; Lu, Shuai; Zhou, Ning; Lin, Guang; Elizondo, Marcelo A.; Pai, M. A.

    2014-09-04

    In interconnected power systems, dynamic model reduction can be applied to generators outside the area of interest to mitigate the computational cost of transient stability studies. This paper presents an approach for deriving the reduced dynamic model of the external area based on dynamic response measurements, which comprises three steps: dynamic-feature extraction, attribution, and reconstruction (DEAR). In the DEAR approach, a feature extraction technique, such as singular value decomposition (SVD), is applied to the measured generator dynamics after a disturbance. Characteristic generators are then identified in the feature attribution step by matching the extracted dynamic features with the highest similarity, forming a suboptimal 'basis' of system dynamics. In the reconstruction step, generator state variables such as rotor angles and voltage magnitudes are approximated by a linear combination of the characteristic generators, resulting in a quasi-nonlinear reduced model of the original external system. The network model is unchanged in the DEAR method. Tests on several IEEE standard systems show that the proposed method achieves a better reduction ratio and smaller response errors than traditional coherency aggregation methods.
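
    A compact numpy sketch of the three DEAR steps under simplifying assumptions: random stand-in responses, and greedy correlation matching in place of the paper's similarity criterion.

    ```python
    import numpy as np

    # Rows: generators in the external area; columns: time samples of a
    # post-disturbance response (e.g., rotor angles). Random stand-in data.
    Y = np.random.randn(50, 400)

    # Feature extraction: dominant dynamic modes via SVD.
    U, s, Vt = np.linalg.svd(Y - Y.mean(axis=1, keepdims=True), full_matrices=False)
    modes = Vt[:3]                                   # three dominant temporal features

    # Attribution: pick the generator most similar to each mode (a greedy
    # stand-in for the paper's similarity matching).
    corr = np.abs(np.corrcoef(Y, modes)[:50, 50:])   # |correlation|, generators vs. modes
    characteristic = np.unique(corr.argmax(axis=0))

    # Reconstruction: approximate every generator as a linear combination
    # of the characteristic ones (least squares on the training window).
    B = Y[characteristic]                            # basis responses
    coef, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)
    Y_hat = (B.T @ coef).T
    print("relative error:", np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))
    ```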

  17. Mine detection using variational methods for image enhancement and feature extraction

    NASA Astrophysics Data System (ADS)

    Szymczak, William G.; Guo, Weiming; Rogers, Joel Clark W.

    1998-09-01

    A critical part of automatic classification algorithms is the extraction of features that distinguish targets from background noise and clutter. The focus of this paper is the use of variational methods for improving the classification of sea mines from both side-scan sonar and laser line-scan images. These methods are based on minimizing a functional of the image intensity. Examples include Total Variation Minimization (TVM), which is very effective for reducing the noise of an image without compromising its edge features, and Mumford-Shah segmentation, which, in its simplest form, provides an optimal piecewise constant partition of the image. For the sonar side-scan images it is shown that a combination of these two variational methods (first reducing the noise using TVM, then segmenting) outperforms the use of either one individually for the extraction of mine-like features. Multichannel segmentation based on a wavelet decomposition is also effectively used to declutter a sonar image. Finally, feature extraction and classification using segmentation are demonstrated on laser line-scan images of mines on a cluttered sea floor.
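
    Both building blocks are available in scikit-image; a minimal sketch, with the Chan-Vese model standing in for the piecewise-constant Mumford-Shah segmentation (parameters are illustrative):

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle
    from skimage.segmentation import chan_vese

    sonar = np.random.rand(256, 256)                    # stand-in for a side-scan sonar image
    smooth = denoise_tv_chambolle(sonar, weight=0.2)    # TV minimization: noise down, edges kept
    mask = chan_vese(smooth, mu=0.25)                   # piecewise-constant two-phase partition
    # Connected components of `mask` are candidate mine-like features for classification.
    ```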

  18. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows, and the elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some disadvantages for object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the mean shift algorithm, tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark data set provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.

  19. Road and Roadside Feature Extraction Using Imagery and LIDAR Data for Transportation Operation

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.; Romero, M. A.; Tarko, A.

    2015-03-01

    Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and on features within proximity to the roads as input for evaluating and prioritizing new or improved road projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent, together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and requires multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road network, together with the estimated road width, average grade along the road, and cross-sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  20. A new procedure for extracting fault feature of multi-frequency signal from rotating machinery

    NASA Astrophysics Data System (ADS)

    Xiong, Xin; Yang, Shixi; Gan, Chunbiao

    2012-10-01

    Modern rotating machinery is built as a multi-rotor, multi-bearing system, and complex factors such as rub or misalignment faults can lead to high nonlinearity of the system and non-stationarity of the vibration signals. As a wide spectrum of frequency components is likely generated by these complex factors, feature extraction becomes very important for fault diagnosis of a rotor system, e.g., rotor-to-stator rub and rotor misalignment. In recent years, the Hilbert-Huang transform (HHT), combining the empirical mode decomposition (EMD) algorithm with the Hilbert transform (HT), has been commonly used in vibration signal analysis and turns out to be very effective in dealing with non-stationary signals. Nevertheless, most intrinsic mode functions (IMFs) from the EMD are multi-frequency, and the extracted instantaneous frequency (IF) curves usually show irregularities, which makes these features difficult to interpret from the HHT spectrogram. In this study, a new procedure, combining the customary HHT with a fourth-order spectral analysis tool named the Kurtogram, is developed to extract high-frequency features from several kinds of faulty signals, where the Kurtogram is applied to locate the non-stationary intra- and inter-wave modulation components in the original signals and to produce more monochromatic IMFs. It is shown that the newly developed feature extraction procedure can accurately detect and characterize the fault feature information hidden in a multi-frequency signal, as validated by a rub test on a rotor-bearing assembly and a misalignment test on a turbo-compressor machine set.
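
    A sketch of the two ingredients, assuming the PyEMD package for EMD and a plain STFT-based spectral kurtosis as a simple stand-in for the Kurtogram (which scans many band/bandwidth combinations):

    ```python
    import numpy as np
    from scipy.signal import hilbert, stft
    from PyEMD import EMD   # PyEMD package (pip install EMD-signal); assumed available

    fs = 2048
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*180*t) * (t > 1)  # stand-in rotor signal

    # Spectral kurtosis over an STFT grid, to locate the band carrying
    # non-stationary modulation components.
    f, _, Z = stft(x, fs=fs, nperseg=256)
    sk = np.mean(np.abs(Z)**4, axis=1) / np.mean(np.abs(Z)**2, axis=1)**2 - 2
    print(f"most impulsive band around {f[sk.argmax()]:.0f} Hz")

    # EMD + Hilbert transform: instantaneous frequency of each IMF.
    for i, imf in enumerate(EMD().emd(x)):
        phase = np.unwrap(np.angle(hilbert(imf)))
        inst_f = np.diff(phase) * fs / (2 * np.pi)
        print(f"IMF {i}: median instantaneous frequency {np.median(inst_f):.1f} Hz")
    ```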

  1. Water Extraction in High Resolution Remote Sensing Image Based on Hierarchical Spectrum and Shape Features

    NASA Astrophysics Data System (ADS)

    Li, Bangyu; Zhang, Hui; Xu, Fanjiang

    2014-03-01

    This paper addresses the problem of water extraction from high resolution remote sensing images (including R, G, B, and NIR channels), which has drawn considerable attention in recent years. Previous work on water extraction has mainly faced two difficulties: 1) it is difficult to obtain an accurate position of the water boundary when using low resolution images, and 2) like all other image-based object classification problems, the phenomena of "different objects, same image" or "different images, same object" affect water extraction. Shadow from elevated objects (e.g. buildings, bridges, towers, and trees) scattered across a remote sensing image is a typical noise object for water extraction; in many cases it is difficult to discriminate between water and shadow, especially in urban regions. We propose a water extraction method with two hierarchies: statistical features of spectral characteristics based on image segmentation, and shape features based on shadow removal. In the first hierarchy, the Statistical Region Merging (SRM) algorithm is adopted for image segmentation. SRM includes two key steps: sorting adjacent regions according to a pre-ascertained sort function, and merging adjacent regions based on a pre-ascertained merging predicate. In the original SRM, sorting is done once for the whole process, without considering the changes caused by merging, which may cause imprecise results. We therefore modify SRM with dynamic sorting, which repeats the sorting step whenever merging causes large changes in adjacent regions. To achieve robust segmentation, we merge regions using six features (the four image bands, the Normalized Difference Water Index (NDWI), and the Normalized Saturation-Value Difference Index (NSVDI)). All these features contribute to segmenting the image into object regions; NDWI and NSVDI help discriminate between water and some shadows. In the second hierarchy, we adopt
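
    The two spectral indices used in the first hierarchy are one-liners; a sketch with assumed band arrays (thresholds are scene-dependent and illustrative):

    ```python
    import numpy as np

    def ndwi(green, nir):
        """Normalized Difference Water Index: high over water."""
        return (green - nir) / (green + nir + 1e-12)

    def nsvdi(saturation, value):
        """Normalized Saturation-Value Difference Index: helps flag shadow."""
        return (saturation - value) / (saturation + value + 1e-12)

    g, nir = np.random.rand(2, 512, 512)              # stand-in band reflectances
    s, v = np.random.rand(2, 512, 512)                # from an RGB -> HSV conversion
    water_like = ndwi(g, nir) > 0.0                   # candidate water pixels
    shadow_like = nsvdi(s, v) > 0.0                   # candidate shadow pixels
    ```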

  2. High Resolution Urban Feature Extraction for Global Population Mapping using High Performance Computing

    SciTech Connect

    Vijayaraj, Veeraraghavan; Bright, Eddie A; Bhaduri, Budhendra L

    2007-01-01

    The advent of high spatial resolution satellite imagery such as QuickBird (0.6 meter) and IKONOS (1 meter) has provided a new data source for high resolution urban land cover mapping. Extracting accurate urban regions from high resolution images has many applications and is essential to the population mapping efforts of Oak Ridge National Laboratory's (ORNL) LandScan population distribution program. This paper discusses an automated parallel algorithm, implemented in a high performance computing environment, that extracts urban regions from high resolution images using texture and spectral features.

  3. Extraction of informative cell features by segmentation of densely clustered tissue images.

    PubMed

    Kothari, Sonal; Chaudry, Qaiser; Wang, May D

    2009-01-01

    This paper presents a fast methodology for estimating informative cell features from densely clustered RGB tissue images. The estimated features include nuclei count, nuclei size distribution, nuclei eccentricity (roundness) distribution, nuclei closeness distribution, and cluster size distribution. Our methodology is a three-step technique. First, we generate a binary nuclei mask from an RGB tissue image by color segmentation. Second, we segment the nuclei clusters present in the binary mask into individual nuclei by concavity detection and ellipse fitting. Finally, we estimate the informative features for each nucleus and their distributions for the complete image. The main focus of our work is the development of a fast and accurate nuclei cluster segmentation technique for densely clustered tissue images. We also developed a simple graphical user interface (GUI) for our application, which requires minimal user interaction and can efficiently extract features from nuclei clusters, making it feasible for clinical applications (less than 2 minutes for a 1.9-megapixel tissue image).

  4. The effects of compressive sensing on extracted features from tri-axial swallowing accelerometry signals

    NASA Astrophysics Data System (ADS)

    Sejdić, Ervin; Movahedi, Faezeh; Zhang, Zhenwei; Kurosu, Atsuko; Coyle, James L.

    2016-05-01

    Acquiring swallowing accelerometry signals with a compressive sensing scheme may be a desirable approach for monitoring swallowing safety over longer periods of time. However, it needs to be ensured that signal characteristics can be recovered accurately from the compressed samples. In this paper, we consider this issue by examining the effects of the number of acquired compressed samples on the calculated swallowing accelerometry signal features. We used tri-axial swallowing accelerometry signals acquired from seventeen stroke patients (106 swallows in total). From the acquired signals, we extracted the typically considered signal features from the time, frequency, and time-frequency domains. Next, we compared these features between the original signals (sampled using traditional sampling schemes) and the compressively sampled signals. Our results show that we can obtain accurate estimates of signal features even when using only a third of the original samples.

  5. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    NASA Astrophysics Data System (ADS)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh-scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence intervals of ∼ ± 0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three models were therefore compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional

  6. Extracting the driving force from ozone data using slow feature analysis

    NASA Astrophysics Data System (ADS)

    Wang, Geli; Yang, Peicai; Zhou, Xiuji

    2016-05-01

    Slow feature analysis (SFA) is a recommended technique for extracting slowly varying features from a quickly varying signal. In this work, we apply SFA to total ozone data from Arosa, Switzerland. The results show that the signal of volcanic eruptions can be found in the extracted driving force, and wavelet analysis of this driving force shows two main dominant scales, which may be connected with the effects of climate modes such as the North Atlantic Oscillation (NAO) and of solar activity. The findings of this study represent a contribution to our understanding of causality from observed climate data.
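
    A minimal linear SFA implementation (whiten the input, then find directions whose time derivative has the least variance); the toy driving-force example below is illustrative only, not the Arosa analysis:

    ```python
    import numpy as np

    def sfa(X, n_out=1):
        """Minimal linear slow feature analysis.
        X: (time, dims) signal; returns the n_out slowest output signals."""
        X = X - X.mean(axis=0)
        C = np.cov(X, rowvar=False)
        d, E = np.linalg.eigh(C)
        Z = X @ (E / np.sqrt(d))                # whiten: unit variance per direction
        dZ = np.diff(Z, axis=0)                 # time derivative
        dd, dE = np.linalg.eigh(np.cov(dZ, rowvar=False))
        return Z @ dE[:, :n_out]                # smallest derivative variance = slowest

    # Toy example: a slow drive hidden inside fast oscillations.
    t = np.linspace(0, 50, 5000)
    driver = np.sin(0.2 * t)                    # slowly varying "driving force"
    fast = np.sin(11 * t + driver)
    X = np.column_stack([fast, fast**2, fast*driver + 0.01*np.random.randn(len(t))])
    slow = sfa(X)                               # should correlate with `driver`
    ```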

  7. Complex Biological Event Extraction from Full Text using Signatures of Linguistic and Semantic Features

    SciTech Connect

    McGrath, Liam R.; Domico, Kelly O.; Corley, Courtney D.; Webb-Robertson, Bobbie-Jo M.

    2011-06-24

    Building on technical advances from the BioNLP 2009 Shared Task Challenge, the 2011 challenge sets forth to generalize techniques to other complex biological event extraction tasks. In this paper, we present the implementation and evaluation of a signature-based machine-learning technique to predict events from full texts of infectious disease documents. Specifically, our approach uses novel signatures composed of traditional linguistic features and semantic knowledge to predict event triggers and their candidate arguments. Using a leave-one-out analysis, we report the contributions of linguistic and shallow semantic features to trigger prediction and candidate argument extraction. Lastly, we examine the evaluations and posit causes of errors in the infectious disease track subtasks.

  8. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IOT) is a kind of intelligent network that can be used to locate, track, identify, and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, visual feature extraction and the establishment of visual tags for the human face are investigated based on the ORL face database. Firstly, the principal component analysis (PCA) algorithm is used for face feature extraction; then the support vector machine (SVM) is adopted for classification and face recognition; finally, a visual tag is established for each classified face. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can display the visual tags of objects conveniently.
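
    scikit-learn ships the ORL data as the Olivetti faces, so a PCA-plus-SVM tagging pipeline of this kind can be sketched directly (the dataset is downloaded on first use; component counts are illustrative):

    ```python
    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    # The Olivetti faces in scikit-learn are the ORL database used in the paper.
    faces = fetch_olivetti_faces()
    X_tr, X_te, y_tr, y_te = train_test_split(
        faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

    # PCA eigenface features -> SVM classifier; the predicted identity
    # becomes the content of the visual tag.
    tagger = make_pipeline(PCA(n_components=60, whiten=True), SVC(kernel="rbf"))
    tagger.fit(X_tr, y_tr)
    print("tag accuracy:", tagger.score(X_te, y_te))
    ```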

  9. Feature Extraction of One-step Ahead Daily Maximum Load with Regression Tree

    NASA Astrophysics Data System (ADS)

    Mori, Hiroyuki; Sakatani, Yoshinori; Fujino, Tatsurou; Numa, Kazuyuki

    In this paper, a new efficient feature extraction method is proposed to handle one-step-ahead daily maximum load forecasting. In recent years, power systems have become more complicated under the deregulated and competitive environment. As a result, it is not easy to understand the cause and effect in short-term load forecasting from a large amount of data. This paper analyzes load data from the standpoint of data mining, by which we mean techniques that find rules or knowledge in large databases. As a data mining method for load forecasting, this paper focuses on the regression tree, which handles continuous variables and expresses knowledge as if-then rules. Investigating the variable importance of the regression tree gives information on the transition of the load forecasting models. This paper proposes a feature extraction method for examining the variable importance. The proposed method allows the transition of the variable importance to be classified using actual data.
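
    A small sketch of reading variable importance from a regression tree on synthetic daily-load data; refitting per period and watching the importances change is the kind of "transition" the record studies (feature names and the load model are made up for illustration):

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    n = 365
    X = np.column_stack([
        30 + 10 * rng.random(n),        # daily max temperature (stand-in)
        rng.integers(0, 7, n),          # day of week
        rng.random(n),                  # humidity (stand-in)
    ])
    y = 900 + 12 * X[:, 0] - 25 * (X[:, 1] >= 5) + 5 * rng.standard_normal(n)

    tree = DecisionTreeRegressor(max_depth=4).fit(X, y)   # if-then load-forecasting rules
    for name, imp in zip(["temperature", "weekday", "humidity"],
                         tree.feature_importances_):
        print(f"{name:12s} importance = {imp:.2f}")
    ```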

  10. BDPCA plus LDA: a novel fast feature extraction technique for face recognition.

    PubMed

    Zuo, Wangmeng; Zhang, David; Yang, Jian; Wang, Kuanquan

    2006-08-01

    Appearance-based methods, especially linear discriminant analysis (LDA), have been very successful in facial feature extraction, but the recognition performance of LDA is often degraded by the so-called "small sample size" (SSS) problem. One popular solution to the SSS problem is principal component analysis (PCA) + LDA (Fisherfaces), but LDA in other low-dimensional subspaces may be more effective. In this correspondence, we propose a novel fast feature extraction technique, bidirectional PCA (BDPCA) plus LDA (BDPCA + LDA), which performs LDA in the BDPCA subspace. Two face databases, the ORL and the Facial Recognition Technology (FERET) databases, are used to evaluate BDPCA + LDA. Experimental results show that BDPCA + LDA has lower computational and memory requirements and higher recognition accuracy than PCA + LDA.

  11. Constructing New Biorthogonal Wavelet Type which Matched for Extracting the Iris Image Features

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.; Suhardjo; Susanto, Adhi

    2013-04-01

    Previous research has been devoted to obtaining new types of wavelets. For iris recognition using orthogonal or biorthogonal wavelets, the Haar filter had been found the most suitable for recognizing iris images. However, a new wavelet should be designed to best match the iris image features, so that it can easily be applied for identification, recognition, or authentication purposes. In this research, a new biorthogonal wavelet was designed based on the properties of the Haar filter and on Haar's orthogonality conditions. As a result, a new biorthogonal 5/7 filter-type wavelet was obtained that performs better than other wavelets, including Haar, in extracting iris image features, as measured by mean-squared error (MSE) and Euclidean distance.

  12. A Study of Various Feature Extraction Methods on a Motor Imagery Based Brain Computer Interface System

    PubMed Central

    Resalat, Seyed Navid; Saba, Valiallah

    2016-01-01

    Introduction: Brain Computer Interface (BCI) systems based on Movement Imagination (MI) have been widely used in recent decades. Separate feature extraction methods are employed on MI data sets and classified in Virtual Reality (VR) environments for real-time applications. Methods: This study applied a wide variety of features to the recorded data, using a Linear Discriminant Analysis (LDA) classifier to select the best feature sets in the offline mode. The data set was recorded in 3-class tasks of left hand, right hand, and foot motor imagery. Results: The experimental results showed that the Auto-Regressive (AR), Mean Absolute Value (MAV), and Band Power (BP) features have higher accuracy values, 75%, than those of the other features. Discussion: These features were selected for the designed real-time navigation. The corresponding results revealed the subject-specific nature of the MI-based BCI system; however, the Power Spectral Density (PSD) based α-BP feature had the highest averaged accuracy. PMID:27303595

  13. Automatic geomorphic feature extraction from lidar in flat and engineered landscapes

    NASA Astrophysics Data System (ADS)

    Passalacqua, Paola; Belmont, Patrick; Foufoula-Georgiou, Efi

    2012-03-01

    High-resolution topographic data derived from light detection and ranging (lidar) technology enables detailed geomorphic observations to be made on spatially extensive areas in a way that was previously not possible. Availability of this data provides new opportunities to study the spatial organization of landscapes and channel network features, increase the accuracy of environmental transport models, and inform decisions for targeting conservation practices. However, with the opportunity of increased resolution topographic data come formidable challenges in terms of automatic geomorphic feature extraction, analysis, and interpretation. Low-relief landscapes are particularly challenging because topographic gradients are low, and in many places both the landscape and the channel network have been heavily modified by humans. This is especially true for agricultural landscapes, which dominate the midwestern United States. The goal of this work is to address several issues related to feature extraction in flat lands by using GeoNet, a recently developed method based on nonlinear multiscale filtering and geodesic optimization for automatic extraction of geomorphic features (channel heads and channel networks) from high-resolution topographic data. Here we test the ability of GeoNet to extract channel networks in flat and human-impacted landscapes using 3 m lidar data for the Le Sueur River Basin, a 2880 km2 subbasin of the Minnesota River Basin. We propose a curvature analysis to differentiate between channels and manmade structures that are not part of the river network, such as roads and bridges. We document that Laplacian curvature more effectively distinguishes channels in flat, human-impacted landscapes compared with geometric curvature. In addition, we develop a method for performing automated channel morphometric analysis including extraction of cross sections, detection of bank locations, and identification of geomorphic bankfull water surface elevation. Using

  14. Multi range spectral feature fitting for hyperspectral imagery in extracting oilseed rape planting area

    NASA Astrophysics Data System (ADS)

    Pan, Zhuokun; Huang, Jingfeng; Wang, Fumin

    2013-12-01

    Spectral feature fitting (SFF) is a commonly used strategy in hyperspectral imagery analysis for discriminating ground targets. Compared to other image analysis techniques, SFF does not secure higher accuracy in extracting image information in all circumstances. Multi-range spectral feature fitting (MRSFF), available in the ENVI software, allows the user to focus on the spectral features of interest to yield better performance; the spectral wavelength ranges and their corresponding weights must therefore be determined. The purpose of this article is to demonstrate the performance of MRSFF in extracting the oilseed rape planting area. A practical method for defining the weighted values, the variance coefficient weight method, is proposed as the criterion. Oilseed rape field canopy spectra over the whole growth stage were collected prior to investigating its phenological variation; oilseed rape endmember spectra were extracted from the Hyperion image as identifying samples for analyzing the oilseed rape field. Wavelength range divisions were determined by the difference between field-measured spectra and image spectra, and image spectral variance coefficient weights for each wavelength range were calculated from the field-measured spectra of the closest date. By using MRSFF, wavelength ranges were classified to characterize the target's spectral features without compromising the integrity of the spectral profile. The analysis was substantially successful in extracting oilseed rape planting areas (RMSE ≤ 0.06), and the RMSE histogram indicated a result superior to conventional SFF. Accuracy assessment was based on the mapping result compared with spectral angle mapping (SAM) and the normalized difference vegetation index (NDVI). The MRSFF yielded a robust, convincing result and may therefore further the use of hyperspectral imagery in precision agriculture.

  15. Modular implementation of feature extraction and matching algorithms for photogrammetric stereo imagery

    NASA Astrophysics Data System (ADS)

    Kershaw, James; Hamlyn, Garry

    1994-06-01

    This paper describes the implementation of algorithms for automatically extracting and matching features in stereo pairs of images. The implementation has been designed to be as modular as possible to allow different algorithms for each stage in the matching process to be combined in the most appropriate manner for each particular problem. The modules have been implemented in the AVS environment but are designed to be portable to any platform. This work has been undertaken as part of task DEF 93/1 63 'Intelligence Analysis of Imagery', and forms part of ITD's contribution to the Visual Processing research program in the Centre for Sensor System and Information Processing. A major aim of both the task and the research program is to produce software to assist intelligence analysts in extracting three dimensional shape from imagery: the algorithms and software described here will form the first part of a module for automatically extracting depth information from stereo image pairs.

  16. Belt-oriented RADON transform and its application to extracting features from high-resolution remotely sensed images

    NASA Astrophysics Data System (ADS)

    Wang, Ruifu; Zhang, Jie; Huang, Jianbo; Chen, Miao; Leng, Xiuhua

    2003-05-01

    The theory of the belt-oriented Radon transform is developed and applied to extracting features from high-resolution remote sensing images. Several typical experiments prove that the belt-oriented Radon transform is powerful in extracting belt (strip-like) features, whereas the conventional line-oriented Radon transform is not.

  18. Extracting product features and opinion words using pattern knowledge in customer reviews.

    PubMed

    Htay, Su Su; Lynn, Khin Thidar

    2013-01-01

    Due to the development of e-commerce and web technology, most online merchant sites allow customers to write comments about the products they purchase. Customer reviews express opinions about products or services and are collectively referred to as customer feedback data. Extracting opinions about products from customer reviews is becoming an interesting area of research, motivating the development of automatic opinion mining applications. Efficient methods and techniques are therefore needed to extract opinions from reviews. In this paper, we propose a novel idea for finding opinion words or phrases for each feature from customer reviews in an efficient way. Our focus is to obtain patterns of opinion words/phrases about product features from the review text through adjectives, adverbs, verbs, and nouns. The extracted features and opinions are useful for generating a meaningful summary that can provide a significant informative resource to help users as well as merchants track the most suitable choice of product. PMID:24459430

  19. Local intensity feature tracking and motion modeling for respiratory signal extraction in cone beam CT projections.

    PubMed

    Dhou, Salam; Motai, Yuichi; Hugo, Geoffrey D

    2013-02-01

    Accounting for respiration motion during imaging can help improve targeting precision in radiation therapy. We propose local intensity feature tracking (LIFT), a novel markerless breath-phase sorting method for cone beam computed tomography (CBCT) scan images. The contributions of this study are twofold. First, LIFT extracts the respiratory signal from the CBCT projections of the thorax depending only on tissue feature points that exhibit respiration. Second, the extracted respiratory signal is shown to correlate with standard respiration signals. LIFT extracts feature points in the first CBCT projection of a sequence and tracks those points in consecutive projections, forming trajectories. Clustering is applied to select trajectories showing an oscillating behavior similar to the breathing motion. Those "breathing" trajectories are used in a 3-D reconstruction approach to recover the 3-D motion of the lung, which represents the respiratory signal. Experiments were conducted on datasets exhibiting regular and irregular breathing patterns. Results showed that the LIFT-based respiratory signal correlates with the diaphragm position-based signal with an average phase shift of 1.68 projections, as well as with the internal marker-based signal with an average phase shift of 1.78 projections. LIFT was able to detect the respiratory signal in all projections of all datasets.
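
    A rough sketch of a LIFT-style tracking loop with OpenCV, using random stand-in projections; the clustering step is reduced here to simply picking the most oscillatory trajectories, which is an assumption rather than the paper's method.

    ```python
    import numpy as np
    import cv2

    # Stand-in CBCT projection sequence: (n_frames, H, W) uint8 images.
    projections = (np.random.rand(60, 384, 512) * 255).astype(np.uint8)

    # Detect feature points in the first projection and track them through
    # the sequence with pyramidal Lucas-Kanade optical flow.
    p0 = cv2.goodFeaturesToTrack(projections[0], maxCorners=200,
                                 qualityLevel=0.01, minDistance=10)
    tracks = [p0.reshape(-1, 2)]
    prev = projections[0]
    for frame in projections[1:]:
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, p0, None)
        tracks.append(p1.reshape(-1, 2))
        prev, p0 = frame, p1

    traj = np.stack(tracks)                       # (n_frames, n_points, 2)
    # Points whose vertical coordinate oscillates most are "breathing" candidates;
    # their mean motion is a crude surrogate for the respiratory signal.
    osc = traj[:, :, 1].std(axis=0)
    breathing = traj[:, osc.argsort()[-20:], 1].mean(axis=1)
    ```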

  20. Feature extraction and classification for EEG signals using wavelet transform and machine learning techniques.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Ahmad, Rana Fayyaz; Badruddin, Nasreen; Kamel, Nidal; Hussain, Muhammad; Chooi, Weng-Tink

    2015-03-01

    This paper describes a discrete wavelet transform-based feature extraction scheme for the classification of EEG signals. In this scheme, the discrete wavelet transform is applied to the EEG signals and the relative wavelet energy is calculated in terms of the detail coefficients and the approximation coefficients of the last decomposition level. The extracted relative wavelet energy features are passed to classifiers for the classification purpose. The EEG dataset employed for the validation of the proposed method consisted of two classes: (1) EEG signals recorded during a complex cognitive task (Raven's Advanced Progressive Matrices test) and (2) EEG signals recorded in the rest condition (eyes open). The performance of four different classifiers was evaluated with four performance measures, i.e., accuracy, sensitivity, specificity, and precision. An accuracy above 98% was achieved by the support vector machine, multi-layer perceptron, and K-nearest neighbor classifiers using the approximation (A4) and detail coefficients (D4), which represent the frequency ranges of 0.53-3.06 and 3.06-6.12 Hz, respectively. The findings of this study demonstrate that the proposed feature extraction approach has the potential to classify EEG signals recorded during a complex cognitive task with a high accuracy rate.
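
    The relative wavelet energy features are straightforward with pywt; the sampling rate, wavelet, and decomposition depth below are assumptions for illustration:

    ```python
    import numpy as np
    import pywt

    def relative_wavelet_energy(x, wavelet="db4", level=4):
        """Energy share of the approximation and each detail band
        from a discrete wavelet decomposition."""
        coeffs = pywt.wavedec(x, wavelet, level=level)    # [A4, D4, D3, D2, D1]
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    fs = 128
    eeg = np.random.randn(30 * fs)                        # stand-in EEG epoch
    rwe = relative_wavelet_energy(eeg)                    # features fed to the classifier
    print(dict(zip(["A4", "D4", "D3", "D2", "D1"], rwe.round(3))))
    ```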

  1. Breast cancer mitosis detection in histopathological images with spatial feature extraction

    NASA Astrophysics Data System (ADS)

    Albayrak, Abdülkadir; Bilgin, Gökhan

    2013-12-01

    In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem, and segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell shows more distinctive textural dissimilarities than normal cells; hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As the main part of this study, the Haralick texture descriptor is employed with different spatial window sizes in the RGB and La*b* color spaces, so that the spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. The extracted features are compared for various sample sizes using Support Vector Machines with k-fold cross-validation. The presented results show that the separation accuracy for mitotic and non-mitotic cellular pixels improves as the spatial window size increases.
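
    A sketch of windowed Haralick-style texture features with scikit-image's gray-level co-occurrence matrix; the window sizes, distances, and chosen properties are illustrative, not the paper's configuration.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

    def haralick_window_features(gray_u8, size=32, step=32):
        """Contrast/homogeneity/energy/correlation per spatial window,
        so neighborhood texture can separate mitotic regions."""
        feats = []
        for r in range(0, gray_u8.shape[0] - size + 1, step):
            for c in range(0, gray_u8.shape[1] - size + 1, step):
                win = gray_u8[r:r + size, c:c + size]
                glcm = graycomatrix(win, distances=[1],
                                    angles=[0, np.pi / 2], levels=256,
                                    symmetric=True, normed=True)
                feats.append([graycoprops(glcm, p).mean()
                              for p in ("contrast", "homogeneity",
                                        "energy", "correlation")])
        return np.asarray(feats)

    img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in histopathology channel
    F = haralick_window_features(img)                        # rows: windows; cols: 4 features
    ```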

  2. Fine-Grain Feature Extraction from Malware's Scan Behavior Based on Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Eto, Masashi; Sonoda, Kotaro; Inoue, Daisuke; Yoshioka, Katsunari; Nakao, Koji

    Network monitoring systems that detect and analyze malicious activities, as well as respond to them, are becoming increasingly important. As malware such as worms, viruses, and bots can inflict significant damage on both infrastructure and end users, technologies for identifying such propagating malware are in great demand. In large-scale darknet monitoring operations, we can see that malware exhibits various kinds of scan patterns that involve choosing destination IP addresses. Since many of those oscillations seemed to have a natural periodicity, as if they were signal waveforms, we considered applying spectrum analysis to extract malware features. With a focus on such scan patterns, this paper proposes a novel concept of malware feature extraction and a distinct analysis method named “SPectrum Analysis for Distinction and Extraction of malware features (SPADE)”. Through several evaluations using real scan traffic, we show that SPADE has a significant advantage in recognizing the similarities and dissimilarities between the same and different types of malware.
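
    The abstract does not disclose SPADE's exact transform, so the sketch below only illustrates the underlying "scan pattern as waveform" idea: bin a malware source's packet arrival times into a rate signal and keep its strongest spectral peaks as a periodicity fingerprint. The bin width and peak count are arbitrary choices.

```python
import numpy as np

def scan_spectrum_feature(timestamps, bin_sec=1.0, n_peaks=5):
    """Convert packet-arrival times into a per-bin scan-rate signal and
    return the frequencies and powers of its dominant periodicities."""
    t = np.asarray(timestamps, dtype=float)
    edges = np.arange(t.min(), t.max() + bin_sec, bin_sec)
    rate, _ = np.histogram(t, bins=edges)
    rate = rate - rate.mean()                 # drop the DC component
    power = np.abs(np.fft.rfft(rate)) ** 2
    freqs = np.fft.rfftfreq(rate.size, d=bin_sec)
    top = np.argsort(power)[::-1][:n_peaks]
    return freqs[top], power[top]
```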

  3. Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; van Hemert, Jano; Li, Baihua

    2013-01-01

    Glaucoma is a group of eye diseases that have common traits such as high eye pressure, damage to the Optic Nerve Head, and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods of pre-diagnosis of Glaucoma include measurement of Intra-Ocular Pressure (IOP) using tonometry, pachymetry, and gonioscopy, which are performed manually by clinicians. These tests are usually followed by an Optic Nerve Head (ONH) appearance examination for the confirmed diagnosis of Glaucoma. Diagnosis requires regular monitoring, which is costly and time-consuming, and its accuracy and reliability are limited by the domain knowledge of different ophthalmologists. Therefore, automatic diagnosis of Glaucoma attracts a lot of attention. This paper surveys the state of the art in automatic extraction of anatomical features from retinal images to assist early diagnosis of Glaucoma. We have conducted a critical evaluation of the existing automatic extraction methods based on features including the Optic Cup to Disc Ratio (CDR), Retinal Nerve Fibre Layer (RNFL), Peripapillary Atrophy (PPA), Neuroretinal Rim Notching, and Vasculature Shift, which adds value to efficient feature extraction for Glaucoma diagnosis.

  4. A new method to extract stable feature points based on self-generated simulation images

    NASA Astrophysics Data System (ADS)

    Long, Fei; Zhou, Bin; Ming, Delie; Tian, Jinwen

    2015-10-01

    Recently, image processing has received a lot of attention in fields such as photogrammetry and medical image processing. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem. Feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is the need to set the thresholds manually. The main idea of this paper is to obtain stable extrema by a machine learning algorithm. First, we use the ASIFT approach, coupled with lighting changes and blur, to generate multi-view simulated images, which make up the simulated image set of the original image. Because of the way the simulated image set is generated, the affine transformation of each generated image is also known; compared with the traditional matching process, which contains the unstable RANSAC method for obtaining the affine transformation, this approach is more stable and accurate. Second, we calculate the stability value of each feature point from the image set and its affine transformations, and we compute feature properties of each point, such as DoG responses, scales, and edge point density. These two form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, based on the feature properties of each point and the weight vector obtained by training, we compute a sort value for each feature point that reflects its stability, and we sort the feature points accordingly. In conclusion, we compared our algorithm against the original SIFT detector; under different view changes, blurs, and illuminations, the experimental results show that our algorithm is more efficient.

  5. DBSCAN-based ROI extracted from SAR images and the discrimination of multi-feature ROI

    NASA Astrophysics Data System (ADS)

    He, Xin Yi; Zhao, Bo; Tan, Shu Run; Zhou, Xiao Yang; Jiang, Zhong Jin; Cui, Tie Jun

    2009-10-01

    The purpose of this paper is to extract regions of interest (ROIs) from coarse-detected synthetic aperture radar (SAR) images and to discriminate whether an ROI contains a target or not, so as to eliminate false alarms and prepare for target recognition. Automatic target clustering is one of the most difficult tasks in a SAR-image automatic target recognition system. Density-based spatial clustering of applications with noise (DBSCAN) relies on a density-based notion of clusters and is designed to discover clusters of arbitrary shape. Here, DBSCAN is applied to SAR image processing for the first time; it has many attractive properties: only two, relatively insensitive, parameters are needed (the radius of the neighborhood and the minimum number of points); clusters of arbitrary shape, which fit the coarse-detected SAR images, can be discovered; and calculation time and memory can be reduced. In the multi-feature ROI discrimination scheme, we extract several target features, covering geometry features such as the area discriminator and the Radon-transform-based target profile discriminator, distribution characteristics such as the EFF discriminator, and the EM scattering property via the PPR discriminator. The synthesized judgment effectively eliminates false alarms.
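
    A minimal sketch of the clustering step using scikit-learn's DBSCAN follows (an illustration, not the authors' code); `eps` and `min_samples` correspond to the two parameters named above (radius of neighborhood and minimum number of points), and the input is assumed to be a binary coarse-detection map.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_detections(binary_map, eps=3.0, min_samples=10):
    """Group coarse-detection pixels into candidate ROIs with DBSCAN;
    label -1 marks isolated (noise) pixels, which are discarded."""
    rows, cols = np.nonzero(binary_map)
    coords = np.column_stack([rows, cols])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(coords)
    rois = []
    for k in set(labels) - {-1}:
        pts = coords[labels == k]
        (r0, c0), (r1, c1) = pts.min(axis=0), pts.max(axis=0)
        rois.append((r0, c0, r1, c1))         # bounding box of one ROI
    return rois
```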

  6. A Novel Approach Based on Data Redundancy for Feature Extraction of EEG Signals.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Kamel, Nidal; Hussain, Muhammad

    2016-03-01

    Feature extraction and classification of electroencephalogram (EEG) signals in medical applications is a challenging task. EEG signals produce a huge amount of redundant or repeating information. This redundancy causes potential hurdles in EEG analysis. Hence, we propose to use this redundant information of the EEG as a feature to discriminate and classify different EEG datasets. In this study, we have proposed a JPEG2000-based approach for computing data redundancy from multi-channel EEG signals and have used the redundancy as a feature for classification of EEG signals by applying support vector machine, multi-layer perceptron, and k-nearest neighbor classifiers. The approach is validated on three EEG datasets and achieved a high accuracy rate (95-99 %) in the classification. Dataset-1 includes EEG signals recorded during a fluid intelligence test, dataset-2 consists of EEG signals recorded during a memory recall test, and dataset-3 contains epileptic seizure and non-seizure EEG. The findings demonstrate that the approach has the ability to extract robust features and classify EEG signals in various applications, including clinical as well as normal EEG patterns.
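
    The sketch below illustrates the redundancy-as-feature idea only: it substitutes zlib, a generic lossless coder, for the JPEG2000 codec used in the paper, and the 8-bit quantization is an assumption. A highly rhythmic (redundant) segment compresses far better than noise, which is what makes the ratio discriminative.

```python
import zlib
import numpy as np

def redundancy_feature(eeg, quant_bits=8):
    """Estimate redundancy of a multi-channel EEG segment as
    1 - compressed_size / raw_size (zlib stands in for JPEG2000)."""
    x = np.asarray(eeg, dtype=float)          # shape: (channels, samples)
    lo, hi = x.min(), x.max()
    q = ((x - lo) / (hi - lo) * (2 ** quant_bits - 1)).astype(np.uint8)
    raw = q.tobytes()
    return 1.0 - len(zlib.compress(raw, 9)) / len(raw)

t = np.linspace(0, 4, 1024)
rhythm = np.tile(np.sin(2 * np.pi * 10 * t), (8, 1))   # redundant segment
noise = np.random.randn(8, 1024)                        # incompressible one
print(redundancy_feature(rhythm), redundancy_feature(noise))
```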

  7. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    NASA Astrophysics Data System (ADS)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical-lithography-based manufacturing process. Although lithography-simulation-based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine-learning-based methods have been proposed. However, it is difficult to achieve highly accurate detection without an increase in false alarms because an appropriate layout feature has not been defined. This paper proposes a new method to automatically extract a proper layout feature from a given layout to improve the detection performance of machine-learning-based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural networks.

  8. Extracting features buried within high density atom probe point cloud data through simplicial homology.

    PubMed

    Srinivasan, Srikant; Kaluskar, Kaustubh; Broderick, Scott; Rajan, Krishna

    2015-12-01

    Feature extraction from Atom Probe Tomography (APT) data is usually performed by repeatedly delineating iso-concentration surfaces of a chemical component of the sample material at different values of the concentration threshold, until the user visually determines a satisfactory result in line with prior knowledge. However, this approach allows important features buried within the sample to be visually obscured by the high density and volume (~10^7 atoms) of APT data. This work provides a data-driven methodology to objectively determine the appropriate concentration threshold for classifying different phases, such as precipitates, by mapping the topology of the APT data set using a concept from algebraic topology termed persistent simplicial homology. A case study of Sc precipitates in an Al-Mg-Sc alloy is presented, demonstrating the power of this technique to capture features, such as the precise demarcation of Sc clusters and Al segregation at the cluster boundaries, not easily available by routine visual adjustment.

  9. Automated Feature Extraction in Brain Tumor by Magnetic Resonance Imaging Using Gaussian Mixture Models

    PubMed Central

    Chaddad, Ahmad

    2015-01-01

    This paper presents a novel method for Glioblastoma (GBM) feature extraction based on Gaussian mixture model (GMM) features using MRI. We addressed the task of using the new features to identify GBM in T1- and T2-weighted images (T1-WI, T2-WI) and Fluid-Attenuated Inversion Recovery (FLAIR) MR images. A pathologic area was detected using multithresholding segmentation with morphological operations on the MR images. Multiclassifier techniques were considered to evaluate the performance of the feature-based scheme in terms of its capability to discriminate GBM from normal tissue. GMM features demonstrated the best performance in a comparative study against principal component analysis (PCA) and wavelet-based features. For T1-WI, the accuracy was 97.05% (AUC = 92.73%) with 0.00% missed detections and 2.95% false alarms. For T2-WI, the same accuracy (97.05%, AUC = 91.70%) was achieved with 2.95% missed detections and 0.00% false alarms. In FLAIR mode, the accuracy decreased to 94.11% (AUC = 95.85%) with 0.00% missed detections and 5.89% false alarms. These experimental results are promising for characterizing the heterogeneity of GBM and hence for its early treatment. PMID:26136774
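
    The abstract does not specify which GMM quantities are fed to the classifiers; as one natural reading, the sketch below fits a mixture to the intensities of the segmented pathologic area with scikit-learn and concatenates the sorted weights, means, and variances into a feature vector. The component count is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_features(roi_intensities, n_components=3):
    """Fit a 1-D Gaussian mixture to ROI voxel intensities and return its
    parameters (weights, means, variances), sorted by mean for stability."""
    x = np.asarray(roi_intensities, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)
    order = np.argsort(gmm.means_.ravel())
    return np.hstack([gmm.weights_[order],
                      gmm.means_.ravel()[order],
                      gmm.covariances_.ravel()[order]])
```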

  10. Adaptive homochromous disturbance elimination and feature selection based mean-shift vehicle tracking method

    NASA Astrophysics Data System (ADS)

    Ding, Jie; Lei, Bo; Hong, Pu; Wang, Chensheng

    2011-11-01

    This paper introduces a novel method to adaptively diminish the effects of disturbance in traffic video shot from an airborne camera. Based on the motion vector of the tracked vehicle, a search area in the next frame is predicted; this is the area of interest (AOI) for the mean-shift method. Background color estimation is performed from the previous tracking and is used to judge whether there is possible disturbance in the predicted search area of the next frame. Without disturbance, the difference image of vehicle and background can be used as the input features to the mean-shift algorithm; with disturbance, the histogram of colors in the predicted area is calculated to find the most and second-most disturbing colors. Experiments showed that this method can diminish or eliminate the effects of homochromous disturbance and leads to more precise and more robust tracking.

  11. Non-linear feature extraction from HRV signal for mortality prediction of ICU cardiovascular patient.

    PubMed

    Karimi Moridani, Mohammad; Setarehdan, Seyed Kamaledin; Motie Nasrabadi, Ali; Hajinasrollah, Esmaeil

    2016-01-01

    Intensive care unit (ICU) patients are at risk of in-ICU morbidity and mortality, making specific systems for identifying at-risk patients a necessity for improving clinical care. This study presents a new method for predicting in-hospital mortality using heart rate variability (HRV) collected during a patient's ICU stay. In this paper, an HRV time-series processing method is proposed for mortality prediction of ICU cardiovascular patients. HRV signals were obtained by measuring R-R time intervals. A novel representation, named the return map, is then developed that reveals useful information in the HRV time series. This study also proposes several features that can be extracted from the return map, including the angle between two vectors, the area of triangles formed by successive points, the shortest distance to the 45° line, and their various combinations. Finally, a thresholding technique is proposed to extract the risk period and to predict mortality. The data used to evaluate the proposed algorithm were obtained from 80 cardiovascular ICU patients, covering the first 48 h of the first ICU stay of 40 males and 40 females. This study showed that the angle feature has on average a sensitivity of 87.5% (with 12 false alarms), the area feature 89.58% (with 10 false alarms), the shortest-distance feature 85.42% (with 14 false alarms) and, finally, the combined feature 92.71% (with seven false alarms). The results showed that the last half hour before the patient's death is very informative for diagnosing the patient's condition and saving his/her life. These results confirm that it is possible to predict mortality based on the features introduced in this paper, relying on the variations of the HRV dynamic characteristics.
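
    The return-map features named above follow directly from their geometric definitions; the sketch below computes them from a plain array of R-R intervals, with the final averaging into three scalars an assumption made for illustration.

```python
import numpy as np

def return_map_features(rr):
    """Features of the HRV return map (RR_n vs. RR_{n+1}): mean angle
    between successive vectors, mean area of triangles formed by three
    successive points, and mean distance to the 45-degree identity line."""
    rr = np.asarray(rr, dtype=float)
    p = np.column_stack([rr[:-1], rr[1:]])    # points of the return map
    v = np.diff(p, axis=0)                    # vectors between successive points
    dot = (v[:-1] * v[1:]).sum(axis=1)
    norm = np.linalg.norm(v[:-1], axis=1) * np.linalg.norm(v[1:], axis=1)
    angle = np.arccos(np.clip(dot / np.maximum(norm, 1e-12), -1.0, 1.0))
    # Triangle area from the 2-D cross product of successive vectors.
    area = 0.5 * np.abs(v[:-1, 0] * v[1:, 1] - v[:-1, 1] * v[1:, 0])
    dist45 = np.abs(p[:, 0] - p[:, 1]) / np.sqrt(2)
    return angle.mean(), area.mean(), dist45.mean()
```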

  12. Speech recognition in reverberant and noisy environments employing multiple feature extractors and i-vector speaker adaptation

    NASA Astrophysics Data System (ADS)

    Alam, Md Jahangir; Gupta, Vishwa; Kenny, Patrick; Dumouchel, Pierre

    2015-12-01

    The REVERB challenge provides a common framework for the evaluation of feature extraction techniques in the presence of both reverberation and additive background noise. State-of-the-art speech recognition systems perform well in controlled environments, but their performance degrades in realistic acoustical conditions, especially in real as well as simulated reverberant environments. In this contribution, we utilize multiple feature extractors, including the conventional mel-filterbank, multi-taper spectrum estimation-based mel-filterbank, robust mel and compressive gammachirp filterbank, iterative deconvolution-based dereverberated mel-filterbank, and maximum likelihood inverse filtering-based dereverberated mel-frequency cepstral coefficient features, for speech recognition with multi-condition training data. In order to improve speech recognition performance, we combine their results using ROVER (Recognizer Output Voting Error Reduction). For the two- and eight-channel tasks, to benefit from the multi-channel data, we also use ROVER, instead of a multi-microphone signal processing method, to reduce the word error rate by selecting the best scoring word at each channel. As in previous work, we also apply i-vector-based speaker adaptation, which was found effective. In a speech recognition task, speaker adaptation tries to reduce the mismatch between the training and test speakers. Speech recognition experiments are conducted on the REVERB challenge 2014 corpora using the Kaldi recognizer. In our experiments, we use both utterance-based batch processing and full batch processing. In the single-channel task, full batch processing reduced the word error rate (WER) from 10.0 to 9.3 % on SimData compared to utterance-based batch processing. Using full batch processing, we obtained an average WER of 9.0 and 23.4 % on the SimData and RealData, respectively, for the two-channel task, whereas for the eight-channel task on the SimData and RealData, the average WERs found were 8

  13. Temporal features of postural adaptation strategy to prolonged and repeatable balance perturbation.

    PubMed

    Schmid, Micaela; Sozzi, Stefania

    2016-08-15

    The aim of this study was to gain insight into the features of the postural adaptation process occurring during a continuous 3-min, 0.6 Hz horizontal sinusoidal oscillation of the body support base. We hypothesized an ongoing temporal organization of the balancing strategy that gradually becomes fine-tuned and more coordinated with the platform movement. The trial was divided into oscillation cycles, and for each cycle leg muscle activity and the temporal relationship between the Centre of Mass and the Centre of Pressure A-P positions were analyzed. The results of each cycle were grouped into windows of 10 successive cycles (time windows of 16.6 s). Muscle activity was initially prominent and diminished progressively. The major burst of Tibialis Anterior (TA) muscle activity always occurred at the same instant of the platform oscillation cycle, in advance of the platform's posterior turning point. This burst produced a body forward rotation that was delayed throughout the task. During prolonged and repeatable balance perturbation, an ongoing postural adaptation process occurs. When the effects of the perturbation become predictable, the CNS scales the level of muscle activity to counteract the destabilizing effects of the perturbations. Furthermore, the CNS tunes the kinematic and kinetic responses optimally by slightly delaying the onset of the body forward rotation while keeping the time-pattern of postural muscle activation unchanged. PMID:27291456

  15. Extraction of spatial features in hyperspectral images based on the analysis of differential attribute profiles

    NASA Astrophysics Data System (ADS)

    Falco, Nicola; Benediktsson, Jon A.; Bruzzone, Lorenzo

    2013-10-01

    The new generation of hyperspectral sensors can provide images with high spectral and spatial resolution. Recent improvements in mathematical morphology have produced techniques such as Attribute Profiles (APs) and Extended Attribute Profiles (EAPs) that can effectively model the spatial information in remote sensing images. The main drawbacks of these techniques are the selection of the optimal range of values for the family of criteria adopted at each filter step, and the high dimensionality of the profiles, which results in a very large number of features and thereby provokes the Hughes phenomenon. In this work, we focus on the dimensionality issue, which leads to a high intrinsic information redundancy, and propose a novel strategy for extracting spatial information from hyperspectral images based on the analysis of Differential Attribute Profiles (DAPs). A DAP is generated by computing the derivative of the AP; it shows at each level the residual between two adjacent levels of the AP. By analyzing the multilevel behavior of the DAP, it is possible to extract geometrical features corresponding to the structures within the scene at different scales. Our proposed approach consists of two steps: 1) a homogeneity measurement is used to identify the level L at which a given pixel belongs to a region with a physical meaning; 2) the geometrical information of the extracted regions is fused into a single map according to the level L previously identified. The process is repeated for different attributes, building a reduced EAP whose dimensionality is much lower than that of the original EAP. Experiments carried out on the hyperspectral data set of the Pavia University area show the effectiveness of the proposed method in extracting spatial features related to the physical structures present in the scene, achieving higher classification accuracy than the results reported in the state-of-the-art literature.

  16. Fault feature extraction and enhancement of rolling element bearing in varying speed condition

    NASA Astrophysics Data System (ADS)

    Ming, A. B.; Zhang, W.; Qin, Z. Y.; Chu, F. L.

    2016-08-01

    In engineering applications, the variability of the load usually varies the shaft speed, which degrades the efficacy of diagnostic methods based on the hypothesis of constant-speed analysis. Therefore, investigating diagnostic methods suited to varying speed conditions is significant for bearing fault diagnosis. In this paper, a novel fault feature extraction and enhancement procedure is proposed that combines iterative envelope analysis with a low-pass filtering operation. First, based on an analytical model of the collected vibration signal, the envelope signal is theoretically calculated and the iterative envelope analysis is improved for the varying speed condition. Then, a feature enhancement procedure is performed by applying a low-pass filter to the temporal envelope obtained by the iterative envelope analysis. Finally, the temporal envelope signal is transformed to the angular domain by computed order tracking, and the fault feature is extracted from the squared envelope spectrum. Simulations and experiments were used to validate the efficacy of the theoretical analysis and the proposed procedure. It is shown that the computed order tracking method should be applied to the envelope of the signal in order to avoid energy spreading and amplitude distortion. Compared with feature enhancement performed by the fast kurtogram and the corresponding optimal band-pass filtering, the proposed method can efficiently extract the fault character under varying speed conditions with less amplitude attenuation. Furthermore, since it does not involve center frequency estimation, the proposed method is more concise for engineering applications.

  17. Sparse representation based on local time-frequency template matching for bearing transient fault feature extraction

    NASA Astrophysics Data System (ADS)

    He, Qingbo; Ding, Xiaoxi

    2016-05-01

    The transients caused by the localized fault are important measurement information for bearing fault diagnosis. Thus it is crucial to extract the transients from the bearing vibration or acoustic signals that are always corrupted by a large amount of background noise. In this paper, an iterative transient feature extraction approach is proposed based on time-frequency (TF) domain sparse representation. The approach is realized by presenting a new method, called local TF template matching. In this method, the TF atoms are constructed based on the TF distribution (TFD) of the Morlet wavelet bases and local TF templates are formulated from the TF atoms for the matching process. The instantaneous frequency (IF) ridge calculated from the TFD of an analyzed signal provides the frequency parameter values for the TF atoms as well as an effective template matching path on the TF plane. In each iteration, local TF templates are employed to do correlation with the TFD of the analyzed signal along the IF ridge tube for identifying the optimum parameters of transient wavelet model. With this iterative procedure, transients can be extracted in the TF domain from measured signals one by one. The final signal can be synthesized by combining the extracted TF atoms and the phase of the raw signal. The local TF template matching builds an effective TF matching-based sparse representation approach with the merit of satisfying the native pulse waveform structure of transients. The effectiveness of the proposed method is verified by practical defective bearing signals. Comparison results also show that the proposed method is superior to traditional methods in transient feature extraction.

  18. Time series analysis and feature extraction techniques for structural health monitoring applications

    NASA Astrophysics Data System (ADS)

    Overbey, Lucas A.

    Recently, advances in sensing and sensing methodologies have led to the deployment of multiple sensor arrays on structures for structural health monitoring (SHM) applications. Appropriate feature extraction, detection, and classification methods based on measurements obtained from these sensor networks are vital to the SHM paradigm. This dissertation focuses on a multi-input/multi-output approach to novel data processing procedures to produce detailed information about the integrity of a structure in near real-time. The studies employ nonlinear time series analysis techniques to extract three different types of features for damage diagnostics: namely, nonlinear prediction error, transfer entropy, and the generalized interdependence. These features form reliable measures of generalized correlations between multiple measurements to capture aspects of the dynamics related to the presence of damage. Several analyses are conducted on each of these features. Specifically, variations of nonlinear prediction error are introduced, analyzed, and validated, including the use of a stochastic excitation to augment generality, introduction of local state-space models for sensitivity enhancement, and the employment of comparisons between multiple measurements for localization capability. A modification and enhancement to transfer entropy is created and validated for improved sensitivity. In addition, a thorough analysis of the effects of variability to transfer entropy estimation is made. The generalized interdependence is introduced into the literature and validated as an effective measure of damage presence, extent, and location. These features are validated on a multi-degree-of-freedom dynamic oscillator and several different frame experiments. The evaluated features are then fed into four different classification schemes to obtain a concurrent set of outputs that categorize the integrity of the structure, e.g. the presence, extent, location, and type of damage, taking

  19. Image copy-move forgery detection based on sped-up robust features descriptor and adaptive minimal-maximal suppression

    NASA Astrophysics Data System (ADS)

    Yang, Bin; Sun, Xingming; Xin, Xiangyang; Hu, Weifeng; Wu, Youxin

    2015-11-01

    Region duplication is a simple and effective operation for creating digital image forgeries, where a continuous portion of pixels in an image is copied and pasted to a different location in the same image. Many prior copy-move forgery detection methods suffer from an inability to detect a duplicated region that has been subjected to various geometric transformations. A keypoint-based approach is proposed to detect copy-move forgery in an image. Our method starts by extracting keypoints with a fast Hessian detector. Then an adaptive minimal-maximal suppression (AMMS) strategy is developed for distributing the keypoints evenly throughout the image. By using AMMS and a sped-up robust feature descriptor, the proposed method is able to deal with the problem of insufficient keypoints in almost uniform areas. Finally, the geometric transformation performed in cloning is recovered using maximum likelihood estimation of the homography. Experimental results show the efficacy of this technique in detecting copy-move forgeries and estimating the geometric transformation parameters. Compared with the state of the art, our approach obtains a higher true positive rate and a lower false positive rate.

  20. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  1. An improved interactive segmentation method for extracting the edge features of femur digital radiographs

    NASA Astrophysics Data System (ADS)

    Sun, Shaobin; Zhang, Bin; Meng, Shang; Liu, Dan; Sun, Jinwei

    2012-01-01

    By comparing the advantages and disadvantages of two interactive image segmentation algorithms, level set and live wire, we propose an improved multi-step interactive image segmentation method that helps operators extract the important anatomical structure features from femur digital radiography (DR) images more accurately. First, a preprocessing step including median filtering and image enhancement is applied to eliminate the noise introduced during DR imaging. Second, exploiting the advantages of the level set method, such as simple operation and a fast convergence rate, a coarse outline contour extraction is performed. Finally, exploiting the advantages of live wire, such as repeatable local operation and high precision, fine contour extraction of special anatomic areas, the profile of the fracture edge, and overlapping areas is performed. In this way, all the anatomical structure features of interest in the DR images are obtained. In this paper, our method is applied to complete femur DR images and artificial fracture femur DR images. The segmentation results show that our method performs well in terms of accuracy and efficiency.

  2. Chemical name extraction based on automatic training data generation and rich feature set.

    PubMed

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of obtaining a sizable, good-quality dataset to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.

  3. Bispectrum feature extraction of gearbox faults based on nonnegative Tucker3 decomposition with 3D calculations

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Xu, Feiyun; Zhao, Jun'ai; Jia, Minping; Hu, Jianzhong; Huang, Peng

    2013-11-01

    Nonnegative Tucker3 decomposition (NTD) has attracted much attention for its good performance in 3D data array analysis. However, further research is still necessary to solve the problems of overfitting and slow convergence under the anharmonic vibration circumstances encountered in the field of mechanical fault diagnosis. To decompose a large-scale tensor and extract available bispectrum features, a method conjugating the Choi-Williams kernel function with the Gauss-Newton Cartesian product based on nonnegative Tucker3 decomposition (NTD_EDF) is investigated. The complexity of the proposed method is reduced from O(n^N lg n) in 3D space to O(R1R2 n lg n) on 1D vectors due to the low-rank form of the Tucker-product convolution. Meanwhile, a simultaneous updating algorithm is given to overcome the overfitting, slow convergence, and low efficiency of the conventional one-by-one updating algorithm. Furthermore, the technique of spectral phase analysis for quadratic coupling estimation is used to explain in detail the feature spectrum extracted from the gearbox fault data by the proposed method. The simulated and experimental results show that a sparser and more regular feature distribution of basis images can be obtained with the core tensor by the NTD_EDF method than by the other methods in bispectrum feature extraction, and a legible fault expression can also be produced by the power spectral density (PSD) function. Besides, the deviation of successive relative error (DSRE) of NTD_EDF reaches 81.66 dB against 15.17 dB for beta-divergence-based NTD (NTD_Beta), and the time cost of NTD_EDF is only 129.3 s, far less than the 1747.9 s of hierarchical alternating least squares based NTD (NTD_HALS). The proposed NTD_EDF method not only avoids data overfitting and improves computation efficiency, but can also be used to extract sparser and more regular bispectrum features of gearbox faults.

  4. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection

    SciTech Connect

    Park, Sang Hyun; Gao, Yaozong; Shi, Yinghuan; Shen, Dinggang

    2014-11-01

    Purpose: Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy for prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to their limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation method. Methods: The authors formulate the editing problem as a semisupervised learning problem that can utilize both a priori knowledge from training data and the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, appropriate training labels, which are well matched with the user interactions, can be locally searched from a training set. With voting from the selected training labels, confident prostate and background voxels, as well as unconfident voxels, can be estimated. To reflect the informative relationships between voxels, location-adaptive features are selected from the confident voxels by using regression forests and the Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced into the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by the regularized semisupervised learning algorithm. Results: The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images. The correction was conducted three times with different user interactions performed at different time periods, in order to

  5. An adaptively generated feature set for low-resolution multifrequency sonar images

    NASA Astrophysics Data System (ADS)

    Arrieta, Rodolfo; Arrieta, Lisa L.; Stack, Jason R.

    2006-05-01

    Many small Unmanned Underwater Vehicles (UUVs) currently utilize inexpensive, low-resolution sonars, either mechanically or electronically steered, as their main sensors. These sonars do not provide high quality images and are quite dissimilar from the broad area search sonars that will most likely be the source of the localization data given to the UUV in a reacquisition scenario. Therefore, the acoustic data returned by the UUV in its attempt to reacquire the target will look quite different from the original wide area image. The problem then becomes how to determine that the UUV is looking at the same object. Our approach is to exploit the maneuverability of the UUV and currently unused information in the echoes returned from these Commercial-Off-The-Shelf (COTS) sonars in order to classify a presumptive target as an object of interest. The approach hinges on the ability of the UUV to maneuver around the target in order to insonify it at different frequencies, ranges, and aspects. We show how this approach allows the UUV to extract a feature set derived from the inversion of simple physics-based models. These models predict echo time-of-arrival, and inversion of these models using the echo data allows effective classification based on estimated surface and bulk material properties. We have simulated UUV maneuvers by positioning targets at different ranges and aspects to the sonar and have then interrogated the target at different frequencies. The properties that have been extracted include the longitudinal and shear speeds of the bulk, as well as the longitudinal speed, Rayleigh speed, and density of the surface. The material properties we have extracted using this approach match the tabulated material values within 8%. We also show that only a few material properties are required to effectively segregate many classes of materials.

  6. Uranium extremophily is an adaptive, rather than intrinsic, feature for extremely thermoacidophilic Metallosphaera species

    PubMed Central

    Mukherjee, Arpan; Wheaton, Garrett H.; Blum, Paul H.; Kelly, Robert M.

    2012-01-01

    Thermoacidophilic archaea are found in heavy metal-rich environments, and, in some cases, these microorganisms are causative agents of metal mobilization through cellular processes related to their bioenergetics. Given the nature of their habitats, these microorganisms must deal with the potentially toxic effect of heavy metals. Here, we show that two thermoacidophilic Metallosphaera species with nearly identical (99.99%) genomes differed significantly in their sensitivity and reactivity to uranium (U). Metallosphaera prunae, isolated from a smoldering heap on a uranium mine in Thüringen, Germany, could be viewed as a “spontaneous mutant” of Metallosphaera sedula, an isolate from Pisciarelli Solfatara near Naples. Metallosphaera prunae tolerated triuranium octaoxide (U3O8) and soluble uranium [U(VI)] to a much greater extent than M. sedula. Within 15 min following exposure to “U(VI) shock,” M. sedula, and not M. prunae, exhibited transcriptomic features associated with severe stress response. Furthermore, within 15 min post-U(VI) shock, M. prunae, and not M. sedula, showed evidence of substantial degradation of cellular RNA, suggesting that transcriptional and translational processes were aborted as a dynamic mechanism for resisting U toxicity; by 60 min post-U(VI) shock, RNA integrity in M. prunae recovered, and known modes for heavy metal resistance were activated. In addition, M. sedula rapidly oxidized solid U3O8 to soluble U(VI) for bioenergetic purposes, a chemolithoautotrophic feature not previously reported. M. prunae, however, did not solubilize solid U3O8 to any significant extent, thereby not exacerbating U(VI) toxicity. These results point to uranium extremophily as an adaptive, rather than intrinsic, feature for Metallosphaera species, driven by environmental factors. PMID:23010932

  7. The tactile speed aftereffect depends on the speed of adapting motion across the skin rather than other spatiotemporal features.

    PubMed

    McIntyre, Sarah; Seizova-Cajic, Tatjana; Holcombe, Alex O

    2016-03-01

    After prolonged exposure to a surface moving across the skin, this felt movement appears slower, a phenomenon known as the tactile speed aftereffect (tSAE). We asked which feature of the adapting motion drives the tSAE: speed, the spacing between texture elements, or the frequency with which they cross the skin. After adapting to a ridged moving surface with one hand, participants compared the speed of test stimuli on adapted and unadapted hands. We used surfaces with different spatial periods (SPs; 3, 6, 12 mm) that produced adapting motion with different combinations of adapting speed (20, 40, 80 mm/s) and temporal frequency (TF; 3.4, 6.7, 13.4 ridges/s). The primary determinant of tSAE magnitude was speed of the adapting motion, not SP or TF. This suggests that adaptation occurs centrally, after speed has been computed from SP and TF, and/or that it reflects a speed cue independent of those features in the first place (e.g., indentation force). In a second experiment, we investigated the properties of the neural code for speed. Speed tuning predicts that adaptation should be greatest for speeds at or near the adapting speed. However, the tSAE was always stronger when the adapting stimulus was faster (242 mm/s) than the test (30-143 mm/s) compared with when the adapting and test speeds were matched. These results give no indication of speed tuning and instead suggest that adaptation occurs at a level where an intensive code dominates. In an intensive code, the faster the stimulus, the more the neurons fire. PMID:26631149

  8. Lumbar Ultrasound Image Feature Extraction and Classification with Support Vector Machine.

    PubMed

    Yu, Shuang; Tan, Kok Kiong; Sng, Ban Leong; Li, Shengjin; Sia, Alex Tiong Heng

    2015-10-01

    Needle entry site localization remains a challenge for procedures that involve lumbar puncture, for example, epidural anesthesia. To solve the problem, we have developed an image classification algorithm that can automatically identify the bone/interspinous region in ultrasound images obtained from the lumbar spine of pregnant patients in the transverse plane. The proposed algorithm consists of feature extraction, feature selection, and machine learning procedures. A set of features, including matching values, positions, and the appearance of black pixels within pre-defined windows along the midline, were extracted from the ultrasound images using template matching and midline detection methods. A support vector machine was then used to classify the bone images and interspinous images. The support vector machine model was trained with 1,040 images from 26 pregnant subjects and tested on 800 images from a separate set of 20 pregnant patients. A success rate of 95.0% on the training set and 93.2% on the test set was achieved with the proposed method. The trained support vector machine model was further tested on 46 off-line collected videos and successfully identified the proper needle insertion site (interspinous region) in 45 of the cases. Therefore, the proposed method is able to process ultrasound images of the lumbar spine in an automatic manner, so as to facilitate the anesthetists' work of identifying the needle entry site.

  9. Wood Texture Features Extraction by Using GLCM Combined With Various Edge Detection Methods

    NASA Astrophysics Data System (ADS)

    Fahrurozi, A.; Madenda, S.; Ernastuti; Kerami, D.

    2016-06-01

    An image with a specific texture can be distinguished manually by eye. However, this is sometimes difficult to do when the textures are quite similar. Wood is a natural material that forms a unique texture, and experts can distinguish the quality of wood based on the texture observed in certain parts of it. In this study, texture features have been extracted from wood images so that the characteristics of wood can be identified digitally by computer. Feature extraction is carried out using Gray Level Co-occurrence Matrices (GLCM) built on images produced by several edge detection methods applied to the wood images. The edge detection methods used include Roberts, Sobel, Prewitt, Canny, and Laplacian of Gaussian. The wood images were taken in the LE2i laboratory, Université de Bourgogne, from wood samples in France that were grouped by experts according to quality and divided into four quality classes. The result is a statistical summary that illustrates the distribution of texture feature values for each wood type, compared according to the edge operator used and the selection of the specified GLCM parameters.
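
    A hedged sketch of the edge-then-GLCM pipeline with scikit-image follows; Sobel stands in for the five edge operators compared in the study, and the distances, angles, and 16-level quantization are illustrative choices.

```python
import numpy as np
from skimage import filters, feature

def edge_glcm_features(gray, levels=16):
    """Run an edge detector, quantize the edge-magnitude image, and read
    off Haralick-style GLCM statistics (as in the GLCM-on-edges scheme)."""
    edges = filters.sobel(gray)               # Roberts/Prewitt/Canny also fit here
    q = (edges / edges.max() * (levels - 1)).astype(np.uint8)
    glcm = feature.graycomatrix(q, distances=[1, 2],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([feature.graycoprops(glcm, p).ravel() for p in props])
```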

  10. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
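
    The block/DCT/zigzag pipeline described above is concrete enough to sketch; below is an illustration (not the authors' code) using SciPy's DCT-II, where the 8x8 block size, the number of retained coefficients, and the `palmprint_features` helper name are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def zigzag_indices(n):
    """(row, col) pairs of an n x n block in JPEG-style zigzag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def palmprint_features(roi, block=8, n_coeffs=10):
    """Split the palm ROI into blocks, apply the 2-D DCT-II to each, and
    keep the first low/medium-frequency zigzag-scanned coefficients."""
    h = (roi.shape[0] // block) * block
    w = (roi.shape[1] // block) * block
    order = zigzag_indices(block)[:n_coeffs]
    feats = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            b = roi[r:r + block, c:c + block].astype(float)
            d = dct(dct(b, norm="ortho", axis=0), norm="ortho", axis=1)
            feats.extend(d[i, j] for i, j in order)
    return np.asarray(feats)
```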

  11. Using GNG to improve 3D feature extraction--application to 6DoF egomotion.

    PubMed

    Viejo, Diego; Garcia, Jose; Cazorla, Miguel; Gil, David; Johnsson, Magnus

    2012-08-01

    Several recent works deal with 3D data in mobile robotic problems, e.g. mapping or egomotion. The data come from various kinds of sensors, such as stereo vision systems, time-of-flight cameras, or 3D lasers, providing a huge amount of unorganized 3D data. In this paper, we describe an efficient method to build complete 3D models using a Growing Neural Gas (GNG). The GNG is applied to the raw 3D data and reduces both the underlying error and the number of points while keeping the topology of the 3D data. The GNG output is then used in a 3D feature extraction method. We have performed a thorough study in which we quantitatively show that the use of GNG improves the 3D feature extraction method. We also show that our method can be applied to any kind of 3D data. The extracted 3D features are used as input to an Iterative Closest Point (ICP)-like method to compute the 6DoF movement performed by a mobile robot. A comparison with standard ICP is performed, showing that the use of GNG improves the results. Final results of 3D mapping from the calculated egomotion are also shown. PMID:22386789

  12. Comparative assessment of feature extraction methods for visual odometry in wireless capsule endoscopy.

    PubMed

    Spyrou, Evaggelos; Iakovidis, Dimitris K; Niafas, Stavros; Koulaouzidis, Anastasios

    2015-10-01

    Wireless capsule endoscopy (WCE) enables the non-invasive examination of the gastrointestinal (GI) tract by a swallowable device equipped with a miniature camera. Accurate localization of the capsule in the GI tract enables accurate localization of abnormalities for medical interventions such as biopsy and polyp resection; therefore, the optimization of the localization outcome is important. Current approaches to endoscopic capsule localization are mainly based on external sensors and transit time estimations. Recently, we demonstrated the feasibility of capsule localization based entirely on visual features, without the use of external sensors. This technique relies on a motion estimation algorithm that enables measurement of the distance and rotation of the capsule from the acquired video frames. Towards the determination of an optimal visual feature extraction technique for capsule motion estimation, an extensive comparative assessment of several state-of-the-art techniques, using a publicly available dataset, is presented. The results show that minimization of the localization error is possible at the cost of computational efficiency. A localization error approximately one order of magnitude higher than the minimal one can be considered a compromise for the use of current computationally efficient feature extraction techniques. PMID:26073184

  13. Feature extraction and classification of clouds in high resolution panchromatic satellite imagery

    NASA Astrophysics Data System (ADS)

    Sharghi, Elan

    The development of sophisticated remote sensing sensors is rapidly increasing, and the vast amount of satellite imagery collected is too much to be analyzed manually by a human image analyst. It has become necessary to develop a tool to automate the job of an image analyst. This tool would need to intelligently detect and classify objects of interest through computer vision algorithms. Existing software called the Rapid Image Exploitation Resource (RAPIER®) was designed by engineers at Space and Naval Warfare Systems Center Pacific (SSC PAC) to perform exactly this function. This software automatically searches for anomalies in the ocean and reports the detections as possible ship objects. However, if the image contains a high percentage of cloud coverage, a high number of false positives are triggered by the clouds. The focus of this thesis is to explore various feature extraction and classification methods to accurately distinguish clouds from ship objects. A texture analysis method, line detection using the Hough transform, and edge detection using wavelets are explored as possible feature extraction methods. The features are then supplied to a K-Nearest Neighbors (KNN) or Support Vector Machine (SVM) classifier. Parameter options for these classifiers are explored and the optimal parameters are determined.

  14. Weak fault feature extraction of rolling bearing based on cyclic Wiener filter and envelope spectrum

    NASA Astrophysics Data System (ADS)

    Ming, Yang; Chen, Jin; Dong, Guangming

    2011-07-01

    In vibration analysis, weak fault feature extraction under strong background noise is of great importance. A method based on the cyclic Wiener filter and envelope spectrum analysis is proposed. The cyclic Wiener filter exploits the spectral coherence theory induced by second-order cyclostationary signals: the original signal is duplicated and shifted in the frequency domain by amounts corresponding to the cyclic frequencies, and the noise component is optimally filtered by a filter bank. The filtered signal is then analyzed via its envelope spectrum, in which the characteristic frequencies are quite clear, so the most impulsive part can be effectively extracted for further fault diagnosis. The effectiveness of the method is demonstrated on both a simulated signal and actual data from a rolling bearing accelerated life test.
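
    The envelope-spectrum stage is standard and easy to sketch; the code below (an illustration, with the cyclic Wiener filtering assumed already done) demodulates a simulated repetitive-impact signal via the Hilbert transform. The 90 Hz fault rate, 5 kHz resonance, and noise level are arbitrary simulation choices.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Envelope spectrum of a (pre-filtered) vibration signal: the fault
    characteristic frequency appears as a clear peak."""
    env = np.abs(hilbert(x))                  # amplitude envelope
    env = env - env.mean()                    # suppress the DC line
    spec = np.abs(np.fft.rfft(env)) / x.size
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs, spec

# A 5 kHz resonance excited by a 2 ms burst every 1/90 s, buried in noise.
fs = 20_000
t = np.arange(0, 2.0, 1 / fs)
x = (np.sin(2 * np.pi * 5000 * t) * (np.mod(t, 1 / 90) < 0.002)
     + 0.5 * np.random.randn(t.size))
freqs, spec = envelope_spectrum(x, fs)
mask = freqs > 10                             # skip the near-DC region
print(freqs[mask][np.argmax(spec[mask])])     # expected: ~90 Hz
```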

  15. Automated identification and geometrical features extraction of individual trees from Mobile Laser Scanning data in Budapest

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Folly-Ritvay, Zoltán; Skobrák, Ferenc; Koenig, Kristina; Höfle, Bernhard

    2016-04-01

    Mobile Laser Scanning (MLS) is an evolving operational measurement technique for urban environments, providing large amounts of high-resolution information about trees, street features, and pole-like objects on street sides or near motorways. In this study we investigate a robust segmentation method to extract individual trees automatically in order to build an object-based tree database system. We focused on the large urban parks in Budapest (Margitsziget and Városliget; KARESZ project), which contained a large diversity of tree species. The MLS data comprised high-density point clouds with 1-8 cm mean absolute accuracy at 80-100 m distance from the streets. The robust segmentation method consists of the following steps: the ground points are determined first. As a second step, cylinders are fitted in a vertical slice 1-1.5 m above ground, which is used to determine the potential location of each single tree trunk and cylinder-like object. Finally, residual values are calculated as the deviation of each point from a vertically expanded fitted cylinder; these residual values are used to separate cylinder-like objects from individual trees. After successful parameterization, the model parameters and the corresponding residual values of the fitted objects are extracted and imported into the tree database. Additionally, geometric features are calculated for each segmented individual tree, such as crown base, crown width, crown length, trunk diameter, and volume of the individual tree. In the case of incompletely scanned trees, the extraction of geometric features is based on fitted circles. The result of the study is a tree database containing detailed information about urban trees, which can be a valuable dataset for ecologists, city planners, and planting and mapping purposes. Furthermore, the established database will be the starting point for classifying trees into single species. MLS data used in this project had been measured in the framework of

  16. LiDAR DTMs and anthropogenic feature extraction: testing the feasibility of geomorphometric parameters in floodplains

    NASA Astrophysics Data System (ADS)

    Sofia, G.; Tarolli, P.; Dalla Fontana, G.

    2012-04-01

    Geomorphic parameters derived from high-resolution topography have proven reliable for practical applications, and the use of statistical operators as thresholds for these parameters has shown high reliability for feature extraction in mountainous environments. The goal of this research is to test whether these morphological indicators and objective thresholds are also feasible in floodplains, where features assume different characteristics and other artificial disturbances might be present. In this work, three different geomorphic parameters are tested and applied at different scales on a LiDAR DTM of a typical alluvial plain in the North East of Italy. The box-plot is applied to identify the threshold for feature extraction, and a filtering procedure is proposed to improve the quality of the final results. The effectiveness of the different geomorphic parameters is analyzed by comparing automatically derived features with surveyed ones. The results highlight the capability of high-resolution topography, geomorphic indicators and statistical thresholds for anthropogenic feature extraction and characterization in a floodplain context.
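
    The box-plot thresholding step admits a very small sketch: the upper whisker (Q3 + 1.5 x IQR) of a geomorphic parameter's distribution flags candidate feature cells. The curvature raster below is a synthetic placeholder.

```python
# Box-plot (IQR) threshold applied to a geomorphic parameter raster to
# flag candidate anthropogenic feature cells.
import numpy as np

curvature = np.random.randn(500, 500) * 0.05    # placeholder DTM curvature
q1, q3 = np.percentile(curvature, [25, 75])
threshold = q3 + 1.5 * (q3 - q1)                # box-plot upper whisker
feature_mask = curvature > threshold            # candidate feature cells
print(f"threshold={threshold:.3f}, flagged={feature_mask.mean():.1%} of cells")
```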

  17. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    The gearbox is one of the most vulnerable subsystems in wind turbines. Its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are widely applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structure geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove non-synchronous noise and enhance fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying different gear faults.
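
    The LLE step is available off the shelf; the sketch below reduces synthetic time-frequency feature vectors to a low-dimensional embedding with scikit-learn. The dimensions, neighbor count, and the two synthetic classes are assumptions for illustration.

```python
# Locally Linear Embedding of (synthetic) time-frequency features:
# the embedding becomes the input to a downstream fault classifier.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, (100, 256))   # 100 samples x 256 features
faulty = rng.normal(0.5, 1.0, (100, 256))    # shifted class (synthetic fault)
X = np.vstack([healthy, faulty])

lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
Z = lle.fit_transform(X)                     # low-dimensional features
print(Z.shape)                               # (200, 3)
```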

  18. The research on recognition and extraction of river feature in IKONOS based on frequency domain

    NASA Astrophysics Data System (ADS)

    Wang, Ke; Feng, Xuezhi; Xiao, Pengfeng; Wu, Guoping

    2009-10-01

    As the resolution of remotely sensed imagery becomes higher, new methods are needed to process high-resolution images. The algorithms introduced in this paper recognize and extract river features based on the frequency domain. A Gabor filter in the frequency domain is used to enhance the texture of the river and remove noise from the remotely sensed imagery. Then, according to the theory of phase congruency, the phase congruency (PC) of every point is computed so that features such as river edges, buildings and farmland can be detected in the imagery. Lastly, a skeletal methodology is introduced to determine the edge of the river with the help of the river's trend.
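
    The texture-enhancement step can be sketched with scikit-image's Gabor filter; note that this implementation convolves in the spatial domain, whereas the paper works in the frequency domain, and the image below is random placeholder data.

```python
# Gabor filtering of a (placeholder) satellite band; the response
# magnitude acts as a texture-energy map. A filter bank would sweep
# several frequencies and orientations.
import numpy as np
from skimage.filters import gabor

image = np.random.rand(256, 256)                 # placeholder band
real, imag = gabor(image, frequency=0.1, theta=np.pi / 4)
magnitude = np.hypot(real, imag)                 # texture energy map
print(magnitude.shape, magnitude.mean())
```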

  19. A mixture of physicochemical and evolutionary-based feature extraction approaches for protein fold recognition.

    PubMed

    Dehzangi, Abdollah; Sharma, Alok; Lyons, James; Paliwal, Kuldip K; Sattar, Abdul

    2015-01-01

    Recent advancement in the pattern recognition field has stimulated enormous interest in Protein Fold Recognition (PFR). PFR is considered a crucial step towards protein structure prediction and drug design. Despite all the recent achievements, PFR remains an unsolved problem in biological science, and its prediction accuracy remains unsatisfactory. Furthermore, the impact of using a wide range of physicochemical-based attributes on PFR has not been adequately explored. In this study, we propose a novel mixture of physicochemical and evolutionary-based feature extraction methods based on the concepts of segmented distribution and density. We also explore the impact of 55 different physicochemical-based attributes on PFR. Our results show that by providing more local discriminatory information and benefiting from both physicochemical and evolutionary-based features simultaneously, we can enhance protein fold prediction accuracy by up to 5% over previously reported results in the literature.

  20. Classification of underground pipe scanned images using feature extraction and neuro-fuzzy algorithm.

    PubMed

    Sinha, S K; Karray, F

    2002-01-01

    Pipeline surface defects such as holes and cracks cause major problems for utility managers, particularly when the pipeline is buried under the ground. Manual inspection for surface defects in the pipeline has a number of drawbacks, including subjectivity, varying standards, and high costs. An automatic inspection system using image processing and artificial intelligence techniques can overcome many of these disadvantages and offer utility managers an opportunity to significantly improve quality and reduce costs. A recognition and classification approach for pipe cracks using image analysis and a neuro-fuzzy algorithm is proposed. In the preprocessing step the scanned images of the pipe are analyzed and crack features are extracted. In the classification step a neuro-fuzzy algorithm is developed that employs a fuzzy membership function and the error backpropagation algorithm. The idea behind the proposed approach is that the fuzzy membership function will absorb variation in feature values while the backpropagation network, with its learning ability, will provide good classification efficiency.
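
    As a toy illustration of the fuzzy step (not the authors' exact formulation), a Gaussian membership function maps a crack feature value to a degree of set membership, absorbing measurement variation before a neural stage:

```python
# Gaussian fuzzy membership: the degree (0..1) to which a feature value
# belongs to a fuzzy set such as "medium crack length".
import numpy as np

def gaussian_membership(x, center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

crack_length = np.array([2.1, 5.0, 9.8])     # hypothetical feature values
print(gaussian_membership(crack_length, center=5.0, sigma=2.0))
```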

  1. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of defocus. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image, which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328
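
    The WRHOG descriptor itself is specific to the paper, but its gradient-histogram backbone can be sketched with standard HOG features from scikit-image applied to a (placeholder) structure layer:

```python
# Standard HOG features over a placeholder "structure layer"; WRHOG adds
# a paper-specific weighting on top of this kind of descriptor.
import numpy as np
from skimage.feature import hog

structure_layer = np.random.rand(128, 128)   # placeholder VO structure layer
features = hog(structure_layer, orientations=9,
               pixels_per_cell=(16, 16), cells_per_block=(2, 2))
print(features.shape)                        # flat descriptor vector
```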

  2. Low-Level Tie Feature Extraction of Mobile Mapping Data (mls/images) and Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Jende, P.; Hussnain, Z.; Peter, M.; Oude Elberink, S.; Gerke, M.; Vosselman, G.

    2016-03-01

    Mobile Mapping (MM) is a technique to obtain geo-information using sensors mounted on a mobile platform or vehicle. The mobile platform's position is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, building structures can obstruct a direct line-of-sight between the GNSS receiver and navigation satellites, resulting in erroneous position estimates. Therefore, derived MM data products, such as laser point clouds or images, lack the expected positioning reliability and accuracy. This issue has been addressed by many researchers, whose efforts to mitigate these effects mainly concentrate on utilising tertiary reference data. However, current approaches do not consider errors in height, cannot achieve sub-decimetre accuracy and are often not designed to work in a fully automatic fashion. We propose an automatic pipeline to rectify MM data products by employing high resolution aerial nadir and oblique imagery as horizontal and vertical reference, respectively. By exploiting the MM platform's defective, and therefore imprecise but approximate, orientation parameters, accurate feature matching techniques can be realised as a pre-processing step to minimise the MM platform's three-dimensional positioning error. Subsequently, identified correspondences serve as constraints for an orientation update, which is conducted by an estimation or adjustment technique. Since not all MM systems employ laser scanners and imaging sensors simultaneously, and each system and dataset demands different approaches, two independent workflows are developed in parallel. Still under development, both workflows will be presented and preliminary results will be shown. The workflows comprise three steps: feature extraction, feature matching and the orientation update. In this paper, initial results of low-level image and point cloud feature extraction methods will be discussed as well as an outline of
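
    A low-level tie-feature extraction and matching step can be illustrated with ORB keypoints and brute-force Hamming matching in OpenCV; the actual pipeline's detectors, descriptors, and aerial data differ, and the images here are random placeholders.

```python
# ORB keypoint extraction on an MM frame and an aerial patch, followed by
# cross-checked brute-force matching to propose candidate tie points.
import cv2
import numpy as np

mm_img = (np.random.rand(480, 640) * 255).astype(np.uint8)   # placeholder
aerial = (np.random.rand(480, 640) * 255).astype(np.uint8)   # placeholder

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(mm_img, None)
kp2, des2 = orb.detectAndCompute(aerial, None)

if des1 is not None and des2 is not None:
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(len(matches), "candidate tie points")  # best-first correspondences
```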

  3. Blurred Palmprint Recognition Based on Stable-Feature Extraction Using a Vese–Osher Decomposition Model

    PubMed Central

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of defocus. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image, which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328

  4. Lamb wave feature extraction using discrete wavelet transformation and Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Ghodsi, Mojtaba; Ziaiefar, Hamidreza; Amiryan, Milad; Honarvar, Farhang; Hojjat, Yousef; Mahmoudi, Mehdi; Al-Yahmadi, Amur; Bahadur, Issam

    2016-04-01

    In this research, a new method is presented for extracting suitable features for recognizing and classifying types of defects using guided ultrasonic waves. After suitable preprocessing, the suggested method extracts the base frequency band from the received signals by discrete wavelet transform and discrete Fourier transform. This frequency band can be used as a distinctive feature of ultrasonic signals for different defects. Principal Component Analysis, by refining this feature and reducing redundant data, improved the classification. In this study, ultrasonic testing with the A0-mode Lamb wave is used, which is appropriate for reducing the difficulties of the problem. The defects under analysis included corrosion, cracking and local thickness reduction, the last produced by electro-discharge machining (EDM). The classification results from an optimized Neural Network show that the presented method can differentiate the defects with 95% precision and is thus a strong and efficient method. Moreover, comparing the extracted features and classification results for corrosion and local thickness reduction makes clear that modeling the corrosion process by local thickness reduction, as was previously common, is not appropriate, since the signals received from the two defects differ from each other.
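
    A compact sketch of this kind of pipeline, under assumed wavelet and dimensionality choices, extracts sub-band energies with PyWavelets and compresses them with PCA; the signals are synthetic stand-ins for Lamb-wave records.

```python
# Discrete wavelet decomposition -> sub-band energies -> PCA compression.
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
signals = rng.normal(size=(60, 1024))            # 60 records, 1024 samples

def dwt_features(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # energy per sub-band as a compact frequency-band descriptor
    return np.array([np.sum(c ** 2) for c in coeffs])

X = np.array([dwt_features(s) for s in signals])
Z = PCA(n_components=3).fit_transform(X)         # decorrelated features
print(Z.shape)                                   # (60, 3)
```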

  5. Learning object location predictors with boosting and grammar-guided feature extraction

    SciTech Connect

    Eads, Damian Ryan; Rosten, Edward; Helmbold, David

    2009-01-01

    The authors present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, they introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, they learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, they perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high quality set of (x,y) locations. Lastly, they carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. They demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.

  6. Quantitative 3-D Imaging, Segmentation and Feature Extraction of the Respiratory System in Small Mammals for Computational Biophysics Simulations

    SciTech Connect

    Trease, Lynn L.; Trease, Harold E.; Fowler, John

    2007-03-15

    One of the critical steps toward performing computational biology simulations, using mesh based integration methods, is in using topologically faithful geometry derived from experimental digital image data as the basis for generating the computational meshes. Digital image data representations contain both the topology of the geometric features and experimental field data distributions. The geometric features that need to be captured from the digital image data are three-dimensional, therefore the process and tools we have developed work with volumetric image data represented as data-cubes. This allows us to take advantage of 2D curvature information during the segmentation and feature extraction process. The process is basically: 1) segmenting to isolate and enhance the contrast of the features that we wish to extract and reconstruct, 2) extracting the geometry of the features in an isosurfacing technique, and 3) building the computational mesh using the extracted feature geometry. “Quantitative” image reconstruction and feature extraction is done for the purpose of generating computational meshes, not just for producing graphics "screen" quality images. For example, the surface geometry that we extract must represent a closed water-tight surface.
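
    Step 2 (isosurface-based geometry extraction) has a standard off-the-shelf analogue in scikit-image's marching cubes, sketched here on a synthetic spherical data-cube rather than experimental image data.

```python
# Extract a triangle mesh from a volumetric data-cube via marching cubes.
import numpy as np
from skimage.measure import marching_cubes

z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (x ** 2 + y ** 2 + z ** 2).astype(float)   # distance-like field
verts, faces, normals, values = marching_cubes(volume, level=20.0 ** 2)
print(verts.shape, faces.shape)                     # mesh vertices, triangles
```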

  7. Adaptive training of cortical feature maps for a robot sensorimotor controller.

    PubMed

    Adams, Samantha V; Wennekers, Thomas; Denham, Sue; Culverhouse, Phil F

    2013-08-01

    This work investigates self-organising cortical feature maps (SOFMs) based upon the Kohonen Self-Organising Map (SOM) but implemented with spiking neural networks. In future work, the feature maps are intended as the basis for a sensorimotor controller for an autonomous humanoid robot. Traditional SOM methods require some modifications to be useful for autonomous robotic applications. Ideally the map training process should be self-regulating and not require predefined training files or the usual SOM parameter reduction schedules. It would also be desirable if the organised map had some flexibility to accommodate new information whilst preserving previous learnt patterns. Here methods are described which have been used to develop a cortical motor map training system which goes some way towards addressing these issues. The work is presented under the general term 'Adaptive Plasticity' and the main contribution is the development of a 'plasticity resource' (PR) which is modelled as a global parameter which expresses the rate of map development and is related directly to learning on the afferent (input) connections. The PR is used to control map training in place of a traditional learning rate parameter. In conjunction with the PR, random generation of inputs from a set of exemplar patterns is used rather than predefined datasets and enables maps to be trained without deciding in advance how much data is required. An added benefit of the PR is that, unlike a traditional learning rate, it can increase as well as decrease in response to the demands of the input and so allows the map to accommodate new information when the inputs are changed during training.

  8. Geometric and topological feature extraction of linear segments from 2D cross-section data of 3D point clouds

    NASA Astrophysics Data System (ADS)

    Ramamurthy, Rajesh; Harding, Kevin; Du, Xiaoming; Lucas, Vincent; Liao, Yi; Paul, Ratnadeep; Jia, Tao

    2015-05-01

    Optical measurement techniques are often employed to digitally capture three-dimensional shapes of components. The density of the digital data output from these probes ranges from a few discrete points to point clouds exceeding millions of points. The point cloud taken as a whole represents a discretized measurement of the actual 3D shape of the surface of the component inspected, to the measurement resolution of the sensor. Embedded within the measurement are the various features of the part that make up its overall shape. Part designers are often interested in the feature information since those features relate directly to part function and to the analytical models used to develop the part design. Furthermore, tolerances are added to these dimensional features, making their extraction a requirement for the manufacturing quality plan of the product. The task of "extracting" these design features from the point cloud is a post-processing task. Due to measurement repeatability and cycle-time requirements, automated feature extraction from measurement data is often required. The presence of non-ideal features such as high-frequency optical noise and surface roughness can significantly complicate this feature extraction process. This research describes a robust process for extracting linear and arc segments from general 2D point clouds, to a prescribed tolerance. The feature extraction process generates the topology, specifically the number of linear and arc segments, and the geometry equations of the linear and arc segments automatically from the input 2D point clouds. This general feature extraction methodology has been employed as an integral part of the automated post-processing algorithms for 3D data of fine features.
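
    The tolerance-driven segment extraction can be sketched as fitting a candidate line segment and accepting it only if every point's deviation stays within the prescribed tolerance; the points and tolerance below are invented for illustration.

```python
# Tolerance-checked line-segment fit: accept the segment only when the
# worst-case residual is within the prescribed tolerance.
import numpy as np

def fit_line_within_tol(pts, tol):
    """Fit y = m*x + c; return (m, c) if all residuals are within tol."""
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])
    (m, c), *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)
    resid = np.abs(pts[:, 1] - (m * pts[:, 0] + c))
    return (m, c) if resid.max() <= tol else None

x = np.linspace(0, 10, 50)
noisy = np.column_stack([x, 0.5 * x + 1 + 0.01 * np.random.randn(50)])
print(fit_line_within_tol(noisy, tol=0.05))
```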

  9. Multiple feature extraction and classification of electroencephalograph signal for Alzheimer's with spectrum and bispectrum

    NASA Astrophysics Data System (ADS)

    Wang, Ruofan; Wang, Jiang; Li, Shunan; Yu, Haitao; Deng, Bin; Wei, Xile

    2015-01-01

    In this paper, we have combined experimental neurophysiologic recording and statistical analysis to investigate the nonlinear characteristic and the cognitive function of the brain. Spectrum and bispectrum analyses are proposed to extract multiple effective features of electroencephalograph (EEG) signals from Alzheimer's disease (AD) patients and further applied to distinguish AD patients from the normal controls. Spectral analysis based on autoregressive Burg method is first used to quantify the power distribution of EEG series in the frequency domain. Compared to the control group, the relative power spectral density of AD group is significantly higher in the theta frequency band, while lower in the alpha frequency bands. In addition, median frequency of spectrum is decreased, and spectral entropy ratio of these two frequency bands undergoes drastic changes at the P3 electrode in the central-parietal brain region, implying that the electrophysiological behavior in AD brain is much slower and less irregular. In order to explore the nonlinear high order information, bispectral analysis which measures the complexity of phase-coupling is further applied to P3 electrode in the whole frequency band. It is demonstrated that less bispectral peaks appear and the amplitudes of peaks fall, suggesting a decrease of non-Gaussianity and nonlinearity of EEG in ADs. Notably, the application of this method to five brain regions shows higher concentration of the weighted center of bispectrum and lower complexity reflecting phase-coupling by bispectral entropy. Based on spectrum and bispectrum analyses, six efficient features are extracted and then applied to discriminate AD from the normal in the five brain regions. The classification results indicate that all these features could differentiate AD patients from the normal controls with a maximum accuracy of 90.2%. Particularly, different brain regions are sensitive to different features. Moreover, the optimal combination of
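
    The relative band power computation described can be sketched as follows; Welch's method stands in for the Burg autoregressive spectrum used in the paper, and the EEG series, sampling rate, and band edges are assumptions.

```python
# Relative power in theta (4-8 Hz) and alpha (8-13 Hz) bands of an EEG
# series, using Welch's PSD estimate as a stand-in for a Burg AR spectrum.
import numpy as np
from scipy.signal import welch

fs = 256                                  # assumed sampling rate (Hz)
eeg = np.random.randn(30 * fs)            # placeholder 30 s EEG series
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def rel_power(lo, hi):
    band = (freqs >= lo) & (freqs < hi)
    total = (freqs >= 0.5) & (freqs < 45)
    return psd[band].sum() / psd[total].sum()

print("theta:", rel_power(4, 8), "alpha:", rel_power(8, 13))
```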

  10. Research of fetal ECG extraction using wavelet analysis and adaptive filtering.

    PubMed

    Wu, Shuicai; Shen, Yanni; Zhou, Zhuhuang; Lin, Lan; Zeng, Yanjun; Gao, Xiaofeng

    2013-10-01

    Extracting clean fetal electrocardiogram (ECG) signals is very important in fetal monitoring. In this paper, we proposed a new method for fetal ECG extraction based on wavelet analysis, the least mean square (LMS) adaptive filtering algorithm, and the spatially selective noise filtration (SSNF) algorithm. First, abdominal signals and thoracic signals were processed by stationary wavelet transform (SWT), and the wavelet coefficients at each scale were obtained. For each scale, the detail coefficients were processed by the LMS algorithm. The coefficient of the abdominal signal was taken as the original input of the LMS adaptive filtering system, and the coefficient of the thoracic signal as the reference input. Then, correlations of the processed wavelet coefficients were computed. The threshold was set and noise components were removed with the SSNF algorithm. Finally, the processed wavelet coefficients were reconstructed by inverse SWT to obtain fetal ECG. Twenty cases of simulated data and 12 cases of clinical data were used. Experimental results showed that the proposed method outperforms the LMS algorithm: (1) it shows improvement in case of superposition R-peaks of fetal ECG and maternal ECG; (2) noise disturbance is eliminated by incorporating the SSNF algorithm and the extracted waveform is more stable; and (3) the performance is proven quantitatively by SNR calculation. The results indicated that the proposed algorithm can be used for extracting fetal ECG from abdominal signals.
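
    The core LMS cancellation step admits a short sketch: the thoracic (maternal) signal serves as the reference input and the abdominal signal as the primary input, so the error signal approximates the fetal component. Tap count, step size, and signals are illustrative assumptions; the paper applies LMS to wavelet coefficients rather than raw signals.

```python
# LMS adaptive filter: cancel the maternal component from the abdominal
# signal using the thoracic signal as reference; e = d - y is the output.
import numpy as np

def lms(d, x, n_taps=8, mu=0.01):
    w = np.zeros(n_taps)
    e = np.zeros(len(d))
    for n in range(n_taps, len(d)):
        xn = x[n - n_taps:n][::-1]      # most recent reference samples
        y = w @ xn                      # maternal estimate
        e[n] = d[n] - y                 # residual ~ fetal component
        w += 2 * mu * e[n] * xn         # weight update
    return e

t = np.arange(0, 10, 1 / 250)
maternal = np.sin(2 * np.pi * 1.2 * t)              # ~72 bpm
fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)           # ~138 bpm
abdominal = maternal + fetal + 0.05 * np.random.randn(t.size)
fetal_est = lms(abdominal, maternal)
print(np.corrcoef(fetal_est[2000:], fetal[2000:])[0, 1])
```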

  11. Urban Area Extent Extraction in Spaceborne HR and VHR Data Using Multi-Resolution Features

    PubMed Central

    Iannelli, Gianni Cristian; Lisini, Gianni; Dell'Acqua, Fabio; Feitosa, Raul Queiroz; da Costa, Gilson Alexandre Ostwald Pedro; Gamba, Paolo

    2014-01-01

    Detection of urban area extents by means of remotely sensed data is a difficult task, especially because of the multiple, diverse definitions of what an “urban area” is. The models of urban areas listed in the technical literature are based on the combination of spectral information with spatial patterns, possibly at different spatial resolutions. Starting from the same data set, “urban area” extraction may thus lead to multiple outputs. If this is done in a well-structured framework, however, this may be considered an advantage rather than an issue. This paper proposes a novel framework for urban area extent extraction from multispectral Earth Observation (EO) data. The key is to compute and combine spectral and multi-scale spatial features. By selecting the most adequate features, and combining them with proper logical rules, the approach allows matching multiple urban area models. Experimental results for different locations in Brazil and Kenya using High-Resolution (HR) data prove the usefulness and flexibility of the framework. PMID:25271564

  12. Fault feature extraction of rolling bearing based on an improved cyclical spectrum density method

    NASA Astrophysics Data System (ADS)

    Li, Min; Yang, Jianhong; Wang, Xiaojing

    2015-11-01

    The traditional cyclical spectrum density (CSD) method is widely used to analyze the fault signals of rolling bearing. All modulation frequencies are demodulated in the cyclic frequency spectrum. Consequently, recognizing bearing fault type is difficult. Therefore, a new CSD method based on kurtosis (CSDK) is proposed. The kurtosis value of each cyclic frequency is used to measure the modulation capability of cyclic frequency. When the kurtosis value is large, the modulation capability is strong. Thus, the kurtosis value is regarded as the weight coefficient to accumulate all cyclic frequencies to extract fault features. Compared with the traditional method, CSDK can reduce the interference of harmonic frequency in fault frequency, which makes fault characteristics distinct from background noise. To validate the effectiveness of the method, experiments are performed on the simulation signal, the fault signal of the bearing outer race in the test bed, and the signal gathered from the bearing of the blast furnace belt cylinder. Experimental results show that the CSDK is better than the resonance demodulation method and the CSD in extracting fault features and recognizing degradation trends. The proposed method provides a new solution to fault diagnosis in bearings.
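
    The kurtosis-weighted accumulation idea can be sketched generically: weight each demodulated cyclic-frequency slice by its kurtosis so impulsive slices dominate. The slice matrix below is synthetic, not a real cyclic spectral density.

```python
# Kurtosis-weighted accumulation: impulsive (high-kurtosis) slices
# contribute most to the aggregated fault feature.
import numpy as np
from scipy.stats import kurtosis

slices = np.random.randn(50, 1024)            # placeholder demodulated slices
slices[7] += np.where(np.random.rand(1024) > 0.99, 8.0, 0.0)  # impulsive row

weights = np.maximum(kurtosis(slices, axis=1), 0)  # kurtosis per slice
feature = weights @ slices / weights.sum()         # weighted accumulation
print("dominant slice:", weights.argmax())         # expect index 7
```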

  13. A comparison of feature extraction methods for Sentinel-1 images: Gabor and Weber transforms

    NASA Astrophysics Data System (ADS)

    Stan, Mihaela; Popescu, Anca; Stoichescu, Dan Alexandru

    2015-10-01

    The purpose of this paper is to compare the performance of two feature extraction methods when applied to high resolution Synthetic Aperture Radar (SAR) images acquired with the new ESA mission SENTINEL-1 (S-1). The feature extraction methods were previously tested on high and very high resolution SAR data (imaged by TerraSAR-X) and had good performance in discriminating between a relevant number of land cover classes (tens of classes). Based on the available spatial resolution (10x10 m) of S-1 Interferometric Wide (IW) Ground Range Detected (GRD) images, the number of detectable classes is much lower. Moreover, the overall heterogeneity of the images is much lower compared to the high resolution data and the number of observable details is smaller, which favors the choice of a smaller window size for the analysis: between 10 and 50 pixels in range and azimuth. The size of the analysis window ensures consistency with previous results reported in the literature for very high resolution data (as the size on the ground is comparable and thus the number of contributing objects in the window is similar). The performance of Gabor filters and the Weber Local Descriptor (WLD) was investigated in a twofold approach: first the descriptors were computed directly over the IW GRD images, and secondly on a sub-sampled version of the same data (in order to determine the effect of the speckle correlation on the overall class detection probability).

  14. Feature extraction and classification for ultrasound images of lumbar spine with support vector machine.

    PubMed

    Yu, Shuang; Tan, Kok Kiong; Sng, Ban Leong; Li, Shengjin; Sia, Alex Tiong Heng

    2014-01-01

    In this paper, we propose a feature extraction and machine learning method for the classification of ultrasound images obtained from the lumbar spine of pregnant patients in the transverse plane. A group of features, including matching values and positions and the appearance of black pixels within predefined windows along the midline, is extracted from the ultrasound images using template matching and midline detection. A support vector machine (SVM) with Gaussian kernel is utilized to classify the bone images and interspinous images with an optimal separating hyperplane. The SVM is trained with 800 images from 20 pregnant subjects and tested with 640 images from a separate set of 16 pregnant patients. A high success rate (97.25% on the training set and 95.00% on the test set) is achieved with the proposed method. The trained SVM model is further tested on 36 videos collected from 36 pregnant subjects and successfully identified the proper needle insertion site (interspinous region) in all cases. Therefore, the proposed method is able to identify ultrasound images of the lumbar spine in an automatic manner, facilitating the anesthetists' work of identifying the needle insertion point precisely and effectively.
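
    The template-matching feature (match value and position) can be sketched with OpenCV's normalized cross-correlation; the frame and template are placeholders rather than real lumbar ultrasound data.

```python
# Normalized cross-correlation template matching: the peak value and its
# location become classifier features.
import cv2
import numpy as np

frame = (np.random.rand(300, 400) * 255).astype(np.uint8)   # placeholder
template = frame[100:140, 180:220].copy()                   # known patch

res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)
print("match value:", round(float(max_val), 3), "at", max_loc)
```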

  15. Feature selection for the identification of antitumor compounds in the alcohol total extracts of Curcuma longa.

    PubMed

    Jiang, Jian-Lan; Li, Zi-Dan; Zhang, Huan; Li, Yan; Zhang, Xiao-Hang; Yuan, Yi-fu; Yuan, Ying-jin

    2014-08-01

    Antitumor activity has been reported for turmeric, the dried rhizome of Curcuma longa. This study proposes a new feature selection method for the identification of the antitumor compounds in turmeric total extracts. The chemical composition of turmeric total extracts was analyzed by gas chromatography-mass spectrometry (21 ingredients) and high-performance liquid chromatography-mass spectrometry (22 ingredients), and their cytotoxicity was assessed through a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay against HeLa cells. A support vector machine for regression and a generalized regression neural network were used to model the composition-activity relationship and were later combined with the mean impact value to identify the antitumor compounds. The results showed that six volatile constituents (three terpenes and three ketones) and seven nonvolatile constituents (five curcuminoids and two unknown ingredients) with high absolute mean impact values exhibited a significant correlation with the cytotoxicity against HeLa cells. With the exception of the two unknown ingredients, the 11 identified constituents have been reported to exhibit cytotoxicity. This finding indicates that the feature selection method may be a supplementary tool for the identification of active compounds from herbs.

  16. Feature extraction and object recognition in multi-modal forward looking imagery

    NASA Astrophysics Data System (ADS)

    Greenwood, G.; Blakely, S.; Schartman, D.; Calhoun, B.; Keller, J. M.; Ton, T.; Wong, D.; Soumekh, M.

    2010-04-01

    The U. S. Army Night Vision and Electronic Sensors Directorate (NVESD) recently tested an explosive-hazards detection vehicle that combines a pulsed FLGPR with a visible-spectrum color camera. Additionally, NVESD tested a human-in-the-loop multi-camera system with the same goal in mind. It contains wide field-of-view color and infrared cameras as well as zoomable narrow field-of-view versions of those modalities. Even though they are separate vehicles, having information from both systems offers great potential for information fusion. Based on previous work at the University of Missouri, we are not only able to register the UTM-based positions of the FLGPR to the color image sequences on the first system, but we can register these locations to corresponding image frames of all sensors on the human-in-the-loop platform. This paper presents our approach to first generate libraries of multi-sensor information across these platforms. Subsequently, research is performed in feature extraction and recognition algorithms based on the multi-sensor signatures. Our goal is to tailor specific algorithms to recognize and eliminate different categories of clutter and to be able to identify particular explosive hazards. We demonstrate our library creation, feature extraction and object recognition results on a large data collection at a US Army test site.

  17. Nonlinear and nonstationary framework for feature extraction and classification of motor imagery.

    PubMed

    Trad, Dalila; Al-ani, Tarik; Monacelli, Eric; Jemni, Mohamed

    2011-01-01

    In this work we investigate a nonlinear approach for feature extraction of Electroencephalogram (EEG) signals in order to classify motor imagery for a Brain Computer Interface (BCI). This approach is based on Empirical Mode Decomposition (EMD) and band power (BP). The EMD method is a data-driven technique for analyzing non-stationary and nonlinear signals. It generates a set of stationary time series called Intrinsic Mode Functions (IMFs) to represent the original data. These IMFs are analyzed with the power spectral density (PSD) to study the active frequency range corresponding to motor imagery for each subject. Then, the band power is computed within a certain frequency range in the channels. Finally, the data are reconstructed with only the specific IMFs, and the band power is then computed on the new dataset. The classification of motor imagery was performed using two classifiers, Linear Discriminant Analysis (LDA) and Hidden Markov Models (HMMs). The results obtained show that the EMD method allows the most reliable features to be extracted from EEG and that the classification rate obtained is higher than with the direct BP approach alone.
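
    A sketch of the EMD-plus-band-power idea using the PyEMD package (distributed as EMD-signal on PyPI) follows; the channel, IMF selection, and mu-band edges are assumptions for illustration.

```python
# EMD decomposition of a synthetic EEG channel, reconstruction from a
# subset of IMFs, then band power in the mu band (8-12 Hz).
import numpy as np
from PyEMD import EMD

fs = 250
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

imfs = EMD().emd(eeg)                      # intrinsic mode functions
recon = imfs[:3].sum(axis=0)               # assumed informative IMFs
spec = np.abs(np.fft.rfft(recon)) ** 2
freqs = np.fft.rfftfreq(recon.size, 1 / fs)
mu_power = spec[(freqs >= 8) & (freqs <= 12)].sum()
print(imfs.shape, mu_power)
```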

  18. A Feature Extraction Method for Vibration Signal of Bearing Incipient Degradation

    NASA Astrophysics Data System (ADS)

    Huang, Haifeng; Ouyang, Huajiang; Gao, Hongli; Guo, Liang; Li, Dan; Wen, Juan

    2016-06-01

    Detection of incipient degradation demands extracting sensitive features accurately when the signal-to-noise ratio (SNR) is very poor, as it is in most industrial environments. Vibration signals of rolling bearings are widely used for bearing fault diagnosis. In this paper, we propose a feature extraction method that combines Blind Source Separation (BSS) and Spectral Kurtosis (SK) to separate independent noise sources. Normal and incipient fault signals from vibration tests of rolling bearings are processed. We studied 16 groups of vibration signals of incipient degradation (which all display an increase in kurtosis) after they were processed by a BSS filter. Compared with conventional kurtosis, theoretical studies show that SK levels vary with frequency, and experimental studies show that SK trends of measured bearing vibration signals vary with the amount and level of impulses, in both vibration and noise signals, due to bearing faults. It is found that the peak values of SK increase when vibration signals of incipient faults are processed by a BSS filter. This pre-processing by a BSS filter makes SK more sensitive to impulses caused by performance degradation of bearings.
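
    Spectral kurtosis itself has a compact textbook form: the kurtosis of the STFT magnitude over time, per frequency bin. The sketch below applies it to a synthetic impulsive signal; the window length and rates are assumptions.

```python
# Spectral kurtosis via STFT: kurtosis of |STFT| across time for each
# frequency bin highlights bands carrying repetitive impulses.
import numpy as np
from scipy.signal import stft
from scipy.stats import kurtosis

fs = 20000
t = np.arange(0, 1, 1 / fs)
impulses = (np.random.rand(t.size) > 0.999).astype(float)
x = np.convolve(impulses, np.hanning(64), mode="same") \
    * np.sin(2 * np.pi * 4000 * t)            # impulses exciting a resonance
x += 0.1 * np.random.randn(t.size)

f, _, Z = stft(x, fs=fs, nperseg=256)
sk = kurtosis(np.abs(Z), axis=1)              # SK per frequency bin
print("most impulsive band near", f[sk.argmax()], "Hz")
```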

  19. A wavelet transform based feature extraction and classification of cardiac disorder.

    PubMed

    Sumathi, S; Beaulah, H Lilly; Vanithamani, R

    2014-09-01

    This paper presents an intelligent diagnosis system using a hybrid approach based on an Adaptive Neuro-Fuzzy Inference System (ANFIS) model for the classification of Electrocardiogram (ECG) signals. The method uses the Symlet Wavelet Transform to analyze the ECG signals and extract the parameters related to dangerous cardiac arrhythmias. These parameters were used as input to the ANFIS classifier for the five most important types of ECG signals: Normal Sinus Rhythm (NSR), Atrial Fibrillation (AF), Pre-Ventricular Contraction (PVC), Ventricular Fibrillation (VF), and Ventricular Flutter (VFLU) Myocardial Ischemia. The inclusion of ANFIS in complex investigative algorithms yields very interesting recognition and classification capabilities across a broad spectrum of biomedical engineering. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracy. The results indicate that the proposed ANFIS model shows a potential advantage in classifying ECG signals, achieving a classification accuracy of 98.24%. PMID:25023652
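
    The wavelet front end can be sketched with a Symlet decomposition in PyWavelets, taking a simple per-band statistic as candidate classifier input; the ECG trace, wavelet order, and statistic are assumptions.

```python
# Symlet wavelet decomposition of a synthetic ECG trace; per-band
# standard deviations serve as illustrative classifier features.
import numpy as np
import pywt

ecg = np.sin(np.linspace(0, 8 * np.pi, 720)) + 0.05 * np.random.randn(720)
coeffs = pywt.wavedec(ecg, "sym8", level=4)
features = [float(np.std(c)) for c in coeffs]   # one descriptor per band
print(features)
```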

  20. Computation identifies structural features that govern neuronal firing properties in slowly adapting touch receptors

    PubMed Central

    Lesniak, Daine R; Marshall, Kara L; Wellnitz, Scott A; Jenkins, Blair A; Baba, Yoshichika; Rasband, Matthew N; Gerling, Gregory J; Lumpkin, Ellen A

    2014-01-01

    Touch is encoded by cutaneous sensory neurons with diverse morphologies and physiological outputs. How neuronal architecture influences response properties is unknown. To elucidate the origin of firing patterns in branched mechanoreceptors, we combined neuroanatomy, electrophysiology and computation to analyze mouse slowly adapting type I (SAI) afferents. These vertebrate touch receptors, which innervate Merkel cells, encode shape and texture. SAI afferents displayed a high degree of variability in touch-evoked firing and peripheral anatomy. The functional consequence of differences in anatomical architecture was tested by constructing network models representing sequential steps of mechanosensory encoding: skin displacement at touch receptors, mechanotransduction and action-potential initiation. A systematic survey of arbor configurations predicted that the arrangement of mechanotransduction sites at heminodes is a key structural feature that accounts in part for an afferent’s firing properties. These findings identify an anatomical correlate and plausible mechanism to explain the driver effect first described by Adrian and Zotterman. DOI: http://dx.doi.org/10.7554/eLife.01488.001 PMID:24448409

  1. Adaptive Morphological Feature-Based Object Classifier for a Color Imaging System

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2009-01-01

    Utilizing a Compact Color Microscope Imaging System (CCMIS), a unique algorithm has been developed that combines human intelligence with machine vision techniques to produce an autonomous microscope tool for biomedical, industrial, and space applications. This technique is based on an adaptive, morphological, feature-based mapping function comprising 24 mutually inclusive feature metrics used to characterize complex cells/objects derived from color image analysis. Some of the features include: Area (total number of non-background pixels inside and including the perimeter); Bounding Box (smallest rectangle that bounds an object); centerX and centerY (coordinates of the intensity-weighted center of mass of an entire object or multi-object blob); Circumference (a measure of circumference that accounts for diagonally joined neighboring pixels being farther apart than horizontally or vertically joined ones); Elongation (a measure of particle elongation given as a number between 0 and 1; if equal to 1, the particle bounding box is square, and as the value decreases from 1 the particle becomes more elongated); Ext_vector (extremal vector); Major Axis and Minor Axis (lengths of the major and minor axes of the smallest ellipse encompassing an object); Partial (indicates whether the particle extends beyond the field of view); Perimeter Points (points that make up a particle perimeter); Roundness (4π × area / perimeter², a measure of object roundness, or compactness, between 0 and 1, where a greater ratio means a rounder object); Thin in center (determines whether an object becomes thin in the center, i.e., figure-eight-shaped); Theta (orientation of the major axis); and Smoothness and color metrics for each component (red, green, blue
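
    Several of these metrics are one-liners over a labeled mask; the sketch below computes area, bounding box, centroid, and the 4π·area/perimeter² roundness with scikit-image on a toy object.

```python
# A few of the listed object metrics computed from a labeled binary mask.
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:30, 10:30] = 1                      # a square "cell"
props = regionprops(label(mask))[0]

roundness = 4 * np.pi * props.area / props.perimeter ** 2
print(props.area, props.bbox, props.centroid, round(roundness, 3))
```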

  2. Improving the accuracy of feature extraction for flexible endoscope calibration by spatial super resolution.

    PubMed

    Rupp, Stephan; Elter, Matthias; Winter, Christian

    2007-01-01

    Many applications in the domain of medical as well as industrial image processing make considerable use of flexible endoscopes, so-called fiberscopes, to gain visual access to holes, hollows, antrums and cavities that are difficult to enter and examine. For a complete exploration and understanding of an antrum, 3d depth information might be desirable or even necessary. This often requires the mapping of 3d world coordinates to 2d image coordinates, which is estimated by camera calibration. In order to retrieve useful results, the precise extraction of the imaged calibration pattern's markers plays a decisive role in the camera calibration process. Unfortunately, when utilizing fiberscopes, the image conductor introduces a disturbing comb structure to the images that prevents a precise marker extraction. Since the calibration quality crucially depends on subpixel-precise calibration marker positions, we apply static comb structure removal algorithms along with a dynamic spatial resolution enhancement method in order to improve the feature extraction accuracy. In our experiments, we demonstrate that our approach results in a more accurate calibration of flexible endoscopes and thus allows for a more precise reconstruction of 3d information from fiberoptic images. PMID:18003530

  3. Detailed Hydrographic Feature Extraction from High-Resolution LiDAR Data

    SciTech Connect

    Danny L. Anderson

    2012-05-01

    Detailed hydrographic feature extraction from high-resolution light detection and ranging (LiDAR) data is investigated. Methods for quantitatively evaluating and comparing such extractions are presented, including the use of sinuosity and longitudinal root-mean-square-error (LRMSE). These metrics are then used to quantitatively compare stream networks in two studies. The first study examines the effect of raster cell size on watershed boundaries and stream networks delineated from LiDAR-derived digital elevation models (DEMs). The study confirmed that, with the greatly increased resolution of LiDAR data, smaller cell sizes generally yielded better stream network delineations, based on sinuosity and LRMSE. The second study demonstrates a new method of delineating a stream directly from LiDAR point clouds, without the intermediate step of deriving a DEM. Direct use of LiDAR point clouds could improve the efficiency and accuracy of hydrographic feature extractions. The direct delineation method developed herein, termed “mDn”, is an extension of the D8 method that has been used for several decades with gridded raster data. The method divides the region around a starting point into sectors, using the LiDAR data points within each sector to determine an average slope, and selecting the sector with the greatest downward slope to determine the direction of flow. An mDn delineation was compared with a traditional grid-based delineation, using TauDEM, and other readily available, common stream data sets. Although the TauDEM delineation yielded a sinuosity that more closely matches the reference, the mDn delineation yielded a sinuosity that was higher than either the TauDEM method or the existing published stream delineations. Furthermore, stream delineation using the mDn method yielded the smallest LRMSE.
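
    Reading the description literally, the sector step of mDn can be sketched as follows; the sector count, search radius, and point cloud are invented, and this is an illustrative reconstruction of the stated idea, not the published implementation.

```python
# Sector-based steepest-descent direction from raw LiDAR points: split the
# neighborhood into sectors, take the mean downward slope per sector, and
# step toward the steepest one.
import numpy as np

def mdn_direction(pt, cloud, n_sectors=8, radius=5.0):
    """pt: (x, y, z); cloud: (N, 3) points. Returns flow azimuth (rad)."""
    d = cloud[:, :2] - pt[:2]
    dist = np.hypot(d[:, 0], d[:, 1])
    near = (dist > 0) & (dist < radius)
    az = np.arctan2(d[near, 1], d[near, 0]) % (2 * np.pi)
    slope = (cloud[near, 2] - pt[2]) / dist[near]      # dz per unit distance
    sector = (az / (2 * np.pi / n_sectors)).astype(int)
    means = [slope[sector == s].mean() if np.any(sector == s) else np.inf
             for s in range(n_sectors)]
    s_best = int(np.argmin(means))                     # steepest descent
    return (s_best + 0.5) * 2 * np.pi / n_sectors

cloud = np.random.rand(2000, 3) * [50, 50, 5]          # synthetic points
print(mdn_direction(np.array([25.0, 25.0, 4.0]), cloud))
```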

  4. Antepartum fetal heart rate feature extraction and classification using empirical mode decomposition and support vector machine

    PubMed Central

    2011-01-01

    Background Cardiotocography (CTG) is the most widely used tool for fetal surveillance. The visual analysis of fetal heart rate (FHR) traces largely depends on the expertise and experience of the clinician involved. Several approaches have been proposed for the effective interpretation of FHR. In this paper, a new approach for FHR feature extraction based on empirical mode decomposition (EMD) is proposed, which was used along with a support vector machine (SVM) for the classification of FHR recordings as 'normal' or 'at risk'. Methods The FHR was recorded from 15 subjects at a sampling rate of 4 Hz, and a dataset consisting of 90 randomly selected records of 20 minutes' duration was formed from these. All records were labelled as 'normal' or 'at risk' by two experienced obstetricians. A training set was formed from 60 records, with the remaining 30 left as the testing set. The standard deviations of the EMD components are input as features to a support vector machine (SVM) to classify FHR samples. Results For the training set, a five-fold cross validation test resulted in an accuracy of 86%, whereas the overall geometric mean of sensitivity and specificity was 94.8%. The Kappa value for the training set was .923. Application of the proposed method to the testing set (30 records) resulted in a geometric mean of 81.5%. The Kappa value for the testing set was .684. Conclusions Based on the overall performance of the system, it can be stated that the proposed methodology is a promising new approach for the feature extraction and classification of FHR signals. PMID:21244712

  5. Protein sequences classification by means of feature extraction with substitution matrices

    PubMed Central

    2010-01-01

    Background This paper deals with the preprocessing of protein sequences for supervised classification. Motif extraction is one way to address that task. It has been largely used to encode biological sequences into feature vectors to enable the use of well-known machine-learning classifiers which require this format. However, designing a suitable feature space for a set of proteins is not a trivial task. For this purpose, we propose a novel encoding method that uses amino-acid substitution matrices to define similarity between motifs during the extraction step. Results In order to demonstrate the efficiency of such an approach, we compare several encoding methods using some machine learning classifiers. The experimental results showed that our encoding method outperforms other ones in terms of classification accuracy and number of generated attributes. We also compared the classifiers in terms of accuracy. Results indicated that SVM generally outperforms the other classifiers with any encoding method. We showed that SVM, coupled with our encoding method, can be an efficient protein classification system. In addition, we studied the effect of substitution matrix variation on the quality of our method and hence on the classification quality. We noticed that our method enables good classification accuracies with all the substitution matrices and that the variances of the obtained accuracies using various substitution matrices are slight. However, the number of generated features varies from one substitution matrix to another. Furthermore, the use of already published datasets allowed us to carry out a comparison with several related works. Conclusions The outcomes of our comparative experiments confirm the efficiency of our encoding method for representing protein sequences in classification tasks. PMID:20377887

  6. Affective Video Retrieval: Violence Detection in Hollywood Movies by Large-Scale Segmental Feature Extraction

    PubMed Central

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt, general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend itself to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology “out of the lab” to real-world, diverse data. In this contribution, we address the problem of finding “disturbing” scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis. PMID:24391704

  7. Entropy-based adaptive nuclear texture features are independent prognostic markers in a total population of uterine sarcomas.

    PubMed

    Nielsen, Birgitte; Hveem, Tarjei Sveinsgjerd; Kildal, Wanja; Abeler, Vera M; Kristensen, Gunnar B; Albregtsen, Fritz; Danielsen, Håvard E

    2015-04-01

    Nuclear texture analysis measures the spatial arrangement of the pixel gray levels in a digitized microscopic nuclear image and is a promising quantitative tool for prognosis of cancer. The aim of this study was to evaluate the prognostic value of entropy-based adaptive nuclear texture features in a total population of 354 uterine sarcomas. Isolated nuclei (monolayers) were prepared from 50 µm tissue sections and stained with Feulgen-Schiff. Local gray level entropy was measured within small windows of each nuclear image and stored in gray level entropy matrices, and two superior adaptive texture features were calculated from each matrix. The 5-year crude survival was significantly higher (P < 0.001) for patients with high texture feature values (72%) than for patients with low feature values (36%). When combining DNA ploidy classification (diploid/nondiploid) and texture (high/low feature value), the patients could be stratified into three risk groups with 5-year crude survival of 77, 57, and 34% (Hazard Ratios (HR) of 1, 2.3, and 4.1, P < 0.001). Entropy-based adaptive nuclear texture was an independent prognostic marker for crude survival in multivariate analysis including relevant clinicopathological features (HR = 2.1, P = 0.001), and should therefore be considered as a potential prognostic marker in uterine sarcomas.
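
    Local gray-level entropy, the quantity underlying these features, is available directly in scikit-image; the sketch computes it in a small circular window over a placeholder nuclear image (the adaptive matrix features of the paper are a further step beyond this).

```python
# Local gray-level entropy within a small window of each pixel.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

nucleus = (np.random.rand(100, 100) * 255).astype(np.uint8)  # placeholder
local_ent = entropy(nucleus, disk(3))    # entropy in a radius-3 window
print(local_ent.min(), local_ent.max())
```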

  8. Probability-based diagnostic imaging using hybrid features extracted from ultrasonic Lamb wave signals

    NASA Astrophysics Data System (ADS)

    Zhou, Chao; Su, Zhongqing; Cheng, Li

    2011-12-01

    The imaging technique based on guided waves has been a research focus in the field of damage detection over the years, aimed at intuitively highlighting structural damage in two- or three-dimensional images. The accuracy and efficiency of this technique substantially rely on the means of defining the field values at image pixels. In this study, a novel probability-based diagnostic imaging (PDI) approach was developed. Hybrid signal features (including temporal information, intensity of signal energy and signal correlation) were extracted from ultrasonic Lamb wave signals and integrated to retrofit the traditional way of defining field values. To acquire hybrid signal features, an active sensor network in line with pulse-echo and pitch-catch configurations was designed, supplemented with a novel concept of 'virtual sensing'. A hybrid image fusion scheme was developed to enhance the tolerance of the approach to measurement noise/uncertainties and erroneous perceptions from individual sensors. As applications, the approach was employed to identify representative damage scenarios including L-shape through-thickness crack (orientation-specific damage), polygonal damage (multi-edge damage) and multi-damage in structural plates. Results have corroborated that the developed PDI approach based on the use of hybrid signal features is capable of visualizing structural damage quantitatively, regardless of damage shape and number, by highlighting its individual edges in an easily interpretable binary image.

  9. Cerebral Glioma Grading Using Bayesian Network with Features Extracted from Multiple Modalities of Magnetic Resonance Imaging

    PubMed Central

    Wang, Huiting; Liu, Renyuan; Zhang, Xin; Li, Ming; Yang, Yongbo; Yan, Jing; Niu, Fengnan; Tian, Chuanshuai; Wang, Kun; Yu, Haiping; Chen, Weibo; Wan, Suiren; Sun, Yu; Zhang, Bing

    2016-01-01

    Many modalities of magnetic resonance imaging (MRI) have been confirmed to be of great diagnostic value in glioma grading. Contrast enhanced T1-weighted imaging allows the recognition of blood-brain barrier breakdown. Perfusion weighted imaging and MR spectroscopic imaging enable the quantitative measurement of perfusion parameters and metabolic alterations, respectively. These modalities can potentially improve the grading process in glioma if combined properly. In this study, a Bayesian Network, which is a powerful and flexible method for probabilistic analysis under uncertainty, is used to combine features extracted from contrast enhanced T1-weighted imaging, perfusion weighted imaging and MR spectroscopic imaging. The networks were constructed using the K2 algorithm along with manual determination, and distribution parameters were learned using maximum likelihood estimation. The grading performance was evaluated in a leave-one-out analysis, achieving an overall grading accuracy of 92.86% and an area under the curve of 0.9577 in the receiver operating characteristic analysis given all available features observed in the total of 56 patients. Results and discussions show that the Bayesian Network is promising in combining features from multiple modalities of MRI for improved grading performance. PMID:27077923

  10. Cerebral Glioma Grading Using Bayesian Network with Features Extracted from Multiple Modalities of Magnetic Resonance Imaging.

    PubMed

    Hu, Jisu; Wu, Wenbo; Zhu, Bin; Wang, Huiting; Liu, Renyuan; Zhang, Xin; Li, Ming; Yang, Yongbo; Yan, Jing; Niu, Fengnan; Tian, Chuanshuai; Wang, Kun; Yu, Haiping; Chen, Weibo; Wan, Suiren; Sun, Yu; Zhang, Bing

    2016-01-01

    Many modalities of magnetic resonance imaging (MRI) have been confirmed to be of great diagnostic value in glioma grading. Contrast-enhanced T1-weighted imaging allows the recognition of blood-brain barrier breakdown. Perfusion weighted imaging and MR spectroscopic imaging enable the quantitative measurement of perfusion parameters and metabolic alterations, respectively. These modalities can potentially improve the grading process in glioma if combined properly. In this study, a Bayesian network, which is a powerful and flexible method for probabilistic analysis under uncertainty, is used to combine features extracted from contrast-enhanced T1-weighted imaging, perfusion weighted imaging and MR spectroscopic imaging. The network structures were constructed using the K2 algorithm along with manual determination, and the distribution parameters were learned using maximum likelihood estimation. The grading performance was evaluated in a leave-one-out analysis, achieving an overall grading accuracy of 92.86% and an area under the curve of 0.9577 in the receiver operating characteristic analysis given all available features observed in the total of 56 patients. The results show that the Bayesian network is promising in combining features from multiple modalities of MRI for improved grading performance.

  11. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 1: time domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

    The paper presents an application of the gamma-absorption method to study gas-liquid two-phase flow in a horizontal pipeline. In tests on a laboratory installation, two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals were used. The experimental set-up allows recording of stochastic signals which describe the instantaneous content of the stream in a particular cross-section of the flow mixture. Analyzing these signals by statistical methods makes it possible to determine the mean velocity of the gas phase. Meanwhile, selected features of the signals provided by the absorption set can be applied to recognizing the structure of the flow. In this work, three structures of air-water flow were considered: plug, bubble, and transitional plug-bubble. The recorded raw signals were analyzed in the time domain and several features were extracted. It was found that the mean, standard deviation, root mean square (RMS), variance and 4th moment of the signals are the most useful features for recognizing the structure of the flow.
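
    A minimal sketch of the time-domain feature set named above (mean, standard deviation, RMS, variance and 4th central moment), computed here with NumPy on a placeholder count-rate signal:

    ```python
    import numpy as np

    def time_domain_features(x):
        """Features reported as most useful for flow-regime recognition."""
        x = np.asarray(x, dtype=float)
        mu = x.mean()
        return {
            "mean": mu,
            "std": x.std(ddof=1),
            "rms": np.sqrt(np.mean(x ** 2)),
            "variance": x.var(ddof=1),
            "moment4": np.mean((x - mu) ** 4),   # 4th central moment
        }

    # placeholder stand-in for a recorded gamma-absorption signal
    signal = np.random.default_rng(1).normal(loc=100.0, scale=5.0, size=10_000)
    print(time_domain_features(signal))
    ```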

  12. Feature extraction for ultrasonic sensor based defect detection in ceramic components

    NASA Astrophysics Data System (ADS)

    Kesharaju, Manasa; Nagarajah, Romesh

    2014-02-01

    High density silicon carbide materials are commonly used as the ceramic element of hard armour inserts in traditional body armour systems to reduce their weight while providing improved hardness, strength and elastic response to stress. Currently, armour ceramic tiles are inspected offline by visual examination of X-ray images, which is time consuming and very expensive. In addition, multiple defects are often misinterpreted as single defects in X-ray images. Therefore, to address these problems, an ultrasonic non-destructive approach is being investigated. Ultrasound-based inspection would be far more cost effective and reliable, as the methodology is applicable to on-line quality control, including implementation of accept/reject criteria. This paper describes a recently developed methodology to detect, locate and classify various manufacturing defects in ceramic tiles using sub-band coding of ultrasonic test signals. The wavelet transform is applied to the ultrasonic signal, and wavelet coefficients in the different frequency bands are extracted and used as input features to an artificial neural network (ANN) for signal classification. Two different classifiers, an artificial neural network (supervised) and clustering (unsupervised), are supplied with features selected using Principal Component Analysis (PCA) and their classification performance is compared. This investigation establishes experimentally that PCA can be effectively used as a feature selection method that provides superior results for classifying various defects in comparison with the X-ray technique.
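
    A hedged sketch of the sub-band coding idea: decompose each ultrasonic A-scan with a discrete wavelet transform, use sub-band energies as features, and reduce them with PCA ahead of a classifier. The wavelet family, decomposition level and data below are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def subband_energy_features(signal, wavelet="db4", level=4):
        """Energy of each wavelet sub-band of an ultrasonic A-scan."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        return np.array([np.sum(c ** 2) for c in coeffs])

    rng = np.random.default_rng(0)
    scans = rng.normal(size=(200, 1024))           # placeholder A-scans
    features = np.vstack([subband_energy_features(s) for s in scans])

    # PCA as the feature selection/reduction stage before the classifier
    reduced = PCA(n_components=3).fit_transform(features)
    print(reduced.shape)                            # (200, 3)
    ```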

  13. A Distinguishing Arterial Pulse Waves Approach by Using Image Processing and Feature Extraction Technique.

    PubMed

    Chen, Hsing-Chung; Kuo, Shyi-Shiun; Sun, Shen-Ching; Chang, Chia-Hui

    2016-10-01

    Traditional Chinese Medicine (TCM) is based on five main diagnostic methods: inspection, auscultation, olfaction, inquiry, and palpation. The most important is palpation, also called pulse diagnosis, in which the doctor's fingers measure the wrist artery pulse to assess the patient's health state. In this paper, a specialized pulse measuring instrument is used to classify a person's pulse type. The measured pulse waves (MPWs) were segmented into arterial pulse wave curves (APWCs) by an image processing method. The slopes and periods among four specific points on the APWC were taken as the pulse features. Three algorithms are proposed that extract these features from the APWCs and compare each curve's differences from the average feature matrix individually. The results show that the method proposed in this study is more accurate than those of previous studies. The proposed method could save doctors a large amount of time, increase accuracy and decrease data volume. PMID:27562483
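
    The paper's algorithms are not spelled out here, but the named features (slopes and periods among four specific points on the APWC) can be sketched directly; the landmark indices and waveform below are placeholders.

    ```python
    import numpy as np

    def pulse_features(t, y, landmarks):
        """Slopes and periods among four specific points on an arterial pulse
        wave curve; `landmarks` holds the sample indices of those points."""
        i = np.asarray(landmarks)
        slopes = np.diff(y[i]) / np.diff(t[i])   # slope between consecutive points
        periods = np.diff(t[i])                  # time intervals between points
        return np.concatenate([slopes, periods])

    t = np.linspace(0.0, 1.0, 500)               # one normalized pulse period
    y = np.sin(2 * np.pi * t) + 0.3 * np.sin(4 * np.pi * t)   # toy APWC
    print(pulse_features(t, y, landmarks=[30, 120, 260, 420]))
    ```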

  14. The research of edge extraction and target recognition based on inherent feature of objects

    NASA Astrophysics Data System (ADS)

    Xie, Yu-chan; Lin, Yu-chi; Huang, Yin-guo

    2008-03-01

    Current research on computer vision often needs specific techniques for particular problems. Little use has been made of high-level aspects of computer vision, such as three-dimensional (3D) object recognition, that are appropriate for large classes of problems and situations. In particular, high-level vision often focuses mainly on the extraction of symbolic descriptions and pays little attention to the speed of processing. In order to extract and recognize targets intelligently and rapidly, in this paper we developed a new 3D target recognition method based on the inherent features of objects, in which a cuboid was taken as the model. On the basis of analyzing the cuboid's natural contour and grey-level distribution characteristics, an overall fuzzy evaluation technique was utilized to recognize and segment the target. The Hough transform was then used to extract and match the model's main edges, and finally the target edges were reconstructed by stereo techniques. There are three major contributions in this paper. Firstly, the corresponding relations between the parameters of the cuboid model's straight edge lines in the image field and in the transform field were summarized; with these, the aimless computations and searches in Hough transform processing can be greatly reduced and the efficiency improved. Secondly, since prior knowledge of the cuboid contour's geometry is available, the intersections of the extracted component edges are taken, and the geometry of candidate edge matches is assessed based on those intersections rather than on the extracted edges alone. The outlines are therefore enhanced and the noise is suppressed. Finally, a 3D target recognition method is proposed. Compared with other recognition methods, this new method has a quick response time and can be achieved with high-level computer vision. The method presented here can be used widely in vision-guided techniques to strengthen their intelligence and generalization, and can also play an important role in object tracking, port AGV, robots
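
    A minimal OpenCV sketch of the edge-extraction step: Canny edges followed by a probabilistic Hough transform, plus a helper that computes intersections of candidate edges for the geometry assessment described above. The synthetic image and threshold values are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    # synthetic stand-in for a grayscale view of the cuboid target
    img = np.zeros((200, 200), dtype=np.uint8)
    cv2.rectangle(img, (40, 60), (160, 140), 255, 2)   # one visible face

    edges = cv2.Canny(img, 50, 150)
    # probabilistic Hough transform returns finite segments; priors on the
    # expected edge orientations would prune the parameter search further
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=5)

    def intersect(l1, l2):
        """Intersection of the infinite lines through two segments (or None)."""
        x1, y1, x2, y2 = l1
        x3, y3, x4, y4 = l2
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-9:
            return None                              # parallel lines
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
        return px, py

    if segments is not None:
        corners = [intersect(a[0], b[0]) for a in segments for b in segments]
        corners = [c for c in corners if c is not None]
        print(len(segments), "segments,", len(corners), "candidate corners")
    ```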

  15. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing the performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of the breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification, was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from the whole breast area only; (2) a classifier using bilateral asymmetry features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  16. Extraction of Airport Features from High Resolution Satellite Imagery for Design and Risk Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, Chris; Qiu, You-Liang; Jensen, John R.; Schill, Steven R.; Floyd, Mike

    2001-01-01

    The LPA Group, consisting of 17 offices located throughout the eastern and central United States, is an architectural, engineering and planning firm specializing in the development of airports, roads and bridges. The primary focus of this ARC project is on assisting their aviation specialists who work in the areas of airport planning, airfield design, landside design, terminal building planning and design, and various other construction services. The LPA Group wanted to test the utility of high-resolution commercial satellite imagery for extracting airport elevation features in the glide path areas surrounding the Columbia Metropolitan Airport. By incorporating remote sensing techniques into their airport planning process, LPA wanted to investigate whether it is possible to save time and money while achieving accuracy equivalent to traditional planning methods. The Affiliate Research Center (ARC) at the University of South Carolina investigated the use of remotely sensed imagery for the extraction of feature elevations in the glide path zone. A stereo pair of IKONOS panchromatic satellite images, which have a spatial resolution of 1 x 1 m, was used to determine elevations of aviation obstructions such as buildings, trees, towers and fence-lines. A validation dataset was provided by the LPA Group to assess the accuracy of the measurements derived from the IKONOS imagery. The initial goal of this project was to test the utility of IKONOS imagery in feature extraction using ERDAS Stereo Analyst. This goal was never achieved due to problems with ERDAS software support of the IKONOS sensor model and the unavailability of imperative sensor model information from Space Imaging. The obstacles encountered in this project pertaining to ERDAS Stereo Analyst and IKONOS imagery will be reviewed in more detail later in this report. As a result of the technical difficulties with Stereo Analyst, ERDAS OrthoBASE was used to derive aviation

  17. Scale parameter-estimating method for adaptive fingerprint pore extraction model

    NASA Astrophysics Data System (ADS)

    Yi, Yao; Cao, Liangcai; Guo, Wei; Luo, Yaping; He, Qingsheng; Jin, Guofan

    2011-11-01

    Sweat pores and other level 3 features have been proven to provide more discriminatory information about fingerprint characteristics, which is useful for personal identification, especially in law enforcement applications. With the advent of high resolution (>=1000 ppi) fingerprint scanning equipment, sweat pores are attracting increasing attention in automatic fingerprint identification systems (AFIS), where the extraction of pores is a critical step. This paper presents a scale parameter-estimating method for a filtering-based pore extraction procedure. Pores are manually extracted from a 1000 ppi grey-level fingerprint image. The size and orientation of each detected pore are extracted together with the local ridge width and orientation. The quantitative relation between the pore parameters (size and orientation) and the local image parameters (ridge width and orientation) is statistically obtained. The pores are then extracted by filtering the fingerprint image with the new pore model, whose parameters are determined by the local image parameters and the statistically established relation. Experiments conducted on high resolution fingerprints indicate that the new pore model gives good performance in pore extraction.

  18. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content based image retrieval (CBIR) systems used for image archiving continues to be one of the important research topics. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The presented study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The algorithms investigated in this study depend on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and the Gabor wavelet, accepted as spatial methods. In the experiments, a database was built including hundreds of medical images such as brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory. However, it is observed that the Gabor wavelet has been the most effective and accurate method. PMID:25227014
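
    A minimal sketch of the GLCM feature extraction stage using scikit-image; the distances, angles and properties chosen here are common defaults, not necessarily those of the study.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(image, distances=(1,), angles=(0, np.pi / 2)):
        """GLCM statistics of the kind used to index medical images."""
        glcm = graycomatrix(image, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # placeholder stand-in for a grayscale medical image
    image = np.random.default_rng(0).integers(0, 256, size=(128, 128), dtype=np.uint8)
    print(glcm_features(image))   # 4 props x 1 distance x 2 angles = 8 values
    ```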

  20. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes.

    PubMed

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In current molecular biology, it is becoming more and more important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix, and the Lp-norm is taken as the error function to improve the robustness to outliers in the gene expression data. The results on simulated data show that our method can obtain higher identification accuracies than competing methods. Numerous experiments on real gene expression data sets demonstrate that our method can identify more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006
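
    The paper's general Schatten p-norm/Lp-norm solver is not given here, but the p = 1 special case (the nuclear norm) has a closed-form proximal operator, singular value soft-thresholding, which the sketch below applies to separate a low-rank background from outlier "differential" entries in placeholder expression data.

    ```python
    import numpy as np

    def singular_value_threshold(X, tau):
        """Proximal operator of the nuclear norm (Schatten p-norm, p = 1):
        soft-threshold the singular values by tau and reconstruct."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)
        return (U * s) @ Vt

    # placeholder expression matrix: low-rank background + sparse spikes
    rng = np.random.default_rng(0)
    background = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 60))
    X = background.copy()
    X[rng.integers(0, 500, 40), rng.integers(0, 60, 40)] += 20.0

    low_rank = singular_value_threshold(X, tau=5.0)
    residual = X - low_rank            # large residual rows flag candidate genes
    scores = np.abs(residual).sum(axis=1)
    print(np.argsort(scores)[-10:])    # indices of the 10 highest-scoring genes
    ```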

  1. Adaptation to second order stimulus features by electrosensory neurons causes ambiguity

    PubMed Central

    Zhang, Zhubo D.; Chacron, Maurice J.

    2016-01-01

    Understanding the coding strategies used to process sensory input remains a central problem in neuroscience. Growing evidence suggests that sensory systems process natural stimuli efficiently by ensuring a close match between neural tuning and stimulus statistics through adaptation. However, adaptation causes ambiguity as the same response can be elicited by different stimuli. The mechanisms by which the brain resolves ambiguity remain poorly understood. Here we investigated adaptation in electrosensory pyramidal neurons within different parallel maps in the weakly electric fish Apteronotus leptorhynchus. In response to step increases in stimulus variance, we found that pyramidal neurons within the lateral segment (LS) displayed strong scale invariant adaptation whereas those within the centromedial segment (CMS) instead displayed weaker degrees of scale invariant adaptation. Signal detection analysis revealed that strong adaptation in LS neurons significantly reduced stimulus discriminability. In contrast, weaker adaptation displayed by CMS neurons led to significantly lesser impairment of discriminability. Thus, while LS neurons display adaptation that is matched to natural scene statistics, thereby optimizing information transmission, CMS neurons instead display weaker adaptation and would instead provide information about the context in which these statistics occur. We propose that such a scheme is necessary for decoding by higher brain structures. PMID:27349635

  2. Landsat TM image feature extraction and analysis of algal bloom in Taihu Lake

    NASA Astrophysics Data System (ADS)

    Wei, Yuchun; Chen, Wei

    2008-04-01

    This study developed an approach to the extraction and characterization of blue-green algal blooms in the study area, Taihu Lake of China, with Landsat 5 TM imagery. Spectral features of typical materials within Taihu Lake were first compared, and the spectral bands most sensitive to blue-green algal blooms were determined. Eight spectral indices were then designed using multiple TM spectral bands in order to maximize the spectral contrast of different materials. Spectral curves describing the variation of reflectance at individual bands with the spectral indices were plotted, and the TM imagery was segmented using the step-jump points of the reflectance curves as thresholds. The results indicate that the proposed multiple-band spectral index NDAI2 (NDAI2 = (B4-B1)*(B5-B3)/(B4+B5+B1+B3)) performed better than the traditional vegetation indices NDVI and RVI in the extraction of blue-green algal information. In addition, this study indicates that segmenting the image at the points where reflectance changes suddenly produced robust results and good applicability.
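
    The proposed index is simple to compute per pixel; below is a NumPy sketch with placeholder band data (the 0.05 threshold is an illustrative stand-in for the step-jump thresholds derived from the reflectance curves).

    ```python
    import numpy as np

    def ndai2(b1, b3, b4, b5):
        """Multiple-band algal index from the paper:
        NDAI2 = (B4 - B1) * (B5 - B3) / (B4 + B5 + B1 + B3)."""
        num = (b4 - b1) * (b5 - b3)
        den = b4 + b5 + b1 + b3
        return np.where(den != 0, num / np.where(den == 0, 1, den), 0.0)

    # TM band reflectances as float arrays (placeholder data)
    rng = np.random.default_rng(0)
    b1, b3, b4, b5 = (rng.uniform(0.0, 0.5, size=(100, 100)) for _ in range(4))
    index = ndai2(b1, b3, b4, b5)
    bloom_mask = index > 0.05     # illustrative segmentation threshold
    print(bloom_mask.sum(), "pixels flagged as bloom")
    ```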

  3. Automated feature extraction and spatial organization of seafloor pockmarks, Belfast Bay, Maine, USA

    USGS Publications Warehouse

    Andrews, B.D.; Brothers, L.L.; Barnhardt, W.A.

    2010-01-01

    Seafloor pockmarks occur worldwide and may represent millions of m3 of continental shelf erosion, but few numerical analyses of their morphology and spatial distribution exist. We introduce a quantitative definition of pockmark morphology and, based on this definition, propose a three-step geomorphometric method to identify and extract pockmarks from high-resolution swath bathymetry. We apply this GIS-implemented approach to 25 km2 of bathymetry collected in the Belfast Bay, Maine USA pockmark field. Our model extracted 1767 pockmarks and found a linear pockmark depth-to-diameter relationship field-wide. Mean pockmark depth is 7.6 m and mean diameter is 84.8 m. Pockmark distribution is non-random, and nearly half of the field's pockmarks occur in chains. The most prominent chains are oriented semi-normal to the steepest gradient in Holocene sediment thickness. A descriptive model yields field-wide spatial statistics indicating that pockmarks are distributed in non-random clusters. Results enable quantitative comparison of pockmarks in fields worldwide as well as similar concave features, such as impact craters, dolines, or salt pools.

  4. Decision tree for smart feature extraction from sleep HR in bipolar patients.

    PubMed

    Migliorini, Matteo; Mariani, Sara; Bianchi, Anna M

    2013-01-01

    The aim of this work is the creation of a completely automatic method for the extraction of informative parameters from peripheral signals recorded through a sensorized T-shirt. The acquired data belong to patients affected by bipolar disorder and consist of RR series, body movements and activity type. The extracted features, i.e. linear and non-linear HRV parameters in the time domain, HRV parameters in the frequency domain, and parameters indicative of sleep quality, profile and fragmentation, are of interest for the automatic classification of the clinical mood state. The analysis of this dataset, which is to be performed online and automatically, must address problems related to the clinical protocol, which also includes a segment of recording in which the patient is awake, and to the nature of the device, which can be sensitive to movements and misplacement. Thus, the decision tree implemented in this study performs the detection and isolation of the sleep period, the elimination of corrupted recording segments and the checking of the minimum requirements of the signals for every parameter to be calculated. PMID:24110866

  5. Feature extraction and recognition for rolling element bearing fault utilizing short-time Fourier transform and non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Gao, Huizhong; Liang, Lin; Chen, Xiaoguang; Xu, Guanghua

    2015-01-01

    Due to the non-stationary characteristics of vibration signals acquired from rolling element bearing faults, time-frequency analysis is often applied to describe the local information of these unstable signals. However, it is difficult to classify the high dimensional feature matrix directly, because its dimensionality is too large for many classifiers. This paper combines the concepts of time-frequency distribution (TFD) and non-negative matrix factorization (NMF), and proposes a novel TFD matrix factorization method to enhance the representation and identification of bearing faults. In this method, the TFD of a vibration signal is first computed with the short-time Fourier transform (STFT) to describe the localized faults. Then, a supervised NMF mapping is adopted to extract the fault features from the TFD. Meanwhile, the fault samples can be clustered and recognized automatically by using the clustering property of NMF. The proposed method takes advantage of NMF's parts-based representation and adaptive clustering, and the localized fault features of interest can be extracted as well. To evaluate the performance of the proposed method, experiments on nine kinds of bearing fault were performed on a test bench. The proposed method can effectively identify the fault severity and different fault types. Moreover, NMF yields a mean accuracy of 99.3%, much superior to an artificial neural network (ANN). This research presents a simple and practical solution for the fault diagnosis problem of rolling element bearings in high dimensional feature space.
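
    A hedged sketch of the STFT-plus-NMF pipeline on a synthetic impulsive vibration signal: the magnitude time-frequency distribution is factorized so that W holds spectral signatures and H their activations in time. The sampling rate, component count and initialization are assumptions, and the paper's supervised NMF mapping is replaced here by plain unsupervised NMF.

    ```python
    import numpy as np
    from scipy.signal import stft
    from sklearn.decomposition import NMF

    fs = 12_000
    rng = np.random.default_rng(0)
    # placeholder vibration signal: periodic impulsive component + noise
    t = np.arange(fs) / fs
    x = (np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 30 * t) > 0.99)
         + 0.1 * rng.normal(size=fs))

    # time-frequency distribution via STFT (magnitude is non-negative, as NMF requires)
    f, frames, Zxx = stft(x, fs=fs, nperseg=256)
    V = np.abs(Zxx)

    # factorize the TFD: W = spectral signatures, H = their activations in time
    nmf = NMF(n_components=2, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(V)
    H = nmf.components_
    print(W.shape, H.shape)   # (n_freq_bins, 2), (2, n_time_frames)
    ```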

  6. Satellite mapping and automated feature extraction: Geographic information system-based change detection of the Antarctic coast

    NASA Astrophysics Data System (ADS)

    Kim, Kee-Tae

    Declassified Intelligence Satellite Photograph (DISP) data are important resources for measuring the geometry of the coastline of Antarctica. Using state-of-the-art digital imaging technology and bundle block triangulation based on tie points and control points derived from a RADARSAT-1 Synthetic Aperture Radar (SAR) image mosaic and the Ohio State University (OSU) Antarctic digital elevation model (DEM), the individual DISP images were accurately assembled into a map-quality mosaic of Antarctica as it appeared in 1963. The new map is one of the important benchmarks for gauging the response of the Antarctic coastline to changing climate. Automated coastline extraction algorithm design is the second theme of this dissertation. At the pre-processing stage, adaptive neighborhood filtering was used to remove the film-grain noise while preserving edge features. At the segmentation stage, an adaptive Bayesian approach to image segmentation was used to split the DISP imagery into its homogeneous regions, in which the fuzzy c-means clustering (FCM) technique and a Gibbs random field (GRF) model were introduced to estimate the conditional and prior probability density functions. A Gaussian mixture model was used to estimate reliable initial values for the FCM technique. At the post-processing stage, image object formation and labeling, removal of noisy image objects, and vectorization algorithms were sequentially applied to the segmented images to extract a vector representation of coastlines. Results were presented that demonstrate the effectiveness of the algorithm in segmenting the DISP data. In the cases of cloud cover and low-contrast scenes, manual editing was carried out based on intermediate image processing and visual inspection in comparison with old paper maps. Through a geographic information system (GIS), the derived DISP coastline data were integrated with earlier and later data to assess continental scale changes in the Antarctic coast. Computing the area of

  7. Poststroke Hemiparesis Impairs the Rate but not Magnitude of Adaptation of Spatial and Temporal Locomotor Features

    PubMed Central

    Savin, Douglas N.; Tseng, Shih-Chiao; Whitall, Jill; Morton, Susanne M.

    2015-01-01

    Background Persons with stroke and hemiparesis walk with a characteristic pattern of spatial and temporal asymmetry that is resistant to most traditional interventions. It was recently shown in nondisabled persons that the degree of walking symmetry can be readily altered via locomotor adaptation. However, it is unclear whether stroke-related brain damage affects the ability to adapt spatial or temporal gait symmetry. Objective Determine whether locomotor adaptation to a novel swing phase perturbation is impaired in persons with chronic stroke and hemiparesis. Methods Participants with ischemic stroke (14) and nondisabled controls (12) walked on a treadmill before, during, and after adaptation to a unilateral perturbing weight that resisted forward leg movement. Leg kinematics were measured bilaterally, including step length and single-limb support (SLS) time symmetry, limb angle center of oscillation, and interlimb phasing, and magnitude of “initial” and “late” locomotor adaptation rates were determined. Results All participants had similar magnitudes of adaptation and similar initial adaptation rates both spatially and temporally. All 14 participants with stroke and baseline asymmetry temporarily walked with improved SLS time symmetry after adaptation. However, late adaptation rates poststroke were decreased (took more strides to achieve adaptation) compared with controls. Conclusions Mild to moderate hemiparesis does not interfere with the initial acquisition of novel symmetrical gait patterns in both the spatial and temporal domains, though it does disrupt the rate at which “late” adaptive changes are produced. Impairment of the late, slow phase of learning may be an important rehabilitation consideration in this patient population. PMID:22367915

  8. Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor.

    PubMed

    Brandli, Christian; Mantel, Thomas A; Hutter, Marco; Höpflinger, Markus A; Berner, Raphael; Siegwart, Roland; Delbruck, Tobi

    2013-01-01

    Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions up to pulsing frequencies of 500 Hz were achieved using a line laser of 3 mW at a distance of 45 cm using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm. PMID:24478619

  10. Adaptive pulsed laser line extraction for terrain reconstruction using a dynamic vision sensor

    PubMed Central

    Brandli, Christian; Mantel, Thomas A.; Hutter, Marco; Höpflinger, Markus A.; Berner, Raphael; Siegwart, Roland; Delbruck, Tobi

    2014-01-01

    Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions up to pulsing frequencies of 500 Hz were achieved using a line laser of 3 mW at a distance of 45 cm using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm. PMID:24478619

  11. Investigation of automated feature extraction techniques for applications in cancer detection from multispectral histopathology images

    NASA Astrophysics Data System (ADS)

    Harvey, Neal R.; Levenson, Richard M.; Rimm, David L.

    2003-05-01

    Recent developments in imaging technology mean that it is now possible to obtain high-resolution histological image data at multiple wavelengths. This allows pathologists to image specimens over a full spectrum, thereby revealing (often subtle) distinctions between different types of tissue. With this type of data, the spectral content of the specimens, combined with quantitative spatial feature characterization may make it possible not only to identify the presence of an abnormality, but also to classify it accurately. However, such are the quantities and complexities of these data, that without new automated techniques to assist in the data analysis, the information contained in the data will remain inaccessible to those who need it. We investigate the application of a recently developed system for the automated analysis of multi-/hyper-spectral satellite image data to the problem of cancer detection from multispectral histopathology image data. The system provides a means for a human expert to provide training data simply by highlighting regions in an image using a computer mouse. Application of these feature extraction techniques to examples of both training and out-of-training-sample data demonstrate that these, as yet unoptimized, techniques already show promise in the discrimination between benign and malignant cells from a variety of samples.

  12. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a unique pattern that can be used for biometric recognition. To identify texture in an image, texture analysis methods can be used. One such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five mentioned wavelets was carried out and a comparative analysis was conducted, from which some conclusions were drawn. Several steps were performed. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, tests are conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate is achieved using Haar; furthermore, when cutting coefficients with C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate of significant coefficients for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).

  13. Interpretation of fingerprint image quality features extracted by self-organizing maps

    NASA Astrophysics Data System (ADS)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing usage of biometrics in mobile contexts demands development of lightweight methods for operational environment. A novel two-tier computationally efficient approach was recently proposed based on modelling block-wise fingerprint image data using Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as an input to a Random Forests (RF) classifier trained to predict the quality score of a propagated sample. This paper conducts an investigative comparative analysis on a publicly available dataset for the improvement of the two-tier approach by proposing additionally three feature interpretation methods, based respectively on SOM, Generative Topographic Mapping and RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  14. A novel feature extracting method of QRS complex classification for mobile ECG signals

    NASA Astrophysics Data System (ADS)

    Zhu, Lingyun; Wang, Dong; Huang, Xianying; Wang, Yue

    2007-12-01

    The conventional classification parameters of the QRS complex suffer from the large activity range of patients and the low signal-to-noise ratio in mobile cardiac telemonitoring systems, and cannot meet the identification needs of ECG signals. Based on an individual sinus heart rhythm template built from mobile ECG signals in a time window, we present semblance indices to extract the classification features of the QRS complex precisely and expeditiously. The relative approximation r2 and the absolute error r3 are used as parameters estimating the semblance between a test QRS complex and the template. Evaluation parameters corresponding to QRS width and type are examined to choose the proper index. The results show that 99.99 percent of the QRS complexes of sinus and supraventricular ECG signals can be distinguished through r2, but its average accuracy ratio is only 46.16%. More than 97.84 percent of QRS complexes are identified using r3, but its accuracy ratio for sinus and supraventricular complexes is not better than that of r2. Using the width feature alone, only 42.65 percent of QRS complexes are classified correctly, but its accuracy ratio for ventricular complexes is superior to r2. To combine the respective strengths of the three parameters, a nonlinear weighted computation of QRS width, r2 and r3 is introduced, raising the total classification accuracy to 99.48% by combining the indexes.

  15. A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.

    PubMed

    Suk, Heung-Il; Lee, Seong-Whan

    2013-02-01

    As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.

  16. Microscopic feature extraction from optical sections of contracting cardiac muscle cells recorded at high speed

    NASA Astrophysics Data System (ADS)

    Roos, Kenneth P.; Lake, David S.; Lubell, Bradford A.

    1991-05-01

    The rapid motion of microscopic features such as the cross-striations of contracting cardiac muscle cells are difficult to capture with conventional RS-170 video systems and image processing approaches. In this report, efforts to extract, enhance and analyze striation data from widefield optical sections of single contracting cells recorded with a charge-coupled device (CCD) video camera modified for high-speed RS-170 compatible operation are described. Each video field from the camera provides four 1/4 height images separated by 4 ms in time for a 240 Hz image acquisition rate. Data are continuously recorded on S-VHS video tape during each experiment. Selected image sequences are digitized field by field and stored in a computer system under automated software control. The four individual images in each video field are separated, geometrically corrected for time base error, and reassembled as a single sequence of images for interpretable visualization. The images are then processed with digital filters and gray scale expansion to preferentially enhance the cross-striations and minimize out of focus features. Regions within each image containing striations are identified and their positions determined and followed during the contraction cycle to obtain individual, regional and cellular sarcomere dynamics. This approach permits the critical evaluation of the magnitude, time course and uniformity of contractile function throughout the volume of a single cell with higher temporal and spatial resolutions than previously possible.

  17. ActiveTutor: Towards More Adaptive Features in an E-Learning Framework

    ERIC Educational Resources Information Center

    Fournier, Jean-Pierre; Sansonnet, Jean-Paul

    2008-01-01

    Purpose: This paper aims to sketch the emerging notion of auto-adaptive software when applied to e-learning software. Design/methodology/approach: The study and the implementation of the auto-adaptive architecture are based on the operational framework "ActiveTutor" that is used for teaching the topic of computer science programming in first-grade…

  18. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    NASA Astrophysics Data System (ADS)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, including in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, leading to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  19. Extraction of time and frequency features from grip force rates during dexterous manipulation.

    PubMed

    Mojtahedi, Keivan; Fu, Qiushi; Santello, Marco

    2015-05-01

    The time course of grip force from object contact to onset of manipulation has been extensively studied to gain insight into the underlying control mechanisms. Of particular interest to the motor neuroscience and clinical communities is the phenomenon of a bell-shaped grip force rate (GFR), which has been interpreted as indicative of feedforward force control. However, this feature has not been assessed quantitatively. Furthermore, the time course of grip force may contain additional features that could provide insight into sensorimotor control processes. In this study, we addressed these questions by validating and applying two computational approaches to extract features from GFR in humans: 1) fitting a Gaussian function to GFR and quantifying the goodness of the fit [root-mean-square error (RMSE)]; and 2) continuous wavelet transform (CWT), where we assessed the correlation of the GFR signal with a Mexican Hat function. Experiment 1 consisted of a classic pseudorandomized presentation of object mass (light or heavy), where grip forces developed to lift a mass heavier than expected are known to exhibit corrective responses. For Experiment 2, we applied our two techniques to analyze grip force exerted while manipulating an inverted T-shaped object whose center of mass was changed across blocks of consecutive trials. For both experiments, subjects were asked to grasp the object at either predetermined or self-selected grasp locations ("constrained" and "unconstrained" task, respectively). Experiment 1 successfully validated the use of RMSE and CWT as they correctly distinguished trials with versus without corrective force responses. RMSE and CWT also revealed that grip force is characterized by more feedback-driven corrections when grasping at self-selected contact points. Future work will examine the application of our analytical approaches to a broader range of tasks, e.g., assessment of recovery of sensorimotor function following clinical intervention, interlimb
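
    Both analyses can be sketched compactly: a Gaussian fit to the GFR scored by RMSE (via scipy.optimize.curve_fit) and a continuous wavelet transform against the Mexican Hat wavelet (via PyWavelets). The trace, scales and summary statistic below are placeholder choices, not the study's exact settings.

    ```python
    import numpy as np
    import pywt
    from scipy.optimize import curve_fit

    def gaussian(t, a, mu, sigma):
        return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

    # placeholder grip force rate trace (N/s) sampled at 1 kHz
    t = np.linspace(0.0, 0.5, 500)
    gfr = gaussian(t, 40.0, 0.25, 0.05) + np.random.default_rng(0).normal(0, 1.0, t.size)

    # feature 1: goodness of a Gaussian fit to the GFR, summarized by RMSE
    popt, _ = curve_fit(gaussian, t, gfr, p0=[gfr.max(), t[np.argmax(gfr)], 0.05])
    rmse = np.sqrt(np.mean((gfr - gaussian(t, *popt)) ** 2))

    # feature 2: correlation with a Mexican Hat function via the CWT
    coefs, _ = pywt.cwt(gfr, scales=np.arange(1, 64), wavelet="mexh")
    peak_corr = np.abs(coefs).max()
    print(rmse, peak_corr)
    ```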

  20. Functional source separation and hand cortical representation for a brain–computer interface feature extraction

    PubMed Central

    Tecchio, Franca; Porcaro, Camillo; Barbati, Giulia; Zappasodi, Filippo

    2007-01-01

    A brain–computer interface (BCI) can be defined as any system that can track the person's intent, which is embedded in his/her brain activity, and from it alone translate the intention into commands of a computer. Among the brain signal monitoring systems best suited for this challenging task, electroencephalography (EEG) and magnetoencephalography (MEG) are the most realistic, since both are non-invasive, EEG is portable and MEG can provide more specific information that could later be exploited also through EEG signals. The first two BCI steps require setting up the appropriate experimental protocol while recording the brain signal and then extracting interesting features from the recorded cerebral activity. To provide information useful in these BCI stages, our aim is to give an overview of a new procedure we recently developed, named functional source separation (FSS). As it derives from blind source separation algorithms, it exploits the most valuable information provided by electrophysiological techniques, i.e. the waveform signal properties, while remaining blind to the biophysical nature of the signal sources. FSS returns the single-trial source activity, estimates the time course of a neuronal pool across different experimental states on the basis of a specific functional requirement in a specific time period, and uses simulated annealing as the optimization procedure, which allows the exploitation of non-differentiable functional constraints. Moreover, a minor section is included, devoted to information acquired by MEG in stroke patients, to guide BCI applications aiming at sustaining motor behaviour in these patients. Relevant BCI features – spatial and time-frequency properties – are in fact altered by a stroke in the regions devoted to hand control. Moreover, a method to investigate the relationship between sensory and motor hand cortical network activities is described, providing information useful to develop BCI feedback control systems. This

  1. A data-driven feature extraction framework for predicting the severity of condition of congestive heart failure patients.

    PubMed

    Sideris, Costas; Alshurafa, Nabil; Pourhomayoun, Mohammad; Shahmohammadi, Farhad; Samy, Lauren; Sarrafzadeh, Majid

    2015-01-01

    In this paper, we propose a novel methodology for utilizing disease diagnostic information to predict severity of condition for Congestive Heart Failure (CHF) patients. Our methodology relies on a novel, clustering-based, feature extraction framework using disease diagnostic information. To reduce the dimensionality, we identify disease clusters using co-occurrence frequencies. We then utilize these clusters as features to predict patient severity of condition. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP), which contains 7 million discharge records with ICD-9-CM codes. The proposed framework was tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 patients. We compare our cluster-based feature set with another that incorporates the Charlson comorbidity score as a feature and demonstrate an accuracy improvement of up to 14% in the predictability of the severity of condition. PMID:26736808
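
    A hedged sketch of the clustering-based feature extraction idea: build a code-by-code co-occurrence matrix from binary diagnosis records, cluster codes by their co-occurrence profiles, and use per-cluster code counts as patient features. K-means, the cluster count and the random records are assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # binary matrix: rows = discharge records, columns = ICD-9-CM codes present
    rng = np.random.default_rng(0)
    D = (rng.random((5000, 300)) < 0.05).astype(float)   # placeholder records

    # code-by-code co-occurrence frequencies across records
    cooc = D.T @ D
    np.fill_diagonal(cooc, 0.0)
    cooc /= cooc.max()

    # cluster codes with similar co-occurrence profiles into disease clusters
    k = 20
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(cooc)

    # patient-level feature vector: how many of each cluster's codes are present
    patient_features = np.stack([D[:, labels == c].sum(axis=1) for c in range(k)],
                                axis=1)
    print(patient_features.shape)   # (5000, 20)
    ```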

  2. Built-up Areas Extraction in High Resolution SAR Imagery based on the method of Multiple Feature Weighted Fusion

    NASA Astrophysics Data System (ADS)

    Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.

    2015-06-01

    Synthetic aperture radar is being applied more and more widely in remote sensing because of its all-time and all-weather operation, and feature extraction from high resolution SAR images has become a research focus. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance to classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture features of built-up areas. First of all, statistical texture features and structural features are extracted by the classical gray level co-occurrence matrix method and the variogram function method respectively, with direction information considered in this process. Next, feature weights are calculated according to the Bhattacharyya distance. Then all features are fused by weighting. At last, the fused image is classified with the K-means method and the built-up areas are extracted in a post-classification process. The proposed method has been tested on domestic airborne P-band polarimetric SAR images; at the same time, two comparison experiments based on the statistical texture method alone and the structural texture method alone were carried out. In addition to qualitative analysis, quantitative analysis based on manually selected built-up areas was performed: in the relatively simple experimental area, the detection rate is more than 90%, and in the relatively complex experimental area, the detection rate is also higher than that of the other two methods. The results show that this method can effectively and accurately extract built-up areas in high resolution airborne SAR imagery of the study area.
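
    A minimal sketch of the Bhattacharyya-distance weighting step: estimate each feature's class-conditional histograms over built-up and non-built-up training pixels, convert the Bhattacharyya coefficient to a distance, and normalize the distances into fusion weights. The bin count and sample data are placeholders.

    ```python
    import numpy as np

    def bhattacharyya_distance(p, q, bins=64):
        """Distance between a feature's histograms over built-up and
        non-built-up training pixels; larger = more separable."""
        lo, hi = min(p.min(), q.min()), max(p.max(), q.max())
        hp, _ = np.histogram(p, bins=bins, range=(lo, hi))
        hq, _ = np.histogram(q, bins=bins, range=(lo, hi))
        hp = hp / hp.sum()
        hq = hq / hq.sum()
        bc = np.sum(np.sqrt(hp * hq))       # Bhattacharyya coefficient
        return -np.log(max(bc, 1e-12))

    rng = np.random.default_rng(0)
    features_built = rng.normal(1.0, 0.3, size=(1000, 5))   # placeholder samples
    features_other = rng.normal(0.0, 0.3, size=(1000, 5))

    d = np.array([bhattacharyya_distance(features_built[:, i], features_other[:, i])
                  for i in range(5)])
    weights = d / d.sum()                   # normalized fusion weights per feature
    print(weights)
    ```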

  3. A method of evolving novel feature extraction algorithms for detecting buried objects in FLIR imagery using genetic programming

    NASA Astrophysics Data System (ADS)

    Paino, A.; Keller, J.; Popescu, M.; Stone, K.

    2014-06-01

    In this paper we present an approach that uses Genetic Programming (GP) to evolve novel feature extraction algorithms for greyscale images. Our motivation is to create an automated method of building new feature extraction algorithms for images that are competitive with commonly used human-engineered features, such as Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG). The evolved feature extraction algorithms are functions defined over the image space, and each produces a real-valued feature vector of variable length. Each evolved feature extractor breaks up the given image into a set of cells centered on every pixel, performs evolved operations on each cell, and then combines the results of those operations for every cell using an evolved operator. Using this method, the algorithm is flexible enough to reproduce both LBP and HOG features. The dataset we use to train and test our approach consists of a large number of pre-segmented image "chips" taken from a Forward Looking Infrared Imagery (FLIR) camera mounted on the hood of a moving vehicle. The goal is to classify each image chip as either containing or not containing a buried object. To this end, we define the fitness of a candidate solution as the cross-fold validation accuracy of the features generated by said candidate solution when used in conjunction with a Support Vector Machine (SVM) classifier. In order to validate our approach, we compare the classification accuracy of an SVM trained using our evolved features with the accuracy of an SVM trained using mainstream feature extraction algorithms, including LBP and HOG.

  4. Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

    PubMed

    Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan

    2012-02-01

    Oral cancer (OC) is the sixth most common cancer in the world. In India it is the most common malignant neoplasm. Histopathological images have been widely used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancerous lesions. However, this technique is limited by subjective interpretation and less accurate diagnosis. The objective of this work is to improve the classification accuracy based on textural features in the development of computer-assisted screening of OSF. The approach introduced here is to grade histopathological tissue sections into normal, OSF without dysplasia (OSFWD) and OSF with dysplasia (OSFD), which would help oral onco-pathologists to screen subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as a matrix quantized as integers from 0 to 255 for each fundamental color (red, green, blue), resulting in an M×N×3 matrix of integers. Depending on the normal or OSF condition, the image has various granular structures, which are self-similar patterns at different scales termed "texture". We have extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), and Radial Basis Probabilistic Neural Network (RBPNN), to select the best classifier. Our results show that the combination of texture and HOS features coupled with the fuzzy classifier resulted in 95.7% accuracy, with sensitivity and specificity of 94.5% and 98.8% respectively. Finally, we have proposed a novel integrated index called the Oral Malignancy Index (OMI) using the HOS, LBP and LTE features, to diagnose benign or malignant tissues using just one number. We hope that this OMI can
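
    Of the three texture descriptors, LBP is the most directly reproducible; below is a sketch with scikit-image (the neighborhood size, radius and uniform mapping are common choices, not necessarily the paper's).

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray, P=8, R=1.0):
        """Uniform LBP histogram, one of the texture descriptors combined
        (with HOS and Laws energies) in the hybrid feature set."""
        codes = local_binary_pattern(gray, P, R, method="uniform")
        n_bins = P + 2   # P+1 uniform patterns plus one bin for the rest
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    # placeholder stand-in for a grayscale tissue-section image
    tissue = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(np.uint8)
    print(lbp_histogram(tissue))
    ```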

  5. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2011-12-01

    Automatic brain tissue segmentation is a crucial task in the analysis of medical images for diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses modified intraframe coding from H.264/AVC for feature extraction. The extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  6. Neural network-based brain tissue segmentation in MR images using extracted features from intraframe coding in H.264

    NASA Astrophysics Data System (ADS)

    Jafari, Mehdi; Kasaei, Shohreh

    2012-01-01

Automatic brain tissue segmentation is a crucial task in diagnosis and treatment based on medical images. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissues. The proposed technique uses the modified intraframe coding yielded from H.264/AVC for feature extraction. Extracted features are then fed to an artificial back-propagation neural network (BPN) classifier to assign each block to its appropriate class. Since the newest coding standard, H.264/AVC, has the highest compression ratio, it decreases the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with low computational complexity, compared to other recent works.

  7. Approximation-based common principal component for feature extraction in multi-class brain-computer interfaces.

    PubMed

    Hoang, Tuan; Tran, Dat; Huang, Xu

    2013-01-01

Common Spatial Pattern (CSP) is a state-of-the-art method for feature extraction in Brain-Computer Interface (BCI) systems. However, it is designed for 2-class BCI classification problems. Current extensions of this method to multiple classes, based on subspace union and covariance matrix similarity, do not provide high performance. This paper presents a new approach to solving multi-class BCI classification problems by forming a subspace assembled from the original subspaces; the proposed method for this approach is called Approximation-based Common Principal Component (ACPC). We perform experiments on Dataset 2a of BCI Competition IV to evaluate the proposed method. This dataset was designed for motor imagery classification with 4 classes. Preliminary experiments show that the proposed ACPC feature extraction method, when combined with Support Vector Machines, outperforms CSP-based feature extraction methods on the experimental dataset.
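
    For reference, a sketch of the two-class CSP baseline that the paper extends; the generalized eigendecomposition below is the standard construction, while the trial shapes and the number of retained filters are assumptions.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(trials_a, trials_b, n_filters=3):
        """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
        mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
        Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
        # Solve Ca w = lambda (Ca + Cb) w; the extreme eigenvectors give filters
        # that maximize variance for one class while minimizing it for the other.
        vals, vecs = eigh(Ca, Ca + Cb)
        order = np.argsort(vals)
        picks = np.concatenate([order[:n_filters], order[-n_filters:]])
        return vecs[:, picks].T                      # one spatial filter per row

    def csp_features(trial, W):
        return np.log(np.var(W @ trial, axis=1))     # log-variance features
    ```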

  8. Segmentation and feature extraction of cervical spine x-ray images

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1999-05-01

    As part of an R&D project in mixed text/image database design, the National Library of Medicine has archived a collection of 17,000 digitized x-ray images of the cervical and lumbar spine which were collected as part of the second National Health and Nutrition Examination Survey (NHANES II). To make this image data available and usable to a wide audience, we are investigating techniques for indexing the image content by automated or semi-automated means. Indexing of the images by features of interest to researchers in spine disease and structure requires effective segmentation of the vertebral anatomy. This paper describes work in progress toward this segmentation of the cervical spine images into anatomical components of interest, including anatomical landmarks for vertebral location, and segmentation and identification of individual vertebrae. Our work includes developing a reliable method for automatically fixing an anatomy-based coordinate system in the images, and work to adaptively threshold the images, using methods previously applied by researchers in cardioangiography. We describe the motivation for our work and present our current results in both areas.

  9. Sensor-based vibration signal feature extraction using an improved composite dictionary matching pursuit algorithm.

    PubMed

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-09-09

This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost or improve the efficiency of the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, an iteration termination condition based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; it adjusts the parameters of the termination condition continuously during decomposition to avoid noise. Third, the composite dictionaries are enriched with a modulation dictionary, modulation being one of the important structural characteristics of gear fault signals. Meanwhile, the iteration termination settings, sub-feature dictionary selections and operational efficiency of CD-MaMP and CD-SaMP are discussed using simulated gear vibration signals with noise. The simulated sensor-based vibration signal results show that the iteration termination condition based on the attenuation coefficient greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has clear advantages in sparsity and efficiency over CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm
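
    To make the decomposition loop concrete, here is a generic single-atom matching pursuit sketch over a precomputed dictionary; the paper's composite and modulation dictionaries and its attenuation-coefficient stopping rule are not reproduced, and the simple gain threshold below stands in for the latter.

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, max_iter=50, min_gain=1e-3):
        """dictionary: (n_atoms, n_samples) array of unit-norm atoms."""
        residual = signal.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[0])
        for _ in range(max_iter):
            corr = dictionary @ residual             # correlation with every atom
            k = int(np.argmax(np.abs(corr)))         # single best-matching atom
            if abs(corr[k]) < min_gain:              # stand-in stopping criterion
                break
            coeffs[k] += corr[k]
            residual -= corr[k] * dictionary[k]
        return coeffs, residual
    ```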

  10. Sensor-Based Vibration Signal Feature Extraction Using an Improved Composite Dictionary Matching Pursuit Algorithm

    PubMed Central

    Cui, Lingli; Wu, Na; Wang, Wenjing; Kang, Chenhui

    2014-01-01

This paper presents a new method for a composite dictionary matching pursuit algorithm, which is applied to vibration sensor signal feature extraction and fault diagnosis of a gearbox. Three advantages are highlighted in the new method. First, the composite dictionary in the algorithm has been changed from multi-atom matching to single-atom matching. Compared to non-composite dictionary single-atom matching, the original composite dictionary multi-atom matching pursuit (CD-MaMP) algorithm can achieve noise reduction in the reconstruction stage, but it cannot dramatically reduce the computational cost or improve the efficiency of the decomposition stage. Therefore, the optimized composite dictionary single-atom matching algorithm (CD-SaMP) is proposed. Second, an iteration termination condition based on the attenuation coefficient is put forward to improve the sparsity and efficiency of the algorithm; it adjusts the parameters of the termination condition continuously during decomposition to avoid noise. Third, the composite dictionaries are enriched with a modulation dictionary, modulation being one of the important structural characteristics of gear fault signals. Meanwhile, the iteration termination settings, sub-feature dictionary selections and operational efficiency of CD-MaMP and CD-SaMP are discussed using simulated gear vibration signals with noise. The simulated sensor-based vibration signal results show that the iteration termination condition based on the attenuation coefficient greatly enhances decomposition sparsity and achieves good noise reduction. Furthermore, the modulation dictionary achieves a better matching effect than the Fourier dictionary, and CD-SaMP has clear advantages in sparsity and efficiency over CD-MaMP. The sensor-based vibration signals measured from practical engineering gearbox analyses have further shown that the CD-SaMP decomposition and reconstruction algorithm

  11. Antioxidant capacity of leaf extracts from two Stevia rebaudiana Bertoni varieties adapted to cultivation in Mexico.

    PubMed

    Ruiz Ruiz, Jorge Carlos; Moguel Ordoñez, Yolanda Beatriz; Matus Basto, Ángel; Segura Campos, Maira Rubi

    2014-09-12

The recent introduction of the cultivation of Stevia rebaudiana Bertoni in Mexico has gained interest for its potential use as a non-caloric sweetener, but other properties of this plant remain to be studied. Extracts from two varieties of S. rebaudiana Bertoni adapted to cultivation in Mexico were screened for their content of some phytochemicals and for antioxidant properties. Total pigment, total phenolic and flavonoid contents of the extracts ranged between 17.7-24.3 mg/g, 28.7-28.4 mg/g, and 39.3-36.7 mg/g, respectively. The variety "Criolla" exhibited higher contents of pigments and flavonoids. Trolox equivalent antioxidant capacity ranged between 618.5-623.7 mM/mg and the DPPH decolorization assay ranged between 86.4-84.3%; no significant differences were observed between varieties. Inhibition of β-carotene bleaching ranged between 62.3-77.9%, with higher activity in the variety "Criolla". Reducing power ranged between 85.2-86% and the chelating activity ranged between 57.3-59.4% for Cu²⁺ and between 52.2-54.4% for Fe²⁺; no significant differences were observed between varieties. In conclusion, the results of this study showed that polar compounds obtained during the extraction, such as chlorophylls, carotenoids, phenolic compounds and flavonoids, contribute to the measured antioxidative activity. The leaves of S. rebaudiana Bertoni could be used not only as a source of non-caloric sweeteners but also of naturally occurring antioxidants.

  12. Role of features and categories in the organization of object knowledge: Evidence from adaptation fMRI.

    PubMed

    Geng, Jingyi; Schnur, Tatiana T

    2016-05-01

There are two general views regarding the organization of object knowledge. The feature-based view assumes that object knowledge is grounded in a widely distributed neural network in terms of sensory/function features (e.g., Warrington & Shallice, 1984), while the category-based view assumes in addition that object knowledge is organized by taxonomic and thematic categories (e.g., Schwartz et al., 2011). Using a functional magnetic resonance imaging (fMRI) adaptation paradigm, we compared predictions from the feature- and category-based views by examining the neural substrates recruited as subjects read word pairs that were identical, taxonomically related, thematically related or unrelated, while controlling for the function features involved across the two categories. We improved upon previous study designs and employed an fMRI adaptation task, obtaining results overall consistent with both the category-based and feature-based views. Consistent with the category-based view, we observed, in both the hypothesized region-of-interest (ROI) and exploratory (whole-brain) analyses, reduced activity in the left anterior temporal lobe (ATL) for taxonomically related versus unrelated word pairs, and, in the exploratory analysis only, reduced activity in the right ATL. In addition, the exploratory analyses revealed reduced activity in the left temporo-parietal junction (TPJ) for thematically related versus unrelated word pairs. Consistent with the feature-based view, we found in the exploratory analyses that activity in the bilateral precentral gyri (i.e., function regions), including part of premotor cortex, decreased as the function relatedness ratings increased. However, we did not find a relationship between adaptation effects in the bilateral ATLs and left TPJ and the corresponding ratings of taxonomic/thematic relationships, suggesting that the adaptation effects may not reflect aspects of taxonomy that have been traditionally assumed. Together, our findings indicate

  13. Sparsity-enabled signal decomposition using tunable Q-factor wavelet transform for fault feature extraction of gearbox

    NASA Astrophysics Data System (ADS)

    Cai, Gaigai; Chen, Xuefeng; He, Zhengjia

    2013-12-01

    Localized faults in gearboxes tend to result in periodic shocks and thus arouse periodic responses in vibration signals. Feature extraction has always been a key problem for localized fault diagnosis. This paper proposes a new fault feature extraction technique for gearboxes by using sparsity-enabled signal decomposition method. The sparsity-enabled signal decomposition method separates signals based on the oscillatory behavior of the signal rather than the frequency or scale. Thus, the fault feature can be nonlinearly extracted from vibration signals. During the implementation of the proposed method, tunable Q-factor wavelet transform, for which the Q-factor can be easily specified, is adopted to represent vibration signals in a sparse way, and then morphological component analysis (MCA) is employed to estimate and separate the distinct components. The corresponding optimization problem of MCA is solved by the split augmented Lagrangian shrinkage algorithm (SALSA). With the proposed method, vibration signals of the faulty gearbox can be nonlinearly decomposed into high-oscillatory component and low-oscillatory component which is the fault feature of gearboxes. To evaluate the performance of the proposed method, this paper investigates the effect of two parameters pertinent to MCA and SALSA: the Lagrange multiplier and the penalty parameter. The effectiveness of the proposed method is verified by both the simulated and practical gearbox vibration signals. Results show the proposed method outperforms empirical mode decomposition and spectral kurtosis in extracting fault features of gearboxes.

  14. Oil Spill Detection by SAR Images: Dark Formation Detection, Feature Extraction and Classification Algorithms

    PubMed Central

    Topouzelis, Konstantinos N.

    2008-01-01

This paper provides a comprehensive review of the use of Synthetic Aperture Radar (SAR) images for the detection of illegal discharges from ships. It summarizes the current state of the art, covering operational and research aspects of the application. Oil spills seriously affect the marine ecosystem and cause political and scientific concern because of their impact on fragile marine and coastal ecosystems. The amount of pollutant discharges and the associated effects on the marine environment are important parameters in evaluating sea water quality. Satellite images can improve the possibilities for the detection of oil spills, as they cover large areas and offer an economical and easier way of continuously patrolling coastal areas. SAR images have been widely used for oil spill detection. The present paper gives an overview of the methodologies used to detect oil spills on radar images. In particular, we concentrate on the use of manual and automatic approaches to distinguish oil spills from other natural phenomena. We discuss the most common techniques to detect dark formations on SAR images, the features which are extracted from the detected dark formations, and the most used classifiers. Finally, we conclude with a discussion of suggestions for further research. The references throughout the review can serve as a starting point for more intensive studies on the subject.

  15. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be especially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient to each main local maximum of the signal using its amplitude and its distance to the most important maximum of the signal. These coefficients are the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
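
    An illustrative sketch of the described pipeline: denoise by convolution, find the main local maxima, and assign each one a coefficient from its amplitude and its distance to the dominant maximum. The kernel width and the exact coefficient formula are assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def cdp_coefficients(signal, kernel_width=15, n_coeffs=5):
        kernel = np.ones(kernel_width) / kernel_width        # moving-average denoising
        smoothed = np.convolve(signal, kernel, mode="same")
        peaks, props = find_peaks(smoothed, height=0)
        if len(peaks) == 0:
            return np.zeros(n_coeffs)
        heights = props["peak_heights"]
        main = peaks[np.argmax(heights)]                     # most important maximum
        c = heights / (1.0 + np.abs(peaks - main))           # amplitude scaled by distance
        c = np.sort(c)[::-1][:n_coeffs]
        return np.pad(c, (0, n_coeffs - len(c)))             # fixed-length classifier input
    ```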

  16. Extracting Road Features from Aerial Videos of Small Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Rajamohan, D.; Rajan, K. S.

    2013-09-01

With major aerospace companies showing interest in certifying UAV systems for civilian airspace, their use in commercial remote sensing applications like traffic monitoring, map refinement, agricultural data collection, etc., is on the rise. But ambitious requirements like real-time geo-referencing of data, support for multiple sensor angles of view, smaller UAV size and cheaper investment cost have led to challenges in platform stability, sensor noise reduction and increased onboard processing. Especially in small UAVs, the geo-referencing of the collected data is only as good as the quality of their localization sensors. This drives a need for developing methods that pick up spatial features from the captured video/images and aid in geo-referencing. This paper presents one such method to identify road segments and intersections based on traffic flow, and it compares well with the accuracy of manual observation. Two test video datasets, one each from moving and stationary platforms, were used. The results obtained show a promising average percentage difference of 7.01% and 2.48% for the road segment extraction process using the moving and stationary platforms, respectively. For the intersection identification process, the moving platform shows an accuracy of 75%, whereas the stationary platform data reach an accuracy of 100%.

  17. Signals features extraction in liquid-gas flow measurements using gamma densitometry. Part 2: frequency domain

    NASA Astrophysics Data System (ADS)

    Hanus, Robert; Zych, Marcin; Petryka, Leszek; Jaszczur, Marek; Hanus, Paweł

    2016-03-01

Knowledge of the structure of a flow is significant for the proper conduct of a number of industrial processes, and a description of two-phase flow regimes is possible through time-series analysis, e.g. in the frequency domain. In this article, classical spectral analysis based on the Fourier Transform (FT) and the Short-Time Fourier Transform (STFT) was applied to signals obtained for water-air flow using gamma ray absorption. The presented method is illustrated using data collected in experiments carried out on a laboratory hydraulic installation with a horizontal pipe of 4.5 m length and an inner diameter of 30 mm, equipped with two 241Am radioactive sources and scintillation probes with NaI(Tl) crystals. Stochastic signals obtained from the detectors for plug, bubble, and transitional plug-bubble flows were considered in this work. The recorded raw signals were analyzed and several features in the frequency domain were extracted using the autospectral density function (ADF), the cross-spectral density function (CSDF), and the STFT spectrogram. A detailed analysis found that the most promising features for recognizing the flow structure are: the maximum value of the CSDF magnitude, the sum of the CSDF magnitudes in a selected frequency range, and the maximum value of the sum of selected amplitudes of the STFT spectrogram.
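
    A sketch of the three highlighted frequency-domain features computed with SciPy; the sampling rate, segment lengths, and band limits are placeholder assumptions.

    ```python
    import numpy as np
    from scipy.signal import csd, stft

    def flow_features(x, y, fs=250.0, band=(0.5, 10.0)):
        f, Pxy = csd(x, y, fs=fs, nperseg=1024)      # cross-spectral density of the probes
        mag = np.abs(Pxy)
        in_band = (f >= band[0]) & (f <= band[1])
        _f, _t, Z = stft(x, fs=fs, nperseg=256)      # STFT spectrogram
        return {
            "csd_max": mag.max(),
            "csd_band_sum": mag[in_band].sum(),
            "stft_max_sum": np.abs(Z).sum(axis=0).max(),  # largest per-frame amplitude sum
        }
    ```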

  18. Extraction of temporally correlated features from dynamic vision sensors with spike-timing-dependent plasticity.

    PubMed

    Bichler, Olivier; Querlioz, Damien; Thorpe, Simon J; Bourgoin, Jean-Philippe; Gamrat, Christian

    2012-08-01

    A biologically inspired approach to learning temporally correlated patterns from a spiking silicon retina is presented. Spikes are generated from the retina in response to relative changes in illumination at the pixel level and transmitted to a feed-forward spiking neural network. Neurons become sensitive to patterns of pixels with correlated activation times, in a fully unsupervised scheme. This is achieved using a special form of Spike-Timing-Dependent Plasticity which depresses synapses that did not recently contribute to the post-synaptic spike activation, regardless of their activation time. Competitive learning is implemented with lateral inhibition. When tested with real-life data, the system is able to extract complex and overlapping temporally correlated features such as car trajectories on a freeway, after only 10 min of traffic learning. Complete trajectories can be learned with a 98% detection rate using a second layer, still with unsupervised learning, and the system may be used as a car counter. The proposed neural network is extremely robust to noise and it can tolerate a high degree of synaptic and neuronal variability with little impact on performance. Such results show that a simple biologically inspired unsupervised learning scheme is capable of generating selectivity to complex meaningful events on the basis of relatively little sensory experience.
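
    A simplified sketch of the depressing STDP rule described above: synapses whose pre-synaptic spike arrived shortly before the post-synaptic spike are potentiated, and all others are depressed regardless of activation time. All constants are illustrative.

    ```python
    import numpy as np

    def stdp_update(weights, last_pre_spike, t_post,
                    window=2e-3, ltp=0.004, ltd=0.003, w_min=0.0, w_max=1.0):
        """last_pre_spike: per-synapse time of the most recent pre-synaptic spike."""
        recent = (t_post - last_pre_spike) <= window   # contributed to this activation
        weights = np.where(recent, weights + ltp, weights - ltd)
        return np.clip(weights, w_min, w_max)
    ```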

  19. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be especially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient to each main local maximum of the signal using its amplitude and its distance to the most important maximum of the signal. These coefficients are the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  20. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be especially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient to each main local maximum of the signal using its amplitude and its distance to the most important maximum of the signal. These coefficients are the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924

  1. Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors.

    PubMed

    Spiers, Adam J; Liarokapis, Minas V; Calli, Berk; Dollar, Aaron M

    2016-01-01

Classical robotic approaches to tactile object identification often involve rigid mechanical grippers, dense sensor arrays, and exploratory procedures (EPs). Though EPs are a natural method for humans to acquire object information, evidence also exists for meaningful tactile property inference from brief, non-exploratory motions (a 'haptic glance'). In this work, we implement tactile object identification and feature extraction techniques on data acquired during a single, unplanned grasp with a simple, underactuated robot hand equipped with inexpensive barometric pressure sensors. Our methodology utilizes two cooperating schemes based on an advanced machine learning technique (random forests) and parametric methods that estimate object properties. The available data are limited to actuator positions (one per two-link finger) and force sensor values (eight per finger). The schemes are able to work both independently and collaboratively, depending on the task scenario. When collaborating, the results of each method contribute to the other, improving the overall result in a synergistic fashion. Unlike prior work, the proposed approach does not require object exploration, re-grasping, grasp release, or force modulation, and it works for arbitrary object start positions and orientations. Due to these factors, the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads. PMID:26829804

  2. Insights into the molecular basis of piezophilic adaptation: Extraction of piezophilic signatures.

    PubMed

    Nath, Abhigyan; Subbiah, Karthikeyan

    2016-02-01

Piezophiles are organisms that can successfully survive extreme pressure conditions. However, the molecular basis of piezophilic adaptation is still poorly understood. Analysis of the protein sequence adjustments that have taken place during evolution can help to reveal the sequence adaptation parameters responsible for protein functional and structural adaptation at such high pressure conditions. In this work we used an SVM classifier to filter strong instances and generated human-interpretable rules from these strong instances by using the PART algorithm. These generated rules were analyzed to gain insight into the molecular signature patterns present in piezophilic proteins. The experiments were performed on three different temperature-range piezophilic groups, namely psychrophilic-piezophilic, mesophilic-piezophilic, and thermophilic-piezophilic, for a detailed comparative study. The best classification results were obtained as we moved up the temperature range from psychrophilic-piezophilic to thermophilic-piezophilic. Based on the physicochemical classification of amino acids and using feature ranking algorithms, hydrophilic and polar amino acid groups have higher discriminative ability for the psychrophilic-piezophilic and mesophilic-piezophilic groups, along with hydrophobic and nonpolar amino acids for the thermophilic-piezophilic groups. We also observed an overrepresentation of polar, hydrophilic and small amino acid groups in the discriminatory rules of all three temperature-range piezophiles, along with aliphatic, nonpolar and hydrophobic groups in the mesophilic-piezophilic and thermophilic-piezophilic groups.

  3. Length-adaptive graph search for automatic segmentation of pathological features in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Keller, Brenton; Cunefare, David; Grewal, Dilraj S.; Mahmoud, Tamer H.; Izatt, Joseph A.; Farsiu, Sina

    2016-07-01

    We introduce a metric in graph search and demonstrate its application for segmenting retinal optical coherence tomography (OCT) images of macular pathology. Our proposed "adjusted mean arc length" (AMAL) metric is an adaptation of the lowest mean arc length search technique for automated OCT segmentation. We compare this method to Dijkstra's shortest path algorithm, which we utilized previously in our popular graph theory and dynamic programming segmentation technique. As an illustrative example, we show that AMAL-based length-adaptive segmentation outperforms the shortest path in delineating the retina/vitreous boundary of patients with full-thickness macular holes when compared with expert manual grading.

  4. Artificial immune system based on adaptive clonal selection for feature selection and parameters optimisation of support vector machines

    NASA Astrophysics Data System (ADS)

    Sadat Hashemipour, Maryam; Soleimani, Seyed Ali

    2016-01-01

The artificial immune system (AIS) algorithm based on the clonal selection method can be defined as a soft computing method inspired by the theoretical immune system in order to solve science and engineering problems. The support vector machine (SVM) is a popular pattern classification method with many diverse applications. Kernel parameter setting in the SVM training procedure, along with feature selection, significantly impacts classification accuracy. In this study, an AIS based on the Adaptive Clonal Selection (AISACS) algorithm has been used to optimise the SVM parameters and feature subset selection without degrading the SVM classification accuracy. Several public datasets of the University of California Irvine machine learning (UCI) repository are employed to calculate the classification accuracy rate in order to evaluate the AISACS approach, which is then compared with the grid search algorithm and a Genetic Algorithm (GA) approach. The experimental results show that the feature reduction rate and running time of the AISACS approach are better than those of the GA approach.

  5. Program of Adaptation Assistance in Foster Families and Particular Features of Its Implementation

    ERIC Educational Resources Information Center

    Zakirova, Venera G.; Gaysina, Guzel I.; Zhumabaeva, Asia

    2015-01-01

The relevance of the problem stated in the article is conditioned by the fact that the successful adaptation of orphans in a foster family requires specialized knowledge and skills, as well as professional support. Therefore, this article aims at substantiating the effectiveness of the developed pilot program of psycho-pedagogical support of…

  6. Computer extracted texture features on T2w MRI to predict biochemical recurrence following radiation therapy for prostate cancer

    NASA Astrophysics Data System (ADS)

    Ginsburg, Shoshana B.; Rusu, Mirabela; Kurhanewicz, John; Madabhushi, Anant

    2014-03-01

In this study we explore the ability of a novel machine learning approach, in conjunction with computer-extracted features describing prostate cancer morphology on pre-treatment MRI, to predict whether a patient will develop biochemical recurrence within ten years of radiation therapy. Biochemical recurrence, which is characterized by a rise in serum prostate-specific antigen (PSA) of at least 2 ng/mL above the nadir PSA, is associated with increased risk of metastasis and prostate cancer-related mortality. Currently, risk of biochemical recurrence is predicted by the Kattan nomogram, which incorporates several clinical factors to predict the probability of recurrence-free survival following radiation therapy (but has limited prediction accuracy). Semantic attributes on T2w MRI, such as the presence of extracapsular extension and seminal vesicle invasion and surrogate measurements of tumor size, have also been shown to be predictive of biochemical recurrence risk. While the correlation between biochemical recurrence and factors like tumor stage, Gleason grade, and extracapsular spread are well-documented, it is less clear how to predict biochemical recurrence in the absence of extracapsular spread and for small tumors fully contained in the capsule. Computer-extracted texture features, which quantitatively describe tumor micro-architecture and morphology on MRI, have been shown to provide clues about a tumor's aggressiveness. However, while computer-extracted features have been employed for predicting cancer presence and grade, they have not been evaluated in the context of predicting risk of biochemical recurrence. This work seeks to evaluate the role of computer-extracted texture features in predicting risk of biochemical recurrence on a cohort of sixteen patients who underwent pre-treatment 1.5 Tesla (T) T2w MRI. We extract a combination of first-order statistical, gradient, co-occurrence, and Gabor wavelet features from T2w MRI. To identify which of these
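
    A hedged sketch of co-occurrence texture features of the kind described, using scikit-image; the distances, angles, and property list are assumptions rather than the study's exact feature set.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

    def cooccurrence_features(gray_uint8):
        glcm = graycomatrix(gray_uint8, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
    ```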

  7. MARGA: multispectral adaptive region growing algorithm for brain extraction on axial MRI.

    PubMed

    Roura, Eloy; Oliver, Arnau; Cabezas, Mariano; Vilanova, Joan C; Rovira, Alex; Ramió-Torrentà, Lluís; Lladó, Xavier

    2014-02-01

Brain extraction, also known as skull stripping, is one of the most important preprocessing steps for many automatic brain image analyses. In this paper we present a new approach called the Multispectral Adaptive Region Growing Algorithm (MARGA) to perform the skull stripping process. MARGA is based on a region growing (RG) algorithm which uses the complementary information provided by conventional magnetic resonance images (MRI), such as T1-weighted and T2-weighted, to perform the brain segmentation. MARGA can be seen as an extension of the skull stripping method proposed by Park and Lee (2009) [1], enabling its use on both axial views and low quality images. Following the same idea, we first obtain seed regions that are then spread using a 2D RG algorithm which behaves differently in specific zones of the brain. This adaptation allows it to deal with the fact that middle MRI slices have better image contrast between the brain and non-brain regions than superior and inferior brain slices, where the contrast is smaller. MARGA is validated using three different databases: 10 simulated brains from the BrainWeb database; 2 datasets from the National Alliance for Medical Image Computing (NAMIC) database, the first consisting of 10 normal brains and 10 brains of schizophrenic patients acquired with a 3T GE scanner, and the second consisting of 5 brains from lupus patients acquired with a 3T Siemens scanner; and 10 brains of multiple sclerosis patients acquired with a 1.5T scanner. We have qualitatively and quantitatively compared MARGA with the well-known Brain Extraction Tool (BET), Brain Surface Extractor (BSE) and Statistical Parametric Mapping (SPM) approaches. The obtained results demonstrate the validity of MARGA, outperforming the results of those standard techniques. PMID:24380649
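
    A minimal single-channel 2D region growing sketch, only to make the RG step concrete; MARGA itself grows regions multispectrally (T1-weighted plus T2-weighted) with zone-dependent behavior, which this toy version does not attempt.

    ```python
    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol=10.0):
        """Grow a region of pixels whose intensity stays within tol of the seed."""
        grown = np.zeros(image.shape, dtype=bool)
        queue = deque([seed])
        ref = float(image[seed])                     # seed intensity as the region model
        while queue:
            r, c = queue.popleft()
            if grown[r, c]:
                continue
            grown[r, c] = True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                        and not grown[rr, cc]
                        and abs(float(image[rr, cc]) - ref) <= tol):
                    queue.append((rr, cc))
        return grown
    ```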

  8. Extraction of Qualitative Features from Sensor Data Using Windowed Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fenando

    2003-01-01

In this paper, we use Matlab to model the health monitoring of a system through the information gathered from sensors. This implies assessment of the condition of the system components. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of an element, a qualitative change, or a change due to a problem with another element in the network. For example, if one sensor indicates that the temperature in the tank has experienced a step change, then a pressure sensor associated with the process in the tank should also experience a step change. The step up and step down as well as the sensor disturbances are assumed to be exponential. An RC network is used to model the main process, which is step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system is allowed to run for a period equal to three time constants of the main process before changes occur. Then each point of the signal is selected together with trailing data collected previously. Two trailing lengths of data are selected, one equal to two time constants of the main process and the other equal to two time constants of the sensor disturbance. Next, the DC is removed from each set of data, the data are passed through a window, and the spectra are calculated for each set. In order to extract features, the signal power, peak, and spectrum are plotted versus time. The results indicate distinct shapes corresponding to each process. The study is also carried out for a number of Gaussian-distributed noisy cases.
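
    A Python sketch of the windowed-spectrum step described above (the report itself uses Matlab): for each point, take a trailing window, remove the DC component, apply a taper, and record the spectral power and peak. The fixed window length stands in for the two time-constant-based lengths in the text.

    ```python
    import numpy as np

    def windowed_features(signal, win_len=128):
        power, peak = [], []
        for i in range(win_len, len(signal)):
            seg = signal[i - win_len:i].astype(float)
            seg = (seg - seg.mean()) * np.hanning(win_len)  # DC removal + window
            spec = np.abs(np.fft.rfft(seg))
            power.append(np.sum(spec ** 2))
            peak.append(spec.max())
        return np.array(power), np.array(peak)
    ```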

  9. Feature extraction of event-related potentials using wavelets: an application to human performance monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, L. J.; Shensa, M. J.

    1999-01-01

    This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
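
    A hedged sketch of the DWT-based representation using PyWavelets: decompose each ERP and keep only the highest-power coefficients as model inputs. The wavelet family, level, and cutoff are illustrative choices, not those of the report.

    ```python
    import numpy as np
    import pywt

    def dwt_features(erp, wavelet="db4", level=4, n_keep=20):
        coeffs = pywt.wavedec(erp, wavelet, level=level)   # multilevel decomposition
        flat = np.concatenate(coeffs)                      # approximation + detail coefficients
        idx = np.argsort(np.abs(flat))[::-1][:n_keep]      # high-power coefficients only
        return flat[idx]
    ```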

  10. Feature Extraction of Event-Related Potentials Using Wavelets: An Application to Human Performance Monitoring

    NASA Technical Reports Server (NTRS)

    Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)

    1998-01-01

This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.

  11. Cerebral Arteries Extraction using Level Set Segmentation and Adaptive Tracing for CT Angiography

    SciTech Connect

    Zhang Yong; Zhou Xiaobo; Srinivasan, Ranga; Wong, Stephen T. C.; Young, Geoff

    2007-11-02

We propose an approach for extracting cerebral arteries from partial Computed Tomography Angiography (CTA). The challenges of extracting cerebral arteries from CTA come from the fact that arteries are usually surrounded by bones and veins in the lower portion of a CTA volume. There exists strong intensity-value overlap between vessels and surrounding objects. Besides, it is inappropriate to assume that the 2D cross sections of arteries are circles or ellipses, especially for abnormal vessels. The course of the arteries can change suddenly in 3D space. In this paper, a method based on level set segmentation is proposed to target this challenging problem. For the lower portion of a CTA volume, we use the geodesic active contour method to detect cross sections of arteries in 2D space. The medial axis of the artery is obtained by adaptively tracking along its course. This is done by finding the minimal cross section obtained by cutting the arteries at different angles in the 3D spherical space. The method is highly automated, with the minimum user input being only the starting point and initial direction of the arteries of interest.

  12. Cerebral Arteries Extraction using Level Set Segmentation and Adaptive Tracing for CT Angiography

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Young, Geoff; Zhou, Xiaobo; Srinivasan, Ranga; Wong, Stephen T. C.

    2007-11-01

We propose an approach for extracting cerebral arteries from partial Computed Tomography Angiography (CTA). The challenges of extracting cerebral arteries from CTA come from the fact that arteries are usually surrounded by bones and veins in the lower portion of a CTA volume. There exists strong intensity-value overlap between vessels and surrounding objects. Besides, it is inappropriate to assume that the 2D cross sections of arteries are circles or ellipses, especially for abnormal vessels. The course of the arteries can change suddenly in 3D space. In this paper, a method based on level set segmentation is proposed to target this challenging problem. For the lower portion of a CTA volume, we use the geodesic active contour method to detect cross sections of arteries in 2D space. The medial axis of the artery is obtained by adaptively tracking along its course. This is done by finding the minimal cross section obtained by cutting the arteries at different angles in the 3D spherical space. The method is highly automated, with the minimum user input being only the starting point and initial direction of the arteries of interest.

  13. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques.

    PubMed

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi

    2016-04-21

We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering a markerless breathing signal using the CBCT projections for thoracic and abdominal patients.
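
    A rough, assumption-laden sketch of the AS image construction and a robust z-normalization step, with projections assumed to be an array of shape (n_projections, rows, cols); the paper's two-step global optimization for extracting the final signal is not reproduced.

    ```python
    import numpy as np

    def amsterdam_shroud(projections):
        # Convert intensity to attenuation, then collapse each projection laterally,
        # producing one column of the AS image per projection angle.
        atten = -np.log(np.clip(projections, 1e-6, None) / projections.max())
        return atten.sum(axis=2).T                   # shape: (rows, n_projections)

    def robust_znorm(as_image, eps=1e-9):
        med = np.median(as_image, axis=1, keepdims=True)
        mad = np.median(np.abs(as_image - med), axis=1, keepdims=True)
        return (as_image - med) / (mad + eps)        # per-row robust normalization
    ```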

  14. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques

    NASA Astrophysics Data System (ADS)

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E.; Lo, Yeh-Chi

    2016-04-01

We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering a markerless breathing signal using the CBCT projections for thoracic and abdominal patients.

  15. Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques.

    PubMed

    Chao, Ming; Wei, Jie; Li, Tianfang; Yuan, Yading; Rosenzweig, Kenneth E; Lo, Yeh-Chi

    2016-04-21

We present a study of extracting respiratory signals from cone beam computed tomography (CBCT) projections within the framework of the Amsterdam Shroud (AS) technique. Acquired prior to the radiotherapy treatment, CBCT projections were preprocessed for contrast enhancement by converting the original intensity images to attenuation images, from which the AS image was created. An adaptive robust z-normalization filtering was applied to further augment the weak oscillating structures locally. From the enhanced AS image, the respiratory signal was extracted using a two-step optimization approach to effectively reveal the large-scale regularity of the breathing signals. CBCT projection images from five patients acquired with the Varian Onboard Imager on the Clinac iX System Linear Accelerator (Varian Medical Systems, Palo Alto, CA) were employed to assess the proposed technique. Stable breathing signals can be reliably extracted using the proposed algorithm. Reference waveforms obtained using an air bellows belt (Philips Medical Systems, Cleveland, OH) were exported and compared to the AS-based signals. The average error for the enrolled patients between the estimated breaths per minute (bpm) and the reference waveform bpm can be as low as -0.07, with a standard deviation of 1.58. The new algorithm outperformed the original AS technique for all patients by 8.5% to 30%. The impact of gantry rotation on the breathing signal was assessed with data acquired with a Quasar phantom (Modus Medical Devices Inc., London, Canada) and found to be minimal on the signal frequency. The new technique developed in this work will provide a practical solution to rendering a markerless breathing signal using the CBCT projections for thoracic and abdominal patients. PMID:27008349

  16. A multichannel nonlinear adaptive noise canceller based on generalized FLANN for fetal ECG extraction

    NASA Astrophysics Data System (ADS)

    Ma, Yaping; Xiao, Yegui; Wei, Guo; Sun, Jinwei

    2016-01-01

In this paper, a multichannel nonlinear adaptive noise canceller (ANC) based on the generalized functional link artificial neural network (FLANN, GFLANN) is proposed for fetal electrocardiogram (FECG) extraction. A FIR filter and a GFLANN are equipped in parallel in each reference channel to respectively approximate the linearity and nonlinearity between the maternal ECG (MECG) and the composite abdominal ECG (AECG). A fast scheme is also introduced to reduce the computational cost of the FLANN and the GFLANN. Two (2) sets of ECG time sequences, one synthetic and one real, are utilized to demonstrate the improved effectiveness of the proposed nonlinear ANC. The real dataset is derived from the Physionet non-invasive FECG database (PNIFECGDB) including 55 multichannel recordings taken from a pregnant woman. It contains two subdatasets that consist of 14 and 8 recordings, respectively, with each recording being 90 s long. Simulation results based on these two datasets reveal, on the whole, that the proposed ANC does enjoy higher capability to deal with nonlinearity between MECG and AECG as compared with previous ANCs in terms of fetal QRS (FQRS)-related statistics and morphology of the extracted FECG waveforms. In particular, for the second real subdataset, the F1-measure results produced by the PCA-based template subtraction (TSpca) technique and six (6) single-reference channel ANCs using LMS- and RLS-based FIR filters, Volterra filter, FLANN, GFLANN, and adaptive echo state neural network (ESNa) are 92.47%, 93.70%, 94.07%, 94.22%, 94.90%, 94.90%, and 95.46%, respectively. The same F1-measure statistical results from five (5) multi-reference channel ANCs (LMS- and RLS-based FIR filters, Volterra filter, FLANN, and GFLANN) for the second real subdataset turn out to be 94.08%, 94.29%, 94.68%, 94.91%, and 94.96%, respectively. These results indicate that the ESNa and GFLANN perform best, with the ESNa being slightly better than the GFLANN but about four times more
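
    For orientation, a single-reference-channel nonlinear ANC sketch: a trigonometric FLANN expansion of the maternal reference is adapted by LMS so its output cancels the MECG component of the abdominal lead, leaving the FECG estimate in the error signal. The GFLANN cross-terms, the fast scheme, and the multichannel structure are omitted; all constants are assumptions.

    ```python
    import numpy as np

    def flann_expand(x, order=2):
        """Trigonometric functional expansion of a scalar input."""
        terms = [x]
        for p in range(1, order + 1):
            terms += [np.sin(np.pi * p * x), np.cos(np.pi * p * x)]
        return np.array(terms)

    def flann_anc(abdominal, maternal_ref, mu=0.01, order=2):
        w = np.zeros(2 * order + 1)
        fecg = np.zeros(len(abdominal))
        for n in range(len(abdominal)):
            phi = flann_expand(maternal_ref[n], order)
            e = abdominal[n] - w @ phi               # error = FECG estimate + residual noise
            w += mu * e * phi                        # LMS weight update
            fecg[n] = e
        return fecg
    ```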

  17. Adaptive Segmentation and Feature Quantization of Sublingual Veins of Healthy Humans

    NASA Astrophysics Data System (ADS)

    Yan, Zifei; Li, Naimin

The Sublingual Vein Diagnosis, one part of Tongue Diagnosis, plays an important role in assessing the health condition of humans. This paper focuses on establishing a feature quantization framework for the inspection of the sublingual veins of healthy humans, composed of two parts: the segmentation of the sublingual veins and the quantization of their features. Firstly, a novel technique for sublingual vein segmentation is proposed. A Sublingual Vein Color Model, which combines Bayesian decision theory with the CIEYxy color space, is established based on a large number of labeled sublingual images. Experiments show that the proposed method performs well on the segmentation of images from healthy humans with weak color contrast between the sublingual vein and the tongue proper. Then, a chromatic system in conformity with the diagnostic standard of Traditional Chinese Medicine (TCM) doctors is established to describe the chromatic features of sublingual veins. Experimental results show that the geometrical and chromatic features quantized by the proposed framework are consistent with the diagnostic standard summarized by TCM doctors for healthy humans.

  18. The sidebar template and extraction of invariant feature of calligraphy and painting seal

    NASA Astrophysics Data System (ADS)

    Hu, Zheng-kun; Bao, Hong; Lou, Hai-tao

    2009-07-01

This paper proposes a novel seal extraction method based on template matching, using the characteristics of the external contour of the seal image in Chinese Painting and Calligraphy. By analyzing the characteristics of the seal edge, we obtain prior knowledge of the seal edge and set up an outline template of the seals, then design a template matching method that computes the distance difference between the outline template and the seal image edge, which can extract the seal image from Chinese Painting and Calligraphy effectively. Experimental results show that this method achieves a higher extraction rate than traditional image extraction methods.

  19. DCT domain feature extraction scheme based on motor unit action potential of EMG signal for neuromuscular disease classification.

    PubMed

    Doulah, Abul Barkat Mollah Sayeed Ud; Fattah, Shaikh Anowarul; Zhu, Wei-Ping; Ahmad, M Omair

    2014-01-01

A feature extraction scheme based on the discrete cosine transform (DCT) of electromyography (EMG) signals is proposed for the classification of normal events and a neuromuscular disease, namely amyotrophic lateral sclerosis. Instead of employing the DCT directly on EMG data, it is employed on the motor unit action potentials (MUAPs) extracted from the EMG signal via a template matching-based decomposition technique. Unlike conventional MUAP-based methods, only one MUAP with maximum dynamic range is selected for DCT-based feature extraction. Magnitude and frequency values of a few high-energy DCT coefficients corresponding to the selected MUAP are used as the desired features, which not only reduces the computational burden but also offers better feature quality, with high within-class compactness and between-class separation. For the purpose of classification, the K-nearest neighbour classifier is employed. Extensive analysis is performed on a clinical EMG database and it is found that the proposed method provides very satisfactory performance in terms of specificity, sensitivity and overall classification accuracy.
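
    A sketch of the DCT feature step described above, assuming the MUAP has already been selected by the upstream decomposition; keeping both the magnitudes and the indices of a few high-energy coefficients mirrors the "magnitude and frequency values" in the text, though the exact count is an assumption.

    ```python
    import numpy as np
    from scipy.fft import dct

    def dct_muap_features(muap, n_coeffs=6):
        c = dct(muap.astype(float), norm="ortho")
        idx = np.argsort(np.abs(c))[::-1][:n_coeffs]   # highest-energy coefficients
        return np.concatenate([np.abs(c[idx]),         # magnitude values
                               idx.astype(float)])     # frequency (index) values
    ```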

  20. DCT domain feature extraction scheme based on motor unit action potential of EMG signal for neuromuscular disease classification

    PubMed Central

    Doulah, Abul Barkat Mollah Sayeed Ud; Zhu, Wei-Ping; Ahmad, M. Omair

    2014-01-01

A feature extraction scheme based on the discrete cosine transform (DCT) of electromyography (EMG) signals is proposed for the classification of normal events and a neuromuscular disease, namely amyotrophic lateral sclerosis. Instead of employing the DCT directly on EMG data, it is employed on the motor unit action potentials (MUAPs) extracted from the EMG signal via a template matching-based decomposition technique. Unlike conventional MUAP-based methods, only one MUAP with maximum dynamic range is selected for DCT-based feature extraction. Magnitude and frequency values of a few high-energy DCT coefficients corresponding to the selected MUAP are used as the desired features, which not only reduces the computational burden but also offers better feature quality, with high within-class compactness and between-class separation. For the purpose of classification, the K-nearest neighbour classifier is employed. Extensive analysis is performed on a clinical EMG database and it is found that the proposed method provides very satisfactory performance in terms of specificity, sensitivity and overall classification accuracy. PMID:26609372

  1. Neural representations for the generation of inventive conceptions inspired by adaptive feature optimization of biological species.

    PubMed

    Zhang, Hao; Liu, Jia; Zhang, Qinglin

    2014-01-01

    Inventive conceptions amount to creative ideas for designing devices that are both original and useful. The generation of inventive conceptions is a key element of the inventive process. However, neural mechanisms of the inventive process remain poorly understood. Here we employed functional feature association tasks and event-related functional magnetic resonance imaging (MRI) to investigate neural substrates for the generation of inventive conceptions. The functional MRI (fMRI) data revealed significant activations at Brodmann area (BA) 47 in the left inferior frontal gyrus and at BA 18 in the left lingual gyrus, when participants performed biological functional feature association tasks compared with non-biological functional feature association tasks. Our results suggest that the left inferior frontal gyrus (BA 47) is associated with novelty-based representations formed by the generation and selection of semantic relatedness, and the left lingual gyrus (BA 18) is involved in relevant visual imagery in processing of semantic relatedness. The findings might shed light on neural mechanisms underlying the inventive process. PMID:23582377

  2. Fitting It All In: Adapting a Green Chemistry Extraction Experiment for Inclusion in an Undergraduate Analytical Laboratory

    ERIC Educational Resources Information Center

    Buckley, Heather L.; Beck, Annelise R.; Mulvihill, Martin J.; Douskey, Michelle C.

    2013-01-01

    Several principles of green chemistry are introduced through this experiment designed for use in the undergraduate analytical chemistry laboratory. An established experiment of liquid CO2 extraction of D-limonene has been adapted to include a quantitative analysis by gas chromatography. This facilitates drop-in incorporation of an exciting…

  3. Distinguishing Conjoint and Independent Neural Tuning for Stimulus Features With fMRI Adaptation

    PubMed Central

    Drucker, Daniel M.; Kerr, Wesley Thomas; Aguirre, Geoffrey Karl

    2009-01-01

    A central focus of cognitive neuroscience is identification of the neural codes that represent stimulus dimensions. One common theme is the study of whether dimensions, such as color and shape, are encoded independently by separate pools of neurons or are represented by neurons conjointly tuned for both properties. We describe an application of functional magnetic resonance imaging (fMRI) adaptation to distinguish between independent and conjoint neural representations of dimensions by examining the neural signal evoked by changes in one versus two stimulus dimensions and considering the metric of two-dimension additivity. We describe how a continuous carry-over paradigm may be used to efficiently estimate this metric. The assumptions of the method are examined as are optimizations. Finally, we demonstrate that the method produces the expected result for fMRI data collected from ventral occipitotemporal cortex while subjects viewed sets of shapes predicted to be represented by conjoint or independent neural tuning. PMID:19357342

  4. Copy Number Variation and Transposable Elements Feature in Recent, Ongoing Adaptation at the Cyp6g1 Locus

    PubMed Central

    Schmidt, Joshua M.; Good, Robert T.; Appleton, Belinda; Sherrard, Jayne; Raymant, Greta C.; Bogwitz, Michael R.; Martin, Jon; Daborn, Phillip J.; Goddard, Mike E.; Batterham, Philip; Robin, Charles

    2010-01-01

    The increased transcription of the Cyp6g1 gene of Drosophila melanogaster, and consequent resistance to insecticides such as DDT, is a widely cited example of adaptation mediated by cis-regulatory change. A fragment of an Accord transposable element inserted upstream of the Cyp6g1 gene is causally associated with resistance and has spread to high frequencies in populations around the world since the 1940s. Here we report the existence of a natural allelic series at this locus of D. melanogaster, involving copy number variation of Cyp6g1, and two additional transposable element insertions (a P and an HMS-Beagle). We provide evidence that this genetic variation underpins phenotypic variation, as the more derived the allele, the greater the level of DDT resistance. Tracking the spatial and temporal patterns of allele frequency changes indicates that the multiple steps of the allelic series are adaptive. Further, a DDT association study shows that the most resistant allele, Cyp6g1-[BP], is greatly enriched in the top 5% of the phenotypic distribution and accounts for ∼16% of the underlying phenotypic variation in resistance to DDT. In contrast, copy number variation for another candidate resistance gene, Cyp12d1, is not associated with resistance. Thus the Cyp6g1 locus is a major contributor to DDT resistance in field populations, and evolution at this locus features multiple adaptive steps occurring in rapid succession. PMID:20585622

  5. Adaptive deployment of spatial and feature-based attention before saccades

    PubMed Central

    White, Alex L.; Rolfs, Martin; Carrasco, Marisa

    2012-01-01

    What you see depends not only on where you are looking but also on where you will look next. The pre-saccadic attention shift is an automatic enhancement of visual sensitivity at the target of the next saccade. We investigated whether and how perceptual factors independent of the oculomotor plan modulate pre-saccadic attention within and across trials. Observers made saccades to one (the target) of six patches of moving dots and discriminated a brief luminance pulse (the probe) that appeared at an unpredictable location. Sensitivity to the probe was always higher at the target’s location (spatial attention), and this attention effect was stronger if the previous probe appeared at the previous target’s location. Furthermore, sensitivity was higher for probes moving in directions similar to the target’s direction (feature-based attention), but only when the previous probe moved in the same direction as the previous target. Therefore, implicit cognitive processes permeate pre-saccadic attention, so that–contingent on recent experience–it flexibly distributes resources to potentially relevant locations and features. PMID:23147690

  6. Constructing a Nonnegative Low-Rank and Sparse Graph With Data-Adaptive Features.

    PubMed

    Zhuang, Liansheng; Gao, Shenghua; Tang, Jinhui; Wang, Jingjing; Lin, Zhouchen; Ma, Yi; Yu, Nenghai

    2015-11-01

    This paper aims at constructing a good graph to discover the intrinsic data structures under a semisupervised learning setting. First, we propose to build a nonnegative low-rank and sparse (referred to as NNLRS) graph for the given data representation. In particular, the weights of edges in the graph are obtained by seeking a nonnegative, low-rank and sparse reconstruction coefficient matrix that represents each data sample as a linear combination of the others. The so-obtained NNLRS-graph captures both the global mixture-of-subspaces structure (through the low-rankness) and the locally linear structure (through the sparseness) of the data; hence it is both generative and discriminative. Second, as good features are extremely important for constructing a good graph, we propose to learn the data embedding matrix and construct the graph simultaneously within one framework, termed NNLRS with embedded features (referred to as NNLRS-EF). Extensive experiments on three publicly available data sets demonstrate that the proposed method outperforms state-of-the-art graph construction methods by a large margin for both semisupervised classification and discriminative analysis, which verifies the effectiveness of our proposed method.
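
    For concreteness, the kind of objective such a construction solves can be sketched as follows (notation assumed, not quoted from the paper): the coefficient matrix Z is encouraged to be simultaneously low rank and sparse while nonnegatively reconstructing the data X, with E absorbing noise.

    ```latex
    % Sketch of an NNLRS-style objective; symbols are assumptions.
    \begin{equation*}
    \min_{Z,\,E}\;\|Z\|_{*} + \beta\,\|Z\|_{1} + \lambda\,\|E\|_{2,1}
    \quad\text{s.t.}\quad X = XZ + E,\;\; Z \ge 0,
    \end{equation*}
    ```

    where the nuclear norm enforces low rank and the l1 norm enforces sparsity; the graph edge weights are then typically derived by symmetrizing the magnitudes of Z.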

  7. Reproducibility and Prognosis of Quantitative Features Extracted from CT Images

    PubMed Central

    Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J

    2014-01-01

    We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and the quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC_TreT). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, 29 features across segmentation methods were found with CCC_TreT and DR ≥ 0.9 and R²_Bet ≥ 0.95. These reproducible features were tested for predicting the radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
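
    The two filter statistics named above are standard; a minimal sketch, assuming Lin's concordance correlation coefficient for the test-retest pairs and a variance-normalized range for DR (the paper's exact normalization may differ):

    ```python
    import numpy as np

    def concordance_ccc(x, y):
        """Lin's CCC between test (x) and retest (y) feature values."""
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

    def dynamic_range(values):
        # Natural range of a feature across subjects, normalized to its spread.
        return (values.max() - values.min()) / values.std()
    ```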

  8. Testing the Self-Similarity Exponent to Feature Extraction in Motor Imagery Based Brain Computer Interface Systems

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bermúdez, Germán; Sánchez-Granero, Miguel Ángel; García-Laencina, Pedro J.; Fernández-Martínez, Manuel; Serna, José; Roca-Dorda, Joaquín

    2015-12-01

    A Brain Computer Interface (BCI) system is a tool not requiring any muscle action to transmit information. Acquisition, preprocessing, feature extraction (FE), and classification of electroencephalograph (EEG) signals constitute the main steps of a motor imagery BCI. Among them, FE is crucial for BCI, since the underlying EEG knowledge must be properly extracted into a feature vector. Linear approaches have been widely applied to FE in BCI, whereas nonlinear tools are less common in the literature. Thus, the main goal of this paper is to check whether some Hurst exponent and fractal dimension based estimators are valid indicators for FE in motor imagery BCI. The final results were not as good as expected, which may be because the EEG signals analyzed in these motor imagery tasks were not sufficiently self-similar.
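
    For concreteness, a minimal rescaled-range (R/S) estimator of the Hurst exponent, one of the family of estimators evaluated in studies of this kind; the window sizes are illustrative, and the signal is assumed to be much longer than the largest window.

    ```python
    import numpy as np

    def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
        rs = []
        for n in window_sizes:
            ratios = []
            for i in range(0, len(x) - n + 1, n):
                c = x[i:i + n]
                dev = np.cumsum(c - c.mean())
                r = dev.max() - dev.min()   # range of cumulative deviations
                s = c.std()                 # standard deviation of the window
                if s > 0:
                    ratios.append(r / s)
            rs.append(np.mean(ratios))
        # The Hurst exponent is the slope of log(R/S) versus log(n).
        slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
        return slope
    ```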

  9. Novel Folded-PCA for improved feature extraction and data reduction with hyperspectral imaging and SAR in remote sensing

    NASA Astrophysics Data System (ADS)

    Zabalza, Jaime; Ren, Jinchang; Yang, Mingqiang; Zhang, Yi; Wang, Jun; Marshall, Stephen; Han, Junwei

    2014-07-01

    As a widely used approach for feature extraction and data reduction, Principal Components Analysis (PCA) suffers from high computational cost, a large memory requirement and low efficacy in dealing with high-dimensional datasets such as Hyperspectral Imaging (HSI). Consequently, a novel Folded-PCA is proposed, in which the spectral vector is folded into a matrix to allow the covariance matrix to be determined more efficiently. With this matrix-based representation, both global and local structures are extracted to provide additional information for data classification. Moreover, both the computational cost and the memory requirement are significantly reduced. Using a Support Vector Machine (SVM) for classification on two well-known HSI datasets and one Synthetic Aperture Radar (SAR) dataset in remote sensing, quantitative results are generated for objective evaluation. Comprehensive results indicate that the proposed Folded-PCA approach outperforms not only conventional PCA but also the baseline approach in which the whole feature set is used.
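
    A minimal sketch of the folding idea, assuming each spectral vector of d bands is reshaped into an h-by-w matrix so that the covariance to be eigendecomposed shrinks from d-by-d to w-by-w; the segment width w is a free parameter.

    ```python
    import numpy as np

    def folded_covariance(X, w):
        """X: (n_samples, n_bands) with n_bands divisible by w."""
        n, d = X.shape
        h = d // w
        C = np.zeros((w, w))
        for x in X:
            A = x.reshape(h, w)   # fold the spectral vector into a matrix
            C += A.T @ A          # accumulate segment-wise scatter
        C /= n
        # Eigenvectors of the small folded covariance give the projection
        # basis, sorted here by descending eigenvalue.
        eigvals, eigvecs = np.linalg.eigh(C)
        return C, eigvecs[:, ::-1]
    ```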

  10. A Novel Feature Extraction Approach Using Window Function Capturing and QPSO-SVM for Enhancing Electronic Nose Performance

    PubMed Central

    Guo, Xiuzhen; Peng, Chao; Zhang, Songlin; Yan, Jia; Duan, Shukai; Wang, Lidan; Jia, Pengfei; Tian, Fengchun

    2015-01-01

    In this paper, a novel feature extraction approach referred to as moving window function capturing (MWFC) is proposed to analyze signals of an electronic nose (E-nose) used for detecting types of infectious pathogens in rat wounds. Meanwhile, a quantum-behaved particle swarm optimization (QPSO) algorithm is implemented in conjunction with a support vector machine (SVM) for a synchronized optimization of the sensor array and the SVM model parameters. The results prove the efficacy of the proposed method for E-nose feature extraction, which can lead to a higher classification accuracy rate compared to other established techniques. It is also interesting to note that different classification results can be obtained by changing the types, widths or positions of the windows. By selecting the optimum window function for the sensor response, the performance of an E-nose can be enhanced. PMID:26131672

  11. Effects of Combined Creatine Plus Fenugreek Extract vs. Creatine Plus Carbohydrate Supplementation on Resistance Training Adaptations

    PubMed Central

    Taylor, Lem; Poole, Chris; Pena, Earnest; Lewing, Morgan; Kreider, Richard; Foster, Cliffa; Wilborn, Colin

    2011-01-01

    The purpose of this study was to evaluate the effects of combined creatine and fenugreek extract supplementation on strength and body composition. Forty-seven resistance-trained men were matched according to body weight to ingest either 70 g of a dextrose placebo (PL), 5 g creatine/70 g of dextrose (CRD) or 3.5 g creatine/900 mg fenugreek extract (CRF) and participate in a 4-d/wk periodized resistance-training program for 8 weeks. At 0, 4, and 8 weeks, subjects were tested on body composition, muscular strength and endurance, and anaerobic capacity. Statistical analyses utilized separate 3 x 3 (condition [PL vs. CRD vs. CRF] x time [T1 vs. T2 vs. T3]) ANOVAs with repeated measures for all criterion variables (p ≤ 0.05). No group x time interaction effects or main effects (p > 0.05) were observed for any measures of body composition. The CRF group showed significant increases in lean mass at T2 (p = 0.001) and T3 (p = 0.001). Bench press 1RM increased in the PL group (p = 0.050) from T1-T3 and in CRD from T1-T2 (p = 0.001) while remaining significant at T3 (p < 0.001). The CRF group showed a significant increase in bench press 1RM from T1-T2 (p < 0.001), and also increased from T2-T3 (p = 0.032). Leg press 1RM significantly increased at all time points for the PL, CRD, and CRF groups (p < 0.05). No additional between- or within-group changes were observed for any performance variables and serum clinical safety profiles (p > 0.05). In conclusion, creatine plus fenugreek extract supplementation improved upper body strength and body composition as effectively as the combination of 5 g of creatine with 70 g of dextrose. Thus, the use of fenugreek with creatine supplementation may be an effective means for enhancing creatine uptake while eliminating the need for excessive amounts of simple carbohydrates. Key points: Fenugreek plus creatine supplementation may be a new means of increasing creatine uptake. Creatine plus fenugreek seems to be just as effective as the combination of creatine and dextrose.

  12. Ultrasound Color Doppler Image Segmentation and Feature Extraction in MCP and Wrist Region in Evaluation of Rheumatoid Arthritis.

    PubMed

    Snekhalatha, U; Muthubhairavi, V; Anburajan, M; Gupta, Neelkanth

    2016-09-01

    The present study focuses on automatically segmenting the blood-flow pattern in color Doppler ultrasound of the hand region in rheumatoid arthritis patients and on correlating the extracted statistical features and color Doppler parameters with standard parameters. Thirty patients with rheumatoid arthritis (RA) were examined, covering a total of 300 joints of both hands, i.e., 240 MCP joints and 60 wrists. Ultrasound color Doppler of both hands of all the patients was obtained. Automated segmentation of the color Doppler image was performed using a color enhancement scaling based segmentation algorithm. The region of interest was fixed in the MCP joints and wrist of the hand. Features were extracted from the defined ROI of the segmented output image. The color fraction was measured using Mimics software. The standard parameters such as HAQ score, DAS 28 score, and ESR were obtained for all the patients. The color fraction tended to be increased in the wrist and MCP3 joints, which indicates increased blood flow and color Doppler activity as part of the inflammation in the hand joints of RA. The ESR correlated significantly with extracted features such as the mean, standard deviation and entropy in the MCP3 and MCP4 joints and the wrist region. The developed automated color image segmentation algorithm provides a quantitative analysis for the diagnosis and assessment of RA. The correlation between the color Doppler parameters and the standard parameters is of particular significance for the quantitative analysis of RA in the MCP3 joint and the wrist region.

  13. Score level fusion scheme based on adaptive local Gabor features for face-iris-fingerprint multimodal biometric

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying

    2014-05-01

    A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We have introduced a fusion scheme to gain a better understanding and fusion method for a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, as well as achieving more powerful local Gabor features of the multimodalities and obtaining better recognition performance through their fusion strategy, our architecture also outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.

  14. Identification of cancerous gastric cells based on common features extracted from hyperspectral microscopic images

    PubMed Central

    Zhu, Siqi; Su, Kang; Liu, Yumeng; Yin, Hao; Li, Zhen; Huang, Furong; Chen, Zhenqiang; Chen, Weidong; Zhang, Ge; Chen, Yihong

    2015-01-01

    We construct a microscopic hyperspectral imaging system to distinguish between normal and cancerous gastric cells. We study common transmission-spectra features that only emerge when the samples are dyed with hematoxylin and eosin (H&E) stain. Subsequently, we classify the obtained visible-range transmission spectra of the samples into three zones. Distinct features are observed in the spectral responses between the normal and cancerous cell nuclei in each zone, which depend on the pH level of the cell nucleus. Cancerous gastric cells are precisely identified according to these features. The average cancer-cell identification accuracy obtained with a backpropagation algorithm program trained with these features is 95%. PMID:25909000

  15. A procedure for the extraction of airglow features in the presence of strong background radiation

    NASA Astrophysics Data System (ADS)

    Swift, W. R.; Torr, D. G.; Hamilton, C.; Dougani, H.; Torr, M. R.

    1990-09-01

    A technique is developed that can be used to derive the total intensity of band emissions from twilight airglow measurements when the basic spectral signature of the band to be considered is known. The method is designed to automatically extract total band or line intensities of a signal embedded in background radiation several orders of magnitude greater in brightness. It is shown that the technique can reliably measure the intensity of both weak and strong band and line emissions in the presence of strong twilight background radiation. The method of extraction is implemented as part of a general-purpose spectral analysis program written in VAX FORTRAN. This extraction procedure has been used successfully on emissions of FeI, Ca(+), N2(+) (1N) (0-0) and (0-1), and OH in the near UV; the OI red (630 nm) and green (558 nm) lines in the visible; and the OH Meinel bands and the O(+)(2P) 732 nm emission in the near IR.

  16. Classification of Informal Settlements Through the Integration of 2d and 3d Features Extracted from Uav Data

    NASA Astrophysics Data System (ADS)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  17. Designing a robust feature extraction method based on optimum allocation and principal component analysis for epileptic EEG signal classification.

    PubMed

    Siuly, Siuly; Li, Yan

    2015-04-01

    The aim of this study is to design a robust feature extraction method for the classification of multiclass EEG signals, to determine valuable features from original epileptic EEG data, and to discover an efficient classifier for the features. An optimum allocation based principal component analysis method, named OA_PCA, is developed for feature extraction from epileptic EEG data. As EEG data from different channels are correlated and huge in number, the optimum allocation (OA) scheme is used to discover the most favorable representatives with minimal variability from a large amount of EEG data. Principal component analysis (PCA) is applied to construct uncorrelated components and also to reduce the dimensionality of the OA samples for enhanced recognition. In order to choose a suitable classifier for the OA_PCA feature set, four popular classifiers are applied and tested: the least square support vector machine (LS-SVM), the naive Bayes classifier (NB), the k-nearest neighbor algorithm (KNN), and linear discriminant analysis (LDA). Furthermore, our approaches are compared with some recent research work. The experimental results show that the LS-SVM_1v1 approach yields 100% overall classification accuracy (OCA), an improvement of up to 7.10% over existing algorithms for the epileptic EEG data. The major finding of this research is that the LS-SVM with the 1v1 system is the best technique for the OA_PCA features in epileptic EEG signal classification, outperforming all the recently reported methods in the literature.

  18. Cold Adaptation, Ca2+ Dependency and Autolytic Stability Are Related Features in a Highly Active Cold-Adapted Trypsin Resistant to Autoproteolysis Engineered for Biotechnological Applications

    PubMed Central

    Olivera-Nappa, Alvaro; Reyes, Fernando; Andrews, Barbara A.; Asenjo, Juan A.

    2013-01-01

    Pig trypsin is routinely used as a biotechnological tool, due to its high specificity and ability to be stored as an inactive stable zymogen. However, it is not an optimum enzyme for conditions found in wound debriding for medical uses and trypsinization processes for protein analysis and animal cell culturing, where low Ca2+ dependency, high activity in mild conditions and easy inactivation are crucial. We isolated and thermodynamically characterized a highly active cold-adapted trypsin for medical and laboratory use that is four times more active than pig trypsin at 10°C and at least 50% more active than pig trypsin up to 50°C. Contrary to pig trypsin, this enzyme has a broad optimum pH between 7 and 10 and is very insensitive to Ca2+ concentration. The enzyme is only distantly related to previously described cryophilic trypsins. We built and studied molecular structure models of this trypsin and performed molecular dynamic calculations. Key residues and structures associated with calcium dependency and cryophilicity were identified. Experiments indicated that the protein is unstable and susceptible to autoproteolysis. Correlating experimental results and structural predictions, we designed mutations to improve the resistance to autoproteolysis and conserve activity for longer periods after activation. One single mutation provided around 25 times more proteolytic stability. Due to its cryophilic nature, this trypsin is easily inactivated by mild denaturation conditions, which is ideal for controlled proteolysis processes without requiring inhibitors or dilution. We clearly show that cold adaptation, Ca2+ dependency and autolytic stability in trypsins are related phenomena that are linked to shared structural features and evolve in a concerted fashion. Hence, both structurally and evolutionarily they cannot be interpreted and studied separately as previously done. PMID:23951314

  19. Feature Extraction of PDV Challenge Data Set A with Digital Down Shift (DDS)

    SciTech Connect

    Tunnell, T. W.

    2012-10-18

    This slide-show is about data analysis in photonic Doppler velocimetry. The digital down shift subtracts a specified velocity (frequency) from all components in the Fourier frequency domain and generates both the down shifted in-phase and out-of-phase waveforms so that phase and displacement can be extracted through a continuous unfold of the arctangent.
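
    A minimal sketch of such a down shift, assuming a complex-mixing implementation (the slide set's own processing chain may differ); low-pass filtering of the mixed signal is omitted for brevity, and the wavelength default is illustrative.

    ```python
    import numpy as np

    def digital_down_shift(signal, fs, f_shift, wavelength=1550e-9):
        t = np.arange(len(signal)) / fs
        # Mixing produces the down-shifted in-phase and quadrature waveforms.
        mixed = signal * np.exp(-2j * np.pi * f_shift * t)
        # Continuous unfold of the arctangent gives the optical phase...
        phase = np.unwrap(np.angle(mixed))
        # ...and displacement follows from the interferometric relation.
        return phase * wavelength / (4 * np.pi)
    ```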

  20. Use of Landsat-derived temporal profiles for corn-soybean feature extraction and classification

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Carnes, J. G.; Austin, W. W.

    1982-01-01

    A physical model derived from multitemporal-multispectral data acquired by Landsat satellites is presented to describe crop-specific behavior and new features. A feasibility study over 40 sites was performed to classify the segment pixels into corn, soybeans, and others using the new features and a linear classifier. Results agree well with other existing methods, and it is shown that the multitemporal-multispectral scanner data can be transformed into two parameters that are closely related to the target of interest and thus can be used in classification. The approach is less time intensive than other techniques and requires labeling of only pure pixels.

  1. [Adaptation of a sensitive DNA extraction method for detection of Entamoeba histolytica by real-time polymerase chain reaction].

    PubMed

    Pınar, Ahmet; Akyön, Yakut; Alp, Alpaslan; Ergüven, Sibel

    2010-07-01

    This study aimed to adapt a sensitive DNA extraction protocol for stool samples for the real-time polymerase chain reaction (PCR) detection of Entamoeba histolytica, which causes substantial morbidity and mortality worldwide. Stool extraction is a problematic step and has direct effects on PCR sensitivity. In order to improve the sensitivity of E.histolytica detection by real-time PCR, the QIAamp DNA stool minikit (Qiagen, Germany) was modified in this study by adding an overnight incubation step with proteinase K and sodium dodecyl sulfate (SDS). Three different extraction methods [(1) the original method, (2) the cetyltrimethyl-ammonium bromide (CTAB) method, (3) the modified method] were evaluated for their effects on sensitivity in real-time quantitative PCR (Artus RealArt TM E.histolytica RG PCR Kit, Qiagen Diagnostics, Germany). For this purpose, several concentrations of standard E.histolytica DNA were spiked into parasite-free stool samples and the three extraction protocols were performed. The detection sensitivity of the QIAamp DNA stool minikit was found to be 5000 copies/ml and that of the CTAB method 500 copies/ml. The detection sensitivity was improved to 5 copies/ml by the modified QIAamp DNA stool minikit protocol. Since the detection sensitivities of nucleic acid extraction protocols for stool samples directly affect the sensitivity of PCR amplification, different extraction protocols should be evaluated for different microorganisms.

  2. The Research of Feature Extraction Method of Liver Pathological Image Based on Multispatial Mapping and Statistical Properties

    PubMed Central

    Liu, Huiling; Xia, Bingbing; Yi, Dehui

    2016-01-01

    We propose a new feature extraction method for liver pathological images based on multispatial mapping and statistical properties. For liver pathological images with hematoxylin-eosin staining, the R and B channels reflect the sensitivity of liver pathological images better, while the entropy space and Local Binary Pattern (LBP) space reflect the texture features of the image better. To obtain more comprehensive information, we map liver pathological images to the entropy space, LBP space, R space, and B space. The traditional Higher Order Local Autocorrelation Coefficients (HLAC) cannot reflect the overall information of the image, so we propose an average-correction HLAC feature. We calculate the statistical properties and the average gray value of pathological images and then update the current pixel value to the absolute value of the difference between the current pixel gray value and the average gray value, which is more sensitive to gray-value changes in pathological images. Lastly, the HLAC template is used to calculate the features of the updated image. The experimental results show that the improved multispatial-mapping features achieve better classification performance for liver cancer. PMID:27022407
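
    A minimal sketch of the average-correction step followed by a single autocorrelation term; the full HLAC mask set is omitted, and the function names are illustrative.

    ```python
    import numpy as np

    def average_corrected(img):
        # Replace each pixel by |pixel - global mean| to emphasize deviations.
        return np.abs(img - img.mean())

    def hlac_term(img, offset=(1, 0)):
        """One second-order HLAC term: sum over x of f(x) * f(x + offset)."""
        dy, dx = offset
        a = img[:img.shape[0] - dy, :img.shape[1] - dx]
        b = img[dy:, dx:]
        return float((a * b).sum())
    ```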

  3. Automated extraction of absorption features from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Geophysical and Environmental Research Imaging Spectrometer (GERIS) data

    NASA Technical Reports Server (NTRS)

    Kruse, Fred A.; Calvin, Wendy M.; Seznec, Olivier

    1988-01-01

    Automated techniques were developed for the extraction and characterization of absorption features from reflectance spectra. The absorption feature extraction algorithms were successfully tested on laboratory, field, and aircraft imaging spectrometer data. A suite of laboratory spectra of the most common minerals was analyzed and absorption band characteristics tabulated. A prototype expert system was designed, implemented, and successfully tested to allow identification of minerals based on the extracted absorption band characteristics. AVIRIS spectra for a site in the northern Grapevine Mountains, Nevada, have been characterized and the minerals sericite (fine grained muscovite) and dolomite were identified. The minerals kaolinite, alunite, and buddingtonite were identified and mapped for a site at Cuprite, Nevada, using the feature extraction algorithms on the new Geophysical and Environmental Research 64 channel imaging spectrometer (GERIS) data. The feature extraction routines (written in FORTRAN and C) were interfaced to the expert system (written in PROLOG) to allow both efficient processing of numerical data and logical spectrum analysis.

  4. Unsupervised clustering analyses of features extraction for a caries computer-assisted diagnosis using dental fluorescence images

    NASA Astrophysics Data System (ADS)

    Bessani, Michel; da Costa, Mardoqueu M.; Lins, Emery C. C. C.; Maciel, Carlos D.

    2014-02-01

    Computer-assisted diagnoses (CAD) are performed by systems with embedded knowledge. These systems work as a second opinion to the physician and use patient data to infer diagnoses for health problems. Caries is the most common oral disease and directly affects both individuals and the society. Here we propose the use of dental fluorescence images as input of a caries computer-assisted diagnosis. We use texture descriptors together with statistical pattern recognition techniques to measure the descriptors performance for the caries classification task. The data set consists of 64 fluorescence images of in vitro healthy and carious teeth including different surfaces and lesions already diagnosed by an expert. The texture feature extraction was performed on fluorescence images using RGB and YCbCr color spaces, which generated 35 different descriptors for each sample. Principal components analysis was performed for the data interpretation and dimensionality reduction. Finally, unsupervised clustering was employed for the analysis of the relation between the output labeling and the diagnosis of the expert. The PCA result showed a high correlation between the extracted features; seven components were sufficient to represent 91.9% of the original feature vectors information. The unsupervised clustering output was compared with the expert classification resulting in an accuracy of 96.88%. The results show the high accuracy of the proposed approach in identifying carious and non-carious teeth. Therefore, the development of a CAD system for caries using such an approach appears to be promising.

  5. Improving the detection of wind fields from LIDAR aerosol backscatter using feature extraction

    NASA Astrophysics Data System (ADS)

    Bickel, Brady R.; Rotthoff, Eric R.; Walters, Gage S.; Kane, Timothy J.; Mayor, Shane D.

    2016-04-01

    The tracking of winds and atmospheric features has many applications, from predicting and analyzing weather patterns in the upper and lower atmosphere to monitoring air movement from pig and chicken farms. Doppler LIDAR systems exist to quantify the underlying wind speeds, but the cost of these systems can be relatively high, and processing limitations exist. The alternative is using an incoherent LIDAR system to analyze aerosol backscatter. Improving the detection and analysis of wind information from aerosol backscatter LIDAR systems will allow the adoption of these relatively low-cost instruments in environments where the size, complexity, and cost of other options are prohibitive. Using data from a simple aerosol backscatter LIDAR system, we attempt to extend the processing capabilities by calculating wind vectors through image correlation techniques to improve the detection of wind features.
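
    A minimal phase-correlation sketch of the image-correlation idea mentioned above: the peak of the normalized cross-power spectrum between two successive backscatter frames gives the pixel displacement, and a conversion factor (pixel size over frame interval, applied elsewhere) is assumed to turn it into a wind vector.

    ```python
    import numpy as np

    def frame_displacement(frame1, frame2):
        F1, F2 = np.fft.fft2(frame1), np.fft.fft2(frame2)
        cross_power = F1 * np.conj(F2)
        cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
        corr = np.abs(np.fft.ifft2(cross_power))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap shifts past the halfway point to negative displacements.
        if dy > frame1.shape[0] // 2:
            dy -= frame1.shape[0]
        if dx > frame1.shape[1] // 2:
            dx -= frame1.shape[1]
        return dy, dx
    ```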

  6. An ultra low power feature extraction and classification system for wearable seizure detection.

    PubMed

    Page, Adam; Pramod, Siddharth; Oates, Tim; Mohsenin, Tinoosh

    2015-08-01

    In this paper we explore the use of a variety of machine learning algorithms for designing a reliable and low-power, multi-channel EEG feature extractor and classifier for predicting seizures from electroencephalographic data (scalp EEG). Different machine learning classifiers including k-nearest neighbor, support vector machines, naïve Bayes, logistic regression, and neural networks are explored with the goal of maximizing detection accuracy while minimizing power, area, and latency. The input to each machine learning classifier is a 198-element feature vector containing 9 features for each of the 22 EEG channels, obtained over 1-second windows. All classifiers were able to obtain F1 scores over 80% and an onset sensitivity of 100% when tested on 10 patients. Among the five classifiers explored, logistic regression (LR) proved to have the minimum hardware complexity while providing an average F1 score of 91%. Both ASIC and FPGA implementations of logistic regression are presented and show the smallest area, power consumption, and the lowest latency when compared to previous work. PMID:26737931
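
    A minimal sketch of the classification stage described above, with random placeholder data standing in for the 198-dimensional window features (9 features x 22 channels); this shows the shape of the pipeline, not the paper's implementation.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 198))    # placeholder 1-s window features
    y_train = rng.integers(0, 2, size=1000)   # placeholder seizure labels
    X_test = rng.normal(size=(200, 198))
    y_test = rng.integers(0, 2, size=200)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("F1 score:", f1_score(y_test, clf.predict(X_test)))
    ```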

  7. Fourier-based shape feature extraction technique for computer-aided B-Mode ultrasound diagnosis of breast tumor.

    PubMed

    Lee, Jong-Ha; Seong, Yeong Kyeong; Chang, Chu-Ho; Park, Jinman; Park, Moonho; Woo, Kyoung-Gu; Ko, Eun Young

    2012-01-01

    Early detection of breast tumor is critical in determining the best possible treatment approach. Due to its superiority over mammography in detecting lesions in dense breast tissue, ultrasound imaging has become an important modality for breast tumor detection and classification. This paper discusses novel Fourier-based shape feature extraction techniques that provide enhanced classification accuracy for breast tumors in a computer-aided B-mode ultrasound diagnosis system. To demonstrate the effectiveness of the proposed method, experiments were performed using 4,107 ultrasound images with 2,508 malignancy cases. Experimental results show that the breast tumor classification accuracy of the proposed technique was 15.8%, 5.43%, 17.32%, and 13.86% higher than that of the previous shape features (number of protuberances, number of depressions, lobulation index, and dissimilarity), respectively. PMID:23367430

  8. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM

    NASA Astrophysics Data System (ADS)

    Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.

    2015-01-01

    This work addresses the problem of lung sound classification, in particular, the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually associated with tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification, which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content from the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform (WPT) bank of filters, and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used, which, in each fold, chooses as validation set a couple of cases, one including normal sounds and the other including wheezing sounds. Experimental results are reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, C-weighted SVM and MFCC, achieve 82.1% balanced accuracy, the best result reported for this problem to date. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem, even using the same feature extraction methods.
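
    A minimal sketch of the best-performing combination reported above, assuming librosa for MFCC extraction and scikit-learn's class weighting as a stand-in for the C-weighted SVM; all parameters are illustrative.

    ```python
    import numpy as np
    import librosa
    from sklearn.svm import SVC

    def mfcc_vector(y, sr, n_mfcc=13):
        # Mean MFCCs over time give one fixed-length vector per lung sound.
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    # 'balanced' raises the misclassification cost of the rarer wheeze
    # class, playing the role of the C-weighting described in the paper.
    clf = SVC(kernel='rbf', class_weight='balanced')
    ```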

  9. Sharp mandibular bone irregularities after lower third molar extraction: Incidence, clinical features and risk factors

    PubMed Central

    Alves-Pereira, Daniela; Valmaseda-Castellón, Eduard; Laskin, Daniel M.; Berini-Aytés, Leonardo; Gay-Escoda, Cosme

    2013-01-01

    Objectives: The purpose of this study was to determine the incidence and clinical symptoms associated with sharp mandibular bone irregularities (SMBI) after lower third molar extraction and to identify possible risk factors for this complication. Study Design: A mixed study design was used. A retrospective cohort study of 1432 lower third molar extractions was done to determine the incidence of SMBI, and a retrospective case-control study was done to determine potential demographic and etiologic factors by comparing the patients with postoperative SMBI with controls. Results: Twelve SMBI were found (0.84%). Age was the most important risk factor for this complication. The operated side and the presence of an associated radiolucent image were also significantly related to the development of mandibular bone irregularities. The depth of impaction of the tooth might also be an important factor, since erupted or nearly erupted third molars were more frequent in the SMBI group. Conclusions: SMBI are a rare postoperative complication after lower third molar removal. Older patients having left-side lower third molars removed are more likely to develop this problem. The treatment should be removal of the irregularity when the patient is symptomatic. Key words: Third molar, postoperative complication, bone irregularities, age. PMID:23524429

  10. Combining Spectral and Texture Features Using Random Forest Algorithm: Extracting Impervious Surface Area in Wuhan

    NASA Astrophysics Data System (ADS)

    Shao, Zhenfeng; Zhang, Yuan; Zhang, Lei; Song, Yang; Peng, Minjun

    2016-06-01

    Impervious surface area (ISA) is one of the most important indicators of urban environments. At present, based on multi-resolution remote sensing images, numerous approaches have been proposed to extract impervious surfaces, using statistical estimation, sub-pixel classification and spectral mixture analysis. Through these methods, impervious surfaces can be effectively applied to regional-scale planning and management. However, for large regions, high resolution remote sensing images can provide more details and are therefore more conducive to environmental monitoring and urban management analysis. Since the purpose of this study is to map impervious surfaces more effectively, three classification algorithms (random forests, decision trees, and artificial neural networks) were tested for their ability to map impervious surface. Random forests outperformed the decision trees and artificial neural networks in precision. Combining spectral indices and texture features, random forests was applied to impervious surface extraction with a producer's accuracy of 0.98, a user's accuracy of 0.97, an overall accuracy of 0.98 and a kappa coefficient of 0.97.
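
    A minimal sketch of the spectral-plus-texture stacking, with the feature arrays (e.g., spectral indices and texture measures) assumed to be computed elsewhere.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fit_impervious_classifier(spectral_feats, texture_feats, labels):
        X = np.hstack([spectral_feats, texture_feats])  # combine both domains
        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X, labels)
        return rf
    ```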

  11. Spectral Morphology for Feature Extraction from Multi- and Hyper-spectral Imagery.

    SciTech Connect

    Harvey, N. R.; Porter, R. B.

    2005-01-01

    For accurate and robust analysis of remotely-sensed imagery it is necessary to combine the information from both spectral and spatial domains in a meaningful manner. The two domains are intimately linked: objects in a scene are defined in terms of both their composition and their spatial arrangement, and cannot accurately be described by information from either of these two domains on its own. To date there have been relatively few methods for combining spectral and spatial information concurrently. Most techniques involve separate processing for extracting spatial and spectral information. In this paper we describe several extensions to traditional morphological operators that can treat spectral and spatial domains concurrently and can be used to extract relationships between these domains in a meaningful way. This includes the investigation and development of suitable vector-ordering metrics and machine-learning-based techniques for optimizing the various parameters of the morphological operators, such as the morphological operator itself, the structuring element and the vector-ordering metric. We demonstrate their application to a range of multi- and hyper-spectral image analysis problems.

  12. A Bayes optimal matrix-variate LDA for extraction of spatio-spectral features from EEG signals.

    PubMed

    Mahanta, Mohammad Shahin; Aghaei, Amirhossein S; Plataniotis, Konstantinos N

    2012-01-01

    Classification of mental states from electroencephalogram (EEG) signals is used for many applications in areas such as brain-computer interfacing (BCI). When represented in the frequency domain, the multichannel EEG signal can be considered as two-directional spatio-spectral data of high dimensionality. Extraction of salient features using feature extractors such as the commonly used linear discriminant analysis (LDA) is an essential step for the classification of these signals. However, multichannel EEG is naturally in matrix-variate format, while LDA and other traditional feature extractors are designed for vector-variate input. Consequently, these methods require a prior vectorization of the EEG signals, which ignores the inherent matrix-variate structure in the data and leads to high computational complexity. A matrix-variate formulation of LDA has previously been proposed. However, this heuristic formulation does not provide the Bayes optimality benefits of LDA. The current paper proposes a Bayes optimal matrix-variate formulation of LDA based on a matrix-variate model for the spatio-spectral EEG patterns. The proposed formulation also provides a strategy to select the most significant features among the different rows and columns.

  13. Application of the empirical mode decomposition to the extraction of features from EEG signals for mental task classification.

    PubMed

    Diez, Pablo F; Mut, Vicente; Laciar, Eric; Torres, Abel; Avila, Enrique

    2009-01-01

    In this work, a technique is proposed for the feature extraction of electroencephalographic (EEG) signals for the classification of mental tasks, an important part of the development of Brain Computer Interfaces (BCI). The Empirical Mode Decomposition (EMD) is a method capable of processing nonstationary and nonlinear signals such as the EEG. This technique was applied to EEG signals of 7 subjects performing 5 mental tasks. For each mode obtained from the EMD and each EEG channel, six features were computed: Root Mean Square (RMS), Variance, Shannon Entropy, Lempel-Ziv Complexity Value, and Central and Maximum Frequencies, yielding a feature vector of 180 components. The Wilks' lambda parameter was applied for the selection of the most important variables, reducing the dimensionality of the feature vector. The classification of mental tasks was performed using Linear Discriminant Analysis (LD) and Neural Networks (NN). With this method, the average classification over all subjects in the database was 91 ± 5% and 87 ± 5% using LD and NN, respectively. It was concluded that the EMD allows better performance in the classification of mental tasks than that obtained with other traditional methods, such as spectral analysis.
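
    A minimal sketch of the per-IMF feature computation, assuming the PyEMD package for the decomposition (an assumption, not the paper's tooling); the Lempel-Ziv complexity term is omitted for brevity, and the entropy here is computed on the normalized spectrum.

    ```python
    import numpy as np
    from PyEMD import EMD  # assumed third-party EMD implementation

    def emd_features(signal, fs):
        imfs = EMD()(signal)
        feats = []
        for imf in imfs:
            spectrum = np.abs(np.fft.rfft(imf)) ** 2
            freqs = np.fft.rfftfreq(len(imf), d=1.0 / fs)
            p = spectrum / (spectrum.sum() + 1e-12)
            feats += [
                np.sqrt(np.mean(imf ** 2)),        # root mean square
                np.var(imf),                       # variance
                -(p * np.log2(p + 1e-12)).sum(),   # (spectral) Shannon entropy
                (freqs * p).sum(),                 # central frequency
                freqs[np.argmax(spectrum)],        # maximum-power frequency
            ]
        return np.array(feats)
    ```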

  14. Fatigue damage localization using time-domain features extracted from nonlinear Lamb waves

    NASA Astrophysics Data System (ADS)

    Hong, Ming; Su, Zhongqing; Lu, Ye; Cheng, Li

    2014-03-01

    Nonlinear guided waves are sensitive to small-scale fatigue damage that may hardly be identified by traditional techniques. A characterization method for fatigue damage is established based on nonlinear Lamb waves in conjunction with the use of a piezoelectric sensor network. Theories on nonlinear Lamb waves for damage detection are first introduced briefly. Then, the ineffectiveness of using pure frequency-domain information of nonlinear wave signals for locating damage is discussed. With a revisit to traditional gross-damage localization techniques based on the time of flight, the idea of using temporal signal features of nonlinear Lamb waves to locate fatigue damage is introduced. This process involves a time-frequency analysis that enables the damage-induced nonlinear signal features, which are either undiscernible in the original time history or uninformative in the frequency spectrum, to be revealed. Subsequently, a finite element modeling technique is employed, accounting for various sources of nonlinearities in a fatigued medium. A piezoelectric sensor network is configured to actively generate and acquire probing Lamb waves that carry damage-induced nonlinear features. A probability-based diagnostic imaging algorithm is further proposed, presenting results in intuitive diagnostic images. The approach is experimentally verified on a fatigue-damaged aluminum plate, showing reasonably good accuracy. Compared to existing nonlinear ultrasonics-based inspection techniques, this approach uses a permanently attached sensor network that well accommodates automated online health monitoring; more significantly, it utilizes time-domain information of higher-order harmonics from time-frequency analysis, and demonstrates great potential for quantitative characterization of small-scale damage with improved localization accuracy.

  15. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains.

    PubMed

    Gonçalves, Ariadne Barbosa; Souza, Junior Silva; Silva, Gercina Gonçalves da; Cereda, Marney Pascoli; Pott, Arnildo; Naka, Marco Hiroshi; Pistori, Hemerson

    2016-01-01

    The classification of pollen species and types is an important task in many areas like forensic palynology, archaeological palynology and melissopalynology. This paper presents the first annotated image dataset for Brazilian Savannah pollen types that can be used to train and test computer vision based automatic pollen classifiers. A first baseline of human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. In order to assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned and tested. The results of these tests are also presented in this paper. PMID:27276196

  16. Feature Extraction and Machine Learning for the Classification of Brazilian Savannah Pollen Grains

    PubMed Central

    Souza, Junior Silva; da Silva, Gercina Gonçalves

    2016-01-01

    The classification of pollen species and types is an important task in many areas like forensic palynology, archaeological palynology and melissopalynology. This paper presents the first annotated image dataset for Brazilian Savannah pollen types that can be used to train and test computer vision based automatic pollen classifiers. A first baseline of human and computer performance for this dataset has been established using 805 pollen images of 23 pollen types. In order to assess the computer performance, a combination of three feature extractors and four machine learning techniques has been implemented, fine-tuned and tested. The results of these tests are also presented in this paper. PMID:27276196

  17. Structural features and in vivo antitussive activity of the water extracted polymer from Glycyrrhiza glabra.

    PubMed

    Saha, Sudipta; Nosál'ová, Gabriella; Ghosh, Debjani; Flešková, Dana; Capek, Peter; Ray, Bimalendu

    2011-05-01

    Antitussive drugs are amongst the most widely used medications worldwide; however, no new class of drugs has been introduced into the market for many years. Herein, we have analyzed the water-extracted polymeric fraction (WE) of Glycyrrhiza glabra. This arabinogalactan-protein-enriched fraction, ≥ 85% of which is precipitated with Yariv reagent, consisted mainly of 3- and 3,6-linked galactopyranosyl and 5- and 3,5-linked arabinofuranosyl residues. Peroral administration of this polymer at a dose of 50 mg/kg body weight decreases the number of citric acid induced cough efforts in guinea pigs more effectively than codeine. It does not induce significant change in the values of specific airway resistance or provoke any observable adverse effects.

  18. Morphological feature extraction for the classification of digital images of cancerous tissues.

    PubMed

    Thiran, J P; Macq, B

    1996-10-01

    This paper presents a new method for the automatic recognition of cancerous tissues from an image of a microscopic section. Based on shape and size analysis of the observed cells, this method provides the physician with nonsubjective numerical values for four criteria of malignancy. This automatic approach is based on mathematical morphology, and more specifically on the use of geodesic transformations. This technique is used first to remove the background noise from the image, then to segment the nuclei of the cells and to analyze their shape, size, and texture. From the values of the extracted criteria, an automatic classification of the image (cancerous or not) is finally performed.

  1. [Tensor Feature Extraction Using Multi-linear Principal Component Analysis for Brain Computer Interface].

    PubMed

    Wang, Jinjia; Yang, Liang

    2015-06-01

    The brain computer interface (BCI) can be used to control external devices directly through electroencephalogram (EEG) information. A multi-linear principal component analysis (MPCA) framework was used to address the limitations of processing multichannel EEG signals in tensor form with traditional principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA). Based on MPCA, we used tensor-to-matrix projection to achieve dimensionality reduction and feature extraction, and then used the Fisher linear classifier to classify the features. Furthermore, we applied this novel method to BCI competition II dataset 4 and BCI competition IV dataset 3 in the experiment, using a second-order tensor representation of time-space EEG data and a third-order tensor representation of time-space-frequency EEG data. The best results, superior to those from other dimensionality reduction methods, were obtained by extensive tuning of the parameters P and Q. For the second-order tensor, the highest accuracy rates achieved were 81.0% and 40.1%, and for the third-order tensor, the highest accuracy rates were 76.0% and 43.5%, respectively.

  2. Qualitative Features Extraction from Sensor Data using Short-time Fourier Transform

    NASA Technical Reports Server (NTRS)

    Amini, Abolfazl M.; Figueroa, Fernando

    2004-01-01

    The information gathered from sensors is used to determine the health of a sensor. Once a normal mode of operation is established, any deviation from the normal behavior indicates a change. This change may be due to a malfunction of the sensor(s) or of the system (or process). The step-up and step-down features, as well as sensor disturbances, are assumed to be exponential. An RC network is used to model the main process, which is defined by a step-up (charging), drift, and step-down (discharging). The sensor disturbances and a spike are added while the system is in drift. The system runs for a period of at least three time constants of the main process every time a process feature occurs (e.g. a step change). The Short-Time Fourier Transform of the signal is taken using the Hamming window, with three window widths. The DC value is removed from the windowed data prior to taking the FFT. The resulting three-dimensional spectral plots provide good time-frequency resolution. The results indicate distinct shapes corresponding to each process feature.
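
    A minimal sketch of the windowed-FFT step described above: a Hamming window is applied, the DC value is removed from each frame before the FFT, and three window widths are tried. The signal model, sample rate and widths are illustrative assumptions.

    ```python
    # Short-time FFT with a Hamming window and per-frame DC removal.
    import numpy as np

    fs = 100.0                                  # sample rate (Hz), assumed
    t = np.arange(0, 30, 1 / fs)
    # Exponential step-up (charging) plus a disturbance during drift.
    signal = 1 - np.exp(-t / 5) + 0.3 * np.exp(-((t - 15) / 0.5) ** 2)

    def stft_hamming(x, width, hop):
        """Short-time FFT magnitudes with DC removed from each frame."""
        window = np.hamming(width)
        frames = []
        for start in range(0, len(x) - width + 1, hop):
            frame = x[start:start + width]
            frame = frame - frame.mean()        # remove DC before the FFT
            frames.append(np.abs(np.fft.rfft(frame * window)))
        return np.array(frames)                 # time x frequency

    for width in (64, 128, 256):                # three window widths
        spec = stft_hamming(signal, width, hop=width // 2)
        print(width, spec.shape)
    ```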

  3. Anthocyanin characterization, total phenolic quantification and antioxidant features of some Chilean edible berry extracts.

    PubMed

    Brito, Anghel; Areche, Carlos; Sepúlveda, Beatriz; Kennelly, Edward J; Simirgiotis, Mario J

    2014-01-01

    The anthocyanin composition and HPLC fingerprints of six small berries endemic to the VIII region of Chile were investigated using high resolution mass analysis for the first time (HR-ToF-ESI-MS). The antioxidant features of the six endemic species were compared, including a variety of blueberry, which is one of the most commercially significant berry crops in Chile. The anthocyanin fingerprints obtained for the fruits were compared and correlated with the antioxidant features measured by the bleaching of the DPPH radical, the ferric reducing antioxidant power (FRAP), the superoxide anion scavenging activity assay (SA), and the total content of phenolics, flavonoids and anthocyanins measured by spectroscopic methods. Thirty-one anthocyanins were identified, and the major ones were quantified by HPLC-DAD, mostly branched 3-O-glycosides of delphinidin, cyanidin, petunidin, peonidin and malvidin. Three phenolic acids (feruloylquinic acid, chlorogenic acid, and neochlorogenic acid) and six flavonols (hyperoside, isoquercitrin, quercetin, rutin, myricetin and isorhamnetin) were also identified. Calafate fruits showed the highest antioxidant activity (2.33 ± 0.21 μg/mL in the DPPH assay), followed by blueberry (3.32 ± 0.18 μg/mL) and arrayán (5.88 ± 0.21 μg/mL). PMID:25072199

  4. Application of the interferometric synthetic aperture radar (IFSAR) correlation file for use in feature extraction

    NASA Astrophysics Data System (ADS)

    Simental, Edmundo; Guthrie, Verner

    2002-11-01

    Fine resolution synthetic aperture radar (SAR) and interferometric synthetic aperture radar (IFSAR) have been widely used for the purpose of creating viable terrain maps. A map is only as good as the information it contains. Therefore, it is a major priority of the mapmakers that the data that go into the process be as complete and accurate as possible. In this paper, we analyze IFSAR correlation/de-correlation data to help extract terrain feature information. The correlation data contain the correlation coefficient between the bottom and top IFSAR radar channels, stored as a 32-bit floating-point number. This number is a measure of the absolute complex correlation coefficient between the signals that are received in each channel. The range of these numbers is between zero and unity: unity indicates 100% correlation and zero indicates no correlation. The correlation is a function of several system parameters including signal-to-noise ratio (SNR), local geometry, and scattering mechanism. The two radar channels are physically close together, so the signals are inherently highly correlated; significant differences are found beyond the fourth decimal place. We have concentrated our analysis on small features that are easily detectable in the correlation/de-correlation data and not so easily detectable in the elevation or magnitude data.
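
    For reference, the absolute complex correlation coefficient between the two channels is commonly estimated over a local window. The sketch below shows that estimator on synthetic data; the window size and the synthetic channels are assumptions, not the paper's processing chain.

    ```python
    # Windowed absolute complex correlation (coherence) between two
    # co-registered interferometric channels; values lie in [0, 1].
    import numpy as np

    def coherence(top, bottom, win=5):
        """|sum(t * conj(b))| / sqrt(sum|t|^2 * sum|b|^2) over a window."""
        pad = win // 2
        num = top * np.conj(bottom)
        p1, p2 = np.abs(top) ** 2, np.abs(bottom) ** 2
        out = np.zeros(top.shape, dtype=float)
        for i in range(pad, top.shape[0] - pad):
            for j in range(pad, top.shape[1] - pad):
                s = np.s_[i - pad:i + pad + 1, j - pad:j + pad + 1]
                out[i, j] = np.abs(num[s].sum()) / np.sqrt(p1[s].sum() * p2[s].sum())
        return out

    rng = np.random.default_rng(1)
    top = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
    # Nearly identical second channel, as for closely spaced antennas.
    bottom = 0.99 * top + 0.01 * (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
    print(coherence(top, bottom).mean())        # close to 1
    ```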

  5. Structured covariance principal component analysis for real-time onsite feature extraction and dimensionality reduction in hyperspectral imaging.

    PubMed

    Zabalza, Jaime; Ren, Jinchang; Ren, Jie; Liu, Zhe; Marshall, Stephen

    2014-07-10

    Presented in a three-dimensional structure called a hypercube, hyperspectral imaging suffers from a large volume of data and high computational cost for data analysis. To overcome such drawbacks, principal component analysis (PCA) has been widely applied for feature extraction and dimensionality reduction. However, a severe bottleneck is how to compute the PCA covariance matrix efficiently and avoid computational difficulties, especially when the spatial dimension of the hypercube is large. In this paper, structured covariance PCA (SC-PCA) is proposed for fast computation of the covariance matrix. In line with how spectral data is acquired in either the push-broom or tunable filter method, different implementation schemes of SC-PCA are presented. As the proposed SC-PCA can determine the covariance matrix from partial covariance matrices in parallel even without prior deduction of the mean vector, it facilitates real-time data analysis while the hypercube is acquired. This has significantly reduced the scale of required memory and also allows efficient onsite feature extraction and data reduction to benefit subsequent tasks in coding and compression, transmission, and analytics of hyperspectral data.
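
    The enabling identity, that the covariance matrix can be assembled from running partial sums without first computing the mean vector, can be sketched as below. This is not the paper's SC-PCA implementation, only its core idea on a synthetic push-broom cube; the cube dimensions are assumptions.

    ```python
    # Covariance from partial accumulations, one push-broom line at a time,
    # without prior deduction of the mean vector.
    import numpy as np

    rng = np.random.default_rng(2)
    cube = rng.normal(size=(100, 80, 32))       # lines x samples x bands
    n, d = 100 * 80, 32

    S = np.zeros((d, d))                        # running sum of outer products
    m = np.zeros(d)                             # running sum of spectra
    for line in cube:                           # available as each line arrives
        S += line.T @ line
        m += line.sum(axis=0)

    mean = m / n
    cov = (S - n * np.outer(mean, mean)) / (n - 1)

    # Reference: covariance computed conventionally on the full cube.
    flat = cube.reshape(-1, d)
    print(np.allclose(cov, np.cov(flat, rowvar=False)))   # True
    ```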

  6. Multi-channel EEG signal feature extraction and pattern recognition on horizontal mental imagination task of 1-D cursor movement for brain computer interface.

    PubMed

    Serdar Bascil, M; Tesneli, Ahmet Y; Temurtas, Feyzullah

    2015-06-01

    Brain computer interfaces (BCIs), based on multi-channel electroencephalogram (EEG) signal processing, convert brain signal activities into machine control commands, providing a new way to communicate with a computer by extracting electroencephalographic activity. This paper deals with feature extraction and classification of horizontal mental-task patterns for 1-D cursor movement from EEG signals. The hemispherical power changes are computed and compared in the alpha and beta frequency bands, and horizontal cursor control is extracted using only mental imagination of cursor movements. In the first stage, features are extracted with the well-known average signal power or power difference (alpha and beta) method, and principal component analysis is used to reduce the feature dimensions. All features are then classified, and the mental task patterns recognized, by three neural network classifiers (learning vector quantization, multilayer neural network and probabilistic neural network), which yield acceptably good results, evaluated via the k-fold cross-validation technique.
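
    A minimal sketch of the band-power feature stage followed by PCA reduction, assuming synthetic trials, eight channels, and standard alpha/beta band edges; this is an illustration of the feature pipeline, not the authors' exact implementation.

    ```python
    # Alpha/beta band power per channel via Welch's method, then PCA.
    import numpy as np
    from scipy.signal import welch
    from sklearn.decomposition import PCA

    fs = 160                                    # sample rate (Hz), assumed
    rng = np.random.default_rng(3)
    trials = rng.normal(size=(60, 8, fs * 2))   # trials x channels x samples

    def band_power(x, lo, hi):
        """Mean spectral power in the [lo, hi) band along the last axis."""
        f, p = welch(x, fs=fs, nperseg=fs)
        return p[..., (f >= lo) & (f < hi)].mean(axis=-1)

    alpha = band_power(trials, 8, 13)           # (60, 8)
    beta = band_power(trials, 13, 30)           # (60, 8)
    features = np.hstack([alpha, beta])         # (60, 16)

    reduced = PCA(n_components=4).fit_transform(features)
    print(reduced.shape)                        # (60, 4)
    ```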

  7. Hyperspectral Feature Detection Onboard the Earth Observing One Spacecraft using Superpixel Segmentation and Endmember Extraction

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Bornstein, Benjamin; Bue, Brian D.; Tran, Daniel Q.; Chien, Steve A.; Castano, Rebecca

    2012-01-01

    We present a demonstration of onboard hyperspectral image processing with the potential to reduce mission downlink requirements. The system detects spectral endmembers and then uses them to map units of surface material. This summarizes the content of the scene, reveals spectral anomalies warranting fast response, and reduces data volume by two orders of magnitude. We have integrated this system into the Autonomous Sciencecraft Experiment for operational use onboard the Earth Observing One (EO-1) spacecraft. The system does not require prior knowledge about spectra of interest. We report on a series of trial overflights in which identical spacecraft commands are effective for autonomous spectral discovery and mapping across varied target features, scenes and imaging conditions.

  8. Structural features and antitumor activity of a novel polysaccharide from alkaline extract of Phellinus linteus mycelia.

    PubMed

    Pei, Juan-Juan; Wang, Zhen-Bin; Ma, Hai-Le; Yan, Jing-Kun

    2015-01-22

    A novel high molecular weight polysaccharide (PL-N1) was isolated from an alkaline extract of the cultured Phellinus linteus mycelia. The weight average molecular weight (Mw) of PL-N1 was estimated at 343,000 kDa. PL-N1 comprised arabinose, xylose, glucose, and galactose in the molar ratio of 4.0:6.7:1.3:1.0. The chemical structure of PL-N1 was investigated by FTIR and NMR spectroscopies and methylation analysis. The results showed that the backbone of PL-N1 comprised (1→4)-linked β-D-xylopyranosyl residues, (1→2)-linked α-D-xylopyranosyl residues, (1→4)-linked α-D-glucopyranosyl residues, (1→5)-linked β-D-arabinofuranosyl residues, (1→4)-linked β-D-xylopyranosyl residues branched at O-2, and (1→4)-linked β-D-galactopyranosyl residues branched at O-6. The branches consisted of (1→)-linked α-D-arabinofuranosyl residues. An antitumor activity assay in vitro showed that PL-N1 could inhibit the growth of HepG2 cells to a certain extent in a dose-dependent manner. Thus, PL-N1 may be developed as a potential, natural antitumor agent and functional food.

  9. Gait feature extraction in Parkinson's disease using low-cost accelerometers.

    PubMed

    Stamatakis, Julien; Crémers, Julien; Maquet, Didier; Macq, Benoit; Garraux, Gaëtan

    2011-01-01

    The clinical hallmarks of Parkinson's disease (PD) are movement poverty and slowness (i.e. bradykinesia), muscle rigidity, limb tremor and gait disturbances. Parkinsonian gait includes slowness, shuffling, short steps, freezing of gait (FoG) and/or gait asymmetries. There are currently no validated clinical instruments or devices that allow a full characterization of gait disturbances in PD. As a step towards this goal, a system based on four accelerometers is proposed to increase the number of parameters that can be extracted to characterize parkinsonian gait disturbances such as FoG or gait asymmetries. After development of the hardware, an algorithm was developed that automatically epochs the signals on a stride-by-stride basis and quantifies, among others, the gait velocity, the stride time, the stance and swing phases, the single and double support phases and the maximum acceleration at toe-off, as validated by visual inspection of video recordings during the task. The results obtained in a PD patient and a healthy volunteer are presented. FoG detection will be improved using time-frequency analysis, and the system is about to be validated against a state-of-the-art 3D movement analysis system.
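
    A minimal sketch of stride-by-stride epoching from accelerometry: heel strikes taken as prominent peaks in the acceleration magnitude, with stride time as the interval between successive strikes. The synthetic waveform and peak-detection settings are assumptions; the paper's algorithm extracts many more parameters.

    ```python
    # Stride segmentation by peak detection on an accelerometer signal.
    import numpy as np
    from scipy.signal import find_peaks

    fs = 100                                    # sample rate (Hz), assumed
    t = np.arange(0, 10, 1 / fs)
    stride_hz = 0.9                             # ~1.1 s stride time
    # Sharp periodic bursts standing in for heel-strike accelerations.
    acc = np.abs(np.sin(np.pi * stride_hz * t)) ** 8
    acc += 0.05 * np.random.default_rng(4).normal(size=t.size)

    peaks, _ = find_peaks(acc, height=0.5, distance=int(0.6 * fs / stride_hz))
    stride_times = np.diff(peaks) / fs          # one value per stride
    print(f"mean stride time: {stride_times.mean():.2f} s")
    ```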

  10. Computer-aided diagnosis of interstitial lung disease: a texture feature extraction and classification approach

    NASA Astrophysics Data System (ADS)

    Vargas-Voracek, Rene; McAdams, H. Page; Floyd, Carey E., Jr.

    1998-06-01

    An approach for the classification of normal or abnormal lung parenchyma from selected regions of interest (ROIs) of chest radiographs is presented for computer-aided diagnosis of interstitial lung disease (ILD). The proposed approach uses a feed-forward neural network to classify each ROI based on a set of isotropic texture measures obtained from the joint grey-level distribution of pairs of pixels separated by a specific distance. Two hundred ROIs, each 64 × 64 pixels in size (11 × 11 mm), were extracted from digitized chest radiographs for testing. Diagnostic performance was evaluated with the leave-one-out method. Classification of independent ROIs achieved a sensitivity of 90% and a specificity of 84%, with an area under the receiver operating characteristic curve of 0.85. The diagnosis for each patient was correct in all cases when a 'majority vote' criterion over the classifications of the corresponding ROIs was applied to issue a normal or ILD patient classification. The proposed approach is a simple, fast, and consistent method for computer-aided diagnosis of ILD with very good performance. Further research will include additional cases, including differential diagnosis among ILD manifestations.
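
    Texture measures from the joint grey-level distribution of pixel pairs are conventionally computed via a grey-level co-occurrence matrix. The sketch below uses scikit-image and averages over four angles to approximate isotropy; the abstract does not name the exact measures, so standard Haralick-style properties stand in, and the ROI here is synthetic.

    ```python
    # Angle-averaged co-occurrence texture features for one ROI.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(5)
    roi = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)   # 64 x 64 ROI

    # Joint grey-level distribution of pixel pairs one pixel apart,
    # at four angles, symmetric and normalized.
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=64, symmetric=True, normed=True)

    features = {prop: graycoprops(glcm, prop).mean()           # angle average
                for prop in ("contrast", "homogeneity", "energy", "correlation")}
    print(features)
    ```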

  11. Analysis of Unresolved Spectral Infrared Signature for the Extraction of Invariant Features

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Payne, T.; Wilhelm, S.; Gregory, S.; Skinner, M.; Rudy, R.; Russell, R.; Brown, J.; Dao, P.

    2010-09-01

    This paper demonstrates a simple analytical technique for extraction of spectral radiance values for the solar panel and body from an unresolved spectral infrared signature of 3-axis stabilized low-earth orbit (LEO) satellites. It uses data collected by The Aerospace Corporation’s Broad-band Array Spectrograph System (BASS) instrument at the Air Force Maui Optical and Supercomputing (AMOS) site. The observation conditions were such that the signatures were due to emissive phenomenology and the contribution of earthshine was negligible. The analysis is based on a two-facet orientation model of the satellite. This model captures the basic, known behavior of the satellite body and its solar panels: one facet points to nadir and the second facet tracks the sun. The facet areas are unknown. Special conditions, determined on the basis of observational geometry, allow separation of the spectral radiance values of the solar panel and body. These values remain unchanged (i.e., are invariant) under steady illumination conditions even if the signature appears different from one observation to another. In addition, they provide information on the individual spectral makeup of the satellite solar panel and body materials.

  12. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume, and we study and identify the best combination of Gabor wavelet frequencies. High precision in detecting the needle voxels leads to robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a gain of 40%), and better robustness and confidence were confirmed in practical experiments.
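
    As a rough 2D stand-in for the multi-resolution idea (the paper works on full 3D volumes), the sketch below combines Gabor responses at several frequencies to highlight a line-like structure in a single slice; the frequency set and the synthetic slice are assumptions.

    ```python
    # Multi-frequency Gabor filtering of a 2D slice with a needle-like stripe.
    import numpy as np
    from skimage.filters import gabor

    slice_img = np.zeros((128, 128))
    slice_img[60:63, 20:110] = 1.0              # bright needle-like stripe
    slice_img += 0.2 * np.random.default_rng(6).normal(size=slice_img.shape)

    responses = []
    for frequency in (0.05, 0.1, 0.2):          # multi-resolution frequencies
        real, imag = gabor(slice_img, frequency=frequency, theta=0)
        responses.append(np.hypot(real, imag))  # magnitude response

    needle_map = np.max(responses, axis=0)      # strongest response per pixel
    print(needle_map.shape)
    ```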

  13. Adaptation of the muscles of mastication to the flat skull feature in the polar bear (Ursus maritimus).

    PubMed

    Sasaki, M; Endo, H; Yamagiwa, D; Takagi, H; Arishima, K; Makita, T; Hayashi, Y

    2000-01-01

    The muscles of mastication of the polar bear (Ursus maritimus) and those of the brown bear (U. arctos) were examined by an anatomical approach. In addition, examination of the skull was carried out in the polar bear, brown bear and giant panda (Ailuropoda melanoleuca). In the polar bear, the rostro-ventral part of the superficial layer of the M. masseter possessed an abundant fleshy portion folded in the rostral and lateral directions like an accordion. Moreover, the rostro-medial area of the superficial layer became hollow in the nuchal direction when the mouth was closed. The M. temporalis of the polar bear covered the anterior border of the coronoid process of the mandible and occupied almost the entire area of the cranial surface. The M. pterygoideus medialis of the polar bear was inserted on the ventral border of the mandible and on the ventral part of the temporal bone more widely than that of the brown bear. According to our measurements of the mandible, the effect of leverage in the polar bear was the smallest of the three species. In the polar bear, the skull was flat, and the space between the zygomatic arch and the ventral border of the mandible, occupied by the M. masseter, was the narrowest. It is suggested that the muscles of mastication of the polar bear are adapted to the flat skull, supplementing its functions. PMID:10676883

  14. Scale invariant feature transform in adaptive radiation therapy: a tool for deformable image registration assessment and re-planning indication

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Peroni, Marta; Riboldi, Marco; Sharp, Gregory C.; Ciardo, Delia; Alterio, Daniela; Orecchia, Roberto; Baroni, Guido

    2013-01-01

    Adaptive radiation therapy (ART) aims at compensating for anatomic and pathological changes to improve delivery along a treatment fraction sequence. Current ART protocols require time-consuming manual updating of all volumes of interest on the images acquired during treatment. Deformable image registration (DIR) and contour propagation stand as the state-of-the-art approach to automate the process, but the lack of DIR quality control methods hinders its introduction into clinical practice. We investigated the scale invariant feature transform (SIFT) method as a quantitative automated tool (1) for DIR evaluation and (2) for re-planning decision-making in the framework of ART treatments. As a preliminary test, SIFT invariance properties under shape-preserving and deformable transformations were studied on a computational phantom, yielding residual matching errors below the voxel dimension. Then a clinical dataset composed of 19 head and neck ART patients was used to quantify the performance in ART treatments. For goal (1), results demonstrated SIFT potential as an operator-independent DIR quality assessment metric: we measured DIR group systematic residual errors up to 0.66 mm, against 1.35 mm provided by rigid registration. The group systematic errors of both bony and all other structures were also analyzed, attesting to the presence of anatomical deformations. The correct automated identification of 18 patients who might benefit from ART out of the total 22 cases using SIFT demonstrated its capabilities toward achievement of goal (2).

  15. On the use of wavelet for extracting feature patterns from Multitemporal google earth satellite data sets

    NASA Astrophysics Data System (ADS)

    Lasaponara, R.

    2012-04-01

    The great amount of multispectral VHR satellite imagery, available even free of charge in Google Earth, has opened new strategic challenges in the field of remote sensing for archaeological studies. These challenges substantially deal with: (i) the strategic exploitation of satellite data as much as possible, (ii) the setting up of effective and reliable automatic and/or semiautomatic data processing strategies and (iii) the integration with other data sources, from documentary resources to traditional ground survey, historical documentation, geophysical prospection, etc. VHR satellites provide high resolution data which can improve knowledge of past human activities, providing precious qualitative and quantitative information developed to such an extent that they currently share many of the physical characteristics of aerial imagery. This makes them ideal for investigations ranging from a local to a regional scale (see, for example, Lasaponara and Masini 2006a,b, 2007a, 2011; Masini and Lasaponara 2006, 2007; Sparavigna, 2010). Moreover, satellite data are still the only data source for research performed in areas where aerial photography is restricted for military or political reasons. Among the main advantages of using satellite remote sensing compared to traditional field archaeology, herein we briefly focus on the use of wavelet data processing for enhancing Google Earth satellite data, with particular reference to multitemporal datasets. Study areas selected from Southern Italy, the Middle East and South America are presented and discussed. The results point out that automatic image enhancement can be successfully applied as a first step of supervised classification and intelligent data analysis for semiautomatic identification of features of archaeological interest. Reference: Lasaponara R, Masini N (2006a) On the potential of panchromatic and multispectral Quickbird data for archaeological prospection. Int J Remote Sens 27:3607-3614.

  16. GEOEYE-1 Satellite Stereo-Pair DEM Extraction Using Scale-Invariant Feature Transform on a Parallel Processing Platform

    NASA Astrophysics Data System (ADS)

    Daliakopoulos, Ioannis; Tsanis, Ioannis

    2013-04-01

    A module for Digital Elevation Model (DEM) extraction from Very High Resolution (VHR) satellite stereo-pair imagery was developed. A procedure for parallel processing of cascading image tiles is used to handle the large dataset requirements of VHR satellite imagery. The Scale-Invariant Feature Transform (SIFT) algorithm is used to detect potentially homologous features in the members of the stereo-pair. The resulting feature pairs are filtered using the RANdom SAmple Consensus (RANSAC) algorithm with a variable distance threshold. Finally, homologous pairs are converted to point cloud ground coordinates for DEM generation. The module is tested with a 0.5 m × 0.5 m GeoEye-1 stereo-pair acquired over an area of 25 km² on the island of Crete, Greece. A sensitivity analysis is performed to determine the optimum module parameterization. The criterion of average point spacing irregularity is introduced to evaluate the quality and assess the effective resolution of the produced DEMs. The resulting 1.5 m × 1.5 m DEM has superior detail over the 2 m and 5 m DEMs used as reference and yields a Root Mean Square Error (RMSE) of about 1 m compared to ground truth measurements.
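
    A minimal sketch of the SIFT detection, ratio-test matching and RANSAC filtering steps using OpenCV. A synthetic shifted image pair stands in for the stereo-pair, the ratio and RANSAC thresholds are assumptions, and the tiling, geocoding and triangulation stages of the module are omitted.

    ```python
    # SIFT feature matching between two views, filtered by RANSAC.
    import cv2
    import numpy as np

    # Synthetic pair: a textured image and a horizontally shifted copy.
    rng = np.random.default_rng(10)
    left = cv2.GaussianBlur((rng.random((256, 256)) * 255).astype(np.uint8), (5, 5), 0)
    right = np.roll(left, 8, axis=1)            # stand-in for the second view

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(left, None)
    kp2, des2 = sift.detectAndCompute(right, None)

    # Lowe's ratio test keeps only distinctive putative matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC filtering of the pairs via the fundamental matrix.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]
    print(len(pts1), "inlier pairs kept for ground-coordinate conversion")
    ```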

  17. Rapid discrimination and feature extraction of three Chamaecyparis species by static-HS/GC-MS.

    PubMed

    Chen, Ying-Ju; Lin, Chun-Ya; Cheng, Sen-Sung; Chang, Shang-Tzen

    2015-01-28

    This study aimed to develop a rapid and accurate analytical method for discriminating three Chamaecyparis species (C. formosensis, C. obtusa, and C. obtusa var. formosana) that could not be easily distinguished by volatile compounds. A total of 23 leaf samples from the three species were analyzed by static-headspace (static-HS) sampling coupled with gas chromatography-mass spectrometry (GC-MS). The static-HS procedure, whose experimental parameters were properly optimized, yielded a high Pearson correlation-based similarity between essential oil and VOC composition (r = 0.555-0.999). Thirty-six major constituents were identified; together with the results of cluster analysis (CA), a large variation in contents among the three species was observed. Principal component analysis (PCA) illustrated graphically the relationships between characteristic components and tree species. It was clearly demonstrated that the static-HS-based procedure greatly enhanced the speed of precise analysis of the chemical fingerprint in small sample amounts, thus providing a fast and reliable tool for predicting constituent characteristics of essential oils, and also offering good opportunities for studying the role of these feature compounds in chemotaxonomy or ecophysiology. PMID:25590241

  1. Study on image feature extraction and classification for human colorectal cancer using optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Shu-Wei; Yang, Shan-Yi; Huang, Wei-Cheng; Chiu, Han-Mo; Lu, Chih-Wei

    2011-06-01

    Most colorectal cancers develop from adenomatous polyps; adenomatous lesions have a well-documented relationship to colorectal cancer in previous studies. Thus, detecting the morphological changes between polyp and tumor can allow early diagnosis of colorectal cancer and simultaneous removal of lesions. Optical coherence tomography (OCT) has several advantages, including high resolution and non-invasive cross-sectional imaging in vivo. In this study, we investigated the relationship between B-scan OCT image features and the histology of malignant human colorectal tissues, as well as between the en-face OCT image and the endoscopic image pattern. The in-vitro experiments were performed with a swept-source optical coherence tomography (SS-OCT) system; the swept source has a center wavelength of 1310 nm and a 160 nm wavelength scanning range, which produced 6 μm axial resolution. In this study, the en-face images were reconstructed by integrating the axial values in the 3D OCT images. The reconstructed en-face images show the same roundish or gyrus-like pattern as the endoscopy images, and the pattern of the en-face images relates to the stage of colon cancer. An endoscopic OCT technique would provide three-dimensional imaging and rapid reconstruction of en-face images, which can increase the speed of colon cancer diagnosis. Our results indicate a great potential for early detection of colorectal adenomas using OCT imaging.
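
    The en-face reconstruction step, integrating the volume along the axial (depth) dimension, is simple enough to sketch directly; the synthetic volume and the axis ordering assumed here (depth first) are illustrative.

    ```python
    # En-face image from a 3D OCT volume by integrating along depth.
    import numpy as np

    rng = np.random.default_rng(7)
    volume = rng.random(size=(512, 256, 256))   # depth x X x Y, assumed order

    en_face = volume.sum(axis=0)                # integrate the axial values
    en_face = (en_face - en_face.min()) / np.ptp(en_face)   # normalize to [0, 1]
    print(en_face.shape)                        # (256, 256)
    ```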

  2. A Method of Three-Dimensional Recording of Mandibular Movement Based on Two-Dimensional Image Feature Extraction

    PubMed Central

    Li, Zhongke; Yang, Huifang; Lü, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Background and Objective To develop a real-time recording system based on computer binocular vision and two-dimensional image feature extraction to accurately record mandibular movement in three dimensions. Methods A computer-based binocular vision device with two digital cameras was used in conjunction with a fixed head retention bracket to track occlusal movement. Software was developed for extracting target spatial coordinates in real time based on two-dimensional image feature recognition. Plaster models of a subject's upper and lower dentition were made using conventional methods. A mandibular occlusal splint was made on the plaster model, and then the occlusal surface was removed. Temporary denture base resin was used to make a 3-cm handle extending outside the mouth, connecting the anterior labial surface of the occlusal splint with a detection target bearing intersecting lines designed for spatial coordinate extraction. The subject's head was firmly fixed in place, and the occlusal splint was fully seated on the mandibular dentition. The subject was then asked to make various mouth movements while the mandibular movement target locus point set was recorded. Differences between the coordinate values and the actual values of the 30 intersections on the detection target were analyzed using paired t-tests. Results The three-dimensional trajectory curve shapes of the mandibular movements were consistent with the respective subject movements. Mean XYZ coordinate differences and paired t-test results were as follows: X axis: -0.0037 ± 0.02953, P = 0.502; Y axis: 0.0037 ± 0.05242, P = 0.704; and Z axis: 0.0007 ± 0.06040, P = 0.952. The t-test results showed that the coordinate values of the 30 cross points were not statistically significantly different from the actual values (P > 0.05). Conclusions Use of a real-time recording system of three-dimensional mandibular movement based on computer binocular vision and two-dimensional image feature recognition technology produced a recording
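
    The core binocular-vision step, recovering a 3D coordinate from the 2D image positions of a target point in two calibrated cameras, can be sketched with OpenCV's triangulation; the projection matrices and pixel coordinates below are placeholders, not the paper's calibration.

    ```python
    # Triangulating one target point from two camera views.
    import cv2
    import numpy as np

    # 3x4 projection matrices, assumed known from stereo calibration.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])  # 50 mm baseline

    # Matched 2D coordinates of one cross point on the detection target.
    pt1 = np.array([[120.0], [80.0]])
    pt2 = np.array([[95.0], [80.0]])

    X = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4-vector
    X = (X[:3] / X[3]).ravel()                   # Euclidean 3D coordinate
    print(X)
    ```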

  3. An Integrated Front-End Readout And Feature Extraction System for the BaBar Drift Chamber

    SciTech Connect

    Zhang, Jinlong; /Colorado U.

    2006-08-10

    The BABAR experiment has been operating at SLAC's PEP-II asymmetric B-Factory since 1999. The accelerator has achieved more than three times its original design luminosity of 3 x 10{sup 33} cm{sup -2} s{sup -1}, with plans for an additional factor of three in the next two years. To meet the experiment's performance requirements in the face of significantly higher trigger and background rates, the drift chamber's front-end readout system has been redesigned around the Xilinx Spartan 3 FPGA. The new system implements analysis and feature-extraction of digitized waveforms in the front-end, reducing the data bandwidth required by a factor of four.

  4. Sensors Fusion based Online Mapping and Features Extraction of Mobile Robot in the Road Following and Roundabout

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed A. H.; Mailah, Musa; Yussof, Wan Azhar B.; Hamedon, Zamzuri B.; Yussof, Zulkifli B.; Majeed, Anwar P. P.

    2016-02-01

    A road-feature-extraction-based mapping system using a sensor fusion technique for mobile robot navigation in road environments is presented in this paper. Online mapping is performed continuously in the road environment to find the road properties that enable the robot to move from a certain start position to a pre-determined goal while discovering and detecting roundabouts. Sensor fusion involving a laser range finder, a camera and odometry, installed on a new platform, is used to find the path of the robot and localize it within its environment. Local maps are developed using the camera and laser range finder to recognize road border parameters such as road width, curbs and roundabouts. Results show the capability of the robot, with the proposed algorithms, to effectively identify the road environment and build local maps for road following and roundabout detection.

  5. Automated feature extraction for the classification of human in vivo 13C NMR spectra using statistical pattern recognition and wavelets.

    PubMed

    Tate, A R; Watson, D; Eglen, S; Arvanitis, T N; Thomas, E L; Bell, J D

    1996-06-01

    If magnetic resonance spectroscopy (MRS) is to become a useful tool in clinical medicine, it will be necessary to find reliable methods for analyzing and classifying MRS data. Automated methods are desirable because they can remove user bias and can deal with large amounts of data, allowing the use of all the available information. In this study, techniques for automatically extracting features for the classification of MRS in vivo data are investigated. Among the techniques used were wavelets, principal component analysis, and linear discriminant function analysis. These techniques were tested on a set of 75 in vivo 13C spectra of human adipose tissue from subjects from three different dietary groups (vegan, vegetarian, and omnivore). It was found that it was possible to assign automatically 94% of the vegans and omnivores to their correct dietary groups, without the need for explicit identification or measurement of peaks.
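
    A minimal sketch of the chain described above: wavelet coefficients as features, principal component analysis for reduction, and a linear discriminant classifier. Synthetic spectra stand in for the in vivo 13C data, and the wavelet family, decomposition level and component count are assumptions.

    ```python
    # Wavelet features -> PCA -> linear discriminant classification.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(8)
    spectra = rng.normal(size=(75, 1024))       # 75 spectra, length assumed
    labels = rng.integers(0, 3, size=75)        # vegan / vegetarian / omnivore

    # Multi-level discrete wavelet decomposition of each spectrum.
    features = np.array([np.concatenate(pywt.wavedec(s, "db4", level=4))
                         for s in spectra])

    reduced = PCA(n_components=10).fit_transform(features)
    lda = LinearDiscriminantAnalysis().fit(reduced, labels)
    print(f"training accuracy: {lda.score(reduced, labels):.2f}")
    ```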

  6. On-line Flagging of Anomalies and Adaptive Sequential Hypothesis Testing for Fine-feature Characterization of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Payne, T.; Kinateder, K.; Dao, P.; Beecher, E.; Boone, D.; Elliott, B.

    The objective of on-line flagging in this paper is to perform interactive assessment of geosynchronous satellite anomalies such as cross-tagging of satellites in a cluster, solar panel offset changes, etc. This assessment utilizes a Bayesian belief propagation procedure and includes automated updating of baseline signature data for the satellite while accounting for seasonal changes. Its purpose is to enable an ongoing, automated assessment of satellite behavior through its life cycle using the photometry data collected during the synoptic search performed by a ground or space-based sensor as part of its metrics mission. Changes in the satellite features are reported along with the probabilities of Type I and Type II errors. The objective of adaptive sequential hypothesis testing in this paper is to define future sensor tasking for the purpose of characterizing fine features of the satellite. The tasking is designed to maximize new information with the least number of photometry data points to be collected during the synoptic search by a ground or space-based sensor; its calculation is based on information entropy techniques. The tasking is defined by considering a sequence of hypotheses regarding the fine features of the satellite, and the optimal observation conditions are then ordered so as to maximize new information about a chosen fine feature. The combined objective of on-line flagging and adaptive sequential hypothesis testing is to progressively discover new information about the features of a geosynchronous satellite by leveraging the regular but sparse cadence of data collection during the synoptic search performed by a ground or space-based sensor.

  7. Comparison of sEMG-Based Feature Extraction and Motion Classification Methods for Upper-Limb Movement

    PubMed Central

    Guo, Shuxiang; Pang, Muye; Gao, Baofeng; Hirata, Hideyuki; Ishihara, Hidenori

    2015-01-01

    The surface electromyography (sEMG) technique is proposed for muscle activation detection and intuitive control of prostheses or robot arms. Motion recognition is widely used to map sEMG signals to target motions. One of the main factors preventing the implementation of this kind of method in real-time applications is the unsatisfactory motion recognition rate and time consumption. The purpose of this paper is to compare eight combinations of four feature extraction methods (Root Mean Square (RMS), Detrended Fluctuation Analysis (DFA), Weight Peaks (WP), and Muscular Model (MM)) and two classifiers (Neural Networks (NN) and Support Vector Machine (SVM)) for the task of mapping sEMG signals to eight upper-limb motions, to find out the relation between these methods and propose a proper combination to solve this issue. Seven subjects participated in the experiment, and six muscles of the upper limb were selected to record sEMG signals. The experimental results showed that the NN classifier obtained the highest recognition accuracy rate (88.7%) during the training process while SVM performed better in real-time experiments (85.9
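
    A minimal sketch of one of the compared combinations, windowed RMS features per channel feeding a support vector classifier, on synthetic signals; the channel count matches the six muscles above, while the window length and kernel are assumptions.

    ```python
    # RMS features per sEMG channel over non-overlapping windows, then SVM.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(9)
    n_trials, n_channels, n_samples = 160, 6, 1000       # six muscles, assumed
    emg = rng.normal(size=(n_trials, n_channels, n_samples))
    motions = rng.integers(0, 8, size=n_trials)          # eight upper-limb motions

    def rms_features(x, win=200):
        """RMS per channel over non-overlapping windows, flattened per trial."""
        n_win = x.shape[-1] // win
        segs = x[..., :n_win * win].reshape(*x.shape[:-1], n_win, win)
        return np.sqrt((segs ** 2).mean(axis=-1)).reshape(x.shape[0], -1)

    features = rms_features(emg)                         # (160, 30)
    clf = SVC(kernel="rbf").fit(features, motions)
    print(f"training accuracy: {clf.score(features, motions):.2f}")
    ```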