Sample records for recognition method combining

  1. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed which is based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, a method for detecting and segmenting the candidate license plate region is developed. Secondly, a new feature extraction model is designed that combines three sets of features. Thirdly, the license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method increases to 95.7% and the processing time decreases to 51.4 ms.
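
    A minimal sketch of the combined-feature-plus-BPNN idea, using scikit-learn's MLPClassifier (trained by back-propagation) as a stand-in for the paper's BPNN; the three feature sets, the image size, and the class count below are illustrative assumptions rather than the paper's actual model.

      # Hedged sketch: back-propagation network on a combined character feature vector.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      def combined_features(char_img):
          """Concatenate three illustrative feature sets from one character image."""
          img = char_img.astype(float) / 255.0
          f_pixels = img.ravel()            # raw pixel intensities
          f_rows = img.mean(axis=1)         # horizontal projection profile
          f_cols = img.mean(axis=0)         # vertical projection profile
          return np.concatenate([f_pixels, f_rows, f_cols])

      # Assumed data: segmented 20x20 character images and their labels.
      rng = np.random.default_rng(0)
      X_imgs = rng.integers(0, 256, size=(600, 20, 20))
      y = rng.integers(0, 34, size=600)     # e.g. 34 plate-character classes

      X = np.stack([combined_features(im) for im in X_imgs])
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      bpnn = MLPClassifier(hidden_layer_sizes=(128,), solver="sgd",
                           learning_rate_init=0.01, max_iter=300, random_state=0)
      bpnn.fit(X_tr, y_tr)
      print("held-out accuracy:", bpnn.score(X_te, y_te))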

  2. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a facial expression recognition method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method uses the improved LTP to extract features, which are then fed to the stacked auto-encoder serving as the detector and classifier. The combination of LTP and the stacked auto-encoder is thus realized for facial expression recognition. The recognition rate on the CK+ database is improved significantly.
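
    A minimal sketch of a basic local ternary pattern, split into upper and lower binary code maps whose histograms form the expression feature vector; the threshold, the synthetic image, and the histogram length are illustrative assumptions, and the record's "improved" LTP variant is not reproduced.

      # Hedged sketch: basic LTP codes (upper/lower) and their histogram feature.
      import numpy as np

      def ltp_codes(gray, t=5):
          """Return (upper, lower) LTP code maps for a 2-D uint8 image."""
          g = gray.astype(np.int32)
          c = g[1:-1, 1:-1]                       # center pixels
          # 8 neighbours, clockwise from top-left
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                     (1, 1), (1, 0), (1, -1), (0, -1)]
          upper = np.zeros_like(c)
          lower = np.zeros_like(c)
          for bit, (dy, dx) in enumerate(offsets):
              n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
              upper |= ((n >= c + t).astype(np.int32) << bit)   # ternary value +1
              lower |= ((n <= c - t).astype(np.int32) << bit)   # ternary value -1
          return upper, lower

      gray = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
      up, lo = ltp_codes(gray)
      # Histograms of the two code maps form the expression feature vector.
      feat = np.concatenate([np.bincount(up.ravel(), minlength=256),
                             np.bincount(lo.ravel(), minlength=256)])
      print(feat.shape)   # (512,)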

  3. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition

    PubMed Central

    2017-01-01

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user’s location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, the viewpoint-weighted dynamic time warping with multiple sensors has enhanced performance by preventing joint measurement errors and noise due to sensor measurement tolerance, which has resulted in the enhancement of recognition performance by comparing multiple joint sequences effectively. PMID:28817094

  4. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition.

    PubMed

    Choi, Hyo-Rim; Kim, TaeYong

    2017-08-17

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user's location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, the viewpoint-weighted dynamic time warping with multiple sensors has enhanced performance by preventing joint measurement errors and noise due to sensor measurement tolerance, which has resulted in the enhancement of recognition performance by comparing multiple joint sequences effectively.
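
    A minimal sketch of dynamic time warping between two joint-position sequences with optional per-frame weights, standing in for the viewpoint-weighted and motion-weighted distances described above; the weighting scheme, joint data, and gesture templates are illustrative assumptions.

      # Hedged sketch: weighted DTW distance and template-based gesture matching.
      import numpy as np

      def weighted_dtw(seq_a, seq_b, w_a=None, w_b=None):
          """seq_a: (n, d), seq_b: (m, d); w_a/w_b: optional per-frame weights."""
          n, m = len(seq_a), len(seq_b)
          w_a = np.ones(n) if w_a is None else w_a
          w_b = np.ones(m) if w_b is None else w_b
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  # frame-to-frame cost, scaled by the two frame weights
                  cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1]) * w_a[i - 1] * w_b[j - 1]
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      # Recognition: pick the gesture template with the smallest DTW distance.
      rng = np.random.default_rng(1)
      query = rng.normal(size=(40, 3))                  # one joint, 40 frames
      templates = {"wave": rng.normal(size=(35, 3)),
                   "push": rng.normal(size=(50, 3))}
      best = min(templates, key=lambda k: weighted_dtw(query, templates[k]))
      print("recognized gesture:", best)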

  5. Facial expression recognition based on improved deep belief networks

    NASA Astrophysics Data System (ADS)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a facial expression recognition method based on the Local Binary Pattern (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier of the LBP features. The combination of LBP and improved deep belief networks is thus realized for facial expression recognition. The recognition rate on the JAFFE (Japanese Female Facial Expression) database is improved significantly.

  6. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction in recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image as the one whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
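
    A minimal sketch of the score-level fusion step: an SVM is trained to separate same-expression pairs from different-expression pairs using their two matching scores. The score distributions below are synthetic placeholders, not outputs of an AAM/FACS pipeline.

      # Hedged sketch: SVM fusion of a shape score and an appearance score.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n = 400
      # two matching scores per pair: [shape_score, appearance_score]
      same_expr = rng.normal(loc=[0.8, 0.7], scale=0.15, size=(n, 2))
      diff_expr = rng.normal(loc=[0.4, 0.3], scale=0.15, size=(n, 2))
      X = np.vstack([same_expr, diff_expr])
      y = np.concatenate([np.ones(n), np.zeros(n)])     # 1 = same expression class

      fusion_svm = SVC(kernel="rbf", probability=True).fit(X, y)
      pair_scores = np.array([[0.75, 0.65]])            # scores for a new pair
      print("fused same-expression probability:",
            fusion_svm.predict_proba(pair_scores)[0, 1])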

  7. Combining 1D and 2D linear discriminant analysis for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Ji, Hongbing; Wang, Lei; Lin, Lin

    2011-11-01

    In this paper, a novel feature extraction method for palmprint recognition termed Two-dimensional Combined Discriminant Analysis (2DCDA) is proposed. By connecting the adjacent rows of an image sequentially, the obtained new covariance matrices contain useful information about local geometric structures in the image that is discarded by 2DLDA. In this way, 2DCDA combines LDA and 2DLDA for a promising recognition accuracy, while the number of coefficients of its projection matrix is lower than that of other two-dimensional methods. Experimental results on the CASIA palmprint database demonstrate the effectiveness of the proposed method.

  8. Multimodal biometric method that combines veins, prints, and shape of a finger

    NASA Astrophysics Data System (ADS)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo

    2011-01-01

    Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
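
    A minimal sketch of score-level fusion with normalization and a weighted SUM rule, in the spirit of the fusion step described above; z-score normalization and the fixed fusion weights below are simplifications standing in for the paper's fuzzy normalization and SVM-based combination.

      # Hedged sketch: z-score normalization + weighted SUM fusion of three modalities.
      import numpy as np

      def z_norm(scores):
          return (scores - scores.mean()) / scores.std()

      rng = np.random.default_rng(0)
      n = 500
      # raw matching scores per sample for finger vein, fingerprint, finger geometry
      vein, prints, geom = rng.normal(size=(3, n))
      weights = np.array([0.5, 0.3, 0.2])               # assumed fusion weights

      fused = (weights[0] * z_norm(vein)
               + weights[1] * z_norm(prints)
               + weights[2] * z_norm(geom))
      accept = fused > 0.0                              # threshold chosen on a dev set
      print("acceptance rate:", accept.mean())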

  9. Methods and Systems for Representing, Using and Displaying Time-Varying Information on the Semantic Web

    DTIC Science & Technology

    2013-11-26

    Combination with Simple Features," IEE European Workshop on Handwriting Analysis and Recognition, pp. 6/1-6, Brussels, Jul. 1994. Bock, J., et al. ... Document Analysis and Recognition, pp. 147-150, Oct. 1993. Starner, T., et al., "On-Line Cursive Handwriting Recognition Using Speech Recognition Methods

  10. The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition

    NASA Astrophysics Data System (ADS)

    Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher

    2012-01-01

    This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. This system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A multi-word recognition framework based on weighted finite-state transducers is presented, using an explicit word segmentation, a combination of isolated word recognizers, and a language model. The system was tested both for isolated word recognition and for multi-word line recognition and submitted to the Rimes-ICDAR2011 competition. This system outperformed all previously proposed systems on these tasks.

  11. Fast cat-eye effect target recognition based on saliency extraction

    NASA Astrophysics Data System (ADS)

    Li, Li; Ren, Jianlin; Wang, Xingbin

    2015-09-01

    Background complexity is a main cause of false detections in cat-eye target recognition. Human vision has a selective attention property which can help search for a salient target in complex unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF). This method combines traditional cat-eye target recognition with the selective characteristics of visual attention. Furthermore, parallel processing enables it to achieve fast recognition. Experimental results show that the proposed method performs better in accuracy, robustness and speed compared to other methods.

  12. Container-code recognition system based on computer vision and deep neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    Automatic container-code recognition has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both computer-vision-based and neural-network-based algorithms, and generates a better detection result by combining them to avoid the drawbacks of the two methods. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism can improve the performance of the neural networks at runtime. The proposed system achieves an overall recognition accuracy of 93%.

  13. Deep kernel learning method for SAR image target recognition

    NASA Astrophysics Data System (ADS)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  14. Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-07-21

    With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images.
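
    A minimal sketch of a quality-weighted HOG (wHOG): HOG descriptors from horizontal strips of a body image are scaled by a per-strip quality score before concatenation. The strip layout and the contrast-based quality proxy are illustrative assumptions, not the paper's quality measure.

      # Hedged sketch: quality-weighted HOG feature over body-image strips.
      import numpy as np
      from skimage.feature import hog

      def whog(body_img, n_strips=4):
          h = body_img.shape[0]
          feats = []
          for k in range(n_strips):
              strip = body_img[k * h // n_strips:(k + 1) * h // n_strips]
              f = hog(strip, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2), feature_vector=True)
              quality = strip.std() / 128.0      # assumed quality proxy (contrast)
              feats.append(quality * f)
          return np.concatenate(feats)

      img = np.random.default_rng(0).random((128, 64))   # stand-in body image
      print("wHOG length:", whog(img).shape[0])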

  15. Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for a recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method and the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images. PMID:27455264

  16. Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.

    PubMed

    Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G

    2017-09-01

    To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
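
    A minimal sketch of the custom ensemble's weighted majority vote: each base classifier's vote is weighted by an assumed validation accuracy. The classifier names, labels, and weights are illustrative placeholders.

      # Hedged sketch: weighted majority vote over four base classifiers.
      from collections import defaultdict

      def weighted_majority_vote(predictions, weights):
          """predictions: dict name -> predicted label; weights: dict name -> weight."""
          tally = defaultdict(float)
          for name, label in predictions.items():
              tally[label] += weights[name]
          return max(tally, key=tally.get)

      preds = {"tree": "walking", "knn": "running", "svm": "walking", "nn": "running"}
      weights = {"tree": 0.80, "knn": 0.78, "svm": 0.86, "nn": 0.84}
      print(weighted_majority_vote(preds, weights))   # "walking" (1.66 vs. 1.62)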

  17. Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods

    NASA Astrophysics Data System (ADS)

    Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi

    2018-03-01

    Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated word recognition is one of the key and most difficult problems in TWS. Most of the existing methods for Tibetan abbreviated word recognition are rule-based approaches, which need vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition, and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective and can be combined easily with the segmentation model, which significantly improves the performance of Tibetan word segmentation.

  18. Discrimination of Medicine Radix Astragali from Different Geographic Origins Using Multiple Spectroscopies Combined with Data Fusion Methods

    NASA Astrophysics Data System (ADS)

    Wang, Hai-Yan; Song, Chao; Sha, Min; Liu, Jun; Li, Li-Ping; Zhang, Zheng-Yong

    2018-05-01

    Raman spectra and ultraviolet-visible absorption spectra of four different geographic origins of Radix Astragali were collected. These data were analyzed using kernel principal component analysis combined with sparse representation classification. The results showed that the recognition rate reached 70.44% using Raman spectra for data input and 90.34% using ultraviolet-visible absorption spectra for data input. A new fusion method based on Raman combined with ultraviolet-visible data was investigated and the recognition rate was increased to 96.43%. The experimental results suggested that the proposed data fusion method effectively improved the utilization rate of the original data.

  19. Summary of tracking and identification methods

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Yang, Chun; Kadar, Ivan

    2014-06-01

    Over the last two decades, many solutions have arisen to combine target tracking estimation with classification methods. Target tracking includes developments from linear to non-linear and Gaussian to non-Gaussian processing. Pattern recognition includes detection, classification, recognition, and identification methods. Integrating tracking and pattern recognition has resulted in numerous approaches and this paper seeks to organize the various approaches. We discuss the terminology so as to have a common framework for various standards such as the NATO STANAG 4162 - Identification Data Combining Process. In a use case, we provide a comparative example highlighting that location information (as an example) with additional mission objectives from geographical, human, social, cultural, and behavioral modeling is needed to determine identification as classification alone does not allow determining identification or intent.

  20. Boost OCR accuracy using iVector based system combination approach

    NASA Astrophysics Data System (ADS)

    Peng, Xujun; Cao, Huaigu; Natarajan, Prem

    2015-01-01

    Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noises and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combine diverse recognition systems by using iVector based features, which is a newly developed method in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system, where iVector is transformed from a high-dimensional supervector of each text line and is used to predict the accuracy of OCR. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of text line images. We present evaluation results on an Arabic document database where the proposed method is compared against the single best OCR system using word error rate (WER) metric.

  1. Face recognition using slow feature analysis and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan

    2018-04-01

    In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then takes advantage of slow feature analysis for facial feature extraction. We name the new method, which combines slow feature analysis and the contourlet transform, CT-SFA. The experimental results on international standard face databases demonstrate that the new face recognition method is effective and competitive.

  2. Chemical Entity Recognition and Resolution to ChEBI

    PubMed Central

    Grego, Tiago; Pesquita, Catia; Bastos, Hugo P.; Couto, Francisco M.

    2012-01-01

    Chemical entities are ubiquitous throughout the biomedical literature, and the development of text-mining systems that can efficiently identify those entities is required. Due to the lack of available corpora and data resources, the community has focused its efforts on the development of gene and protein named entity recognition systems, but with the release of ChEBI and the availability of an annotated corpus, this task can be addressed. We developed a machine-learning-based method for chemical entity recognition and a lexical-similarity-based method for chemical entity resolution and compared them with Whatizit, a popular dictionary-based method. Our methods outperformed the dictionary-based method in all tasks, yielding an improvement in F-measure of 20% for the entity recognition task, 2–5% for the entity-resolution task, and 15% for combined entity recognition and resolution tasks. PMID:25937941

  3. Combining heterogenous features for 3D hand-held object recognition

    NASA Astrophysics Data System (ADS)

    Lv, Xiong; Wang, Shuang; Li, Xiangyang; Jiang, Shuqiang

    2014-10-01

    Object recognition has wide applications in the area of human-machine interaction and multimedia retrieval. However, due to the problems of visual polysemy and concept polymorphism, it is still a great challenge to obtain reliable recognition results for 2D images. Recently, with the emergence and easy availability of RGB-D equipment such as the Kinect, this challenge could be relieved because the depth channel brings more information. A very special and important case of object recognition is hand-held object recognition, as the hand is a straightforward and natural way for both human-human interaction and human-machine interaction. In this paper, we study the problem of 3D object recognition by combining heterogeneous features with different modalities and extraction techniques. Although hand-crafted features preserve low-level information such as shape and color, they are weaker at representing high-level semantic information than automatically learned features, especially deep features. Deep features have shown great advantages on large-scale recognition datasets but are not always as robust to rotation or scale variation as hand-crafted features. In this paper, we propose a method to combine hand-crafted point-cloud features and deep learned features from the RGB and depth channels. First, hand-held object segmentation is implemented by using depth cues and human skeleton information. Second, we combine the extracted heterogeneous 3D features at different stages using linear concatenation and multiple kernel learning (MKL). Then a trained model is used to recognize 3D hand-held objects. Experimental results validate the effectiveness and generalization ability of the proposed method.

  4. Automatic concept extraction from spoken medical reports.

    PubMed

    Happe, André; Pouliquen, Bruno; Burgun, Anita; Cuggia, Marc; Le Beux, Pierre

    2003-07-01

    The objective of this project is to investigate methods whereby a combination of speech recognition and automated indexing methods substitutes for current transcription and indexing practices. We based our study on existing speech recognition software programs and on NOMINDEX, a tool that extracts MeSH concepts from medical text in natural language and that is mainly based on a French medical lexicon and on the UMLS. For each document, the process consists of three steps: (1) dictation and digital audio recording, (2) speech recognition, (3) automatic indexing. The evaluation consisted of a comparison between the set of concepts extracted by NOMINDEX after the speech recognition phase and the set of keywords manually extracted from the initial document. The method was evaluated on a set of 28 patient discharge summaries extracted from the MENELAS corpus in French, corresponding to in-patients admitted for coronarography. The overall precision was 73% and the overall recall was 90%. Indexing errors were mainly due to word sense ambiguity and abbreviations. A specific issue was the fact that the standard French translation of MeSH terms lacks diacritics. A preliminary evaluation of speech recognition tools showed that the rate of accurate recognition was higher than 98%. Only 3% of the indexing errors were generated by inadequate speech recognition. We discuss several areas to focus on to improve this prototype. However, the very low rate of indexing errors due to speech recognition errors highlights the potential benefits of combining speech recognition techniques and automatic indexing.

  5. A study of speech emotion recognition based on hybrid algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Ju-xia; Zhang, Chao; Lv, Zhao; Rao, Yao-quan; Wu, Xiao-pei

    2011-10-01

    To effectively improve the recognition accuracy of a speech emotion recognition system, a hybrid algorithm which combines the Continuous Hidden Markov Model (CHMM), the All-Class-in-One Neural Network (ACON) and the Support Vector Machine (SVM) is proposed. In the SVM and ACON methods, some global statistics are used as emotional features, while in the CHMM method, instantaneous features are employed. The recognition rate of the proposed method is 92.25%, with a rejection rate of 0.78%. Furthermore, it achieves relative improvements of 8.53%, 4.69% and 0.78% compared with the ACON, CHMM and SVM methods, respectively. The experimental results confirm the efficiency of distinguishing the anger, happiness, neutral and sadness emotional states.

  6. Recognition of Similar Shaped Handwritten Marathi Characters Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Jane, Archana P.; Pund, Mukesh A.

    2012-03-01

    The growing need for handwritten Marathi character recognition in Indian offices such as passport and railway offices has made it a vital area of research. Similarly shaped characters are more prone to misclassification. In this paper a novel method is provided to recognize handwritten Marathi characters based on feature extraction and an adaptive smoothing technique. Feature selection methods avoid unnecessary patterns in an image, whereas the adaptive smoothing technique forms smooth character shapes. The combination of both these approaches leads to better results. Previous studies show that no single technique achieves 100% accuracy in the handwritten character recognition area. This approach of combining both adaptive smoothing and feature extraction gives better results (approximately 75-100) and expected outcomes.

  7. Counter-propagation network with variable degree variable step size LMS for single switch typing recognition.

    PubMed

    Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh

    2004-01-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as a communication adaptive device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. This restriction is a major hindrance. Therefore, a switch adaptive automatic recognition method with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method elicited a better recognition rate in comparison to alternative methods in the literature.
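
    A minimal sketch of a variable step-size LMS filter of the kind used to adapt to a drifting typing rate; the update rule, parameters, and the synthetic dot/dash duration signal below are illustrative assumptions rather than the paper's exact variable-degree variable-step-size algorithm.

      # Hedged sketch: variable step-size LMS one-step predictor of typing durations.
      import numpy as np

      def vss_lms(x, d, order=4, mu0=0.05, alpha=0.97, gamma=0.01,
                  mu_min=1e-4, mu_max=0.5):
          w = np.zeros(order)
          mu = mu0
          y = np.zeros(len(x))
          for n in range(order, len(x)):
              u = x[n - order:n][::-1]          # most recent samples first
              y[n] = w @ u
              e = d[n] - y[n]
              # step size grows with the squared error, within safe bounds
              mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)
              w = w + 2 * mu * e * u            # LMS weight update
          return y, w

      # Track a slowly drifting dot/dash duration sequence (synthetic).
      rng = np.random.default_rng(0)
      durations = 0.2 + 0.05 * np.sin(np.linspace(0, 6, 300)) + 0.01 * rng.normal(size=300)
      est, _ = vss_lms(durations, durations)
      print("final tracking error:", abs(est[-1] - durations[-1]))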

  8. 3D automatic anatomy recognition based on iterative graph-cut-ASM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Udupa, Jayaram K.; Bagci, Ulas; Alavi, Abass; Torigian, Drew A.

    2010-02-01

    We call the computerized assistive process of recognizing, delineating, and quantifying organs and tissue regions in medical imaging, occurring automatically during clinical image interpretation, automatic anatomy recognition (AAR). The AAR system we are developing includes five main parts: model building, object recognition, object delineation, pathology detection, and organ system quantification. In this paper, we focus on the delineation part. For the modeling part, we employ the active shape model (ASM) strategy. For recognition and delineation, we integrate several hybrid strategies of combining purely image based methods with ASM. In this paper, an iterative Graph-Cut ASM (IGCASM) method is proposed for object delineation. An algorithm called GC-ASM was presented at this symposium last year for object delineation in 2D images which attempted to combine synergistically ASM and GC. Here, we extend this method to 3D medical image delineation. The IGCASM method effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. We propose a new GC cost function, which effectively integrates the specific image information with the ASM shape model information. The proposed methods are tested on a clinical abdominal CT data set. The preliminary results show that: (a) it is feasible to explicitly bring prior 3D statistical shape information into the GC framework; (b) the 3D IGCASM delineation method improves on ASM and GC and can provide practical operational time on clinical images.

  9. Good initialization model with constrained body structure for scene text recognition

    NASA Astrophysics Data System (ADS)

    Zhu, Anna; Wang, Guoyou; Dong, Yangbo

    2016-09-01

    Scene text recognition has gained significant attention in the computer vision community. Character detection and recognition are the premise of text recognition and affect the overall performance to a large extent. We propose a good initialization model for scene character recognition from cropped text regions. We use constrained character body structures with deformable part-based models to detect and recognize characters against various backgrounds. The character body structures are obtained by an unsupervised discriminative clustering approach followed by a statistical model and a self-built minimum spanning tree model. Our method utilizes part appearance and location information, and combines character detection and recognition in cropped text regions. The evaluation results on the benchmark datasets demonstrate that our proposed scheme outperforms the state-of-the-art methods on both scene character recognition and word recognition.

  10. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    NASA Astrophysics Data System (ADS)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also an efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, provides a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability of real-time high-definition video exploitation.
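
    A minimal sketch of the "pre-trained CNN as invariant feature extractor + fast SVM classifier" idea, using an off-the-shelf ResNet backbone and a plain linear SVM; the proprietary frequency-domain SVM is not reproduced, and the tiny solid-color training images stand in for real target chips.

      # Hedged sketch: frozen CNN features fed to a linear SVM.
      import torch
      import torchvision.models as models
      import torchvision.transforms as T
      from sklearn.svm import LinearSVC
      from PIL import Image

      backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      backbone.fc = torch.nn.Identity()       # drop the classification head
      backbone.eval()

      preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                              T.Normalize(mean=[0.485, 0.456, 0.406],
                                          std=[0.229, 0.224, 0.225])])

      @torch.no_grad()
      def cnn_features(pil_images):
          batch = torch.stack([preprocess(im) for im in pil_images])
          return backbone(batch).numpy()      # (N, 512) feature vectors

      # Assumed data: target chips and their class labels.
      train_imgs = [Image.new("RGB", (64, 64), c) for c in ("red", "green", "blue", "red")]
      train_labels = [0, 1, 2, 0]
      clf = LinearSVC().fit(cnn_features(train_imgs), train_labels)
      print(clf.predict(cnn_features([Image.new("RGB", (64, 64), "green")])))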

  11. Comparison of sEMG-Based Feature Extraction and Motion Classification Methods for Upper-Limb Movement

    PubMed Central

    Guo, Shuxiang; Pang, Muye; Gao, Baofeng; Hirata, Hideyuki; Ishihara, Hidenori

    2015-01-01

    The surface electromyography (sEMG) technique is proposed for muscle activation detection and intuitive control of prostheses or robot arms. Motion recognition is widely used to map sEMG signals to the target motions. One of the main factors preventing the implementation of this kind of method for real-time applications is the unsatisfactory motion recognition rate and time consumption. The purpose of this paper is to compare eight combinations of four feature extraction methods (Root Mean Square (RMS), Detrended Fluctuation Analysis (DFA), Weight Peaks (WP), and Muscular Model (MM)) and two classifiers (Neural Networks (NN) and Support Vector Machine (SVM)), for the task of mapping sEMG signals to eight upper-limb motions, to find out the relation between these methods and propose a proper combination to solve this issue. Seven subjects participated in the experiment and six muscles of the upper-limb were selected to record sEMG signals. The experimental results showed that NN classifier obtained the highest recognition accuracy rate (88.7%) during the training process while SVM performed better in real-time experiments (85.9%). For time consumption, SVM took less time than NN during the training process but needed more time for real-time computation. Among the four feature extraction methods, WP had the highest recognition rate for the training process (97.7%) while MM performed the best during real-time tests (94.3%). The combination of MM and NN is recommended for strict real-time applications while a combination of MM and SVM will be more suitable when time consumption is not a key requirement. PMID:25894941
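
    A minimal sketch of one of the compared pipelines: windowed RMS feature extraction from multi-channel sEMG followed by an SVM classifier. The window length, channel count, and the synthetic signals are illustrative assumptions.

      # Hedged sketch: RMS features per sliding window + SVM motion classifier.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      def rms_features(emg, win=200, step=100):
          """emg: (n_samples, n_channels) -> (n_windows, n_channels) RMS features."""
          feats = []
          for start in range(0, emg.shape[0] - win + 1, step):
              seg = emg[start:start + win]
              feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
          return np.array(feats)

      rng = np.random.default_rng(0)
      X, y = [], []
      for motion in range(8):                      # eight upper-limb motions
          emg = rng.normal(scale=1.0 + 0.1 * motion, size=(4000, 6))   # 6 muscles
          f = rms_features(emg)
          X.append(f)
          y.append(np.full(len(f), motion))
      X, y = np.vstack(X), np.concatenate(y)

      print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())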

  12. Three-dimensional fingerprint recognition by using convolution neural network

    NASA Astrophysics Data System (ADS)

    Tian, Qianyu; Gao, Nan; Zhang, Zonghua

    2018-01-01

    With the development of science and technology and the improvement of social informatization, fingerprint recognition technology has become a hot research direction and has been widely applied in many fields because of its feasibility and reliability. The traditional two-dimensional (2D) fingerprint recognition method relies on matching feature points. This method is not only time-consuming but also loses the three-dimensional (3D) information of the fingerprint, and its robustness declines seriously under fingerprint rotation, scaling, damage and other issues. To solve these problems, 3D fingerprints have been used to recognize human beings. Because it is a new research field, there are still many challenging problems in 3D fingerprint recognition. This paper presents a new 3D fingerprint recognition method using a convolutional neural network (CNN). The 2D fingerprint image and the fingerprint depth map are each fed into a CNN, their features are fused by another CNN, and the fused features are classified to complete 3D fingerprint recognition. This method not only preserves the 3D information of fingerprints but also solves the problem of CNN input. Moreover, the recognition process is simpler than traditional feature-point matching algorithms. The 3D fingerprint recognition rate obtained using the CNN is compared with that of other fingerprint recognition algorithms. The experimental results show that the proposed 3D fingerprint recognition method has a good recognition rate and robustness.

  13. Secure method for biometric-based recognition with integrated cryptographic functions.

    PubMed

    Chiou, Shin-Yan

    2013-01-01

    Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the ratio for certification in biometric systems need not achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.

  14. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417

  15. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  16. Utterance independent bimodal emotion recognition in spontaneous communication

    NASA Astrophysics Data System (ADS)

    Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng

    2011-12-01

    Emotion expressions are sometimes mixed with utterance expressions in spontaneous face-to-face communication, which makes emotion recognition difficult. This article introduces methods for reducing utterance influences on visual parameters for audio-visual-based emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). Then, the utterance reduction is performed by finding the residual between the real visual parameters and the outputs of the utterance-related visual parameters. This article introduces the Fused Hidden Markov Model Inversion method, which is trained on a neutrally expressed audio-visual corpus, to solve the problem. To reduce the computing complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when it was used in a live environment.

  17. A multi-view face recognition system based on cascade face detector and improved Dlib

    NASA Astrophysics Data System (ADS)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector is used to extract Haar-like features from the training samples, and these features are used to train a cascade classifier with the AdaBoost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we apply the proposed method to recognizing face images taken from different viewing directions, including the horizontal view, overlooking view, and looking-up view, and investigate a suitable monitoring scheme. This method works well for multi-view face recognition; it has also been simulated and tested, showing satisfactory experimental results.
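
    A minimal sketch of the Haar-cascade detection stage using OpenCV's bundled pre-trained frontal-face cascade; the input file name is a placeholder, and the improved-Dlib recognition stage is not shown.

      # Hedged sketch: Haar-cascade (AdaBoost-trained) face detection with OpenCV.
      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      img = cv2.imread("frame.jpg")                 # assumed input frame
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                       minSize=(60, 60))
      for (x, y, w, h) in faces:
          cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
      cv2.imwrite("frame_faces.jpg", img)
      print("detected faces:", len(faces))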

  18. Infrared face recognition based on LBP histogram and KW feature selection

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-07-01

    The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract local robust features from infrared face images, LBP is chosen to obtain the composition of micro-patterns of sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns which are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms the traditional methods based on the LBP histogram, the discrete cosine transform (DCT) or principal component analysis (PCA).
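
    A minimal sketch of Kruskal-Wallis feature selection over LBP-histogram bins, keeping the bins whose distributions differ most across identities; the feature matrix, labels, and the number of retained bins are illustrative assumptions.

      # Hedged sketch: Kruskal-Wallis ranking of LBP-histogram features.
      import numpy as np
      from scipy.stats import kruskal

      rng = np.random.default_rng(0)
      n_subjects, n_per, n_bins = 5, 20, 256
      X = rng.random((n_subjects * n_per, n_bins))         # LBP histogram features
      y = np.repeat(np.arange(n_subjects), n_per)

      h_stats = []
      for j in range(n_bins):
          groups = [X[y == s, j] for s in range(n_subjects)]
          h, _ = kruskal(*groups)                          # KW H statistic per bin
          h_stats.append(h)

      keep = np.argsort(h_stats)[-64:]                     # top 64 discriminative bins
      X_selected = X[:, keep]
      print("selected feature matrix:", X_selected.shape)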

  19. Multi-pose facial correction based on Gaussian process with combined kernel function

    NASA Astrophysics Data System (ADS)

    Shi, Shuyan; Ji, Ruirui; Zhang, Fan

    2018-04-01

    In order to improve the recognition rate across various poses, this paper proposes a facial correction method based on a Gaussian process, which builds a nonlinear regression model between the frontal and the side face with a combined kernel function. Face images with horizontal angles from -45° to +45° can be properly corrected to frontal faces. Finally, a Support Vector Machine is employed for face recognition. Experiments on the CAS-PEAL-R1 face database show that the Gaussian process can weaken the influence of pose changes and improve the accuracy of face recognition to a certain extent.
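
    A minimal sketch of regressing frontal-face feature vectors from side-face feature vectors with a Gaussian process whose kernel is a combination (here a sum) of an RBF term and a linear term; the synthetic features and the specific kernel combination are assumptions and may differ from the paper's combined kernel.

      # Hedged sketch: GP regression with a combined (RBF + linear) kernel.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

      rng = np.random.default_rng(0)
      X_side = rng.normal(size=(200, 30))                  # side-face feature vectors
      W = rng.normal(size=(30, 30))
      X_front = np.tanh(X_side @ W) + 0.05 * rng.normal(size=(200, 30))  # targets

      kernel = 1.0 * RBF(length_scale=1.0) + DotProduct() + WhiteKernel()
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
      gp.fit(X_side[:150], X_front[:150])

      pred = gp.predict(X_side[150:])
      print("mean correction error:", np.mean(np.abs(pred - X_front[150:])))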

  20. Comparing ensemble learning methods based on decision tree classifiers for protein fold recognition.

    PubMed

    Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi

    2014-01-01

    In this paper, some methods for ensemble learning of protein fold recognition based on a decision tree (DT) are compared and contrasted against each other over three datasets taken from the literature. According to previously reported studies, the features of the datasets are divided into some groups. Then, for each of these groups, three ensemble classifiers, namely, random forest, rotation forest and AdaBoost.M1 are employed. Also, some fusion methods are introduced for combining the ensemble classifiers obtained in the previous step. After this step, three classifiers are produced based on the combination of classifiers of types random forest, rotation forest and AdaBoost.M1. Finally, the three different classifiers achieved are combined to make an overall classifier. Experimental results show that the overall classifier obtained by the genetic algorithm (GA) weighting fusion method, is the best one in comparison to previously applied methods in terms of classification accuracy.

  1. Research on Attribute Reduction in Hoisting Motor State Recognition of Quayside Container Crane

    NASA Astrophysics Data System (ADS)

    Li, F.; Tang, G.; Hu, X.

    2017-07-01

    In view of the large number of attributes in hoisting motor state recognition for the quayside container crane, an attribute reduction method based on the discernibility matrix is introduced to reduce the hoisting motor state information table. A method of attribute reduction based on the combination of rough sets and a genetic algorithm is proposed to deal with the hoisting motor state decision table. Under the condition that the decision-making ability of the information system is unchanged, the redundant attributes are deleted, which reduces the complexity and computation of the hoisting motor recognition process and makes fast state recognition possible.

  2. Protein fold recognition using geometric kernel data fusion.

    PubMed

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%, an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
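
    A minimal sketch of fusing two base kernels by their matrix geometric mean, A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2), instead of a convex linear combination; the RBF Gram matrices on random features below are stand-ins for the paper's sequence-based kernels, and the exact geometry-inspired means used there may differ.

      # Hedged sketch: matrix geometric mean of two kernel (Gram) matrices.
      import numpy as np
      from scipy.linalg import sqrtm
      from sklearn.metrics.pairwise import rbf_kernel

      def geometric_mean(A, B):
          A_half = sqrtm(A)
          A_half_inv = np.linalg.inv(A_half)
          inner = sqrtm(A_half_inv @ B @ A_half_inv)
          G = A_half @ inner @ A_half
          return np.real((G + G.T) / 2)          # symmetrize, drop round-off imaginaries

      rng = np.random.default_rng(0)
      X1 = rng.normal(size=(50, 20))              # feature set 1 (e.g. PSSM-based)
      X2 = rng.normal(size=(50, 40))              # feature set 2 (e.g. alignment-based)
      K1 = rbf_kernel(X1) + 1e-6 * np.eye(50)     # jitter keeps kernels positive definite
      K2 = rbf_kernel(X2) + 1e-6 * np.eye(50)

      K_fused = geometric_mean(K1, K2)
      print("fused kernel shape:", K_fused.shape)
      # K_fused could then be passed to SVC(kernel="precomputed") for fold classification.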

  3. Varying face occlusion detection and iterative recovery for face recognition

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded part of both query samples and training samples before performing the recognition process. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering the aforementioned drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, a combination of image processing and intersection-based clustering is used for occlusion FR; (2) according to an accurate occlusion map, the new integrated facial images are recovered iteratively and fed into a recognition process; and (3) the effectiveness of our method in terms of recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.

  4. SD-MSAEs: Promoter recognition in human genome based on deep feature extraction.

    PubMed

    Xu, Wenxuan; Zhang, Li; Lu, Yaping

    2016-06-01

    The prediction and recognition of promoters in the human genome play an important role in DNA sequence analysis. Shannon entropy from information theory is a versatile tool for detailed bioinformatics analysis. Relative entropy estimators based on statistical divergence (SD) are used to extract meaningful features that distinguish different regions of DNA sequences. In this paper, we choose context features and use a set of SD methods to select the most effective n-mers for distinguishing promoter regions from other DNA regions in the human genome. From the total possible combinations of n-mers, we obtain four sparse distributions based on promoter and non-promoter training samples. The informative n-mers are selected by optimizing the differentiating extents of these distributions. Specifically, we combine the advantages of statistical divergence and multiple sparse auto-encoders (MSAEs) in deep learning to extract deep features for promoter recognition. We then apply multiple SVMs and a decision model to construct a human promoter recognition method called SD-MSAEs. The framework is flexible in that it can integrate new feature extraction or new classification models freely. Experimental results show that our method has high sensitivity and specificity. Copyright © 2016 Elsevier Inc. All rights reserved.
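
    A hedged sketch of the divergence-based n-mer ranking step follows: n-mer frequencies are counted in promoter and non-promoter training sequences, and each n-mer is scored by its contribution to a symmetric Kullback-Leibler divergence. The toy sequences, the choice n = 3, and this particular divergence are illustrative assumptions rather than the paper's exact SD estimators.

        from collections import Counter
        from math import log

        def nmer_distribution(seqs, n=3):
            """Relative frequencies of all n-mers occurring in a set of DNA sequences."""
            counts = Counter()
            for s in seqs:
                for i in range(len(s) - n + 1):
                    counts[s[i:i + n]] += 1
            total = sum(counts.values())
            return {k: v / total for k, v in counts.items()}

        def divergence_scores(p, q, eps=1e-9):
            """Per-n-mer contribution to a symmetric Kullback-Leibler divergence."""
            keys = set(p) | set(q)
            return {
                k: p.get(k, eps) * log(p.get(k, eps) / q.get(k, eps))
                   + q.get(k, eps) * log(q.get(k, eps) / p.get(k, eps))
                for k in keys
            }

        # Toy promoter / non-promoter sequences (illustrative only).
        promoters = ["TATAAAGGCTATA", "CGTATAAATGCC"]
        background = ["GGCGCGCCATGG", "ATGGCCGCGCTA"]

        scores = divergence_scores(nmer_distribution(promoters), nmer_distribution(background))
        informative = sorted(scores, key=scores.get, reverse=True)[:5]
        print("top discriminative 3-mers:", informative)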

  5. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, called convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
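
    The final matching step described above reduces to a nearest-neighbour search over enrolled feature vectors. The sketch below assumes the CNN embeddings have already been computed and uses random vectors and a hypothetical distance threshold in place of real visible-light and thermal body features.

        import numpy as np

        def match_identity(query_feat, enrolled_feats, threshold=5.0):
            """Match a query feature vector against enrolled ones by Euclidean distance.

            enrolled_feats maps identity -> feature vector from the same CNN embedding.
            Returns the closest identity, or None if the best distance exceeds the threshold.
            """
            best_id, best_dist = None, float("inf")
            for identity, feat in enrolled_feats.items():
                dist = np.linalg.norm(query_feat - feat)
                if dist < best_dist:
                    best_id, best_dist = identity, dist
            return best_id if best_dist <= threshold else None

        # Toy usage with random 256-D "CNN" features; a real system would extract them
        # from aligned visible-light and thermal body images.
        rng = np.random.default_rng(1)
        gallery = {"person_a": rng.normal(size=256), "person_b": rng.normal(size=256)}
        probe = gallery["person_a"] + 0.01 * rng.normal(size=256)
        print(match_identity(probe, gallery))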

  6. Tracking and recognizing faces in videos with an incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed that employs a local sparse appearance model and a covariance pooling method. In the following face recognition stage, a novel template update strategy that incorporates incremental subspace learning allows our recognition algorithm to adapt the template to appearance changes and to reduce the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  7. Surgical gesture segmentation and recognition.

    PubMed

    Tao, Lingling; Zappella, Luca; Hager, Gregory D; Vidal, René

    2013-01-01

    Automatic surgical gesture segmentation and recognition can provide useful feedback for surgical training in robotic surgery. Most prior work in this field relies on the robot's kinematic data. Although recent work [1,2] shows that the robot's video data can be equally effective for surgical gesture recognition, the segmentation of the video into gestures is assumed to be known. In this paper, we propose a framework for joint segmentation and recognition of surgical gestures from kinematic and video data. Unlike prior work that relies on either frame-level kinematic cues, or segment-level kinematic or video cues, our approach exploits both cues by using a combined Markov/semi-Markov conditional random field (MsM-CRF) model. Our experiments show that the proposed model improves over a Markov or semi-Markov CRF when using video data alone, gives results that are comparable to state-of-the-art methods on kinematic data alone, and improves over state-of-the-art methods when combining kinematic and video data.

  8. [Study on the early detection of Sclerotinia of Brassica napus based on combinational-stimulated bands].

    PubMed

    Liu, Fei; Feng, Lei; Lou, Bing-gan; Sun, Guang-ming; Wang, Lian-ping; He, Yong

    2010-07-01

    The combinational-stimulated bands were used to develop linear and nonlinear calibrations for the early detection of Sclerotinia of oilseed rape (Brassica napus L.). Eighty healthy and 100 Sclerotinia-infected leaf samples were scanned, and different preprocessing methods combined with the successive projections algorithm (SPA) were applied to develop partial least squares (PLS) discriminant models, multiple linear regression (MLR) models and least squares-support vector machine (LS-SVM) models. The results indicated that the optimal full-spectrum PLS models were obtained with direct orthogonal signal correction (DOSC), followed by De-trending and Raw spectra, with correct recognition ratios of 100%, 95.7% and 95.7%, respectively. When using combinational-stimulated bands, the optimal linear models were SPA-MLR (DOSC) and SPA-PLS (DOSC), with correct recognition ratios of 100%. All SPA-LSSVM models using DOSC, De-trending and Raw spectra achieved perfect results, with recognition ratios of 100%. The overall results demonstrated that it is feasible to use combinational-stimulated bands for the early detection of Sclerotinia of oilseed rape, and that DOSC-SPA is a powerful approach for informative wavelength selection. This method offers a new route to the early detection of Sclerotinia and to portable monitoring instruments.

  9. Emotion recognition based on multiple order features using fractional Fourier transform

    NASA Astrophysics Data System (ADS)

    Ren, Bo; Liu, Deyin; Qi, Lin

    2017-07-01

    In order to address the shortcomings of recent algorithms based on the two-dimensional fractional Fourier transform (2D-FrFT), this paper proposes a method for emotion recognition based on multiple-order features. Most existing methods utilize features from a single order or a couple of orders of the 2D-FrFT. However, different orders of the 2D-FrFT contribute differently to feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach obtains numerous features extracted at different orders of the 2D-FrFT along the x-axis and y-axis, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for classification, and the RML emotion database and the Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.

  10. Secure Method for Biometric-Based Recognition with Integrated Cryptographic Functions

    PubMed Central

    Chiou, Shin-Yan

    2013-01-01

    Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the certification ratio in biometric systems need not achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied. PMID:23762851

  11. Radiological reporting that combines continuous speech recognition with error correction by transcriptionists.

    PubMed

    Ichikawa, Tamaki; Kitanosono, Takashi; Koizumi, Jun; Ogushi, Yoichi; Tanaka, Osamu; Endo, Jun; Hashimoto, Takeshi; Kawada, Shuichi; Saito, Midori; Kobayashi, Makiko; Imai, Yutaka

    2007-12-20

    We evaluated the usefulness of radiological reporting that combines continuous speech recognition (CSR) and error correction by transcriptionists. Four transcriptionists (two with more than 10 years' and two with less than 3 months' transcription experience) listened to the same 100 dictation files and created radiological reports using conventional transcription and a method that combined CSR with manual error correction by the transcriptionists. We compared the 2 groups using the 2 methods for accuracy and report creation time and evaluated the transcriptionists' inter-personal dependence on accuracy rate and report creation time. We used a CSR system that did not require the training of the system to recognize the user's voice. We observed no significant difference in accuracy between the 2 groups and 2 methods that we tested, though transcriptionists with greater experience transcribed faster than those with less experience using conventional transcription. Using the combined method, error correction speed was not significantly different between two groups of transcriptionists with different levels of experience. Combining CSR and manual error correction by transcriptionists enabled convenient and accurate radiological reporting.

  12. Three-dimensional object recognition using similar triangles and decision trees

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly

    1993-01-01

    A system, TRIDEC, that is capable of distinguishing between a set of objects despite changes in the objects' positions in the input field, their size, or their rotational orientation in 3D space is described. TRIDEC combines very simple yet effective features with the classification capabilities of inductive decision tree methods. The feature vector is a list of all similar triangles defined by connecting all combinations of three pixels in a coarse coded 127 x 127 pixel input field. The classification is accomplished by building a decision tree using the information provided from a limited number of translated, scaled, and rotated samples. Simulation results are presented which show that TRIDEC achieves 94 percent recognition accuracy in the 2D invariant object recognition domain and 98 percent recognition accuracy in the 3D invariant object recognition domain after training on only a small sample of transformed views of the objects.
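
    The similar-triangle idea can be approximated by describing each object with a histogram of the smallest interior angle over all pixel triples, a quantity invariant to translation, scale, and rotation, and then training a decision tree on these histograms. The sketch below is a simplified stand-in for TRIDEC: the toy pixel sets, histogram size, and angle binning are assumptions.

        from itertools import combinations

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def triangle_angle_histogram(points, bins=8):
            """Histogram of the smallest interior angle over all pixel triples."""
            angles = []
            for a, b, c in combinations(points, 3):
                a, b, c = map(np.asarray, (a, b, c))
                sides = sorted([np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)])
                if sides[0] < 1e-9:
                    continue  # degenerate triple (repeated points)
                # Law of cosines for the angle opposite the shortest side (always <= 60 degrees).
                cos_t = (sides[1] ** 2 + sides[2] ** 2 - sides[0] ** 2) / (2 * sides[1] * sides[2])
                angles.append(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
            hist, _ = np.histogram(angles, bins=bins, range=(0, 60))
            return hist / max(hist.sum(), 1)

        # Toy training data: two "objects", each given by a few coarse-coded pixel coordinates.
        square = [(0, 0), (0, 10), (10, 0), (10, 10), (5, 5)]
        line_like = [(0, 0), (3, 1), (6, 2), (9, 3), (12, 4)]
        X = [triangle_angle_histogram(p) for p in (square, line_like)]
        y = ["square", "line"]

        clf = DecisionTreeClassifier().fit(X, y)
        scaled_square = [(2 * px, 2 * py) for px, py in square]  # scale-invariance check
        print(clf.predict([triangle_angle_histogram(scaled_square)]))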

  13. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  14. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems.

  15. Extraction and fusion of spectral parameters for face recognition

    NASA Astrophysics Data System (ADS)

    Boisier, B.; Billiot, B.; Abdessalem, Z.; Gouton, P.; Hardeberg, J. Y.

    2011-03-01

    Many methods have been developed in image processing for face recognition, especially in recent years with the increase of biometric technologies. However, most of these techniques are used on grayscale images acquired in the visible range of the electromagnetic spectrum. The aims of our study are to improve existing tools and to develop new methods for face recognition. The techniques used take advantage of the different spectral ranges, the visible, optical infrared and thermal infrared, by either combining them or analyzing them separately in order to extract the most appropriate information for face recognition. We also verify the consistency of several keypoints extraction techniques in the Near Infrared (NIR) and in the Visible Spectrum.

  16. A sensor and video based ontology for activity recognition in smart environments.

    PubMed

    Mitchell, D; Morrow, Philip J; Nugent, Chris D

    2014-01-01

    Activity recognition is used in a wide range of applications including healthcare and security. In a smart environment activity recognition can be used to monitor and support the activities of a user. There have been a range of methods used in activity recognition including sensor-based approaches, vision-based approaches and ontological approaches. This paper presents a novel approach to activity recognition in a smart home environment which combines sensor and video data through an ontological framework. The ontology describes the relationships and interactions between activities, the user, objects, sensors and video data.

  17. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, called convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783

  18. Discovering the Sequential Structure of Thought

    ERIC Educational Resources Information Center

    Anderson, John R.; Fincham, Jon M.

    2014-01-01

    Multi-voxel pattern recognition techniques combined with Hidden Markov models can be used to discover the mental states that people go through in performing a task. The combined method identifies both the mental states and how their durations vary with experimental conditions. We apply this method to a task where participants solve novel…

  19. Cross spectral, active and passive approach to face recognition for improved performance

    NASA Astrophysics Data System (ADS)

    Grudzien, A.; Kowalski, M.; Szustakowski, M.

    2017-08-01

    Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Since the characteristics used are unique, biometrics can create a direct link between a person and an identity based on a variety of characteristics. The human face is one of the most important biometric modalities for automatic authentication. The most popular method of face recognition, which relies on processing of visual information, seems to be imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach that combines both methods.

  20. False match elimination for face recognition based on SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Xuyuan; Shi, Ping; Shao, Meide

    2011-06-01

    The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation and robust to noise and illumination. In this paper, a novel method for face recognition based on SIFT is proposed, which combines SIFT optimization, mutual matching and Progressive Sample Consensus (PROSAC), and can effectively eliminate false matches in face recognition. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
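
    A hedged OpenCV sketch of the matching pipeline follows: SIFT keypoints, mutual (cross-check) matching, and geometric verification to reject remaining false matches. RANSAC is used here in place of PROSAC (recent OpenCV builds expose PROSAC through the USAC flags), and the image paths are placeholders.

        import cv2
        import numpy as np

        def filtered_matches(img_path_a, img_path_b):
            """SIFT matching with mutual cross-check plus geometric verification."""
            img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
            img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

            sift = cv2.SIFT_create()
            kp_a, des_a = sift.detectAndCompute(img_a, None)
            kp_b, des_b = sift.detectAndCompute(img_b, None)

            # Mutual matching: crossCheck keeps only pairs that are each other's best match.
            matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
            matches = matcher.match(des_a, des_b)

            src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

            # Geometric verification; cv2.RANSAC could be swapped for cv2.USAC_PROSAC
            # in recent OpenCV versions to follow the PROSAC idea more closely.
            _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            if mask is None:
                return []
            return [m for m, keep in zip(matches, mask.ravel()) if keep]

        # Example usage (file names are placeholders for two face images of one subject):
        # good = filtered_matches("face_query.png", "face_gallery.png")
        # print(len(good), "matches survive mutual matching and geometric verification")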

  1. Off-lexicon online Arabic handwriting recognition using neural network

    NASA Astrophysics Data System (ADS)

    Yahia, Hamdi; Chaabouni, Aymen; Boubaker, Houcine; Alimi, Adel M.

    2017-03-01

    This paper presents a new method for online Arabic handwriting recognition based on grapheme segmentation. The main contribution of our work is to explore the utility of the Beta-elliptic model in segmentation and feature extraction for online handwriting recognition. Our method consists in decomposing the input signal into continuous parts called graphemes based on the Beta-elliptic model, and classifying them according to their position in the pseudo-word. The segmented graphemes are then described by a combination of geometric features and trajectory shape modeling. The efficiency of the considered features has been evaluated using a feed-forward neural network classifier. Experimental results on the ADAB benchmark database show the performance of the proposed method.

  2. Face recognition algorithm using extended vector quantization histogram features.

    PubMed

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
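
    The VQ histogram step can be sketched as a k-means codebook learned from image patches followed by a codevector occurrence histogram per face image. The patch size and codebook size below are assumptions, and the Markov stationary feature extension is omitted.

        import numpy as np
        from sklearn.cluster import KMeans

        def extract_patches(img, size=4):
            """Non-overlapping size x size patches, flattened to vectors."""
            h, w = img.shape
            patches = [img[i:i + size, j:j + size].ravel()
                       for i in range(0, h - size + 1, size)
                       for j in range(0, w - size + 1, size)]
            return np.array(patches, dtype=float)

        def vq_histogram(img, codebook):
            """Codevector histogram: how often each codeword is nearest to a patch."""
            labels = codebook.predict(extract_patches(img))
            hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
            return hist / hist.sum()

        # Toy data: random "face" images; a real system would learn the codebook from face patches.
        rng = np.random.default_rng(0)
        train_imgs = [rng.integers(0, 256, size=(32, 32)) for _ in range(5)]
        codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(
            np.vstack([extract_patches(im) for im in train_imgs]))

        feature = vq_histogram(train_imgs[0], codebook)
        print(feature.shape)  # (16,) codevector histogram for one image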

  3. Body-Earth Mover's Distance: A Matching-Based Approach for Sleep Posture Recognition.

    PubMed

    Xu, Xiaowei; Lin, Feng; Wang, Aosen; Hu, Yu; Huang, Ming-Chun; Xu, Wenyao

    2016-10-01

    Sleep posture is a key component in sleep quality assessment and pressure ulcer prevention. Currently, body pressure analysis has been a popular method for sleep posture recognition. In this paper, a matching-based approach, Body-Earth Mover's Distance (BEMD), for sleep posture recognition is proposed. BEMD treats pressure images as weighted 2D shapes, and combines EMD and Euclidean distance for similarity measure. Compared with existing work, sleep posture recognition is achieved with posture similarity rather than multiple features for specific postures. A pilot study is performed with 14 persons for six different postures. The experimental results show that the proposed BEMD can achieve 91.21% accuracy, which outperforms the previous method with an improvement of 8.01%.
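
    A simplified sketch of the combined distance follows: an EMD-style term computed on the row and column pressure marginals (1-D Wasserstein distances) is mixed with a plain Euclidean distance, and postures are matched by a nearest-neighbour rule. The marginal approximation of the 2-D EMD and the weighting are assumptions, not the exact BEMD formulation.

        import numpy as np
        from scipy.stats import wasserstein_distance

        def bemd_distance(p_img, q_img, alpha=0.5):
            """Combine an EMD-style term on pressure marginals with a Euclidean term."""
            p, q = p_img.astype(float), q_img.astype(float)
            rows, cols = np.arange(p.shape[0]), np.arange(p.shape[1])
            # 1-D Wasserstein distances between row-sum and column-sum pressure profiles.
            emd_rows = wasserstein_distance(rows, rows, p.sum(axis=1) + 1e-9, q.sum(axis=1) + 1e-9)
            emd_cols = wasserstein_distance(cols, cols, p.sum(axis=0) + 1e-9, q.sum(axis=0) + 1e-9)
            euclid = np.linalg.norm(p - q) / p.size
            return alpha * (emd_rows + emd_cols) + (1 - alpha) * euclid

        def recognize_posture(query, gallery):
            """Nearest-neighbour sleep posture matching with the combined distance."""
            return min(gallery, key=lambda label: bemd_distance(query, gallery[label]))

        # Toy 8 x 16 pressure images; real data would come from a pressure mat.
        rng = np.random.default_rng(2)
        gallery = {"supine": rng.random((8, 16)), "left_side": rng.random((8, 16))}
        query = gallery["left_side"] + 0.05 * rng.random((8, 16))
        print(recognize_posture(query, gallery))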

  4. Affect Recognition in Adults with ADHD

    ERIC Educational Resources Information Center

    Miller, Meghan; Hanford, Russell B.; Fassbender, Catherine; Duke, Marshall; Schweitzer, Julie B.

    2011-01-01

    Objective: This study compared affect recognition abilities between adults with and without ADHD. Method: The sample consisted of 51 participants (34 men, 17 women) divided into 3 groups: ADHD-combined type (ADHD-C; n = 17), ADHD-predominantly inattentive type (ADHD-I; n = 16), and controls (n = 18). The mean age was 34 years. Affect recognition…

  5. [Study on the genuineness and producing area of Panax notoginseng based on infrared spectroscopy combined with discriminant analysis].

    PubMed

    Liu, Fei; Wang, Yuan-zhong; Yang, Chun-yan; Jin, Hang

    2015-01-01

    The genuineness and producing area of Panax notoginseng were studied based on infrared spectroscopy combined with discriminant analysis. The infrared spectra of 136 taproots of P. notoginseng from 13 planting points in 11 counties were collected, and the second-derivative spectra were calculated with Omnic 8.0 software. The infrared spectra and their second-derivative spectra in the range 1800-700 cm-1 were used to build models by stepwise discriminant analysis in order to study the genuineness of P. notoginseng. The model built on the second-derivative spectra showed the better recognition effect for the genuineness of P. notoginseng: the correct rate of returned classification reached 100%, and the prediction accuracy was 93.4%. The stability of the model was tested by cross validation, and the method was validated by extrapolation. The second-derivative spectra combined with the same discriminant analysis method were used to distinguish the producing area of P. notoginseng. The recognition effects of models built on different spectral ranges and different numbers of samples were compared, and it was found that when the model was built by collecting 8 samples from each planting point as training samples and using the spectrum in the range 1500-1200 cm-1, the recognition effect was better, with the correct rate of returned classification reaching 99.0% and the prediction accuracy 76.5%. The results indicated that infrared spectroscopy combined with discriminant analysis shows a good recognition effect for the genuineness of P. notoginseng. The method may be a promising new method for identification of the genuineness of P. notoginseng in practice. The method can recognize the producing area of P. notoginseng to some extent and could provide a new approach for identification of the producing area of P. notoginseng.

  6. Design and test of a hybrid foot force sensing and GPS system for richer user mobility activity recognition.

    PubMed

    Zhang, Zelun; Poslad, Stefan

    2013-11-01

    Wearable and accompanied sensors and devices are increasingly being used for user activity recognition. However, typical GPS-based and accelerometer-based (ACC) methods face three main challenges: a low recognition accuracy; a coarse recognition capability, i.e., they cannot recognise both human posture (during travelling) and transportation mode simultaneously, and a relatively high computational complexity. Here, a new GPS and Foot-Force (GPS + FF) sensor method is proposed to overcome these challenges that leverages a set of wearable FF sensors in combination with GPS, e.g., in a mobile phone. User mobility activities that can be recognised include both daily user postures and common transportation modes: sitting, standing, walking, cycling, bus passenger, car passenger (including private cars and taxis) and car driver. The novelty of this work is that our approach provides a more comprehensive recognition capability in terms of reliably recognising both human posture and transportation mode simultaneously during travel. In addition, by comparing the new GPS + FF method with both an ACC method (62% accuracy) and a GPS + ACC based method (70% accuracy) as baseline methods, it obtains a higher accuracy (95%) with less computational complexity, when tested on a dataset obtained from ten individuals.

  7. Bilevel Model-Based Discriminative Dictionary Learning for Recognition.

    PubMed

    Zhou, Pan; Zhang, Chao; Lin, Zhouchen

    2017-03-01

    Most supervised dictionary learning methods optimize the combinations of reconstruction error, sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse codes learning models in the training and the testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the l0 or l1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse codes learning models in the training and the testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.

  8. LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.

    PubMed

    Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu

    2005-01-01

    Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.

  9. Combining Open-domain and Biomedical Knowledge for Topic Recognition in Consumer Health Questions.

    PubMed

    Mrabet, Yassine; Kilicoglu, Halil; Roberts, Kirk; Demner-Fushman, Dina

    2016-01-01

    Determining the main topics in consumer health questions is a crucial step in their processing as it allows narrowing the search space to a specific semantic context. In this paper we propose a topic recognition approach based on biomedical and open-domain knowledge bases. In the first step of our method, we recognize named entities in consumer health questions using an unsupervised method that relies on a biomedical knowledge base, UMLS, and an open-domain knowledge base, DBpedia. In the next step, we cast topic recognition as a binary classification problem of deciding whether a named entity is the question topic or not. We evaluated our approach on a dataset from the National Library of Medicine (NLM), introduced in this paper, and another from the Genetic and Rare Disease Information Center (GARD). The combination of knowledge bases outperformed the results obtained by individual knowledge bases by up to 16.5% F1 and achieved state-of-the-art performance. Our results demonstrate that combining open-domain knowledge bases with biomedical knowledge bases can lead to a substantial improvement in understanding user-generated health content.

  10. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition.

    PubMed

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. An effective combination of feature variables can then be decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, where the data exhibit inherently complex distributions. The proposed method is examined on UCI data sets and on the caving dataset, and compared with several recent SVM classifiers. We conduct experiments with accuracy and the Friedman test to compare multiple classifiers over the UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results of the experiments on the caving dataset show better performance, which points to a promising approach to feature selection and multi-class recognition in coal-rock recognition.
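
    The MF-Score ranking can be sketched as a per-feature ratio of between-class scatter to within-class variance, shown below on synthetic data with 10 feature variables and three classes; the exact MF-Score definition in the paper may differ from this simplified version.

        import numpy as np

        def multi_class_f_scores(X, y):
            """Per-feature ratio of between-class scatter to within-class variance.

            X is (n_samples, n_features); y holds class labels. Higher scores indicate
            more discriminative features, in the spirit of the MF-Score ranking.
            """
            X, y = np.asarray(X, dtype=float), np.asarray(y)
            overall = X.mean(axis=0)
            between = np.zeros(X.shape[1])
            within = np.zeros(X.shape[1])
            for c in np.unique(y):
                Xc = X[y == c]
                between += (Xc.mean(axis=0) - overall) ** 2
                within += Xc.var(axis=0, ddof=1)
            return between / (within + 1e-12)

        # Synthetic caving-style data: three classes, 10 feature variables.
        rng = np.random.default_rng(3)
        y = np.repeat([0, 1, 2], 30)
        X = rng.normal(size=(90, 10))
        X[:, 0] += y        # feature 0 carries strong class information
        X[:, 4] += 0.5 * y  # feature 4 carries weaker class information

        scores = multi_class_f_scores(X, y)
        print("features ranked by discriminative power:", np.argsort(scores)[::-1])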

  11. Combination of minimum enclosing balls classifier with SVM in coal-rock recognition

    PubMed Central

    Song, QingJun; Jiang, HaiYan; Song, Qinghui; Zhao, XieGuang; Wu, Xiaoxuan

    2017-01-01

    Top-coal caving technology is a productive and efficient method in modern mechanized coal mining, and the study of coal-rock recognition is key to realizing automation in comprehensive mechanized coal mining. In this paper we propose a new discriminant analysis framework for coal-rock recognition. In the framework, a data acquisition model with vibration and acoustic signals is designed and a caving dataset with 10 feature variables and three classes is obtained. An effective combination of feature variables can then be decided automatically using multi-class F-score (MF-Score) feature selection. To handle the nonlinear mapping in this real-world optimization problem, an effective minimum enclosing ball (MEB) algorithm combined with a support vector machine (SVM) is proposed for rapid detection of coal-rock in the caving process. In particular, we illustrate how to construct the MEB-SVM classifier for coal-rock recognition, where the data exhibit inherently complex distributions. The proposed method is examined on UCI data sets and on the caving dataset, and compared with several recent SVM classifiers. We conduct experiments with accuracy and the Friedman test to compare multiple classifiers over the UCI data sets. Experimental results demonstrate that the proposed algorithm has good robustness and generalization ability. The results of the experiments on the caving dataset show better performance, which points to a promising approach to feature selection and multi-class recognition in coal-rock recognition. PMID:28937987

  12. Gender recognition from unconstrained and articulated human body.

    PubMed

    Wu, Qin; Guo, Guodong

    2014-01-01

    Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition.

  13. Gender Recognition from Unconstrained and Articulated Human Body

    PubMed Central

    Wu, Qin; Guo, Guodong

    2014-01-01

    Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, how many body parts are needed to combine together, and what representations are good for articulated body-based gender recognition, is also presented. This paper also pursues data fusion schemes and efficient feature dimensionality reduction based on the partial least squares estimation. Extensive experiments are performed on two unconstrained databases which have not been explored before for gender recognition. PMID:24977203

  14. Gait Recognition Based on Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Sokolova, A.; Konushin, A.

    2017-05-01

    In this work we investigate the problem of recognizing people by their gait. For this task, we implement a deep learning approach using optical flow as the main source of motion information and combine neural feature extraction with an additional embedding of descriptors to improve the representation. In order to find the best heuristics, we compare several deep neural network architectures and learning and classification strategies. The experiments were conducted on two popular gait recognition datasets, and we also investigate their advantages and disadvantages and the transferability of the considered methods.

  15. Hierarchical ensemble of global and local classifiers for face recognition.

    PubMed

    Su, Yu; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2009-08-01

    In the literature of psychophysics and neurophysiology, many studies have shown that both global and local features are crucial for face representation and recognition. This paper proposes a novel face recognition method which exploits both global and local discriminative features. In this method, global features are extracted from the whole face images by keeping the low-frequency coefficients of Fourier transform, which we believe encodes the holistic facial information, such as facial contour. For local feature extraction, Gabor wavelets are exploited considering their biological relevance. After that, Fisher's linear discriminant (FLD) is separately applied to the global Fourier features and each local patch of Gabor features. Thus, multiple FLD classifiers are obtained, each embodying different facial evidences for face recognition. Finally, all these classifiers are combined to form a hierarchical ensemble classifier. We evaluate the proposed method using two large-scale face databases: FERET and FRGC version 2.0. Experiments show that the results of our method are impressively better than the best known results with the same evaluation protocol.
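
    The global feature extraction step, keeping the low-frequency coefficients of the 2-D Fourier transform as a holistic face descriptor, can be sketched as below. The cutoff radius is an assumption, and the Gabor, FLD, and ensemble stages are omitted.

        import numpy as np

        def global_fourier_features(img, radius=8):
            """Low-frequency 2-D Fourier coefficients as a holistic face descriptor."""
            spectrum = np.fft.fftshift(np.fft.fft2(img.astype(float)))
            h, w = spectrum.shape
            cy, cx = h // 2, w // 2
            # Keep a (2 * radius) x (2 * radius) block around the zero-frequency component.
            low = spectrum[cy - radius:cy + radius, cx - radius:cx + radius]
            return np.concatenate([low.real.ravel(), low.imag.ravel()])

        # Toy 64 x 64 "face"; a real pipeline would pass these features to FLD and then
        # combine them with local Gabor features in the hierarchical ensemble.
        face = np.random.default_rng(4).random((64, 64))
        print(global_fourier_features(face).shape)  # (512,) low-frequency descriptor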

  16. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    PubMed Central

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614

  17. Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, Mario

    2009-08-01

    Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.

  18. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    PubMed

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-05-21

    This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had limited performance because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of the smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home.

  19. Fusion of smartphone motion sensors for physical activity recognition.

    PubMed

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2014-06-10

    For physical activity recognition, smartphone sensors, such as an accelerometer and a gyroscope, are being utilized in many research studies. So far, particularly, the accelerometer has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim to improve the recognition performance. How and when are various motion sensors, which are available on a smartphone, best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper, we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities carrying smart phones at different positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead roles individually, depending on the type of activity being recognized, the body position, the used data features and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible.

  20. Advanced optical correlation and digital methods for pattern matching—50th anniversary of Vander Lugt matched filter

    NASA Astrophysics Data System (ADS)

    Millán, María S.

    2012-10-01

    On the verge of the 50th anniversary of Vander Lugt’s formulation for pattern matching based on matched filtering and optical correlation, we acknowledge the very intense research activity developed in the field of correlation-based pattern recognition during this period of time. The paper reviews some domains that appeared as emerging fields in the last years of the 20th century and have been developed later on in the 21st century. Such is the case of three-dimensional (3D) object recognition, biometric pattern matching, optical security and hybrid optical-digital processors. 3D object recognition is a challenging case of multidimensional image recognition because of its implications in the recognition of real-world objects independent of their perspective. Biometric recognition is essentially pattern recognition for which the personal identification is based on the authentication of a specific physiological characteristic possessed by the subject (e.g. fingerprint, face, iris, retina, and multifactor combinations). Biometric recognition often appears combined with encryption-decryption processes to secure information. The optical implementations of correlation-based pattern recognition processes still rely on the 4f-correlator, the joint transform correlator, or some of their variants. But the many applications developed in the field have been pushing the systems for a continuous improvement of their architectures and algorithms, thus leading towards merged optical-digital solutions.
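
    The digital analogue of matched filtering in a Vander Lugt / 4f correlator is a Fourier-domain cross-correlation of the scene with a reference template, with the correlation peak marking the target location. The sketch below uses synthetic arrays and is purely illustrative.

        import numpy as np

        def matched_filter_correlation(scene, reference):
            """Fourier-domain cross-correlation of a scene with a reference template."""
            S = np.fft.fft2(scene)
            R = np.fft.fft2(reference, s=scene.shape)   # zero-pad reference to scene size
            corr = np.fft.ifft2(S * np.conj(R)).real    # correlation plane
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            return corr, peak

        # Synthetic scene with the reference pattern embedded at a known offset.
        rng = np.random.default_rng(5)
        reference = rng.random((16, 16))
        scene = 0.1 * rng.random((128, 128))
        scene[40:56, 70:86] += reference

        corr, peak = matched_filter_correlation(scene, reference)
        print("correlation peak at the embedded target offset:", peak)  # close to (40, 70)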

  1. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade and excellent performance has been reported when there is a sufficient number of representative training samples. In many real-life applications such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms degrades dramatically, or they may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to draw the recognition decision. The FERET database is used for evaluating the proposed method and the results are encouraging.

  2. Design and Test of a Hybrid Foot Force Sensing and GPS System for Richer User Mobility Activity Recognition

    PubMed Central

    Zhang, Zelun; Poslad, Stefan

    2013-01-01

    Wearable and accompanied sensors and devices are increasingly being used for user activity recognition. However, typical GPS-based and accelerometer-based (ACC) methods face three main challenges: a low recognition accuracy; a coarse recognition capability, i.e., they cannot recognise both human posture (during travelling) and transportation mode simultaneously, and a relatively high computational complexity. Here, a new GPS and Foot-Force (GPS + FF) sensor method is proposed to overcome these challenges that leverages a set of wearable FF sensors in combination with GPS, e.g., in a mobile phone. User mobility activities that can be recognised include both daily user postures and common transportation modes: sitting, standing, walking, cycling, bus passenger, car passenger (including private cars and taxis) and car driver. The novelty of this work is that our approach provides a more comprehensive recognition capability in terms of reliably recognising both human posture and transportation mode simultaneously during travel. In addition, by comparing the new GPS + FF method with both an ACC method (62% accuracy) and a GPS + ACC based method (70% accuracy) as baseline methods, it obtains a higher accuracy (95%) with less computational complexity, when tested on a dataset obtained from ten individuals. PMID:24189333

  3. Multi-font printed Mongolian document recognition system

    NASA Astrophysics Data System (ADS)

    Peng, Liangrui; Liu, Changsong; Ding, Xiaoqing; Wang, Hua; Jin, Jianming

    2009-01-01

    Mongolian is one of the major ethnic languages in China. Large numbers of printed Mongolian documents need to be digitized for digital libraries and various applications. Traditional Mongolian script has a unique writing style and multi-font-type variations, which bring challenges to Mongolian OCR research. As traditional Mongolian script has particular characteristics, for example, one character may be part of another character, we define the character set for recognition according to the segmented components, and the components are combined into characters by a rule-based post-processing module. For character recognition, a method based on visual directional features and multi-level classifiers is presented. For character segmentation, a scheme is used to find the segmentation points by analyzing the properties of projections and connected components. As Mongolian has different font-types, which are categorized into two major groups, the segmentation parameters are adjusted for each group. A font-type classification method for the two font-type groups is introduced. For recognition of Mongolian text mixed with Chinese and English, language identification and the relevant character recognition kernels are integrated. Experiments show that the presented methods are effective. The text recognition rate is 96.9% on test samples from practical documents with multiple font-types and mixed scripts.

  4. Joint sparse representation for robust multimodal biometrics recognition.

    PubMed

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing the identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we also kernelize the algorithm to handle nonlinearity in data. The optimization problem is solved using an efficient alternative direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.

  5. The Use of Voice Cues for Speaker Gender Recognition in Cochlear Implant Recipients

    ERIC Educational Resources Information Center

    Meister, Hartmut; Fürsen, Katrin; Streicher, Barbara; Lang-Roth, Ruth; Walger, Martin

    2016-01-01

    Purpose: The focus of this study was to examine the influence of fundamental frequency (F0) and vocal tract length (VTL) modifications on speaker gender recognition in cochlear implant (CI) recipients for different stimulus types. Method: Single words and sentences were manipulated using isolated or combined F0 and VTL cues. Using an 11-point…

  6. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  7. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that the two local regularities L-MSDA and L-MSDSM produce better performance when combined. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  8. A new method for incoherent combining of far-field laser beams based on multiple faculae recognition

    NASA Astrophysics Data System (ADS)

    Ye, Demao; Li, Sichao; Yan, Zhihui; Zhang, Zenan; Liu, Yuan

    2018-03-01

    Compared to coherent beam combining, incoherent beam combining can deliver a high-power laser output with high efficiency, a simple structure, low cost and high resistance to thermal damage, and it is easy to realize in engineering. Higher target power is achieved by incoherent beam combination using multi-channel optical path correction. However, each channel forms its own spot in the far field, so a low overlap ratio between the faculae prevents a higher laser power density from being formed. In order to improve the combat effectiveness of the system, it is necessary to overlap the different faculae and thereby improve the target energy density. Hence, a novel method for incoherent combining of far-field laser beams is presented. The method combines piezoelectric ceramic technology with an evaluation algorithm for the faculae coincidence degree, based on high-precision multi-channel optical path correction. The results show that the faculae recognition algorithm is low-latency (less than 10 ms), which can meet the needs of practical engineering. Furthermore, the real-time focusing ability on far-field faculae is improved, which is beneficial to the engineering of high-energy laser weapons or other laser jamming systems.

  9. A robust probabilistic collaborative representation based classification for multimodal biometrics

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli

    2018-04-01

    Most traditional biometric recognition systems perform recognition with a single biometric indicator. These systems suffer from noisy data, interclass variations, unacceptable error rates, forged identities, and so on. Because of these inherent problems, attempts to enhance the performance of unimodal biometric systems based on a single feature have limited validity. Thus, multimodal biometrics is investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. For more recognizable features, the proposed method extracts block local binary pattern features for all modalities and then combines them into a single framework. For better classification, it employs the robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves the recognition accuracy compared to unimodal biometrics.
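
    A minimal sketch of the feature-level side of such a system is given below: uniform LBP codes are computed per block with scikit-image, the per-block histograms are concatenated, and the face and fingerprint vectors are fused into a single feature vector. The block grid, LBP radius and simple concatenation fusion are illustrative assumptions; the paper's exact block LBP variant and the probabilistic collaborative representation classifier are not reproduced here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def block_lbp_histogram(image, blocks=(4, 4), P=8, R=1.0):
    """Concatenate uniform-LBP histograms computed over a grid of blocks.

    `blocks`, `P` and `R` are illustrative parameter choices, not values
    taken from the paper.
    """
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                      # number of uniform LBP codes
    h, w = lbp.shape
    bh, bw = h // blocks[0], w // blocks[1]
    feats = []
    for i in range(blocks[0]):
        for j in range(blocks[1]):
            patch = lbp[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def fuse_modalities(face_img, fingerprint_img):
    """Feature-level fusion: concatenate the two block-LBP vectors."""
    return np.concatenate([block_lbp_histogram(face_img),
                           block_lbp_histogram(fingerprint_img)])
```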

  10. Exhibits Recognition System for Combining Online Services and Offline Services

    NASA Astrophysics Data System (ADS)

    Ma, He; Liu, Jianbo; Zhang, Yuan; Wu, Xiaoyu

    2017-10-01

    In order to achieve more convenient and accurate digital museum navigation, we have developed a real-time, online-to-offline museum exhibit recognition system using an image recognition method based on deep learning. In this paper, the client and server of the system are separated and connected through HTTP. Firstly, using the client app on an Android mobile phone, the user can take pictures and upload them to the server. Secondly, the features of the picture are extracted using the deep learning network on the server. With the help of these features, the pictures the user uploaded are classified with a well-trained SVM. Finally, the classification results are sent to the client, and the detailed exhibit introduction corresponding to the classification result is shown in the client app. Experimental results demonstrate that the recognition accuracy is close to 100% and the computing time from image upload to the display of the exhibit information is less than 1 s. By means of the exhibit image recognition algorithm, our exhibit recognition system can bring online detailed exhibition information to the user in the offline exhibition hall so as to achieve better digital navigation.
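
    The server-side classification step could look like the sketch below: a feature extractor (standing in for the deep network) maps each uploaded image to a fixed-length vector, and a linear SVM trained on the exhibit collection returns the predicted label that the client then displays. The feature extractor and its parameters are placeholders, not the system's actual network.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def extract_deep_features(image):
    # Placeholder standing in for the paper's CNN feature extractor:
    # flattens the image and pads/truncates it to a fixed 512-D vector.
    return np.resize(np.asarray(image, dtype=float).ravel(), 512)

def train_exhibit_classifier(train_images, labels):
    """Fit an SVM on features extracted from the exhibit image collection."""
    X = np.stack([extract_deep_features(img) for img in train_images])
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    clf.fit(X, labels)
    return clf

def recognise_exhibit(clf, uploaded_image):
    """Server-side step: classify an uploaded picture and return its label."""
    feat = extract_deep_features(uploaded_image).reshape(1, -1)
    return clf.predict(feat)[0]
```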

  11. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization.

    PubMed

    Dai, Hong-Jie; Lai, Po-Ting; Chang, Yung-Chun; Tsai, Richard Tzong-Han

    2015-01-01

    The functions of chemical compounds and drugs that affect biological processes, and their particular effect on the onset and treatment of diseases, have attracted increasing interest with the advancement of research in the life sciences. To extract knowledge from the extensive literature on such compounds and drugs, the organizers of BioCreative IV administered the CHEMical compound and Drug Named Entity Recognition (CHEMDNER) task to establish a standard dataset for evaluating state-of-the-art chemical entity recognition methods. This study introduces the approach of our CHEMDNER system. Instead of emphasizing the development of novel feature sets for machine learning, this study investigates the effect of various tag schemes on the recognition of the names of chemicals and drugs by using conditional random fields. Experiments were conducted using combinations of different tokenization strategies and tag schemes to investigate the effects of tag set selection and tokenization method on the CHEMDNER task. This study presents the CHEMDNER performance of three more representative tag schemes (IOBE, IOBES, and IOB12E) when applied to a widely utilized IOB tag set and combined with the coarse-/fine-grained tokenization methods. The experimental results reveal that the fine-grained tokenization strategy performs best in terms of precision, recall and F-score when the IOBES tag set is utilized. The IOBES model with fine-grained tokenization yielded the best F-scores in the six chemical entity categories other than the "Multiple" entity category. Nonetheless, no significant improvement was observed when a more representative tag scheme was used with the coarse- or fine-grained tokenization rules. The best F-scores achieved by the developed system on the test dataset of the CHEMDNER task were 0.833 and 0.815 for the chemical document indexing and the chemical entity mention recognition tasks, respectively. The results herein highlight the importance of tag set selection and the use of different tokenization strategies. Fine-grained tokenization combined with the IOBES tag set most effectively recognizes chemical and drug names. To the best of the authors' knowledge, this is the first comprehensive investigation of the use of various tag schemes combined with different tokenization strategies for the recognition of chemical entities.
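
    The sketch below shows what the IOBES tag scheme looks like in practice: entity spans over a token sequence are rewritten as S-, B-, I- and E- tags that a CRF can then be trained on. The span representation is an assumption made for this illustration; it is not the CHEMDNER data format.

```python
def to_iobes(tokens, entity_spans):
    """Convert entity spans (start, end, type) over token indices to IOBES tags.

    A single-token entity gets 'S-', multi-token entities get 'B-', 'I-', 'E-'.
    """
    tags = ["O"] * len(tokens)
    for start, end, etype in entity_spans:        # end is exclusive
        if end - start == 1:
            tags[start] = f"S-{etype}"
        else:
            tags[start] = f"B-{etype}"
            for i in range(start + 1, end - 1):
                tags[i] = f"I-{etype}"
            tags[end - 1] = f"E-{etype}"
    return tags

tokens = ["The", "aspirin", "acetylsalicylic", "acid", "dose"]
print(to_iobes(tokens, [(1, 2, "CHEM"), (2, 4, "CHEM")]))
# ['O', 'S-CHEM', 'B-CHEM', 'E-CHEM', 'O']
```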

  12. HPLC fingerprint analysis combined with chemometrics for pattern recognition of ginger.

    PubMed

    Feng, Xu; Kong, Weijun; Wei, Jianhe; Ou-Yang, Zhen; Yang, Meihua

    2014-03-01

    Ginger, the fresh rhizome of Zingiber officinale Rosc. (Zingiberaceae), has been used worldwide; however, for a long time there has been no internationally approved standard for its quality control. The aim was to establish an efficacious combined method and pattern recognition technique for the quality control of ginger. A simple, accurate and reliable method based on high-performance liquid chromatography with photodiode array (HPLC-PDA) detection was developed for establishing the chemical fingerprints of 10 batches of ginger from different markets in China. The method was validated in terms of precision, reproducibility and stability, and the relative standard deviations were all less than 1.57%. On the basis of this method, the fingerprints of 10 batches of ginger samples were obtained, which showed 16 common peaks. Coupled with similarity evaluation software, the similarities between each sample fingerprint and the simulative mean chromatogram were in the range of 0.998-1.000. Then, chemometric techniques, including similarity analysis, hierarchical clustering analysis and principal component analysis, were applied to classify the ginger samples. Consistent results showed that the ginger samples could be successfully classified into two groups. This study revealed that the HPLC-PDA method is simple, sensitive and reliable for fingerprint analysis and, moreover, for pattern recognition and quality control of ginger.

  13. A discriminative method for protein remote homology detection and fold recognition combining Top-n-grams and latent semantic analysis.

    PubMed

    Liu, Bin; Wang, Xiaolong; Lin, Lei; Dong, Qiwen; Wang, Xuan

    2008-12-01

    Protein remote homology detection and fold recognition are central problems in bioinformatics. Currently, discriminative methods based on support vector machines (SVM) are the most effective and accurate methods for solving these problems. A key step to improve the performance of SVM-based methods is to find a suitable representation of protein sequences. In this paper, a novel building block of proteins called Top-n-grams is presented, which contains the evolutionary information extracted from protein sequence frequency profiles. The protein sequence frequency profiles are calculated from the multiple sequence alignments output by PSI-BLAST and converted into Top-n-grams. The protein sequences are transformed into fixed-dimension feature vectors by the occurrence counts of each Top-n-gram. The training vectors are used by SVM to train classifiers, which are then used to classify the test protein sequences. We demonstrate that the prediction performance of remote homology detection and fold recognition can be improved by combining Top-n-grams and latent semantic analysis (LSA), an efficient feature extraction technique from natural language processing. When tested on superfamily and fold benchmarks, the method combining Top-n-grams and LSA gives significantly better results compared to related methods. The method based on Top-n-grams significantly outperforms methods based on many other building blocks, including N-grams, patterns, motifs and binary profiles. Therefore, the Top-n-gram is a good building block for protein sequences and can be widely used in many tasks of computational biology, such as sequence alignment, the prediction of domain boundaries, the design of knowledge-based potentials and the prediction of protein binding sites.
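
    A rough sketch of the representation-plus-LSA pipeline is shown below using scikit-learn: plain character n-gram counts stand in for the profile-based Top-n-grams (which would require PSI-BLAST frequency profiles), TruncatedSVD plays the role of LSA, and an SVM performs the final classification. The toy sequences and labels are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def ngram_lsa_svm(n=3, lsa_dims=50):
    """n-gram counts -> LSA (truncated SVD) -> SVM classifier."""
    vectoriser = CountVectorizer(analyzer="char", ngram_range=(n, n), lowercase=False)
    return make_pipeline(vectoriser, TruncatedSVD(n_components=lsa_dims), SVC(kernel="rbf"))

# Hypothetical amino-acid sequences and binary fold labels.
sequences = ["MKTAYIAKQR", "GDVEKGKKIF", "MKTAYLAKQR", "GDVEKGKKVF"]
labels = [0, 1, 0, 1]
model = ngram_lsa_svm(n=2, lsa_dims=3)
model.fit(sequences, labels)
print(model.predict(["MKTAYIAKQR"]))
```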

  14. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    PubMed

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method of frame synchronization words based on hard decisions is deduced in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on hard decisions, utilizing soft decisions can improve the accuracy of blind recognition. Therefore, combining the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed. The improved algorithm can also be extended to other signal modulation forms. The complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are then given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can blindly recognize the parameters of frame synchronization words. Moreover, the improved algorithm noticeably enhances the accuracy of blind recognition.

  15. Image-based corrosion recognition for ship steel structures

    NASA Astrophysics Data System (ADS)

    Ma, Yucong; Yang, Yang; Yao, Yuan; Li, Shengyuan; Zhao, Xuefeng

    2018-03-01

    Ship structures are inevitably subjected to corrosion in service. Existing image-based methods are influenced by noise in images because they recognize corrosion by extracting features. In this paper, a novel method of image-based corrosion recognition for ship steel structures is proposed. The method utilizes convolutional neural networks (CNN) and is not affected by noise in images. A CNN used to recognize corrosion was designed by fine-tuning an existing CNN architecture and trained on datasets built from a large number of images. By combining the trained CNN classifier with a sliding window technique, the corrosion zone in an image can be recognized.
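
    The sliding-window stage could be sketched as below: the trained classifier (here an arbitrary callable standing in for the CNN) is evaluated on every window position, and positions whose corrosion probability exceeds a threshold are marked. Window size, stride and threshold are illustrative values.

```python
import numpy as np

def sliding_window_corrosion_map(image, classifier, win=64, stride=32):
    """Scan the image with a sliding window and mark windows labelled corroded.

    `classifier` stands in for the trained CNN; it must map a (win, win, 3)
    patch to a probability of corrosion.
    """
    h, w = image.shape[:2]
    heatmap = np.zeros(((h - win) // stride + 1, (w - win) // stride + 1))
    for i, y in enumerate(range(0, h - win + 1, stride)):
        for j, x in enumerate(range(0, w - win + 1, stride)):
            patch = image[y:y + win, x:x + win]
            heatmap[i, j] = classifier(patch)
    return heatmap > 0.5      # boolean corrosion map over window positions
```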

  16. Infrared target recognition based on improved joint local ternary pattern

    NASA Astrophysics Data System (ADS)

    Sun, Junding; Wu, Xiaosheng

    2016-05-01

    This paper presents a simple, efficient, yet robust approach, named joint orthogonal combination of local ternary pattern, for automatic forward-looking infrared target recognition. It describes macroscopic and microscopic textures better than traditional LBP-based methods by fusing a variety of scales. In addition, it can effectively reduce the feature dimensionality. Further, the rotation-invariant and uniform scheme, the robust LTP, and soft concave-convex partition are introduced to enhance its discriminative power. Experimental results demonstrate that the proposed method achieves competitive results compared with state-of-the-art methods.

  17. Emotion Recognition from Chinese Speech for Smart Affective Services Using a Combination of SVM and DBN

    PubMed Central

    Zhu, Lianzhang; Chen, Leiming; Zhao, Dehai

    2017-01-01

    Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel frequency cepstrum coefficient (MFCC), pitch, formant, short-term zero-crossing rate and short-term energy. By comparing statistical features with deep features extracted by a Deep Belief Network (DBN), we attempt to find the best features to identify the emotion status for speech. We propose a novel classification method that combines DBN and SVM (support vector machine) instead of using only one of them. In addition, a conjugate gradient method is applied to train DBN in order to speed up the training process. Gender-dependent experiments are conducted using an emotional speech database created by the Chinese Academy of Sciences. The results show that DBN features can reflect emotion status better than artificial features, and our new classification approach achieves an accuracy of 95.8%, which is higher than using either DBN or SVM separately. Results also show that DBN can work very well for small training databases if it is properly designed. PMID:28737705
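
    A minimal stand-in for the DBN-plus-SVM idea is sketched below with scikit-learn: a single restricted Boltzmann machine layer learns features from the acoustic vectors and an SVM classifies them, rather than either model being used alone. The RBM is only a stand-in for the paper's DBN, and the feature dimensions and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline

def rbm_svm_classifier(n_hidden=64):
    """Learned features (RBM) feeding an SVM, instead of using either alone."""
    return make_pipeline(
        MinMaxScaler(),                       # RBM expects inputs in [0, 1]
        BernoulliRBM(n_components=n_hidden, learning_rate=0.05, n_iter=20),
        SVC(kernel="rbf"),
    )

# X: acoustic feature vectors (e.g. MFCC statistics), y: six emotion labels.
X, y = np.random.rand(200, 40), np.random.randint(0, 6, 200)
model = rbm_svm_classifier()
model.fit(X, y)
print(model.score(X, y))
```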

  18. Localization and recognition of traffic signs for automated vehicle control systems

    NASA Astrophysics Data System (ADS)

    Zadeh, Mahmoud M.; Kasvand, T.; Suen, Ching Y.

    1998-01-01

    We present a computer vision system for the detection and recognition of traffic signs. Such systems are required to assist drivers and for the guidance and control of autonomous vehicles on roads and city streets. For experiments we use sequences of digitized photographs and off-line analysis. The system contains four stages. First, region segmentation based on color pixel classification, called SRSM, limits the search to regions of interest in the scene. Second, we use edge tracing to find parts of the outer edges of signs which are circular or straight, corresponding to the geometrical shapes of traffic signs. The third step is geometrical analysis of the outer edge and preliminary recognition of each candidate region, which may be a potential traffic sign. The final recognition step uses color combinations within each region and model matching. This system may be used for recognition of other types of objects, provided that the geometrical shape and color content remain reasonably constant. The method is reliable, easy to implement, and fast. This differs from the road-sign recognition method in the PROMETHEUS project. The overall structure of the approach is sketched.

  19. Sound quality recognition using optimal wavelet-packet transform and artificial neural network methods

    NASA Astrophysics Data System (ADS)

    Xing, Y. F.; Wang, Y. S.; Shi, L.; Guo, H.; Chen, H.

    2016-01-01

    In accordance with human perceptual characteristics, a method combining the optimal wavelet-packet transform and an artificial neural network, the so-called OWPT-ANN model, is presented for psychoacoustical recognition. Comparisons of time-frequency analysis methods are performed, and an OWPT with 21 critical bands is designed for feature extraction of a sound, as is a three-layer back-propagation ANN for sound quality (SQ) recognition. Focusing on loudness and sharpness, the OWPT-ANN model is applied to vehicle noises under different working conditions. Experimental verifications show that the OWPT can effectively transform a sound into a time-varying energy pattern similar to that in the human auditory system. The errors of loudness and sharpness of vehicle noise from the OWPT-ANN are all less than 5%, which suggests good accuracy of the OWPT-ANN model in SQ recognition. The proposed methodology may be regarded as a promising technique for signal processing in human-hearing related fields in engineering.
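
    A rough sketch of the feature side is given below: a fixed db4 wavelet-packet tree (not the paper's optimized 21-band tree) yields per-band energies that are fed to a small neural network regressor standing in for the three-layer back-propagation ANN. Signal lengths, tree depth and the placeholder loudness targets are assumptions of this sketch.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def wavelet_packet_energies(signal, wavelet="db4", level=5):
    """Energy of each terminal wavelet-packet node, ordered by frequency."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(np.square(n.data)) for n in nodes])

# Hypothetical training set: noise recordings and placeholder loudness ratings.
signals = [np.random.randn(4096) for _ in range(50)]
loudness = np.random.rand(50) * 20
X = np.stack([wavelet_packet_energies(s) for s in signals])
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, loudness)
print(ann.predict(X[:3]))
```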

  20. Improved dense trajectories for action recognition based on random projection and Fisher vectors

    NASA Astrophysics Data System (ADS)

    Ai, Shihui; Lu, Tongwei; Xiong, Yudian

    2018-03-01

    As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. In order to improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced that combines Fisher vectors with random projection. The method reduces the dimensionality of the trajectory descriptors by projecting the high-dimensional descriptors into a low-dimensional subspace using random projection, after defining and analyzing a Gaussian mixture model, and a GMM-FV hybrid model is introduced to encode the trajectory feature vectors. The computational complexity is reduced because random projection shortens the Fisher coding vectors. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with some existing algorithms, the results show that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
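
    The encoding chain can be sketched as follows: trajectory descriptors are first projected to a lower dimension with a Gaussian random projection, a diagonal-covariance GMM is fitted, and a simplified Fisher vector (gradients with respect to the means only) encodes the whole set of descriptors. The descriptor dimensionality, the number of Gaussians and the means-only FV are illustrative simplifications of the GMM-FV model described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.random_projection import GaussianRandomProjection

def fisher_vector_means(descriptors, gmm):
    """Fisher vector restricted to gradients w.r.t. the GMM means
    (a simplified encoding, not the full GMM-FV model)."""
    q = gmm.predict_proba(descriptors)                     # (N, K) posteriors
    diff = descriptors[:, None, :] - gmm.means_[None]      # (N, K, D)
    diff /= np.sqrt(gmm.covariances_)[None]                # diagonal covariances
    fv = (q[:, :, None] * diff).sum(axis=0)
    fv /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()

# Hypothetical trajectory descriptors: 426-D, reduced to 64-D before encoding.
descriptors = np.random.rand(1000, 426)
rp = GaussianRandomProjection(n_components=64, random_state=0).fit(descriptors)
low_dim = rp.transform(descriptors)
gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0).fit(low_dim)
print(fisher_vector_means(low_dim, gmm).shape)             # (8 * 64,)
```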

  1. Integrated Low-Rank-Based Discriminative Feature Learning for Recognition.

    PubMed

    Zhou, Pan; Lin, Zhouchen; Zhang, Chao

    2016-05-01

    Feature learning plays a central role in pattern recognition. In recent years, many representation-based feature learning methods have been proposed and have achieved great success in many applications. However, these methods perform feature learning and subsequent classification in two separate steps, which may not be optimal for recognition tasks. In this paper, we present a supervised low-rank-based approach for learning discriminative features. By integrating latent low-rank representation (LatLRR) with a ridge regression-based classifier, our approach combines feature learning with classification, so that the regulated classification error is minimized. In this way, the extracted features are more discriminative for the recognition tasks. Our approach benefits from a recent discovery on the closed-form solutions to noiseless LatLRR. When there is noise, a robust Principal Component Analysis (PCA)-based denoising step can be added as preprocessing. When the scale of a problem is large, we utilize a fast randomized algorithm to speed up the computation of robust PCA. Extensive experimental results demonstrate the effectiveness and robustness of our method.

  2. Combining point context and dynamic time warping for online gesture recognition

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Chen

    2017-05-01

    Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequences were obtained. However, in many practical applications, a system has to identify gestures before they end to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor with a simple computation, and thus has its superiority in online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Lastly, the experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods especially when gesture information is incomplete.
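
    The windowed matching can be sketched with a band-limited DTW, where the band bounds how far the alignment may wander from the diagonal and therefore how much work each new incoming frame costs. This is a generic Sakoe-Chiba-style band, not necessarily the exact windowing used in the paper.

```python
import numpy as np

def windowed_dtw(seq_a, seq_b, window=10):
    """DTW distance between two feature sequences with a band constraint.

    Cells far from the diagonal are never filled, which bounds the cost of
    extending the match when a new frame of the ongoing gesture arrives.
    """
    n, m = len(seq_a), len(seq_b)
    window = max(window, abs(n - m))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - window), min(m, i + window) + 1):
            cost = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = [[0, 0], [1, 1], [2, 2], [3, 3]]
ongoing = [[0, 0], [1, 2], [2, 2]]           # unfinished gesture observed so far
print(windowed_dtw(ongoing, template, window=2))
```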

  3. Affect Recognition in Adults with Attention-Deficit/Hyperactivity Disorder

    PubMed Central

    Miller, Meghan; Hanford, Russell B.; Fassbender, Catherine; Duke, Marshall; Schweitzer, Julie B.

    2014-01-01

    Objective This study compared affect recognition abilities between adults with and without Attention-Deficit/Hyperactivity Disorder (ADHD). Method The sample included 51 participants (34 men, 17 women) divided into 3 groups: ADHD-Combined Type (ADHD-C; n = 17), ADHD-Predominantly Inattentive Type (ADHD-I; n = 16), and controls (n = 18). The mean age was 34 years. Affect recognition abilities were assessed by the Diagnostic Analysis of Nonverbal Accuracy (DANVA). Results Analyses of Variance showed that the ADHD-I group made more fearful emotion errors relative to the control group. Inattentive symptoms were positively correlated while hyperactive-impulsive symptoms were negatively correlated with affect recognition errors. Conclusion These results suggest that affect recognition abilities may be impaired in adults with ADHD and that affect recognition abilities are more adversely affected by inattentive than hyperactive-impulsive symptoms. PMID:20555036

  4. Elucidating Mechanisms of Molecular Recognition Between Human Argonaute and miRNA Using Computational Approaches.

    PubMed

    Jiang, Hanlun; Zhu, Lizhe; Héliou, Amélie; Gao, Xin; Bernauer, Julie; Huang, Xuhui

    2017-01-01

    MicroRNA (miRNA) and Argonaute (AGO) protein together form the RNA-induced silencing complex (RISC) that plays an essential role in the regulation of gene expression. Elucidating the underlying mechanism of AGO-miRNA recognition is thus of great importance not only for the in-depth understanding of miRNA function but also for inspiring new drugs targeting miRNAs. In this chapter we introduce a combined computational approach of molecular dynamics (MD) simulations, Markov state models (MSMs), and protein-RNA docking to investigate AGO-miRNA recognition. Constructed from MD simulations, MSMs can elucidate the conformational dynamics of AGO at biologically relevant timescales. Protein-RNA docking can then efficiently identify the AGO conformations that are geometrically accessible to miRNA. Using our recent work on human AGO2 as an example, we explain the rationale and the workflow of our method in detail. This combined approach holds great promise to complement experiments in unraveling the mechanisms of molecular recognition between large, flexible, and complex biomolecules.

  5. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    Sparse representation can deal with the lack-of-samples problem because it utilizes all the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method that aims to find a compromise between discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to their contributions. Then we further refine the training samples by an iterative procedure that each time excludes the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and a two-dimensional plus three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the principles for setting the key parameters of the proposed algorithm, which facilitates obtaining high recognition accuracy.
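
    One way to realize the iterative refinement is sketched below: the test sample is represented as a least-squares combination of the remaining training samples, and at each step the sample whose coefficient contributes least is discarded until a target number remains. The least-squares representation and the contribution measure are assumptions of this sketch rather than the paper's exact formulation.

```python
import numpy as np

def refine_training_samples(X_train, y_test_sample, keep=10):
    """Iteratively discard the training sample that contributes least to the
    linear representation of the test sample."""
    idx = np.arange(X_train.shape[0])
    while idx.size > keep:
        A = X_train[idx].T                            # columns are training samples
        coef, *_ = np.linalg.lstsq(A, y_test_sample, rcond=None)
        contrib = np.abs(coef) * np.linalg.norm(A, axis=0)
        idx = np.delete(idx, np.argmin(contrib))      # drop the least useful sample
    return idx                                         # indices of the kept samples

X = np.random.rand(40, 128)        # 40 training palmprint feature vectors
y = np.random.rand(128)            # one test feature vector
print(refine_training_samples(X, y, keep=5))
```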

  6. Characteristics of speaking style and implications for speech recognition.

    PubMed

    Shinozaki, Takahiro; Ostendorf, Mari; Atlas, Les

    2009-09-01

    Differences in speaking style are associated with more or less spectral variability, as well as different modulation characteristics. The greater variation in some styles (e.g., spontaneous speech and infant-directed speech) poses challenges for recognition but possibly also opportunities for learning more robust models, as evidenced by prior work and motivated by child language acquisition studies. In order to investigate this possibility, this work proposes a new method for characterizing speaking style (the modulation spectrum), examines spontaneous, read, adult-directed, and infant-directed styles in this space, and conducts pilot experiments in style detection and sampling for improved speech recognizer training. Speaking style classification is improved by using the modulation spectrum in combination with standard pitch and energy variation. Speech recognition experiments on a small vocabulary conversational speech recognition task show that sampling methods for training with a small amount of data benefit from the new features.

  7. Hybrid simulated annealing and its application to optimization of hidden Markov models for visual speech recognition.

    PubMed

    Lee, Jong-Seok; Park, Cheol Hoon

    2010-08-01

    We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.

  8. Multi-frame knowledge based text enhancement for mobile phone captured videos

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-02-01

    In this study, we explore automated text recognition and enhancement using mobile phone captured videos of store receipts. We propose a method that includes Optical Character Recognition (OCR) enhanced by our proposed Row Based Multiple Frame Integration (RB-MFI) and Knowledge Based Correction (KBC) algorithms. In this method, first, the trained OCR engine is used for recognition; then, the RB-MFI algorithm is applied to the output of the OCR. The RB-MFI algorithm determines and combines the most accurate rows of the text outputs extracted by OCR from multiple frames of the video. After RB-MFI, the KBC algorithm is applied to these rows to correct erroneous characters. Results of the experiments show that the proposed video-based approach, which includes the RB-MFI and KBC algorithms, increases the word recognition rate to 95% and the character recognition rate to 98%.

  9. Identification and location of catenary insulator in complex background based on machine vision

    NASA Astrophysics Data System (ADS)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Locating the insulator precisely is an important prerequisite for fault detection. Because current localization algorithms for insulators in catenary inspection images are not sufficiently accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator is located in a complex environment, SURF features are used to achieve coarse positioning of the target. Then, the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, realizing target recognition and fine localization. Finally, the 3D coordinates of the object's center of mass are preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has definite application value.

  10. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm

    PubMed Central

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-01-01

    This paper discusses the possibility of recognizing and predicting user activities in the IoT (Internet of Things) based smart environment. The activity recognition is usually done through two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, they had some limited performance because they focused only on one part between the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify so varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing the artificial neural network based on the Allen’s temporal relations. The experimental results show that our combined method provides the higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT based smart home. PMID:26007738

  11. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.

    PubMed

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-07-23

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other.

  12. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems

    PubMed Central

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-01-01

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932

  13. A computerized recognition system for the home-based physiotherapy exercises using an RGBD camera.

    PubMed

    Ar, Ilktan; Akgul, Yusuf Sinan

    2014-11-01

    Computerized recognition of home-based physiotherapy exercises has many benefits and has attracted considerable interest in the computer vision community. However, most methods in the literature view this task as a special case of motion recognition. In contrast, we propose to employ the three main components of a physiotherapy exercise (the motion patterns, the stance knowledge, and the exercise object) as different recognition tasks and embed them separately into the recognition system. The low-level information about each component is gathered using machine learning methods. Then, we use a generative Bayesian network to recognize the exercise types by combining the information from these sources at an abstract level, which takes advantage of domain knowledge for a more robust system. Finally, a novel postprocessing step is employed to estimate the exercise repetition counts. The performance evaluation of the system is conducted with a new dataset which contains RGB (red, green, and blue) and depth videos of home-based exercise sessions for commonly applied shoulder and knee exercises. The proposed system works without any body-part segmentation, body-part tracking, joint detection, or temporal segmentation methods. In the end, favorable exercise recognition rates and encouraging results on the estimation of repetition counts are obtained.

  14. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of the large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce its storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that the feature (dimension) selection is a better choice for high-dimensional FV/VLAD than the feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that, many dimensions in FV/VLAD are noise. Throwing them away using feature selection is better than compressing them and useful dimensions altogether using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combining with the 1-bit quantization, feature selection has achieved both higher accuracy and less computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
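
    A minimal sketch of the select-then-quantize idea follows: a per-dimension importance score (here a between-class to within-class variance ratio, one simple supervised criterion, not necessarily the paper's) ranks the FV/VLAD dimensions, the top ones are kept, and the surviving values are reduced to 1-bit sign codes.

```python
import numpy as np

def select_and_binarise(X_train, y_train, X, n_keep=1024):
    """Supervised dimension selection followed by 1-bit quantisation."""
    overall = X_train.mean(axis=0)
    classes = np.unique(y_train)
    between = np.zeros(X_train.shape[1])
    within = np.zeros(X_train.shape[1])
    for c in classes:
        Xc = X_train[y_train == c]
        between += Xc.shape[0] * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    importance = between / (within + 1e-12)
    keep = np.argsort(importance)[::-1][:n_keep]      # highest-importance dims
    return (X[:, keep] > 0).astype(np.uint8)          # sign-based 1-bit codes
```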

  15. Combining approaches to on-line handwriting information retrieval

    NASA Astrophysics Data System (ADS)

    Peña Saldarriaga, Sebastián; Viard-Gaudin, Christian; Morin, Emmanuel

    2010-01-01

    In this work, we propose to combine two quite different approaches for retrieving handwritten documents. Our hypothesis is that different retrieval algorithms should retrieve different sets of documents for the same query; therefore, significant improvements in retrieval performance can be expected. The first approach is based on information retrieval techniques carried out on the noisy texts obtained through handwriting recognition, while the second approach is recognition-free, using a word spotting algorithm. Results show that for texts having a word error rate (WER) lower than 23%, the performance obtained with the combined system is close to the performance obtained on clean digital texts. In addition, for poorly recognized texts (WER > 52%), an improvement of nearly 17% can be observed with respect to the best available baseline method.

  16. Human recognition based on head-shoulder contour extraction and BP neural network

    NASA Astrophysics Data System (ADS)

    Kong, Xiao-fang; Wang, Xiu-qin; Gu, Guohua; Chen, Qian; Qian, Wei-xian

    2014-11-01

    In practical application scenarios like video surveillance and human-computer interaction, human body movements are uncertain because the human body is a non-rigid object. Based on the fact that the head-shoulder part of the human body is less affected by movement and is seldom obscured by other objects, a head-shoulder model with stable characteristics can be applied as a detection feature to describe the human body in human detection and recognition. In order to extract the head-shoulder contour accurately, a head-shoulder model establishment method combining edge detection with the mean-shift clustering algorithm is proposed in this paper. First, an adaptive mixture-of-Gaussians background update method is used to extract targets from the video sequence. Second, edge detection is used to extract the contours of moving objects, and the mean-shift algorithm is combined to cluster parts of the target's contour. Third, the head-shoulder model is established according to the width-to-height ratio of the human head-shoulder region combined with the projection histogram of the binary image, and the eigenvectors of the head-shoulder contour are acquired. Finally, the relationship between the head-shoulder contour eigenvectors and the moving objects is formed by training a back-propagation (BP) neural network classifier, and the head-shoulder model can then be used for human detection and recognition. Experiments have shown that the method combining edge detection and the mean-shift algorithm proposed in this paper can extract the complete head-shoulder contour with low computational complexity and high efficiency.

  17. The Effective Use of Symbols in Teaching Word Recognition to Children with Severe Learning Difficulties: A Comparison of Word Alone, Integrated Picture Cueing and the Handle Technique.

    ERIC Educational Resources Information Center

    Sheehy, Kieron

    2002-01-01

    A comparison is made between a new technique (the Handle Technique), Integrated Picture Cueing, and a Word Alone Method. Results show using a new combination of teaching strategies enabled logographic symbols to be used effectively in teaching word recognition to 12 children with severe learning difficulties. (Contains references.) (Author/CR)

  18. Uyghur face recognition method combining 2DDCT with POEM

    NASA Astrophysics Data System (ADS)

    Yi, Lihamu; Ya, Ermaimaiti

    2017-11-01

    In this paper, to address the reduced recognition rate and poor robustness of Uyghur face recognition under illumination variation and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into 8×8 block matrices, and the block-processed Uyghur face images were converted into the frequency domain using the 2DDCT; secondly, the Uyghur face images were compressed to exclude the perceptually insensitive medium- and high-frequency parts, which reduces the feature dimensions required for the Uyghur face images and further reduces the amount of computation; thirdly, the corresponding POEM histograms of the Uyghur face images were obtained by calculating the POEM features; fourthly, the POEM histograms were concatenated as the texture histogram of the central feature point to obtain the texture features of the Uyghur face feature points; finally, classification of the training samples was carried out using a deep learning algorithm. The simulation results showed that the proposed algorithm further improved the recognition rate on the self-built Uyghur face database, greatly improved the computing speed on the self-built Uyghur face database, and had strong robustness.
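
    The block 2DDCT step can be sketched as below: each 8×8 block is transformed with a two-dimensional DCT and only a small square of low-frequency coefficients is kept, which is one simple way to realize the compression described above; the exact coefficient selection used in the paper may differ.

```python
import numpy as np
from scipy.fft import dctn

def block_2ddct_features(image, block=8, keep=6):
    """Per-block 2D-DCT features: keep the top-left `keep` x `keep`
    coefficients of every block and concatenate them.

    The square low-frequency mask is an illustrative choice of this sketch.
    """
    h, w = image.shape
    feats = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            patch = image[y:y + block, x:x + block].astype(float)
            coeffs = dctn(patch, norm="ortho")
            feats.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(feats)

face = np.random.rand(64, 64)            # stand-in for an aligned face image
print(block_2ddct_features(face).shape)  # 64 blocks * 36 coefficients
```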

  19. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector, using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency feature. Then, we apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.

  20. Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses

    PubMed Central

    Kim, Hyun Seok; Park, Kwang Suk

    2017-01-01

    Most of the retinal prostheses use a head-fixed camera and a video processing unit. Some studies proposed various image processing methods to improve visual perception for patients. However, previous studies only focused on using spatial information. The present study proposes a spatiotemporal pixelization method mimicking fixational eye movements to generate stimulation images for artificial retina arrays by combining spatial and temporal information. Input images were sampled with a resolution that was four times higher than the number of pixel arrays. We subsampled this image and generated four different phosphene images. We then evaluated the recognition scores of characters by sequentially presenting phosphene images with varying pixel array sizes (6 × 6, 8 × 8 and 10 × 10) and stimulus frame rates (10 Hz, 15 Hz, 20 Hz, 30 Hz, and 60 Hz). The proposed method showed the highest recognition score at a stimulus frame rate of approximately 20 Hz. The method also significantly improved the recognition score for complex characters. This method provides a new way to increase practical resolution over restricted spatial resolution by merging the higher resolution image into high-frame time slots. PMID:29073735

  1. Multispectral image fusion for illumination-invariant palmprint recognition

    PubMed Central

    Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied. PMID:28558064

  2. Multispectral image fusion for illumination-invariant palmprint recognition.

    PubMed

    Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied.

  3. Study on recognition algorithm for paper currency numbers based on neural network

    NASA Astrophysics Data System (ADS)

    Li, Xiuyan; Liu, Tiegen; Li, Yuanyao; Zhang, Zhongchuan; Deng, Shichao

    2008-12-01

    Because each banknote carries a unique serial number, paper currency numbers can be put on record, and automatic identification equipment for paper currency numbers can be supplied to the currency circulation market in order to make it convenient for financial sectors to trace banknote circulation and to provide effective supervision of paper currency. At the same time, it is useful for identifying forged notes, blacklisting forged note numbers, and addressing major social problems such as armored cash carrier robbery and money laundering. For the purpose of recognizing paper currency numbers, a recognition algorithm based on neural networks is presented in this paper. The number lines in the original paper currency images are extracted through image processing steps such as image de-noising, skew correction, segmentation, and image normalization. According to the different characteristics of digits and letters in the serial number, two kinds of classifiers are designed. With its associative memory, optimization computation and rapid convergence, the Discrete Hopfield Neural Network (DHNN) is utilized to recognize the letters; with its simple structure, quick learning and global optimality, the Radial Basis Function Neural Network (RBFNN) is adopted to identify the digits. The final recognition results are then obtained by combining the two kinds of recognition results in their regular sequence. The simulation results confirm that the algorithm combining the two recognition methods achieves both a high recognition rate and fast recognition, giving it broad application prospects.

  4. A robust pointer segmentation in biomedical images toward building a visual ontology for biomedical article retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Simpson, Matthew; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-01-01

    Pointers (arrows and symbols) are frequently used in biomedical images to highlight specific image regions of interest (ROIs) that are mentioned in figure captions and/or text discussion. Detection of pointers is the first step toward extracting relevant visual features from ROIs and combining them with textual descriptions for a multimodal (text and image) biomedical article retrieval system. Recently we developed a pointer recognition algorithm based on an edge-based pointer segmentation method, and subsequently reported improvements made on our initial approach involving the use of Active Shape Models (ASM) for pointer recognition and region growing-based method for pointer segmentation. These methods contributed to improving the recall of pointer recognition but not much to the precision. The method discussed in this article is our recent effort to improve the precision rate. Evaluation performed on two datasets and compared with other pointer segmentation methods show significantly improved precision and the highest F1 score.

  5. Chinese wine classification system based on micrograph using combination of shape and structure features

    NASA Astrophysics Data System (ADS)

    Wan, Yi

    2011-06-01

    Chinese wines can be classified or graded by their micrographs. Micrographs of Chinese wines show floccules, sticks and granules of varying shape and size. Different wines have different microstructures and micrographs, and we study the classification of Chinese wines based on the micrographs. The shape and structure of the wine particles in the microstructure are the most important features for recognition and classification of wines. Therefore, we introduce a feature extraction method which can efficiently describe the structure and region shape of a micrograph. First, the micrographs are enhanced using total variation denoising and segmented using a modified Otsu's method based on the Rayleigh distribution. Then features are extracted using the method proposed in this paper, based on area, perimeter and traditional shape features; eight kinds of features, 26 in total, are selected. Finally, a Chinese wine classification system based on micrographs, using a combination of shape and structure features and a BP neural network, is presented. We compare the recognition results for different choices of features (traditional shape features or the proposed features). The experimental results show that a better classification rate is achieved using the combined features proposed in this paper.

  6. Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.

    PubMed

    Benaliouche, Houda; Touahria, Mohamed

    2014-01-01

    This research investigates the comparative performance from three different approaches for multimodal recognition of combined iris and fingerprints: classical sum rule, weighted sum rule, and fuzzy logic method. The scores from the different biometric traits of iris and fingerprint are fused at the matching score and the decision levels. The scores combination approach is used after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for the matching scores combinations at the decision level is the best followed by the classical weighted sum rule and the classical sum rule in order. The performance evaluation of each method is reported in terms of matching time, error rates, and accuracy after doing exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented followed by their comparison with related works in the current literature. The fusion by fuzzy logic decision mimics the human reasoning in a soft and simple way and gives enhanced results.
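
    The simplest of the three fusion rules, the weighted sum after min-max normalization, can be sketched as below; the weight and decision threshold are illustrative values, and the fuzzy-logic decision rule favored by the study is not reproduced here.

```python
import numpy as np

def min_max(scores):
    """Min-max normalisation of a batch of matching scores to [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

def weighted_sum_fusion(iris_scores, finger_scores, w_iris=0.6, threshold=0.5):
    """Score-level fusion by weighted sum after min-max normalisation."""
    fused = w_iris * min_max(iris_scores) + (1 - w_iris) * min_max(finger_scores)
    return fused, fused >= threshold           # fused scores, accept/reject decisions

iris = [0.91, 0.35, 0.60, 0.82]
finger = [0.88, 0.20, 0.75, 0.40]
print(weighted_sum_fusion(iris, finger))
```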

  7. Comparative Study of Multimodal Biometric Recognition by Fusion of Iris and Fingerprint

    PubMed Central

    Benaliouche, Houda; Touahria, Mohamed

    2014-01-01

    This research investigates the comparative performance from three different approaches for multimodal recognition of combined iris and fingerprints: classical sum rule, weighted sum rule, and fuzzy logic method. The scores from the different biometric traits of iris and fingerprint are fused at the matching score and the decision levels. The scores combination approach is used after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for the matching scores combinations at the decision level is the best followed by the classical weighted sum rule and the classical sum rule in order. The performance evaluation of each method is reported in terms of matching time, error rates, and accuracy after doing exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented followed by their comparison with related works in the current literature. The fusion by fuzzy logic decision mimics the human reasoning in a soft and simple way and gives enhanced results. PMID:24605065

  8. Vehicle license plate recognition based on geometry restraints and multi-feature decision

    NASA Astrophysics Data System (ADS)

    Wu, Jianwei; Wang, Zongyue

    2005-10-01

    Vehicle license plate (VLP) recognition is of great importance to many traffic applications. Although researchers have paid much attention to VLP recognition, there is still no fully operational VLP recognition system, for many reasons. This paper discusses a valid and practical method for vehicle license plate recognition based on geometry restraints and multi-feature decision, including statistical and structural features. In general, VLP recognition includes the following steps: location of the VLP, character segmentation, and character recognition. This paper discusses these three steps in detail. The characters of a VLP are often skewed by many factors, which makes recognizing them more difficult; therefore geometry restraints, such as the general ratio of length to width and the perpendicularity of adjacent edges, are used for incline correction. Image moments have been proved to be invariant to translation, rotation and scaling, so the image moment is used as one feature for character recognition. The stroke is the basic element of writing, and hence taking it as a feature is helpful for character recognition. Finally, we take the image moments, the strokes and the number of strokes of each character image, together with some other structural and statistical features, as the multi-feature used to match each character image with sample character images, so that each character image can be recognized by a BP neural net. The proposed method combines statistical and structural features for VLP recognition, and the results show its validity and efficiency.
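
    As a minimal sketch of the image-moment feature mentioned above (assuming OpenCV and a placeholder file name; this is not the paper's full multi-feature pipeline), the seven Hu moments of a binarised character image can be computed as follows.

        # Hypothetical sketch: Hu moment features for one segmented character image
        import cv2
        import numpy as np

        char_img = cv2.imread("character.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
        _, binary = cv2.threshold(char_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        hu = cv2.HuMoments(cv2.moments(binary)).flatten()
        # Log-scale the moments so their magnitudes are comparable before feeding a BP network
        hu_features = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)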

  9. The research of multi-frame target recognition based on laser active imaging

    NASA Astrophysics Data System (ADS)

    Wang, Can-jin; Sun, Tao; Wang, Tin-feng; Chen, Juan

    2013-09-01

    Laser active imaging is suited to conditions such as no temperature difference between target and background, pitch-black night, and poor visibility. It can also be used to detect a faint target at long range or a small target in deep space, with the advantages of high definition and good contrast; in short, it is largely immune to the environment. However, because of the long distance, limited laser energy and atmospheric backscatter, it is impossible to illuminate the whole scene at the same time, so the target in any single frame is unevenly or only partly illuminated, which makes recognition more difficult. At the same time, the speckle noise common in laser active imaging blurs the images. In this paper we study laser active imaging and propose a new target recognition method based on multi-frame images. Firstly, multiple laser pulses are used to obtain sub-images of different parts of the scene. A denoising method combining a homomorphic filter with wavelet-domain SURE is used to suppress speckle noise, and blind deconvolution is introduced to obtain low-noise, clear sub-images. These sub-images are then registered and stitched into a completely and uniformly illuminated scene image. After that, a new target recognition method based on contour moments is proposed: the Canny operator is used to obtain contours, and for each contour the seven invariant Hu moments are calculated to generate the feature vectors. Finally, the feature vectors are input into a BP neural network with two hidden layers for classification. Experimental results indicate that the proposed algorithm achieves a high recognition rate and satisfactory real-time performance for laser active imaging.

  10. Validation of five minimally obstructive methods to estimate physical activity energy expenditure in young adults in semi-standardized settings.

    PubMed

    Schneller, Mikkel B; Pedersen, Mogens T; Gupta, Nidhi; Aadahl, Mette; Holtermann, Andreas

    2015-03-13

    We compared the accuracy of five objective methods, including two newly developed methods combining accelerometry and activity type recognition (Acti4), against indirect calorimetry, for estimating total energy expenditure (EE) of different activities in semi-standardized settings. Fourteen participants performed a standardized and semi-standardized protocol including seven daily life activity types, while having their EE measured by indirect calorimetry. Simultaneously, physical activity was quantified by an ActivPAL3, two ActiGraph GT3X+ devices and an Actiheart. EE was estimated by the standard ActivPAL3 software (ActivPAL), ActiGraph GT3X+ (ActiGraph) and Actiheart (Actiheart), and by a combination of activity type recognition via the Acti4 software and activity counts per minute (CPM) from either a hip- or thigh-worn ActiGraph GT3X+ (AGhip + Acti4 and AGthigh + Acti4). At group level, physical activity EE estimated by Actiheart (MSE = 2.05) and AGthigh + Acti4 (MSE = 0.25) was not significantly different from EE measured by indirect calorimetry, while it was significantly underestimated by ActiGraph, ActivPAL and AGhip + Acti4. AGthigh + Acti4 and Actiheart explained 77% and 45%, respectively, of the individual variation in physical activity EE measured by indirect calorimetry. This study concludes that combining accelerometer data from a thigh-worn ActiGraph GT3X+ with activity type recognition improved the accuracy of activity-specific EE estimation against indirect calorimetry in semi-standardized settings compared to previously validated methods using CPM only.
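
    The idea of combining a recognised activity type with counts per minute can be caricatured as one regression per activity class. The sketch below is purely illustrative: the activity set and all coefficients are invented placeholders, not the Acti4 calibration.

        # Hypothetical sketch: activity-specific EE estimation from counts per minute (CPM)
        ACTIVITY_MODELS = {             # (intercept in MET, slope per count) -- invented values
            "sit":   (1.0, 0.0),
            "stand": (1.2, 0.0),
            "walk":  (2.0, 0.0008),
            "run":   (3.5, 0.0010),
        }

        def estimate_met(activity_type, cpm):
            # Combine the recognised activity type with the accelerometer counts for that minute
            intercept, slope = ACTIVITY_MODELS[activity_type]
            return intercept + slope * cpm

        minutes = [("sit", 30), ("walk", 2400), ("run", 5200)]          # placeholder data
        total_met_minutes = sum(estimate_met(a, c) for a, c in minutes)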

  11. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    NASA Astrophysics Data System (ADS)

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2014-03-01

    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.

  12. Wearable Sensor-Based Human Activity Recognition Method with Multi-Features Extracted from Hilbert-Huang Transform.

    PubMed

    Xu, Huile; Liu, Jinyi; Hu, Haibo; Zhang, Yi

    2016-12-02

    Wearable sensors-based human activity recognition introduces many useful applications and services in health care, rehabilitation training, elderly monitoring and many other areas of human interaction. Existing works in this field mainly focus on recognizing activities by using traditional features extracted from the Fourier transform (FT) or wavelet transform (WT). However, these signal processing approaches are suitable for a linear signal but not for a nonlinear signal. In this paper, we investigate the characteristics of the Hilbert-Huang transform (HHT) for dealing with activity data with properties such as nonlinearity and non-stationarity. A multi-features extraction method based on HHT is then proposed to improve the effect of activity recognition. The extracted multi-features include instantaneous amplitude (IA) and instantaneous frequency (IF) by means of empirical mode decomposition (EMD), as well as instantaneous energy density (IE) and marginal spectrum (MS) derived from Hilbert spectral analysis. Experimental studies are performed to verify the proposed approach by using the PAMAP2 dataset from the University of California, Irvine for wearable sensors-based activity recognition. Moreover, the effect of combining multi-features vs. a single feature is investigated and discussed in the scenario of a dependent subject. The experimental results show that the multi-feature combination can further improve the performance measures. Finally, we test the effect of the multi-feature combination in the scenario of an independent subject. Our experimental results show that we achieve recall, precision, F-measure, and accuracy of 0.9337, 0.9417, 0.9353, and 0.9377, respectively, all better than the results of related works.
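
    A compact way to reproduce the flavour of these HHT features is to decompose each window with EMD and take simple statistics of the instantaneous amplitude and frequency of each IMF. The sketch below assumes the third-party PyEMD package (distributed on pip as EMD-signal) and an arbitrary sampling rate; the energy density and marginal spectrum features are omitted.

        # Hypothetical sketch: IA/IF statistics per IMF as HHT-style activity features
        import numpy as np
        from PyEMD import EMD                  # assumed third-party package (pip: EMD-signal)
        from scipy.signal import hilbert

        fs = 100.0                             # assumed sampling rate
        window = np.random.randn(512)          # placeholder for one windowed sensor channel

        features = []
        for imf in EMD().emd(window):          # empirical mode decomposition
            analytic = hilbert(imf)
            ia = np.abs(analytic)                                               # instantaneous amplitude
            inst_f = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)  # instantaneous frequency
            features += [ia.mean(), ia.std(), inst_f.mean(), inst_f.std()]
        feature_vector = np.array(features)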

  13. Wearable Sensor-Based Human Activity Recognition Method with Multi-Features Extracted from Hilbert-Huang Transform

    PubMed Central

    Xu, Huile; Liu, Jinyi; Hu, Haibo; Zhang, Yi

    2016-01-01

    Wearable sensors-based human activity recognition introduces many useful applications and services in health care, rehabilitation training, elderly monitoring and many other areas of human interaction. Existing works in this field mainly focus on recognizing activities by using traditional features extracted from the Fourier transform (FT) or wavelet transform (WT). However, these signal processing approaches are suitable for a linear signal but not for a nonlinear signal. In this paper, we investigate the characteristics of the Hilbert-Huang transform (HHT) for dealing with activity data with properties such as nonlinearity and non-stationarity. A multi-features extraction method based on HHT is then proposed to improve the effect of activity recognition. The extracted multi-features include instantaneous amplitude (IA) and instantaneous frequency (IF) by means of empirical mode decomposition (EMD), as well as instantaneous energy density (IE) and marginal spectrum (MS) derived from Hilbert spectral analysis. Experimental studies are performed to verify the proposed approach by using the PAMAP2 dataset from the University of California, Irvine for wearable sensors-based activity recognition. Moreover, the effect of combining multi-features vs. a single feature is investigated and discussed in the scenario of a dependent subject. The experimental results show that the multi-feature combination can further improve the performance measures. Finally, we test the effect of the multi-feature combination in the scenario of an independent subject. Our experimental results show that we achieve recall, precision, F-measure, and accuracy of 0.9337, 0.9417, 0.9353, and 0.9377, respectively, all better than the results of related works. PMID:27918414

  14. Classifier dependent feature preprocessing methods

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M., II; Peterson, Gilbert L.

    2008-04-01

    In mobile applications, computational complexity is an issue that limits sophisticated algorithms from being implemented on these devices. This paper provides an initial solution to applying pattern recognition systems on mobile devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to properly apply feature preprocessing tools prior to training classification models in an attempt to reduce computational complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile environment are feature ranking, feature extraction, data preparation and outlier removal. Most desktop systems today are capable of running a majority of the available classification algorithms without concern for processing time, while the same is not true on mobile platforms. As an application of pattern recognition for mobile devices, the recognition system targets the problem of steganalysis, determining if an image contains hidden information. The measure of performance shows that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The methods in this paper are tested on a workstation and a Nokia 6620 (Symbian operating system) camera phone with similar results.

  15. Continuous Chinese sign language recognition with CNN-LSTM

    NASA Astrophysics Data System (ADS)

    Yang, Su; Zhu, Qing

    2017-07-01

    The goal of sign language recognition (SLR) is to translate sign language into text and provide a convenient communication tool between deaf-mute and hearing people. In this paper, we formulate an appropriate model based on a convolutional neural network (CNN) combined with a Long Short-Term Memory (LSTM) network in order to accomplish the continuous recognition task. With the strong representational ability of the CNN, the information in frames captured from Chinese sign language (CSL) videos can be learned and transformed into vectors. Since a video can be regarded as an ordered sequence of frames, an LSTM model is connected to the fully-connected layer of the CNN. As a recurrent neural network (RNN), it is suitable for sequence learning tasks and capable of recognizing patterns defined by temporal distance. Compared with a traditional RNN, the LSTM performs better at storing and accessing information. We evaluate this method on our self-built dataset of 40 daily vocabulary items. The experimental results show that the recognition method with CNN-LSTM can achieve a high recognition rate with small training sets, which will meet the needs of a real-time SLR system.
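
    The architecture sketched below is a generic CNN-LSTM classifier in PyTorch, included only to make the frame-feature-to-sequence idea concrete; the layer sizes, frame resolution and 40-word vocabulary size are assumptions, not the authors' configuration.

        # Hypothetical sketch: per-frame CNN features fed into an LSTM for clip classification
        import torch
        import torch.nn as nn

        class CNNLSTM(nn.Module):
            def __init__(self, num_classes=40, hidden=256):
                super().__init__()
                self.cnn = nn.Sequential(                     # per-frame feature extractor
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                    nn.Flatten(),                             # -> 64*4*4 = 1024 features per frame
                )
                self.lstm = nn.LSTM(1024, hidden, batch_first=True)
                self.fc = nn.Linear(hidden, num_classes)

            def forward(self, clips):                         # clips: (batch, time, 3, H, W)
                b, t = clips.shape[:2]
                frame_feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
                _, (h_n, _) = self.lstm(frame_feats)
                return self.fc(h_n[-1])                       # logits over the sign vocabulary

        logits = CNNLSTM()(torch.randn(2, 16, 3, 64, 64))     # 2 clips of 16 frames each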

  16. Hierarchical Leak Detection and Localization Method in Natural Gas Pipeline Monitoring Sensor Networks

    PubMed Central

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the problems of low recognition efficiency, high false alarm rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, the original monitoring signals are processed with wavelet transform technology to extract single-mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and the characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate the initial recognition results into final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method can effectively improve the accuracy of leak point localization and reduce the undetected rate as well as the false alarm rate. PMID:22368464
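
    For the localisation phase, a one-dimensional time-difference-of-arrival estimate along the pipeline can be written in a few lines. The sketch below assumes two sensors bracketing the leak, an illustrative propagation speed, and invented confidence weights for the weighted average; it is not the paper's algorithm in detail.

        # Hypothetical sketch: TDOA leak localisation with a confidence-weighted average
        def leak_position(sensor_a, sensor_b, delta_t, wave_speed=340.0):
            # Position of the leak, given delta_t = (arrival time at b) - (arrival time at a)
            span = sensor_b - sensor_a
            return sensor_a + (span - wave_speed * delta_t) / 2.0

        estimates = [  # (position estimate, confidence weight from the recognition stage)
            (leak_position(0.0, 1000.0, 0.12), 0.7),
            (leak_position(0.0, 2000.0, 3.05), 0.3),
        ]
        weighted = sum(p * w for p, w in estimates) / sum(w for _, w in estimates)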

  17. Hierarchical leak detection and localization method in natural gas pipeline monitoring sensor networks.

    PubMed

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the problems of low recognition efficiency, high false alarm rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, the original monitoring signals are processed with wavelet transform technology to extract single-mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and the characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate the initial recognition results into final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method can effectively improve the accuracy of leak point localization and reduce the undetected rate as well as the false alarm rate.

  18. Adaptive non-local smoothing-based weberface for illumination-insensitive face recognition

    NASA Astrophysics Data System (ADS)

    Yao, Min; Zhu, Changming

    2017-07-01

    Compensating the illumination of a face image is an important step toward effective face recognition under severe illumination conditions. This paper presents a novel illumination normalization method that specifically considers removing illumination boundaries as well as reducing regional illumination. We begin with an analysis of the commonly used reflectance model and then detail the hybrid use of adaptive non-local smoothing and local information coding based on Weber's law. The effectiveness and advantages of this combination are evidenced visually and experimentally. Results on the Extended YaleB database show that it performs better than several other well-known methods.

  19. Facial expression recognition based on weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During that decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images free of interference. Nowadays, many researchers are tackling the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. Therefore, this paper proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: first, the face images are divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. The experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
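
    To give a feel for the descriptor, the sketch below computes only the differential-excitation half of WLD over one image patch and histograms it; the orientation component, patch tiling and SRC classifier are omitted, and the kernel and bin choices are assumptions.

        # Hypothetical sketch: differential-excitation histogram (one component of WLD)
        import numpy as np
        from scipy.ndimage import convolve

        def wld_excitation_histogram(patch, bins=8):
            patch = patch.astype(float)
            kernel = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]])   # sum of neighbour-centre differences
            diff_sum = convolve(patch, kernel, mode="reflect")
            excitation = np.arctan(diff_sum / (patch + 1e-6))       # Weber-law style ratio
            hist, _ = np.histogram(excitation, bins=bins, range=(-np.pi / 2, np.pi / 2))
            return hist / (hist.sum() + 1e-9)

        hist = wld_excitation_histogram(np.random.randint(0, 256, (16, 16)))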

  20. Recognizing stationary and locomotion activities using combinational of spectral analysis with statistical descriptors features

    NASA Astrophysics Data System (ADS)

    Zainudin, M. N. Shah; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran

    2017-10-01

    Prior knowledge in pervasive computing has recently garnered a lot of attention due to its high demand in various application domains. Human activity recognition (HAR) is one of the applications most widely explored by experts, as it provides valuable information about people. Accelerometer sensor-based approaches are commonly used in HAR research since the sensors are small and are already built into various types of smartphones. However, high inter-class similarity tends to degrade recognition performance. Hence, this work presents an activity recognition method using our proposed features, a combination of spectral analysis with statistical descriptors, which is able to tackle the issue of differentiating stationary and locomotion activities. The noisy signal is filtered using the Fourier transform before two different groups of features, spectral frequency analysis and statistical descriptors, are extracted. The extracted features are then classified using a random forest ensemble classifier. The recognition results show good accuracy for stationary and locomotion activities on the USC-HAD dataset.
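
    The combination of spectral and statistical descriptors can be approximated per window as below; the feature choice, window length and random labels are placeholders rather than the authors' USC-HAD setup.

        # Hypothetical sketch: spectral + statistical features per window, classified with a random forest
        import numpy as np
        from scipy.stats import kurtosis, skew
        from sklearn.ensemble import RandomForestClassifier

        def window_features(window, fs=100.0):
            spectrum = np.abs(np.fft.rfft(window))
            freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
            dominant = freqs[spectrum.argmax()]                    # spectral descriptors
            energy = (spectrum ** 2).sum() / len(window)
            stats = [window.mean(), window.std(), skew(window), kurtosis(window)]
            return np.array([dominant, energy, *stats])

        X = np.vstack([window_features(np.random.randn(256)) for _ in range(100)])
        y = np.random.randint(0, 4, size=100)                      # placeholder activity labels
        clf = RandomForestClassifier(n_estimators=100).fit(X, y)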

  1. Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding

    PubMed Central

    Li, Xin; Guo, Rui; Chen, Chao

    2014-01-01

    Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamical updating of the template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216

  2. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  3. Deep neural network for traffic sign recognition systems: An analysis of spatial transformers and stochastic optimisation methods.

    PubMed

    Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M

    2018-03-01

    This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. Such trials are built to measure the impact of diverse factors with the end goal of designing a Convolutional Neural Network that can improve the state-of-the-art of traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The recognition rate of the proposed Convolutional Neural Network reports an accuracy of 99.71% in the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. A Kinect based sign language recognition system using spatio-temporal features

    NASA Astrophysics Data System (ADS)

    Memiş, Abbas; Albayrak, Songül

    2013-12-01

    This paper presents a sign language recognition system that uses spatio-temporal features on RGB video images and depth maps for dynamic gestures of Turkish Sign Language. The proposed system uses a motion difference and accumulation approach for temporal gesture analysis. The motion accumulation method, an effective approach for temporal-domain analysis of gestures, produces an accumulated motion image by combining differences of successive video frames. Then, the 2D Discrete Cosine Transform (DCT) is applied to the accumulated motion images and the temporal-domain features are transformed into the spatial domain. These processes are performed on both RGB images and depth maps separately. DCT coefficients that represent sign gestures are picked up via zigzag scanning and feature vectors are generated. In order to recognize sign gestures, a K-Nearest Neighbor classifier with Manhattan distance is used. The performance of the proposed sign language recognition system is evaluated on a sign database that contains 1002 isolated dynamic signs belonging to 111 words of Turkish Sign Language (TSL) in three different categories. The proposed sign language recognition system achieves promising success rates.
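
    The temporal-accumulation and DCT steps can be prototyped as follows; keeping a low-frequency coefficient block is used here as a simple stand-in for the zigzag scan, and the clip dimensions are placeholders.

        # Hypothetical sketch: accumulated motion image and 2D DCT features from a frame sequence
        import numpy as np
        from scipy.fftpack import dct

        def accumulated_motion_image(frames):
            # Sum of absolute differences of successive grayscale frames
            frames = np.asarray(frames, dtype=float)
            return np.abs(np.diff(frames, axis=0)).sum(axis=0)

        def dct_features(image, k=8):
            coeffs = dct(dct(image, axis=0, norm="ortho"), axis=1, norm="ortho")
            return coeffs[:k, :k].flatten()          # low-frequency block instead of a full zigzag scan

        frames = np.random.rand(30, 64, 64)           # placeholder clip (30 frames of 64x64)
        feature_vector = dct_features(accumulated_motion_image(frames))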

  5. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    PubMed Central

    Islam, Md. Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method based on the Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
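
    The voting step itself is straightforward; a minimal sketch with invented identity labels is:

        # Hypothetical sketch: majority voting over the decisions of the individual classifiers
        from collections import Counter

        def vote(decisions):
            # decisions: one identity label per classifier (e.g. the four DHMM-based systems)
            label, _ = Counter(decisions).most_common(1)[0]
            return label

        final_identity = vote(["subject_07", "subject_07", "subject_12", "subject_07"])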

  6. Feature and score fusion based multiple classifier selection for iris recognition.

    PubMed

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method based on the Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.

  7. Computerized system for recognition of autism on the basis of gene expression microarray data.

    PubMed

    Latkowski, Tomasz; Osowski, Stanislaw

    2015-01-01

    The aim of this paper is to provide a means to recognize a case of autism using gene expression microarrays. The crucial task is to discover the most important genes which are strictly associated with autism. The paper presents an application of different methods of gene selection, to select the most representative input attributes for an ensemble of classifiers. The set of classifiers is responsible for distinguishing autism data from the reference class. Simultaneous application of a few gene selection methods enables analysis of the ill-conditioned gene expression matrix from different points of view. The results of selection combined with a genetic algorithm and SVM classifier have shown increased accuracy of autism recognition. Early recognition of autism is extremely important for treatment of children and increases the probability of their recovery and return to normal social communication. The results of this research can find practical application in early recognition of autism on the basis of gene expression microarray analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Learning to recognize rat social behavior: Novel dataset and cross-dataset application.

    PubMed

    Lorbach, Malte; Kyriakou, Elisavet I; Poppe, Ronald; van Dam, Elsbeth A; Noldus, Lucas P J J; Veltkamp, Remco C

    2018-04-15

    Social behavior is an important aspect of rodent models. Automated measuring tools that make use of video analysis and machine learning are an increasingly attractive alternative to manual annotation. Because machine learning-based methods need to be trained, it is important that they are validated using data from different experiment settings. To develop and validate automated measuring tools, there is a need for annotated rodent interaction datasets. Currently, the availability of such datasets is limited to two mouse datasets. We introduce the first, publicly available rat social interaction dataset, RatSI. We demonstrate the practical value of the novel dataset by using it as the training set for a rat interaction recognition method. We show that behavior variations induced by the experiment setting can lead to reduced performance, which illustrates the importance of cross-dataset validation. Consequently, we add a simple adaptation step to our method and improve the recognition performance. Most existing methods are trained and evaluated in one experimental setting, which limits the predictive power of the evaluation to that particular setting. We demonstrate that cross-dataset experiments provide more insight in the performance of classifiers. With our novel, public dataset we encourage the development and validation of automated recognition methods. We are convinced that cross-dataset validation enhances our understanding of rodent interactions and facilitates the development of more sophisticated recognition methods. Combining them with adaptation techniques may enable us to apply automated recognition methods to a variety of animals and experiment settings. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Morphological self-organizing feature map neural network with applications to automatic target recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

    The rotation-invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological top-hat transform with the self-organizing feature map neural network, the adaptive topological region is selected. Using the erosion operation, shrinkage of the topological region is achieved. The steerable filter based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher correct recognition rate, more robust adaptability, quicker training, and better generalization.

  10. Analysis of using EMG and mechanical sensors to enhance intent recognition in powered lower limb prostheses

    NASA Astrophysics Data System (ADS)

    Young, A. J.; Kuiken, T. A.; Hargrove, L. J.

    2014-10-01

    Objective. The purpose of this study was to determine the contribution of electromyography (EMG) data, in combination with a diverse array of mechanical sensors, to locomotion mode intent recognition in transfemoral amputees using powered prostheses. Additionally, we determined the effect of adding time history information using a dynamic Bayesian network (DBN) for both the mechanical and EMG sensors. Approach. EMG signals from the residual limbs of amputees have been proposed to enhance pattern recognition-based intent recognition systems for powered lower limb prostheses, but mechanical sensors on the prosthesis—such as inertial measurement units, position and velocity sensors, and load cells—may be just as useful. EMG and mechanical sensor data were collected from 8 transfemoral amputees using a powered knee/ankle prosthesis over basic locomotion modes such as walking, slopes and stairs. An offline study was conducted to determine the benefit of different sensor sets for predicting intent. Main results. EMG information was not as accurate alone as mechanical sensor information (p < 0.05) for any classification strategy. However, EMG in combination with the mechanical sensor data did significantly reduce intent recognition errors (p < 0.05) both for transitions between locomotion modes and steady-state locomotion. The sensor time history (DBN) classifier significantly reduced error rates compared to a linear discriminant classifier for steady-state steps, without increasing the transitional error, for both EMG and mechanical sensors. Combining EMG and mechanical sensor data with sensor time history reduced the average transitional error from 18.4% to 12.2% and the average steady-state error from 3.8% to 1.0% when classifying level-ground walking, ramps, and stairs in eight transfemoral amputee subjects. Significance. These results suggest that a neural interface in combination with time history methods for locomotion mode classification can enhance intent recognition performance; this strategy should be considered for future real-time experiments.

  11. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.

  12. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers’ temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383

  13. Spoofing detection on facial images recognition using LBP and GLCM combination

    NASA Astrophysics Data System (ADS)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    The challenge for a facial-based security system is detecting facial image falsification such as facial image spoofing. Spoofing occurs when someone tries to pose as a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method by analyzing image texture. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using only the LBP feature or the GLCM feature.
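
    A minimal version of an LBP + GLCM texture descriptor, assuming recent scikit-image naming (graycomatrix/graycoprops) and illustrative parameter choices, could look like this; it is not the authors' exact configuration.

        # Hypothetical sketch: concatenated LBP histogram and GLCM statistics as a texture feature
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

        def lbp_glcm_features(gray, P=8, R=1):
            lbp = local_binary_pattern(gray, P, R, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            glcm_stats = [graycoprops(glcm, p)[0, 0] for p in props]
            return np.concatenate([lbp_hist, glcm_stats])

        features = lbp_glcm_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8))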

  14. Human Activity Recognition by Combining a Small Number of Classifiers.

    PubMed

    Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin

    2016-09-01

    We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the output of the sensors. The models are based on a soft-output combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifier combination models against a single classifier that employs all the signals from the sensors. Our models consistently exhibit a reduction of the error rate and an increase in robustness against sensor failures. Our models also outperform other classifier combination models that do not consider soft outputs and a Markovian structure of the human activities.

  15. Signal processing method and system for noise removal and signal extraction

    DOEpatents

    Fu, Chi Yung; Petrich, Loren

    2009-04-14

    A signal processing method and system combining smooth level wavelet pre-processing together with artificial neural networks all in the wavelet domain for signal denoising and extraction. Upon receiving a signal corrupted with noise, an n-level decomposition of the signal is performed using a discrete wavelet transform to produce a smooth component and a rough component for each decomposition level. The nth-level smooth component is then inputted into a corresponding neural network pre-trained to filter out noise in that component by pattern recognition in the wavelet domain. Additional rough components, beginning at the highest level, may also be retained and inputted into corresponding neural networks pre-trained to filter out noise in those components also by pattern recognition in the wavelet domain. In any case, an inverse discrete wavelet transform is performed on the combined output from all the neural networks to recover a clean signal back in the time domain.
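
    The decompose/filter/reconstruct loop can be mimicked with PyWavelets; in the sketch below simple soft thresholding stands in for the patent's pre-trained neural filters, and the wavelet, level and threshold are arbitrary assumptions.

        # Hypothetical sketch: n-level DWT, per-component filtering, and inverse DWT
        import numpy as np
        import pywt

        noisy = np.random.randn(1024)                      # placeholder corrupted signal
        coeffs = pywt.wavedec(noisy, "db4", level=4)       # [smooth_n, rough_n, ..., rough_1]

        def denoise_component(c):
            # Stand-in for the pre-trained neural filters: soft thresholding in the wavelet domain
            return pywt.threshold(c, value=0.5 * np.std(c), mode="soft")

        recovered = pywt.waverec([denoise_component(c) for c in coeffs], "db4")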

  16. The writer independent online handwriting recognition system frog on hand and cluster generative statistical dynamic time warping.

    PubMed

    Bahlmann, Claus; Burkhardt, Hans

    2004-03-01

    In this paper, we give a comprehensive description of our writer-independent online handwriting recognition system frog on hand. The focus of this work concerns the presentation of the classification/training approach, which we call cluster generative statistical dynamic time warping (CSDTW). CSDTW is a general, scalable, HMM-based method for variable-sized, sequential data that holistically combines cluster analysis and statistical sequence modeling. It can handle general classification problems that rely on this sequential type of data, e.g., speech recognition, genome processing, robotics, etc. Contrary to previous attempts, clustering and statistical sequence modeling are embedded in a single feature space and use a closely related distance measure. We show character recognition experiments of frog on hand using CSDTW on the UNIPEN online handwriting database. The recognition accuracy is significantly higher than reported results of other handwriting recognition systems. Finally, we describe the real-time implementation of frog on hand on a Linux Compaq iPAQ embedded device.

  17. New Fault Recognition Method for Rotary Machinery Based on Information Entropy and a Probabilistic Neural Network.

    PubMed

    Jiang, Quansheng; Shen, Yehu; Li, Hua; Xu, Fengyu

    2018-01-24

    Feature recognition and fault diagnosis play an important role in equipment safety and the stable operation of rotating machinery. In order to cope with the complexity of the vibration signal of rotating machinery, a feature fusion model based on information entropy and a probabilistic neural network is proposed in this paper. The new method first uses information entropy theory to extract three kinds of characteristic entropy from vibration signals, namely, singular spectrum entropy, power spectrum entropy, and approximate entropy. Then the feature fusion model is constructed to classify and diagnose the fault signals. The proposed approach can combine comprehensive information from different aspects and is more sensitive to the fault features. The experimental results on simulated fault signals verified the better performance of our proposed approach. On real two-span rotor data, the fault detection accuracy of the new method is more than 10% higher than that of methods using the three kinds of information entropy separately. The new approach is proved to be an effective fault recognition method for rotating machinery.
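
    One of the three entropies, the power spectrum entropy, is simple to compute and gives the flavour of the feature set; the test signal below is a placeholder, not rotor data.

        # Hypothetical sketch: power spectrum entropy of one vibration signal
        import numpy as np

        def power_spectrum_entropy(signal):
            spectrum = np.abs(np.fft.rfft(signal)) ** 2
            p = spectrum / (spectrum.sum() + 1e-12)        # normalise to a probability distribution
            return -np.sum(p * np.log(p + 1e-12))

        test_signal = np.sin(np.linspace(0, 50, 2048)) + 0.1 * np.random.randn(2048)
        entropy_feature = power_spectrum_entropy(test_signal)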

  18. Validation of green-solvent extraction combined with chromatographic chemical fingerprint to evaluate quality of Stevia rebaudiana Bertoni.

    PubMed

    Teo, Chin Chye; Tan, Swee Ngin; Yong, Jean Wan Hong; Hew, Choy Sin; Ong, Eng Shi

    2009-02-01

    An approach that combined green-solvent methods of extraction with chromatographic chemical fingerprints and pattern recognition tools such as principal component analysis (PCA) was used to evaluate the quality of medicinal plants. Pressurized hot water extraction (PHWE) and microwave-assisted extraction (MAE) were used, and their efficiencies in extracting two bioactive compounds, namely stevioside (SV) and rebaudioside A (RA), from Stevia rebaudiana Bertoni (SB) under different cultivation conditions were compared. The proposed methods showed that SV and RA could be extracted from SB using pure water under optimized conditions. The extraction efficiency of the methods was observed to be higher than or comparable to heating under reflux with water. The method precision (RSD, n = 6) was found to vary from 1.91 to 2.86% for the two different methods on different days. Compared to PHWE, MAE has higher extraction efficiency with a shorter extraction time. MAE was also found to extract more chemical constituents and provide distinctive chemical fingerprints for quality control purposes. Thus, a combination of MAE with chromatographic chemical fingerprints and PCA provided a simple and rapid approach for the comparison and classification of medicinal plants from different growth conditions. Hence, the current work highlights the importance of the extraction method in chemical fingerprinting for the classification of medicinal plants from different cultivation conditions with the aid of pattern recognition tools.

  19. Seamless Tracing of Human Behavior Using Complementary Wearable and House-Embedded Sensors

    PubMed Central

    Augustyniak, Piotr; Smoleń, Magdalena; Mikrut, Zbigniew; Kańtoch, Eliasz

    2014-01-01

    This paper presents a multimodal system for seamless surveillance of elderly people in their living environment. The system uses simultaneously a wearable sensor network for each individual and premise-embedded sensors specific for each environment. The paper demonstrates the benefits of using complementary information from two types of mobility sensors: visual flow-based image analysis and an accelerometer-based wearable network. The paper provides results for indoor recognition of several elementary poses and outdoor recognition of complex movements. Instead of complete system description, particular attention was drawn to a polar histogram-based method of visual pose recognition, complementary use and synchronization of the data from wearable and premise-embedded networks and an automatic danger detection algorithm driven by two premise- and subject-related databases. The novelty of our approach also consists in feeding the databases with real-life recordings from the subject, and in using the dynamic time-warping algorithm for measurements of distance between actions represented as elementary poses in behavioral records. The main results of testing our method include: 95.5% accuracy of elementary pose recognition by the video system, 96.7% accuracy of elementary pose recognition by the accelerometer-based system, 98.9% accuracy of elementary pose recognition by the combined accelerometer and video-based system, and 80% accuracy of complex outdoor activity recognition by the accelerometer-based wearable system. PMID:24787640

  20. Attention-Based Recurrent Temporal Restricted Boltzmann Machine for Radar High Resolution Range Profile Sequence Recognition.

    PubMed

    Zhang, Yifan; Gao, Xunzhang; Peng, Xuan; Ye, Jiaqi; Li, Xiang

    2018-05-16

    The High Resolution Range Profile (HRRP) recognition has attracted great concern in the field of Radar Automatic Target Recognition (RATR). However, traditional HRRP recognition methods failed to model high dimensional sequential data efficiently and have a poor anti-noise ability. To deal with these problems, a novel stochastic neural network model named Attention-based Recurrent Temporal Restricted Boltzmann Machine (ARTRBM) is proposed in this paper. RTRBM is utilized to extract discriminative features and the attention mechanism is adopted to select major features. RTRBM is efficient to model high dimensional HRRP sequences because it can extract the information of temporal and spatial correlation between adjacent HRRPs. The attention mechanism is used in sequential data recognition tasks including machine translation and relation classification, which makes the model pay more attention to the major features of recognition. Therefore, the combination of RTRBM and the attention mechanism makes our model effective for extracting more internal related features and choose the important parts of the extracted features. Additionally, the model performs well with the noise corrupted HRRP data. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that our proposed model outperforms other traditional methods, which indicates that ARTRBM extracts, selects, and utilizes the correlation information between adjacent HRRPs effectively and is suitable for high dimensional data or noise corrupted data.

  1. Recognition of medication information from discharge summaries using ensembles of classifiers.

    PubMed

    Doan, Son; Collier, Nigel; Xu, Hua; Pham, Hoang Duy; Tu, Minh Phuong

    2012-05-07

    Extraction of clinical information such as medications or problems from clinical text is an important task of clinical natural language processing (NLP). Rule-based methods are often used in clinical NLP systems because they are easy to adapt and customize. Recently, supervised machine learning methods have proven to be effective in clinical NLP as well. However, combining different classifiers to further improve the performance of clinical entity recognition systems has not been investigated extensively. Combining classifiers into an ensemble classifier presents both challenges and opportunities to improve performance in such NLP tasks. We investigated ensemble classifiers that used different voting strategies to combine outputs from three individual classifiers: a rule-based system, a support vector machine (SVM) based system, and a conditional random field (CRF) based system. Three voting methods were proposed and evaluated using the annotated data sets from the 2009 i2b2 NLP challenge: simple majority, local SVM-based voting, and local CRF-based voting. Evaluation on 268 manually annotated discharge summaries from the i2b2 challenge showed that the local CRF-based voting method achieved the best F-score of 90.84% (94.11% Precision, 87.81% Recall) for 10-fold cross-validation. We then compared our systems with the first-ranked system in the challenge by using the same training and test sets. Our system based on majority voting achieved a better F-score of 89.65% (93.91% Precision, 85.76% Recall) than the previously reported F-score of 89.19% (93.78% Precision, 85.03% Recall) by the first-ranked system in the challenge. Our experimental results using the 2009 i2b2 challenge datasets showed that ensemble classifiers that combine individual classifiers into a voting system could achieve better performance than a single classifier in recognizing medication information from clinical text. It suggests that simple strategies that can be easily implemented such as majority voting could have the potential to significantly improve clinical entity recognition.

  2. Using online handwriting and audio streams for mathematical expressions recognition: a bimodal approach

    NASA Astrophysics Data System (ADS)

    Medjkoune, Sofiane; Mouchère, Harold; Petitrenaud, Simon; Viard-Gaudin, Christian

    2013-01-01

    The work reported in this paper concerns the problem of mathematical expressions recognition. This task is known to be a very hard one. We propose to alleviate the difficulties by taking into account two complementary modalities. The modalities referred to are handwriting and audio ones. To combine the signals coming from both modalities, various fusion methods are explored. Performances evaluated on the HAMEX dataset show a significant improvement compared to a single modality (handwriting) based system.

  3. An adaptive deep Q-learning strategy for handwritten digit recognition.

    PubMed

    Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min

    2018-02-22

    Handwritten digits recognition is a challenging problem in recent years. Although many deep learning-based classification algorithms are studied for handwritten digits recognition, the recognition accuracy and running time still need to be further improved. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning and the decision-making of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of original images using an adaptive deep auto-encoder (ADAE), and the extracted features are considered as the current states of Q-learning algorithm. Second, Q-ADBN receives Q-function (reward signal) during recognition of the current states, and the final handwritten digits recognition is implemented by maximizing the Q-function using Q-learning algorithm. Finally, experimental results from the well-known MNIST dataset show that the proposed Q-ADBN has a superiority to other similar methods in terms of accuracy and running time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Color constancy in 3D-2D face recognition

    NASA Astrophysics Data System (ADS)

    Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.

    2013-05-01

    Face is one of the most popular biometric modalities. However, up to now, color is rarely actively used in face recognition. Yet, it is well-known that when a person recognizes a face, color cues can become as important as shape, especially when combined with the ability of people to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of including known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods for estimating the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, one can improve face recognition rates by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.

  5. Weighted score-level feature fusion based on Dempster-Shafer evidence theory for action recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Guoliang; Jia, Songmin; Li, Xiuzhi; Zhang, Xiangyin

    2018-01-01

    The majority of human action recognition methods use multifeature fusion strategy to improve the classification performance, where the contribution of different features for specific action has not been paid enough attention. We present an extendible and universal weighted score-level feature fusion method using the Dempster-Shafer (DS) evidence theory based on the pipeline of bag-of-visual-words. First, the partially distinctive samples in the training set are selected to construct the validation set. Then, local spatiotemporal features and pose features are extracted from these samples to obtain evidence information. The DS evidence theory and the proposed rule of survival of the fittest are employed to achieve evidence combination and calculate optimal weight vectors of every feature type belonging to each action class. Finally, the recognition results are deduced via the weighted summation strategy. The performance of the established recognition framework is evaluated on Penn Action dataset and a subset of the joint-annotated human metabolome database (sub-JHMDB). The experiment results demonstrate that the proposed feature fusion method can adequately exploit the complementarity among multiple features and improve upon most of the state-of-the-art algorithms on Penn Action and sub-JHMDB datasets.
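
    Dempster's rule of combination, which underlies the fusion step described above, can be written compactly for mass functions over singleton hypotheses; the action labels and masses below are invented, and the paper's weighting scheme is not reproduced.

        # Hypothetical sketch: Dempster's rule of combination for two basic probability assignments
        def ds_combine(m1, m2):
            # m1, m2: dicts mapping frozensets of class labels to mass (singletons only, for brevity)
            combined, conflict = {}, 0.0
            for a, ma in m1.items():
                for b, mb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + ma * mb
                    else:
                        conflict += ma * mb
            return {k: v / (1.0 - conflict) for k, v in combined.items()}

        m_pose = {frozenset({"run"}): 0.6, frozenset({"walk"}): 0.4}   # pose-feature evidence
        m_stip = {frozenset({"run"}): 0.7, frozenset({"walk"}): 0.3}   # spatiotemporal-feature evidence
        fused = ds_combine(m_pose, m_stip)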

  6. Integrating conventional and inverse representation for face recognition.

    PubMed

    Xu, Yong; Li, Xuelong; Yang, Jian; Lai, Zhihui; Zhang, David

    2014-10-01

    Representation-based classification methods are all constructed on the basis of the conventional representation, which first expresses the test sample as a linear combination of the training samples and then exploits the deviation between the test sample and the expression result of every class to perform classification. However, this deviation does not always accurately reflect the difference between the test sample and each class. In this paper, we propose a novel representation-based classification method for face recognition. This method integrates the conventional and inverse representation-based classification to better recognize the face. It first produces the conventional representation of the test sample, i.e., uses a linear combination of the training samples to represent the test sample. Then it obtains the inverse representation, i.e., provides an approximation representation of each training sample of a subject by exploiting the test sample and training samples of the other subjects. Finally, the proposed method exploits the conventional and inverse representations to generate two kinds of scores of the test sample with respect to each class and combines them to recognize the face. The paper shows the theoretical foundation and rationale of the proposed method. Moreover, this paper for the first time shows that a basic nature of the human face, i.e., the symmetry of the face, can be exploited to generate new training and test samples. As these new samples reflect possible appearances of the face, using them enables higher accuracy. The experiments show that the proposed conventional and inverse representation-based linear regression classification (CIRLRC), an improvement to linear regression classification (LRC), can obtain very high accuracy and greatly outperforms the naive LRC and other state-of-the-art conventional representation-based face recognition methods. The accuracy of CIRLRC can be 10% greater than that of LRC.
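
    The conventional representation the paper builds on is linear regression classification (LRC); the following sketch shows a plain LRC decision (class-wise least-squares reconstruction, smallest residual wins) on synthetic data, without the inverse-representation score or the symmetry-based sample generation described in the abstract.

```python
import numpy as np

def lrc_predict(test_sample, class_samples):
    """Linear regression classification (LRC): represent the test sample as a
    linear combination of each class's training samples (least squares) and
    assign the class with the smallest reconstruction residual."""
    residuals = {}
    for label, X in class_samples.items():      # X: (n_features, n_class_samples)
        beta, *_ = np.linalg.lstsq(X, test_sample, rcond=None)
        residuals[label] = np.linalg.norm(test_sample - X @ beta)
    return min(residuals, key=residuals.get), residuals

# toy usage with random data standing in for vectorized face images
rng = np.random.default_rng(1)
classes = {c: rng.normal(size=(100, 8)) for c in ("subject_0", "subject_1")}
probe = classes["subject_1"] @ rng.normal(size=8) + 0.01 * rng.normal(size=100)
label, _ = lrc_predict(probe, classes)
```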

  7. Object and event recognition for stroke rehabilitation

    NASA Astrophysics Data System (ADS)

    Ghali, Ahmed; Cunningham, Andrew S.; Pridmore, Tony P.

    2003-06-01

    Stroke is a major cause of disability and health care expenditure around the world. Existing stroke rehabilitation methods can be effective but are costly and need to be improved. Even modest improvements in the effectiveness of rehabilitation techniques could produce large benefits in terms of quality of life. The work reported here is part of an ongoing effort to integrate virtual reality and machine vision technologies to produce innovative stroke rehabilitation methods. We describe a combined object recognition and event detection system that provides real-time feedback to stroke patients performing everyday kitchen tasks necessary for independent living, e.g., making a cup of coffee. The image plane position of each object, including the patient's hand, is monitored using histogram-based recognition methods. The relative positions of hand and objects are then reported to a task monitor that compares the patient's actions against a model of the target task. A prototype system has been constructed and is currently undergoing technical and clinical evaluation.

  8. The location and recognition of anti-counterfeiting code image with complex background

    NASA Astrophysics Data System (ADS)

    Ni, Jing; Liu, Quan; Lou, Ping; Han, Ping

    2017-07-01

    The order of the cigarette market is a key issue in the tobacco business system. The anti-counterfeiting code, as an effective anti-counterfeiting technology, can identify counterfeit goods and effectively maintain the normal order of the market and consumers' rights and interests. Anti-counterfeiting code images obtained by the tobacco recognizer suffer from complex backgrounds, light interference, and other problems. To solve these problems, this paper proposes a locating method based on the Susan operator, combined with a sliding window and line scanning. To reduce the interference of background and noise, we extract the red component of the image and convert the color image into a gray image. For confusable characters, a recognition-result correction step based on template matching is adopted to improve the recognition rate. With this method, the anti-counterfeiting code can be located and recognized correctly in images with complex backgrounds. The experimental results show the effectiveness and feasibility of the approach.
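
    For the correction step, a minimal sketch of red-component extraction followed by normalized template matching is given below (using OpenCV); the template images, file names, and the blur step are assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def recognize_code_character(image_bgr, templates):
    """Illustrative post-processing step: take the red component of the code
    region and resolve a confusable character by normalized template matching
    against a dictionary of character templates (templates are hypothetical)."""
    red = image_bgr[:, :, 2]                        # OpenCV stores BGR; index 2 is red
    red = cv2.GaussianBlur(red, (3, 3), 0)          # light smoothing (assumption)
    best_char, best_score = None, -1.0
    for char, templ in templates.items():           # templ: grayscale template image
        result = cv2.matchTemplate(red, templ, cv2.TM_CCOEFF_NORMED)
        _, score, _, _ = cv2.minMaxLoc(result)
        if score > best_score:
            best_char, best_score = char, score
    return best_char, best_score

# hypothetical usage (paths are placeholders)
# img = cv2.imread("code_region.png")
# templates = {c: cv2.imread(f"templates/{c}.png", cv2.IMREAD_GRAYSCALE) for c in "0123456789"}
# char, score = recognize_code_character(img, templates)
```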

  9. Can we recognize horses by their ocular biometric traits using deep convolutional neural networks?

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Szadkowski, Mateusz

    2017-08-01

    This paper aims at determining the viability of horse recognition by means of ocular biometrics and deep convolutional neural networks (deep CNNs). Fast and accurate identification of race horses before racing is crucial for ensuring that exactly the horses that were declared are participating, using methods that are non-invasive and friendly to these delicate animals. As typical iris recognition methods require a lot of fine-tuning of method parameters and high-quality data, CNNs seem like a natural candidate for recognition thanks to their potentially excellent abilities in describing texture, combined with ease of implementation in an end-to-end manner. Also, with such an approach we can easily utilize both iris and periocular features without constructing complicated algorithms for each. We thus present a simple CNN classifier, able to correctly identify almost 80% of the samples in an identification scenario, and to give an equal error rate (EER) of less than 10% in a verification scenario.

  10. [A new peak detection algorithm of Raman spectra].

    PubMed

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors proposed a new Raman peak recognition method named bi-scale correlation algorithm. The algorithm uses the combination of the correlation coefficient and the local signal-to-noise ratio under two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method through MATLAB, and then tested the algorithm with real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the algorithm, while it is 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy with the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and the standard deviations of the peak position identification error of the algorithm are both less than that of the continuous wavelet transform method. Simulation analysis and experimental verification prove that the new algorithm possesses the following advantages: no needs of human intervention, no needs of de-noising and background removal operation, higher recognition speed and higher recognition accuracy. The proposed algorithm is operable in Raman peak identification.
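
    The exact bi-scale correlation algorithm is not spelled out in the abstract; the sketch below is an assumed reimplementation of the idea: a point is flagged as a candidate peak only when its local window correlates strongly with a Gaussian peak template at two scales and its local signal-to-noise ratio is high. The scales and thresholds are illustrative values, not the paper's.

```python
import numpy as np

def local_corr(spectrum, width):
    """Pearson correlation of each local window with a Gaussian peak template."""
    x = np.arange(-width, width + 1)
    template = np.exp(-0.5 * (x / (width / 2.0)) ** 2)
    template -= template.mean()
    corr = np.zeros_like(spectrum, dtype=float)
    for i in range(width, len(spectrum) - width):
        win = spectrum[i - width:i + width + 1].astype(float)
        win -= win.mean()
        denom = np.linalg.norm(win) * np.linalg.norm(template)
        corr[i] = win @ template / denom if denom > 0 else 0.0
    return corr

def detect_peaks(spectrum, scales=(3, 9), corr_thr=0.8, snr_thr=6.0):
    """Flag candidate peak points that look peak-like at both scales and whose
    local SNR (height over the median, relative to a robust noise estimate) is high."""
    noise = np.median(np.abs(np.diff(spectrum))) + 1e-12
    snr = (spectrum - np.median(spectrum)) / noise
    c_small, c_large = (local_corr(spectrum, s) for s in scales)
    return np.where((c_small > corr_thr) & (c_large > corr_thr) & (snr > snr_thr))[0]

# toy usage: a synthetic spectrum with two peaks plus noise
x = np.arange(1000)
spec = 50 * np.exp(-0.5 * ((x - 300) / 4) ** 2) + 80 * np.exp(-0.5 * ((x - 700) / 6) ** 2)
spec += np.random.default_rng(0).normal(0, 1, size=x.size)
print(detect_peaks(spec))
```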

  11. Autonomous space target recognition and tracking approach using star sensors based on a Kalman filter.

    PubMed

    Ye, Tao; Zhou, Fuqiang

    2015-04-10

    When imaged by detectors, space targets (including satellites and debris) and background stars have similar point-spread functions, and both objects appear to change as detectors track targets. Therefore, traditional tracking methods cannot separate targets from stars and cannot directly recognize targets in 2D images. Consequently, we propose an autonomous space target recognition and tracking approach using a star sensor technique and a Kalman filter (KF). A two-step method for subpixel-scale detection of star objects (including stars and targets) is developed, and the combination of the star sensor technique and a KF is used to track targets. The experimental results show that the proposed method is adequate for autonomously recognizing and tracking space targets.
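
    As an illustration of the Kalman-filter tracking component, the following is a generic constant-velocity filter over 2-D position measurements; the state model and noise covariances are assumptions for the sketch, not the authors' star-sensor formulation.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)        # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)        # only position is measured
Q = 1e-3 * np.eye(4)                             # process noise covariance (assumed)
R = 1e-1 * np.eye(2)                             # measurement noise covariance (assumed)

def kf_step(x, P, z):
    """One predict/update cycle given state x, covariance P, measurement z."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# toy usage: track a point moving diagonally at one pixel per frame
rng = np.random.default_rng(0)
x, P = np.zeros(4), np.eye(4)
for t in range(1, 20):
    z = np.array([t, t], dtype=float) + rng.normal(0, 0.3, 2)
    x, P = kf_step(x, P, z)
```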

  12. A fingerprint classification algorithm based on combination of local and global information

    NASA Astrophysics Data System (ADS)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

    Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular point detection typically considers only local information, these classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of the fingerprint. Firstly, we use local information to detect singular points and measure their quality, considering orientation structure and image texture in adjacent areas. Furthermore, a global orientation model is adopted to measure the reliability of the group of singular points. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.

  13. An Intelligent Systems Approach to Automated Object Recognition: A Preliminary Study

    USGS Publications Warehouse

    Maddox, Brian G.; Swadley, Casey L.

    2002-01-01

    Attempts at fully automated object recognition systems have met with varying levels of success over the years. However, none of the systems have achieved high enough accuracy rates to be run unattended. One of the reasons for this may be that they are designed from the computer's point of view and rely mainly on image-processing methods. A better solution to this problem may be to make use of modern advances in computational intelligence and distributed processing to try to mimic how the human brain is thought to recognize objects. As humans combine cognitive processes with detection techniques, such a system would combine traditional image-processing techniques with computer-based intelligence to determine the identity of various objects in a scene.

  14. Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks.

    PubMed

    Wei, Qikang; Chen, Tao; Xu, Ruifeng; He, Yulan; Gui, Lin

    2016-01-01

    The recognition of disease and chemical named entities in scientific articles is a very important subtask in information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of disease named entities is rather more difficult than that of chemical names. Although there are some remarkable chemical named entity recognition systems available online, such as ChemSpot and tmChem, publicly available recognition systems for disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on a conditional random fields model with a rule-based post-processing module. The other is based on bidirectional recurrent neural networks. Then the named entities recognized by each of the DNER models are fed into a support vector machine classifier for combining results. Finally, each recognized disease named entity is normalized to a medical subject heading disease name by using a vector space model-based method. Experimental results show that, using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level, respectively, on the testing data of the chemical-disease relation task in BioCreative V. Database URL: http://219.223.252.210:8080/SS/cdr.html. © The Author(s) 2016. Published by Oxford University Press.
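
    The combination stage feeds both models' outputs into an SVM; the sketch below illustrates that pattern with hypothetical per-mention features (agreement flags and confidences) and toy labels, not the paper's actual feature set.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative sketch of the combination step only: each candidate disease
# mention found by either tagger becomes one row whose features encode
# agreement and confidence; the SVM decides keep vs. discard.
X_train = np.array([
    # [found_by_crf, found_by_rnn, crf_confidence, rnn_confidence]
    [1, 1, 0.95, 0.90],
    [1, 0, 0.60, 0.00],
    [0, 1, 0.00, 0.85],
    [1, 1, 0.40, 0.45],
])
y_train = np.array([1, 0, 1, 0])                 # 1 = accept mention, 0 = reject

combiner = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
print(combiner.predict([[1, 1, 0.8, 0.7]]))
```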

  15. A probabilistic union model with automatic order selection for noisy speech recognition.

    PubMed

    Jancovic, P; Ming, J

    2001-09-01

    A critical issue in exploiting the potential of the sub-band-based approach to robust speech recognition is the method of combining the sub-band observations, for selecting the bands unaffected by noise. A new method for this purpose, i.e., the probabilistic union model, was recently introduced. This model has been shown to be capable of dealing with band-limited corruption, requiring no knowledge about the band position and statistical distribution of the noise. A parameter within the model, which we call its order, gives the best results when it equals the number of noisy bands. Since this information may not be available in practice, in this paper we introduce an automatic algorithm for selecting the order, based on the state duration pattern generated by the hidden Markov model (HMM). The algorithm has been tested on the TIDIGITS database corrupted by various types of additive band-limited noise with unknown noisy bands. The results have shown that the union model equipped with the new algorithm can achieve a recognition performance similar to that achieved when the number of noisy bands is known. The results show a very significant improvement over the traditional full-band model, without requiring prior information on either the position or the number of noisy bands. The principle of the algorithm for selecting the order based on state duration may also be applied to other sub-band combination methods.

  16. Con-Text: Text Detection for Fine-grained Object Classification.

    PubMed

    Karaoglu, Sezer; Tao, Ran; van Gemert, Jan C; Gevers, Theo

    2017-05-24

    This work focuses on fine-grained object classification using recognized scene text in natural images. While the state of the art relies on visual cues only, this paper is the first work to propose combining textual and visual cues. Another novelty is the textual cue extraction. Unlike state-of-the-art text detection methods, we focus more on the background than on text regions. Once text regions are detected, they are further processed by two text recognition methods, i.e., the ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to perform textual cue encoding, bi- and trigrams are formed between the recognized characters by considering the proposed spatial pairwise constraints. Finally, the extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available datasets: ICDAR03, ICDAR13, Con-Text and Flickr-logo. We improve the state-of-the-art end-to-end character recognition by a large margin of 15% on ICDAR03. We show that textual cues are useful in addition to visual cues for fine-grained classification. We show that textual cues are also useful for logo retrieval. Adding textual cues outperforms visual-only and textual-only approaches in fine-grained classification (70.7% versus 60.3%) and logo retrieval (57.4% versus 54.8%).

  17. Combining color and shape information for illumination-viewpoint invariant object recognition.

    PubMed

    Diplaros, Aristeidis; Gevers, Theo; Patras, Ioannis

    2006-01-01

    In this paper, we propose a new scheme that merges color- and shape-invariant information for object recognition. To obtain robustness against photometric changes, color-invariant derivatives are computed first. Color invariance is an important aspect of any object recognition scheme, as color changes considerably with the variation in illumination, object pose, and camera viewpoint. These color invariant derivatives are then used to obtain similarity invariant shape descriptors. Shape invariance is equally important as, under a change in camera viewpoint and object pose, the shape of a rigid object undergoes a perspective projection on the image plane. Then, the color and shape invariants are combined in a multidimensional color-shape context which is subsequently used as an index. As the indexing scheme makes use of a color-shape invariant context, it provides a high-discriminative information cue robust against varying imaging conditions. The matching function of the color-shape context allows for fast recognition, even in the presence of object occlusion and cluttering. From the experimental results, it is shown that the method recognizes rigid objects with high accuracy in 3-D complex scenes and is robust against changing illumination, camera viewpoint, object pose, and noise.

  18. Toward noncooperative iris recognition: a classification approach using multiple signatures.

    PubMed

    Proença, Hugo; Alexandre, Luís A

    2007-04-01

    This paper focuses on noncooperative iris recognition, i.e., the capture of iris images at large distances, under less controlled lighting conditions, and without active participation of the subjects. This increases the probability of capturing very heterogeneous images (regarding focus, contrast, or brightness) and with several noise factors (iris obstructions and reflections). Current iris recognition systems are unable to deal with noisy data and substantially increase their error rates, especially the false rejections, in these conditions. We propose an iris classification method that divides the segmented and normalized iris image into six regions, makes an independent feature extraction and comparison for each region, and combines each of the dissimilarity values through a classification rule. Experiments show a substantial decrease, higher than 40 percent, of the false rejection rates in the recognition of noisy iris images.

  19. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transform (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature set. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently determine the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity. PMID:24683317

  20. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. Firstly, we introduce the process of feature extraction and representation based on the scale invariant feature transform (SIFT) in detail. Secondly, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature set. Thirdly, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to efficiently determine the different subregions' weights, and the weighted subregion matching scores are then combined to generate the final decision. The experimental results, on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1), demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computation complexity.

  1. Teaching Beginning Reading Sounds to Head Start Youngsters.

    ERIC Educational Resources Information Center

    Von Hilsheimer, Catherine

    The effectiveness of the Three Owls Reading Method was tested, using 53 children from a Head Start program. The Three Owls system combines sight, phonic, and linguistic reading methods with movement and touch modes of instruction. After a pretest for phonic letter recognition, the students were matched and assigned to one of two classes in a…

  2. New approach for segmentation and recognition of handwritten numeral strings

    NASA Astrophysics Data System (ADS)

    Sadri, Javad; Suen, Ching Y.; Bui, Tien D.

    2004-12-01

    In this paper, we propose a new system for segmentation and recognition of unconstrained handwritten numeral strings. The system uses a combination of foreground and background features for segmentation of touching digits. The method introduces new algorithms for traversing the top/bottom-foreground-skeletons of the touched digits, and for finding feature points on these skeletons, and matching them to build all the segmentation paths. For the first time a genetic representation is used to show all the segmentation hypotheses. Our genetic algorithm tries to search and evolve the population of candidate segmentations and finds the one with the highest confidence for its segmentation and recognition. We have also used a new method for feature extraction which lowers the variations in the shapes of the digits, and then a MLP neural network is utilized to produce the labels and confidence values for those digits. The NIST SD19 and CENPARMI databases are used for evaluating the system. Our system can get a correct segmentation-recognition rate of 96.07% with rejection rate of 2.61% which compares favorably with those that exist in the literature.

  3. New approach for segmentation and recognition of handwritten numeral strings

    NASA Astrophysics Data System (ADS)

    Sadri, Javad; Suen, Ching Y.; Bui, Tien D.

    2005-01-01

    In this paper, we propose a new system for segmentation and recognition of unconstrained handwritten numeral strings. The system uses a combination of foreground and background features for segmentation of touching digits. The method introduces new algorithms for traversing the top/bottom-foreground-skeletons of the touched digits, and for finding feature points on these skeletons, and matching them to build all the segmentation paths. For the first time a genetic representation is used to show all the segmentation hypotheses. Our genetic algorithm tries to search and evolve the population of candidate segmentations and finds the one with the highest confidence for its segmentation and recognition. We have also used a new method for feature extraction which lowers the variations in the shapes of the digits, and then a MLP neural network is utilized to produce the labels and confidence values for those digits. The NIST SD19 and CENPARMI databases are used for evaluating the system. Our system can get a correct segmentation-recognition rate of 96.07% with rejection rate of 2.61% which compares favorably with those that exist in the literature.

  4. Generation, recognition, and consistent fusion of partial boundary representations from range images

    NASA Astrophysics Data System (ADS)

    Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang

    1994-10-01

    This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted to a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D cartesian coordinates. Boundary representations (Brep's) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching and fusing Brep's are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2.5D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Brep's corresponding to different range sensors or viewpoints can be merged into a consistent, complete and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.

  5. Sparse coding joint decision rule for ear print recognition

    NASA Astrophysics Data System (ADS)

    Guermoui, Mawloud; Melaab, Djamel; Mekhalfi, Mohamed Lamine

    2016-09-01

    Human ear recognition has been promoted as a profitable biometric over the past few years. Compared with other modalities, such as the face and iris, which have undergone significant investigation in the literature, the ear pattern is still relatively uncommon. We put forth a sparse coding-induced decision-making scheme for ear recognition. It jointly involves the reconstruction residuals and the respective reconstruction coefficients pertaining to the input features (co-occurrence of adjacent local binary patterns) for a further fusion. We particularly show that combining both components (i.e., the residuals as well as the coefficients) yields better outcomes than when either of them is used singly. The proposed method has been evaluated on two benchmark datasets, namely IITD1 (125 subjects) and IITD2 (221 subjects). The recognition rates of the suggested scheme amount to 99.5% and 98.95% on the two datasets, respectively, which suggests that our method stands out well against reference state-of-the-art methodologies. Furthermore, experiments show that the presented scheme manifests promising robustness under large-scale occlusion scenarios.

  6. Application of automatic threshold in dynamic target recognition with low contrast

    NASA Astrophysics Data System (ADS)

    Miao, Hua; Guo, Xiaoming; Chen, Yu

    2014-11-01

    Hybrid photoelectric joint transform correlators can achieve automatic real-time recognition with high precision through the combination of optical and electronic devices. When recognizing low-contrast targets with a photoelectric joint transform correlator, because of differences in attitude, brightness, and grayscale between target and template, only four to five frames of a dynamic target can be recognized without any processing. A CCD camera is used to capture the dynamic target images, and its capture rate is 25 frames per second. Automatic thresholding has many advantages, such as fast processing speed, effective shielding of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so this method plays a very important role in target recognition with the optical correlation method. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because outline information is broken to some extent. In most cases, the optimal threshold is obtained by manual intervention. Aiming at the characteristics of dynamic targets, an improved automatic thresholding procedure is implemented by multiplying the OTSU threshold of target and template by a scale coefficient of the processed image and combining the result with mathematical morphology. The optimal threshold can then be obtained automatically by this improved processing for dynamic low-contrast target images. The recognition rate of dynamic targets is improved through decreased background noise and increased correlation information. A series of dynamic tank images moving at a speed of about 70 km/h is adopted as target images. Without any processing, the first frame of this series can correlate only with the third frame. Through the OTSU threshold, the 80th frame can be recognized; by the improved automatic threshold processing of the joint images, this number can be increased to 89 frames. Experimental results show that the improved automatic threshold processing has special application value for the recognition of dynamic targets with low contrast.
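
    A minimal sketch of the described thresholding idea, assuming OpenCV and an arbitrary scale coefficient of 0.9 (the paper derives its coefficient from the processed image): Otsu's threshold is computed, scaled, reapplied, and the result is cleaned with morphological opening.

```python
import cv2

def adaptive_otsu_binarize(gray, scale=0.9, kernel_size=3):
    """Illustrative version of the described idea: compute the Otsu threshold of
    an 8-bit grayscale image, multiply it by a scale coefficient, binarize with
    the scaled value, then clean up with morphological opening."""
    otsu_thr, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, binary = cv2.threshold(gray, scale * otsu_thr, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# hypothetical usage on one frame of the joint image (path is a placeholder)
# gray = cv2.imread("joint_image_frame.png", cv2.IMREAD_GRAYSCALE)
# binary = adaptive_otsu_binarize(gray, scale=0.9)
```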

  7. Combination of dynamic Bayesian network classifiers for the recognition of degraded characters

    NASA Astrophysics Data System (ADS)

    Likforman-Sulem, Laurence; Sigelle, Marc

    2009-01-01

    We investigate in this paper the combination of DBN (Dynamic Bayesian Network) classifiers, either independent or coupled, for the recognition of degraded characters. The independent classifiers are a vertical HMM and a horizontal HMM whose observable outputs are the image columns and the image rows respectively. The coupled classifiers, presented in a previous study, associate the vertical and horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers are then combined linearly at the decision level. We compare the different classifiers -independent, coupled or linearly combined- on two tasks: the recognition of artificially degraded handwritten digits and the recognition of real degraded old printed characters. Our results show that coupled DBNs perform better on degraded characters than the linear combination of independent HMM scores. Our results also show that the best classifier is obtained by linearly combining the scores of the best coupled DBN and the best independent HMM.

  8. Support Vector Machine-based classification of protein folds using the structural properties of amino acid residues and amino acid residue pairs.

    PubMed

    Shamim, Mohammad Tabrez Anwar; Anwaruddin, Mohammad; Nagarajaram, H A

    2007-12-15

    Fold recognition is a key step in the protein structure discovery process, especially when traditional sequence comparison methods fail to yield convincing structural homologies. Although many methods have been developed for protein fold recognition, their accuracies remain low. This can be attributed to insufficient exploitation of fold discriminatory features. We have developed a new method for protein fold recognition using structural information of amino acid residues and amino acid residue pairs. Since protein fold recognition can be treated as a protein fold classification problem, we have developed a Support Vector Machine (SVM) based classifier approach that uses secondary structural state and solvent accessibility state frequencies of amino acids and amino acid pairs as feature vectors. Among the individual properties examined secondary structural state frequencies of amino acids gave an overall accuracy of 65.2% for fold discrimination, which is better than the accuracy by any method reported so far in the literature. Combination of secondary structural state frequencies with solvent accessibility state frequencies of amino acids and amino acid pairs further improved the fold discrimination accuracy to more than 70%, which is approximately 8% higher than the best available method. In this study we have also tested, for the first time, an all-together multi-class method known as Crammer and Singer method for protein fold classification. Our studies reveal that the three multi-class classification methods, namely one versus all, one versus one and Crammer and Singer method, yield similar predictions. Dataset and stand-alone program are available upon request.

  9. Speaker Recognition by Combining MFCC and Phase Information in Noisy Conditions

    NASA Astrophysics Data System (ADS)

    Wang, Longbiao; Minami, Kazue; Yamamoto, Kazumasa; Nakagawa, Seiichi

    In this paper, we investigate the effectiveness of phase for speaker recognition in noisy conditions and combine the phase information with mel-frequency cepstral coefficients (MFCCs). To date, almost all speaker recognition methods are based on MFCCs, even in noisy conditions. For MFCCs, which dominantly capture vocal tract information, only the magnitude of the Fourier transform of time-domain speech frames is used, and phase information has been ignored. The phase information is expected to be highly complementary to MFCCs because it includes rich voice source information. Furthermore, some studies have reported that phase-based features are robust to noise. In our previous study, a phase information extraction method that normalizes the variation in the phase depending on the clipping position of the input speech was proposed, and the performance of the combination of the phase information and MFCCs was remarkably better than that of MFCCs alone. In this paper, we evaluate the robustness of the proposed phase information for speaker identification in noisy conditions. Spectral subtraction, a method of skipping frames with low energy/signal-to-noise (SN) ratio, and noisy-speech training models are used to analyze the effect of the phase information and MFCCs in noisy conditions. The NTT database and the JNAS (Japanese Newspaper Article Sentences) database with stationary/non-stationary noise added were used to evaluate our proposed method. MFCCs outperformed the phase information for clean speech. On the other hand, the degradation of the phase information was significantly smaller than that of MFCCs for noisy speech. The individual result of the phase information was even better than that of MFCCs in many cases with clean speech training models. By deleting unreliable frames (frames having low energy/SN), the speaker identification performance was improved significantly. By integrating the phase information with MFCCs, the speaker identification error reduction rate was about 30%-60% compared with the standard MFCC-based method.
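
    A rough sketch of frame-level feature combination in the spirit of the paper (not its exact phase normalization), assuming librosa: MFCCs and a low-frequency phase descriptor are taken from the same STFT grid and concatenated per frame. The frame parameters and the number of retained phase bins are assumptions.

```python
import numpy as np
import librosa

def mfcc_plus_phase(path, n_mfcc=13, n_fft=512, hop=160):
    """Illustrative frame-level combination (not the authors' exact features):
    concatenate MFCCs with a crude phase descriptor taken from the same STFT
    grid; the clipping-position normalization from the paper is not reproduced."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft, hop_length=hop)
    stft = librosa.stft(y, n_fft=n_fft, hop_length=hop)
    phase = np.angle(stft)[:20, :]                # keep a few low-frequency bins
    # encode phase as (cos, sin) so the 2*pi wrap-around does not hurt distances
    phase_feat = np.vstack([np.cos(phase), np.sin(phase)])
    return np.vstack([mfcc, phase_feat]).T        # one row per frame

# frames = mfcc_plus_phase("utterance.wav")       # path is a placeholder
```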

  10. Word Spotting and Recognition with Embedded Attributes.

    PubMed

    Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest

    2014-12-01

    This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.

  11. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    NASA Astrophysics Data System (ADS)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state of the art algorithms in tasks such as speech recognition or handwriting recognition. It has the ability to process long term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. Not only do we explore the low level combination (feature space combination) but we also explore high level combination (decoding combination) and mid-level (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low level combination works best, thanks to the powerful data modeling of the LSTM neurons.

  12. Predicting tool life in turning operations using neural networks and image processing

    NASA Astrophysics Data System (ADS)

    Mikołajczyk, T.; Nowicki, K.; Bustillo, A.; Yu Pimenov, D.

    2018-05-01

    A two-step method is presented for the automatic prediction of tool life in turning operations. First, experimental data are collected for three cutting edges under the same constant processing conditions. In these experiments, the parameter of tool wear, VB, is measured with conventional methods and the same parameter is estimated using Neural Wear, a customized software package that combines flank wear image recognition and Artificial Neural Networks (ANNs). Second, an ANN model of tool life is trained with the data collected from the first two cutting edges and the subsequent model is evaluated on two different subsets for the third cutting edge: the first subset is obtained from the direct measurement of tool wear and the second is obtained from the Neural Wear software that estimates tool wear using edge images. Although the complete-automated solution, Neural Wear software for tool wear recognition plus the ANN model of tool life prediction, presented a slightly higher error than the direct measurements, it was within the same range and can meet all industrial requirements. These results confirm that the combination of image recognition software and ANN modelling could potentially be developed into a useful industrial tool for low-cost estimation of tool life in turning operations.
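
    The second step is an ANN regression from cutting data to tool wear; the sketch below fits a small MLP to synthetic wear measurements and reads off the time at which an assumed wear criterion (VB = 0.3 mm) is reached. The data and network size are illustrative, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic flank-wear measurements standing in for the experimental data
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 60).reshape(-1, 1)                          # minutes of cutting
vb = 0.02 + 0.008 * t.ravel() + rng.normal(0, 0.005, t.shape[0])   # wear VB in mm

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(t, vb)

# estimate tool life as the time at which predicted VB crosses the criterion
t_grid = np.linspace(0, 60, 600).reshape(-1, 1)
vb_pred = model.predict(t_grid)
life = t_grid[np.argmax(vb_pred >= 0.3)] if (vb_pred >= 0.3).any() else None
```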

  13. Hierarchically Structured Non-Intrusive Sign Language Recognition. Chapter 2

    NASA Technical Reports Server (NTRS)

    Zieren, Jorg; Zieren, Jorg; Kraiss, Karl-Friedrich

    2007-01-01

    This work presents a hierarchically structured approach to the nonintrusive recognition of sign language from a monocular frontal view. Robustness is achieved through sophisticated localization and tracking methods, including a combined EM/CAMSHIFT overlap resolution procedure and the parallel pursuit of multiple hypotheses about hand position and movement. This allows handling of ambiguities and automatically corrects tracking errors. A biomechanical skeleton model and dynamic motion prediction using Kalman filters represent high-level knowledge. Classification is performed by Hidden Markov Models. 152 signs from German sign language were recognized with an accuracy of 97.6%.

  14. Author name recognition in degraded journal images

    NASA Astrophysics Data System (ADS)

    de Bodard de la Jacopière, Aliette; Likforman-Sulem, Laurence

    2006-01-01

    A method for extracting names in degraded documents is presented in this article. The documents targeted are images of photocopied scientific journals from various scientific domains. Due to the degradation, there is poor OCR recognition, and pieces of other articles appear on the sides of the image. The proposed approach relies on the combination of a low-level textual analysis and an image-based analysis. The textual analysis extracts robust typographic features, while the image analysis selects image regions of interest through anchor components. We report results on the University of Washington benchmark database.

  15. Automatic event recognition and anomaly detection with attribute grammar by learning scene semantics

    NASA Astrophysics Data System (ADS)

    Qi, Lin; Yao, Zhenyu; Li, Li; Dong, Junyu

    2007-11-01

    In this paper we present a novel framework for automatic event recognition and abnormal behavior detection with attribute grammar by learning scene semantics. This framework combines learning scene semantics by trajectory analysis and constructing attribute grammar-based event representation. The scene and event information is learned automatically. Abnormal behaviors that disobey scene semantics or event grammars rules are detected. By this method, an approach to understanding video scenes is achieved. Further more, with this prior knowledge, the accuracy of abnormal event detection is increased.

  16. Authentication of Whey Protein Powders by Portable Mid-Infrared Spectrometers Combined with Pattern Recognition Analysis.

    PubMed

    Wang, Ting; Tan, Siow Ying; Mutilangi, William; Aykas, Didem P; Rodriguez-Saona, Luis E

    2015-10-01

    The objective of this study was to develop a simple and rapid method to differentiate whey protein types (WPC, WPI, and WPH) used for beverage manufacturing by combining the spectral signature collected from portable mid-infrared spectrometers and pattern recognition analysis. Whey protein powders from different suppliers are produced using a large number of processing and compositional variables, resulting in variation in composition, concentration, protein structure, and thus functionality. Whey protein powders including whey protein isolates, whey protein concentrates and whey protein hydrolysates were obtained from different suppliers and their spectra collected using portable mid-infrared spectrometers (single and triple reflection) by pressing the powder onto an Attenuated Total Reflectance (ATR) diamond crystal with a pressure clamp. Spectra were analyzed by soft independent modeling of class analogy (SIMCA) generating a classification model showing the ability to differentiate whey protein types by forming tight clusters with interclass distance values of >3, considered to be significantly different from each other. The major bands centered at 1640 and 1580 cm(-1) were responsible for separation and were associated with differences in amide I and amide II vibrations of proteins, respectively. Another important band in whey protein clustering was associated with carboxylate vibrations of acidic amino acids (∼1570 cm(-1)). The use of a portable mid-IR spectrometer combined with pattern recognition analysis showed potential for discriminating whey protein ingredients that can help to streamline the analytical procedure so that it is more applicable for field-based screening of ingredients. A rapid, simple and accurate method was developed to authenticate commercial whey protein products by using portable mid-infrared spectrometers combined with chemometrics, which could help ensure the functionality of whey protein ingredients in food applications. © 2015 Institute of Food Technologists®

  17. Augmented reality system

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng

    2010-08-01

    In recent years, Augmented Reality (AR)[1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields, such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment, and the arts. AR enhances the display output of a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering, and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software, and imaging algorithms to make the augmented content feel real and present to users. The imaging algorithms include a gray-level method, an image binarization method, and a white balance method in order to achieve accurate image recognition and overcome the effects of lighting.

  18. Medical image segmentation by combining graph cuts and oriented active appearance models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua

    2012-04-01

    In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) The overall segmentation accuracy of true positive volume fraction TPVF > 94.3% and false positive volume fraction can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithm. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.

  19. Monkey’s short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices

    PubMed Central

    Fritz, Jonathan B.; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C.

    2016-01-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30–40 seconds to a duration of ~1–2 seconds, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. PMID:26707975

  20. Learning discriminative features from RGB-D images for gender and ethnicity identification

    NASA Astrophysics Data System (ADS)

    Azzakhnini, Safaa; Ballihi, Lahoucine; Aboutajdine, Driss

    2016-11-01

    The development of sophisticated sensor technologies gave rise to an interesting variety of data. With the appearance of affordable devices, such as the Microsoft Kinect, depth-maps and three-dimensional data became easily accessible. This attracted many computer vision researchers seeking to exploit this information in classification and recognition tasks. In this work, the problem of face classification in the context of RGB images and depth information (RGB-D images) is addressed. The purpose of this paper is to study and compare some popular techniques for gender recognition and ethnicity classification to understand how much depth data can improve the quality of recognition. Furthermore, we investigate which combination of face descriptors, feature selection methods, and learning techniques is best suited to better exploit RGB-D images. The experimental results show that depth data improve the recognition accuracy for gender and ethnicity classification applications in many use cases.

  1. Finger-Vein Image Enhancement Using a Fuzzy-Based Fusion Method with Gabor and Retinex Filtering

    PubMed Central

    Shin, Kwang Yong; Park, Young Ho; Nguyen, Dat Tien; Park, Kang Ryoung

    2014-01-01

    Because of the advantages of finger-vein recognition systems such as live detection and usage as bio-cryptography systems, they can be used to authenticate individual people. However, images of finger-vein patterns are typically unclear because of light scattering by the skin, optical blurring, and motion blurring, which can degrade the performance of finger-vein recognition systems. In response to these issues, a new enhancement method for finger-vein images is proposed. Our method is novel compared with previous approaches in four respects. First, the local and global features of the vein lines of an input image are amplified using Gabor filters in four directions and Retinex filtering, respectively. Second, the means and standard deviations in the local windows of the images produced after Gabor and Retinex filtering are used as inputs for the fuzzy rule and fuzzy membership function, respectively. Third, the optimal weights required to combine the two Gabor and Retinex filtered images are determined using a defuzzification method. Fourth, the use of a fuzzy-based method means that image enhancement does not require additional training data to determine the optimal weights. Experimental results using two finger-vein databases showed that the proposed method enhanced the accuracy of finger-vein recognition compared with previous methods. PMID:24549251
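
    A simplified sketch of the two enhancement branches, assuming OpenCV: a directional Gabor response and a single-scale Retinex output are fused with a fixed weight. The paper instead derives the weights per pixel from fuzzy rules and membership functions, so the constant weight here is only an assumption for illustration.

```python
import cv2
import numpy as np

def gabor_response(gray, directions=4, ksize=21, sigma=4.0, lambd=10.0):
    """Max response over Gabor kernels in several orientations (local vein lines)."""
    responses = []
    for k in range(directions):
        theta = k * np.pi / directions
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5)
        responses.append(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern))
    return np.max(responses, axis=0)

def single_scale_retinex(gray, sigma=30.0):
    """Global illumination normalization: log(image) - log(smoothed image)."""
    img = gray.astype(np.float32) + 1.0
    blur = cv2.GaussianBlur(img, (0, 0), sigma)
    return np.log(img) - np.log(blur)

def fuse(gray, w_gabor=0.5):
    """Fixed-weight fusion of the two enhanced images (the paper uses fuzzy,
    per-pixel weights; a constant weight is used here only for illustration)."""
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return w_gabor * norm(gabor_response(gray)) + (1 - w_gabor) * norm(single_scale_retinex(gray))

# enhanced = fuse(cv2.imread("finger_vein.png", cv2.IMREAD_GRAYSCALE))  # placeholder path
```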

  2. GeneSilico protein structure prediction meta-server.

    PubMed

    Kurowski, Michal A; Bujnicki, Janusz M

    2003-07-01

    Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta.

  3. GeneSilico protein structure prediction meta-server

    PubMed Central

    Kurowski, Michal A.; Bujnicki, Janusz M.

    2003-01-01

    Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta. PMID:12824313

  4. Change detection and classification of land cover in multispectral satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.

    Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.

  5. Change detection and classification of land cover in multispectral satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries

    DOE PAGES

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...

    2014-10-01

    Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.

  6. A DFT-Based Method of Feature Extraction for Palmprint Recognition

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Karungaru, Stephen G.; Tsuge, Satoru; Fukumi, Minoru

    Over the last quarter century, research in biometric systems has developed at a breathtaking pace and what started with the focus on the fingerprint has now expanded to include face, voice, iris, and behavioral characteristics such as gait. Palmprint is one of the most recent additions, and is currently the subject of great research interest due to its inherent uniqueness, stability, user-friendliness and ease of acquisition. This paper describes an effective and procedurally simple method of palmprint feature extraction specifically for palmprint recognition, although verification experiments are also conducted. This method takes advantage of the correspondences that exist between prominent palmprint features or objects in the spatial domain with those in the frequency or Fourier domain. Multi-dimensional feature vectors are formed by extracting a GA-optimized set of points from the 2-D Fourier spectrum of the palmprint images. The feature vectors are then used for palmprint recognition, before and after dimensionality reduction via the Karhunen-Loeve Transform (KLT). Experiments performed using palmprint images from the ‘PolyU Palmprint Database’ indicate that using a compact set of DFT coefficients, combined with KLT and data preprocessing, produces a recognition accuracy of more than 98% and can provide a fast and effective technique for personal identification.
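
    As a rough illustration of the pipeline described above, the following minimal Python sketch extracts a block of low-frequency 2-D DFT magnitudes from a palmprint image and reduces the resulting vectors with PCA, which plays the role of the KLT; the fixed low-frequency grid is a simplifying assumption standing in for the GA-optimized point selection, and the palms data are hypothetical.

        import numpy as np
        from sklearn.decomposition import PCA

        def dft_features(img, k=16):
            # Low-frequency k x k block of the centred 2-D DFT log-magnitude spectrum.
            spectrum = np.fft.fftshift(np.fft.fft2(img))
            mag = np.log1p(np.abs(spectrum))
            cy, cx = np.array(mag.shape) // 2
            block = mag[cy - k // 2:cy + k // 2, cx - k // 2:cx + k // 2]
            return block.ravel()

        def build_subspace(palms, n_components=40):
            # PCA on the DFT feature vectors acts as the Karhunen-Loeve Transform.
            X = np.vstack([dft_features(p) for p in palms])
            klt = PCA(n_components=n_components)
            return klt, klt.fit_transform(X)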

  7. Adding Recognition Discriminability Index to the Delayed Recall Is Useful to Predict Conversion from Mild Cognitive Impairment to Alzheimer's Disease in the Alzheimer's Disease Neuroimaging Initiative.

    PubMed

    Russo, María J; Campos, Jorge; Vázquez, Silvia; Sevlever, Gustavo; Allegri, Ricardo F

    2017-01-01

    Background: Ongoing research is focusing on the identification of those individuals with mild cognitive impairment (MCI) who are most likely to convert to Alzheimer's disease (AD). We investigated whether recognition memory tasks in combination with a delayed recall measure of episodic memory and CSF biomarkers can predict MCI to AD conversion at 24-month follow-up. Methods: A total of 397 amnestic-MCI subjects from the Alzheimer's Disease Neuroimaging Initiative were included. Logistic regression modeling was done to assess the predictive value of all RAVLT measures, risk factors such as age, sex, education, APOE genotype, and CSF biomarkers for progression to AD. Estimating adjusted odds ratios was used to determine which variables would produce an optimal predictive model, and whether adding tests of interaction between the RAVLT Delayed Recall and recognition measures (traditional score and d-prime) would improve prediction of the conversion from a-MCI to AD. Results: 112 (28.2%) subjects developed dementia and 285 (71.8%) subjects did not. Of all the included variables, CSF Aβ1-42 levels, RAVLT Delayed Recall, and the combination of RAVLT Delayed Recall and d-prime were predictive of progression to AD (χ2 = 38.23, df = 14, p < 0.001). Conclusions: The combination of RAVLT Delayed Recall and d-prime measures may be a predictor of conversion from MCI to AD in the ADNI cohort, especially in combination with amyloid biomarkers. A predictive model to help identify individuals at-risk for dementia should include not only traditional episodic memory measures (delayed recall or recognition), but also additional variables (d-prime) that allow the homogenization of the assessment procedures in the diagnosis of MCI.
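
    For readers unfamiliar with the recognition discriminability index, the following short Python sketch computes d-prime from recognition hit and false-alarm counts in the standard way (z-transformed rates with a simple correction to avoid infinite values); the counts shown are purely illustrative, not ADNI data.

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # d' = z(hit rate) - z(false-alarm rate), with rates kept away from 0 and 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # e.g. 13 of 15 targets recognized, 3 of 15 foils falsely endorsed
        print(d_prime(13, 2, 3, 12))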

  8. Adding Recognition Discriminability Index to the Delayed Recall Is Useful to Predict Conversion from Mild Cognitive Impairment to Alzheimer's Disease in the Alzheimer's Disease Neuroimaging Initiative

    PubMed Central

    Russo, María J.; Campos, Jorge; Vázquez, Silvia; Sevlever, Gustavo; Allegri, Ricardo F.; Weiner, Michael W.

    2017-01-01

    Background: Ongoing research is focusing on the identification of those individuals with mild cognitive impairment (MCI) who are most likely to convert to Alzheimer's disease (AD). We investigated whether recognition memory tasks in combination with a delayed recall measure of episodic memory and CSF biomarkers can predict MCI to AD conversion at 24-month follow-up. Methods: A total of 397 amnestic-MCI subjects from the Alzheimer's Disease Neuroimaging Initiative were included. Logistic regression modeling was done to assess the predictive value of all RAVLT measures, risk factors such as age, sex, education, APOE genotype, and CSF biomarkers for progression to AD. Estimating adjusted odds ratios was used to determine which variables would produce an optimal predictive model, and whether adding tests of interaction between the RAVLT Delayed Recall and recognition measures (traditional score and d-prime) would improve prediction of the conversion from a-MCI to AD. Results: 112 (28.2%) subjects developed dementia and 285 (71.8%) subjects did not. Of all the included variables, CSF Aβ1-42 levels, RAVLT Delayed Recall, and the combination of RAVLT Delayed Recall and d-prime were predictive of progression to AD (χ2 = 38.23, df = 14, p < 0.001). Conclusions: The combination of RAVLT Delayed Recall and d-prime measures may be a predictor of conversion from MCI to AD in the ADNI cohort, especially in combination with amyloid biomarkers. A predictive model to help identify individuals at-risk for dementia should include not only traditional episodic memory measures (delayed recall or recognition), but also additional variables (d-prime) that allow the homogenization of the assessment procedures in the diagnosis of MCI. PMID:28344552

  9. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
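
    A minimal Python sketch of the idea, under the assumption that face images are flattened vectors with values in [0, 1]: virtual samples are formed by element-wise multiplication of pairs of training images from the same subject, and a test sample is classified by representing it as a linear combination of its K nearest training samples and picking the class with the smallest reconstruction residual. The exact weighting used in the paper may differ.

        import numpy as np

        def virtual_samples(X, y):
            # Element-wise products of same-subject training images become virtual samples.
            Xv, yv = [], []
            for c in np.unique(y):
                idx = np.where(y == c)[0]
                for i in range(len(idx)):
                    for j in range(i + 1, len(idx)):
                        Xv.append(X[idx[i]] * X[idx[j]])
                        yv.append(c)
            return np.vstack([X] + Xv), np.concatenate([y, np.array(yv)])

        def classify(x, X, y, K=30):
            # Represent the test sample by its K nearest training samples (least squares)
            # and assign the class whose samples explain it best.
            near = np.argsort(np.linalg.norm(X - x, axis=1))[:K]
            A, labels = X[near].T, y[near]
            coef, *_ = np.linalg.lstsq(A, x, rcond=None)
            residuals = {c: np.linalg.norm(x - A[:, labels == c] @ coef[labels == c])
                         for c in np.unique(labels)}
            return min(residuals, key=residuals.get)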

  10. Scene text recognition in mobile applications by character descriptor and structure configuration.

    PubMed

    Yi, Chucai; Tian, Yingli

    2014-07-01

    Text characters and strings in natural scene can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.

  11. Learning Human Actions by Combining Global Dynamics and Local Appearance.

    PubMed

    Luo, Guan; Yang, Shuang; Tian, Guodong; Yuan, Chunfeng; Hu, Weiming; Maybank, Stephen J

    2014-12-01

    In this paper, we address the problem of human action recognition through combining global temporal dynamics and local visual spatio-temporal appearance features. For this purpose, in the global temporal dimension, we propose to model the motion dynamics with robust linear dynamical systems (LDSs) and use the model parameters as motion descriptors. Since LDSs live in a non-Euclidean space and the descriptors are in non-vector form, we propose a shift invariant subspace angles based distance to measure the similarity between LDSs. In the local visual dimension, we construct curved spatio-temporal cuboids along the trajectories of densely sampled feature points and describe them using histograms of oriented gradients (HOG). The distance between motion sequences is computed with the Chi-Squared histogram distance in the bag-of-words framework. Finally we perform classification using the maximum margin distance learning method by combining the global dynamic distances and the local visual distances. We evaluate our approach for action recognition on five short clips data sets, namely Weizmann, KTH, UCF sports, Hollywood2 and UCF50, as well as three long continuous data sets, namely VIRAT, ADL and CRIM13. We show competitive results as compared with current state-of-the-art methods.
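
    Only the local-appearance part of the distance is easy to show compactly; the following Python sketch computes the Chi-Squared distance between two bag-of-words HOG histograms (the LDS subspace-angle distance for the global dynamics is omitted here).

        import numpy as np

        def chi2_distance(h1, h2, eps=1e-10):
            # Chi-squared distance between two L1-normalized bag-of-words histograms.
            h1 = h1 / (h1.sum() + eps)
            h2 = h2 / (h2.sum() + eps)
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))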

  12. An Analysis of Intrinsic and Extrinsic Hand Muscle EMG for Improved Pattern Recognition Control.

    PubMed

    Adewuyi, Adenike A; Hargrove, Levi J; Kuiken, Todd A

    2016-04-01

    Pattern recognition control combined with surface electromyography (EMG) from the extrinsic hand muscles has shown great promise for control of multiple prosthetic functions for transradial amputees. There is, however, a need to adapt this control method when implemented for partial-hand amputees, who possess both a functional wrist and information-rich residual intrinsic hand muscles. We demonstrate that combining EMG data from both intrinsic and extrinsic hand muscles to classify hand grasps and finger motions allows up to 19 classes of hand grasps and individual finger motions to be decoded, with an accuracy of 96% for non-amputees and 85% for partial-hand amputees. We evaluated real-time pattern recognition control of three hand motions in seven different wrist positions. We found that a system trained with both intrinsic and extrinsic muscle EMG data, collected while statically and dynamically varying wrist position increased completion rates from 73% to 96% for partial-hand amputees and from 88% to 100% for non-amputees when compared to a system trained with only extrinsic muscle EMG data collected in a neutral wrist position. Our study shows that incorporating intrinsic muscle EMG data and wrist motion can significantly improve the robustness of pattern recognition control for application to partial-hand prosthetic control.

  13. An Analysis of Intrinsic and Extrinsic Hand Muscle EMG for Improved Pattern Recognition Control

    PubMed Central

    Adewuyi, Adenike A.; Hargrove, Levi J.; Kuiken, Todd A.

    2015-01-01

    Pattern recognition control combined with surface electromyography (EMG) from the extrinsic hand muscles has shown great promise for control of multiple prosthetic functions for transradial amputees. There is, however, a need to adapt this control method when implemented for partial-hand amputees, who possess both a functional wrist and information-rich residual intrinsic hand muscles. We demonstrate that combining EMG data from both intrinsic and extrinsic hand muscles to classify hand grasps and finger motions allows up to 19 classes of hand grasps and individual finger motions to be decoded, with an accuracy of 96% for non-amputees and 85% for partial-hand amputees. We evaluated real-time pattern recognition control of three hand motions in seven different wrist positions. We found that a system trained with both intrinsic and extrinsic muscle EMG data, collected while statically and dynamically varying wrist position increased completion rates from 73% to 96% for partial-hand amputees and from 88% to 100% for non-amputees when compared to a system trained with only extrinsic muscle EMG data collected in a neutral wrist position. Our study shows that incorporating intrinsic muscle EMG data and wrist motion can significantly improve the robustness of pattern recognition control for partial-hand applications. PMID:25955989

  14. Robust recognition of loud and Lombard speech in the fighter cockpit environment

    NASA Astrophysics Data System (ADS)

    Stanton, Bill J., Jr.

    1988-08-01

    There are a number of challenges associated with incorporating speech recognition technology into the fighter cockpit. One of the major problems is the wide range of variability in the pilot's voice, which can result from changing levels of stress and workload. Increasing the training set to include abnormal speech is not an attractive option because of the innumerable conditions that would have to be represented and the inordinate amount of time it would take to collect such a training set. A more promising approach is to study subsets of abnormal speech that have been produced under controlled cockpit conditions with the purpose of characterizing reliable shifts that occur relative to normal speech. That was the aim of this research. Analyses were conducted for 18 features on 17671 phoneme tokens across eight speakers for normal, loud, and Lombard speech. It was discovered that there was a consistent migration of energy in the sonorants. This discovery of reliable energy shifts led to the development of a method to reduce or eliminate these shifts in the Euclidean distances between LPC log magnitude spectra. This combination significantly improved recognition performance for loud and Lombard speech. Discrepancies in recognition error rates between normal and abnormal speech were reduced by approximately 50 percent for all eight speakers combined.

  15. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limit of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are enrolled to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.

  16. Differentiation of tea varieties using UV-Vis spectra and pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Palacios-Morillo, Ana; Alcázar, Ángela.; de Pablos, Fernando; Jurado, José Marcos

    2013-02-01

    Tea, one of the most consumed beverages all over the world, is of great importance in the economies of a number of countries. Several methods have been developed to classify tea varieties or origins based in pattern recognition techniques applied to chemical data, such as metal profile, amino acids, catechins and volatile compounds. Some of these analytical methods become tedious and expensive to be applied in routine works. The use of UV-Vis spectral data as discriminant variables, highly influenced by the chemical composition, can be an alternative to these methods. UV-Vis spectra of methanol-water extracts of tea have been obtained in the interval 250-800 nm. Absorbances have been used as input variables. Principal component analysis was used to reduce the number of variables and several pattern recognition methods, such as linear discriminant analysis, support vector machines and artificial neural networks, have been applied in order to differentiate the most common tea varieties. A successful classification model was built by combining principal component analysis and multilayer perceptron artificial neural networks, allowing the differentiation between tea varieties. This rapid and simple methodology can be applied to solve classification problems in food industry saving economic resources.
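
    A minimal scikit-learn sketch of the reported PCA plus multilayer perceptron pipeline; the number of principal components, the hidden layer size, and the variable names X (absorbance spectra) and y (variety labels) are illustrative assumptions rather than the authors' settings.

        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier
        from sklearn.model_selection import cross_val_score

        def build_model(n_components=10):
            # Standardize absorbances, project onto principal components, classify with an MLP.
            return make_pipeline(StandardScaler(),
                                 PCA(n_components=n_components),
                                 MLPClassifier(hidden_layer_sizes=(15,), max_iter=2000))

        # scores = cross_val_score(build_model(), X, y, cv=5)  # with real spectra loaded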

  17. Hierarchical Shared Control of Cane-Type Walking-Aid Robot

    PubMed Central

    Tao, Chunjing

    2017-01-01

    A hierarchical shared-control method of the walking-aid robot for both human motion intention recognition and obstacle emergency avoidance based on an artificial potential field (APF) is proposed in this paper. The human motion intention is obtained from the interaction force measurements of the sensory system, which is composed of four force-sensing resistors (FSRs) and a torque sensor. Meanwhile, a forward-facing laser range finder (LRF) is applied to detect obstacles and guide the operator based on the repulsive force calculated by the artificial potential field. An obstacle emergency-avoidance method comprising different control strategies is also adopted according to the different states of the obstacles or emergency cases. To ensure the user's safety, the hierarchical shared-control method combines the intention recognition method with the obstacle emergency-avoidance method based on the distance between the walking-aid robot and the obstacles. Finally, experiments validate the effectiveness of the proposed hierarchical shared-control method. PMID:29093805
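
    As a sketch of the repulsion used for obstacle avoidance, the following Python function implements the classic artificial potential field repulsive force, which is active only within an influence distance of the obstacle; the gain and influence distance are illustrative, and the paper's exact formulation may differ.

        import numpy as np

        def repulsive_force(robot_xy, obstacle_xy, d0=1.0, eta=0.5):
            # Classic APF repulsion: zero beyond the influence distance d0,
            # growing rapidly as the robot approaches the obstacle.
            diff = np.asarray(robot_xy, float) - np.asarray(obstacle_xy, float)
            d = np.linalg.norm(diff)
            if d >= d0 or d == 0.0:
                return np.zeros(2)
            return eta * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)

        print(repulsive_force([0.3, 0.0], [0.0, 0.0]))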

  18. Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an idea of improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
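
    The phase-based part of the matcher can be sketched in a few lines of Python: the Phase-Only Correlation surface of two registered fingerprint images is obtained by normalizing the cross-spectrum to unit magnitude before the inverse transform, and the height of its peak serves as a match score. The rotation and displacement handling used in practice is omitted here.

        import numpy as np

        def phase_only_correlation(f, g, eps=1e-12):
            # POC surface of two equally sized images; a sharp peak indicates a match.
            F, G = np.fft.fft2(f), np.fft.fft2(g)
            cross = F * np.conj(G)
            poc = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
            return np.fft.fftshift(poc)

        # score = phase_only_correlation(img_a, img_b).max()  # with images loaded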

  19. [Multi-Target Recognition of Internal and External Defects of Potato by Semi-Transmission Hyperspectral Imaging and Manifold Learning Algorithm].

    PubMed

    Huang, Tao; Li, Xiao-yu; Jin, Rui; Ku, Jing; Xu, Sen-miao; Xu, Meng-ling; Wu, Zhen-zhong; Kong, De-guo

    2015-04-01

    The present paper puts forward a non-destructive detection method which combines semi-transmission hyperspectral imaging technology with a manifold learning dimension reduction algorithm and least squares support vector machine (LSSVM) to recognize internal and external defects in potatoes simultaneously. Three hundred fifteen potatoes bought at a farmers market were used as the research objects, and a semi-transmission hyperspectral image acquisition system was constructed to acquire the hyperspectral images of normal potatoes and of potatoes with external defects (bud and green rind) and an internal defect (hollow heart). In order to conform to actual production conditions, the defective part is randomly oriented toward, to the side of, or away from the acquisition probe when the hyperspectral images of external-defect potatoes are acquired. The average spectra (390-1,040 nm) were extracted from the regions of interest for spectral preprocessing. Then three kinds of manifold learning algorithms were respectively utilized to reduce the dimension of the spectral data, including supervised locally linear embedding (SLLE), locally linear embedding (LLE) and isometric mapping (ISOMAP); the low-dimensional data obtained by the manifold learning algorithms were used as model input, and Error Correcting Output Code (ECOC) and LSSVM were combined to develop the multi-target classification model. By comparing and analyzing the results of the three models, we concluded that SLLE is the optimal manifold learning dimension reduction algorithm and that the SLLE-LSSVM model gives the best recognition rate for recognizing potatoes with internal and external defects. For the test set data, the single recognition rates for normal, bud, green rind and hollow heart potatoes reached 96.83%, 86.96%, 86.96% and 95% respectively, and the hybrid recognition rate was 93.02%. The results indicate that combining semi-transmission hyperspectral imaging technology with SLLE-LSSVM is a feasible qualitative analytical method which can simultaneously recognize potatoes with internal and external defects and also provides a technical reference for rapid on-line non-destructive detection of such defects.

  20. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, J.F.; Ng, L.C.

    1998-03-17

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  1. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOEpatents

    Holzrichter, John F.; Ng, Lawrence C.

    1998-01-01

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include the forming of a feature vector for each pitch period of voiced speech and the forming of feature vectors for each time frame of unvoiced, as well as for combined voiced and unvoiced speech. The methods include how to deconvolve the speech excitation function from the acoustic speech output to describe the transfer function each time frame. The formation of feature vectors defining all acoustic speech units over well defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching.

  2. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  3. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    PubMed

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
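
    The template-based decoder rests on the longest common sub-sequence; a minimal Python version of that similarity measure is given below, with spike sequences represented simply as strings of neuron identities in firing order (an illustrative encoding, not the authors' data format).

        def lcs_length(a, b):
            # Dynamic-programming length of the longest common sub-sequence.
            dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            for i, x in enumerate(a, 1):
                for j, y in enumerate(b, 1):
                    dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
            return dp[-1][-1]

        def lcs_similarity(a, b):
            # Normalized similarity between a test sequence and a template sequence.
            return lcs_length(a, b) / max(len(a), len(b), 1)

        print(lcs_similarity("ABCBDAB", "BDCABA"))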

  4. Application of unsupervised pattern recognition approaches for exploration of rare earth elements in Se-Chahun iron ore, central Iran

    NASA Astrophysics Data System (ADS)

    Sarparandeh, Mohammadali; Hezarkhani, Ardeshir

    2017-12-01

    The use of efficient methods for data processing has always been of interest to researchers in the field of earth sciences. Pattern recognition techniques are appropriate methods for high-dimensional data such as geochemical data. Evaluation of the geochemical distribution of rare earth elements (REEs) requires the use of such methods. In particular, the multivariate nature of REE data makes them a good target for numerical analysis. The main subject of this paper is application of unsupervised pattern recognition approaches in evaluating geochemical distribution of REEs in the Kiruna type magnetite-apatite deposit of Se-Chahun. For this purpose, 42 bulk lithology samples were collected from the Se-Chahun iron ore deposit. In this study, 14 rare earth elements were measured with inductively coupled plasma mass spectrometry (ICP-MS). Pattern recognition makes it possible to evaluate the relations between the samples based on all these 14 features simultaneously. In addition to providing easy solutions, discovery of the hidden information and relations of data samples is the advantage of these methods. Therefore, four clustering methods (unsupervised pattern recognition) - including a modified basic sequential algorithmic scheme (MBSAS), hierarchical (agglomerative) clustering, k-means clustering and self-organizing map (SOM) - were applied and the results were evaluated using the silhouette criterion. Samples were clustered into four types. Finally, the results of this study were validated with geological facts and analysis results from, for example, scanning electron microscopy (SEM), X-ray diffraction (XRD), ICP-MS and optical mineralogy. The results of the k-means clustering and SOM methods have the best matches with reality, with experimental studies of samples and with field surveys. Since only the rare earth elements are used in this division, the good agreement of the results with the lithology is notable. It is concluded that the combination of the proposed methods and geological studies leads to finding some hidden information, and this approach gives the best results compared to using only one of them.
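
    The clustering-with-silhouette part of the workflow can be reproduced with standard tools; the Python sketch below runs k-means over a range of cluster counts on the 14-element REE matrix and keeps the solution with the best silhouette score. The log scaling and the range of cluster counts are assumptions, not the authors' exact settings.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score

        def cluster_ree(ree, k_range=range(2, 7), random_state=0):
            # ree: (n_samples, 14) matrix of rare earth element concentrations.
            X = StandardScaler().fit_transform(np.log1p(ree))
            best = None
            for k in k_range:
                labels = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(X)
                score = silhouette_score(X, labels)
                if best is None or score > best[0]:
                    best = (score, k, labels)
            return best  # (silhouette, number of clusters, labels)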

  5. Implicit Shape Models for Object Detection in 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Velizhev, A.; Shapovalov, R.; Schindler, K.

    2012-07-01

    We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape models (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor, more robust to point density variation and uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and obtain a significant improvement in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m2 of urban area in total.

  6. Electrochemical impedimetric sensor based on molecularly imprinted polymers/sol-gel chemistry for methidathion organophosphorous insecticide recognition.

    PubMed

    Bakas, Idriss; Hayat, Akhtar; Piletsky, Sergey; Piletska, Elena; Chehimi, Mohamed M; Noguer, Thierry; Rouillon, Régis

    2014-12-01

    We report here a novel method to detect methidathion organophosphorous insecticides. The sensing platform was architected by the combination of molecularly imprinted polymers and sol-gel technique on inexpensive, portable and disposable screen printed carbon electrodes. Electrochemical impedimetric detection technique was employed to perform the label free detection of the target analyte on the designed MIP/sol-gel integrated platform. The selection of the target specific monomer by electrochemical impedimetric methods was consistent with the results obtained by the computational modelling method. The prepared electrochemical MIP/sol-gel based sensor exhibited a high recognition capability toward methidathion, as well as a broad linear range and a low detection limit under the optimized conditions. Satisfactory results were also obtained for the methidathion determination in waste water samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Vector coding of wavelet-transformed images

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua

    1998-09-01

    Wavelet, as a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, we can obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the Human Visual System. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.

  8. Information Theory and Voting Based Consensus Clustering for Combining Multiple Clusterings of Chemical Structures.

    PubMed

    Saeed, Faisal; Salim, Naomie; Abdo, Ammar

    2013-07-01

    Many consensus clustering methods have been applied in different areas such as pattern recognition, machine learning, information theory and bioinformatics. However, few methods have been used for chemical compounds clustering. In this paper, an information theory and voting based algorithm (Adaptive Cumulative Voting-based Aggregation Algorithm A-CVAA) was examined for combining multiple clusterings of chemical structures. The effectiveness of clusterings was evaluated based on the ability of the clustering method to separate active from inactive molecules in each cluster, and the results were compared with Ward's method. The chemical dataset MDL Drug Data Report (MDDR) and the Maximum Unbiased Validation (MUV) dataset were used. Experiments suggest that the adaptive cumulative voting-based consensus method can improve the effectiveness of combining multiple clusterings of chemical structures. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his or her facial features, and the approach bears some resemblance to factor analysis, i.e., extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial features in both the spatial and frequency domains. From the experimental results, it is evident that this face recognition method yields a significant percentage improvement in recognition rate as well as better computational efficiency.
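
    A minimal Python sketch of the combination described above, assuming PyWavelets and scikit-learn are available: each face image is reduced to its low-frequency wavelet approximation sub-band, PCA is applied to those vectors, and a nearest-neighbour rule does the classification. The wavelet family, decomposition level, and classifier are illustrative choices rather than the authors' settings.

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        def wavelet_approximation(img, wavelet="haar", level=2):
            # Keep only the low-frequency approximation sub-band as the descriptor.
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            return coeffs[0].ravel()

        def train(faces, labels, n_components=50):
            # faces: iterable of equally sized grayscale face images (hypothetical data).
            X = np.vstack([wavelet_approximation(f) for f in faces])
            model = make_pipeline(PCA(n_components=n_components), KNeighborsClassifier(1))
            return model.fit(X, labels)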

  10. Exploring the feasibility of traditional image querying tasks for industrial radiographs

    NASA Astrophysics Data System (ADS)

    Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.

    2015-08-01

    Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.

  11. Recognition of Telugu characters using neural networks.

    PubMed

    Sukhaswami, M B; Seetharamulu, P; Pujari, A K

    1995-09-01

    The aim of the present work is to recognize printed and handwritten Telugu characters using artificial neural networks (ANNs). Earlier work on recognition of Telugu characters has been done using conventional pattern recognition techniques. We make an initial attempt here of using neural networks for recognition with the aim of improving upon earlier methods which do not perform effectively in the presence of noise and distortion in the characters. The Hopfield model of neural network working as an associative memory is chosen for recognition purposes initially. Due to limitation in the capacity of the Hopfield neural network, we propose a new scheme named here as the Multiple Neural Network Associative Memory (MNNAM). The limitation in storage capacity has been overcome by combining multiple neural networks which work in parallel. It is also demonstrated that the Hopfield network is suitable for recognizing noisy printed characters as well as handwritten characters written by different "hands" in a variety of styles. Detailed experiments have been carried out using several learning strategies and results are reported. It is shown here that satisfactory recognition is possible using the proposed strategy. A detailed preprocessing scheme of the Telugu characters from digitized documents is also described.
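
    To make the associative-memory idea concrete, here is a minimal Hopfield network in Python for bipolar (+1/-1) character patterns, trained with the Hebbian outer-product rule; the MNNAM scheme described above would run several such networks in parallel, which is not shown here.

        import numpy as np

        class Hopfield:
            # Minimal Hopfield associative memory for bipolar (+1/-1) patterns.
            def __init__(self, n):
                self.W = np.zeros((n, n))

            def store(self, patterns):
                for p in patterns:                 # Hebbian outer-product learning rule
                    self.W += np.outer(p, p)
                np.fill_diagonal(self.W, 0)

            def recall(self, x, steps=20):
                x = np.array(x, dtype=float)
                for _ in range(steps):             # synchronous updates for simplicity
                    x = np.sign(self.W @ x)
                    x[x == 0] = 1.0
                return x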

  12. Dynamic Assessment of Water Quality Based on a Variable Fuzzy Pattern Recognition Model

    PubMed Central

    Xu, Shiguo; Wang, Tianxiang; Hu, Suduan

    2015-01-01

    Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores a method which is in accordance with the water quality changes. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3 and is worse in August or September, owing to increasing water temperature and rainfall. Weights and methods are compared and random errors of the values of indicators are analyzed. It is concluded that the proposed method has the advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results. PMID:25689998
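
    The entropy weight part of the method is simple enough to sketch; the Python function below assigns larger weights to indicators whose values vary more across samples, following the usual entropy weight formulation (the example matrix is illustrative only, not data from the study).

        import numpy as np

        def entropy_weights(X, eps=1e-12):
            # X: (n_samples, n_indicators) matrix of positive indicator values.
            P = X / (X.sum(axis=0, keepdims=True) + eps)
            e = -(P * np.log(P + eps)).sum(axis=0) / np.log(len(X))
            w = 1.0 - e
            return w / w.sum()

        print(entropy_weights(np.array([[1.0, 8.0], [2.0, 8.5], [3.0, 9.0]])))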

  13. Forensic Facial Reconstruction: The Final Frontier.

    PubMed

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques of facial reconstruction, which vary from two-dimensional drawings to three-dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed, which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction, but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional methods of forensic medicine. Facial reconstruction allows visual identification by the individual's family and associates to become easy and more definite.

  14. Dynamic assessment of water quality based on a variable fuzzy pattern recognition model.

    PubMed

    Xu, Shiguo; Wang, Tianxiang; Hu, Suduan

    2015-02-16

    Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores a method which is in accordance with the water quality changes. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3 and is worse in August or September, owing to increasing water temperature and rainfall. Weights and methods are compared and random errors of the values of indicators are analyzed. It is concluded that the proposed method has the advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results.

  15. Capturing specific abilities as a window into human individuality: the example of face recognition.

    PubMed

    Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2012-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.

  16. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability by using the immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined together, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false rate.

  17. Novel real-time tumor-contouring method using deep learning to prevent mistracking in X-ray fluoroscopy.

    PubMed

    Terunuma, Toshiyuki; Tokui, Aoi; Sakae, Takeji

    2018-03-01

    Robustness to obstacles is the most important factor necessary to achieve accurate tumor tracking without fiducial markers. Some high-density structures, such as bone, are enhanced on X-ray fluoroscopic images, which cause tumor mistracking. Tumor tracking should be performed by controlling "importance recognition": the understanding that soft-tissue is an important tracking feature and bone structure is unimportant. We propose a new real-time tumor-contouring method that uses deep learning with importance recognition control. The novelty of the proposed method is the combination of the devised random overlay method and supervised deep learning to induce the recognition of structures in tumor contouring as important or unimportant. This method can be used for tumor contouring because it uses deep learning to perform image segmentation. Our results from a simulated fluoroscopy model showed accurate tracking of a low-visibility tumor with an error of approximately 1 mm, even if enhanced bone structure acted as an obstacle. A high similarity of approximately 0.95 on the Jaccard index was observed between the segmented and ground truth tumor regions. A short processing time of 25 ms was achieved. The results of this simulated fluoroscopy model support the feasibility of robust real-time tumor contouring with fluoroscopy. Further studies using clinical fluoroscopy are highly anticipated.

  18. Monkey's short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices.

    PubMed

    Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C

    2016-06-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  19. Face Recognition From One Example View.

    DTIC Science & Technology

    1995-09-01

    Proceedings, International Workshop on Automatic Face- and Gesture-Recognition, pages 248-253, Zurich, 1995. [32] Yael Moses, Shimon Ullman, and Shimon...recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991. [49] Shimon Ullman and Ronen Basri. Recognition by linear combinations of models

  20. Target recognition of log-polar ladar range images using moment invariants

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong

    2017-01-01

    The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on the recognition result, several comparative experiments based on simulated and real range images are carried out. Eventually, several important conclusions are drawn: (i) if combined moments are computed directly from log-polar range images, the translation, rotation and scaling invariance properties of the combined moments become invalid; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes of the field of view; (iii) as the object position changes from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we suggest that it is better to divide the field of view into a recognition area and a searching area in real applications.
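
    The resampling-plus-moments front end can be approximated with OpenCV as below; Hu moments are used here as a stand-in for the paper's combined moment invariants, and the resulting feature vectors would then be fed to a backpropagation neural network classifier. The centre, radius, and log scaling choices are assumptions for illustration.

        import cv2
        import numpy as np

        def log_polar_hu(range_img):
            # Resample the range image on a log-polar grid about the image centre,
            # then describe the result by its seven Hu moment invariants.
            h, w = range_img.shape
            lp = cv2.warpPolar(range_img.astype(np.float32), (w, h), (w / 2, h / 2),
                               min(h, w) / 2, cv2.WARP_POLAR_LOG)
            hu = cv2.HuMoments(cv2.moments(lp)).ravel()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # usual log scaling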

  1. Robust wafer identification recognition based on asterisk-shape filter and high-low score comparison method.

    PubMed

    Hsu, Wei-Chih; Yu, Tsan-Ying; Chen, Kuan-Liang

    2009-12-10

    Wafer identifications (wafer IDs) can be used to distinguish wafers from each other so that wafer processing can be traced easily. Wafer ID recognition is one of the problems of optical character recognition. The process used to recognize wafer IDs is similar to that used in recognizing car license-plate characters. However, due to some unique characteristics, such as the irregular spacing between characters and the discontinuous strokes of wafer IDs, directly applying the approaches used in car license-plate character recognition does not give good results. Wafer ID scratches are engraved by a laser scribe almost always along four fixed directions: horizontal, vertical, plus 45 degree, and minus 45 degree orientations. The closer to the center line of a wafer ID scratch, the higher the gray level will be. These and other characteristics increase the difficulty of recognizing the wafer ID. In this paper a wafer ID recognition scheme based on an asterisk-shape filter and a high-low score comparison method is proposed to cope with the serious influence of uneven luminance and make recognition more efficient. Our proposed approach consists of several processing stages. In particular, in the final recognition stage, a template-matching method combined with stroke analysis is used as the recognizing scheme. This is because wafer IDs are composed of Semiconductor Equipment and Materials International (SEMI) standard Arabic numbers and English alphabets, and thus the template ID images are easy to obtain. Furthermore, unlike approaches that require prior training, such as a support vector machine, which often needs a large number of training image samples, no prior training is required for our approach. The testing results show that our proposed scheme can efficiently and correctly segment out and recognize the wafer ID with high performance.

  2. The study of infrared target recognition at sea background based on visual attention computational model

    NASA Astrophysics Data System (ADS)

    Wang, Deng-wei; Zhang, Tian-xu; Shi, Wen-jun; Wei, Long-sheng; Wang, Xiao-ping; Ao, Guo-qing

    2009-07-01

    Infrared images against a sea background are notorious for their low signal-to-noise ratio; therefore, target recognition in infrared images with traditional methods is very difficult. In this paper, we present a novel target recognition method based on the integration of a visual attention computational model and a conventional approach (selective filtering and segmentation). The two distinct image processing techniques are combined in a manner that utilizes the strengths of both. The visual attention algorithm searches for salient regions automatically, represents them by a set of winner points, and at the same time marks the salient regions as circles centered at these winner points. This provides a priori knowledge for the filtering and segmentation process. Based on each winner point, we construct a rectangular region to facilitate the filtering and segmentation, and a labeling operation is then applied selectively as required. Making use of the labeled information, we obtain the positional information of the region of interest from the final segmentation result, label the centroid on the corresponding original image, and complete the localization of the target. The processing time depends not on the size of the image but on the salient regions, so the consumed time is greatly reduced. The method is used for the recognition of several kinds of real infrared images, and the experimental results reveal the effectiveness of the algorithm presented in this paper.

  3. SVM-Fold: a tool for discriminative multi-class protein fold and superfamily recognition

    PubMed Central

    Melvin, Iain; Ie, Eugene; Kuang, Rui; Weston, Jason; Stafford, William Noble; Leslie, Christina

    2007-01-01

    Background Predicting a protein's structural class from its amino acid sequence is a fundamental problem in computational biology. Much recent work has focused on developing new representations for protein sequences, called string kernels, for use with support vector machine (SVM) classifiers. However, while some of these approaches exhibit state-of-the-art performance at the binary protein classification problem, i.e. discriminating between a particular protein class and all other classes, few of these studies have addressed the real problem of multi-class superfamily or fold recognition. Moreover, there are only limited software tools and systems for SVM-based protein classification available to the bioinformatics community. Results We present a new multi-class SVM-based protein fold and superfamily recognition system and web server called SVM-Fold, which can be found at . Our system uses an efficient implementation of a state-of-the-art string kernel for sequence profiles, called the profile kernel, where the underlying feature representation is a histogram of inexact matching k-mer frequencies. We also employ a novel machine learning approach to solve the difficult multi-class problem of classifying a sequence of amino acids into one of many known protein structural classes. Binary one-vs-the-rest SVM classifiers that are trained to recognize individual structural classes yield prediction scores that are not comparable, so that standard "one-vs-all" classification fails to perform well. Moreover, SVMs for classes at different levels of the protein structural hierarchy may make useful predictions, but one-vs-all does not try to combine these multiple predictions. To deal with these problems, our method learns relative weights between one-vs-the-rest classifiers and encodes information about the protein structural hierarchy for multi-class prediction. In large-scale benchmark results based on the SCOP database, our code weighting approach significantly improves on the standard one-vs-all method for both the superfamily and fold prediction in the remote homology setting and on the fold recognition problem. Moreover, our code weight learning algorithm strongly outperforms nearest-neighbor methods based on PSI-BLAST in terms of prediction accuracy on every structure classification problem we consider. Conclusion By combining state-of-the-art SVM kernel methods with a novel multi-class algorithm, the SVM-Fold system delivers efficient and accurate protein fold and superfamily recognition. PMID:17570145

  4. "La Palabra Es Salud" (The Word Is Health): Combining Mixed Methods and CBPR to Understand the Comparative Effectiveness of Popular and Conventional Education

    ERIC Educational Resources Information Center

    Wiggins, Noelle; Hughes, Adele; Rodriguez, Adriana; Potter, Catherine; Rios-Campos, Teresa

    2014-01-01

    Increasing recognition of the role of social conditions in health has led to calls for methods that can be used to change social conditions. Popular education has demonstrated great promise as a methodology that can be used to address the underlying social and structural determinants of health. To date, most studies of popular education have used…

  5. A control method of the rotor re-levitation for different orbit responses during touchdowns in active magnetic bearings

    NASA Astrophysics Data System (ADS)

    Lyu, Mindong; Liu, Tao; Wang, Zixi; Yan, Shaoze; Jia, Xiaohong; Wang, Yuming

    2018-05-01

    Touchdown events can render an active magnetic bearing (AMB) inoperable and cause severe damage to the touchdown bearings (TDB). To address this, we present a novel re-levitation method consisting of two operations: orbit response recognition and rotor re-levitation. In the orbit response recognition operation, the three orbit responses (pendulum vibration, combined rub and bouncing, and full rub) are identified from the expectation of the radial displacement of the rotor and the expectation of the instantaneous frequency (IF) of the rotor motion over the sampling period. In the rotor re-levitation operation, a decentralized PID control algorithm is employed for pendulum vibration and for combined rub and bouncing; for full rub, the decentralized PID control algorithm is executed jointly with a whirl damping algorithm whose weighting factor is determined by the whirl frequency. The method has been demonstrated through simulation of an AMB model. The results reveal that the method actively suppresses the whirl motion and promptly re-levitates the rotor. Because only the PID control algorithm and simple signal-processing operations are employed, the method has a low computational cost, which makes it easy to realize in practical applications.
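
    A minimal sketch of the recognition operation above, assuming sampled rotor displacements along two radial directions during one sampling period; the two classification thresholds are illustrative placeholders, not the criteria used in the paper.

    ```python
    # Hedged sketch: classify the touchdown orbit response from the mean radial
    # displacement and the mean instantaneous whirl frequency.
    import numpy as np

    def classify_orbit(x, y, dt, disp_threshold, freq_threshold):
        """x, y: rotor displacements in one sampling period; dt: sample spacing (s)."""
        radial = np.hypot(x, y)
        mean_disp = radial.mean()               # expectation of radial displacement

        # Instantaneous frequency of the whirl from the complex orbit signal.
        phase = np.unwrap(np.angle(x + 1j * y))
        inst_freq = np.diff(phase) / (2.0 * np.pi * dt)
        mean_freq = np.abs(inst_freq).mean()    # expectation of the IF

        if mean_disp < disp_threshold:
            return "pendulum vibration"
        if mean_freq < freq_threshold:
            return "combined rub and bouncing"
        return "full rub"
    ```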

  6. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.

  7. A cloud shadow detection method combined with cloud height iteration and spectral analysis for Landsat 8 OLI data

    NASA Astrophysics Data System (ADS)

    Sun, Lin; Liu, Xinyan; Yang, Yikun; Chen, TingTing; Wang, Quan; Zhou, Xueying

    2018-04-01

    Although it is enhanced over prior Landsat instruments and can achieve very high cloud detection precision, Landsat 8 OLI still faces great challenges in the detection of cloud shadows. Geometry-based cloud shadow detection methods are considered the most effective and are being improved constantly. The Function of Mask (Fmask) method is one of the most representative geometry-based methods and has been used for cloud shadow detection with Landsat 8 OLI. However, the Fmask method estimates cloud height using fixed temperature rates, which are highly uncertain, and errors in the estimated cloud height can cause large-area cloud shadow detection errors. This article improves the geometry-based cloud shadow detection method for Landsat 8 OLI in two respects. (1) Cloud height no longer depends on the brightness temperature of the thermal infrared band; instead, a possible dynamic range from 200 m to 12,000 m is assumed. The cloud shadow is then not a specific location but a possible range, and further spectral analysis within this range determines the shadow location. This effectively avoids the shadow omissions caused by errors in cloud height estimation. (2) Object-based and pixel-based spectral analyses are combined to detect cloud shadows, which enables detection at both the object scale and the pixel scale. Based on an analysis of the spectral differences between cloud shadows and typical ground objects, the best cloud shadow detection bands of Landsat 8 OLI were determined. The combined use of spectral and shape information effectively improves the detection precision for shadows produced by thin clouds. Several cloud shadow detection experiments were carried out, and the results were verified against manual interpretation. The experiments indicate that the method identifies cloud shadows in different regions with a correct-detection rate exceeding 80%, with approximately 5% of areas wrongly identified and approximately 10% of cloud shadow areas missed. This accuracy is clearly higher than that of Fmask, whose correct-detection rate is below 60% and whose missed detection is approximately 40%.
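
    A minimal sketch of the height-iteration geometry in point (1), assuming a north-up image with 30 m pixels and solar zenith/azimuth angles from the scene metadata; the sign conventions and the 200 m height step are illustrative assumptions, and the spectral test over the resulting candidate range is omitted.

    ```python
    # Hedged sketch: per-height pixel offsets from a cloud pixel to its possible shadow.
    import numpy as np

    def candidate_shadow_offsets(sun_zenith_deg, sun_azimuth_deg, pixel_size=30.0,
                                 heights=np.arange(200.0, 12000.0 + 1.0, 200.0)):
        """Convention: rows grow southward, columns grow eastward,
        solar azimuth measured clockwise from north."""
        zen = np.radians(sun_zenith_deg)
        az_shadow = np.radians(sun_azimuth_deg + 180.0)   # shadow falls away from the sun
        dist = heights * np.tan(zen)                      # ground distance, metres
        d_east = dist * np.sin(az_shadow)
        d_north = dist * np.cos(az_shadow)
        drow = np.round(-d_north / pixel_size).astype(int)
        dcol = np.round(d_east / pixel_size).astype(int)
        return list(zip(drow, dcol))   # union of these offsets gives the search range
    ```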

  8. Signal recognition and parameter estimation of BPSK-LFM combined modulation

    NASA Astrophysics Data System (ADS)

    Long, Chao; Zhang, Lin; Liu, Yu

    2015-07-01

    Intra-pulse analysis plays an important role in electronic warfare. Intra-pulse feature extraction focuses on primary parameters such as instantaneous frequency, modulation type, and symbol rate. In this paper, automatic modulation recognition and feature extraction for combined BPSK-LFM modulation signals based on a decision-theoretic approach are studied. The simulation results show a good recognition effect and high estimation precision, and the system is easy to realize.
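
    A minimal sketch of instantaneous-phase-based parameter estimation for a BPSK-LFM pulse, assuming a real-valued signal sampled at rate fs; it illustrates the kind of features a decision-theoretic recognizer could use and is not the authors' exact procedure.

    ```python
    # Hedged sketch: chirp rate, start frequency and symbol rate from the phase.
    import numpy as np
    from scipy.signal import hilbert

    def bpsk_lfm_parameters(signal, fs):
        phase = np.unwrap(np.angle(hilbert(np.real(signal))))
        t = np.arange(len(signal)) / fs

        # LFM part: quadratic phase trend, phase ~ pi*k*t^2 + 2*pi*f0*t + c.
        coeffs = np.polyfit(t, phase, 2)
        chirp_rate = coeffs[0] / np.pi
        start_freq = coeffs[1] / (2.0 * np.pi)

        # BPSK part: after removing the chirp, residual phase jumps of about pi
        # mark code transitions; counting them gives a rough symbol rate.
        residual = phase - np.polyval(coeffs, t)
        jumps = np.abs(np.diff(residual)) > (np.pi / 2)
        symbol_rate = jumps.sum() / t[-1] if t[-1] > 0 else 0.0
        return chirp_rate, start_freq, symbol_rate
    ```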

  9. Characterization of the interaction of interleukin-8 with hyaluronan, chondroitin sulfate, dermatan sulfate and their sulfated derivatives by spectroscopy and molecular modeling.

    PubMed

    Pichert, Annelie; Samsonov, Sergey A; Theisgen, Stephan; Thomas, Lars; Baumann, Lars; Schiller, Jürgen; Beck-Sickinger, Annette G; Huster, Daniel; Pisabarro, M Teresa

    2012-01-01

    The interactions between glycosaminoglycans (GAGs), important components of the extracellular matrix, and proteins such as growth factors and chemokines play critical roles in cellular regulation processes. Therefore, the design of GAG derivatives for the development of innovative materials with bio-like properties, in terms of their interaction with regulatory proteins, is of great interest for tissue engineering and regenerative medicine. Previous work on the chemokine interleukin-8 (IL-8) has focused on its interaction with heparin and heparan sulfate, which regulate chemokine function. However, the extracellular matrix contains other GAGs, such as hyaluronic acid (HA), dermatan sulfate (DS) and chondroitin sulfate (CS), which have so far not been characterized in terms of their distinct molecular recognition properties towards IL-8 in relation to their length and sulfation patterns. NMR and molecular modeling have largely been the methods of choice to study the structural and recognition properties of GAGs and their protein complexes. Taken separately, however, these methods struggle to cope with the high degree of similarity and flexibility that GAGs exhibit. In this work, we combine fluorescence spectroscopy, NMR experiments, docking and molecular dynamics simulations to study the configurational and recognition properties of IL-8 towards a series of HA and CS derivatives and DS. We analyze the effects of GAG length and sulfation pattern on binding strength and specificity, and the influence of GAG binding on IL-8 dimer formation. Our results highlight the importance of combining experimental and theoretical approaches to obtain a better understanding of the molecular recognition properties of GAG-protein systems.

  10. Case study of 3D fingerprints applications

    PubMed Central

    Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui

    2017-01-01

    Human fingers are 3D objects, so more information is available if three-dimensional (3D) fingerprints can be acquired rather than two-dimensional (2D) fingerprints. This paper therefore first collects 3D finger point cloud data by the structured-light illumination method. Additional features from the 3D fingerprint images are then studied and extracted, and their applications are discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can easily be implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%, and it also helps to remove false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the potential of 3D fingerprint recognition. PMID:28399141

  11. Learning target masks in infrared linescan imagery

    NASA Astrophysics Data System (ADS)

    Fechner, Thomas; Rockinger, Oliver; Vogler, Axel; Knappe, Peter

    1997-04-01

    In this paper we propose a neural-network-based method for the automatic detection of ground targets in airborne infrared linescan imagery. Instead of using a dedicated feature extraction stage followed by a classification procedure, we propose the following three-step scheme. In the first step of the recognition process, the input image is decomposed into its pyramid representation, yielding a multiresolution signal representation. At the lowest three levels of the Laplacian pyramid, a neural network filter of moderate size is trained to indicate the target location. The last step is a fusion of the several neural network filters to obtain the final result. To perform this fusion we use a belief network, which combines the various filter outputs in a statistically meaningful way and also allows the integration of further knowledge about the image domain. By applying this multiresolution recognition scheme, we obtain nearly scale- and rotation-invariant target recognition with a significantly decreased false alarm rate compared with a single-resolution target recognition scheme.
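
    A minimal sketch of the multiresolution front end described above: a Laplacian pyramid built with a Gaussian kernel, whose lowest levels would each feed a small neural-network filter; the number of levels and the smoothing width are illustrative assumptions.

    ```python
    # Hedged sketch: Laplacian pyramid decomposition of a linescan image.
    import numpy as np
    from scipy import ndimage

    def laplacian_pyramid(image, levels=3, sigma=1.0):
        pyramid, current = [], image.astype(float)
        for _ in range(levels):
            blurred = ndimage.gaussian_filter(current, sigma)
            pyramid.append(current - blurred)   # band-pass (Laplacian) level
            current = blurred[::2, ::2]         # downsample for the next octave
        pyramid.append(current)                 # residual low-pass level
        return pyramid
    ```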

  12. Case study of 3D fingerprints applications.

    PubMed

    Liu, Feng; Liang, Jinrong; Shen, Linlin; Yang, Meng; Zhang, David; Lai, Zhihui

    2017-01-01

    Human fingers are 3D objects, so more information is available if three-dimensional (3D) fingerprints can be acquired rather than two-dimensional (2D) fingerprints. This paper therefore first collects 3D finger point cloud data by the structured-light illumination method. Additional features from the 3D fingerprint images are then studied and extracted, and their applications are discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can easily be implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%, and it also helps to remove false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the potential of 3D fingerprint recognition.

  13. On the Use of Evolutionary Algorithms to Improve the Robustness of Continuous Speech Recognition Systems in Adverse Conditions

    NASA Astrophysics Data System (ADS)

    Selouani, Sid-Ahmed; O'Shaughnessy, Douglas

    2003-12-01

    Limiting the decrease in performance due to acoustic environment changes remains a major challenge for continuous speech recognition (CSR) systems. We propose a novel approach which combines the Karhunen-Loève transform (KLT) in the mel-frequency domain with a genetic algorithm (GA) to enhance the data representing corrupted speech. The idea consists of projecting noisy speech parameters onto the space generated by the genetically optimized principal axes issued from the KLT. The enhanced parameters increase the recognition rate in highly interfering noise environments. The proposed hybrid technique, when included in the front-end of an HTK-based CSR system, outperforms the conventional recognition process in severe interfering car noise environments for a wide range of signal-to-noise ratios (SNRs) from 16 dB downward. We also show the effectiveness of the KLT-GA method in recognizing speech subject to telephone channel degradations.

  14. Progressive sparse representation-based classification using local discrete cosine transform evaluation for image recognition

    NASA Astrophysics Data System (ADS)

    Song, Xiaoning; Feng, Zhen-Hua; Hu, Guosheng; Yang, Xibei; Yang, Jingyu; Qi, Yunsong

    2015-09-01

    This paper proposes a progressive sparse representation-based classification algorithm using local discrete cosine transform (DCT) evaluation to perform face recognition. Specifically, the sum of the contributions of all training samples of each subject is first taken as the contribution of this subject, then the redundant subject with the smallest contribution to the test sample is iteratively eliminated. Second, the progressive method aims at representing the test sample as a linear combination of all the remaining training samples, by which the representation capability of each training sample is exploited to determine the optimal "nearest neighbors" for the test sample. Third, the transformed DCT evaluation is constructed to measure the similarity between the test sample and each local training sample using cosine distance metrics in the DCT domain. The final goal of the proposed method is to determine an optimal weighted sum of nearest neighbors that are obtained under the local correlative degree evaluation, which is approximately equal to the test sample, and we can use this weighted linear combination to perform robust classification. Experimental results conducted on the ORL database of faces (created by the Olivetti Research Laboratory in Cambridge), the FERET face database (managed by the Defense Advanced Research Projects Agency and the National Institute of Standards and Technology), AR face database (created by Aleix Martinez and Robert Benavente in the Computer Vision Center at U.A.B), and USPS handwritten digit database (gathered at the Center of Excellence in Document Analysis and Recognition at SUNY Buffalo) demonstrate the effectiveness of the proposed method.
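
    A minimal sketch of the DCT-domain similarity and weighted-combination steps described above, assuming equally sized grayscale training and test images; selecting the top-k scoring samples stands in for the full progressive subject-elimination procedure, and k is an illustrative parameter.

    ```python
    # Hedged sketch: DCT-domain cosine similarity and least-squares neighbour weights.
    import numpy as np
    from scipy.fft import dctn

    def dct_cosine_similarity(a, b):
        """Cosine similarity between two images measured on 2-D DCT coefficients."""
        u = dctn(a, norm="ortho").ravel()
        v = dctn(b, norm="ortho").ravel()
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    def weighted_nearest_neighbours(test_img, train_imgs, k=10):
        """Keep the k most DCT-similar training samples and solve for the weights of
        the linear combination that best approximates the test sample."""
        scores = np.array([dct_cosine_similarity(test_img, x) for x in train_imgs])
        top = np.argsort(scores)[-k:]
        A = np.stack([train_imgs[i].ravel() for i in top], axis=1)
        w, *_ = np.linalg.lstsq(A, test_img.ravel(), rcond=None)
        return top, w   # classify by the class whose samples dominate the combination
    ```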

  15. Feature engineering for drug name recognition in biomedical texts: feature conjunction and feature selection.

    PubMed

    Liu, Shengyu; Tang, Buzhou; Chen, Qingcai; Wang, Xiaolong; Fan, Xiaoming

    2015-01-01

    Drug name recognition (DNR) is a critical step for drug information extraction. Machine learning-based methods have been widely used for DNR with various types of features such as part-of-speech, word shape, and dictionary features. The features used in current machine learning-based methods are usually singleton features, probably because combining singleton features into conjunction features can cause an explosion in the number of features and introduce a large number of noisy features. However, singleton features, which capture only one linguistic characteristic of a word, are not sufficient to describe the information needed for DNR when multiple characteristics should be considered. In this study, we explore feature conjunction and feature selection for DNR, which have not previously been reported. We intuitively select 8 types of singleton features and combine them into conjunction features in two ways. Then, chi-square, mutual information, and information gain are used to mine effective features. Experimental results show that feature conjunction and feature selection can improve the performance of the DNR system with a moderate number of features, and our DNR system significantly outperforms the best system in the DDIExtraction 2013 challenge.
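
    A minimal sketch of feature conjunction followed by chi-square feature selection on toy token data; the feature names, labels, and the value of k are illustrative, whereas the paper combines eight singleton feature types inside a full DNR system.

    ```python
    # Hedged sketch: conjunction features plus a chi-square filter.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.feature_selection import SelectKBest, chi2

    def token_features(shape, pos):
        singleton = {"shape=" + shape: 1, "pos=" + pos: 1}
        conjunction = {"shape|pos=" + shape + "|" + pos: 1}   # two singletons combined
        return {**singleton, **conjunction}

    # Toy training data: one feature dict per token, with BIO-style labels.
    X_dicts = [token_features("Xxxxx", "NN"), token_features("xxxx", "VB"),
               token_features("XXXX-99", "NN"), token_features("xxxx", "IN")]
    y = ["B-DRUG", "O", "B-DRUG", "O"]

    vec = DictVectorizer()
    X = vec.fit_transform(X_dicts)
    selector = SelectKBest(chi2, k=4).fit(X, y)   # mine the most effective features
    kept = [name for name, keep in
            zip(vec.get_feature_names_out(), selector.get_support()) if keep]
    print(kept)
    ```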

  16. Speech coding, reconstruction and recognition using acoustics and electromagnetic waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holzrichter, J.F.; Ng, L.C.

    The use of EM radiation in conjunction with simultaneously recorded acoustic speech information enables a complete mathematical coding of acoustic speech. The methods include forming a feature vector for each pitch period of voiced speech and forming feature vectors for each time frame of unvoiced, as well as combined voiced and unvoiced, speech. The methods also describe how to deconvolve the speech excitation function from the acoustic speech output to obtain the transfer function for each time frame. The formation of feature vectors defining all acoustic speech units over well-defined time frames can be used for purposes of speech coding, speech compression, speaker identification, language-of-speech identification, speech recognition, speech synthesis, speech translation, speech telephony, and speech teaching. 35 figs.

  17. Application of lifting wavelet and random forest in compound fault diagnosis of gearbox

    NASA Astrophysics Data System (ADS)

    Chen, Tang; Cui, Yulian; Feng, Fuzhou; Wu, Chunzhi

    2018-03-01

    To address the weak compound-fault characteristic signals of an armored vehicle gearbox and the difficulty of identifying the fault types, a fault diagnosis method based on the lifting wavelet and the random forest is proposed. First, the method uses the lifting wavelet transform to decompose the original vibration signal over multiple layers and reconstructs the low-frequency and high-frequency components obtained from the decomposition to produce multiple component signals. Time-domain feature parameters are then computed for each component signal to form feature vectors, which are input into a random forest pattern recognition classifier to determine the compound fault type. Finally, the method is verified on several sets of compound fault data from a gearbox fault simulation test platform; the results show that the recognition accuracy of the fault diagnosis method combining the lifting wavelet and the random forest reaches 99.99%.
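
    A minimal sketch of the diagnosis pipeline described above, with the standard discrete wavelet transform (PyWavelets) standing in for the lifting wavelet transform; the wavelet, decomposition depth, and the particular time-domain features are illustrative assumptions.

    ```python
    # Hedged sketch: wavelet component signals -> time-domain features -> random forest.
    import numpy as np
    import pywt
    from scipy.stats import kurtosis
    from sklearn.ensemble import RandomForestClassifier

    def component_signals(signal, wavelet="db4", level=3):
        """Reconstruct one signal per decomposition band (approximation + details)."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        comps = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            comps.append(pywt.waverec(kept, wavelet)[: len(signal)])
        return comps

    def feature_vector(signal):
        feats = []
        for comp in component_signals(signal):
            rms = np.sqrt(np.mean(comp ** 2))
            feats += [rms, np.max(np.abs(comp)) / (rms + 1e-12), kurtosis(comp)]
        return feats

    # X_signals: list of vibration records; y: compound-fault labels (user supplied).
    # clf = RandomForestClassifier(n_estimators=200).fit(
    #     [feature_vector(s) for s in X_signals], y)
    ```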

  18. Recognition of rotated images using the multi-valued neuron and rotation-invariant 2D Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio

    2012-03-01

    The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images, and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, and therefore structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human face images rotated by a wide range of angles. Experiments yielded 100% and 96.43% classification rate for each data set, respectively. Recognition performance was additionally evaluated under effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.

  19. Multimodal fusion of polynomial classifiers for automatic person recognition

    NASA Astrophysics Data System (ADS)

    Broun, Charles C.; Zhang, Xiaozheng

    2001-03-01

    With the prevalence of the information age, privacy and personalization are at the forefront of today's society. As such, biometrics are viewed as essential components of evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current-generation speaker verification systems. The first is the difficulty of acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work for a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions as well as overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition and provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. A late integration approach, based on a probabilistic model, is employed to combine the two modalities. The system is tested on the XM2VTS database combined with AWGN in the audio domain over a range of signal-to-noise ratios.
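
    A minimal sketch of the late-integration step described above, assuming per-candidate log-likelihood scores are already available from the audio (speaker verification) and visual (lip geometry) classifiers; the fixed fusion weight is an illustrative simplification of the paper's probabilistic combination model.

    ```python
    # Hedged sketch: weighted late fusion of audio and visual log-likelihoods.
    import numpy as np

    def fuse_scores(audio_loglik, visual_loglik, audio_weight=0.6):
        """Each argument is an array of per-candidate log-likelihoods; returns the
        index of the accepted candidate and the fused scores."""
        fused = (audio_weight * np.asarray(audio_loglik)
                 + (1.0 - audio_weight) * np.asarray(visual_loglik))
        return int(np.argmax(fused)), fused
    ```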

  20. Practical automatic Arabic license plate recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Since the 1970s, the need for an automatic license plate recognition system, sometimes referred to as an Automatic License Plate Recognition system, has been increasing. A license plate recognition system is an automatic system that is able to recognize a license plate number extracted from image sensors. Specifically, Automatic License Plate Recognition systems are used in conjunction with various transportation systems in application areas such as law enforcement (e.g. speed limit enforcement) and commercial usages such as parking enforcement, automatic toll payment, private and public entrances, border control, and theft and vandalism control. Vehicle license plate recognition has been intensively studied in many countries. Due to the different types of license plates in use, the requirements of an automatic license plate recognition system differ for each country. Generally, an automatic license plate localization and recognition system is made up of three modules: license plate localization, character segmentation, and optical character recognition. This paper presents an Arabic license plate recognition system that is insensitive to character size, font, shape, and orientation and achieves an extremely high accuracy rate. The proposed system is based on a combination of enhancement, license plate localization, morphological processing, and feature vector extraction using the Haar transform. The system is fast because the classification of alphabet characters and numerals is based on the license plate organization. Experimental results for license plates of two different Arab countries show an average of 99% successful license plate localization and recognition on a total of more than 20 different images captured from a complex outdoor environment. The run time is lower than that of conventional and many state-of-the-art methods.

  1. Vibrational frequencies of anti-diabetic drug studied by terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Du, S. Q.; Li, H.; Xie, L.; Chen, L.; Peng, Y.; Zhu, Y. M.; Li, H.; Dong, P.; Wang, J. T.

    2012-04-01

    By using terahertz time-domain spectroscopy, the absorption spectra of seven anti-diabetic pills have been investigated. For gliquidone, glipizide, gliclazide, and glimepiride, an obvious resonance peak is found at 1.37 THz. Furthermore, to overcome the limitation that density functional theory can only analyze the normal-mode frequencies of the ground state of an organic material, we also present a method that relies on pharmacophore recognition, from which we find that the resonance peak at 1.37 THz can be attributed to the vibration of the sulfonylurea group. The results indicate that the accuracy of density functional theory can be increased by combining it with pharmacophore recognition.

  2. Recognition of upper airway and surrounding structures at MRI in pediatric PCOS and OSAS

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, J. K.; Odhner, D.; Sin, Sanghun; Arens, Raanan

    2013-03-01

    Obstructive Sleep Apnea Syndrome (OSAS) is common in obese children, with a 4.5-fold risk compared to normal control subjects. Polycystic Ovary Syndrome (PCOS) has recently been shown to be associated with OSAS, which may further lead to significant cardiovascular and neuro-cognitive deficits. We are investigating image-based biomarkers to understand the architectural and dynamic changes in the upper airway and the surrounding hard and soft tissue structures via MRI in obese teenage children to study OSAS. At previous SPIE conferences, we presented the methods underlying Fuzzy Object Models (FOMs) for Automatic Anatomy Recognition (AAR) based on CT images of the thorax and the abdomen. The purpose of this paper is to demonstrate that the AAR approach is applicable to a different body region and image modality combination, namely the study of upper airway structures via MRI. FOMs were built hierarchically, the smaller sub-objects forming the offspring of larger parent objects. FOMs encode the uncertainty and variability present in the form and relationships among the objects over a study population. In total, 11 basic objects (17 including composite objects) were modeled. Automatic recognition of the best pose of FOMs in a given image was implemented using four methods: a one-shot method that requires no search, and three search-based methods, namely Fisher linear discriminant (FLD), a b-scale energy optimization strategy, and an optimum-threshold recognition method. In all, 30 multi-fold cross-validation experiments based on 15 patient MRI data sets were carried out to assess recognition accuracy. The results indicate that the objects can be recognized with an average location error of less than 5 mm, or 2-3 voxels. The iterative relative fuzzy connectedness (IRFC) algorithm was then adopted for delineation of the target organs based on the recognition results. The delineation results showed an overall FP and TP volume fraction of 0.02 and 0.93.

  3. Face recognition system using multiple face model of hybrid Fourier feature under uncontrolled illumination variation.

    PubMed

    Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo

    2011-04-01

    The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapse.

  4. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: a randomised, double-blind, placebo-controlled study in cannabis users.

    PubMed

    Hindocha, Chandni; Freeman, Tom P; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K; Morgan, Celia J A; Curran, H Valerie

    2015-03-01

    Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg + 16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of being 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: A randomised, double-blind, placebo-controlled study in cannabis users

    PubMed Central

    Hindocha, Chandni; Freeman, Tom P.; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K.; Morgan, Celia J.A.; Curran, H. Valerie

    2015-01-01

    Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered, THC (8 mg), CBD (16 mg), THC+CBD (8 mg+16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling ‘stoned’ was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being ‘stoned’. CBD did not influence feelings of ‘stoned’. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces. PMID:25534187

  6. Capturing specific abilities as a window into human individuality: The example of face recognition

    PubMed Central

    Wilmer, Jeremy B.; Germine, Laura; Chabris, Christopher F.; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2013-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality. PMID:23428079

  7. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing

    PubMed Central

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-01-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047

  8. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    PubMed

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.

  9. A global optimization algorithm for protein surface alignment

    PubMed Central

    2010-01-01

    Background A relevant problem in drug design is the comparison and recognition of protein binding sites. Binding sites recognition is generally based on geometry often combined with physico-chemical properties of the site since the conformation, size and chemical composition of the protein surface are all relevant for the interaction with a specific ligand. Several matching strategies have been designed for the recognition of protein-ligand binding sites and of protein-protein interfaces but the problem cannot be considered solved. Results In this paper we propose a new method for local structural alignment of protein surfaces based on continuous global optimization techniques. Given the three-dimensional structures of two proteins, the method finds the isometric transformation (rotation plus translation) that best superimposes active regions of two structures. We draw our inspiration from the well-known Iterative Closest Point (ICP) method for three-dimensional (3D) shapes registration. Our main contribution is in the adoption of a controlled random search as a more efficient global optimization approach along with a new dissimilarity measure. The reported computational experience and comparison show viability of the proposed approach. Conclusions Our method performs well to detect similarity in binding sites when this in fact exists. In the future we plan to do a more comprehensive evaluation of the method by considering large datasets of non-redundant proteins and applying a clustering technique to the results of all comparisons to classify binding sites. PMID:20920230

  10. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work lies in finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. The detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are formed from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier, and the combinations with the highest recognition accuracy in each group are chosen. An average accuracy above 90% for the chosen combinations proves their ability to be used as command controllers.

  11. Online handwritten mathematical expression recognition

    NASA Astrophysics Data System (ADS)

    Büyükbayrak, Hakan; Yanikoglu, Berrin; Erçil, Aytül

    2007-01-01

    We describe a system for recognizing online, handwritten mathematical expressions. The system is designed with a user interface for writing scientific articles and supports the recognition of basic mathematical expressions as well as integrals, summations, matrices, etc. A feed-forward neural network recognizes symbols, which are assumed to be single-stroke, and a recursive algorithm parses the expression by combining the neural network output with the structure of the expression. Preliminary results show that writer-dependent recognition rates are very high (99.8%) while writer-independent symbol recognition rates are lower (75%). The interface associated with the proposed system integrates the built-in recognition capabilities of Microsoft's Tablet PC API for recognizing textual input and supports conversion of hand-drawn figures into PNG format. This enables the user to enter text, mathematics, and drawings in a single interface. After recognition, all output is combined into one LaTeX source and compiled into a PDF file.

  12. Computing multiple aggregation levels and contextual features for road facilities recognition using mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun

    2017-04-01

    Updating the inventory of road infrastructures through field work is labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still challenging because of complicated and incomplete structures, occlusions and varied point densities. Most existing methods utilize point- or object-based features to recognize object candidates and can only extract limited types of objects, with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework that combines multiple aggregation levels (point-segment-object) of features with contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and cars, for highway infrastructure inventory. The proposed method first separates ground and non-ground points and extracts road surface facilities from the ground points. Non-ground points are segmented into individual candidate objects by the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with that of single-level (point, segment, or object) features using large-scale highway scene point clouds. Comparative studies demonstrated that the proposed semantic labeling framework significantly improves road facility recognition precision (90.6%) and recall (91.2%), particularly for incomplete and small objects.

  13. Lateral entorhinal cortex is necessary for associative but not nonassociative recognition memory

    PubMed Central

    Wilson, David IG; Watanabe, Sakurako; Milner, Helen; Ainge, James A

    2013-01-01

    The lateral entorhinal cortex (LEC) provides one of the two major input pathways to the hippocampus and has been suggested to process the nonspatial contextual details of episodic memory. Combined with spatial information from the medial entorhinal cortex it is hypothesised that this contextual information is used to form an integrated spatially selective, context-specific response in the hippocampus that underlies episodic memory. Recently, we reported that the LEC is required for recognition of objects that have been experienced in a specific context (Wilson et al. (2013) Hippocampus 23:352-366). Here, we sought to extend this work to assess the role of the LEC in recognition of all associative combinations of objects, places and contexts within an episode. Unlike controls, rats with excitotoxic lesions of the LEC showed no evidence of recognizing familiar combinations of object in place, place in context, or object in place and context. However, LEC lesioned rats showed normal recognition of objects and places independently from each other (nonassociative recognition). Together with our previous findings, these data suggest that the LEC is critical for associative recognition memory and may bind together information relating to objects, places, and contexts needed for episodic memory formation. PMID:23836525

  14. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, i.e. a subset of informative and non-redundant Gabor features. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA); it solves the small-sample-size and ill-posed problems from which QDA and LDA suffer through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate the optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
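
    A minimal sketch of the RDA learner described above, assuming numeric feature vectors (e.g. Gabor features after entropy-based selection) with at least two samples per class; the lambda and gamma values, which the paper tunes with PSO, are plain inputs here.

    ```python
    # Hedged sketch: regularized discriminant analysis blending QDA and LDA.
    import numpy as np

    class RDA:
        def fit(self, X, y, lam=0.5, gamma=0.1):
            self.classes_ = np.unique(y)
            pooled = np.cov(X, rowvar=False)
            p = X.shape[1]
            self.means_, self.covs_, self.priors_ = [], [], []
            for c in self.classes_:
                Xc = X[y == c]
                Sk = np.cov(Xc, rowvar=False)
                S = (1 - lam) * Sk + lam * pooled                   # shrink towards pooled cov
                S = (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)  # towards identity
                self.means_.append(Xc.mean(axis=0))
                self.covs_.append(S)
                self.priors_.append(len(Xc) / len(X))
            return self

        def predict(self, X):
            scores = []
            for m, S, prior in zip(self.means_, self.covs_, self.priors_):
                Sinv = np.linalg.inv(S)
                _, logdet = np.linalg.slogdet(S)
                d = X - m
                quad = np.einsum("ij,jk,ik->i", d, Sinv, d)         # Mahalanobis term
                scores.append(-0.5 * (quad + logdet) + np.log(prior))
            return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]
    ```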

  15. Pathways to Medical Home Recognition: A Qualitative Comparative Analysis of the PCMH Transformation Process.

    PubMed

    Mendel, Peter; Chen, Emily K; Green, Harold D; Armstrong, Courtney; Timbie, Justin W; Kress, Amii M; Friedberg, Mark W; Kahn, Katherine L

    2017-12-15

    To understand the process of practice transformation by identifying pathways for attaining patient-centered medical home (PCMH) recognition. The CMS Federally Qualified Health Center (FQHC) Advanced Primary Care Practice Demonstration was designed to help FQHCs achieve NCQA Level 3 PCMH recognition and improve patient outcomes. We used a stratified random sample of 20 (out of 503) participating sites for this analysis. We developed a conceptual model of structural, cultural, and implementation factors affecting PCMH transformation based on literature and initial qualitative interview themes. We then used conventional cross-case analysis, followed by qualitative comparative analysis (QCA), a cross-case method based on Boolean logic algorithms, to systematically identify pathways (i.e., combinations of factors) associated with attaining-or not attaining-Level 3 recognition. Site-level indicators were derived from semistructured interviews with site leaders at two points in time (mid- and late-implementation) and administrative data collected prior to and during the demonstration period. The QCA results identified five distinct pathways to attaining PCMH recognition and four distinct pathways to not attaining recognition by the end of the demonstration. Across these pathways, one condition (change leader capacity) was common to all pathways for attaining recognition, and another (previous improvement or recognition experience) was absent in all pathways for not attaining recognition. In general, sites could compensate for deficiencies in one factor with capacity in others, but they needed a threshold of strengths in cultural and implementation factors to attain PCMH recognition. Future efforts at primary care transformation should take into account multiple pathways sites may pursue. Sites should be assessed on key cultural and implementation factors, in addition to structural components, in order to differentiate interventions and technical assistance. © Health Research and Educational Trust.

  16. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE PAGES

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...

    2014-10-23

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure, a filter bank with a minimum number of filters can be constructed for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared, in terms of recognition performance and complexity, with existing state-of-the-art filters.

  17. Pattern recognition with composite correlation filters designed with multi-object combinatorial optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres

    Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure, a filter bank with a minimum number of filters can be constructed for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared, in terms of recognition performance and complexity, with existing state-of-the-art filters.

  18. A neural approach for improving the measurement capability of an electronic nose

    NASA Astrophysics Data System (ADS)

    Chimenti, M.; DeRossi, D.; Di Francesco, F.; Domenici, C.; Pieri, G.; Pioggia, G.; Salvetti, O.

    2003-06-01

    Electronic noses, instruments for automatic recognition of odours, are typically composed of an array of partially selective sensors, a sampling system, a data acquisition device and a data processing system. For the purpose of evaluating the quality of olive oil, an electronic nose based on an array of conducting polymer sensors capable of discriminating olive oil aromas was developed. The selection of suitable pattern recognition techniques for a particular application can enhance the performance of electronic noses. Therefore, an advanced neural recognition algorithm for improving the measurement capability of the device was designed and implemented. This method combines multivariate statistical analysis and a hierarchical neural-network architecture based on self-organizing maps and error back-propagation. The complete system was tested using samples composed of characteristic olive oil aromatic components in refined olive oil. The results obtained have shown that this approach is effective in grouping aromas into different categories representative of their chemical structure.

  19. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space method (VAS) to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process what and where auditory information? How does reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  20. A medical record review for functional somatic symptoms in children.

    PubMed

    Rask, Charlotte Ulrikka; Borg, Carsten; Søndergaard, Charlotte; Schulz-Pedersen, Søren; Thomsen, Per Hove; Fink, Per

    2010-04-01

    The objectives of this study were to develop and test a systematic medical record review for functional somatic symptoms (FSSs) in paediatric patients and to estimate the inter-rater reliability of paediatricians' recognition of FSSs and their associated impairments while using this method. We developed the Medical Record Review for Functional Somatic Symptoms in Children (MRFC) for retrospective medical record review. Described symptoms were categorised as probably, definitely, or not FSSs. FSS-associated impairment was also determined. Three paediatricians performed the MRFC on the medical records of 54 children with a diagnosed, well-defined physical disease and 59 with 'symptom' diagnoses. The inter-rater reliabilities of the recognition and associated impairment of FSSs were tested on 20 of these records. The MRFC allowed identification of subgroups of children with multisymptomatic FSSs, long-term FSSs, and/or impairing FSSs. The FSS inter-rater reliability was good (combined kappa=0.69) but only fair as far as associated impairment was concerned (combined kappa=0.29). In the hands of skilled paediatricians, the MRFC is a reliable method for identifying paediatric patients with diverse types of FSSs for clinical research. However, additional information is needed for reliable judgement of impairment. The method may also prove useful in clinical practice. Copyright 2010 Elsevier Inc. All rights reserved.

  1. [Vis-NIR spectroscopic pattern recognition combined with SG smoothing applied to breed screening of transgenic sugarcane].

    PubMed

    Liu, Gui-Song; Guo, Hao-Song; Pan, Tao; Wang, Ji-Hua; Cao, Gan

    2014-10-01

    Based on Savitzky-Golay (SG) smoothing screening, principal component analysis (PCA) combined with separately supervised linear discriminant analysis (LDA) and unsupervised hierarchical clustering analysis (HCA) were used for non-destructive visible and near-infrared (Vis-NIR) detection for breed screening of transgenic sugarcane. A random and stability-dependent framework of calibration, prediction, and validation was proposed. A total of 456 samples of sugarcane leaves at the elongating stage were collected from the field, composed of 306 transgenic (positive) samples containing the Bt and Bar genes and 150 non-transgenic (negative) samples. A total of 156 samples (negative 50 and positive 106) were randomly selected as the validation set; the remaining samples (negative 100 and positive 200, a total of 300 samples) were used as the modeling set, and the modeling set was then subdivided into calibration (negative 50 and positive 100, a total of 150 samples) and prediction sets (negative 50 and positive 100, a total of 150 samples) 50 times. The number of SG smoothing points was expanded, while some higher-derivative modes were removed because of their small absolute values, and a total of 264 smoothing modes were used for screening. The pairwise combinations of the first three principal components were used, and the optimal combination of principal components was then selected according to the model effect. Based on all divisions of calibration and prediction sets and all SG smoothing modes, the SG-PCA-LDA and SG-PCA-HCA models were established, and the model parameters were optimized based on the average prediction effect over all divisions to ensure modeling stability. Finally, model validation was performed on the validation set. With SG smoothing, the modeling accuracy and stability of PCA-LDA and PCA-HCA were significantly improved. For the optimal SG-PCA-LDA model, the recognition rates of the positive and negative validation samples were 94.3% and 96.0%, respectively; for the optimal SG-PCA-HCA model they were 92.5% and 98.0%. Vis-NIR spectroscopic pattern recognition combined with SG smoothing could be used for accurate recognition of transgenic sugarcane leaves, and provides a convenient screening method for transgenic sugarcane breeding.
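
    A hedged sketch of the SG-PCA-LDA idea on Vis-NIR spectra. The window length, polynomial order, number of principal components, and the synthetic data are illustrative assumptions; the paper searches over many SG smoothing modes and principal-component pairs.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        def sg_pca_lda(spectra, labels, window=11, polyorder=2, n_components=3):
            """spectra: (n_samples, n_wavelengths) array; labels: 0/1 transgenic flag."""
            smoothed = savgol_filter(spectra, window_length=window, polyorder=polyorder, axis=1)
            X_train, X_test, y_train, y_test = train_test_split(
                smoothed, labels, test_size=0.5, stratify=labels, random_state=0)
            pca = PCA(n_components=n_components).fit(X_train)
            lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), y_train)
            return accuracy_score(y_test, lda.predict(pca.transform(X_test)))

        # Shape-only demo with synthetic "spectra" (456 samples x 700 wavelengths):
        rng = np.random.default_rng(0)
        X = rng.normal(size=(456, 700)).cumsum(axis=1)   # smooth-ish fake curves
        y = rng.integers(0, 2, size=456)
        print("recognition rate:", sg_pca_lda(X, y))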

  2. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    DTIC Science & Technology

    2011-09-01

    be submitted into a facial recognition program for comparison with millions of possible matches, offering abundant opportunities to identify the...to leverage the robust number of comparative opportunities associated with facial recognition programs. This research investigates the efficacy of...combining composite forensic artistry with facial recognition technology to create a viable investigative tool to identify suspects, as well as better

  3. Contact-free palm-vein recognition based on local invariant features.

    PubMed

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach.
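
    A hedged sketch of two steps named above: RootSIFT descriptor extraction (L1-normalize SIFT descriptors, then take the element-wise square root) and an illustrative DoG-plus-histogram-equalization enhancement. It assumes an OpenCV build with SIFT available (cv2.SIFT_create in recent versions); parameter values are assumptions, not the paper's settings.

        import cv2
        import numpy as np

        def rootsift_descriptors(gray_image, eps=1e-7):
            sift = cv2.SIFT_create()
            keypoints, desc = sift.detectAndCompute(gray_image, None)
            if desc is None:
                return keypoints, None
            # L1-normalize each 128-d descriptor, then take the square root (Hellinger kernel).
            desc = desc / (desc.sum(axis=1, keepdims=True) + eps)
            return keypoints, np.sqrt(desc)

        def enhance(gray_image, sigma_small=1.0, sigma_large=3.0):
            # Difference-of-Gaussians band-pass followed by histogram equalization.
            small = cv2.GaussianBlur(gray_image, (0, 0), sigma_small)
            large = cv2.GaussianBlur(gray_image, (0, 0), sigma_large)
            dog = cv2.normalize(small.astype(np.float32) - large.astype(np.float32),
                                None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
            return cv2.equalizeHist(dog)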

  4. Contact-Free Palm-Vein Recognition Based on Local Invariant Features

    PubMed Central

    Kang, Wenxiong; Liu, Yang; Wu, Qiuxia; Yue, Xishun

    2014-01-01

    Contact-free palm-vein recognition is one of the most challenging and promising areas in hand biometrics. In view of the existing problems in contact-free palm-vein imaging, including projection transformation, uneven illumination and difficulty in extracting exact ROIs, this paper presents a novel recognition approach for contact-free palm-vein recognition that performs feature extraction and matching on all vein textures distributed over the palm surface, including finger veins and palm veins, to minimize the loss of feature information. First, a hierarchical enhancement algorithm, which combines a DOG filter and histogram equalization, is adopted to alleviate uneven illumination and to highlight vein textures. Second, RootSIFT, a more stable local invariant feature extraction method in comparison to SIFT, is adopted to overcome the projection transformation in contact-free mode. Subsequently, a novel hierarchical mismatching removal algorithm based on neighborhood searching and LBP histograms is adopted to improve the accuracy of feature matching. Finally, we rigorously evaluated the proposed approach using two different databases and obtained 0.996% and 3.112% Equal Error Rates (EERs), respectively, which demonstrate the effectiveness of the proposed approach. PMID:24866176

  5. Automatic anatomy recognition in whole-body PET/CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Huiqian; Udupa, Jayaram K., E-mail: jay@mail.med.upenn.edu; Odhner, Dewey

    Purpose: Whole-body positron emission tomography/computed tomography (PET/CT) has become a standard method of imaging patients with various disease conditions, especially cancer. Body-wide accurate quantification of disease burden in PET/CT images is important for characterizing lesions, staging disease, prognosticating patient outcome, planning treatment, and evaluating disease response to therapeutic interventions. However, body-wide anatomy recognition in PET/CT is a critical first step for accurately and automatically quantifying disease body-wide, body-region-wise, and organwise. This latter process, however, has remained a challenge due to the lower quality of the anatomic information portrayed in the CT component of this imaging modality and the paucity of anatomic details in the PET component. In this paper, the authors demonstrate the adaptation of a recently developed automatic anatomy recognition (AAR) methodology [Udupa et al., “Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images,” Med. Image Anal. 18, 752–771 (2014)] to PET/CT images. Their goal was to test what level of object localization accuracy can be achieved on PET/CT compared to that achieved on diagnostic CT images. Methods: The authors advance the AAR approach in this work in three fronts: (i) from body-region-wise treatment in the work of Udupa et al. to whole body; (ii) from the use of image intensity in optimal object recognition in the work of Udupa et al. to intensity plus object-specific texture properties, and (iii) from the intramodality model-building-recognition strategy to the intermodality approach. The whole-body approach allows consideration of relationships among objects in different body regions, which was previously not possible. Consideration of object texture allows generalizing the previous optimal threshold-based fuzzy model recognition method from intensity images to any derived fuzzy membership image, and in the process, to bring performance to the level achieved on diagnostic CT and MR images in body-region-wise approaches. The intermodality approach fosters the use of already existing fuzzy models, previously created from diagnostic CT images, on PET/CT and other derived images, thus truly separating the modality-independent object assembly anatomy from modality-specific tissue property portrayal in the image. Results: Key ways of combining the above three basic ideas lead them to 15 different strategies for recognizing objects in PET/CT images. Utilizing 50 diagnostic CT image data sets from the thoracic and abdominal body regions and 16 whole-body PET/CT image data sets, the authors compare the recognition performance among these 15 strategies on 18 objects from the thorax, abdomen, and pelvis in object localization error and size estimation error. Particularly on texture membership images, object localization is within three voxels on whole-body low-dose CT images and 2 voxels on body-region-wise low-dose images of known true locations. Surprisingly, even on direct body-region-wise PET images, localization error within 3 voxels seems possible. Conclusions: The previous body-region-wise approach can be extended to whole-body torso with similar object localization performance. Combined use of image texture and intensity property yields the best object localization accuracy. In both body-region-wise and whole-body approaches, recognition performance on low-dose CT images reaches levels previously achieved on diagnostic CT images. The best object recognition strategy varies among objects; the proposed framework, however, allows employing a strategy that is optimal for each object.

  6. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.

  7. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
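
    A hedged sketch of the object-linking step described above: 2D feature centroids detected in successive slices are linked into directed edges whenever a feature lies within a threshold radius of a feature in the adjacent slice. The data layout and radius are illustrative assumptions.

        import numpy as np

        def link_features(slices, radius=2.0):
            """slices: list over depth of arrays of (x, y) feature centroids.
            Returns directed edges ((slice_i, feat_j) -> (slice_i+1, feat_k))."""
            edges = []
            for i in range(len(slices) - 1):
                cur, nxt = np.asarray(slices[i]), np.asarray(slices[i + 1])
                if len(cur) == 0 or len(nxt) == 0:
                    continue
                dists = np.linalg.norm(cur[:, None, :] - nxt[None, :, :], axis=-1)
                for j, k in zip(*np.nonzero(dists <= radius)):
                    edges.append(((i, int(j)), (i + 1, int(k))))
            return edges  # chains of connected edges correspond to candidate 3D objects (e.g., pipes)

        print(link_features([[(0, 0)], [(0.5, 0.3), (9, 9)], [(1.0, 0.5)]]))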

  8. An effective method for cirrhosis recognition based on multi-feature fusion

    NASA Astrophysics Data System (ADS)

    Chen, Yameng; Sun, Gengxin; Lei, Yiming; Zhang, Jinpeng

    2018-04-01

    Liver disease is one of the main causes of human health problems. Cirrhosis is the critical phase in the development of liver lesions, especially hepatoma. Many clinical cases are still influenced to some degree by the subjectivity of physicians, and objective factors such as illumination, scale, and edge blurring also affect clinicians' judgment. This subjectivity in turn affects the accuracy of diagnosis and the treatment of patients. To address this difficulty and improve the recognition rate of liver cirrhosis, we propose a multi-feature fusion method to obtain more robust representations of texture in ultrasound liver images; the texture features we extract include the local binary pattern (LBP), gray level co-occurrence matrix (GLCM) and histogram of oriented gradients (HOG). In this paper, we first fuse multiple features to discriminate cirrhotic from normal liver based on a parallel combination concept, and the experimental results show that the classifier is effective for cirrhosis recognition, as evaluated by a satisfactory classification rate, the sensitivity and specificity of the receiver operating characteristic (ROC), and the computation time. The proposed method should help improve the accuracy of cirrhosis diagnosis and prevent the progression of liver lesions towards hepatoma.
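
    A hedged sketch of the parallel multi-feature fusion idea: LBP, GLCM and HOG descriptors computed on the same ultrasound patch and concatenated into one feature vector. Parameter values are illustrative, not the paper's settings; graycomatrix is the skimage >= 0.19 spelling (older releases use greycomatrix).

        import numpy as np
        from skimage.feature import local_binary_pattern, graycomatrix, graycoprops, hog

        def fused_texture_features(patch):
            """patch: 2-D uint8 grayscale ROI from an ultrasound image."""
            # LBP histogram (uniform patterns, P=8 -> 10 bins).
            lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
            # GLCM statistics at one distance and one angle.
            glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                                symmetric=True, normed=True)
            glcm_feats = np.array([graycoprops(glcm, p)[0, 0]
                                   for p in ("contrast", "homogeneity", "energy", "correlation")])
            # HOG descriptor.
            hog_feats = hog(patch, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
            # Parallel combination: concatenate the three descriptors.
            return np.concatenate([lbp_hist, glcm_feats, hog_feats])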

  9. Locally linear regression for pose-invariant face recognition.

    PubMed

    Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-07-01

    The variation of facial appearance due to the viewpoint (/pose) degrades face recognition systems considerably, which is one of the bottlenecks in face recognition. One of the possible solutions is generating virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple, but efficient, novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show distinct advantage of the proposed method over Eigen light-field method.
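
    A hedged sketch of the locally linear regression (LLR) idea: for each local patch position, learn a least-squares linear map from non-frontal patch pixels to the corresponding frontal patch pixels, then synthesize a virtual frontal view patch by patch. Patch size, stride, and data layout are assumptions.

        import numpy as np

        def fit_llr(nonfrontal, frontal, patch=8, stride=4):
            """nonfrontal, frontal: (n_pairs, H, W) aligned training images."""
            n, H, W = nonfrontal.shape
            maps = {}
            for y in range(0, H - patch + 1, stride):
                for x in range(0, W - patch + 1, stride):
                    X = nonfrontal[:, y:y+patch, x:x+patch].reshape(n, -1)
                    Y = frontal[:, y:y+patch, x:x+patch].reshape(n, -1)
                    X1 = np.hstack([X, np.ones((n, 1))])           # affine term
                    W_map, *_ = np.linalg.lstsq(X1, Y, rcond=None)
                    maps[(y, x)] = W_map
            return maps

        def synthesize_frontal(image, maps, patch=8, stride=4):
            H, W = image.shape
            out = np.zeros((H, W)); weight = np.zeros((H, W))
            for (y, x), W_map in maps.items():
                p = image[y:y+patch, x:x+patch].reshape(1, -1)
                pred = np.hstack([p, np.ones((1, 1))]) @ W_map
                out[y:y+patch, x:x+patch] += pred.reshape(patch, patch)
                weight[y:y+patch, x:x+patch] += 1.0
            return out / np.maximum(weight, 1.0)   # average overlapping patch predictions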

  10. Prediction of TF target sites based on atomistic models of protein-DNA complexes

    PubMed Central

    Angarica, Vladimir Espinosa; Pérez, Abel González; Vasconcelos, Ana T; Collado-Vides, Julio; Contreras-Moreira, Bruno

    2008-01-01

    Background The specific recognition of genomic cis-regulatory elements by transcription factors (TFs) plays an essential role in the regulation of coordinated gene expression. Studying the mechanisms determining binding specificity in protein-DNA interactions is thus an important goal. Most current approaches for modeling TF specific recognition rely on the knowledge of large sets of cognate target sites and consider only the information contained in their primary sequence. Results Here we describe a structure-based methodology for predicting sequence motifs starting from the coordinates of a TF-DNA complex. Our algorithm combines information regarding the direct and indirect readout of DNA into an atomistic statistical model, which is used to estimate the interaction potential. We first measure the ability of our method to correctly estimate the binding specificities of eight prokaryotic and eukaryotic TFs that belong to different structural superfamilies. Secondly, the method is applied to two homology models, finding that sampling of interface side-chain rotamers remarkably improves the results. Thirdly, the algorithm is compared with a reference structural method based on contact counts, obtaining comparable predictions for the experimental complexes and more accurate sequence motifs for the homology models. Conclusion Our results demonstrate that atomic-detail structural information can be feasibly used to predict TF binding sites. The computational method presented here is universal and might be applied to other systems involving protein-DNA recognition. PMID:18922190

  11. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: Meta-analytic findings

    PubMed Central

    Ventura, Joseph; Wood, Rachel C.; Jimenez, Amy M.; Hellemann, Gerhard S.

    2014-01-01

    Background In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? Methods A meta-analysis of 102 studies (combined n = 4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Results Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r = .51). In addition, the relationship between FR and EP through voice prosody (r = .58) is as strong as the relationship between FR and EP based on facial stimuli (r = .53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality – facial stimuli and voice prosody. Discussion The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. PMID:24268469

  12. Multifeature-based high-resolution palmprint recognition.

    PubMed

    Dai, Jifeng; Zhou, Jie

    2011-05-01

    Palmprint is a promising biometric feature for use in access control and forensic applications. Previous research on palmprint recognition mainly concentrates on low-resolution (about 100 ppi) palmprints. But for high-security applications (e.g., forensic usage), high-resolution palmprints (500 ppi or higher) are required from which more useful information can be extracted. In this paper, we propose a novel recognition algorithm for high-resolution palmprint. The main contributions of the proposed algorithm include the following: 1) use of multiple features, namely, minutiae, density, orientation, and principal lines, for palmprint recognition to significantly improve the matching performance of the conventional algorithm. 2) Design of a quality-based and adaptive orientation field estimation algorithm which performs better than the existing algorithm in case of regions with a large number of creases. 3) Use of a novel fusion scheme for an identification application which performs better than conventional fusion methods, e.g., weighted sum rule, SVMs, or Neyman-Pearson rule. Besides, we analyze the discriminative power of different feature combinations and find that density is very useful for palmprint recognition. Experimental results on the database containing 14,576 full palmprints show that the proposed algorithm has achieved a good performance. In the case of verification, the recognition system's False Rejection Rate (FRR) is 16 percent, which is 17 percent lower than the best existing algorithm at a False Acceptance Rate (FAR) of 10(-5), while in the identification experiment, the rank-1 live-scan partial palmprint recognition rate is improved from 82.0 to 91.7 percent.

  13. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted increasing attention. This paper studies a face recognition system comprising face detection, feature extraction and face recognition, mainly by examining the theory and key techniques of various preprocessing methods in the face detection stage and, using the KPCA method, focusing on how different preprocessing choices lead to different recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess the face images with erosion and dilation (opening and closing operations) and illumination compensation, and then apply a face recognition method based on kernel principal component analysis; the experiments were carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm makes the extracted features represent the original image information better because of its nonlinear feature extraction, which yields a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can yield different results, and thus different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the degree of the polynomial kernel function affects the recognition result.
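
    A hedged sketch of the KPCA step: kernel PCA with a polynomial kernel for feature extraction on preprocessed face images, followed by a simple nearest-neighbour classifier. The number of components and the polynomial degree (the "power" the text says influences recognition) are illustrative parameters, and the variable names in the usage comment are hypothetical.

        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        def kpca_face_recognizer(n_components=50, degree=3):
            return make_pipeline(
                KernelPCA(n_components=n_components, kernel="poly", degree=degree),
                KNeighborsClassifier(n_neighbors=1),
            )

        # Usage on flattened, preprocessed face images X (n_samples, n_pixels) with labels y:
        # model = kpca_face_recognizer(degree=2).fit(X_train, y_train)
        # print("recognition rate:", model.score(X_test, y_test))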

  14. Molecular self-recognition: a chiral [Mn(II)6] wheel via donor-acceptor π···π contacts and H-bonds.

    PubMed

    Barrios, L A; Salinas-Uber, J; Roubeau, O; Teat, S J; Aromí, G

    2015-03-18

    A multinucleating ligand capable of establishing different types of intermolecular interactions, when combined with acetate groups, leads to the assembly of a chiral [Mn(II)3] cluster poised for a process of self-recognition through a combination of perfectly complementary weak forces.

  15. View Combination: A Generalization Mechanism for Visual Recognition

    ERIC Educational Resources Information Center

    Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric

    2011-01-01

    We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two-dimensional visual array derived from a three-dimensional…

  16. Mapping monomeric threading to protein-protein structure prediction.

    PubMed

    Guerler, Aysam; Govindarajoo, Brandon; Zhang, Yang

    2013-03-25

    The key step of template-based protein-protein structure prediction is the recognition of complexes from experimental structure libraries that have similar quaternary fold. Maintaining two monomer and dimer structure libraries is however laborious, and inappropriate library construction can degrade template recognition coverage. We propose a novel strategy SPRING to identify complexes by mapping monomeric threading alignments to protein-protein interactions based on the original oligomer entries in the PDB, which does not rely on library construction and increases the efficiency and quality of complex template recognitions. SPRING is tested on 1838 nonhomologous protein complexes which can recognize correct quaternary template structures with a TM score >0.5 in 1115 cases after excluding homologous proteins. The average TM score of the first model is 60% and 17% higher than that by HHsearch and COTH, respectively, while the number of targets with an interface RMSD <2.5 Å by SPRING is 134% and 167% higher than these competing methods. SPRING is controlled with ZDOCK on 77 docking benchmark proteins. Although the relative performance of SPRING and ZDOCK depends on the level of homology filters, a combination of the two methods can result in a significantly higher model quality than ZDOCK at all homology thresholds. These data demonstrate a new efficient approach to quaternary structure recognition that is ready to use for genome-scale modeling of protein-protein interactions due to the high speed and accuracy.

  17. Evaluation of accelerometer based multi-sensor versus single-sensor activity recognition systems.

    PubMed

    Gao, Lei; Bourke, A K; Nelson, John

    2014-06-01

    Physical activity has a positive impact on people's well-being and it had been shown to decrease the occurrence of chronic diseases in the older adult population. To date, a substantial amount of research studies exist, which focus on activity recognition using inertial sensors. Many of these studies adopt a single sensor approach and focus on proposing novel features combined with complex classifiers to improve the overall recognition accuracy. In addition, the implementation of the advanced feature extraction algorithms and the complex classifiers exceed the computing ability of most current wearable sensor platforms. This paper proposes a method to adopt multiple sensors on distributed body locations to overcome this problem. The objective of the proposed system is to achieve higher recognition accuracy with "light-weight" signal processing algorithms, which run on a distributed computing based sensor system comprised of computationally efficient nodes. For analysing and evaluating the multi-sensor system, eight subjects were recruited to perform eight normal scripted activities in different life scenarios, each repeated three times. Thus a total of 192 activities were recorded resulting in 864 separate annotated activity states. The methods for designing such a multi-sensor system required consideration of the following: signal pre-processing algorithms, sampling rate, feature selection and classifier selection. Each has been investigated and the most appropriate approach is selected to achieve a trade-off between recognition accuracy and computing execution time. A comparison of six different systems, which employ single or multiple sensors, is presented. The experimental results illustrate that the proposed multi-sensor system can achieve an overall recognition accuracy of 96.4% by adopting the mean and variance features, using the Decision Tree classifier. The results demonstrate that elaborate classifiers and feature sets are not required to achieve high recognition accuracies on a multi-sensor system. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
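
    A hedged sketch of the "light-weight" feature set the study settles on: mean and variance per accelerometer axis over a sliding window, classified with a decision tree. The window length, step, and data layout are illustrative assumptions, and labels are assumed to be provided per window.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def window_features(signal, window=128, step=64):
            """signal: (n_samples, n_axes) accelerometer stream (several sensors may be
            concatenated along the axis dimension). Returns (n_windows, 2 * n_axes)."""
            feats = []
            for start in range(0, len(signal) - window + 1, step):
                seg = signal[start:start + window]
                feats.append(np.hstack([seg.mean(axis=0), seg.var(axis=0)]))
            return np.array(feats)

        # Hypothetical usage (one label per window):
        # clf = DecisionTreeClassifier(random_state=0).fit(window_features(train_stream), window_labels)
        # predictions = clf.predict(window_features(test_stream))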

  18. Effective Fingerprint Quality Estimation for Diverse Capture Sensors

    PubMed Central

    Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun

    2010-01-01

    Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features to assess the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted for different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality and consistency. The proposed system extracts basic features and generates next-level features that are applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method can perform better than previous methods in terms of accuracy. At the same time, the proposed method is able to eliminate residue images from optical and capacitive sensors and coarse images from thermal sensors. PMID:22163632

  19. The association between reconstructed phase space and Artificial Neural Networks for vectorcardiographic recognition of myocardial infarction.

    PubMed

    Costa, Cecília M; Silva, Ittalo S; de Sousa, Rafael D; Hortegal, Renato A; Regis, Carlos Danilo M

    Myocardial infarction is one of the leading causes of death worldwide. Because it is life-threatening, it requires immediate and precise treatment, which drives a growing demand for research and innovation in biomedical signal processing. This paper proposes the association of Reconstructed Phase Space and Artificial Neural Networks for Vectorcardiography Myocardial Infarction Recognition. The algorithm gives its best results for a 10 × 10 box size and the combination of four parameters: box counting (Vx), box counting (Vz), self-similarity method (Vx) and self-similarity method (Vy), with sensitivity = 92%, specificity = 96% and accuracy = 94%. The topographic diagnosis showed different performance for different types of infarction, with better results for anterior wall infarctions and less accurate results for inferior infarctions. Copyright © 2018 Elsevier Inc. All rights reserved.
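
    A hedged sketch of the two ingredients named above: a time-delay reconstructed phase space of a VCG lead and a box-counting measure on a 10 × 10 grid of that trajectory. The delay, normalization, and the lead variable named in the closing comment are illustrative assumptions.

        import numpy as np

        def reconstruct_phase_space(x, delay=10):
            """Return the 2-D delay embedding [x(t), x(t + delay)] of a 1-D signal."""
            return np.column_stack([x[:-delay], x[delay:]])

        def box_counting(trajectory, grid=10):
            """Fraction of cells of a grid x grid partition visited by the trajectory."""
            t = trajectory - trajectory.min(axis=0)
            span = np.maximum(t.max(axis=0), 1e-12)
            cells = np.minimum((t / span * grid).astype(int), grid - 1)
            occupied = len({tuple(c) for c in cells})
            return occupied / (grid * grid)

        # Features such as box_counting(reconstruct_phase_space(vx_lead)) for each VCG lead
        # can then be fed to a small feed-forward neural network classifier.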

  20. Turning tryptophanase into odor-generating biosensors.

    PubMed

    Xu, Yaqin; Zhang, Zhuyuan; Ali, M Monsur; Sauder, Joanna; Deng, Xudong; Giang, Karen; Aguirre, Sergio D; Pelton, Robert; Li, Yingfu; Filipe, Carlos D M

    2014-03-03

    An odor-based sensor system that exploits the metabolic enzyme tryptophanase (TPase) as the key component is reported. This enzyme is able to convert an odorless substrate like S-methyl-L-cysteine or L-tryptophan into the odorous products methyl mercaptan or indole. To make a biosensor, TPase was biotinylated so that it could be coupled with a molecular recognition element, such as an antibody, to develop an ELISA-like assay. This method was used for the detection of an antibody present in nM concentrations by the human nose. TPase can also be combined with the enzyme pyridoxal kinase (PKase) for use in a coupled assay to detect adenosine 5'-triphosphate (ATP). When ATP is present in the low μM concentration range, the coupled enzymatic system generates an odor that is easily detectable by the human nose. Biotinylated TPase can be combined with various biotin-labeled molecular recognition elements, thereby enabling a broad range of applications for this odor-based reporting system. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Design method of ARM based embedded iris recognition system

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting

    2008-03-01

    With the advantages of non-invasiveness, uniqueness, stability and a low false recognition rate, iris recognition has been successfully applied in many fields. Up to now, most iris recognition systems have been PC-based. However, a PC is not portable and consumes more power. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of the iris image acquisition and recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm running on it, and finally realized the design of an ARM-based iris imaging and recognition system. Experimental results show that the ARM platform we used is fast enough to run the iris recognition algorithm, and that the data stream flows smoothly between the camera and the ARM chip under the embedded Linux system. This is an effective way of using ARM to realize a portable embedded iris recognition system.

  2. Building Hierarchical Representations for Oracle Character and Sketch Recognition.

    PubMed

    Jun Guo; Changhu Wang; Roman-Rangel, Edgar; Hongyang Chao; Yong Rui

    2016-01-01

    In this paper, we study oracle character recognition and general sketch recognition. First, a data set of oracle characters, which are the oldest hieroglyphs in China yet remain a part of modern Chinese characters, is collected for analysis. Second, typical visual representations in shape- and sketch-related works are evaluated. We analyze the problems suffered when addressing these representations and determine several representation design criteria. Based on the analysis, we propose a novel hierarchical representation that combines a Gabor-related low-level representation and a sparse-encoder-related mid-level representation. Extensive experiments show the effectiveness of the proposed representation in both oracle character recognition and general sketch recognition. The proposed representation is also complementary to convolutional neural network (CNN)-based models. We introduce a solution to combine the proposed representation with CNN-based models, and achieve better performances over both approaches. This solution has beaten humans at recognizing general sketches.

  3. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  4. Robust Tomato Recognition for Robotic Harvesting Using Feature Images Fusion

    PubMed Central

    Zhao, Yuanshen; Gong, Liang; Huang, Yixiang; Liu, Chengliang

    2016-01-01

    Automatic recognition of mature fruits in a complex agricultural environment is still a challenge for an autonomous harvesting robot due to various disturbances existing in the background of the image. The bottleneck to robust fruit recognition is reducing the influence of two main disturbances: illumination and overlapping. In order to recognize the tomato in the tree canopy using a low-cost camera, a robust tomato recognition algorithm based on multiple feature images and image fusion was studied in this paper. Firstly, two novel feature images, the a*-component image and the I-component image, were extracted from the L*a*b* color space and the luminance, in-phase, quadrature-phase (YIQ) color space, respectively. Secondly, wavelet transformation was adopted to fuse the two feature images at the pixel level, which combined the feature information of the two source images. Thirdly, in order to segment the target tomato from the background, an adaptive threshold algorithm was used to get the optimal threshold. The final segmentation result was processed by a morphology operation to reduce residual noise. In the detection tests, 93% of the target tomatoes were recognized among the 200 test samples. This indicates that the proposed tomato recognition method is suitable for low-cost robotic tomato harvesting in uncontrolled environments. PMID:26840313
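
    A hedged sketch of the feature-image fusion step: extract the a* component (L*a*b*) and the I component (YIQ), fuse them with a single-level 2-D wavelet transform (averaged approximation, max-abs detail coefficients), then threshold with Otsu and clean up with morphology. The fusion rule, wavelet choice, and structuring-element size are assumptions.

        import numpy as np
        import pywt
        from skimage.color import rgb2lab, rgb2yiq
        from skimage.filters import threshold_otsu
        from skimage.morphology import binary_opening, disk

        def fuse_and_segment(rgb):
            a_chan = rgb2lab(rgb)[..., 1]       # a* component
            i_chan = rgb2yiq(rgb)[..., 1]       # I component
            cA1, det1 = pywt.dwt2(a_chan, "db2")
            cA2, det2 = pywt.dwt2(i_chan, "db2")
            cA = (cA1 + cA2) / 2.0
            details = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                            for d1, d2 in zip(det1, det2))
            fused = pywt.idwt2((cA, details), "db2")
            fused = fused[:a_chan.shape[0], :a_chan.shape[1]]   # crop padding for odd sizes
            mask = fused > threshold_otsu(fused)                # adaptive (Otsu) threshold
            return binary_opening(mask, disk(3))                # remove small noise specks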

  5. Jersey number detection in sports video for athlete identification

    NASA Astrophysics Data System (ADS)

    Ye, Qixiang; Huang, Qingming; Jiang, Shuqiang; Liu, Yang; Gao, Wen

    2005-07-01

    Athlete identification is important for sport video content analysis since users often care about the video clips with their preferred athletes. In this paper, we propose a method for athlete identification by combining the segmentation, tracking and recognition procedures into a coarse-to-fine scheme for jersey number (digital characters on the sport shirt) detection. Firstly, image segmentation is employed to separate the jersey number regions from the background, and size and pipe-like attributes of digital characters are used to filter out candidates. Then, a K-NN (K nearest neighbor) classifier is employed to classify a candidate into a digit in "0-9" or negative. In the recognition procedure, we use the Zernike moment features, which are invariant to rotation and scale, for digital shape recognition. Synthetic training samples with different fonts are used to represent the pattern of digital characters with non-rigid deformation. Once a character candidate is detected, an SSD (smallest square distance)-based tracking procedure is started. The recognition procedure is performed every several frames in the tracking process. After tracking tens of frames, the overall recognition results are combined to determine whether a candidate is a true jersey number by a voting procedure. Experiments on several types of sports video show encouraging results.

  6. Natural user interface as a supplement of the holographic Raman tweezers

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel

    2014-09-01

    Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via the mouse or joystick. Several attempts have appeared recently to exploit touch tablets, 2D cameras or the Kinect game console instead. We propose a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the low-cost "Leap Motion" and "MyGaze" sensors and a simple speech recognition program, "Tazti". We developed our own NUI software, which processes signals from the sensors and sends control commands to the HRT, which in turn controls the positions of the trapping beams, the micropositioning stage and the Raman spectra acquisition system. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") for manipulating particles are displayed in a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful when both hands are busy. The proposed methods make manual control of HRT more efficient and also provide a good platform for its future semi-automated and fully automated operation.

  7. Face recognition using 3D facial shape and color map information: comparison and combination

    NASA Astrophysics Data System (ADS)

    Godil, Afzal; Ressler, Sandy; Grother, Patrick

    2004-08-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.

  8. Three-dimensional deformable-model-based localization and recognition of road vehicles.

    PubMed

    Zhang, Zhaoxiang; Tan, Tieniu; Huang, Kaiqi; Wang, Yunhong

    2012-01-01

    We address the problem of model-based object recognition. Our aim is to localize and recognize road vehicles from monocular images or videos in calibrated traffic scenes. A 3-D deformable vehicle model with 12 shape parameters is set up as prior information, and its pose is determined by three parameters, which are its position on the ground plane and its orientation about the vertical axis under ground-plane constraints. An efficient local gradient-based method is proposed to evaluate the fitness between the projection of the vehicle model and image data, which is combined into a novel evolutionary computing framework to estimate the 12 shape parameters and three pose parameters by iterative evolution. The recovery of pose parameters achieves vehicle localization, whereas the shape parameters are used for vehicle recognition. Numerous experiments are conducted in this paper to demonstrate the performance of our approach. It is shown that the local gradient-based method can accurately and efficiently evaluate the fitness between the projection of the vehicle model and the image data. The evolutionary computing framework is effective for vehicles of different types and poses and is robust to various kinds of occlusion.

  9. Traffic sign recognition based on deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Yin, Shi-hao; Deng, Ji-cai; Zhang, Da-wei; Du, Jing-yuan

    2017-11-01

    Traffic sign recognition (TSR) is an important component of automated driving systems. It is a rather challenging task to design a high-performance classifier for the TSR system. In this paper, we propose a new method for TSR system based on deep convolutional neural network. In order to enhance the expression of the network, a novel structure (dubbed block-layer below) which combines network-in-network and residual connection is designed. Our network has 10 layers with parameters (block-layer seen as a single layer): the first seven are alternate convolutional layers and block-layers, and the remaining three are fully-connected layers. We train our TSR network on the German traffic sign recognition benchmark (GTSRB) dataset. To reduce overfitting, we perform data augmentation on the training images and employ a regularization method named "dropout". The activation function we employ in our network adopts scaled exponential linear units (SELUs), which can induce self-normalizing properties. To speed up the training, we use an efficient GPU to accelerate the convolutional operation. On the test dataset of GTSRB, we achieve the accuracy rate of 99.67%, exceeding the state-of-the-art results.
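
    A hedged, much smaller sketch of the ingredients named above: convolutional layers with SELU activations, dropout for regularization, and fully connected layers for the 43 GTSRB classes. Layer sizes and the input resolution are illustrative assumptions, not the paper's 10-layer block-layer architecture.

        import torch
        import torch.nn as nn

        class TinyTSRNet(nn.Module):
            def __init__(self, num_classes=43):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.SELU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.SELU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Dropout(p=0.5),                    # "dropout" regularization
                    nn.Linear(64 * 8 * 8, 256), nn.SELU(),
                    nn.Linear(256, num_classes),
                )

            def forward(self, x):                          # x: (batch, 3, 32, 32) sign crops
                return self.classifier(self.features(x))

        # logits = TinyTSRNet()(torch.randn(4, 3, 32, 32))   # -> shape (4, 43)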

  10. Robust Combining of Disparate Classifiers Through Order Statistics

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic, are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real world data and standard public domain data sets corroborate these findings.
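
    A hedged sketch of order-statistic combining: given per-classifier class posteriors for one sample, combine them with the median, the maximum, a general ith order statistic, or a trimmed mean of the ordered outputs. The rule names and the trimming depth are illustrative choices.

        import numpy as np

        def combine_order_statistic(probs, rule="median", i=None, trim=1):
            """probs: (n_classifiers, n_classes) outputs for one sample."""
            ordered = np.sort(probs, axis=0)                   # per-class order statistics
            if rule == "median":
                combined = np.median(probs, axis=0)
            elif rule == "max":
                combined = ordered[-1]
            elif rule == "ith":
                combined = ordered[i]                          # ith order statistic
            elif rule == "trim":
                combined = ordered[trim:len(probs) - trim].mean(axis=0)   # trimmed mean
            else:
                raise ValueError(rule)
            return int(np.argmax(combined))

        # Example: three classifiers, one of which is badly miscalibrated.
        p = np.array([[0.7, 0.2, 0.1],
                      [0.6, 0.3, 0.1],
                      [0.0, 0.0, 1.0]])
        print(combine_order_statistic(p, "median"))   # -> 0, robust to the outlier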

  11. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    PubMed

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face were recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  12. Determination of geographical origin of alcoholic beverages using ultraviolet, visible and infrared spectroscopy: A review

    NASA Astrophysics Data System (ADS)

    Uríčková, Veronika; Sádecká, Jana

    2015-09-01

    The identification of the geographical origin of beverages is one of the most important issues in food chemistry. Spectroscopic methods provide a relative rapid and low cost alternative to traditional chemical composition or sensory analyses. This paper reviews the current state of development of ultraviolet (UV), visible (Vis), near infrared (NIR) and mid infrared (MIR) spectroscopic techniques combined with pattern recognition methods for determining geographical origin of both wines and distilled drinks. UV, Vis, and NIR spectra contain broad band(s) with weak spectral features limiting their discrimination ability. Despite this expected shortcoming, each of the three spectroscopic ranges (NIR, Vis/NIR and UV/Vis/NIR) provides average correct classification higher than 82%. Although average correct classification is similar for NIR and MIR regions, in some instances MIR data processing improves prediction. Advantage of using MIR is that MIR peaks are better defined and more easily assigned than NIR bands. In general, success in a classification depends on both spectral range and pattern recognition methods. The main problem still remains the construction of databanks needed for all of these methods.

  13. A simulation study of harmonics regeneration in noise reduction for electric and acoustic stimulation.

    PubMed

    Hu, Yi

    2010-05-01

    Recent research results show that combined electric and acoustic stimulation (EAS) significantly improves speech recognition in noise, and it is generally established that access to the improved F0 representation of target speech, along with the glimpse cues, provide the EAS benefits. Under noisy listening conditions, noise signals degrade these important cues by introducing undesired temporal-frequency components and corrupting harmonics structure. In this study, the potential of combining noise reduction and harmonics regeneration techniques was investigated to further improve speech intelligibility in noise by providing improved beneficial cues for EAS. Three hypotheses were tested: (1) noise reduction methods can improve speech intelligibility in noise for EAS; (2) harmonics regeneration after noise reduction can further improve speech intelligibility in noise for EAS; and (3) harmonics sideband constraints in frequency domain (or equivalently, amplitude modulation in temporal domain), even deterministic ones, can provide additional benefits. Test results demonstrate that combining noise reduction and harmonics regeneration can significantly improve speech recognition in noise for EAS, and it is also beneficial to preserve the harmonics sidebands under adverse listening conditions. This finding warrants further work into the development of algorithms that regenerate harmonics and the related sidebands for EAS processing under noisy conditions.

  14. The effect of product characteristic familiarity on product recognition

    NASA Astrophysics Data System (ADS)

    Yang, Cheng; An, Fang; Chen, Chen; Zhu, Bin

    2017-09-01

    In order to explore the effect of product appearance characteristic familiarity on product recognition, both an EEG experiment and a questionnaire evaluation are used in this research. Objective user feedback is obtained through the EEG experiment and subjective opinions are collected through the questionnaires. The EEG experiment follows the classical learning-recognition paradigm, and the old-new effect in the recognition experiment is used as a metric of the degree of recognition. Experimental results show that differences in characteristic familiarity do have a significant effect on product recognition. This conclusion can be applied in innovation design.

  15. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed that is based on hybrid texture-edge local pattern coding for feature extraction and on the integration of RGB and depth video information. The paper mainly focuses on background subtraction in RGB and depth video sequences of behaviors, extraction and integration of history images of the behavior outlines, feature extraction, and classification. The new method achieves rapid and efficient recognition of 3D behavior videos. A large number of experiments show that the proposed method is faster and has a higher recognition rate. The method is robust to different background colors, lighting conditions and other factors. Meanwhile, the mixed texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition tasks.

  16. Color images of Kansas subsurface geology from well logs

    USGS Publications Warehouse

    Collins, D.R.; Doveton, J.H.

    1986-01-01

    Modern wireline log combinations give highly diagnostic information that goes beyond the basic shale content, pore volume, and fluid saturation of older logs. Pattern recognition of geology from logs is made conventionally through either the examination of log overlays or log crossplots. Both methods can be combined through the use of color as a medium of information by setting the three color primaries of blue, green, and red light as axes of three dimensional color space. Multiple log readings of zones are rendered as composite color mixtures which, when plotted sequentially with depth, show lithological successions in a striking manner. The method is extremely simple to program and display on a color monitor. Illustrative examples are described from the Kansas subsurface. ?? 1986.
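
    A hedged sketch of the colour-composite idea: scale three wireline log curves to [0, 1] and treat them as the blue, green and red channels of a depth strip, so each depth sample gets a composite colour reflecting its combined log response. The log names in the usage comment are hypothetical examples.

        import numpy as np

        def logs_to_color_strip(log_b, log_g, log_r, width=50):
            """Each log: 1-D array sampled at the same depths. Returns an (depth, width, 3) RGB strip."""
            def scale(v):
                v = np.asarray(v, dtype=float)
                return (v - v.min()) / (v.max() - v.min() + 1e-12)
            rgb = np.stack([scale(log_r), scale(log_g), scale(log_b)], axis=-1)   # (depth, 3)
            return np.repeat(rgb[:, None, :], width, axis=1)   # widen into a plottable strip

        # plt.imshow(logs_to_color_strip(gamma_ray, neutron_porosity, bulk_density), aspect="auto")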

  17. Combined qualitative and quantitative research designs.

    PubMed

    Seymour, Jane

    2012-12-01

    Mixed methods research designs have been recognized as important in addressing complexity and are recommended particularly in the development and evaluation of complex interventions. This article reports a review of studies in palliative care published between 2010 and March 2012 that combine qualitative and quantitative approaches. A synthesis of approaches to mixed methods research taken in 28 examples of published research studies of relevance to palliative and supportive care is provided, using a typology based on a classic categorization put forward in 1992. Mixed-method studies are becoming more frequently employed in palliative care research and resonate with the complexity of the palliative care endeavour. Undertaking mixed methods research requires a sophisticated understanding of the research process and recognition of some of the underlying complexities encountered when working with different traditions and perspectives on issues of: sampling, validity, reliability and rigour, different sources of data and different data collection and analysis techniques.

  18. Neuroscience-inspired computational systems for speech recognition under noisy conditions

    NASA Astrophysics Data System (ADS)

    Schafer, Phillip B.

    Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes advantage of the neural representation's invariance in noise. The scheme centers on a speech similarity measure based on the longest common subsequence between spike sequences. The combined encoding and decoding scheme outperforms a benchmark system in extremely noisy acoustic conditions. Finally, I consider methods for decoding spike representations of continuous speech. To help guide the alignment of templates to words, I design a syllable detection scheme that robustly marks the locations of syllabic nuclei. The scheme combines SVM-based training with a peak selection algorithm designed to improve noise tolerance. By incorporating syllable information into the ASR system, I obtain strong recognition results in noisy conditions, although the performance in noiseless conditions is below the state of the art. The work presented here constitutes a novel approach to the problem of ASR that can be applied in the many challenging acoustic environments in which we use computer technologies today. The proposed spike-based processing methods can potentially be exploited in efficient hardware implementations and could significantly reduce the computational costs of ASR. The work also provides a framework for understanding the advantages of spike-based acoustic coding in the human brain.
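
    The template-matching step rests on a longest-common-subsequence similarity; a minimal sketch of such a measure over two spike label sequences (a generic dynamic-programming LCS, not the dissertation's exact implementation):

      def lcs_length(seq_a, seq_b):
          """Length of the longest common subsequence between two sequences of
          spike labels (e.g. neuron indices listed in firing order)."""
          m, n = len(seq_a), len(seq_b)
          dp = [[0] * (n + 1) for _ in range(m + 1)]
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  if seq_a[i - 1] == seq_b[j - 1]:
                      dp[i][j] = dp[i - 1][j - 1] + 1
                  else:
                      dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
          return dp[m][n]

      def similarity(seq_a, seq_b):
          """Normalised score used to compare a test utterance against word templates."""
          return lcs_length(seq_a, seq_b) / max(len(seq_a), len(seq_b), 1)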

  19. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin

    2016-09-01

    Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmark of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmark and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, with an average accuracy of 96.2%.

  20. SVD compression for magnetic resonance fingerprinting in the time domain.

    PubMed

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

    Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method of obtaining the quantitative images is desirable. We propose to compress the dictionary using the singular value decomposition, which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of 3.4 to 4.8 without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
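
    A compact sketch of the compression and matching steps, assuming the dictionary is stored as a time-by-atoms matrix of simulated signal evolutions:

      import numpy as np

      def compress_dictionary(D, rank):
          """D: (n_timepoints, n_atoms) complex dictionary. Returns the rank-k
          time-domain projection basis and the compressed dictionary."""
          U, s, Vh = np.linalg.svd(D, full_matrices=False)
          Uk = U[:, :rank]                 # low-rank basis in the time domain
          Dk = Uk.conj().T @ D             # (rank, n_atoms) compressed dictionary
          return Uk, Dk

      def match(signal, Uk, Dk):
          """Project the measured signal and pick the atom with the largest
          normalised inner product (the pattern-recognition step)."""
          sk = Uk.conj().T @ signal
          scores = np.abs(Dk.conj().T @ sk) / (np.linalg.norm(Dk, axis=0) * np.linalg.norm(sk) + 1e-12)
          return int(np.argmax(scores))

    The matching then runs over vectors of length rank instead of the full number of time points, which is where the reported speed-up comes from.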

  1. Automatic gang graffiti recognition and interpretation

    NASA Astrophysics Data System (ADS)

    Parra, Albert; Boutin, Mireille; Delp, Edward J.

    2017-09-01

    One of the roles of emergency first responders (e.g., police and fire departments) is to prevent and protect against events that can jeopardize the safety and well-being of a community. In the case of criminal gang activity, tools are needed for finding, documenting, and taking the necessary actions to mitigate the problem or issue. We describe an integrated mobile-based system capable of using location-based services, combined with image analysis, to track and analyze gang activity through the acquisition, indexing, and recognition of gang graffiti images. This approach uses image analysis methods for color recognition, image segmentation, and image retrieval and classification. A database of gang graffiti images is described that includes not only the images but also metadata related to the images, such as date and time, geoposition, gang, gang member, colors, and symbols. The user can then query the data in a useful manner. We have implemented these features both as applications for Android and iOS hand-held devices and as a web-based interface.

  2. SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain

    PubMed Central

    McGivney, Debra F.; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A.

    2016-01-01

    Magnetic resonance fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method of obtaining the quantitative images is desirable. We propose to compress the dictionary using the singular value decomposition (SVD), which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of 3.4 to 4.8 without sacrificing the high signal-to-noise ratio of the original scheme presented previously. PMID:25029380

  3. Analyzing handwriting biometrics in metadata context

    NASA Astrophysics Data System (ADS)

    Scheidat, Tobias; Wolf, Franziska; Vielhauer, Claus

    2006-02-01

    In this article, methods for user recognition by online handwriting are experimentally analyzed using a combination of demographic data of users in relation to their handwriting habits. Online handwriting as a biometric method is characterized by high variations of characteristics that influence the reliability and security of this method. These variations have not been researched in detail so far. Especially in cross-cultural applications it is urgent to reveal the impact of personal background on security aspects in biometrics. Metadata represent the background of writers by introducing cultural, biological and conditional (changing) aspects such as first language, country of origin, gender, handedness, and experiences that influence handwriting and language skills. The goal is the revelation of intercultural impacts on handwriting in order to achieve higher security in biometric systems. In our experiments, in order to achieve relatively high coverage, 48 different handwriting tasks accomplished by 47 users from three countries (Germany, India and Italy) have been investigated with respect to the relations between metadata and biometric recognition performance. For this purpose, hypotheses have been formulated and evaluated using well-known recognition error rates from biometrics. The evaluation addressed both system reliability and security threats from skilled forgeries. For the latter purpose, a novel forgery type is introduced, which applies the personal metadata to security aspects and includes new methods of security tests. Finally, we formulate recommendations for specific user groups and handwriting samples.

  4. Phage diabody repertoires for selection of large numbers of bispecific antibody fragments.

    PubMed

    McGuinness, B T; Walter, G; FitzGerald, K; Schuler, P; Mahoney, W; Duncan, A R; Hoogenboom, H R

    1996-09-01

    Methods for the generation of large numbers of different bispecific antibodies are presented. Cloning strategies are detailed to create repertoires of bispecific diabody molecules with variability on one or both of the antigen binding sites. This diabody format, when combined with the power of phage display technology, allows the generation and analysis of thousands of different bispecific molecules. Selection for binding presumably also selects for more stable diabodies. Phage diabody libraries enable screening or selection of the best combination bispecific molecule with regards to affinity of binding, epitope recognition and pairing before manufacture of the best candidate.

  5. Recognizing of stereotypic patterns in epileptic EEG using empirical modes and wavelets

    NASA Astrophysics Data System (ADS)

    Grubov, V. V.; Sitnikova, E.; Pavlov, A. N.; Koronovskii, A. A.; Hramov, A. E.

    2017-11-01

    Epileptic activity in the form of spike-wave discharges (SWD) appears in the electroencephalogram (EEG) during absence seizures. This paper evaluates two approaches for detecting stereotypic rhythmic activities in EEG, i.e., the continuous wavelet transform (CWT) and the empirical mode decomposition (EMD). The CWT is a well-known method of time-frequency analysis of EEG, whereas EMD is a relatively novel approach for extracting a signal's constituent waveforms. A new method for pattern recognition based on a combination of CWT and EMD is proposed. It was found that this combined approach resulted in a sensitivity of 86.5% and a specificity of 92.9% for sleep spindles, and 97.6% and 93.2% for SWD, respectively. Considering the strong within- and between-subject variability of sleep spindles, the obtained detection efficiency is high in comparison with other methods based on CWT. It is concluded that the combination of a wavelet-based approach and empirical modes increases the quality of automatic detection of stereotypic patterns in rat EEG.

  6. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    PubMed

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Huichao; Shi, Jianhong; Liu, Xialin; Niu, Zhouzhou; Zeng, Guihua

    2018-04-01

    Single-pixel imaging has emerged over recent years as a novel imaging technique with significant application prospects. In this paper, we propose and experimentally demonstrate a scheme that achieves single-pixel non-imaging object recognition by acquiring the Fourier spectrum. In the experiment, four-step phase-shifting sinusoidal illumination is used to irradiate the object image, the light intensity is measured with a single-pixel detection unit, and the Fourier coefficients of the object image are obtained by a differential measurement. The Fourier coefficients are then cast into binary numbers to obtain the hash value; we propose a new perceptual hashing algorithm, combined with the discrete Fourier transform, to calculate it. The hash distance is obtained by calculating the difference between the hash values of the object image and the comparison images. By setting an appropriate threshold, the object image can be quickly and accurately recognized. The proposed scheme realizes single-pixel non-imaging perceptual-hashing object recognition using fewer measurements. Our result might open a new path for realizing object recognition without imaging.
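
    A minimal sketch of the hashing and matching steps, assuming the low-order Fourier coefficients have already been recovered from the single-pixel measurements; the median-threshold binarisation is an illustrative choice, not necessarily the paper's rule:

      import numpy as np

      def fourier_hash(coeffs):
          """Binarise Fourier magnitude coefficients against their median to obtain
          a perceptual-hash bit string."""
          mags = np.abs(np.asarray(coeffs).ravel())
          return (mags > np.median(mags)).astype(np.uint8)

      def hash_distance(h1, h2):
          """Hamming distance between two hashes; below a chosen threshold the
          measured object is declared a match to the reference image."""
          return int(np.sum(h1 != h2))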

  8. Complete Vision-Based Traffic Sign Recognition Supported by an I2V Communication System

    PubMed Central

    García-Garrido, Miguel A.; Ocaña, Manuel; Llorca, David F.; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance. PMID:22438704

  9. Complete vision-based traffic sign recognition supported by an I2V communication system.

    PubMed

    García-Garrido, Miguel A; Ocaña, Manuel; Llorca, David F; Arroyo, Estefanía; Pozuelo, Jorge; Gavilán, Miguel

    2012-01-01

    This paper presents a complete traffic sign recognition system based on a vision sensor onboard a moving vehicle which detects and recognizes up to one hundred of the most important road signs, including circular and triangular signs. A restricted Hough transform is used as the detection method from the information extracted in contour images, while the proposed recognition system is based on Support Vector Machines (SVM). A novel solution to the problem of discarding detected signs that do not pertain to the host road is proposed. For that purpose infrastructure-to-vehicle (I2V) communication and a stereo vision sensor are used. Furthermore, the outputs provided by the vision sensor and the data supplied by the CAN Bus and a GPS sensor are combined to obtain the global position of the detected traffic signs, which is used to identify a traffic sign in the I2V communication. This paper presents extensive tests in real driving conditions, both day and night, in which an average detection rate over 95% and an average recognition rate around 93% were obtained with an average runtime of 35 ms that allows real-time performance.

  10. Design and application of pulse information acquisition and analysis system with dynamic recognition in traditional Chinese medicine.

    PubMed

    Zhang, Jian; Niu, Xin; Yang, Xue-zhi; Zhu, Qing-wen; Li, Hai-yan; Wang, Xuan; Zhang, Zhi-guo; Sha, Hong

    2014-09-01

    To design a pulse information acquisition and analysis system with dynamic recognition covering the parameters of pulse position, pulse rate, pulse shape and pulse force, and to study the digitization and visualization of some common cardiovascular mechanisms of the single pulse. Flexible sensors were used to capture the radial artery pressure pulse wave, and high-frequency B-mode ultrasound scanning was used to synchronously obtain information on radial extension and axial movement in the form of dynamic images; the gathered information was then analyzed and processed together with the ECG. Finally, a pulse information acquisition and analysis system with the features of visualization and dynamic recognition was established and applied to ten healthy adults. The new system overcomes the limitations of the one-dimensional pulse information acquisition and processing methods commonly used in current research on pulse diagnosis in traditional Chinese medicine, and initiates a new way of pulse diagnosis with the features of dynamic recognition, two-dimensional information acquisition, multiplexed signal combination and deep data mining. The newly developed system can translate pulse signals into digital, visual and measurable motion information of the vessel.

  11. Spoof Detection for Finger-Vein Recognition System Using NIR Camera.

    PubMed

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-10-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods.
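
    A brief sketch of the post-processing stage described above, assuming CNN feature vectors have already been extracted for each image; scikit-learn's PCA and SVC stand in for the paper's dimensionality reduction and classification steps, and the component count is an illustrative value:

      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      # cnn_train / cnn_test: arrays of CNN feature vectors (one row per image);
      # y_train: 1 for live finger-vein images, 0 for presentation-attack images.
      def train_pad_classifier(cnn_train, y_train, n_components=100):
          clf = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
          clf.fit(cnn_train, y_train)
          return clf

      # usage: decisions = train_pad_classifier(cnn_train, y_train).predict(cnn_test)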

  12. Spoof Detection for Finger-Vein Recognition System Using NIR Camera

    PubMed Central

    Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung

    2017-01-01

    Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods. PMID:28974031

  13. An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images.

    PubMed

    Gumaei, Abdu; Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed

    2018-05-15

    Among several palmprint feature extraction methods the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract the palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images by David Zhang's method to segment only the region of interests. Next, we extracted palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) was utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) was applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. Experimentally, the results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used.

  14. Object Recognition in Mental Representations: Directions for Exploring Diagnostic Features through Visual Mental Imagery.

    PubMed

    Roldan, Stephanie M

    2017-01-01

    One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation.

  15. Object Recognition in Mental Representations: Directions for Exploring Diagnostic Features through Visual Mental Imagery

    PubMed Central

    Roldan, Stephanie M.

    2017-01-01

    One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation. PMID:28588538

  16. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In remote sensing image processing, image segmentation is a preliminary step for subsequent analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing and its underlying ideas have prevailed. The core of the approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper studies and improves that algorithm: existing segmentation algorithms are analyzed and the watershed algorithm is selected as the optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining it with a heterogeneity parameter. Several experiments are carried out which show that the modified FNEA algorithm yields better segmentation results than a traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
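
    A short sketch of the watershed initialization that precedes the FNEA-style region merging, using a standard gradient-based watershed from scikit-image; the marker rule below is an illustrative choice, not the paper's:

      import numpy as np
      from scipy import ndimage
      from skimage.filters import sobel
      from skimage.segmentation import watershed

      def initial_segments(gray):
          """Watershed over-segmentation used as the starting partition for the
          modified FNEA region merging."""
          gradient = sobel(gray)                               # edge strength image
          # markers: connected components of low-gradient (homogeneous) pixels
          markers, _ = ndimage.label(gradient < gradient.mean())
          return watershed(gradient, markers)

    FNEA would then iteratively merge adjacent segments while the weighted increase in heterogeneity (including the modified area term) stays below the chosen scale parameter.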

  17. Recognition of chemical entities: combining dictionary-based and grammar-based approaches.

    PubMed

    Akhondi, Saber A; Hettne, Kristina M; van der Horst, Eelke; van Mulligen, Erik M; Kors, Jan A

    2015-01-01

    The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition, outperforming any of the individual systems that we considered. The system is able to provide structure information for most of the compounds that are found. Improved tokenization and better recognition of specific entity types is likely to further improve system performance.
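
    A minimal sketch of how spans from the dictionary-based, grammar-based and regular-expression recognizers could be pooled; the overlap-resolution rule shown is a simplification for illustration, not the system's actual logic:

      def merge_annotations(dictionary_hits, grammar_hits, regex_hits):
          """Union the (start, end, text) character-offset spans produced by the
          three recognisers, keeping the longer span when two overlap."""
          spans = sorted(set(dictionary_hits) | set(grammar_hits) | set(regex_hits))
          merged = []
          for start, end, text in spans:
              if merged and start < merged[-1][1]:             # overlaps previous span
                  if end - start > merged[-1][1] - merged[-1][0]:
                      merged[-1] = (start, end, text)          # keep the longer mention
              else:
                  merged.append((start, end, text))
          return merged

    Each merged mention would then receive the term confidence score described above before document-level ranking.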

  18. Recognition of chemical entities: combining dictionary-based and grammar-based approaches

    PubMed Central

    2015-01-01

    Background The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. Results The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. Conclusions We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition, outperforming any of the individual systems that we considered. The system is able to provide structure information for most of the compounds that are found. Improved tokenization and better recognition of specific entity types is likely to further improve system performance. PMID:25810767

  19. Improved localization of cellular membrane receptors using combined fluorescence microscopy and simultaneous topography and recognition imaging

    NASA Astrophysics Data System (ADS)

    Duman, M.; Pfleger, M.; Zhu, R.; Rankl, C.; Chtcheglova, L. A.; Neundlinger, I.; Bozna, B. L.; Mayer, B.; Salio, M.; Shepherd, D.; Polzella, P.; Moertelmaier, M.; Kada, G.; Ebner, A.; Dieudonne, M.; Schütz, G. J.; Cerundolo, V.; Kienberger, F.; Hinterdorfer, P.

    2010-03-01

    The combination of fluorescence microscopy and atomic force microscopy has a great potential in single-molecule-detection applications, overcoming many of the limitations coming from each individual technique. Here we present a new platform of combined fluorescence and simultaneous topography and recognition imaging (TREC) for improved localization of cellular receptors. Green fluorescent protein (GFP) labeled human sodium-glucose cotransporter (hSGLT1) expressed Chinese Hamster Ovary (CHO) cells and endothelial cells (MyEnd) from mouse myocardium stained with phalloidin-rhodamine were used as cell systems to study AFM topography and fluorescence microscopy on the same surface area. Topographical AFM images revealed membrane features such as lamellipodia, cytoskeleton fibers, F-actin filaments and small globular structures with heights ranging from 20 to 30 nm. Combined fluorescence and TREC imaging was applied to detect density, distribution and localization of YFP-labeled CD1d molecules on α-galactosylceramide (αGalCer)-loaded THP1 cells. While the expression level, distribution and localization of CD1d molecules on THP1 cells were detected with fluorescence microscopy, the nanoscale distribution of binding sites was investigated with molecular recognition imaging by using a chemically modified AFM tip. Using TREC on the inverted light microscope, the recognition sites of cell receptors were detected in recognition images with domain sizes ranging from ~ 25 to ~ 160 nm, with the smaller domains corresponding to a single CD1d molecule.

  20. Improved localization of cellular membrane receptors using combined fluorescence microscopy and simultaneous topography and recognition imaging.

    PubMed

    Duman, M; Pfleger, M; Zhu, R; Rankl, C; Chtcheglova, L A; Neundlinger, I; Bozna, B L; Mayer, B; Salio, M; Shepherd, D; Polzella, P; Moertelmaier, M; Kada, G; Ebner, A; Dieudonne, M; Schütz, G J; Cerundolo, V; Kienberger, F; Hinterdorfer, P

    2010-03-19

    The combination of fluorescence microscopy and atomic force microscopy has a great potential in single-molecule-detection applications, overcoming many of the limitations coming from each individual technique. Here we present a new platform of combined fluorescence and simultaneous topography and recognition imaging (TREC) for improved localization of cellular receptors. Green fluorescent protein (GFP) labeled human sodium-glucose cotransporter (hSGLT1) expressed Chinese Hamster Ovary (CHO) cells and endothelial cells (MyEnd) from mouse myocardium stained with phalloidin-rhodamine were used as cell systems to study AFM topography and fluorescence microscopy on the same surface area. Topographical AFM images revealed membrane features such as lamellipodia, cytoskeleton fibers, F-actin filaments and small globular structures with heights ranging from 20 to 30 nm. Combined fluorescence and TREC imaging was applied to detect density, distribution and localization of YFP-labeled CD1d molecules on alpha-galactosylceramide (alphaGalCer)-loaded THP1 cells. While the expression level, distribution and localization of CD1d molecules on THP1 cells were detected with fluorescence microscopy, the nanoscale distribution of binding sites was investigated with molecular recognition imaging by using a chemically modified AFM tip. Using TREC on the inverted light microscope, the recognition sites of cell receptors were detected in recognition images with domain sizes ranging from approximately 25 to approximately 160 nm, with the smaller domains corresponding to a single CD1d molecule.

  1. Authorship Attribution of Short Messages Using Multimodal Features

    DTIC Science & Technology

    2011-03-01


  2. Applying Gradient Descent in Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN), is introduced. Understanding its theory and structure is of great significance for every scholar interested in this field. A convolutional neural network is an artificial neural network that combines the mathematical operation of convolution with a neural network. The hierarchical structure of a CNN gives it reliable computational speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Combined with the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-train and learn deeply. Basically, BP provides backward feedback for enhancing reliability, and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary in the end.
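
    A minimal PyTorch sketch of a CNN trained with gradient descent, in which loss.backward() performs back-propagation and the SGD step applies gradient descent; the layer sizes and learning rate are illustrative values for 28x28 grayscale inputs, not the paper's configuration:

      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Flatten(), nn.Linear(16 * 7 * 7, 10),
      )
      optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
      loss_fn = nn.CrossEntropyLoss()

      def train_step(images, labels):
          optimizer.zero_grad()
          loss = loss_fn(model(images), labels)
          loss.backward()        # back-propagation: gradients of the loss w.r.t. all weights
          optimizer.step()       # gradient descent: move the weights against the gradient
          return loss.item()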

  3. Optimal frame-by-frame result combination strategy for OCR in video stream

    NASA Astrophysics Data System (ADS)

    Bulatov, Konstantin; Lynchenko, Aleksander; Krivtsov, Valeriy

    2018-04-01

    This paper describes the problem of combining the classification results of multiple observations of one object. The task can be regarded as a particular case of decision-making using a combination of expert votes with calculated weights. The accuracy of various methods of combining the classification results, depending on different models of the input data, is investigated using the example of frame-by-frame character recognition in a video stream. It is shown experimentally that the strategy of choosing the single most competent expert has an advantage when the input data contain no irrelevant observations (here, irrelevant means affected by character localization and segmentation errors). At the same time, this work demonstrates the advantage of combining the several most competent experts according to the multiplication rule or voting when irrelevant samples are present in the input data.
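
    A short sketch of the combination strategies compared above, assuming per-frame class probability vectors are available for one character position; the function names and the tie-breaking details are illustrative:

      import numpy as np

      def combine_frames(frame_probs, rule="product"):
          """frame_probs: (n_frames, n_classes) per-frame class probabilities for one
          character position. Returns the index of the combined class decision."""
          frame_probs = np.asarray(frame_probs, dtype=float)
          if rule == "product":            # multiplication rule over all frames
              combined = np.prod(frame_probs + 1e-12, axis=0)
          elif rule == "best_expert":      # single most competent (most confident) frame
              combined = frame_probs[np.argmax(frame_probs.max(axis=1))]
          else:                            # plain voting over per-frame decisions
              votes = np.bincount(frame_probs.argmax(axis=1), minlength=frame_probs.shape[1])
              combined = votes.astype(float)
          return int(np.argmax(combined))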

  4. [A novel method of multi-channel feature extraction combining multivariate autoregression and multiple-linear principal component analysis].

    PubMed

    Wang, Jinjia; Zhang, Yanna

    2015-02-01

    Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines a multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. First, the MVAR model coefficient matrix of the MEG/EEG signals is calculated, and its dimensionality is then reduced using MPCA. Finally, the brain signals are recognized with a Bayes classifier. The key innovation of this investigation is the extension of the traditional single-channel feature extraction method to the multichannel case. Experiments were carried out using the data groups IV-III and IV-I. The experimental results show that the method proposed in this paper is feasible.
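
    A compact sketch of the pipeline, assuming each trial arrives as a samples-by-channels array; ordinary least squares fits the MVAR coefficients, and a plain PCA plus Gaussian naive Bayes stand in for the MPCA and Bayes-classifier steps of the paper:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.naive_bayes import GaussianNB

      def mvar_features(trial, order=4):
          """Fit an order-p multivariate AR model to one trial (n_samples x n_channels)
          by least squares and return the flattened coefficient matrix."""
          X = np.asarray(trial, dtype=float)
          n, c = X.shape
          past = np.hstack([X[order - k - 1:n - k - 1] for k in range(order)])  # lagged regressors
          target = X[order:]
          coeffs, *_ = np.linalg.lstsq(past, target, rcond=None)   # (order*c, c) coefficients
          return coeffs.ravel()

      def train(trials, labels, n_components=20):
          feats = np.vstack([mvar_features(t) for t in trials])
          pca = PCA(n_components=n_components).fit(feats)          # stand-in for MPCA
          clf = GaussianNB().fit(pca.transform(feats), labels)     # Bayes classifier
          return pca, clf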

  5. Document Form and Character Recognition using SVM

    NASA Astrophysics Data System (ADS)

    Park, Sang-Sung; Shin, Young-Geun; Jung, Won-Kyo; Ahn, Dong-Kyu; Jang, Dong-Sik

    2009-08-01

    With the development of computers and information communication, EDI (Electronic Data Interchange) has been growing. OCR (Optical Character Recognition), a pattern recognition technology, supports EDI. OCR has helped turn much formerly manual work into automated processing. However, building a more complete document database still requires considerable manual work to exclude unnecessary recognition results. To resolve this problem, we propose a document-form-based character recognition method in this study. The proposed method is divided into a document form recognition part and a character recognition part. In particular, for character recognition, characters are binarized and an SVM algorithm is used to extract more accurate feature values.

  6. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    NASA Astrophysics Data System (ADS)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique is being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.

  7. Emotion recognition from EEG using higher order crossings.

    PubMed

    Petrantonakis, Panagiotis C; Hadjileontiadis, Leontios J

    2010-03-01

    Electroencephalogram (EEG)-based emotion recognition is a relatively new field in the affective computing area with challenging issues regarding the induction of the emotional states and the extraction of the features in order to achieve optimum classification performance. In this paper, a novel emotion evocation and EEG-based feature extraction technique is presented. In particular, the mirror neuron system concept was adapted to efficiently foster emotion induction by the process of imitation. In addition, higher order crossings (HOC) analysis was employed for the feature extraction scheme and a robust classification method, namely HOC-emotion classifier (HOC-EC), was implemented testing four different classifiers [quadratic discriminant analysis (QDA), k-nearest neighbor, Mahalanobis distance, and support vector machines (SVMs)], in order to accomplish efficient emotion recognition. Through a series of facial expression image projection, EEG data have been collected by 16 healthy subjects using only 3 EEG channels, namely Fp1, Fp2, and a bipolar channel of F3 and F4 positions according to 10-20 system. Two scenarios were examined using EEG data from a single-channel and from combined-channels, respectively. Compared with other feature extraction methods, HOC-EC appears to outperform them, achieving a 62.3% (using QDA) and 83.33% (using SVM) classification accuracy for the single-channel and combined-channel cases, respectively, differentiating among the six basic emotions, i.e., happiness, surprise, anger, fear, disgust, and sadness. As the emotion class-set reduces its dimension, the HOC-EC converges toward maximum classification rate (100% for five or less emotions), justifying the efficiency of the proposed approach. This could facilitate the integration of HOC-EC in human machine interfaces, such as pervasive healthcare systems, enhancing their affective character and providing information about the user's emotional status (e.g., identifying user's emotion experiences, recurring affective states, time-dependent emotional trends).
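
    A minimal sketch of the higher order crossings feature for a single EEG channel; for the combined-channel case the per-channel vectors would simply be concatenated, and the order used here is an illustrative value:

      import numpy as np

      def hoc_features(eeg, order=10):
          """Higher order crossings: the number of zero crossings of the zero-mean
          signal after k successive difference operations, for k = 0..order-1."""
          x = np.asarray(eeg, dtype=float) - np.mean(eeg)
          feats = []
          for _ in range(order):
              signs = np.sign(x)
              signs[signs == 0] = 1
              feats.append(int(np.sum(signs[1:] != signs[:-1])))  # zero-crossing count
              x = np.diff(x)                                      # next-order difference
          return np.array(feats)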

  8. Speaker-independent phoneme recognition with a binaural auditory image model

    NASA Astrophysics Data System (ADS)

    Francis, Keith Ivan

    1997-09-01

    This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence- based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.

  9. Detection of sunn pest-damaged wheat samples using visible/near-infrared spectroscopy based on pattern recognition.

    PubMed

    Basati, Zahra; Jamshidi, Bahareh; Rasekh, Mansour; Abbaspour-Gilandeh, Yousef

    2018-05-30

    The presence of sunn pest-damaged grains in a wheat mass reduces the quality of the flour and bread produced from it. Therefore, it is essential to assess the quality of the samples in wheat collection and storage centers and in flour mills. In this research, the capability of visible/near-infrared (Vis/NIR) spectroscopy combined with pattern recognition methods was investigated for discriminating wheat samples with different percentages of sunn pest-damaged grains. To this end, various samples belonging to five classes (healthy and 5%, 10%, 15% and 20% unhealthy) were analyzed using Vis/NIR spectroscopy (wavelength range of 350-1000 nm) based on both supervised and unsupervised pattern recognition methods. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were used as the unsupervised techniques, and soft independent modeling of class analogies (SIMCA) and partial least squares-discriminant analysis (PLS-DA) as the supervised methods. The results showed that Vis/NIR spectra of healthy samples were correctly clustered using both PCA and HCA. Due to the high overlap between the four unhealthy classes (5%, 10%, 15% and 20%), it was not possible to discriminate all the unhealthy samples into individual classes. However, when considering only the two main categories of healthy and unhealthy, an acceptable degree of separation between the classes can be obtained after classification with the supervised pattern recognition methods SIMCA and PLS-DA. SIMCA based on PCA modeling correctly classified samples into the two classes of healthy and unhealthy with a classification accuracy of 100%. Moreover, the wavelengths of 839 nm, 918 nm and 995 nm had greater discriminative power than other wavelengths for the two classes of healthy and unhealthy. It was also concluded that PLS-DA provides excellent classification of healthy and unhealthy samples (R2 = 0.973 and RMSECV = 0.057). Therefore, Vis/NIR spectroscopy based on pattern recognition techniques can be useful for rapidly distinguishing healthy wheat samples from those damaged by sunn pest in storage and processing centers. Copyright © 2018 Elsevier B.V. All rights reserved.
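
    A brief sketch of the PLS-DA step, assuming a spectra matrix and class labels are available; scikit-learn's PLSRegression applied to one-hot class indicators is used as an illustrative implementation, and the component count is an assumed value:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def train_plsda(spectra, labels, n_components=10):
          """PLS-DA: regress one-hot class indicators on the spectra; a new sample is
          assigned to the class with the largest predicted indicator value."""
          labels = np.asarray(labels)
          classes = np.unique(labels)
          Y = (labels[:, None] == classes[None, :]).astype(float)   # one-hot targets
          pls = PLSRegression(n_components=n_components).fit(spectra, Y)
          return pls, classes

      def predict_plsda(pls, classes, spectra):
          return classes[np.argmax(pls.predict(spectra), axis=1)]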

  10. Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech

    PubMed Central

    Cao, Houwei; Verma, Ragini; Nenkova, Ani

    2014-01-01

    We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotion and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore on the spontaneous data the ranking and standard classification are complementary and we obtain marked improvement when we combine the two classifiers by late-stage fusion. PMID:25422534

  11. Speaker-sensitive emotion recognition via ranking: Studies on acted and spontaneous speech☆

    PubMed

    Cao, Houwei; Verma, Ragini; Nenkova, Ani

    2015-01-01

    We introduce a ranking approach for emotion recognition which naturally incorporates information about the general expressivity of speakers. We demonstrate that our approach leads to substantial gains in accuracy compared to conventional approaches. We train ranking SVMs for individual emotions, treating the data from each speaker as a separate query, and combine the predictions from all rankers to perform multi-class prediction. The ranking method provides two natural benefits. It captures speaker-specific information even in speaker-independent training/testing conditions. It also incorporates the intuition that each utterance can express a mix of possible emotions and that considering the degree to which each emotion is expressed can be productively exploited to identify the dominant emotion. We compare the performance of the rankers and their combination to standard SVM classification approaches on two publicly available datasets of acted emotional speech, Berlin and LDC, as well as on spontaneous emotional data from the FAU Aibo dataset. On acted data, ranking approaches exhibit significantly better performance compared to SVM classification both in distinguishing a specific emotion from all others and in multi-class prediction. On the spontaneous data, which contains mostly neutral utterances with a relatively small portion of less intense emotional utterances, ranking-based classifiers again achieve much higher precision in identifying emotional utterances than conventional SVM classifiers. In addition, we discuss the complementarity of conventional SVM and ranking-based classifiers. On all three datasets we find dramatically higher accuracy for the test items on whose prediction the two methods agree compared to the accuracy of individual methods. Furthermore, on the spontaneous data, the ranking and standard classification are complementary, and we obtain a marked improvement when we combine the two classifiers by late-stage fusion.

  12. The prediction of human exons by oligonucleotide composition and discriminant analysis of spliceable open reading frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solovyev, V.V.; Salamov, A.A.; Lawrence, C.B.

    1994-12-31

    Discriminant analysis is applied to the problem of recognizing 5'-, internal and 3'-exons in human DNA sequences. Specific recognition functions were developed for revealing exons of particular types. The method is based on a splice site prediction algorithm that uses the linear Fisher discriminant to combine information about significant triplet frequencies of various functional parts of splice site regions and oligonucleotide preferences in protein-coding and intron regions. The accuracy of our splice site recognition function is about 97%. A discriminant function for 5'-exon prediction includes the hexanucleotide composition of the upstream region, the triplet composition around the ATG codon, the ORF coding potential, the donor splice site potential and the composition of the downstream intron region. For internal exon prediction, we combine in a discriminant function the characteristics describing the 5'-intron region, donor splice site, coding region, acceptor splice site and 3'-intron region for each open reading frame flanked by GT and AG base pairs. The accuracy of precise internal exon recognition on a test set of 451 exon and 246693 pseudoexon sequences is 77%, with a specificity of 79% and a level of pseudoexon ORF prediction of 99.96%. The recognition quality computed at the level of individual nucleotides is 89% for exon sequences and 98% for intron sequences. A discriminant function for 3'-exon prediction includes the octanucleotide composition of the upstream intron region, the triplet composition around the stop codon, the ORF coding potential, the acceptor splice site potential and the hexanucleotide composition of the downstream region. We unite these three discriminant functions in the exon-predicting program FEX (find exons). FEX exactly predicts 70% of 1016 exons from the test set of 181 complete genes with a specificity of 73%, and 89% of exons are exactly or partially predicted. On average, 85% of nucleotides were predicted accurately with a specificity of 91%.
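    The core combination step, feeding several per-exon descriptors into a linear Fisher discriminant, could be sketched as follows; the feature set and synthetic values are placeholders and not the FEX program's actual inputs.

      # Sketch: combine several exon descriptors (e.g. donor-site score, acceptor-site
      # score, ORF coding potential, intron-composition scores) with a linear Fisher
      # discriminant. The feature values below are synthetic placeholders.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)
      n_exons, n_pseudo = 450, 5000
      # Columns: donor score, acceptor score, ORF coding potential, 5'- and 3'-intron composition
      exons = rng.normal(loc=1.0, scale=1.0, size=(n_exons, 5))
      pseudo = rng.normal(loc=0.0, scale=1.0, size=(n_pseudo, 5))

      X = np.vstack([exons, pseudo])
      y = np.array([1] * n_exons + [0] * n_pseudo)    # 1 = true internal exon

      lda = LinearDiscriminantAnalysis().fit(X, y)    # linear Fisher discriminant
      posterior = lda.predict_proba(X)[:, 1]
      print("sensitivity:", np.mean(posterior[y == 1] > 0.5))
      print("specificity:", np.mean(posterior[y == 0] <= 0.5))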

  13. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

    The growing popularity of new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional, non-expensive LDR displays. These methods can produce completely different renderings, raising several concerns about privacy intrusion. In fact, some visualization methods result in perceptual recognition of the individuals, while others do not reveal any identity. Although perceptual recognition might be possible, a natural question that arises is how computer-based recognition will perform on tone-mapping-generated images. In this paper, a study is presented in which automatic face recognition using sparse representation is tested with images that result from common tone mapping operators applied to HDR images. Its ability to recognize face identity is described. Furthermore, typical LDR images are used for face recognition training.

  14. Postprocessing for character recognition using pattern features and linguistic information

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Takatoshi; Okamoto, Masayosi; Horii, Hiroshi

    1993-04-01

    We propose a new method of post-processing for character recognition using pattern features and linguistic information. This method corrects errors in the recognition of handwritten Japanese sentences containing Kanji characters. The post-processing method is characterized by using two types of character recognition. Improving the character recognition rate for Japanese characters is made difficult by the large number of characters and the existence of characters with similar patterns. Therefore, it is not practical for a character recognition system to recognize all characters in detail. First, this post-processing method generates a candidate character table by recognizing the simplest features of characters. Then, it selects words corresponding to the characters in the candidate character table by referring to a word and grammar dictionary and chooses the most suitable words. If the correct character is included in the candidate character table, this process can correct an error; however, if the character is not included, it cannot. Therefore, this method uses linguistic information (the word and grammar dictionary) to presume a character that does not appear in the candidate character table, and then verifies the presumed character by character recognition using complex features. When this method is applied to an online character recognition system, the accuracy of character recognition improves from 93.5% to 94.7%. This proved to be the case when it was used for the editorials of a Japanese newspaper (Asahi Shinbun).

  15. Effects of WDRC release time and number of channels on output SNR and speech recognition

    PubMed Central

    Alexander, Joshua M.; Masterson, Katie

    2014-01-01

    Objectives The purpose of this study was to investigate the joint effects that wide dynamic range compression (WDRC) release time (RT) and number of channels have on recognition of sentences in the presence of steady and modulated maskers at different signal-to-noise ratios (SNRs). How the different combinations of WDRC parameters affect output SNR and the role this plays in the observed findings was also investigated. Design Twenty-four listeners with mild to moderate sensorineural hearing loss identified sentences mixed with steady or modulated maskers at 3 SNRs (−5, 0, +5 dB) that had been processed using a hearing aid simulator with 6 combinations of RT (40 and 640 ms) and number of channels (4, 8, and 16). Compression parameters were set using the Desired Sensation Level v5.0a prescriptive fitting method. For each condition, amplified speech and masker levels and the resultant long-term output SNR were measured. Results Speech recognition with WDRC depended on the combination of RT and number of channels, with the greatest effects observed at 0 dB input SNR, in which mean speech recognition scores varied by 10–12% across WDRC manipulations. Overall, effect sizes were generally small. Across both masker types and the three SNRs tested, the best speech recognition was obtained with 8 channels, regardless of RT. Increased speech levels, which favor audibility, were associated with the short RT and with an increase in the number of channels. These same conditions also increased masker levels by an even greater amount, for a net decrease in the long-term output SNR. Changes in long-term SNR across WDRC conditions were found to be strongly associated with changes in the temporal envelope shape as quantified by the Envelope Difference Index; however, neither of these factors fully explained the observed differences in speech recognition. Conclusions A primary finding of this study was that the number of channels had a modest effect when analyzed at each level of RT, with results suggesting that selecting 8 channels for a given RT might be the safest choice. Effects were smaller for RT, with results suggesting that short RT was slightly better when only 4 channels were used and that long RT was better when 16 channels were used. Individual differences in how listeners were influenced by audibility, output SNR, temporal distortion, and spectral distortion may have contributed to the size of the effects found in this study. Because only general suppositions could be made about how each of these factors may have influenced the overall results of this study, future research would benefit from exploring the predictive value of these and other factors in selecting the processing parameters that maximize speech recognition for individuals. PMID:25470368

  16. Activity Recognition in Egocentric video using SVM, kNN and Combined SVMkNN Classifiers

    NASA Astrophysics Data System (ADS)

    Sanal Kumar, K. P.; Bhavani, R., Dr.

    2017-08-01

    Egocentric vision is a unique, human-centric perspective in computer vision. The recognition of egocentric actions is a challenging task that helps in assisting elderly people, disabled patients and others. In this work, life-logging activity videos are taken as input. The activities are organized in two category levels: a top level and a second level. Recognition is done using features such as Histogram of Oriented Gradients (HOG), Motion Boundary Histogram (MBH) and Trajectory. The features are fused together to act as a single feature vector. The fused features are reduced using Principal Component Analysis (PCA) and provided as input to classifiers: Support Vector Machine (SVM), k-nearest neighbor (kNN) and a combined SVM and kNN classifier (combined SVMkNN). These classifiers are evaluated, and the combined SVMkNN provides better results than the other classifiers in the literature.
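    A hedged sketch of the pipeline outlined above (early fusion of HOG/MBH/trajectory descriptors, PCA reduction, and an SVM-kNN combination); here the combination is approximated by soft voting in scikit-learn, which may differ from the authors' fusion rule, and all data are synthetic.

      # Sketch: concatenate HOG/MBH/trajectory descriptors, reduce with PCA,
      # then combine SVM and kNN by soft voting. Random arrays stand in for
      # real descriptors of egocentric video clips.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.ensemble import VotingClassifier
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_clips = 120
      hog = rng.normal(size=(n_clips, 96))
      mbh = rng.normal(size=(n_clips, 192))
      traj = rng.normal(size=(n_clips, 30))
      X = np.hstack([hog, mbh, traj])                  # early fusion of the three features
      y = rng.integers(0, 4, size=n_clips)             # activity labels

      svm_knn = VotingClassifier(
          estimators=[("svm", SVC(kernel="rbf", probability=True)),
                      ("knn", KNeighborsClassifier(n_neighbors=5))],
          voting="soft")
      model = make_pipeline(PCA(n_components=40), svm_knn)
      model.fit(X, y)
      print(model.predict(X[:5]))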

  17. Discrimination of geographical origin and detection of adulteration of kudzu root by fluorescence spectroscopy coupled with multi-way pattern recognition

    NASA Astrophysics Data System (ADS)

    Hu, Leqian; Ma, Shuai; Yin, Chunling

    2018-03-01

    In this work, fluorescence spectroscopy combined with multi-way pattern recognition techniques was developed for determining the geographical origin of kudzu root and for detecting and quantifying adulterants in kudzu root. Excitation-emission (EEM) spectra were obtained for 150 pure kudzu root samples of different geographical origins and 150 fake kudzu root samples with different adulteration proportions by recording emission from 330 to 570 nm with excitation in the range of 320-480 nm. Multi-way principal component analysis (M-PCA) and multilinear partial least squares discriminant analysis (N-PLS-DA) methods were used to decompose the excitation-emission matrix datasets. The 150 pure kudzu root samples could be differentiated exactly from each other according to their geographical origins by the M-PCA and N-PLS-DA models. For the adulterated kudzu root samples, N-PLS-DA gave better and more reliable classification results compared with the M-PCA model. The results obtained in this study indicate that EEM spectroscopy coupled with multi-way pattern recognition can be used as an easy, rapid and novel tool to distinguish the geographical origin of kudzu root and to detect adulterated kudzu root. Moreover, this method is also suitable for determining the geographic origin and detecting the adulteration of other foodstuffs that produce fluorescence.
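    One common way to approximate multi-way PCA on EEM data is to unfold the three-way array before ordinary PCA; the sketch below assumes that simplification and uses invented array sizes and signal, so it is only a stand-in for the M-PCA step described above.

      # Sketch: unfold a three-way EEM array (sample x excitation x emission) into a
      # two-way matrix and apply PCA -- a simple approximation of multi-way PCA.
      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      n_samples, n_ex, n_em = 150, 17, 49        # e.g. excitation 320-480 nm, emission 330-570 nm
      eem = rng.normal(size=(n_samples, n_ex, n_em))
      eem[:75] += 0.5                            # pretend the first 75 samples share an origin

      X = eem.reshape(n_samples, n_ex * n_em)    # unfold the cube sample-wise
      scores = PCA(n_components=3).fit_transform(X)
      print(scores[:5])                          # PC scores used to inspect origin clusters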

  18. Process analysis of recycled thermoplasts from consumer electronics by laser-induced plasma spectroscopy.

    PubMed

    Fink, Herbert; Panne, Ulrich; Niessner, Reinhard

    2002-09-01

    An experimental setup for direct elemental analysis of recycled thermoplasts from consumer electronics by laser-induced plasma spectroscopy (LIPS, or laser-induced breakdown spectroscopy, LIBS) was realized. The combination of an echelle spectrograph, featuring high resolution with broad spectral coverage, with multivariate methods such as PLS, PCR, and variable subset selection via a genetic algorithm resulted in considerable improvements in selectivity and sensitivity for this complex matrix. With normalization to carbon as an internal standard, the limits of detection were in the ppm range. A preliminary pattern recognition study points to the possibility of polymer recognition via the line-rich echelle spectra. Several experiments at an extruder within a recycling plant successfully demonstrated the capability of LIPS for different kinds of routine on-line process analysis.

  19. Harmful Gas Recognition Exploiting a CTL Sensor Array

    PubMed Central

    Wang, Qihui; Xie, Lijun; Zhu, Bo; Zheng, Yao; Cao, Shihua

    2013-01-01

    In this paper, a novel cataluminescence (CTL)-based sensor array consisting of nine types of catalytic materials is developed for the recognition of several harmful gases, namely carbon monoxide, acetone, chloroform and toluene. First, the experimental setup is constructed using sensing nanomaterials, a heating plate, a pneumatic pump, a gas flow meter, a digital temperature device, a camera and a BPCL Ultra Weak Chemiluminescence Analyzer. Then, unique CTL patterns for the four types of harmful gas are obtained from the sensor array. The harmful gases are successfully recognized by the PCA method. The optimal conditions are also investigated. Finally, experimental results show the high sensitivity, long-term stability and good linearity of the sensor array, which, combined with its simplicity, make our system promising for applications in this field. PMID:24113681

  20. Study of electromechanical and mechanical properties of bacteria using force microscopy

    NASA Astrophysics Data System (ADS)

    Reukov, Vladimir; Thompson, Gary; Nikiforov, Maxim; Guo, Senli; Ovchinnikov, Oleg; Jesse, Stephen; Kalinin, Sergei; Vertegel, Alexey

    2010-03-01

    The application of scanning probe microscopy (SPM) to biological systems has evolved over the past decade into a multimodal and spectroscopic instrument that provides multiple information channels at each spatial pixel acquired. Recently, functional recognition imaging based on differing electromechanical properties between Gram negative and Gram positive bacteria was achieved using artificial neural network analysis of band excitation piezoresponse force microscopy (BEPFM) data. The immediate goal of this project was to study the mechanical and electromechanical properties of bacterial systems in physiologically-relevant solutions using band excitation piezoresponse force microscopy (BEPFM) in combination with force mapping. Electromechanical imaging in physiological environments will improve the versatility of functional recognition imaging and open the way for application of the rapid BEPFM line mode method to other living cell systems.

  1. Carbon nuclear magnetic resonance spectroscopic fingerprinting of commercial gasoline: pattern-recognition analyses for screening quality control purposes.

    PubMed

    Flumignan, Danilo Luiz; Boralle, Nivaldo; Oliveira, José Eduardo de

    2010-06-30

    In this work, the combination of carbon nuclear magnetic resonance ((13)C NMR) fingerprinting with pattern-recognition analyses provides an original and alternative approach to screening commercial gasoline quality. Soft Independent Modelling of Class Analogy (SIMCA) was performed on spectroscopic fingerprints to classify representative commercial gasoline samples, selected by Hierarchical Cluster Analysis (HCA) over several months at retail gas stations, into previously quality-defined classes. With the optimized (13)C NMR-SIMCA algorithm, sensitivity values of 99.0% were obtained in the training set, with leave-one-out cross-validation, and 92.0% in the external prediction set. Governmental laboratories could employ this method as a rapid screening analysis to discourage adulteration practices. Copyright 2010 Elsevier B.V. All rights reserved.

  2. A method of classification for multisource data in remote sensing based on interval-valued probabilities

    NASA Technical Reports Server (NTRS)

    Kim, Hakil; Swain, Philip H.

    1990-01-01

    An axiomatic approach to interval-valued (IV) probabilities is presented, in which the IV probability is defined by a pair of set-theoretic functions that satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.

  3. Recognizing Spoken Words: The Neighborhood Activation Model

    PubMed Central

    Luce, Paul A.; Pisoni, David B.

    2012-01-01

    Objective A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing impaired populations of children and adults. PMID:9504270

  4. Bringing Joy through Dance: Community Outreach with the Council for Professional Recognition and the Trey McIntyre Project

    ERIC Educational Resources Information Center

    Dombrink-Green, Meghan

    2011-01-01

    In May 2011, the Council for Professional Recognition and the contemporary ballet troupe the Trey McIntyre Project (TMP) combined their passion for community outreach by bringing TMP dancers to perform for young children who are hospitalized in the Washington, D.C., area. The Council for Professional Recognition, which administers the Child…

  5. Identity Recognition Algorithm Using Improved Gabor Feature Selection of Gait Energy Image

    NASA Astrophysics Data System (ADS)

    Chao, LIANG; Ling-yao, JIA; Dong-cheng, SHI

    2017-01-01

    This paper describes an effective gait recognition approach based on Gabor features of the gait energy image. Kernel Fisher analysis combined with a kernel matrix is proposed to select dominant features. A nearest neighbor classifier based on whitened cosine distance is used to discriminate different gait patterns. The proposed approach is tested on the CASIA and USF gait databases. The results show that our approach outperforms other state-of-the-art gait recognition approaches in terms of recognition accuracy and robustness.
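    A small sketch of the classification stage described above, approximating the whitened cosine distance by PCA whitening followed by a cosine-metric nearest neighbour; the feature vectors, dimensions and identities are invented.

      # Sketch: whiten Gabor-based gait features with PCA, then classify by
      # nearest neighbour under cosine distance. Data are synthetic.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(100, 256))      # Gabor features of gait energy images
      y_train = rng.integers(0, 20, size=100)    # subject identities
      X_test = X_train[:10] + rng.normal(scale=0.1, size=(10, 256))

      whiten = PCA(n_components=50, whiten=True).fit(X_train)
      knn = KNeighborsClassifier(n_neighbors=1, metric="cosine")
      knn.fit(whiten.transform(X_train), y_train)
      print(knn.predict(whiten.transform(X_test)))   # whitened-cosine nearest neighbour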

  6. Ionic Liquid-Hybrid Molecularly Imprinted Material-Filter Solid-Phase Extraction Coupled with HPLC for Determination of 6-Benzyladenine and 4-Chlorophenoxyacetic Acid in Bean Sprouts.

    PubMed

    Han, Yehong; Yang, Chunliu; Zhou, Yang; Han, Dandan; Yan, Hongyuan

    2017-03-01

    A new method involving ionic liquid-hybrid molecularly imprinted material-filter solid-phase extraction coupled to high-performance liquid chromatography (IL-HIM-FSPE-HPLC) was developed for the simultaneous isolation and determination of 6-benzyladenine (6-BA) and 4-chlorophenoxyacetic acid (4-CPA) in bean sprouts. Sample preconcentration was performed using a modified filter, with the new IL-HIM as the adsorbent, which shows double adsorption. The first adsorption involves special recognition of molecular imprinting, and the second involves ion exchange and electrostatic attraction caused by the ionic liquid. This method combines the advantages of ionic liquids, hybrid materials, and molecularly imprinted polymers and was successfully applied to determine 6-BA and 4-CPA in bean sprouts. The adsorption of 6-BA to IL-HIM is based on selective imprinted recognition, whereas the adsorption of 4-CPA is mainly dependent on ion-exchange interactions.

  7. Cognitive object recognition system (CORS)

    NASA Astrophysics Data System (ADS)

    Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy

    2010-04-01

    We have developed a framework, Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research in which multiple recognition algorithms (shape based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. However, those objects that are not easily decomposable into geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The unique interaction between these algorithms is a novel approach that combines the effectiveness of both algorithms and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion - about 35% of object surface is sufficient. Furthermore, geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in the indoor environment.

  8. Hybrid generative-discriminative human action recognition by combining spatiotemporal words with supervised topic models

    NASA Astrophysics Data System (ADS)

    Sun, Hao; Wang, Cheng; Wang, Boliang

    2011-02-01

    We present a hybrid generative-discriminative learning method for human action recognition from video sequences. Our model combines a bag-of-words component with supervised latent topic models. A video sequence is represented as a collection of spatiotemporal words by extracting space-time interest points and describing these points using both shape and motion cues. The supervised latent Dirichlet allocation (sLDA) topic model, which employs discriminative learning using labeled data under a generative framework, is introduced to discover the latent topic structure that is most relevant to action categorization. The proposed algorithm retains most of the desirable properties of generative learning while increasing the classification performance through a discriminative setting. It has also been extended to exploit both labeled data and unlabeled data to learn human actions under a unified framework. We test our algorithm on three challenging data sets: the KTH human motion data set, the Weizmann human action data set, and a ballet data set. Our results are either comparable to or significantly better than previously published results on these data sets and reflect the promise of hybrid generative-discriminative learning approaches.

  9. Developing a hybrid dictionary-based bio-entity recognition technique

    PubMed Central

    2015-01-01

    Background Bio-entity extraction is a pivotal component for information extraction from biomedical literature. Dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. Methods This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques, such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues, in the merging stage of similar entities to further improve the performance. Results The experimental results show that the proposed technique achieves the best or at least equivalent performance in F-measure among the compared techniques, which use GENIA, MESH, UMLS, and combinations of these three resources. Conclusions The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance in precision with the three different dictionaries, whereas the context-only technique achieves high-end performance in recall with the three different dictionaries. PMID:26043907
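    The recall-oriented fuzzy dictionary lookup can be illustrated with a plain Levenshtein distance (the paper uses a shortest-path edit distance variant, so treat this as a simplified stand-in); the dictionary entries are invented.

      # Sketch of dictionary lookup with an edit-distance tolerance, the core idea
      # behind improving recall in dictionary-based NER.
      def edit_distance(a: str, b: str) -> int:
          """Classic Levenshtein distance via dynamic programming."""
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              curr = [i]
              for j, cb in enumerate(b, 1):
                  curr.append(min(prev[j] + 1,                 # deletion
                                  curr[j - 1] + 1,             # insertion
                                  prev[j - 1] + (ca != cb)))   # substitution
              prev = curr
          return prev[-1]

      def match(token: str, dictionary, max_dist: int = 1):
          """Return dictionary entries within max_dist edits of the token."""
          return [entry for entry in dictionary
                  if edit_distance(token.lower(), entry.lower()) <= max_dist]

      bio_dictionary = ["interleukin-2", "p53", "NF-kappaB"]   # toy merged dictionary
      print(match("interleukin2", bio_dictionary))             # fuzzy hit despite the typo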

  10. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method that achieves more accurate facial localization on a specific database; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and images with large head poses; (4) three methods proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  11. Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images

    NASA Astrophysics Data System (ADS)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung

    2010-06-01

    Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
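    A sketch of a difference-of-Gaussians (DOG) feature map with a simple fuzzy-style reliability weight near the binarisation threshold; the sigmas, membership width and the unwrapped-iris input are assumptions, not the paper's tuned parameters.

      # Sketch: DOG feature extraction for a normalised iris strip, followed by a
      # crisp binary code and a soft reliability weight for near-zero responses.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      rng = np.random.default_rng(0)
      iris_strip = rng.random((64, 512))                 # normalised (unwrapped) iris region

      dog = gaussian_filter(iris_strip, sigma=1.0) - gaussian_filter(iris_strip, sigma=2.0)

      # Crisp iris code: sign of the DOG response.
      code = (dog > 0).astype(np.uint8)

      # Fuzzy-style membership: responses close to zero are unreliable and get low
      # weight, which can down-weight noisy bits in a weighted Hamming distance.
      reliability = np.clip(np.abs(dog) / 0.05, 0.0, 1.0)
      print(code.shape, reliability.mean())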

  12. TCRmodel: high resolution modeling of T cell receptors from sequence.

    PubMed

    Gowthaman, Ragul; Pierce, Brian G

    2018-05-22

    T cell receptors (TCRs), along with antibodies, are responsible for specific antigen recognition in the adaptive immune response, and millions of unique TCRs are estimated to be present in each individual. Understanding the structural basis of TCR targeting has implications in vaccine design, autoimmunity, as well as T cell therapies for cancer. Given advances in deep sequencing leading to immune repertoire-level TCR sequence data, fast and accurate modeling methods are needed to elucidate shared and unique 3D structural features of these molecules which lead to their antigen targeting and cross-reactivity. We developed a new algorithm in the program Rosetta to model TCRs from sequence, and implemented this functionality in a web server, TCRmodel. This web server provides an easy to use interface, and models are generated quickly that users can investigate in the browser and download. Benchmarking of this method using a set of nonredundant recently released TCR crystal structures shows that models are accurate and compare favorably to models from another available modeling method. This server enables the community to obtain insights into TCRs of interest, and can be combined with methods to model and design TCR recognition of antigens. The TCRmodel server is available at: http://tcrmodel.ibbr.umd.edu/.

  13. Cell behavior on gallium nitride surfaces: peptide affinity attachment versus covalent functionalization.

    PubMed

    Foster, Corey M; Collazo, Ramon; Sitar, Zlatko; Ivanisevic, Albena

    2013-07-02

    Gallium nitride is a wide band gap semiconductor that demonstrates a unique set of optical and electrical properties as well as aqueous stability and biocompatibility. This combination of properties makes gallium nitride a strong candidate for use in chemical and biological applications such as sensors and neural interfaces. Molecular modification can be used to enhance the functionality and properties of the gallium nitride surface. Here, gallium nitride surfaces were functionalized with a PC12 cell adhesion promoting peptide using covalent and affinity driven attachment methods. The covalent scheme proceeded by Grignard reaction and olefin metathesis while the affinity driven scheme utilized the recognition peptide isolated through phage display. This study shows that the method of attaching the adhesion peptide influences PC12 cell adhesion and differentiation as measured by cell density and morphological analysis. Covalent attachment promoted monolayer and dispersed cell adhesion while affinity driven attachment promoted multilayer cell agglomeration. Higher cell density was observed on surfaces modified using the recognition peptide. The results suggest that the covalent and affinity driven attachment methods are both suitable for promoting PC12 cell adhesion to the gallium nitride surface, though each method may be preferentially suited for distinct applications.

  14. Picking Deep Filter Responses for Fine-Grained Image Recognition (Open Access Author’s Manuscript)

    DTIC Science & Technology

    2016-12-16

    stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive... filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new...positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher

  15. Modeling of biologically motivated self-learning equivalent-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image fragments clustering and recognition

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2018-03-01

    The biologically-motivated self-learning equivalence-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image-fragment clustering and recognition are discussed. We consider these neural structures and their spatially invariant equivalent models (SIEMs), which are based on proposed equivalent two-dimensional functions of image similarity and the corresponding matrix-matrix (or tensor) procedures that use continuous logic and nonlinear processing as basic operations. These SIEMs can simply describe the signal processing during all training and recognition stages, and they are suitable for unipolar-coded multilevel signals. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements of the hidden layers. Therefore, the main model and architecture parameters and characteristics depend on the applied types of non-linear processing and the function used for image comparison or for adaptive-equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-learning and self-identification of features and signs of fragment similarity, and the ability to cluster and recognize image fragments with high efficiency even under strong mutual correlation. The proposed combined learning-recognition clustering method of fragments, which takes their structural features into account, is suitable not only for binary but also for color images, and it combines self-learning with the formation of weighted clustered matrix-patterns. Its model is constructed on the basis of recursive continuous logic and nonlinear processing algorithms together with the k-means method or the winner-takes-all (WTA) method. The experimental results confirm that fragments with a large number of elements may be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment on clustering images of different dimensions (a reference array) and fragments of different dimensions was carried out. The experiments, performed in the Mathcad software environment, showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and confirmed its prospects. Thus, it is very important to understand the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes in neurons, and the neural auto-encoding-decoding and recognition principles that use self-learning cluster patterns, the underlying algorithm, and the principles of non-linear processing of two-dimensional spatial image-comparison functions. The experimental results show that such models can be successfully used for auto- and hetero-associative recognition. They can also be used to explain some mechanisms known as the "reinforcement-inhibition concept". We also demonstrate real model experiments, which confirm that nonlinear processing by the equivalent function allows determining the neuron-winners and adjusting the weight matrix. At the end of the report, we show how to use the obtained results and propose a new, more efficient hardware architecture of SL_EC_RMNS based on matrix-tensor multipliers. We also estimate the parameters and performance of such architectures.

  16. Dynamic and Contextual Information in HMM Modeling for Handwritten Word Recognition.

    PubMed

    Bianne-Bernard, Anne-Laure; Menasri, Farès; Al-Hajj Mohamad, Rami; Mokbel, Chafic; Kermorvant, Christopher; Likforman-Sulem, Laurence

    2011-10-01

    This study aims at building an efficient word recognition system resulting from the combination of three handwriting recognizers. The main component of this combined system is an HMM-based recognizer which considers dynamic and contextual information for a better modeling of writing units. For modeling the contextual units, a state-tying process based on decision tree clustering is introduced. Decision trees are built according to a set of expert-based questions on how characters are written. Questions are divided into global questions, yielding larger clusters, and precise questions, yielding smaller ones. Such clustering enables us to reduce the total number of models and Gaussian densities by 10. We then apply this modeling to the recognition of handwritten words. Experiments are conducted on three publicly available databases based on Latin or Arabic languages: Rimes, IAM, and OpenHart. The results obtained show that contextual information embedded with dynamic modeling significantly improves recognition.

  17. Crowd motion segmentation and behavior recognition fusing streak flow and collectiveness

    NASA Astrophysics Data System (ADS)

    Gao, Mingliang; Jiang, Jun; Shen, Jin; Zou, Guofeng; Fu, Guixia

    2018-04-01

    Crowd motion segmentation and crowd behavior recognition are two hot issues in computer vision. A number of methods have been proposed to tackle these two problems. Among them, flow dynamics is utilized to model the crowd motion, with little consideration of collective properties. Moreover, traditional crowd behavior recognition methods treat local features and dynamic features separately and overlook the interconnection of topological and dynamical heterogeneity in complex crowd processes. A crowd motion segmentation method and a crowd behavior recognition method are proposed based on streak flow and crowd collectiveness. The streak flow is adopted to reveal the dynamical property of crowd motion, and the collectiveness is incorporated to reveal the structural property. Experimental results show that the proposed methods improve the crowd motion segmentation accuracy and the crowd behavior recognition rates compared with the state-of-the-art methods.

  18. Design of a Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle

    PubMed Central

    Chen, Long; Li, Qingquan; Li, Ming; Zhang, Liang; Mao, Qingzhou

    2012-01-01

    This paper describes the environment perception system designed for the intelligent vehicle SmartV-II, which won the 2010 Future Challenge. This system utilizes the cooperation of multiple lasers and cameras to realize several functions necessary for autonomous navigation: road curb detection, lane detection and traffic sign recognition. Multiple single-scan lasers are integrated to detect the road curb based on the Z-variance method. Vision-based lane detection is realized by a two-scan method combined with an image model. A Haar-like feature-based method is applied for traffic sign detection, and a SURF matching method is used for sign classification. The results of experiments validate the effectiveness of the proposed algorithms and the whole system.

  19. Pattern recognition applied to seismic signals of Llaima volcano (Chile): An evaluation of station-dependent classifiers

    NASA Astrophysics Data System (ADS)

    Curilem, Millaray; Huenupan, Fernando; Beltrán, Daniel; San Martin, Cesar; Fuentealba, Gustavo; Franco, Luis; Cardona, Carlos; Acuña, Gonzalo; Chacón, Max; Khan, M. Salman; Becerra Yoma, Nestor

    2016-04-01

    Automatic pattern recognition applied to seismic signals from volcanoes may assist seismic monitoring by reducing the workload of analysts, allowing them to focus on more challenging activities, such as producing reports, implementing models, and understanding volcanic behaviour. In a previous work, we proposed a structure for automatic classification of seismic events in Llaima volcano, one of the most active volcanoes in the Southern Andes, located in the Araucanía Region of Chile. A database of events taken from three monitoring stations on the volcano was used to create a classification structure, independent of which station provided the signal. The database included three types of volcanic events: tremor, long period, and volcano-tectonic and a contrast group which contains other types of seismic signals. In the present work, we maintain the same classification scheme, but we consider separately the stations information in order to assess whether the complementary information provided by different stations improves the performance of the classifier in recognising seismic patterns. This paper proposes two strategies for combining the information from the stations: i) combining the features extracted from the signals from each station and ii) combining the classifiers of each station. In the first case, the features extracted from the signals from each station are combined forming the input for a single classification structure. In the second, a decision stage combines the results of the classifiers for each station to give a unique output. The results confirm that the station-dependent strategies that combine the features and the classifiers from several stations improves the classification performance, and that the combination of the features provides the best performance. The results show an average improvement of 9% in the classification accuracy when compared with the station-independent method.
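    The two station-fusion strategies compared above can be sketched as follows, with synthetic per-station features; the classifier choice (SVC) and the majority-vote rule are placeholders for the classifiers and decision stage actually used in the study.

      # Sketch of (i) feature-level fusion and (ii) decision-level fusion of
      # several monitoring stations; features and labels are invented.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_events = 300
      stations = {s: rng.normal(size=(n_events, 12)) for s in ("STA1", "STA2", "STA3")}
      y = rng.integers(0, 4, size=n_events)          # e.g. tremor, LP, VT, other

      # (i) Feature-level fusion: concatenate the stations' features, train one classifier.
      X_concat = np.hstack(list(stations.values()))
      fused_features = SVC().fit(X_concat, y)

      # (ii) Decision-level fusion: one classifier per station, then a majority vote.
      per_station = {name: SVC().fit(X, y) for name, X in stations.items()}
      votes = np.vstack([clf.predict(stations[name]) for name, clf in per_station.items()])
      majority = np.array([np.bincount(col).argmax() for col in votes.T])

      print("feature fusion  :", fused_features.predict(X_concat[:5]))
      print("decision fusion :", majority[:5])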

  20. Frontal affinity chromatography: A unique research tool for biospecific interaction that promotes glycobiology

    PubMed Central

    KASAI, Kenichi

    2014-01-01

    Combination of bioaffinity and chromatography gave birth to affinity chromatography. A further combination with frontal analysis resulted in creation of frontal affinity chromatography (FAC). This new versatile research tool enabled detailed analysis of weak interactions that play essential roles in living systems, especially those between complex saccharides and saccharide-binding proteins. FAC now becomes the best method for the investigation of saccharide-binding proteins (lectins) from viewpoints of sensitivity, accuracy, and efficiency, and is contributing greatly to the development of glycobiology. It opened a door leading to deeper understanding of the significance of saccharide recognition in life. The theory is also concisely described. PMID:25169774

  1. Research on Signature Verification Method Based on Discrete Fréchet Distance

    NASA Astrophysics Data System (ADS)

    Fang, J. L.; Wu, W.

    2018-05-01

    This paper proposes a multi-feature signature template based on the discrete Fréchet distance, which breaks through the limitation of traditional signature authentication using a single signature feature. It addresses the heavy computational workload of global-feature template extraction in online handwritten signature authentication and the problem of unreasonable signature feature selection. In this experiment, the false recognition rate (FAR) and false rejection rate (FRR) of the signatures are calculated statistically, and the average equal error rate (AEER) is computed. The feasibility of the combined template scheme is verified by comparing the average equal error rate of the combined template with that of the original template.
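    For reference, the discrete Fréchet distance itself can be computed with the classic dynamic-programming recursion; the signature trajectories below are invented, and the paper's multi-feature template construction is not reproduced here.

      # Sketch of the discrete Frechet distance between two signature trajectories.
      import math
      from functools import lru_cache

      def discrete_frechet(P, Q):
          """Discrete Frechet distance between polylines P and Q (lists of (x, y))."""
          def d(i, j):
              return math.dist(P[i], Q[j])

          @lru_cache(maxsize=None)
          def c(i, j):
              if i == 0 and j == 0:
                  return d(0, 0)
              if i == 0:
                  return max(c(0, j - 1), d(0, j))
              if j == 0:
                  return max(c(i - 1, 0), d(i, 0))
              return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

          return c(len(P) - 1, len(Q) - 1)

      reference = [(0, 0), (1, 1), (2, 1), (3, 2)]       # enrolled signature stroke
      questioned = [(0, 0), (1, 2), (2, 1), (3, 1)]      # signature to verify
      print(discrete_frechet(reference, questioned))     # small value -> likely genuine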

  2. Towards increase of diagnostic efficacy in gynecologic OCT

    NASA Astrophysics Data System (ADS)

    Kirillin, Mikhail; Panteleeva, Olga; Eliseeva, Darya; Kachalina, Olga; Sergeeva, Ekaterina; Dubasova, Lyubov; Agrba, Pavel; Mikailova, Gyular; Prudnikov, Maxim; Shakhova, Natalia

    2013-06-01

    Gynecologic applications of optical coherence tomography (OCT) are usually performed in combination with routine diagnostic procedures: laparoscopy and colposcopy. In combination with laparoscopy, OCT is employed for inspection of the fallopian tubes in cases of unrecognized infertility, while in colposcopy it is used to identify cervical pathologies including cancer. In this paper we discuss methods for increasing the diagnostic efficacy of OCT in these procedures. For OCT-laparoscopy we demonstrate independent criteria for pathology recognition which allow the accuracy of diagnostics to be increased. For OCT-colposcopy we report on the application of a device for controlled compression that allows sensing the elasticity of the inspected cervical area and distinguishing between neoplasia and inflammatory processes.

  3. A real time mobile-based face recognition with fisherface methods

    NASA Astrophysics Data System (ADS)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

    Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face from a picture sent to the system. By utilizing face recognition technology, the process of learning people's identities among students at a university becomes simpler. With this technology, a student no longer needs to browse the student directory on the university's server and look for a person with certain facial traits. To achieve this goal, the face recognition application uses image processing methods consisting of two phases: a pre-processing phase and a recognition phase. In the pre-processing phase, the system processes the input image into the best image for the recognition phase; the purpose of this phase is to reduce noise and increase the signal in the image. For the recognition phase, we use the Fisherface method. This method is chosen because it performs well even with limited training data. Experiments show that the accuracy of face recognition using Fisherface is 90%.
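    A minimal Fisherface-style sketch (PCA to reduce dimensionality, then Fisher's linear discriminant, then nearest-neighbour matching), assuming scikit-learn and random stand-in face vectors rather than the authors' dataset or pre-processing.

      # Sketch of the Fisherface idea: PCA to avoid a singular scatter matrix,
      # followed by Fisher's linear discriminant, then nearest-neighbour matching.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      n_subjects, imgs_per_subject, n_pixels = 10, 8, 32 * 32
      X = rng.random((n_subjects * imgs_per_subject, n_pixels))   # flattened, aligned faces
      y = np.repeat(np.arange(n_subjects), imgs_per_subject)      # identities

      fisherface = make_pipeline(
          PCA(n_components=40),                                   # keep fewer than N - c dims
          LinearDiscriminantAnalysis(n_components=n_subjects - 1),
          KNeighborsClassifier(n_neighbors=1))
      fisherface.fit(X, y)
      print(fisherface.predict(X[:3]))                            # predicted identities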

  4. Method and apparatus for obtaining complete speech signals for speech recognition applications

    NASA Technical Reports Server (NTRS)

    Abrash, Victor (Inventor); Cesari, Federico (Inventor); Franco, Horacio (Inventor); George, Christopher (Inventor); Zheng, Jing (Inventor)

    2009-01-01

    The present invention relates to a method and apparatus for obtaining complete speech signals for speech recognition applications. In one embodiment, the method continuously records an audio stream comprising a sequence of frames to a circular buffer. When a user command to commence or terminate speech recognition is received, the method obtains a number of frames of the audio stream occurring before or after the user command in order to identify an augmented audio signal for speech recognition processing. In further embodiments, the method analyzes the augmented audio signal in order to locate starting and ending speech endpoints that bound at least a portion of speech to be processed for recognition. At least one of the speech endpoints is located using a Hidden Markov Model.
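    The circular-buffer idea in the first embodiment can be sketched as follows; the frame size, pre-roll length and class interface are illustrative assumptions, not the patented implementation.

      # Sketch: audio frames are recorded continuously into a ring buffer, and when
      # the user command arrives the recognizer is handed an augmented signal that
      # also contains the frames captured just before the command.
      from collections import deque

      FRAME_MS = 10
      PRE_ROLL_FRAMES = 50            # keep ~500 ms of audio before the command

      class CircularAudioBuffer:
          def __init__(self, capacity_frames: int = 1000):
              self._frames = deque(maxlen=capacity_frames)   # old frames drop off automatically

          def push(self, frame: bytes) -> None:
              self._frames.append(frame)

          def snapshot_with_preroll(self, preroll: int = PRE_ROLL_FRAMES) -> list:
              """Frames to hand to the recognizer once the start command is received."""
              return list(self._frames)[-preroll:]

      buf = CircularAudioBuffer()
      for i in range(200):                      # simulated continuous capture
          buf.push(bytes([i % 256]) * 160)      # 160 samples ~ 10 ms at 16 kHz (8-bit here)
      augmented_start = buf.snapshot_with_preroll()
      print(len(augmented_start), "pre-command frames available")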

  5. Further evidence that amygdala and hippocampus contribute equally to recognition memory.

    PubMed

    Saunders, R C; Murray, E A; Mishkin, M

    1984-01-01

    The medial temporal neuropathology found in an amnesic neurosurgical patient [17] was simulated in monkeys in an attempt to determine whether the patient's mnemonic disorder, which had been ascribed to bilateral hippocampal destruction, may have also been due in part to unilateral amygdaloid removal. For this purpose, monkeys were prepared with bilateral hippocampectomy combined with unilateral amygdalectomy, and (as a control) bilateral amygdalectomy combined with unilateral hippocampectomy. The animals were trained both before and after surgery on a one-trial visual recognition task requiring memory of single objects for 10 sec each and then given a postoperative performance test in which their one-trial recognition ability was taxed with longer delays (up to 2 min) and longer lists (up to 10 objects). The two groups, which did not differ reliably at any stage, obtained average scores on the performance test of 75 and 80%, respectively. Comparison with the results of an earlier experiment [8] indicates that this performance level lies approximately midway between that of monkeys with amygdaloid or hippocampal removals alone (91%) and that of monkeys with combined amygdalo-hippocampal removals (60%). The results point to a direct quantitative relationship between the degree of recognition impairment and the amount of conjoint damage to the amygdala and hippocampus, irrespective of the specific structure involved. Evidence from neurosurgical cases tested in visual recognition [21] indicates that the same conclusion may apply to man.

  6. Automatic anatomy recognition in post-tonsillectomy MR images of obese children with OSAS

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Sin, Sanghun; Arens, Raanan

    2015-03-01

    Automatic Anatomy Recognition (AAR) is a recently developed approach for automatic body-wide organ segmentation. We previously tested that methodology on image cases with some pathology where the organs were not distorted significantly. In this paper, we present an advancement of AAR to handle organs which may have been modified or resected by surgical intervention. We focus on MRI of the neck in pediatric Obstructive Sleep Apnea Syndrome (OSAS). The proposed method consists of an AAR step followed by support vector machine techniques to detect the presence/absence of organs. The AAR step employs a hierarchical organization of the organs for model building. For each organ, a fuzzy model over a population is built. The model of the body region is then described in terms of the fuzzy models and a host of other descriptors, which include parent-to-offspring relationships estimated over the population. Organs are recognized following the organ hierarchy by using an optimal threshold-based search. The SVM step subsequently checks for evidence of the presence of organs. Experimental results show that AAR techniques can be combined with machine learning strategies within the AAR recognition framework for good performance in recognizing missing organs, in our case missing tonsils in post-tonsillectomy images as well as in simulated tonsillectomy images. The previous recognition performance is maintained, achieving an organ localization accuracy within 1 voxel when the organ is actually not removed. To our knowledge, no methods have been reported to date for handling significantly deformed or missing organs, especially in neck MRI.

  7. The linguistic and interactional factors impacting recognition and dispatch in emergency calls for out-of-hospital cardiac arrest: a mixed-method linguistic analysis study protocol

    PubMed Central

    Riou, Marine; Ball, Stephen; Williams, Teresa A; Whiteside, Austin; O’Halloran, Kay L; Bray, Janet; Perkins, Gavin D; Cameron, Peter; Fatovich, Daniel M; Inoue, Madoka; Bailey, Paul; Brink, Deon; Smith, Karen; Della, Phillip; Finn, Judith

    2017-01-01

    Introduction Emergency telephone calls placed by bystanders are crucial to the recognition of out-of-hospital cardiac arrest (OHCA), fast ambulance dispatch and initiation of early basic life support. Clear and efficient communication between caller and call-taker is essential to this time-critical emergency, yet few studies have investigated the impact that linguistic factors may have on the nature of the interaction and the resulting trajectory of the call. This research aims to provide a better understanding of communication factors impacting on the accuracy and timeliness of ambulance dispatch. Methods and analysis A dataset of OHCA calls and their corresponding metadata will be analysed from an interdisciplinary perspective, combining linguistic analysis and health services research. The calls will be transcribed and coded for linguistic and interactional variables and then used to answer a series of research questions about the recognition of OHCA and the delivery of basic life-support instructions to bystanders. Linguistic analysis of calls will provide a deeper understanding of the interactional dynamics between caller and call-taker which may affect recognition and dispatch for OHCA. Findings from this research will translate into recommendations for modifications of the protocols for ambulance dispatch and provide directions for further research. Ethics and dissemination The study has been approved by the Curtin University Human Research Ethics Committee (HR128/2013) and the St John Ambulance Western Australia Research Advisory Group. Findings will be published in peer-reviewed journals and communicated to key audiences, including ambulance dispatch professionals. PMID:28694349

  8. A Fault Recognition System for Gearboxes of Wind Turbines

    NASA Astrophysics Data System (ADS)

    Yang, Zhiling; Huang, Haiyue; Yin, Zidong

    2017-12-01

    Costs of maintenance and loss of power generation caused by faults of wind turbine gearboxes are the main components of operation costs for a wind farm. Therefore, the technology of condition monitoring and fault recognition for wind turbine gearboxes is becoming a hot topic. A condition monitoring and fault recognition system (CMFRS) is presented for CBM of wind turbine gearboxes in this paper. The vibration signals from acceleration sensors at different locations of the gearbox and the data from the supervisory control and data acquisition (SCADA) system are collected by the CMFRS. Then a feature extraction and optimization algorithm is applied to these operational data. Furthermore, to recognize the faults of gearboxes, the GSO-LSSVR algorithm is proposed, combining the least squares support vector regression machine (LSSVR) with the Glowworm Swarm Optimization (GSO) algorithm. Finally, the results show that the fault recognition system used in this paper achieves a high rate in identifying three states of wind turbines' gears; besides, the combination of data features affects the identification rate, and the selection optimization algorithm presented in this paper can obtain a good data feature subset for fault recognition.

  9. People's Risk Recognition Preceding Evacuation and Its Role in Demand Modeling and Planning.

    PubMed

    Urata, Junji; Pel, Adam J

    2018-05-01

    Evacuation planning and management involves estimating the travel demand in the event that such action is required. This is usually done as a function of people's decision to evacuate, which we show is strongly linked to their risk awareness. We use an empirical data set on tsunami evacuation behavior to demonstrate that risk recognition is not synonymous with objective risk, but is instead determined by a combination of factors including risk education, information, and sociodemographics, and that it changes dynamically over time. Based on these findings, we formulate an ordered logit model to describe risk recognition combined with a latent class model to describe evacuation choices. Our proposed evacuation choice model with a risk recognition class can quantitatively evaluate the influence of disaster mitigation measures, risk education, and risk information. The results of the risk recognition model show that risk information has a greater impact, in the sense that it leads people to recognize that they are at high risk. The results of the evacuation choice model show that people who are unaware of their risk take longer to evacuate. © 2017 Society for Risk Analysis.

  10. Non-accidental properties, metric invariance, and encoding by neurons in a model of ventral stream visual object recognition, VisNet.

    PubMed

    Rolls, Edmund T; Mills, W Patrick C

    2018-05-01

    When objects transform into different views, some properties are maintained, such as whether the edges are convex or concave, and these non-accidental properties are likely to be important in view-invariant object recognition. The metric properties, such as the degree of curvature, may change with different views, and are less likely to be useful in object recognition. It is shown that in a model of invariant visual object recognition in the ventral visual stream, VisNet, non-accidental properties are encoded much more than metric properties by neurons. Moreover, it is shown how, with the temporal trace rule training in VisNet, non-accidental properties of objects become encoded by neurons, and how metric properties are treated invariantly. We also show how VisNet can generalize between different objects if they have the same non-accidental property, because the metric properties are likely to overlap. VisNet is a 4-layer unsupervised model of visual object recognition trained by competitive learning that utilizes a temporal trace learning rule to implement the learning of invariance using views that occur close together in time. A second crucial property of this model of object recognition is whether, when neurons in the level corresponding to the inferior temporal visual cortex respond selectively to objects, neurons in the intermediate layers can respond to combinations of features that may be parts of two or more objects. In an investigation using the four sides of a square presented in every possible combination, it was shown that even though different layer 4 neurons are tuned to encode each feature or feature combination orthogonally, neurons in the intermediate layers can respond to features or feature combinations present in several objects. This property is an important part of the way in which high capacity can be achieved in the four-layer ventral visual cortical pathway. These findings concerning non-accidental properties and the use of neurons in intermediate layers of the hierarchy help to emphasise fundamental underlying principles of the computations that may be implemented in the ventral cortical visual stream used in object recognition. Copyright © 2018 Elsevier Inc. All rights reserved.
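
    As a rough illustration of the temporal trace rule mentioned above, the following NumPy sketch updates the weights of one competitive neuron using an exponentially decaying trace of its activity over successive views of the same object. The learning rate, trace constant, and normalization step are assumptions for illustration, not VisNet's actual parameters.

```python
import numpy as np

def trace_rule_update(w, x_seq, alpha=0.05, eta=0.8):
    """One pass of a temporal trace learning rule over a sequence of input vectors.

    w     : weight vector of a single competitive neuron
    x_seq : array of shape (T, n_inputs), successive views of one object
    """
    y_trace = 0.0
    for x in x_seq:
        y = np.dot(w, x)                           # instantaneous activation
        y_trace = (1.0 - eta) * y + eta * y_trace  # exponentially decaying trace
        w = w + alpha * y_trace * x                # Hebbian update driven by the trace
        w = w / np.linalg.norm(w)                  # keep the weight vector bounded
    return w

w = np.random.rand(100)
w /= np.linalg.norm(w)
views = np.random.rand(5, 100)    # stand-in for transformed views shown close in time
w = trace_rule_update(w, views)
```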

  11. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli

    PubMed Central

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status. PMID:24187542

  12. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    PubMed

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  13. Post processing for offline Chinese handwritten character string recognition

    NASA Astrophysics Data System (ADS)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong

    2012-01-01

    Offline Chinese handwritten character string recognition is one of the most important research fields in pattern recognition. Due to the free writing style, large variability in character shapes and different geometric characteristics, Chinese handwritten character string recognition is a challenging problem to deal with. However, among the current methods, the over-segmentation and merging method, which integrates geometric information, character recognition information and contextual information, shows promising results. It is found experimentally that a large part of the errors are segmentation errors, and they mainly occur around non-Chinese characters. In a Chinese character string, there are not only wide characters, namely Chinese characters, but also narrow characters such as digits and letters of the alphabet. The segmentation error is mainly caused by the uniform geometric model imposed on all segmented candidate characters. To solve this problem, post processing is employed to improve the recognition accuracy of narrow characters. On one hand, multi-geometric models are established for wide characters and narrow characters respectively. Under multi-geometric models, narrow characters are less prone to being merged. On the other hand, top-ranked recognition results of candidate paths are integrated to boost the final recognition of narrow characters. The post processing method is investigated on two datasets, in total 1405 handwritten address strings. The wide character recognition accuracy is improved slightly, and the narrow character recognition accuracy is increased by 10.41% and 10.03% respectively. This indicates that the post processing method is effective in improving the recognition accuracy of narrow characters.

  14. Method and System for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor); Stubberud, Allen R. (Inventor)

    2012-01-01

    A method for object recognition using shape and color features of the object to be recognized. An adaptive architecture is used to recognize and adapt the shape and color features for moving objects to enable object recognition.

  15. Deep learning based hand gesture recognition in complex scenes

    NASA Astrophysics Data System (ADS)

    Ni, Zihan; Sang, Nong; Tan, Cheng

    2018-03-01

    Recently, region-based convolutional neural networks (R-CNNs) have achieved significant success in the field of object detection, but their accuracy is limited for small and visually similar objects, such as gestures. To solve this problem, we present an online hard example testing (OHET) technique to evaluate the confidence of the R-CNNs' outputs, and regard those outputs with low confidence as hard examples. In this paper, we propose a cascaded network to recognize gestures. First, we use the region-based fully convolutional network (R-FCN), which is capable of detecting small objects, to detect the gestures, and then use OHET to select the hard examples. To enhance the accuracy of gesture recognition, we re-classify the hard examples through a VGG-19 classification network to obtain the final output of the gesture recognition system. Through contrast experiments with other methods, we show that the cascaded networks combined with OHET reach state-of-the-art results of 99.3% mAP on small and similar gestures in complex scenes.

  16. Improving Speaker Recognition by Biometric Voice Deconstruction

    PubMed Central

    Mazaira-Fernandez, Luis Miguel; Álvarez-Marquina, Agustín; Gómez-Vilda, Pedro

    2015-01-01

    Person identification, especially in critical environments, has always been a subject of great interest. However, it has gained a new dimension in a world threatened by a new kind of terrorism that uses social networks (e.g., YouTube) to broadcast its message. In this new scenario, classical identification methods (such as fingerprints or face recognition) have had to be replaced by alternative biometric characteristics such as voice, as sometimes this is the only feature available. The present study benefits from the advances achieved during the last few years in understanding and modeling voice production. The paper hypothesizes that a gender-dependent characterization of speakers, combined with the use of a set of features derived from the components resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates when compared to classical approaches. A general description of the main hypothesis and the methodology followed to extract the gender-dependent extended biometric parameters is given. Experimental validation is carried out both on a database recorded under highly controlled acoustic conditions, and on a database recorded over a mobile phone network under non-controlled acoustic conditions. PMID:26442245

  17. A multifaceted independent performance analysis of facial subspace recognition algorithms.

    PubMed

    Bajwa, Usama Ijaz; Taj, Imtiaz Ahmad; Anwar, Muhammad Waqas; Wang, Xuan

    2013-01-01

    Face recognition has emerged as the fastest growing biometric technology and has expanded considerably in the last few years. Many new algorithms and commercial systems have been proposed and developed. Most of them use Principal Component Analysis (PCA) as a basis for their techniques. Different and even conflicting results have been reported by researchers comparing these algorithms. The purpose of this study is to provide an independent comparative analysis, considering both performance and computational complexity, of six appearance-based face recognition algorithms, namely PCA, 2DPCA, A2DPCA, (2D)²PCA, LPP and 2DLPP, under equal working conditions. This study was motivated by the lack of an unbiased comprehensive comparative analysis of some recent subspace methods with diverse distance metric combinations. For comparison with other studies, the FERET, ORL and YALE databases have been used, with evaluation criteria as in the FERET evaluations, which closely simulate real-life scenarios. A comparison of results with previous studies is performed and anomalies are reported. An important contribution of this study is that it presents the suitable performance conditions for each of the algorithms under consideration.

  18. Recognition Stage for a Speed Supervisor Based on Road Sign Detection

    PubMed Central

    Carrasco, Juan-Pablo; de la Escalera, Arturo; Armingol, José María

    2012-01-01

    Traffic accidents are still one of the main health problems in the world. A number of measures have been applied in order to reduce the number of injuries and fatalities on roads, i.e., the implementation of Advanced Driver Assistance Systems (ADAS) based on image processing. In this paper, a real-time speed supervisor based on road sign recognition that can work both in urban and non-urban environments is presented. The system is able to recognize 135 road signs, belonging to the danger, yield, prohibition, obligation and indication types, and sends warning messages to the driver upon the combination of two pieces of information: the current speed of the car and the road sign symbol. The core of this paper is the comparison between the two main methods which have traditionally been used for detection and recognition of road signs: template matching (TM) and neural networks (NN). The advantages and disadvantages of the two approaches are shown and discussed. Additionally, we show how the use of well-known algorithms to handle illumination issues reduces the number of images needed to train a neural network.

  19. Recognizing pedestrian's unsafe behaviors in far-infrared imagery at night

    NASA Astrophysics Data System (ADS)

    Lee, Eun Ju; Ko, Byoung Chul; Nam, Jae-Yeal

    2016-05-01

    Pedestrian behavior recognition is important for early accident prevention in advanced driver assistance systems (ADAS). In particular, because most pedestrian-vehicle crashes occur between late night and early dawn, our study focuses on recognizing unsafe behavior of pedestrians using thermal images captured from a moving vehicle at night. For recognizing unsafe behavior, this study uses a convolutional neural network (CNN), which provides high-quality recognition performance. However, because a traditional CNN requires very expensive training time and memory, we design a light CNN consisting of two convolutional layers and two subsampling layers for real-time processing in vehicle applications. In addition, we combine the light CNN with a boosted random forest (Boosted RF) classifier, so that the output of the CNN is not fully connected to the classifier but randomly connected to the boosted random forest. We name this CNN the randomly connected CNN (RC-CNN). The proposed method was successfully applied to the pedestrian unsafe behavior (PUB) dataset captured from a far-infrared camera at night, and its behavior recognition accuracy is confirmed to be higher than that of some CNN-related algorithms, with a shorter processing time.
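
    A minimal PyTorch sketch of a "light" CNN with two convolutional and two subsampling layers, whose flattened features are randomly subsampled before being handed to an external boosted classifier, is shown below. The layer sizes, input patch resolution, and the number of randomly connected features are illustrative assumptions, not the published RC-CNN configuration.

```python
import torch
import torch.nn as nn

class LightCNN(nn.Module):
    """Two convolution + two subsampling layers; outputs a flat feature vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):            # x: (batch, 1, 64, 32) thermal pedestrian patch
        return self.features(x).flatten(1)

# Instead of a fully connected output layer, a random subset of the CNN features
# would be passed to a boosted random forest classifier (not shown here).
cnn = LightCNN()
feats = cnn(torch.randn(4, 1, 64, 32))
idx = torch.randperm(feats.shape[1])[:128]   # random connection to the classifier
clf_inputs = feats[:, idx].detach().numpy()
print(clf_inputs.shape)
```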

  20. Improving Speaker Recognition by Biometric Voice Deconstruction.

    PubMed

    Mazaira-Fernandez, Luis Miguel; Álvarez-Marquina, Agustín; Gómez-Vilda, Pedro

    2015-01-01

    Person identification, especially in critical environments, has always been a subject of great interest. However, it has gained a new dimension in a world threatened by a new kind of terrorism that uses social networks (e.g., YouTube) to broadcast its message. In this new scenario, classical identification methods (such as fingerprints or face recognition) have had to be replaced by alternative biometric characteristics such as voice, as sometimes this is the only feature available. The present study benefits from the advances achieved during the last few years in understanding and modeling voice production. The paper hypothesizes that a gender-dependent characterization of speakers, combined with the use of a set of features derived from the components resulting from the deconstruction of the voice into its glottal source and vocal tract estimates, will enhance recognition rates when compared to classical approaches. A general description of the main hypothesis and the methodology followed to extract the gender-dependent extended biometric parameters is given. Experimental validation is carried out both on a database recorded under highly controlled acoustic conditions, and on a database recorded over a mobile phone network under non-controlled acoustic conditions.

  1. Image ratio features for facial expression recognition application.

    PubMed

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.

  2. An Effective Palmprint Recognition Approach for Visible and Multispectral Sensor Images

    PubMed Central

    Sammouda, Rachid; Al-Salman, Abdul Malik; Alsanad, Ahmed

    2018-01-01

    Among several palmprint feature extraction methods, the HOG-based method is attractive and performs well against changes in illumination and shadowing of palmprint images. However, it still lacks the robustness to extract palmprint features at different rotation angles. To solve this problem, this paper presents a hybrid feature extraction method, named HOG-SGF, that combines the histogram of oriented gradients (HOG) with a steerable Gaussian filter (SGF) to develop an effective palmprint recognition approach. The approach starts by processing all palmprint images with David Zhang's method to segment only the regions of interest. Next, we extract palmprint features based on the hybrid HOG-SGF feature extraction method. Then, an optimized auto-encoder (AE) is utilized to reduce the dimensionality of the extracted features. Finally, a fast and robust regularized extreme learning machine (RELM) is applied for the classification task. In the evaluation phase of the proposed approach, a number of experiments were conducted on three publicly available palmprint databases, namely MS-PolyU of multispectral palmprint images and CASIA and Tongji of contactless palmprint images. The experimental results reveal that the proposed approach outperforms the existing state-of-the-art approaches even when a small number of training samples are used. PMID:29762519
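
    The exact HOG-SGF combination is not specified in the abstract; the sketch below illustrates one plausible reading, computing HOG descriptors over steerable Gaussian derivative responses of a palm region of interest at several orientations and concatenating them. The filter scale, angles, and HOG cell sizes are assumed values, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import hog

def hog_sgf_features(roi, sigma=2.0, angles=(0, 45, 90, 135)):
    """Concatenate HOG descriptors of steerable Gaussian responses of a palm ROI."""
    gx = gaussian_filter(roi, sigma, order=(0, 1))   # d/dx of the smoothed image
    gy = gaussian_filter(roi, sigma, order=(1, 0))   # d/dy of the smoothed image
    feats = []
    for a in np.deg2rad(angles):
        steered = np.cos(a) * gx + np.sin(a) * gy    # response steered to angle a
        feats.append(hog(steered, orientations=9,
                         pixels_per_cell=(16, 16), cells_per_block=(2, 2)))
    return np.concatenate(feats)

roi = np.random.rand(128, 128)    # stand-in for a segmented palmprint ROI
print(hog_sgf_features(roi).shape)
```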

  3. Overview of BioCreative II gene mention recognition.

    PubMed

    Smith, Larry; Tanabe, Lorraine K; Ando, Rie Johnson nee; Kuo, Cheng-Ju; Chung, I-Fang; Hsu, Chun-Nan; Lin, Yu-Shi; Klinger, Roman; Friedrich, Christoph M; Ganchev, Kuzman; Torii, Manabu; Liu, Hongfang; Haddow, Barry; Struble, Craig A; Povinelli, Richard J; Vlachos, Andreas; Baumgartner, William A; Hunter, Lawrence; Carpenter, Bob; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Liu, Feng; Chen, Yifei; Sun, Chengjie; Katrenko, Sophia; Adriaans, Pieter; Blaschke, Christian; Torres, Rafael; Neves, Mariana; Nakov, Preslav; Divoli, Anna; Maña-López, Manuel; Mata, Jacinto; Wilbur, W John

    2008-01-01

    Nineteen teams presented results for the Gene Mention Task at the BioCreative II Workshop. In this task participants designed systems to identify substrings in sentences corresponding to gene name mentions. A variety of different methods were used and the results varied with a highest achieved F1 score of 0.8721. Here we present brief descriptions of all the methods used and a statistical analysis of the results. We also demonstrate that, by combining the results from all submissions, an F score of 0.9066 is feasible, and furthermore that the best result makes use of the lowest scoring submissions.

  4. Overview of BioCreative II gene mention recognition

    PubMed Central

    Smith, Larry; Tanabe, Lorraine K; Ando, Rie Johnson nee; Kuo, Cheng-Ju; Chung, I-Fang; Hsu, Chun-Nan; Lin, Yu-Shi; Klinger, Roman; Friedrich, Christoph M; Ganchev, Kuzman; Torii, Manabu; Liu, Hongfang; Haddow, Barry; Struble, Craig A; Povinelli, Richard J; Vlachos, Andreas; Baumgartner, William A; Hunter, Lawrence; Carpenter, Bob; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Liu, Feng; Chen, Yifei; Sun, Chengjie; Katrenko, Sophia; Adriaans, Pieter; Blaschke, Christian; Torres, Rafael; Neves, Mariana; Nakov, Preslav; Divoli, Anna; Maña-López, Manuel; Mata, Jacinto; Wilbur, W John

    2008-01-01

    Nineteen teams presented results for the Gene Mention Task at the BioCreative II Workshop. In this task participants designed systems to identify substrings in sentences corresponding to gene name mentions. A variety of different methods were used and the results varied with a highest achieved F1 score of 0.8721. Here we present brief descriptions of all the methods used and a statistical analysis of the results. We also demonstrate that, by combining the results from all submissions, an F score of 0.9066 is feasible, and furthermore that the best result makes use of the lowest scoring submissions. PMID:18834493

  5. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    PubMed

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for communication between the deaf and the external world. This paper proposes a component-based, vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word is considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification is implemented based on the recognition of the five components. The proposed SLR framework consists of two major parts. The first part obtains the component-based form of sign gestures and establishes the code table of the target sign gesture set using data from a reference subject. In the second part, which is designed for new users, component classifiers are trained using a training set suggested by the reference subject, and the classification of unknown gestures is performed with a code matching method. Five subjects participated in this study, and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one third of the gestures of the target gesture set) suggested by two reference subjects, average recognition accuracies of (82.6 ± 13.2)% and (79.7 ± 13.4)% were obtained for the 110 words respectively, and the average recognition accuracy climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  6. Cross-domain expression recognition based on sparse coding and transfer learning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Zhang, Weiyi; Huang, Yong

    2017-05-01

    Traditional facial expression recognition methods usually assume that the training set and the test set are independent and identically distributed. However, in actual expression recognition applications, the conditions of independent and identical distribution are hardly satisfied for the training set and the test set because of differences in lighting, shading, race and so on. In order to solve this problem and improve the performance of expression recognition in practical applications, a novel method based on transfer learning and sparse coding is applied to facial expression recognition. First, a common primitive model, that is, a dictionary, is learnt. Then, based on the idea of transfer learning, the learned primitive patterns are transferred to facial expressions and the corresponding feature representation is obtained by sparse coding. The experimental results on the CK+, JAFFE and NVIE databases show that the transfer learning method based on sparse coding can effectively improve the expression recognition rate in the cross-domain expression recognition task and is suitable for practical facial expression recognition applications.
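
    A compact sketch of the described pipeline, assuming scikit-learn, is given below: a dictionary is learnt on source-domain features, both domains are then represented by sparse codes over that shared dictionary, and a simple classifier is trained on the codes. The feature dimensions, sparsity level, and the use of LinearSVC as the final classifier are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.svm import LinearSVC

# Hypothetical source-domain and target-domain expression feature matrices.
X_source = np.random.rand(200, 64)
X_target = np.random.rand(50, 64)
y_source = np.random.randint(0, 6, 200)   # six basic expression labels

# Learn a common primitive model (the dictionary) on the source domain.
dico = DictionaryLearning(n_components=32, transform_algorithm='lasso_lars',
                          random_state=0).fit(X_source)

# Transfer: represent both domains with sparse codes over the shared dictionary.
codes_src = sparse_encode(X_source, dico.components_, algorithm='omp', n_nonzero_coefs=5)
codes_tgt = sparse_encode(X_target, dico.components_, algorithm='omp', n_nonzero_coefs=5)

clf = LinearSVC(max_iter=5000).fit(codes_src, y_source)
pred = clf.predict(codes_tgt)
```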

  7. Appearance-based human gesture recognition using multimodal features for human computer interaction

    NASA Astrophysics Data System (ADS)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucial role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level, where weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results show that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
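
    The feature-level fusion strategy described above can be sketched as a weighted concatenation of the two feature groups followed by an LDA projection; a hypothetical scikit-learn version follows, with the modality weights, feature dimensions, and class count chosen arbitrarily for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical feature groups extracted from the same frames.
face_feats = np.random.rand(300, 40)      # facial expression features
hand_feats = np.random.rand(300, 60)      # hand motion features
labels = np.random.randint(0, 12, 300)    # 12 gesture classes

w_face, w_hand = 0.6, 0.4                 # assumed modality weights
fused = np.hstack([w_face * face_feats, w_hand * hand_feats])

# Project the fused features onto a discriminative expression space.
lda = LinearDiscriminantAnalysis(n_components=11)   # at most n_classes - 1 components
projected = lda.fit_transform(fused, labels)
print(projected.shape)
```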

  8. [Screening specific recognition motif of RNA-binding proteins by SELEX in combination with next-generation sequencing technique].

    PubMed

    Zhang, Lu; Xu, Jinhao; Ma, Jinbiao

    2016-07-25

    RNA-binding proteins exert important biological functions by specifically recognizing RNA motifs. SELEX (Systematic evolution of ligands by exponential enrichment), an in vitro selection method, can obtain consensus motifs with high affinity and specificity for many target molecules from DNA or RNA libraries. Here, we combined SELEX with next-generation sequencing to study protein-RNA interactions in vitro. A pool of RNAs with 20 bp random sequences was transcribed from a T7 promoter, and the target protein was expressed from a plasmid containing an SBP-tag, so that it could be captured by streptavidin beads. With only one selection cycle, the specific RNA motif can be obtained, which dramatically improves the selection efficiency. Using this method, we found that the RRM domains of human hnRNP A1 (the UP1 domain) bind RNA motifs containing AGG and AG sequences. An EMSA experiment confirmed that the hnRNP A1 RRMs could bind the obtained RNA motif. Taken together, this approach provides a rapid and effective way to study the RNA-binding specificity of proteins.

  9. Vehicle Color Recognition with Vehicle-Color Saliency Detection and Dual-Orientational Dimensionality Reduction of CNN Deep Features

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Li, Jiafeng; Zhuo, Li; Zhang, Hui; Li, Xiaoguang

    2017-12-01

    Color is one of the most stable attributes of vehicles and is often used as a valuable cue in some important applications. Various complex environmental factors, such as illumination, weather and noise, cause considerable diversity in the visual characteristics of vehicle color, so vehicle color recognition in complex environments is a challenging task. The state-of-the-art methods roughly take the whole image for color recognition, but many parts of the image, such as car windows, wheels and background, contain no color information, which has a negative impact on the recognition accuracy. In this paper, a novel vehicle color recognition method using local vehicle-color saliency detection and dual-orientational dimensionality reduction of convolutional neural network (CNN) deep features is proposed. The novelty of the proposed method includes two parts: (1) a local vehicle-color saliency detection method is proposed to determine the vehicle color region of the vehicle image and exclude the influence of non-color regions on the recognition accuracy; (2) a dual-orientational dimensionality reduction strategy is designed to greatly reduce the dimensionality of the deep features learnt from the CNN, which greatly mitigates the storage and computational burden of the subsequent processing while improving the recognition accuracy. Furthermore, a linear support vector machine is adopted as the classifier, trained on the dimensionality-reduced features to obtain the recognition model. The experimental results on a public dataset demonstrate that the proposed method achieves superior recognition performance over the state-of-the-art methods.

  10. Handwritten digits recognition based on immune network

    NASA Astrophysics Data System (ADS)

    Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe

    2011-11-01

    With the development of society, handwritten digit recognition has been widely applied in production and daily life, yet it remains a difficult task in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and their features extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm builds on Jerne's immune network model for feature selection and the KNN method for classification. Its key characteristic is a novel network with parallel computation and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms: KNN, ANN and SVM. The results show that the novel classification algorithm based on an immune network gives promising performance and stable behavior for handwritten digit recognition.

  11. Carbohydrate recognition by the antiviral lectin cyanovirin-N

    PubMed Central

    Fujimoto, Yukiji K.; Green, David F.

    2012-01-01

    Cyanovirin-N is a cyanobacterial lectin with potent antiviral activity, and has been the focus of extensive pre-clinical investigation as a potential prophylactic for the prevention of the sexual transmission of the human immunodeficiency virus (HIV). Here we present a detailed analysis of carbohydrate recognition by this important protein, using a combination of computational methods, including extensive molecular dynamics simulations and Molecular-Mechanics/ Poisson–Boltzmann/Surface-Area (MM/PBSA) energetic analysis. The simulation results strongly suggest that the observed tendency of wildtype CVN to form domain-swapped dimers is the result of a previously unidentified cis-peptide bond present in the monomeric state. The energetic analysis additionally indicates that the highest-affinity ligand for CVN characterized to date (α-Man-(1,2)-α-Man-(1,2)-α-Man) is recognized asymmetrically by the two binding sites. Finally, we are able to provide a detailed map of the role of all binding site functional groups (both backbone and side chain) to various aspects of molecular recognition: general affinity for cognate ligands, specificity for distinct oligosaccharide targets and the asymmetric recognition of α-Man-(1,2)-α-Man-(1,2)-α-Man. Taken as a whole, these results complement past experimental characterization (both structural and thermodynamic) to provide the most complete understanding of carbohydrate recognition by CVN to date. The results also provide strong support for the application of similar approaches to the understanding of other protein–carbohydrate complexes. PMID:23057413

  12. Visual Recognition Software for Binary Classification and Its Application to Spruce Pollen Identification

    PubMed Central

    Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.

    2016-01-01

    Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017

  13. Resolution of Site-Specific Conformational Heterogeneity in Proline-Rich Molecular Recognition by Src Homology 3 Domains.

    PubMed

    Horness, Rachel E; Basom, Edward J; Mayer, John P; Thielges, Megan C

    2016-02-03

    Conformational heterogeneity and dynamics are increasingly evoked in models of protein molecular recognition but are challenging to experimentally characterize. Here we combine the inherent temporal resolution of infrared (IR) spectroscopy with the spatial resolution afforded by selective incorporation of carbon-deuterium (C-D) bonds, which provide frequency-resolved absorptions within a protein IR spectrum, to characterize the molecular recognition of the Src homology 3 (SH3) domain of the yeast protein Sho1 with its cognate proline-rich (PR) sequence of Pbs2. The IR absorptions of C-D bonds introduced at residues along a peptide of the Pbs2 PR sequence report on the changes in the local environments upon binding to the SH3 domain. Interestingly, upon forming the complex the IR spectra of the peptides labeled with C-D bonds at either of the two conserved prolines of the PXXP consensus recognition sequence show more absorptions than there are C-D bonds, providing evidence for the population of multiple states. In contrast, the NMR spectra of the peptides labeled with (13)C at the same residues show only single resonances, indicating rapid interconversion on the NMR time scale. Thus, the data suggest that the SH3 domain recognizes its cognate peptide with a component of induced fit molecular recognition involving the adoption of multiples states, which have previously gone undetected due to interconversion between the populated states that is too fast to resolve using conventional methods.

  14. Coupling artificial intelligence and numerical computation for engineering design (Invited paper)

    NASA Astrophysics Data System (ADS)

    Tong, S. S.

    1986-01-01

    The possibility of combining artificial intelligence (AI) systems and numerical computation methods for engineering designs is considered. Attention is given to three possible areas of application involving fan design, controlled vortex design of turbine stage blade angles, and preliminary design of turbine cascade profiles. Among the AI techniques discussed are: knowledge-based systems; intelligent search; and pattern recognition systems. The potential cost and performance advantages of an AI-based design-generation system are discussed in detail.

  15. A neural network gravitational arc finder based on the Mediatrix filamentation method

    NASA Astrophysics Data System (ADS)

    Bom, C. R.; Makler, M.; Albuquerque, M. P.; Brandt, C. H.

    2017-01-01

    Context. Automated arc detection methods are needed to scan the ongoing and next-generation wide-field imaging surveys, which are expected to contain thousands of strong lensing systems. Arc finders are also required for a quantitative comparison between predictions and observations of arc abundance. Several algorithms have been proposed to this end, but machine learning methods have remained as a relatively unexplored step in the arc finding process. Aims: In this work we introduce a new arc finder based on pattern recognition, which uses a set of morphological measurements that are derived from the Mediatrix filamentation method as entries to an artificial neural network (ANN). We show a full example of the application of the arc finder, first training and validating the ANN on simulated arcs and then applying the code on four Hubble Space Telescope (HST) images of strong lensing systems. Methods: The simulated arcs use simple prescriptions for the lens and the source, while mimicking HST observational conditions. We also consider a sample of objects from HST images with no arcs in the training of the ANN classification. We use the training and validation process to determine a suitable set of ANN configurations, including the combination of inputs from the Mediatrix method, so as to maximize the completeness while keeping the false positives low. Results: In the simulations the method was able to achieve a completeness of about 90% with respect to the arcs that are input into the ANN after a preselection. However, this completeness drops to 70% on the HST images. The false detections are on the order of 3% of the objects detected in these images. Conclusions: The combination of Mediatrix measurements with an ANN is a promising tool for the pattern-recognition phase of arc finding. More realistic simulations and a larger set of real systems are needed for a better training and assessment of the efficiency of the method.
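
    As a schematic of the pattern-recognition stage described above, the sketch below trains a small scikit-learn MLP on hypothetical Mediatrix-derived morphological measurements and reports completeness on a validation split. The input features, network size, and data are placeholders, not the configuration used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical morphological measurements derived from the Mediatrix method
# (e.g. length-to-width ratio, curvature, alignment of filament segments).
X = np.random.rand(1000, 6)
y = np.random.randint(0, 2, 1000)       # 1 = arc, 0 = non-arc object

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)

# Completeness: fraction of true arcs in the validation set that are recovered.
print("validation completeness on arcs:",
      ann.score(X_val[y_val == 1], y_val[y_val == 1]))
```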

  16. Research on the feature extraction and pattern recognition of the distributed optical fiber sensing signal

    NASA Astrophysics Data System (ADS)

    Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan

    2014-09-01

    In this paper, feature extraction and pattern recognition for distributed optical fiber sensing signals have been studied. We adopt Mel-Frequency Cepstral Coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors of the sensing signals (such as speech, wind, thunder and rain signals), and then perform pattern recognition via an RBF neural network. The performances of these three feature extraction methods are compared according to the results. The MFCC characteristic vector is chosen to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six levels by the Daubechies wavelet packet transform, from which 64 frequency constituents are extracted as the characteristic vector. In the process of pattern recognition, the value of the diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. Recognition results show that the wavelet packet Shannon entropy feature extraction method yields the best recognition accuracy, up to 97%; the performance of the 12-dimensional MFCC feature extraction method is less satisfactory; and the performance of the wavelet packet energy feature extraction method is the worst.
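
    A rough Python sketch of the two main feature extractors follows, assuming librosa and PyWavelets: a 12-dimensional MFCC vector, and 64 level-6 Daubechies wavelet-packet energies with their Shannon entropy. The sampling rate and frame length are placeholders, and the entropy here is computed over normalized band energies, which may differ in detail from the paper's definition.

```python
import numpy as np
import librosa
import pywt

def fiber_signal_features(sig, sr=16000):
    """12-dim MFCC vector plus 64 wavelet-packet energies and their Shannon entropy."""
    mfcc = librosa.feature.mfcc(y=sig.astype(float), sr=sr, n_mfcc=12).mean(axis=1)

    wp = pywt.WaveletPacket(sig, wavelet='db4', maxlevel=6)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(6, 'natural')])   # 2**6 = 64 bands
    p = energies / energies.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return mfcc, energies, entropy

sig = np.random.randn(8192)   # stand-in for one distributed fiber sensing frame
mfcc_vec, wp_energy, wp_entropy = fiber_signal_features(sig)
print(mfcc_vec.shape, wp_energy.shape, wp_entropy)
```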

  17. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
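
    To illustrate the flavor of such a declarative pattern language, the toy Python sketch below composes point-event patterns with conjunction, disjunction, and negation and matches them against a list of timestamped events. It is a simplification of CERA's actual language and run-time matcher, and the event labels are invented.

```python
# Each event is a (timestamp, label) pair drawn from one or more input streams.
events = [(1, "pump_on"), (3, "pressure_high"), (7, "valve_closed")]

def point(label):
    """Pattern: a point event with the given label occurs."""
    return lambda evs: any(lbl == label for _, lbl in evs)

def conj(*pats):                      # all sub-patterns must be recognized
    return lambda evs: all(p(evs) for p in pats)

def disj(*pats):                      # at least one sub-pattern recognized
    return lambda evs: any(p(evs) for p in pats)

def neg(pat):                         # the sub-pattern must be absent
    return lambda evs: not pat(evs)

# Patterns can be built on one another recursively.
anomaly = conj(point("pressure_high"), neg(point("relief_valve_open")))
print(anomaly(events))   # True: high pressure with no relief valve opening
```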

  18. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images

    PubMed Central

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-01-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces. PMID:23250787

  19. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images.

    PubMed

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-06-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces.

  20. Unsupervised pattern recognition methods in ciders profiling based on GCE voltammetric signals.

    PubMed

    Jakubowska, Małgorzata; Sordoń, Wanda; Ciepiela, Filip

    2016-07-15

    This work presents a complete methodology of distinguishing between different brands of cider and ageing degrees, based on voltammetric signals, utilizing dedicated data preprocessing procedures and unsupervised multivariate analysis. It was demonstrated that voltammograms recorded on glassy carbon electrode in Britton-Robinson buffer at pH 2 are reproducible for each brand. By application of clustering algorithms and principal component analysis visible homogenous clusters were obtained. Advanced signal processing strategy which included automatic baseline correction, interval scaling and continuous wavelet transform with dedicated mother wavelet, was a key step in the correct recognition of the objects. The results show that voltammetry combined with optimized univariate and multivariate data processing is a sufficient tool to distinguish between ciders from various brands and to evaluate their freshness. Copyright © 2016 Elsevier Ltd. All rights reserved.
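
    A simplified scikit-learn sketch of the unsupervised profiling pipeline is shown below; it substitutes plain detrending and scaling for the paper's automatic baseline correction, interval scaling, and dedicated continuous wavelet transform, then applies PCA and k-means to synthetic voltammograms. All data and parameter choices are illustrative.

```python
import numpy as np
from scipy.signal import detrend
from sklearn.preprocessing import scale
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical matrix of voltammograms: one row per cider sample.
voltammograms = np.random.rand(60, 500)

X = detrend(voltammograms, axis=1)   # crude stand-in for baseline correction
X = scale(X)                         # scale each potential step across samples

scores = PCA(n_components=3).fit_transform(X)          # unsupervised projection
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print(clusters[:10])
```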

  1. Weighted Feature Gaussian Kernel SVM for Emotion Recognition

    PubMed Central

    Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expressions is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we obtain a weighted feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function performs well in terms of the correct recognition rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to the state-of-the-art methods. PMID:27807443
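
    One way to realize a weighted-feature Gaussian kernel with scikit-learn is to pass a callable kernel to SVC that applies per-dimension weights inside the squared distance, as sketched below. The weights, gamma, feature dimension, and class count are placeholders rather than the values derived from subregion recognition rates in the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-feature weights, e.g. derived from subregion recognition rates.
weights = np.random.rand(64)
gamma = 0.05

def weighted_rbf(X, Y):
    """Gaussian kernel with per-dimension weights: exp(-gamma * sum_k w_k (x_k - y_k)^2)."""
    diff = X[:, None, :] - Y[None, :, :]
    d2 = (weights * diff ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

X_train = np.random.rand(120, 64)         # stand-in subregion feature vectors
y_train = np.random.randint(0, 7, 120)    # seven expression classes

clf = SVC(kernel=weighted_rbf).fit(X_train, y_train)
print(clf.predict(np.random.rand(10, 64)))
```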

  2. Permutation Entropy and Signal Energy Increase the Accuracy of Neuropathic Change Detection in Needle EMG

    PubMed Central

    2018-01-01

    Background and Objective. Needle electromyography can be used to detect changes in the number and morphology of motor unit potentials in patients with axonal neuropathy. General mathematical methods of pattern recognition and signal analysis were applied to recognize neuropathic changes. This study validates the possibility of extending and refining turns-amplitude analysis using permutation entropy and signal energy. Methods. In this study, we examined needle electromyography in 40 neuropathic individuals and 40 controls. The number of turns, the amplitude between turns, the signal energy, and the permutation entropy were used as features for support vector machine classification. Results. The obtained results demonstrated the superior classification performance of the combination of all of the above-mentioned features compared to combinations of fewer features. Among the tested feature combinations, peak-ratio analysis had the lowest accuracy. Conclusion. The combination of permutation entropy with signal energy, the number of turns, and the mean amplitude in SVM classification can be used to refine the diagnosis of polyneuropathies examined by needle electromyography. PMID:29606959
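
    For reference, a self-contained NumPy sketch of normalized permutation entropy and signal energy, the two features added to the turns-amplitude analysis, is given below; the EMG data, ordinal order, and delay are placeholders for illustration.

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal."""
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        key = tuple(np.argsort(window))          # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    p = np.array(list(patterns.values()), dtype=float) / n
    return -np.sum(p * np.log(p)) / log(factorial(order))

emg = np.random.randn(5000)          # stand-in for a needle EMG sweep
features = [permutation_entropy(emg, order=4),
            np.sum(emg ** 2)]        # signal energy
# These two values would be combined with turns/amplitude features for SVM classification.
print(features)
```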

  3. Voice Recognition in Face-Blind Patients

    PubMed Central

    Liu, Ran R.; Pancaroglu, Raika; Hills, Charlotte S.; Duchaine, Brad; Barton, Jason J. S.

    2016-01-01

    Right or bilateral anterior temporal damage can impair face recognition, but whether this is an associative variant of prosopagnosia or part of a multimodal disorder of person recognition is an unsettled question, with implications for cognitive and neuroanatomic models of person recognition. We assessed voice perception and short-term recognition of recently heard voices in 10 subjects with impaired face recognition acquired after cerebral lesions. All 4 subjects with apperceptive prosopagnosia due to lesions limited to fusiform cortex had intact voice discrimination and recognition. One subject with bilateral fusiform and anterior temporal lesions had a combined apperceptive prosopagnosia and apperceptive phonagnosia, the first such described case. Deficits indicating a multimodal syndrome of person recognition were found only in 2 subjects with bilateral anterior temporal lesions. All 3 subjects with right anterior temporal lesions had normal voice perception and recognition, 2 of whom performed normally on perceptual discrimination of faces. This confirms that such lesions can cause a modality-specific associative prosopagnosia. PMID:25349193

  4. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    NASA Astrophysics Data System (ADS)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method for facial recognition under varied expressions against neutral face samples of individuals via recognition of expression warping and the use of a virtual expression-face database is proposed. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification by using a process of masking to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.

  5. Probabilistic Open Set Recognition

    NASA Astrophysics Data System (ADS)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT-based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between the positive training sample and the closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases.
We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
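
    A much-simplified sketch in the spirit of EVT score calibration, assuming SciPy and scikit-learn, is shown below: a Weibull distribution is fitted to the positive-class SVM decision scores, and its CDF is used as a probability-of-inclusion estimate, with low-probability samples rejected as unknown. This is not the published W-SVM or PI-SVM formulation; the data, shift, and rejection threshold are illustrative.

```python
import numpy as np
from scipy.stats import weibull_min
from sklearn.svm import SVC

# Hypothetical training data: one known class versus other known classes.
X_pos = np.random.randn(80, 5) + 2.0
X_neg = np.random.randn(200, 5) - 1.0
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 80 + [0] * 200)

svm = SVC(kernel='rbf').fit(X, y)

# Fit a Weibull distribution (EVT) to the positive-class decision scores.
pos_scores = svm.decision_function(X_pos)
shift = pos_scores.min() - 1e-6                 # move scores onto positive support
c, loc, scale = weibull_min.fit(pos_scores - shift, floc=0)

def p_inclusion(x_new, threshold=0.5):
    """Probability-of-inclusion estimate; below the threshold the sample is rejected as unknown."""
    s = svm.decision_function(np.atleast_2d(x_new)) - shift
    p = weibull_min.cdf(s, c, loc=loc, scale=scale)
    return p, p > threshold

print(p_inclusion(np.zeros(5)))
```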

  6. Sunspot drawings handwritten character recognition method based on deep learning

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li

    2016-05-01

    High-accuracy recognition of handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the results in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN), a deep learning algorithm that has proven highly successful at training multi-layer network structures, is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Chinese Academy of Sciences' Yunnan Observatory and obtain the daily full-disc sunspot numbers and sunspot areas from the drawings. The experimental results show that the proposed method achieves a high recognition accuracy.

  7. Motion Imagery Processing and Exploitation (MIPE)

    DTIC Science & Technology

    2013-01-01

    facial recognition —i.e., the identification of a specific person. Object detection is often (but not always) considered a prerequisite for instance... The goal of segmentation is to distinguish objects and identify boundaries in images. Some of the earliest approaches to facial recognition involved... methods of instance recognition are at varying levels of maturity. Facial recognition methods are arguably the most mature; the technology is well

  8. Towards discrete wavelet transform-based human activity recognition

    NASA Astrophysics Data System (ADS)

    Khare, Manish; Jeon, Moongu

    2017-06-01

    Providing accurate recognition of human activities is a challenging problem for visual surveillance applications. In this paper, we present a simple and efficient algorithm for human activity recognition based on a wavelet transform. We adopt discrete wavelet transform (DWT) coefficients as a feature of human objects to obtain advantages of its multiresolution approach. The proposed method is tested on multiple levels of DWT. Experiments are carried out on different standard action datasets including KTH and i3D Post. The proposed method is compared with other state-of-the-art methods in terms of different quantitative performance measures. The proposed method is found to have better recognition accuracy in comparison to the state-of-the-art methods.
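
    As a rough illustration of the kind of feature the abstract describes, the sketch below computes multilevel 2-D DWT coefficients for a silhouette frame with PyWavelets and summarizes each sub-band with simple statistics; the wavelet, level count, and statistics are illustrative assumptions rather than the paper's exact feature design.

```python
# Hedged sketch: multilevel 2-D DWT coefficients as activity features.
# Assumes the PyWavelets (pywt) package; parameters are illustrative.
import numpy as np
import pywt

def dwt_features(frame, wavelet="haar", level=3):
    """Return simple statistics of DWT sub-bands at each decomposition level."""
    coeffs = pywt.wavedec2(frame, wavelet=wavelet, level=level)
    feats = []
    cA = coeffs[0]                               # coarsest approximation band
    feats.extend([cA.mean(), cA.std()])
    for (cH, cV, cD) in coeffs[1:]:              # detail sub-bands per level
        for band in (cH, cV, cD):
            feats.extend([np.abs(band).mean(), band.std()])
    return np.asarray(feats)

# usage: one feature vector per frame (random 64x64 stand-in for a silhouette)
frame = np.random.rand(64, 64)
fv = dwt_features(frame, level=2)
print(fv.shape)
```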

  9. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.

    PubMed

    Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi

    2012-12-01

    We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in the upper or lower half of participants' faces on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, and both manipulations affected recognition of fear; recognition of surprise and sadness was not affected by either blocking manipulation. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in comprehension of others' emotional facial expressions. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  10. Dynamic facial expression recognition based on geometric and texture features

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method by using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method can achieve a competitive performance with other methods.

  11. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognitions for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  12. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
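
    The low-rank plus sparse decomposition at the heart of robust PCA can be sketched with a standard principal component pursuit solver. The following minimal ADMM-style implementation is an assumption-laden illustration (the default λ, μ, and stopping rule follow common conventions) and is not the authors' implementation.

```python
# Hedged sketch: Robust PCA (principal component pursuit) via a simple
# inexact-ALM / ADMM loop, splitting a data matrix into low-rank + sparse parts.
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def robust_pca(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose M into L (low rank) + S (sparse error)."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or (m * n) / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    norm_M = np.linalg.norm(M, "fro")
    for _ in range(max_iter):
        L = svd_threshold(M - S + Y / mu, 1.0 / mu)    # nuclear-norm prox step
        S = soft_threshold(M - L + Y / mu, lam / mu)   # l1-norm prox step
        residual = M - L - S
        Y = Y + mu * residual                          # dual variable update
        if np.linalg.norm(residual, "fro") / norm_M < tol:
            break
    return L, S

# usage: rows are vectorized training iris images (synthetic stand-in here);
# L would be used for feature extraction, S absorbs occlusions and reflections
M = np.random.rand(50, 400)
L, S = robust_pca(M)
```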

  13. The effects of digital signal processing features on children's speech recognition and loudness perception.

    PubMed

    Crukley, Jeffery; Scollie, Susan D

    2014-03-01

    The purpose of this study was to determine the effects of hearing instruments set to Desired Sensation Level version 5 (DSL v5) hearing instrument prescription algorithm targets and equipped with directional microphones and digital noise reduction (DNR) on children's sentence recognition in noise performance and loudness perception in a classroom environment. Ten children (ages 8-17 years) with stable, congenital sensorineural hearing losses participated in the study. Participants were fitted bilaterally with behind-the-ear hearing instruments set to DSL v5 prescriptive targets. Sentence recognition in noise was evaluated using the Bamford-Kowal-Bench Speech in Noise Test (Niquette et al., 2003). Loudness perception was evaluated using a modified version of the Contour Test of Loudness Perception (Cox, Alexander, Taylor, & Gray, 1997). Children's sentence recognition in noise performance was significantly better when using directional microphones alone or in combination with DNR than when using omnidirectional microphones alone or in combination with DNR. Children's loudness ratings for sounds above 72 dB SPL were lowest when fitted with the DSL v5 Noise prescription combined with directional microphones. DNR use showed no effect on loudness ratings. Use of the DSL v5 Noise prescription with a directional microphone improved sentence recognition in noise performance and reduced loudness perception ratings for loud sounds relative to a typical clinical reference fitting with the DSL v5 Quiet prescription with no digital signal processing features enabled. Potential clinical strategies are discussed.

  14. Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition (L)

    NASA Astrophysics Data System (ADS)

    Scharenborg, Odette; ten Bosch, Louis; Boves, Lou; Norris, Dennis

    2003-12-01

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189-234 (1994)]. Experiments based on "real-life" speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.

  15. CNNs flag recognition preprocessing scheme based on gray scale stretching and local binary pattern

    NASA Astrophysics Data System (ADS)

    Gong, Qian; Qu, Zhiyi; Hao, Kun

    2017-07-01

    Flags are a rather special target in image recognition because they are non-rigid and vary in location, scale, and rotation. The location changes can be handled well by the deep learning algorithm Convolutional Neural Networks (CNNs), but the scale and rotation changes remain quite a challenge for CNNs. Because the local binary pattern (LBP) has good rotation and gray-scale invariance, LBP and gray-scale stretching are combined as a pretreatment stage for CNNs. This not only significantly improves the efficiency of flag recognition but also allows the recognition performance to be evaluated through ROC, accuracy, MSE and quality factor.
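
    A minimal sketch of the described pretreatment, assuming scikit-image: stretch the gray levels of the input image, then compute a rotation-invariant LBP map that can be fed to a CNN as a single channel. The percentile range, the LBP neighborhood (P, R), and the "uniform" variant are illustrative choices, not the paper's settings.

```python
# Hedged sketch: gray-scale stretching + rotation-invariant LBP as CNN input.
# Parameter choices are illustrative assumptions.
import numpy as np
from skimage import exposure
from skimage.feature import local_binary_pattern

def preprocess_flag(gray_img, P=8, R=1):
    # 1) gray-scale stretching: map the 2nd..98th percentile range to full scale
    p2, p98 = np.percentile(gray_img, (2, 98))
    stretched = exposure.rescale_intensity(gray_img, in_range=(p2, p98))
    # 2) rotation-invariant uniform LBP map
    lbp = local_binary_pattern(stretched, P, R, method="uniform")
    # normalize to [0, 1] so it can be fed to a CNN as a single channel
    return lbp / lbp.max()

img = np.random.rand(64, 96)          # stand-in for a grayscale flag image
cnn_input = preprocess_flag(img)
```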

  16. Enhancing Adsorption Capacity while Maintaining Specific Recognition Performance of Mesoporous Silica: A Novel Imprinting Strategy with Amphiphilic Ionic Liquid as Surfactant.

    PubMed

    Ding, Shichao; Li, Zhiling; Cheng, Yuan; Du, Chunbao; Gao, Junfeng; Zhang, Yong-Wei; Zhang, Nan; Li, Zhaotong; Chang, Ninghui; Hu, Xiaoling

    2018-06-21

    In order to facilitate the broad application of molecular recognition materials in biomedical areas, it is critical to enhance their adsorption capacity while maintaining their excellent recognition performance. In this work, we designed and synthesized well-defined peptide-imprinted mesoporous silica (PIMS) for specific recognition of an immunostimulating hexapeptide from human casein (IHHC) by using an amphiphilic ionic liquid as the surfactant to anchor IHHC via a combination of a one-step sol-gel method and a docking-oriented imprinting approach. Theoretical calculations were employed to reveal the multiple binding interactions and dual-template configuration between the amphiphilic ionic liquid and IHHC. The fabricated PIMS was characterized, and an in-depth analysis of the specific recognition mechanism was conducted. Results revealed that both the adsorption and recognition capabilities of PIMS far exceeded those of the corresponding non-imprinted material (NIMS). More significantly, the PIMS exhibited a superior binding capacity (60.5 mg g^-1), an 18.9% increase over previous work. The corresponding imprinting factor and selectivity coefficient reached 4.51 and 3.30, respectively. The PIMS also showed very fast binding kinetics for IHHC, with an equilibrium time of only 10 min. All of these merits were due to the high surface area and the synergistic effect of multiple interactions (including hydrogen bonding, π-π stacking, ion-ion electrostatic interactions and van der Waals interactions, etc.) between PIMS and IHHC at the imprinted sites. The present work suggests the potential application of PIMS for large-scale and highly effective separation of IHHC, which may lead to broad applications in drug/gene delivery, biosensors, catalysis and so on. © 2018 IOP Publishing Ltd.

  17. Constructing a safety and security system by medical applications of a fast face recognition optical parallel correlator

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Murakami, Yasuo; Kodate, Kashiko

    2006-01-01

    Medical errors and patient safety have always received a great deal of attention, as they can be life-threatening. Hospitals and medical personnel are trying their utmost to avoid these errors. Currently in the medical field, patients' records are identified through PIN numbers and ID cards. However, for patients who cannot speak or move, or who suffer from memory disturbances, alternative methods would be more desirable, and necessary in some cases. The authors previously proposed and fabricated a specially designed correlator called FARCO (Fast Face Recognition Optical Correlator) based on the Vanderlugt correlator, which operates at a speed of 1000 faces/s. Combined with high-speed display devices, four-channel processing can achieve an operational speed of 4000 faces/s. Running trial experiments on a 1-to-N identification basis using the optical parallel correlator, we succeeded in acquiring low error rates of 1% FMR and 2.3% FNMR. In this paper, we propose a robust face recognition system using the FARCO, focusing on safety and security in the medical field. We apply our face recognition system to the registration of inpatients, in particular children and infants, before and after medical treatments or operations. The proposed system has recorded a higher recognition rate by multiplexing both input and database facial images from moving images. The system was also tested and evaluated for further practical use, with excellent results. Hence, our face recognition system could function effectively as an integral part of a medical system, meeting the essential requirements of safety, security and privacy.

  18. Sparse and redundant representations for inverse problems and recognition

    NASA Astrophysics Data System (ADS)

    Patel, Vishal M.

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for noise shrinkage at each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique is more flexible than its competitors in working with either random or restricted sampling scenarios. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of transmitted and/or received electromagnetic waveforms. We demonstrate that this new imaging scheme requires no new hardware components and allows the aperture to be compressed. It also offers many new applications and advantages, including strong resistance to countermeasures and interception, imaging of much wider swaths, and reduced on-board storage requirements. The last part of the dissertation deals with object recognition based on learning dictionaries for simultaneous sparse signal approximation and feature extraction. A dictionary is learned for each object class from the given training examples by minimizing the representation error under a sparseness constraint. A novel test image is then projected onto the span of the atoms in each learned dictionary. The residual vectors along with the coefficients are then used for recognition. Applications to illumination-robust face recognition and automatic target recognition are presented.
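
    The recognition scheme in the last part (one learned dictionary per class, classification by reconstruction residual) can be sketched with off-the-shelf tools. The snippet below uses scikit-learn's dictionary learning and orthogonal matching pursuit as stand-ins; the atom count and sparsity level are illustrative assumptions, not the dissertation's training procedure.

```python
# Hedged sketch: per-class dictionary learning with residual-based classification.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

def learn_class_dictionaries(X, y, n_atoms=32, alpha=1.0):
    """Learn one dictionary (atoms as rows) per class from its training samples."""
    dicts = {}
    for c in np.unique(y):
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                         random_state=0)
        dl.fit(X[y == c])
        dicts[c] = dl.components_              # shape: (n_atoms, n_features)
    return dicts

def classify_by_residual(x, dicts, n_nonzero=5):
    """Sparse-code x against each class dictionary; pick the smallest residual."""
    best_c, best_res = None, np.inf
    for c, D in dicts.items():
        # OMP solves x ~= D.T @ a under a sparsity constraint on a
        a = orthogonal_mp(D.T, x, n_nonzero_coefs=n_nonzero)
        res = np.linalg.norm(x - D.T @ a)
        if res < best_res:
            best_c, best_res = c, res
    return best_c

# toy usage with random stand-in features
X = np.random.rand(100, 64); y = np.repeat([0, 1], 50)
dicts = learn_class_dictionaries(X, y)
pred = classify_by_residual(X[0], dicts)
```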

  19. 3D automatic anatomy segmentation based on iterative graph-cut-ASM.

    PubMed

    Chen, Xinjian; Bagci, Ulas

    2011-08-01

    This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprised of images obtained from 20 patients (10 male and 10 female) in clinical abdominal CT scans, and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10 degrees and 0.03, and over all foot bones are about 3.5709 mm, 0.35 degrees and 0.025, respectively. The accuracy of delineation over all organs for all subjects as expressed in TPVF and FPVF is 93.01% and 0.22%, and over all foot bones for all subjects 93.75% and 0.28%, respectively. While the delineations for the four organs can be accomplished quite rapidly, with an average of 78 s, the delineations for the five foot bones can be accomplished with an average of 70 s. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy segmentation system: (a) the incorporation of shape priors into the GC framework is feasible in 3D, as demonstrated previously for 2D images; (b) our results in 3D confirm the accuracy behavior observed in 2D, and the hybrid strategy IGCASM seems to be more robust and accurate than ASM and GC individually; and (c) delineations within body regions and foot bones of clinical importance can be accomplished quite rapidly, within 1.5 min.

  20. Integrated structural biology to unravel molecular mechanisms of protein-RNA recognition.

    PubMed

    Schlundt, Andreas; Tants, Jan-Niklas; Sattler, Michael

    2017-04-15

    Recent advances in RNA sequencing technologies have greatly expanded our knowledge of the RNA landscape in cells, often with spatiotemporal resolution. These techniques identified many new (often non-coding) RNA molecules. Large-scale studies have also discovered novel RNA binding proteins (RBPs), which exhibit single or multiple RNA binding domains (RBDs) for recognition of specific sequence or structured motifs in RNA. Starting from these large-scale approaches, it is crucial to unravel the molecular principles of protein-RNA recognition in ribonucleoprotein complexes (RNPs) to understand the underlying mechanisms of gene regulation. Structural biology and biophysical studies at the highest possible resolution are key to elucidating molecular mechanisms of RNA recognition by RBPs and how conformational dynamics, weak interactions and cooperative binding contribute to the formation of specific, context-dependent RNPs. While large compact RNPs can be well studied by X-ray crystallography and cryo-EM, analysis of dynamics and weak interactions necessitates the use of solution methods to capture these properties. Here, we illustrate methods to study the structure and conformational dynamics of protein-RNA complexes in solution starting from the identification of interaction partners in a given RNP. Biophysical and biochemical techniques support the characterization of a protein-RNA complex and identify regions relevant for structural analysis. Nuclear magnetic resonance (NMR) is a powerful tool to gain information on folding, stability and dynamics of RNAs and characterize RNPs in solution. It provides crucial information that is complementary to the static pictures derived from other techniques. NMR can be readily combined with other solution techniques, such as small angle X-ray and/or neutron scattering (SAXS/SANS), electron paramagnetic resonance (EPR), and Förster resonance energy transfer (FRET), which provide information about overall shapes, internal domain arrangements and dynamics. Principles of protein-RNA recognition and current approaches are reviewed and illustrated with recent studies. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Exploring 3D Human Action Recognition: from Offline to Online.

    PubMed

    Liu, Zhenyu; Li, Rui; Tan, Jianrong

    2018-02-20

    With the introduction of cost-effective depth sensors, a tremendous amount of research has been devoted to studying human action recognition using 3D motion data. However, most existing methods work in an offline fashion, i.e., they operate on a segmented sequence. There are a few methods specifically designed for online action recognition, which continually predict action labels as a streaming sequence proceeds. In view of this fact, we propose a question: can we draw inspiration and borrow techniques or descriptors from existing offline methods, and then apply these to online action recognition? Note that extending offline techniques or descriptors to online applications is not straightforward, since at least two problems (real-time performance and sequence segmentation) are usually not considered in offline action recognition. In this paper, we give a positive answer to the question. To develop applicable online action recognition methods, we carefully explore feature extraction, sequence segmentation, computational costs, and classifier selection. The effectiveness of the developed methods is validated on the MSR 3D Online Action dataset and the MSR Daily Activity 3D dataset.

  2. Exploring 3D Human Action Recognition: from Offline to Online

    PubMed Central

    Li, Rui; Liu, Zhenyu; Tan, Jianrong

    2018-01-01

    With the introduction of cost-effective depth sensors, a tremendous amount of research has been devoted to studying human action recognition using 3D motion data. However, most existing methods work in an offline fashion, i.e., they operate on a segmented sequence. There are a few methods specifically designed for online action recognition, which continually predict action labels as a streaming sequence proceeds. In view of this fact, we propose a question: can we draw inspiration and borrow techniques or descriptors from existing offline methods, and then apply these to online action recognition? Note that extending offline techniques or descriptors to online applications is not straightforward, since at least two problems—including real-time performance and sequence segmentation—are usually not considered in offline action recognition. In this paper, we give a positive answer to the question. To develop applicable online action recognition methods, we carefully explore feature extraction, sequence segmentation, computational costs, and classifier selection. The effectiveness of the developed methods is validated on the MSR 3D Online Action dataset and the MSR Daily Activity 3D dataset. PMID:29461502

  3. View-invariant gait recognition method by three-dimensional convolutional neural network

    NASA Astrophysics Data System (ADS)

    Xing, Weiwei; Li, Ying; Zhang, Shunli

    2018-01-01

    Gait, as an important biometric feature, can identify a human at a long distance. View change is one of the most challenging factors for gait recognition. To address the cross-view issues in gait recognition, we propose a view-invariant gait recognition method based on a three-dimensional (3-D) convolutional neural network. First, the 3-D convolutional neural network (3DCNN) is introduced to learn view-invariant features, capturing spatial and temporal information simultaneously on normalized silhouette sequences. Second, a network training method based on cross-domain transfer learning is proposed to solve the problem of limited gait training samples. We choose C3D as the base model, which is pretrained on Sports-1M, and then fine-tune the C3D model to adapt it to gait recognition. In the recognition stage, we use the fine-tuned model to extract gait features and use the Euclidean distance to measure the similarity of gait sequences. Extensive experiments are carried out on the CASIA-B dataset, and the experimental results demonstrate that our method outperforms many other methods.

  4. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
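
    The decision-level fusion step can be illustrated with a small sketch: the two modality distances are weighted, summed, and the gallery identity with the smallest fused distance is returned. The weights here are placeholders, not the coefficients determined in the paper.

```python
# Hedged sketch: decision-level fusion of two modality distances (for example,
# CLDP-Gabor and CLDP-Depth distances as described above). Weights are illustrative.
import numpy as np

def fused_identity(d_gabor, d_depth, w_gabor=0.6, w_depth=0.4):
    """d_gabor, d_depth: distances from the probe to each gallery identity.
    Returns the index of the identity with the smallest weighted total distance."""
    total = w_gabor * np.asarray(d_gabor) + w_depth * np.asarray(d_depth)
    return int(np.argmin(total))

# usage: distances of one probe face to 4 enrolled identities
print(fused_identity([0.8, 0.3, 0.9, 0.6], [0.7, 0.4, 0.2, 0.9]))   # -> 1
```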

  5. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations and has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress, a few practical issues remain, and providing accurate palm vein readings is still an open problem in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results, based on two public databases (the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.

  6. Deep feature extraction and combination for synthetic aperture radar target classification

    NASA Astrophysics Data System (ADS)

    Amrani, Moussa; Jiang, Feng

    2017-10-01

    Feature extraction has always been a difficult problem in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). It is very important to select discriminative features to train a classifier, which is a prerequisite. Inspired by the great success of convolutional neural network (CNN), we address the problem of SAR target classification by proposing a feature extraction method, which takes advantage of exploiting the extracted deep features from CNNs on SAR images to introduce more powerful discriminative features and robust representation ability for them. First, the pretrained VGG-S net is fine-tuned on moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after a simple preprocessing is performed, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused by using a traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, K-nearest neighbors algorithm based on LogDet divergence-based metric learning triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.
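
    A hedged sketch of the "fixed feature extractor" idea follows. VGG-S is not available in torchvision, so VGG-16 is used as a stand-in, features from two fully connected layers are concatenated (in place of the paper's DCA fusion), and a plain Euclidean k-NN replaces the LogDet metric-learning classifier; all of these substitutions are assumptions for illustration only.

```python
# Hedged sketch: a pretrained CNN as a fixed feature extractor for SAR chips,
# with simple feature concatenation and a k-NN classifier (stand-ins for the
# original fine-tuned VGG-S, DCA fusion, and LogDet metric learning).
# Assumes the torchvision >= 0.13 weights API.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.neighbors import KNeighborsClassifier

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

def deep_features(img_tensor):
    """Extract and concatenate activations of the first two FC layers."""
    with torch.no_grad():
        x = vgg.features(img_tensor.unsqueeze(0))
        x = vgg.avgpool(x).flatten(1)
        f1 = vgg.classifier[0](x)                  # first FC layer (4096-d)
        f2 = vgg.classifier[3](torch.relu(f1))     # second FC layer (4096-d)
    return torch.cat([f1, f2], dim=1).squeeze(0).numpy()

preprocess = T.Compose([T.Resize((224, 224)),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

# toy usage: random "SAR chips" replicated to 3 channels
chips = [torch.rand(3, 128, 128) for _ in range(6)]
labels = [0, 0, 1, 1, 2, 2]
feats = np.stack([deep_features(preprocess(c)) for c in chips])
knn = KNeighborsClassifier(n_neighbors=1).fit(feats, labels)
print(knn.predict(feats[:1]))
```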

  7. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    The least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) with the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained while handling high-dimensional and large-scale databases. A U.S. commercial credit card database is used to test the efficiency of our method, and the results prove satisfactory.
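
    A minimal sketch of a comparable two-layer pipeline, assuming scikit-learn: kernel PCA as the first (nonlinear feature extraction) layer and a linear least-squares classifier as the second. RidgeClassifier is only a stand-in for the linear-programming LS-SVM formulation, and the hyperparameters are illustrative.

```python
# Hedged sketch: KPCA feature extraction followed by a linear least-squares
# classifier, in the spirit of the two-layer method described above.
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
model = make_pipeline(
    KernelPCA(n_components=10, kernel="rbf", gamma=0.05),  # nonlinear layer
    RidgeClassifier(alpha=1.0),                            # linear LS layer
)
print(cross_val_score(model, X, y, cv=5).mean())
```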

  8. Fine-grained recognition of plants from images.

    PubMed

    Šulc, Milan; Matas, Jiří

    2017-01-01

    Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks. The methods are evaluated and compared to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide an insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.

  9. Physical Principles of the Method for Determination of Geometrical Characteristics and Particle Recognition in Digital Holography

    NASA Astrophysics Data System (ADS)

    Dyomin, V. V.; Polovtsev, I. G.; Davydova, A. Yu.

    2018-03-01

    The physical principles of a method for determination of geometrical characteristics of particles and particle recognition based on the concepts of digital holography, followed by processing of the particle images reconstructed from the digital hologram, using the morphological parameter are reported. An example of application of this method for fast plankton particle recognition is given.

  10. Robust keyword retrieval method for OCRed text

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and the scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, robustness against character segmentation errors is achieved by allowing the insertion of noise characters and the dropping of characters during keyword retrieval, and robustness against character recognition errors is achieved by allowing each keyword character to be substituted with one of the OCR recognition candidates for that character, or with any other character. The recall rate of the proposed method was 15% higher than that of the conventional method; however, the precision rate was 64% lower.
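
    The retrieval idea can be sketched as an edit-distance style alignment in which a keyword character matches at no cost if it appears among the OCR candidates for that position, while insertions and deletions absorb segmentation errors. The costs and acceptance threshold below are illustrative assumptions, not the paper's tuned values.

```python
# Hedged sketch: fuzzy keyword matching over OCR output that tolerates both
# segmentation errors (insertions/deletions) and recognition errors (a keyword
# character matches if it is among the OCR candidates at that position).
def fuzzy_match_cost(keyword, ocr_candidates,
                     sub_cost=1.0, ins_cost=0.7, del_cost=0.7):
    """ocr_candidates: list of candidate-character sets, one per OCR position.
    Returns the minimum alignment cost of the keyword against the OCR text."""
    n, m = len(keyword), len(ocr_candidates)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]   # row 0: keyword may start anywhere
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + del_cost
        for j in range(1, m + 1):
            match = 0.0 if keyword[i - 1] in ocr_candidates[j - 1] else sub_cost
            dp[i][j] = min(dp[i - 1][j - 1] + match,   # (mis)match / substitution
                           dp[i][j - 1] + ins_cost,    # noise character inserted
                           dp[i - 1][j] + del_cost)    # keyword character dropped
    return min(dp[n])                                  # keyword may end anywhere

# usage: OCR read "recogmtion" for "recognition", with per-position candidates
cands = [{c} for c in "recogmtion"]
cands[5] = {"m", "n"}                         # OCR's alternatives for position 6
print(fuzzy_match_cost("recognition", cands) <= 1.0)   # accept within threshold
```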

  11. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures as feature descriptions generated frame by frame for each gesture of the alphabet. The recognition algorithm takes a video sequence (a sequence of frames) as input, matches each frame against the gestures in the database, or decides that no suitable gesture exists in the database. First, each frame of the video sequence is classified separately, without inter-frame information. Then, a run of consecutive frames labeled with the same gesture is grouped into a single static gesture. We propose a method of combined frame segmentation using the depth map and the RGB image. The primary segmentation is based on the depth map: it provides positional information and a rough border of the hands. The border is then refined using the color image, and the shape of the hand is analyzed. The continuous-skeleton method is used to generate features, and we propose a method based on terminal skeleton branches that makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the example of American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.

  12. Recognition of complex human behaviours using 3D imaging for intelligent surveillance applications

    NASA Astrophysics Data System (ADS)

    Yao, Bo; Lepley, Jason J.; Peall, Robert; Butler, Michael; Hagras, Hani

    2016-10-01

    We introduce a system that exploits 3-D imaging technology as an enabler for the robust recognition of the human form. We combine this with pose and feature recognition capabilities from which we can recognise high-level human behaviours. We propose a hierarchical methodology for the recognition of complex human behaviours, based on the identification of a set of atomic behaviours, individual and sequential poses (e.g. standing, sitting, walking, drinking and eating) that provides a framework from which we adopt time-based machine learning techniques to recognise complex behaviour patterns.

  13. Real-time traffic sign recognition based on a general purpose GPU and deep-learning.

    PubMed

    Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or a wide variance of lighting conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collected dataset, which uses the Vienna Convention traffic rules (Germany and South Korea).

  14. Multiscale moment-based technique for object matching and recognition

    NASA Astrophysics Data System (ADS)

    Thio, HweeLi; Chen, Liya; Teoh, Eam-Khwang

    2000-03-01

    A new method is proposed to extract features from an object for matching and recognition. The proposed features are a combination of local and global characteristics -- local characteristics from the 1-D signature function that is defined at each pixel on the object boundary, and global characteristics from the moments that are generated from the signature function. The boundary of the object is first extracted; then the signature function is generated by computing the angle between two lines from every point on the boundary, as a function of position along the boundary. This signature function is position, scale and rotation invariant (PSRI). The shape of the signature function is then described quantitatively by using moments. The moments of the signature function are thus global characteristics of a local feature set. Using moments as the eventual features instead of the signature function reduces the time and complexity of an object matching application. Multiscale moments are implemented to produce several sets of moments that generate more accurate matching. The multiscale technique is essentially a coarse-to-fine procedure and makes the proposed method more robust to noise. The method is proposed to match and recognize objects under simple transformations, such as translation, scale changes, rotation and skewing. A simple logo indexing system is implemented to illustrate the performance of the proposed method.
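
    A minimal sketch of the signature-plus-moments feature, under assumed parameter choices: the signature here is the angle at each boundary point between lines to points k steps behind and ahead, and a few central moments of that signature serve as the compact global descriptor.

```python
# Hedged sketch: a 1-D angle signature along an object boundary and its moments.
# The step size k and the moment orders are illustrative assumptions.
import numpy as np

def angle_signature(boundary, k=5):
    """boundary: (N, 2) array of ordered points on a closed contour."""
    prev_pts = np.roll(boundary, k, axis=0)
    next_pts = np.roll(boundary, -k, axis=0)
    v1 = prev_pts - boundary
    v2 = next_pts - boundary
    cosang = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-12)
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def signature_moments(sig, orders=(2, 3, 4)):
    """Central moments of the signature, used as compact global features."""
    mu = sig.mean()
    return np.array([np.mean((sig - mu) ** p) for p in orders])

# usage: a noisy circle as the object boundary
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(200, 2)
feats = signature_moments(angle_signature(contour))
```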

  15. Visual cluster analysis and pattern recognition methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    2001-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable and improve pattern recognition techniques.

  16. Effectiveness of feature and classifier algorithms in character recognition systems

    NASA Astrophysics Data System (ADS)

    Wilson, Charles L.

    1993-04-01

    At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for more than character recognition systems. Most systems were tested on the recognition of isolated digits and upper and lower case alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits, and 12,000 upper and lower case alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included Multi-Layer Perceptrons, Learning Vector Quantization, Neocognitrons, and cascaded neural networks. In this paper 11 different systems are compared using correlations between the answers of different systems, comparing the decrease in error rate as a function of confidence of recognition, and comparing the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority to more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.

  17. Memantine and recognition memory: possible facilitation of its behavioral effects by the nitric oxide (NO) donor molsidomine.

    PubMed

    Pitsikas, Nikolaos; Sakellaridis, Nikolaos

    2007-10-01

    The effects of the non-competitive N-methyl-D-aspartate (NMDA) receptor antagonist memantine on recognition memory were investigated in the rat by using the object recognition task. In addition, a possible interaction between memantine and the nitric oxide (NO) donor molsidomine in antagonizing extinction of recognition memory was also evaluated utilizing the same behavioral procedure. In a first dose-response study, post-training administration of memantine (10 and 20, but not 3 mg/kg) antagonized recognition memory deficits in the rat, suggesting that memantine modulates storage and/or retrieval of information. In a subsequent study, combination of sub-threshold doses of memantine (3 mg/kg) and the NO donor molsidomine (1 mg/kg) counteracted delay-dependent impairments in the same task. Neither memantine (3 mg/kg) nor molsidomine (1 mg/kg) alone reduced object recognition performance deficits. The present findings indicate a) that memantine is involved in recognition memory and b) support a functional interaction between memantine and molsidomine on recognition memory mechanisms.

  18. Method of synthesized phase objects for pattern recognition with rotation invariance

    NASA Astrophysics Data System (ADS)

    Ostroukh, Alexander P.; Butok, Alexander M.; Shvets, Rostislav A.; Yezhov, Pavel V.; Kim, Jin-Tae; Kuzmenko, Alexander V.

    2015-11-01

    We present a development of the method of synthesized phase objects (SPO-method) [1] for rotation-invariant pattern recognition. For the standard recognition method and the SPO-method, the parameters of the correlation signals for a number of amplitude objects are compared under rotation, using an optical-digital correlator with joint Fourier transformation. It is shown that the joint correlation of synthesized phase objects (SP-objects) attains not only invariance with respect to rotation but also the main advantage of the SP-object method over the reference one: a unified δ-like recognition signal with the largest possible signal-to-noise ratio, independent of the type of object.

  19. Analysis and Recognition of Traditional Chinese Medicine Pulse Based on the Hilbert-Huang Transform and Random Forest in Patients with Coronary Heart Disease

    PubMed Central

    Wang, Yiqin; Yan, Hanxia; Yan, Jianjun; Yuan, Fengyin; Xu, Zhaoxia; Liu, Guoping; Xu, Wenjie

    2015-01-01

    Objective. This research provides objective and quantitative parameters of the traditional Chinese medicine (TCM) pulse conditions for distinguishing between patients with the coronary heart disease (CHD) and normal people by using the proposed classification approach based on Hilbert-Huang transform (HHT) and random forest. Methods. The energy and the sample entropy features were extracted by applying the HHT to TCM pulse by treating these pulse signals as time series. By using the random forest classifier, the extracted two types of features and their combination were, respectively, used as input data to establish classification model. Results. Statistical results showed that there were significant differences in the pulse energy and sample entropy between the CHD group and the normal group. Moreover, the energy features, sample entropy features, and their combination were inputted as pulse feature vectors; the corresponding average recognition rates were 84%, 76.35%, and 90.21%, respectively. Conclusion. The proposed approach could be appropriately used to analyze pulses of patients with CHD, which can lay a foundation for research on objective and quantitative criteria on disease diagnosis or Zheng differentiation. PMID:26180536
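
    A rough sketch of the feature-plus-classifier pipeline follows, assuming the PyEMD package ("EMD-signal" on PyPI) for empirical mode decomposition; the hand-rolled sample entropy, the number of IMFs kept, and the forest size are illustrative assumptions rather than the authors' exact procedure.

```python
# Hedged sketch: HHT-style features (per-IMF energy and sample entropy)
# followed by a random forest classifier.
import numpy as np
from PyEMD import EMD                       # assumed: PyEMD / "EMD-signal" package
from sklearn.ensemble import RandomForestClassifier

def sample_entropy(x, m=2, r_frac=0.2):
    """Simple sample entropy (illustrative implementation)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    N = len(x)
    def pairs_within_r(mm):
        templ = np.array([x[i:i + mm] for i in range(N - m)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2.0      # i != j template pairs
    A, B = pairs_within_r(m + 1), pairs_within_r(m)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def pulse_features(signal, n_imfs=4):
    """Energy and sample entropy of the first few IMFs of the pulse signal."""
    signal = np.asarray(signal, dtype=float)
    imfs = EMD().emd(signal)
    feats = []
    for i in range(n_imfs):
        imf = imfs[i] if i < len(imfs) else np.zeros_like(signal)
        feats.append(np.sum(imf ** 2))        # IMF energy
        feats.append(sample_entropy(imf))     # IMF sample entropy
    return np.asarray(feats)

# toy usage: synthetic "pulse" signals for two groups
t = np.linspace(0, 4, 400)
signals = [np.sin(2 * np.pi * (1.0 + 0.2 * k) * t) + 0.1 * np.random.randn(t.size)
           for k in range(20)]
labels = [k % 2 for k in range(20)]
X = np.vstack([pulse_features(s) for s in signals])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```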

  20. Analysis and Recognition of Traditional Chinese Medicine Pulse Based on the Hilbert-Huang Transform and Random Forest in Patients with Coronary Heart Disease.

    PubMed

    Guo, Rui; Wang, Yiqin; Yan, Hanxia; Yan, Jianjun; Yuan, Fengyin; Xu, Zhaoxia; Liu, Guoping; Xu, Wenjie

    2015-01-01

    Objective. This research provides objective and quantitative parameters of the traditional Chinese medicine (TCM) pulse conditions for distinguishing between patients with the coronary heart disease (CHD) and normal people by using the proposed classification approach based on Hilbert-Huang transform (HHT) and random forest. Methods. The energy and the sample entropy features were extracted by applying the HHT to TCM pulse by treating these pulse signals as time series. By using the random forest classifier, the extracted two types of features and their combination were, respectively, used as input data to establish classification model. Results. Statistical results showed that there were significant differences in the pulse energy and sample entropy between the CHD group and the normal group. Moreover, the energy features, sample entropy features, and their combination were inputted as pulse feature vectors; the corresponding average recognition rates were 84%, 76.35%, and 90.21%, respectively. Conclusion. The proposed approach could be appropriately used to analyze pulses of patients with CHD, which can lay a foundation for research on objective and quantitative criteria on disease diagnosis or Zheng differentiation.

  1. The image recognition based on neural network and Bayesian decision

    NASA Astrophysics Data System (ADS)

    Wang, Chugege

    2018-04-01

    Artificial neural networks, an important part of artificial intelligence, originated in the 1940s. At present, they are a hot topic in the fields of neuroscience, computer science, brain science, mathematics, and psychology. Thomas Bayes's theorem was first published in 1763. After further development in the twentieth century, it has become widespread in all areas of statistics. In recent years, owing to solutions to the problem of high-dimensional integral calculation, Bayesian statistics has advanced theoretically, solving many problems that classical statistics cannot and finding application in interdisciplinary fields. In this paper, the related concepts and principles of artificial neural networks are introduced. The paper also summarizes the basic content and principles of Bayesian statistics, and combines artificial neural network technology with Bayesian decision theory, applying them to various aspects of image recognition, such as an enhanced face detection method based on a neural network and Bayesian decision, as well as image classification based on Bayesian decision theory. It can be seen that the combination of artificial intelligence and statistical algorithms has long been a hot research topic.

  2. Analysis of chemical signals in red fire ants by gas chromatography and pattern recognition techniques

    USDA-ARS?s Scientific Manuscript database

    The combination of gas chromatography and pattern recognition (GC/PR) analysis is a powerful tool for investigating complicated biological problems. Clustering, mapping, discriminant development, etc. are necessary to analyze realistically large chromatographic data sets and to seek meaningful relat...

  3. Development of Collaborative Research Initiatives to Advance the Aerospace Sciences-via the Communications, Electronics, Information Systems Focus Group

    NASA Technical Reports Server (NTRS)

    Knasel, T. Michael

    1996-01-01

    The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms, including evolutionary learning algorithms, neural networks, and adaptive clustering techniques, to develop the E-MORPH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is a multi-phase evolutionary learning system that evolves cooperative sets of feature detectors and combines their responses using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images that are formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the learning process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report. The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because the individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems to solve this task. The dual use of this ATR technology clearly demonstrates the flexibility and power of our approach.

  4. Personal recognition using hand shape and texture.

    PubMed

    Kumar, Ajay; Zhang, David

    2006-08-01

    This paper proposes a new bimodal biometric system using feature-level fusion of hand shape and palm texture. The proposed combination is of significance since both the palmprint and hand-shape images are extracted from a single hand image acquired with a digital camera. Several new hand-shape features that can be used to represent the hand shape and improve performance are investigated. A new approach for palmprint recognition using discrete cosine transform coefficients, which can be directly obtained from the camera hardware, is demonstrated. None of the prior work on hand-shape or palmprint recognition has given any attention to the critical issue of feature selection. Our experimental results demonstrate that while the majority of palmprint or hand-shape features are useful in predicting the subject's identity, only a small subset of these features is necessary in practice for building an accurate model for identification. The comparison and combination of the proposed features are evaluated on diverse classification schemes: naive Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM, and FFN. Although more work remains to be done, our results to date indicate that the combination of selected hand-shape and palmprint features constitutes a promising addition to biometrics-based personal recognition systems.

  5. Protein model quality assessment prediction by combining fragment comparisons and a consensus Cα contact potential

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2009-01-01

    In this work, we develop a fully automated method for the quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers, or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 for the 98 targets which is better than those of other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3% as assessed by the total GDT-score of the selected top models. PMID:18004783

  6. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  7. Artificial receptor-functionalized nanoshell: facile preparation, fast separation and specific protein recognition

    NASA Astrophysics Data System (ADS)

    Ouyang, Ruizhuo; Lei, Jianping; Ju, Huangxian

    2010-05-01

    This work combined molecular imprinting technology with superparamagnetic nanospheres as the core to prepare artificial receptor-functionalized magnetic nanoparticles for the separation of homologous proteins. Using dopamine as a functional monomer, novel surface protein-imprinted superparamagnetic polydopamine (PDA) core-shell nanoparticles were successfully prepared under physiological conditions, which preserved the natural structure of the protein template and extended molecularly imprinted polymers (MIPs) from one-dimensional to zero-dimensional formats for efficient recognition of large biomolecules. The resulting nanoparticles could be used for convenient magnetic separation of homologous proteins with high specificity. The nanoparticles possessed good monodispersibility, uniform surface morphology and a high saturation magnetization value. The bound amounts of template proteins measured by indirect and direct methods were in good agreement. The maximum number of imprinted cavities on the surface of the bovine hemoglobin (Hb)-imprinted nanoshell was 2.21 × 10¹⁸ g⁻¹, which matched their maximum binding capacity toward bovine Hb well. Both the simple MIP preparation method and the magnetic nanospheres showed good application potential in the fast separation, effective concentration and selective biosensing of large protein molecules.

  8. [Study on brand traceability of vinegar based on near infrared spectroscopy technology].

    PubMed

    Guan, Xiao; Liu, Jing; Gu, Fang-Qing; Yang, Yong-Jian

    2014-09-01

    In the present paper, 152 vinegar samples from four different brands were chosen as research targets, and their near infrared spectra were collected in diffuse reflection mode and transmission mode, respectively. Brand traceability models for edible vinegar were then constructed. The effects of the collection mode and of spectral pretreatment methods on the precision of the traceability models were investigated in detail. Models constructed by the PLS1-DA modeling method using the spectra of 114 training samples were applied to predict 38 test samples; R², RMSEC and RMSEP of the model based on transmission-mode data were 0.92, 0.113 and 0.127, respectively, with a recognition rate of 76.32%, and those based on diffuse-reflection-mode data were 0.97, 0.102 and 0.119, with a recognition rate of 86.84%. The results demonstrate that near infrared spectroscopy combined with PLS1-DA can be used to establish brand traceability models for edible vinegar, and that diffuse reflection mode gives the model better predictive ability.
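
    A minimal PLS-DA-style sketch of the brand-discrimination step, assuming placeholder spectra and one-hot brand indicators fitted with a single PLS regression (the paper's PLS1-DA calibration, spectral pretreatment and latent-variable choice are not reproduced):

      # PLS-DA brand discrimination from NIR spectra: regress one-hot brand labels
      # on the spectra and assign each test spectrum to the brand with the largest
      # predicted response. All data and settings are placeholders.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.preprocessing import label_binarize
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(2)
      X_train = rng.normal(size=(114, 700))        # 114 training spectra
      y_train = rng.integers(0, 4, size=114)       # 4 vinegar brands
      X_test = rng.normal(size=(38, 700))          # 38 test spectra
      y_test = rng.integers(0, 4, size=38)

      Y_train = label_binarize(y_train, classes=[0, 1, 2, 3])
      pls = PLSRegression(n_components=10).fit(X_train, Y_train)

      rmsec = np.sqrt(mean_squared_error(Y_train, pls.predict(X_train)))
      rmsep = np.sqrt(mean_squared_error(label_binarize(y_test, classes=[0, 1, 2, 3]),
                                         pls.predict(X_test)))
      recognition = np.mean(pls.predict(X_test).argmax(axis=1) == y_test)
      print(f"RMSEC={rmsec:.3f}  RMSEP={rmsep:.3f}  recognition={recognition:.2%}")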

  9. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    PubMed

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining the Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high-performance computation architecture for detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system uses a front-end web-based server to handle the interaction with Hadoop, together with highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. Cloud- and GPU-based computing provides an efficient real-time target recognition and tracking approach compared to workflows that rely only on central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that the GC-MTT approach provides markedly improved tracking at low frame rates under realistic conditions.

  10. The Ilac-Project Supporting Ancient Coin Classification by Means of Image Analysis

    NASA Astrophysics Data System (ADS)

    Kavelar, A.; Zambanini, S.; Kampel, M.; Vondrovec, K.; Siegl, K.

    2013-07-01

    This paper presents the ILAC project, which aims at the development of an automated image-based classification system for ancient Roman Republican coins. The benefits of such a system are manifold: operating at the intersection of computer vision and numismatics, ILAC can reduce the day-to-day workload of numismatists by assisting them in classification tasks and providing a preselection of suitable coin classes. This is especially helpful for large coin hoard findings comprising several thousands of coins. Furthermore, the system could be implemented in an online platform for hobby numismatists, allowing them to access background information about their coin collections by simply uploading photos of the obverse and reverse of the coin of interest. ILAC explores different computer vision techniques and their combinations for image-based coin recognition. Some of these methods, such as image matching, use the entire coin image in the classification process, while symbol or legend recognition exploits specific characteristics of the coin imagery. An overview of the methods explored so far and the corresponding experiments is given, as well as an outlook on the next steps of the project.

  11. Background feature descriptor for offline handwritten numeral recognition

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Wang, Hao; Tian, Tian; Jie, Feiran; Lei, Bo

    2011-11-01

    This paper puts forward an offline handwritten numeral recognition method based on a background structural descriptor (a sixteen-value numerical background expression). By encoding the background pixels of the image according to a fixed rule, 16 different eigenvalues are generated that describe the background around each digit and thus reflect the digits' structural features. By describing images with these features in a pattern language, automatic segmentation of overlapping digits and numeral recognition can be realized. The method is characterized by strong resistance to deformation, high recognition speed and easy implementation. Finally, the experimental results and conclusions are presented. Results on datasets from various practical application fields show that the method achieves a good recognition effect.

  12. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and extracts discriminative local features from the sub-images separately. Moreover, the structural information of the different sub-images from the same face image is considered in the proposed method, with the aim of further improving recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms several other sub-pattern based face recognition methods.

  13. Offline Arabic handwriting recognition: a survey.

    PubMed

    Lorigo, Liana M; Govindaraju, Venu

    2006-05-01

    The automatic recognition of text on scanned images has enabled many applications such as searching for words in large volumes of documents, automatic sorting of postal mail, and convenient editing of previously printed documents. The domain of handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different methods have been proposed and applied to various types of images. This paper provides a comprehensive review of these methods. It is the first survey to focus on Arabic handwriting recognition and the first Arabic character recognition survey to provide recognition rates and descriptions of test data for the approaches discussed. It includes background on the field, discussion of the methods, and future research directions.

  14. Face Recognition Using Local Quantized Patterns and Gabor Filters

    NASA Astrophysics Data System (ADS)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in natural or artificial environments has received a great deal of attention from researchers over the last few years. Many methods for accurate face recognition have been proposed; nevertheless, these methods often fail in difficult scenarios, e.g., low resolution, low contrast, or pose variations. We therefore propose an approach to accurate and robust face recognition using local quantized patterns and Gabor filters. Estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on samples from the standardized FERET database shows that our method is invariant to common variations of lighting, expression, occlusion and aging. The proposed approach yields an increase of about 20% in correct recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters significantly improves robustness to changes in lighting conditions.
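
    A minimal sketch of a Gabor filter bank applied to an eye-aligned face crop, with illustrative filter parameters; the local quantized pattern encoding used in the paper is omitted:

      # NumPy-only Gabor filter bank: build kernels at several orientations and
      # scales, convolve them with the face crop, and pool the response magnitudes
      # as features. Parameters are illustrative.
      import numpy as np
      from scipy.signal import fftconvolve

      def gabor_kernel(ksize, sigma, theta, wavelength, gamma=0.5):
          half = ksize // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
          return envelope * np.cos(2 * np.pi * xr / wavelength)

      def gabor_features(face, orientations=8, wavelengths=(4, 8, 16)):
          feats = []
          for wl in wavelengths:
              for k in range(orientations):
                  kern = gabor_kernel(31, sigma=wl / 2,
                                      theta=k * np.pi / orientations, wavelength=wl)
                  resp = fftconvolve(face, kern, mode='same')
                  feats.append(np.abs(resp).mean())    # pooled magnitude per filter
          return np.array(feats)

      face = np.random.default_rng(3).normal(size=(128, 128))   # placeholder face crop
      print(gabor_features(face).shape)                          # 24 pooled responses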

  15. Exploiting Hidden Layer Responses of Deep Neural Networks for Language Recognition

    DTIC Science & Technology

    2016-09-08

    trained DNNs. We evaluated this approach in NIST 2015 language recognition evaluation. The performances achieved by the proposed approach are very...activations, used in direct DNN-LID. Results from the LID experiments support our hypothesis. The LID experiments are performed on NIST Language Recognition...of-the-art I-vector system [3, 10, 11] in evaluation (eval) set of NIST LRE 2015. Combination of proposed technique and state-of-the-art I-vector

  16. A Modified Sparse Representation Method for Facial Expression Recognition.

    PubMed

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) approach. In the first stage, Haar-like features combined with LPP are used to extract features and reduce dimensionality. In the second stage, the LC-K-SVD (Label Consistent K-SVD) method is adopted to train the dictionary instead of building it directly from the samples, and block dictionary training is added to the training process. In the third stage, stOMP (stagewise orthogonal matching pursuit) is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and improve accuracy. We verify the proposed method with respect to training samples, feature dimension, feature extraction and dimensionality reduction methods, and noise on a self-built database as well as the Japanese JAFFE and CMU CK databases. We further compare this sparse method with classic SVM and RVM and analyze recognition performance and time efficiency. The simulation results show that the coefficients produced by the MSRR method carry class information, which improves computing speed and achieves satisfying recognition results.
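
    A minimal sketch of the underlying sparse-representation classification idea, using plain orthogonal matching pursuit over a class-labelled dictionary; the paper's LC-K-SVD dictionary training, stagewise OMP and dynamic regularization factor are not reproduced:

      # Sparse-representation classification: code a test sample over a dictionary
      # whose atoms carry class labels, then pick the class with the smallest
      # reconstruction residual. Dictionary and sample are synthetic placeholders.
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(4)
      n_classes, atoms_per_class, dim = 7, 15, 120
      D = rng.normal(size=(dim, n_classes * atoms_per_class))
      D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
      atom_labels = np.repeat(np.arange(n_classes), atoms_per_class)

      def classify(x, n_nonzero=10):
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                          fit_intercept=False).fit(D, x)
          residuals = [np.linalg.norm(x - D[:, atom_labels == c]
                                      @ omp.coef_[atom_labels == c])
                       for c in range(n_classes)]
          return int(np.argmin(residuals))

      x = D[:, 20] + 0.05 * rng.normal(size=dim)           # noisy sample from class 1
      print(classify(x))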

  17. A Modified Sparse Representation Method for Facial Expression Recognition

    PubMed Central

    Wang, Wei; Xu, LiHong

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) approach. In the first stage, Haar-like features combined with LPP are used to extract features and reduce dimensionality. In the second stage, the LC-K-SVD (Label Consistent K-SVD) method is adopted to train the dictionary instead of building it directly from the samples, and block dictionary training is added to the training process. In the third stage, stOMP (stagewise orthogonal matching pursuit) is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and improve accuracy. We verify the proposed method with respect to training samples, feature dimension, feature extraction and dimensionality reduction methods, and noise on a self-built database as well as the Japanese JAFFE and CMU CK databases. We further compare this sparse method with classic SVM and RVM and analyze recognition performance and time efficiency. The simulation results show that the coefficients produced by the MSRR method carry class information, which improves computing speed and achieves satisfying recognition results. PMID:26880878

  18. Visual cluster analysis and pattern recognition template and methods

    DOEpatents

    Osbourn, Gordon Cecil; Martinez, Rubel Francisco

    1999-01-01

    A method of clustering using a novel template to define a region of influence. Using neighboring approximation methods, computation times can be significantly reduced. The template and method are applicable to, and improve, pattern recognition techniques.

  19. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xinjian; Bagci, Ulas; Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Building 10 Room 1C515, Bethesda, Maryland 20892-1182

    2011-08-15

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that the authors proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising images from 20 patients (10 male and 10 female) with clinical abdominal CT scans, and 11 foot magnetic resonance imaging (MRI) scans. The tests cover segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10 deg. and 0.03, and over all foot bones about 3.5709 mm, 0.35 deg. and 0.025, respectively. The delineation accuracy over all organs for all subjects, expressed in TPVF and FPVF, is 93.01% and 0.22%, and over all foot bones for all subjects 93.75% and 0.28%, respectively. While the delineations for the four organs can be accomplished quite rapidly, with an average of 78 s, the delineations for the five foot bones can be accomplished with an average of 70 s. Conclusions: The experimental results showed the feasibility and efficacy of the proposed automatic anatomy segmentation system: (a) the incorporation of shape priors into the GC framework is feasible in 3D as demonstrated previously for 2D images; (b) our results in 3D confirm the accuracy behavior observed in 2D; the hybrid strategy IGCASM seems to be more robust and accurate than ASM and GC individually; and (c) delineations within body regions and foot bones of clinical importance can be accomplished quite rapidly, within 1.5 min.

  20. Research for Key Techniques of Geophysical Recognition System of Hydrocarbon-induced Magnetic Anomalies Based on Hydrocarbon Seepage Theory

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Hao, T.; Zhao, B.

    2009-12-01

    Hydrocarbon seepage effects can cause magnetic alteration zones near the surface, and the magnetic anomalies induced by these alteration zones can thus be used to locate oil-gas potential regions. In order to reduce the inaccuracy and ambiguity of hydrocarbon anomalies recognized from magnetic data alone, and to meet the requirement of integrated management and synthetic analysis of multi-source geoscientific data, it is necessary to construct a recognition system that integrates data management, real-time processing, synthetic evaluation, and geologic mapping. In this paper, research on the key techniques of such a system is discussed. Image processing methods can be applied to potential-field images to facilitate visual interpretation and geological understanding. For gravity or magnetic images, anomalies with identical frequency-domain characteristics but different spatial distributions are reflected differently in texture and the relevant textural statistics. Texture is a description of the structural arrangement and spatial variation of a dataset or an image, and has been applied in many research fields. Textural analysis is a procedure that extracts textural features by image processing methods and thus obtains a quantitative or qualitative description of texture. When two kinds of anomalies show no distinct difference in amplitude or overlap in frequency spectrum, they may still be distinguishable by their texture, which can be considered as textural contrast. Therefore, for the recognition system we propose a new “magnetic spots” recognition method based on image processing techniques. The method can be divided into three major steps: first, separate local anomalies caused by shallow, relatively small sources from the total magnetic field, and pre-process the local magnetic anomaly data with image processing methods so that the anomalies can be expressed as points, lines and polygons with spatial correlation, including histogram-equalization based image display and object recognition and extraction; second, mine the spatial characteristics and correlations of the magnetic anomalies using textural statistics and analysis, and study the features of known anomalous objects (closures, hydrocarbon-bearing structures, igneous rocks, etc.) in the same research area; finally, classify the anomalies, cluster them according to their similarity, and predict hydrocarbon-induced “magnetic spots” in combination with geologic, drilling and rock core data. The system uses ArcGIS as the secondary development platform, inherits the basic functions of ArcGIS, and adds two main special functional modules: one for conventional potential-field data processing methods and one for feature extraction and enhancement based on image processing and analysis techniques. The system can be used for geophysical detection and recognition of near-surface hydrocarbon seepage anomalies, provide technical support for locating oil-gas potential regions, and help geophysical data processing and interpretation advance more efficiently.

  1. 3D automatic anatomy segmentation based on iterative graph-cut-ASM

    PubMed Central

    Chen, Xinjian; Bagci, Ulas

    2011-01-01

    Purpose: This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. Methods: The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that the authors proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1–72590C-8 (2009)]. The proposed methods are tested on two datasets comprising images from 20 patients (10 male and 10 female) with clinical abdominal CT scans, and 11 foot magnetic resonance imaging (MRI) scans. The tests cover segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. Results: The recognition accuracies in terms of translation, rotation, and scale error over all organs are about 8 mm, 10° and 0.03, and over all foot bones about 3.5709 mm, 0.35° and 0.025, respectively. The delineation accuracy over all organs for all subjects, expressed in TPVF and FPVF, is 93.01% and 0.22%, and over all foot bones for all subjects 93.75% and 0.28%, respectively. While the delineations for the four organs can be accomplished quite rapidly, with an average of 78 s, the delineations for the five foot bones can be accomplished with an average of 70 s. Conclusions: The experimental results showed the feasibility and efficacy of the proposed automatic anatomy segmentation system: (a) the incorporation of shape priors into the GC framework is feasible in 3D as demonstrated previously for 2D images; (b) our results in 3D confirm the accuracy behavior observed in 2D; the hybrid strategy IGCASM seems to be more robust and accurate than ASM and GC individually; and (c) delineations within body regions and foot bones of clinical importance can be accomplished quite rapidly, within 1.5 min. PMID:21928634

  2. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and does not accept imposters carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. However, in markets where low-cost ACSs compete on price, the quality of available cameras and image control is limited, so face recognition ACSs must handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle these image quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. It evaluates and uses only the reliable features among the trained ones during each authentication, achieving high recognition performance. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a consistently high recognition rate independent of face image quality, with an EER (Equal Error Rate) about four times lower under a variety of image conditions than a system without prior probability distributions. Using image-difference features without priors, in contrast, is sensitive to image quality. We also evaluated PCA, which has worse but stable performance because it is optimized globally over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions but degrades to PCA-level performance under poor image conditions.

  3. Specific and Modular Binding Code for Cytosine Recognition in Pumilio/FBF (PUF) RNA-binding Domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Shuyun; Wang, Yang; Cassidy-Amstutz, Caleb

    2011-10-28

    Pumilio/fem-3 mRNA-binding factor (PUF) proteins possess a recognition code for bases A, U, and G, allowing designed RNA sequence specificity of their modular Pumilio (PUM) repeats. However, recognition side chains in a PUM repeat for cytosine are unknown. Here we report identification of a cytosine-recognition code by screening random amino acid combinations at conserved RNA recognition positions using a yeast three-hybrid system. This C-recognition code is specific and modular as specificity can be transferred to different positions in the RNA recognition sequence. A crystal structure of a modified PUF domain reveals specific contacts between an arginine side chain and the cytosine base. We applied the C-recognition code to design PUF domains that recognize targets with multiple cytosines and to generate engineered splicing factors that modulate alternative splicing. Finally, we identified a divergent yeast PUF protein, Nop9p, that may recognize natural target RNAs with cytosine. This work deepens our understanding of natural PUF protein target recognition and expands the ability to engineer PUF domains to recognize any RNA sequence.

  4. Reasoning Activity for Smart Homes Using a Lattice-Based Evidential Structure

    NASA Astrophysics Data System (ADS)

    Liao, Jing; Bi, Yaxin; Nugent, Chris

    This paper explores a revised evidential lattice structure designed for activity recognition within Smart Homes. The proposed structure consists of three layers: an object layer, a context layer and an activity layer. These layers can be used to combine the mass functions derived from sensors along with sensor context and can subsequently be used to infer activities. We present the details of configuring the activity recognition process and analyze the relationship between the number of sensors and the number of layers. We also present an empirical study on two public data sets. The results from this work demonstrate that the proposed method is capable of correctly detecting activities with a high degree of accuracy: 84.27% with a dataset from MIT [4] and 82.49% with a dataset from the University of Amsterdam [10].
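
    A minimal sketch of the evidence-combination step, assuming two illustrative sensor-derived mass functions over a small frame of activities and plain Dempster's rule; the paper's three-layer lattice and context handling are not modelled:

      # Dempster's rule of combination for two mass functions (dict: frozenset -> mass).
      FRAME = ('make_tea', 'make_coffee', 'watch_tv')

      def combine(m1, m2):
          combined, conflict = {}, 0.0
          for a, wa in m1.items():
              for b, wb in m2.items():
                  inter = a & b
                  if inter:
                      combined[inter] = combined.get(inter, 0.0) + wa * wb
                  else:
                      conflict += wa * wb
          norm = 1.0 - conflict
          return {s: w / norm for s, w in combined.items()}

      # Illustrative masses from a kettle sensor and a cupboard sensor
      m_kettle = {frozenset({'make_tea', 'make_coffee'}): 0.8, frozenset(FRAME): 0.2}
      m_cup = {frozenset({'make_tea'}): 0.6, frozenset(FRAME): 0.4}
      print(combine(m_kettle, m_cup))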

  5. Protein-Templated Fragment Ligations-From Molecular Recognition to Drug Discovery.

    PubMed

    Jaegle, Mike; Wong, Ee Lin; Tauber, Carolin; Nawrotzky, Eric; Arkona, Christoph; Rademann, Jörg

    2017-06-19

    Protein-templated fragment ligation is a novel concept to support drug discovery and can help to improve the efficacy of protein ligands. Protein-templated fragment ligations are chemical reactions between small molecules ("fragments") utilizing a protein's surface as a reaction vessel to catalyze the formation of a protein ligand with increased binding affinity. The approach exploits the molecular recognition of reactive small-molecule fragments by proteins both for ligand assembly and for the identification of bioactive fragment combinations. In this way, chemical synthesis and bioassay are integrated in one single step. This Review discusses the biophysical basis of reversible and irreversible fragment ligations and gives an overview of the available methods to detect protein-templated ligation products. The chemical scope and recent applications as well as future potential of the concept in drug discovery are reviewed. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A study of fuzzy logic ensemble system performance on face recognition problem

    NASA Astrophysics Data System (ADS)

    Polyakova, A.; Lipinskiy, L.

    2017-02-01

    Some problems are difficult to solve using a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each able to solve the problem on its own, whose combination increases the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since it emphasizes the diversity of its components. A new method for designing ensembles of intelligent information technologies is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: an artificial neural network, a support vector machine and decision trees. These algorithms and their ensemble have been tested on face recognition problems. Principal components analysis (PCA) is used for feature selection.

  7. Readiness for Meaningful Use of Health Information Technology and Patient Centered Medical Home Recognition Survey Results

    PubMed Central

    Shin, Peter; Sharac, Jessica

    2013-01-01

    Objective Determine the factors that impact HIT use and MU readiness for community health centers (CHCs). Background The HITECH Act allocates funds to Medicaid and Medicare providers to encourage the adoption of electronic health records (EHR), in an effort to improve health care quality and patient outcomes, and to reduce health care costs. Methods We surveyed CHCs on their Readiness for Meaningful Use (MU) of Health Information Technology (HIT) and Patient Centered Medical Home (PCMH) Recognition, then we combined responses with 2009 Uniform Data System data to determine which factors impact use of HIT and MU readiness. Results Nearly 70% of CHCs had full or partial EHR adoption at the time of survey. Results are presented for centers with EHR adoption, by the length of time that their EHR systems have been in operation. PMID:24834365

  8. Voice Based City Panic Button System

    NASA Astrophysics Data System (ADS)

    Febriansyah; Zainuddin, Zahir; Bachtiar Nappu, M.

    2018-03-01

    The development of a voice-activated panic button application aims to provide faster early notification of hazardous conditions in the community to the nearest police by using speech as the trigger; current applications still rely on a touch combination on the screen and on orders coordinated from a control center, so early notification takes longer. The method used in this research was voice recognition for detecting the user's speech and the haversine formula for finding the shortest distance between the user and the police. The application was also equipped with automatic SMS, which sends a notification to the victim's relatives, and was integrated with Google Maps (GMaps) to map the route to the victim's location. The results show that voice registration in the application reaches 100%, incident detection using speech recognition while the application is running averages 94.67%, and the automatic SMS to the victim's relatives reaches 100%.
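
    A minimal sketch of the distance step, using the haversine formula to pick the nearest police post to the caller; the coordinates and post names are hypothetical:

      # Haversine great-circle distance used to choose the nearest police post.
      from math import radians, sin, cos, asin, sqrt

      EARTH_RADIUS_KM = 6371.0

      def haversine_km(lat1, lon1, lat2, lon2):
          phi1, phi2 = radians(lat1), radians(lat2)
          dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
          a = sin(dphi / 2)**2 + cos(phi1) * cos(phi2) * sin(dlmb / 2)**2
          return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

      def nearest_post(user, posts):
          return min(posts, key=lambda p: haversine_km(user[0], user[1], p[1], p[2]))

      user_location = (-5.135, 119.423)                    # hypothetical victim position
      police_posts = [("Post A", -5.140, 119.430),
                      ("Post B", -5.120, 119.400),
                      ("Post C", -5.160, 119.450)]
      name, lat, lon = nearest_post(user_location, police_posts)
      print(f"dispatch {name}, {haversine_km(*user_location, lat, lon):.2f} km away")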

  9. Application of virtual screening and molecular dynamics for the analysis of selectivity of inhibitors of HU proteins targeted to the DNA-recognition site

    NASA Astrophysics Data System (ADS)

    Talyzina, A. A.; Agapova, Yu. K.; Podshivalov, D. D.; Timofeev, V. I.; Sidorov-Biryukov, D. D.; Rakitina, T. V.

    2017-11-01

    DNA-Binding HU proteins are essential for the maintenance of genomic DNA supercoiling and compaction in prokaryotic cells and are promising pharmacological targets for the design of new antibacterial agents. The virtual screening for low-molecular-weight compounds capable of specifically interacting with the DNA-recognition loop of the HU protein from the mycoplasma Spiroplasma melliferum was performed. The ability of the initially selected ligands to form stable complexes with the protein target was assessed by molecular dynamics simulation. One compound, which forms an unstable complex, was eliminated by means of a combination of computational methods, resulting in a decrease in the number of compounds that will pass to the experimental test phase. This approach can be used to solve a wide range of problems related to the search for and validation of low-molecular-weight inhibitors specific for a particular protein target.

  10. Classification of facial-emotion expression in the application of psychotherapy using Viola-Jones and Edge-Histogram of Oriented Gradient.

    PubMed

    Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven

    2016-08-01

    Psychotherapy requires appropriate recognition of the patient's facial-emotion expression to provide proper treatment during a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system combining the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources, including the whole face, the eyes, the mouth, and both the eyes and the mouth. Seven classes of basic emotions were identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed E-HOG descriptor is much cheaper to compute than traditional HOG, as shown by a processing-time improvement as high as 1833.33% (p-value = 2.43E-17) with only a slight reduction in accuracy of 1.17% (p-value = 0.0016).
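
    A rough sketch of the detection/description stage, assuming OpenCV's Haar cascade as the Viola-Jones detector and a HOG descriptor computed on a Canny edge map as a stand-in for E-HOG; the exact E-HOG formulation and the trained multi-class SVM are not reproduced:

      # Haar-cascade face detection followed by a HOG descriptor on the edge map of
      # each detected face region. The resulting descriptors could then be fed to a
      # multi-class SVM; that training step is omitted here.
      import cv2
      import numpy as np
      from skimage.feature import hog

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def edge_hog_features(gray_frame):
          faces = cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
          feats = []
          for (x, y, w, h) in faces:
              face = cv2.resize(gray_frame[y:y + h, x:x + w], (64, 64))
              edges = cv2.Canny(face, 50, 150)
              feats.append(hog(edges, orientations=9, pixels_per_cell=(8, 8),
                               cells_per_block=(2, 2)))
          return faces, feats

      frame = np.random.default_rng(8).integers(0, 255, size=(240, 320)).astype(np.uint8)
      boxes, descriptors = edge_hog_features(frame)        # placeholder grayscale frame
      print(len(boxes), "face(s) detected")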

  11. Speech emotion recognition methods: A literature review

    NASA Astrophysics Data System (ADS)

    Basharirad, Babak; Moradhaseli, Mohammadreza

    2017-10-01

    Recently, attention to emotional speech signal research has been boosted in human-machine interfaces due to the availability of high computational capability. Many systems have been proposed in the literature to identify emotional states through speech. Selection of suitable feature sets, design of proper classification methods, and preparation of appropriate datasets are the main issues in speech emotion recognition systems. This paper critically analyzes the currently available approaches to speech emotion recognition based on three evaluation parameters (feature set, classification of features, and accuracy). In addition, it evaluates the performance and limitations of the available methods and highlights promising directions for improving speech emotion recognition systems.

  12. Multiple site receptor modeling with a minimal spanning tree combined with a Kohonen neural network

    NASA Astrophysics Data System (ADS)

    Hopke, Philip K.

    1999-12-01

    A combination of two pattern recognition methods has been developed that allows the generation of geographical emission maps from multivariate environmental data. In such a projection into a visually interpretable subspace by a Kohonen Self-Organizing Feature Map, the topology of the higher-dimensional variable space can be preserved, but part of the information about the correct neighborhood among the sample vectors is lost. This can be partly compensated by additionally projecting Prim's Minimal Spanning Tree onto the trained neural network. This new environmental receptor modeling technique has been adapted for multiple sampling sites. The behavior of the method was studied using simulated data, and the method was subsequently applied to data sets from the Southern California Air Quality Study. The projection of 17 chemical variables measured at up to 8 sampling sites provided a 2D, visually interpretable, geometrically reasonable arrangement of air pollution sources in the South Coast Air Basin.
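
    A minimal sketch of the minimal-spanning-tree step (Prim's algorithm) over the sample vectors, whose edges would then be projected onto the trained Kohonen map; the SOM training itself is not reproduced, and the data are placeholders:

      # Prim's minimal spanning tree over sample vectors using Euclidean distances.
      import numpy as np

      def prim_mst(X):
          """Return MST edges (i, j) over the rows of X."""
          n = X.shape[0]
          dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          in_tree, edges = {0}, []
          while len(in_tree) < n:
              best = None
              for i in in_tree:
                  for j in range(n):
                      if j not in in_tree and (best is None
                                               or dist[i, j] < dist[best[0], best[1]]):
                          best = (i, j)
              edges.append(best)
              in_tree.add(best[1])
          return edges

      # 17 chemical concentrations measured on a handful of samples (placeholder data)
      X = np.random.default_rng(5).normal(size=(8, 17))
      print(prim_mst(X))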

  13. Basic refractory and slag management for petcoke carbon feedstock in gasifiers

    DOEpatents

    Kwong, Kyei-Sing; Bennett, James P; Nakano, Jinichiro

    2014-04-22

    The disclosure provides methods of operating a slagging gasifier using a carbon feedstock having a relatively high V2O5 to SiO2 ratio, such as petcoke. The disclosure generates a combined chemical composition in the feed mixture having less than 25 wt. % SiO2, greater than 20 wt. % V2O5, and greater than 20 wt. % CaO. The method takes advantage of a novel recognition that increased levels of SiO2 tend to decrease dissolution of the V2O3 which forms under the reducing conditions of the gasifier, and utilizes the CaO additive to establish a chemical phase equilibria comprised of lower melting compounds. The method further provides for control based on the presence of Al2O3 and FeO, and provides for a total combined chemical composition of greater than about 5 wt. % MgO for use with refractory linings comprised of MgO based refractory brick.
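
    The stated composition windows can be expressed as a simple check on the combined feed chemistry; the function and the example blend below are illustrative, not taken from the patent:

      # Check a combined feedstock/flux chemistry against the windows stated above:
      # SiO2 < 25 wt.%, V2O5 > 20 wt.%, CaO > 20 wt.%, and MgO > ~5 wt.% when an
      # MgO-based refractory lining is used.
      def feed_mixture_ok(wt_pct, mgo_refractory=True):
          ok = (wt_pct.get("SiO2", 0.0) < 25.0
                and wt_pct.get("V2O5", 0.0) > 20.0
                and wt_pct.get("CaO", 0.0) > 20.0)
          if mgo_refractory:
              ok = ok and wt_pct.get("MgO", 0.0) > 5.0
          return ok

      petcoke_blend = {"SiO2": 18.0, "V2O5": 28.0, "CaO": 31.0, "MgO": 6.5,
                       "Al2O3": 9.0, "FeO": 7.5}          # illustrative blend
      print(feed_mixture_ok(petcoke_blend))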

  14. The effects of reverberant self- and overlap-masking on speech recognition in cochlear implant listeners.

    PubMed

    Desmond, Jill M; Collins, Leslie M; Throckmorton, Chandra S

    2014-06-01

    Many cochlear implant (CI) listeners experience decreased speech recognition in reverberant environments [Kokkinakis et al., J. Acoust. Soc. Am. 129(5), 3221-3232 (2011)], which may be caused by a combination of self- and overlap-masking [Bolt and MacDonald, J. Acoust. Soc. Am. 21(6), 577-580 (1949)]. Determining the extent to which these effects decrease speech recognition for CI listeners may influence reverberation mitigation algorithms. This study compared speech recognition with ideal self-masking mitigation, with ideal overlap-masking mitigation, and with no mitigation. Under these conditions, mitigating either self- or overlap-masking resulted in significant improvements in speech recognition for both normal hearing subjects utilizing an acoustic model and for CI listeners using their own devices.

  15. Young children's coding and storage of visual and verbal material.

    PubMed

    Perlmutter, M; Myers, N A

    1975-03-01

    36 preschool children (mean age 4.2 years) were each tested on 3 recognition memory lists differing in test mode (visual only, verbal only, combined visual-verbal). For one-third of the children, original list presentation was visual only, for another third, presentation was verbal only, and the final third received combined visual-verbal presentation. The subjects generally performed at a high level of correct responding. Verbal-only presentation resulted in less correct recognition than did either visual-only or combined visual-verbal presentation. However, because performances under both visual-only and combined visual-verbal presentation were statistically comparable, and a high level of spontaneous labeling was observed when items were presented only visually, a dual-processing conceptualization of memory in 4-year-olds was suggested.

  16. Pattern Recognition for a Flight Dynamics Monte Carlo Simulation

    NASA Technical Reports Server (NTRS)

    Restrepo, Carolina; Hurtado, John E.

    2011-01-01

    The design, analysis, and verification and validation of a spacecraft rely heavily on Monte Carlo simulations. Modern computational techniques can generate large amounts of Monte Carlo data, but flight dynamics engineers lack the time and resources to analyze it all. The growing amount of data, combined with engineers' limited time, motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters and, most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
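
    A minimal sketch of the described analysis loop, assuming scikit-learn stand-ins: sequential feature selection wrapped around a k-nearest-neighbor pass/fail classifier ranks the dispersed parameters, and a kernel density estimate inspects how an implicated parameter differs between passing and failing runs; the data and parameter names are placeholders:

      # Rank Monte Carlo design parameters and inspect one of them with a KDE.
      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.neighbors import KNeighborsClassifier, KernelDensity

      rng = np.random.default_rng(6)
      X = rng.normal(size=(500, 12))                       # dispersed design parameters
      fail = (X[:, 3] + 0.8 * X[:, 7] > 1.5).astype(int)   # synthetic failure rule

      knn = KNeighborsClassifier(n_neighbors=5)
      sfs = SequentialFeatureSelector(knn, n_features_to_select=3).fit(X, fail)
      implicated = np.flatnonzero(sfs.get_support())
      print("parameters implicated in the failure:", implicated)

      # Kernel density estimate of one implicated parameter, split by outcome
      p = implicated[0]
      grid = np.linspace(X[:, p].min(), X[:, p].max(), 100)[:, None]
      for label, name in [(0, "pass"), (1, "fail")]:
          kde = KernelDensity(bandwidth=0.3).fit(X[fail == label, p][:, None])
          density = np.exp(kde.score_samples(grid))
          print(name, "density peaks near", round(grid[density.argmax(), 0], 2))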

  17. CNN: a speaker recognition system using a cascaded neural network.

    PubMed

    Zaki, M; Ghalwash, A; Elkouny, A A

    1996-05-01

    The main emphasis of this paper is to present an approach for combining supervised and unsupervised neural network models for speaker recognition. To enhance the overall operation and performance of recognition, the proposed strategy integrates the two techniques into one global model called the cascaded model. We first present a simple conventional technique based on the distance measured between a test vector and a reference vector for different speakers in the population. This particular distance metric has the property of weighting down the components in those directions along which the intraspeaker variance is large. The reason for presenting this method is to clarify the discrepancy in performance between the conventional and neural network approaches. We then introduce the idea of using an unsupervised learning technique, represented by the winner-take-all model, as a means of recognition. Based on several tests, and in order to enhance the performance of this model when dealing with noisy patterns, we precede it with a supervised learning model, the pattern association model, which acts as a filtration stage. This work includes both the design and implementation of the conventional and neural network approaches to recognize speaker templates, which are introduced to the system via a voice master card and preprocessed before extracting the features used in recognition. The conclusion indicates that the neural network system performs better than the conventional one, degrading gracefully on noisy patterns and achieving higher performance on noise-free patterns.
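
    A minimal sketch of the conventional baseline's distance metric, assuming per-dimension (diagonal) intraspeaker variances; the cascaded neural-network stages are not reproduced:

      # Variance-weighted distance: feature components along directions of large
      # intraspeaker variance contribute less to the score.
      import numpy as np

      def intraspeaker_variance(enrollment):
          """Per-dimension variance pooled over each speaker's enrollment vectors."""
          return np.mean([np.var(v, axis=0) for v in enrollment.values()], axis=0)

      def weighted_distance(test_vec, ref_vec, pooled_var, eps=1e-8):
          return np.sqrt(np.sum((test_vec - ref_vec) ** 2 / (pooled_var + eps)))

      rng = np.random.default_rng(7)
      enrollment = {s: rng.normal(size=(10, 20)) for s in ("spk_a", "spk_b", "spk_c")}
      references = {s: v.mean(axis=0) for s, v in enrollment.items()}
      pooled = intraspeaker_variance(enrollment)

      test = enrollment["spk_b"][0] + 0.1 * rng.normal(size=20)
      scores = {s: weighted_distance(test, r, pooled) for s, r in references.items()}
      print(min(scores, key=scores.get))                   # expected: spk_b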

  18. Gimli: open source and high-performance biomedical name recognition

    PubMed Central

    2013-01-01

    Background Automatic recognition of biomedical names is an essential task in biomedical information extraction, presenting several complex and unsolved challenges. In recent years, various solutions have been implemented to tackle this problem. However, limitations regarding system characteristics, customization and usability still hinder their wider application outside text mining research. Results We present Gimli, an open-source, state-of-the-art tool for automatic recognition of biomedical names. Gimli includes an extended set of implemented and user-selectable features, such as orthographic, morphological, linguistic-based, conjunctions and dictionary-based. A simple and fast method to combine different trained models is also provided. Gimli achieves an F-measure of 87.17% on GENETAG and 72.23% on JNLPBA corpus, significantly outperforming existing open-source solutions. Conclusions Gimli is an off-the-shelf, ready to use tool for named-entity recognition, providing trained and optimized models for recognition of biomedical entities from scientific text. It can be used as a command line tool, offering full functionality, including training of new models and customization of the feature set and model parameters through a configuration file. Advanced users can integrate Gimli in their text mining workflows through the provided library, and extend or adapt its functionalities. Based on the underlying system characteristics and functionality, both for final users and developers, and on the reported performance results, we believe that Gimli is a state-of-the-art solution for biomedical NER, contributing to faster and better research in the field. Gimli is freely available at http://bioinformatics.ua.pt/gimli. PMID:23413997

  19. A bio-inspired system for spatio-temporal recognition in static and video imagery

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

    This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real-world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.

  20. Real-time traffic sign recognition based on a general purpose GPU and deep-learning

    PubMed Central

    Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or wide variation in lighting conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low-illumination environments. Both detection and hierarchical recognition are performed in real time, and the proposed method achieves a 0.97 F1-score on our collected dataset, which uses the Vienna convention traffic rules (Germany and South Korea). PMID:28264011
