Science.gov

Sample records for activity recognition based

  1. Human body contour data based activity recognition.

    PubMed

    Myagmarbayar, Nergui; Yuki, Yoshida; Imamoglu, Nevrez; Gonzalez, Jose; Otake, Mihoko; Yu, Wenwei

    2013-01-01

    This research aims to develop autonomous bio-monitoring mobile robots capable of tracking and measuring patients' motions, recognizing patients' behavior from observation data, and calling for medical personnel in emergency situations in the home environment. The robots to be developed will bring cost-effective, safe and easier at-home rehabilitation to most motor-function impaired patients (MIPs). In our previous research, a full framework was established towards this goal. Here, we aimed to improve human activity recognition by using contour data of the tracked human subject, extracted from depth images, as the signal source instead of the lower limb joint angle data used previously, which are more likely to be affected by the motion of the robot and the human subject. Several geometric parameters, such as the height-to-width ratio of the tracked subject and the distance (in pixels) between the centroid points of the upper and lower parts of the body, were calculated from the contour data and used as features for activity recognition. A Hidden Markov Model (HMM) was employed to classify different human activities from these features. Experimental results showed that human activity recognition could be achieved with a high correct rate. PMID:24111015
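
    A minimal sketch of the classification stage described above, assuming per-frame geometric features (e.g., height-to-width ratio and upper/lower centroid distance) have already been extracted from the contours. It trains one Gaussian HMM per activity with the hmmlearn package and labels a new sequence by the highest log-likelihood; the data layout and feature values are illustrative assumptions, not the authors' code.

    ```python
    # Sketch: one-HMM-per-activity classification of contour feature sequences.
    import numpy as np
    from hmmlearn import hmm

    def train_activity_hmms(sequences_by_activity, n_states=3):
        """sequences_by_activity: dict activity -> list of (T_i, D) feature arrays."""
        models = {}
        for activity, seqs in sequences_by_activity.items():
            X = np.vstack(seqs)                      # concatenate all sequences
            lengths = [len(s) for s in seqs]         # per-sequence lengths
            m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[activity] = m
        return models

    def classify(models, seq):
        """Return the activity whose HMM gives the highest log-likelihood."""
        return max(models, key=lambda a: models[a].score(seq))

    # Toy usage with random data standing in for contour features (ratio, distance).
    rng = np.random.default_rng(0)
    data = {
        "walking": [rng.normal(0.0, 1.0, (40, 2)) for _ in range(5)],
        "sitting": [rng.normal(3.0, 1.0, (40, 2)) for _ in range(5)],
    }
    models = train_activity_hmms(data)
    print(classify(models, rng.normal(3.0, 1.0, (40, 2))))   # likely "sitting"
    ```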

  3. Tracheal activity recognition based on acoustic signals.

    PubMed

    Olubanjo, Temiloluwa; Ghovanloo, Maysam

    2014-01-01

    Tracheal activity recognition can play an important role in continuous health monitoring for wearable systems and facilitate the advancement of personalized healthcare. Neck-worn systems provide access to a unique set of health-related data that other wearable devices simply cannot obtain. Activities including breathing, chewing, clearing the throat, coughing, swallowing, speech and even heartbeat can be recorded from around the neck. In this paper, we explore tracheal activity recognition using a combination of promising acoustic features from related work and apply simple classifiers including k-NN and Naive Bayes. For wearable systems in which low power consumption is of primary concern, we show that with a sub-optimal sampling rate of 16 kHz we achieve average classification results in the range of 86.6% to 87.4% using 1-NN, 3-NN, 5-NN and Naive Bayes. All classifiers obtained the highest recognition rate, in the range of 97.2% to 99.4%, for speech classification. This is promising for mitigating privacy concerns about wearable systems interfering with the user's conversations.
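
    A rough sketch of this kind of pipeline: windowed acoustic features computed with NumPy and the simple classifiers named in the abstract (k-NN, Naive Bayes) from scikit-learn. The three features here (log energy, zero-crossing rate, spectral centroid) and the window sizes are illustrative stand-ins for the paper's feature set.

    ```python
    # Sketch: windowed acoustic features + k-NN / Naive Bayes classifiers.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.naive_bayes import GaussianNB

    def frame_features(x, sr=16000, win=400, hop=200):
        feats = []
        for start in range(0, len(x) - win, hop):
            w = x[start:start + win]
            energy = np.log(np.sum(w ** 2) + 1e-10)                    # log energy
            zcr = np.mean(np.abs(np.diff(np.sign(w))) > 0)             # zero-crossing rate
            spec = np.abs(np.fft.rfft(w))
            freqs = np.fft.rfftfreq(win, 1.0 / sr)
            centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-10)   # spectral centroid
            feats.append([energy, zcr, centroid])
        return np.array(feats)

    # Toy usage: one second of random audio, then classifiers on stand-in training data.
    rng = np.random.default_rng(1)
    print(frame_features(rng.normal(size=16000)).shape)   # (n_windows, 3)
    X_train = rng.normal(size=(200, 3)); y_train = rng.integers(0, 4, 200)
    for clf in (KNeighborsClassifier(n_neighbors=3), GaussianNB()):
        clf.fit(X_train, y_train)
        print(type(clf).__name__, clf.predict(X_train[:5]))
    ```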

  4. A Random Forest-based ensemble method for activity recognition.

    PubMed

    Feng, Zengtao; Mo, Lingfei; Li, Meng

    2015-01-01

    This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forests. We designed an ensemble learning algorithm that integrates several independent Random Forest classifiers, each based on a different sensor feature set, to build a more stable, more accurate and faster classifier for human activity recognition. To evaluate the algorithm, PA data from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, were used for training and testing. The experimental results show that the algorithm is able to correctly recognize 19 PA types with an accuracy of 93.44%, while training is faster than that of other methods. The ensemble classifier system based on the RF (Random Forest) algorithm can achieve high recognition accuracy and fast calculation. PMID:26737432
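
    A minimal sketch of the ensemble idea, assuming each sensor contributes a known slice of the feature matrix: one Random Forest is trained per sensor feature subset and the forests are combined by averaging class probabilities. The column slices, soft-voting rule and toy data are assumptions for illustration, not the paper's exact design.

    ```python
    # Sketch: independent Random Forests, one per sensor's feature subset,
    # combined by averaging class probabilities (soft voting).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    class SensorRFEnsemble:
        def __init__(self, feature_slices, n_estimators=100):
            # feature_slices: dict sensor_name -> column indices into the feature matrix
            self.feature_slices = feature_slices
            self.forests = {s: RandomForestClassifier(n_estimators=n_estimators)
                            for s in feature_slices}

        def fit(self, X, y):
            for s, cols in self.feature_slices.items():
                self.forests[s].fit(X[:, cols], y)
            self.classes_ = next(iter(self.forests.values())).classes_
            return self

        def predict(self, X):
            probs = [self.forests[s].predict_proba(X[:, cols])
                     for s, cols in self.feature_slices.items()]
            return self.classes_[np.argmax(np.mean(probs, axis=0), axis=1)]

    # Toy usage with two hypothetical sensors.
    rng = np.random.default_rng(9)
    X = rng.normal(size=(300, 8)); y = rng.integers(0, 5, 300)
    slices = {"acc": [0, 1, 2, 3], "gyro": [4, 5, 6, 7]}
    print(SensorRFEnsemble(slices, n_estimators=50).fit(X, y).predict(X[:5]))
    ```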

  5. Ontology-based improvement to human activity recognition

    NASA Astrophysics Data System (ADS)

    Tahmoush, David; Bonial, Claire

    2016-05-01

    Human activity recognition has often prioritized low-level features extracted from imagery or video over higher-level class attributes and ontologies, because low-level features have traditionally been more effective on small datasets. However, by including knowledge-driven associations between actions and attributes while recognizing the lower-level attributes with their temporal relationships, we can attempt a hybrid approach that is more easily extensible to much larger datasets. We demonstrate a combination of hard and soft features with a comparison factor that prioritizes one approach over the other with a relative weight. We then exhaustively search over the comparison factor to evaluate the performance of the hybrid human activity recognition approach against the base hard approach (84% accuracy) and the current state of the art.

  6. Human activity recognition based on human shape dynamics

    NASA Astrophysics Data System (ADS)

    Cheng, Zhiqing; Mosher, Stephen; Cheng, Huaining; Webb, Timothy

    2013-05-01

    Human activity recognition based on human shape dynamics was investigated in this paper. The shape dynamics describe the spatial-temporal shape deformation of a human body during its movement and thus provide important information about the identity of a human subject and the motions performed by the subject. The dynamic shapes of four subjects in five activities (digging, jogging, limping, throwing, and walking) were created via 3-D motion replication. The Paquet Shape Descriptor (PSD) was used to describe subject shapes in each frame. The principal component analysis was performed on the calculated PSDs and principal components (PCs) were used to characterize PSDs. The PSD calculation was then reasonably approximated by its significant projections in the eigen-space formed by PCs and represented by the corresponding projection coefficients. As such, the dynamic human shapes for each activity were described by these projection coefficients, which in turn, along with their derivatives were used to form the feature vectors (attribute sets) for activity classification. Data mining technology was employed with six classification methods used. Seven attribute sets were evaluated with high classification accuracy attained for most of them. The results from this investigation illustrate the great potential of human shape dynamics for activity recognition.
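
    A small sketch of the projection step described above: per-frame shape descriptors are projected onto principal components, and the coefficients plus their temporal derivatives form the feature vector. How the per-frame coefficients are aggregated into a fixed-length attribute set (means and standard deviations below) is an assumption for illustration.

    ```python
    # Sketch: PCA projection coefficients and their derivatives as activity features.
    import numpy as np
    from sklearn.decomposition import PCA

    def shape_dynamics_features(descriptor_sequence, pca):
        """descriptor_sequence: (T, D) array of per-frame shape descriptors (e.g. PSDs)."""
        coeffs = pca.transform(descriptor_sequence)          # (T, n_components)
        deriv = np.diff(coeffs, axis=0)                      # frame-to-frame derivatives
        # Summarise the sequence by means/stds of coefficients and derivatives.
        return np.concatenate([coeffs.mean(0), coeffs.std(0), deriv.mean(0), deriv.std(0)])

    # Fit the eigen-space on descriptors pooled over the training set (random stand-ins here).
    rng = np.random.default_rng(2)
    train_descriptors = rng.normal(size=(500, 64))
    pca = PCA(n_components=8).fit(train_descriptors)
    print(shape_dynamics_features(rng.normal(size=(30, 64)), pca).shape)   # (32,)
    ```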

  7. An adaptive Hidden Markov Model for activity recognition based on a wearable multi-sensor device

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based o...

  8. Exploring Techniques for Vision Based Human Activity Recognition: Methods, Systems, and Evaluation

    PubMed Central

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-01

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activities, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition. PMID:23353144

  9. Exploring techniques for vision based human activity recognition: methods, systems, and evaluation.

    PubMed

    Xu, Xin; Tang, Jinshan; Zhang, Xiaolong; Liu, Xiaoming; Zhang, Hong; Qiu, Yimin

    2013-01-25

    With the wide applications of vision based intelligent systems, image and video analysis technologies have attracted the attention of researchers in the computer vision field. In image and video analysis, human activity recognition is an important research direction. By interpreting and understanding human activity, we can recognize and predict the occurrence of crimes and help the police or other agencies react immediately. In the past, a large number of papers have been published on human activity recognition in video and image sequences. In this paper, we provide a comprehensive survey of the recent development of the techniques, including methods, systems, and quantitative evaluation of the performance of human activity recognition.

  10. Team activity recognition in Association Football using a Bag-of-Words-based method.

    PubMed

    Montoliu, Raúl; Martín-Félez, Raúl; Torres-Sospedra, Joaquín; Martínez-Usó, Adolfo

    2015-06-01

    In this paper, a new methodology is used to perform team activity recognition and analysis in Association Football. It is based on pattern recognition and machine learning techniques. In particular, a strategy based on the Bag-of-Words (BoW) technique is used to characterize short Football video clips that are used to explain the team's performance and to train advanced classifiers in automatic recognition of team activities. In addition to the neural network-based classifier, three more classifier families are tested: the k-Nearest Neighbor, the Support Vector Machine and the Random Forest. The results obtained show that the proposed methodology is able to explain the most common movements of a team and to perform the team activity recognition task with high accuracy when classifying three Football actions: Ball Possession, Quick Attack and Set Piece. Random Forest is the classifier obtaining the best classification results.
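
    A toy sketch of the Bag-of-Words stage: per-frame descriptors of a clip are quantised against a learned codebook, and the resulting word histogram is classified. The codebook size, descriptor dimensionality and SVM choice are illustrative assumptions; the paper also tests k-NN, Random Forest and a neural network on such histograms.

    ```python
    # Sketch: Bag-of-Words histograms over clip descriptors, classified with an SVM.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def build_codebook(all_descriptors, n_words=64):
        return KMeans(n_clusters=n_words, n_init=10).fit(all_descriptors)

    def bow_histogram(clip_descriptors, codebook):
        words = codebook.predict(clip_descriptors)
        hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
        return hist / max(hist.sum(), 1)                     # normalised word histogram

    # Toy usage: descriptors could be per-frame motion or player-position features.
    rng = np.random.default_rng(3)
    codebook = build_codebook(rng.normal(size=(2000, 10)))
    X = np.array([bow_histogram(rng.normal(size=(50, 10)), codebook) for _ in range(20)])
    y = rng.integers(0, 3, 20)                               # e.g. possession / quick attack / set piece
    clf = SVC(kernel="linear").fit(X, y)
    print(clf.predict(X[:3]))
    ```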

  11. A Variance Based Active Learning Approach for Named Entity Recognition

    NASA Astrophysics Data System (ADS)

    Hassanzadeh, Hamed; Keyvanpour, Mohammadreza

    The cost of manually annotating corpora is one of the significant issues in many text-based tasks such as text mining, semantic annotation and, more generally, information extraction. Active learning is an approach that deals with reducing labeling costs. In this paper we propose an effective active learning approach based on minimal variance that reduces manual annotation cost by using a small number of manually labeled examples. In our approach we use a confidence measure based on the model's variance that achieves considerable accuracy in annotating entities. The Conditional Random Field (CRF) is chosen as the underlying learning model due to its promising performance in many sequence labeling tasks. The experiments show that the proposed method needs considerably fewer manually labeled samples to produce a desirable result.
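
    A generic pool-based active learning loop, sketched to show the overall flow. The query score below uses least-confidence with a logistic regression model as a simple stand-in for the paper's CRF variance measure; the seed size, batch size and oracle interface are illustrative assumptions.

    ```python
    # Sketch: pool-based active learning with an uncertainty-based query strategy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def active_learning(X_pool, y_oracle, n_seed=10, n_rounds=20, batch=5):
        rng = np.random.default_rng(0)
        labelled = list(rng.choice(len(X_pool), n_seed, replace=False))
        clf = LogisticRegression(max_iter=1000)
        for _ in range(n_rounds):
            clf.fit(X_pool[labelled], y_oracle[labelled])
            probs = clf.predict_proba(X_pool)
            uncertainty = 1.0 - probs.max(axis=1)            # least-confidence score
            uncertainty[labelled] = -np.inf                  # never re-query labelled items
            query = np.argsort(uncertainty)[-batch:]         # most uncertain samples
            labelled.extend(query.tolist())                  # "manual" annotation step
        return clf, labelled
    ```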

  12. A Novel Wearable Sensor-Based Human Activity Recognition Approach Using Artificial Hydrocarbon Networks.

    PubMed

    Ponce, Hiram; Martínez-Villaseñor, María de Lourdes; Miralles-Pechuán, Luis

    2016-07-05

    Human activity recognition has gained more interest in several research communities, given that understanding user activities and behavior helps to deliver proactive and personalized services. There are many examples of health systems improved by human activity recognition. Nevertheless, the human activity recognition classification process is not an easy task: different types of noise in wearable sensor data frequently hamper it. In order to develop a successful activity recognition system, it is necessary to use stable and robust machine learning techniques capable of dealing with noisy data. In this paper, we present the artificial hydrocarbon networks (AHN) technique to the human activity recognition community. Our novel artificial hydrocarbon networks approach is suitable for physical activity recognition, tolerant of noise from corrupted sensor data, and robust to other sensor data issues. We show that the AHN classifier is very competitive for physical activity recognition and very robust in comparison with other well-known machine learning methods.

  13. An adaptive Hidden Markov model for activity recognition based on a wearable multi-sensor device.

    PubMed

    Li, Zhen; Wei, Zhiqiang; Yue, Yaofeng; Wang, Hao; Jia, Wenyan; Burke, Lora E; Baranowski, Thomas; Sun, Mingui

    2015-05-01

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based on multi-sensor data is presented. In order to utilize these data efficiently and overcome the big data problem, an offline adaptive Hidden Markov Model (HMM) is proposed. A sensor selection scheme is implemented based on an improved Viterbi algorithm, and a new method is proposed that incorporates personal experience into the HMM as a priori information. Experiments are conducted using a personal wearable computer, the eButton, which contains multiple sensors. Our comparative study with the standard HMM and other alternative methods in processing the eButton data has shown that our method is more robust and efficient, providing a useful tool to evaluate human activity and lifestyle.

  14. Clustering-based ensemble learning for activity recognition in smart homes.

    PubMed

    Jurek, Anna; Nugent, Chris; Bi, Yaxin; Wu, Shengli

    2014-01-01

    Application of sensor-based technology within activity monitoring systems is becoming a popular technique within the smart environment paradigm. Nevertheless, the use of such an approach generates complex constructs of data, which subsequently requires the use of intricate activity recognition techniques to automatically infer the underlying activity. This paper explores a cluster-based ensemble method as a new solution for the purposes of activity recognition within smart environments. With this approach activities are modelled as collections of clusters built on different subsets of features. A classification process is performed by assigning a new instance to its closest cluster from each collection. Two different sensor data representations have been investigated, namely numeric and binary. Following the evaluation of the proposed methodology it has been demonstrated that the cluster-based ensemble method can be successfully applied as a viable option for activity recognition. Results following exposure to data collected from a range of activities indicated that the ensemble method had the ability to perform with accuracies of 94.2% and 97.5% for numeric and binary data, respectively. These results outperformed a range of single classifiers considered as benchmarks.
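
    A compact sketch of the cluster-based ensemble idea described above: each activity is modelled as a set of clusters built on several feature subsets, a new instance is assigned to its closest cluster within each subset, and the implied activity labels are combined by majority vote. The subset definitions, cluster counts and voting rule are illustrative assumptions.

    ```python
    # Sketch: per-feature-subset activity clusters with nearest-cluster voting.
    import numpy as np
    from collections import Counter
    from sklearn.cluster import KMeans

    def fit_cluster_ensemble(X, y, feature_subsets, clusters_per_activity=2):
        ensemble = []
        for cols in feature_subsets:
            centroids = {}
            for activity in np.unique(y):
                km = KMeans(n_clusters=clusters_per_activity, n_init=10)
                km.fit(X[y == activity][:, cols])
                centroids[activity] = km.cluster_centers_
            ensemble.append((cols, centroids))
        return ensemble

    def predict_one(ensemble, x):
        votes = []
        for cols, centroids in ensemble:
            dists = {a: np.min(np.linalg.norm(c - x[cols], axis=1))
                     for a, c in centroids.items()}
            votes.append(min(dists, key=dists.get))          # closest cluster's activity
        return Counter(votes).most_common(1)[0][0]           # majority vote across subsets

    # Toy usage with two hypothetical feature subsets.
    rng = np.random.default_rng(7)
    X = rng.normal(size=(120, 6)); y = np.repeat(np.arange(3), 40)
    ens = fit_cluster_ensemble(X, y, feature_subsets=[[0, 1, 2], [3, 4, 5]])
    print(predict_one(ens, X[0]))
    ```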

  15. Smartphone-Based Patients' Activity Recognition by Using a Self-Learning Scheme for Medical Monitoring.

    PubMed

    Guo, Junqi; Zhou, Xi; Sun, Yunchuan; Ping, Gong; Zhao, Guoxing; Li, Zhuorong

    2016-06-01

    Smartphone-based activity recognition has recently received remarkable attention in various applications of mobile health such as safety monitoring, fitness tracking, and disease prediction. To achieve more accurate and simplified medical monitoring, this paper proposes a self-learning scheme for patients' activity recognition, in which a patient only needs to carry an ordinary smartphone that contains common motion sensors. After real-time data collection through this smartphone, we preprocess the data using a coordinate system transformation to eliminate the influence of phone orientation. A set of robust and effective features is then extracted from the preprocessed data. Because a patient may inevitably perform various unpredictable activities for which there is no a priori knowledge in the training dataset, we propose a self-learning activity recognition scheme. The scheme determines whether there are a priori training samples and labeled categories in the training pools that match the unpredictable activity data well. If not, it automatically assembles these unpredictable samples into different clusters and gives them new category labels. These clustered samples, combined with the acquired new category labels, are then merged into the training dataset to reinforce the recognition ability of the self-learning model. In experiments, we evaluate our scheme using data collected from two postoperative patient volunteers, including six labeled daily activities as the initial a priori categories in the training pool. Experimental results demonstrate that the proposed self-learning scheme for activity recognition works very well for most cases. When there exist several types of unseen activities without any a priori information, the accuracy reaches above 80% after the self-learning process converges.
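
    A rough sketch of the self-learning step: samples that match no known category well are clustered, given fresh labels, and merged back into the training pool. The distance threshold, the nearest-centroid matching criterion and the cluster count are illustrative assumptions, not the paper's exact rules.

    ```python
    # Sketch: cluster poorly matched samples into new categories and grow the training set.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neighbors import NearestCentroid

    def self_learning_update(X_train, y_train, X_new, threshold=3.0, n_new_clusters=2):
        nc = NearestCentroid().fit(X_train, y_train)
        dists = np.array([np.min(np.linalg.norm(nc.centroids_ - x, axis=1)) for x in X_new])
        unknown = X_new[dists > threshold]                   # samples matching no known class
        if len(unknown) >= n_new_clusters:
            km = KMeans(n_clusters=n_new_clusters, n_init=10).fit(unknown)
            start = len(np.unique(y_train))                  # assumes labels are 0..K-1
            new_labels = km.labels_ + start                  # fresh category labels
            X_train = np.vstack([X_train, unknown])
            y_train = np.concatenate([y_train, new_labels])
        return X_train, y_train
    ```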

  17. Activity reductions in perirhinal cortex predict conceptual priming and familiarity-based recognition

    PubMed Central

    Wang, Wei-chun; Ranganath, Charan; Yonelinas, Andrew P

    2013-01-01

    Although it is well established that regions in the medial temporal lobes are critical for explicit memory, recent work has suggested that one medial temporal lobe subregion – the perirhinal cortex (PRC) – may also support conceptual priming, a form of implicit memory. Here, we sought to investigate whether activity reductions in PRC, previously linked to familiarity-based recognition, might also support conceptual implicit memory retrieval. Using a free association priming task, the current study tested the prediction that PRC indexes conceptual priming independent of contributions from perceptual and response repetition. Participants first completed an incidental semantic encoding task outside of the MRI scanner. Next, they were scanned during performance of a free association priming task, followed by a recognition memory test. Results indicated successful conceptual priming was associated with decreased PRC activity, and that an overlapping region within the PRC also exhibited activity reductions that covaried with familiarity during the recognition memory test. Our results demonstrate that the PRC contributes to both conceptual priming and familiarity-based recognition, which may reflect a common role of this region in implicit and explicit memory retrieval. PMID:24157537

  18. Activity reductions in perirhinal cortex predict conceptual priming and familiarity-based recognition.

    PubMed

    Wang, Wei-Chun; Ranganath, Charan; Yonelinas, Andrew P

    2014-01-01

    Although it is well established that regions in the medial temporal lobes are critical for explicit memory, recent work has suggested that one medial temporal lobe subregion--the perirhinal cortex (PRC)--may also support conceptual priming, a form of implicit memory. Here, we sought to investigate whether activity reductions in PRC, previously linked to familiarity-based recognition, might also support conceptual implicit memory retrieval. Using a free association priming task, the current study tested the prediction that PRC indexes conceptual priming independent of contributions from perceptual and response repetition. Participants first completed an incidental semantic encoding task outside of the MRI scanner. Next, they were scanned during performance of a free association priming task, followed by a recognition memory test. Results indicated successful conceptual priming was associated with decreased PRC activity, and that an overlapping region within the PRC also exhibited activity reductions that covaried with familiarity during the recognition memory test. Our results demonstrate that the PRC contributes to both conceptual priming and familiarity-based recognition, which may reflect a common role of this region in implicit and explicit memory retrieval.

  19. Video-based convolutional neural networks for activity recognition from robot-centric videos

    NASA Astrophysics Data System (ADS)

    Ryoo, M. S.; Matthies, Larry

    2016-05-01

    In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs on first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.

  20. The application of EMD in activity recognition based on a single triaxial accelerometer.

    PubMed

    Liao, Mengjia; Guo, Yi; Qin, Yajie; Wang, Yuanyuan

    2015-01-01

    Activity recognition using wearable devices is a very popular research field. Among all wearable sensors, the accelerometer is one of the most common due to its versatility and relative ease of use. This paper proposes a novel method for activity recognition based on a single accelerometer. To extract activity information from the accelerometer data, two kinds of signal features are used. First, five features are calculated: the mean, the standard deviation, the entropy, the energy and the correlation. Then a method called empirical mode decomposition (EMD) is used for feature extraction, since accelerometer data are non-linear and non-stationary. Several time series named intrinsic mode functions (IMFs) are obtained after the EMD, and additional features are added by computing the mean and standard deviation of the first three IMFs. A classifier called AdaBoost is adopted for the final activity recognition. In the experiments, a single sensor is separately positioned on the waist, left thigh, right ankle and right arm. Results show that the classification accuracy is 94.69%, 86.53%, 91.84% and 92.65%, respectively. These relatively high performances demonstrate that activities can be detected irrespective of sensor position, reducing problems such as movement constraint and discomfort.
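
    A minimal sketch of the feature pipeline, assuming the PyEMD package (installed as EMD-signal) provides the decomposition: basic window statistics are augmented with the mean and standard deviation of the first three IMFs, and AdaBoost classifies the windows. The window length, padding rule and statistic subset (the entropy, energy and correlation features are omitted) are assumptions for brevity.

    ```python
    # Sketch: EMD-augmented window features classified with AdaBoost.
    import numpy as np
    from PyEMD import EMD
    from sklearn.ensemble import AdaBoostClassifier

    def window_features(acc_magnitude):
        x = np.asarray(acc_magnitude, dtype=float)
        feats = [x.mean(), x.std()]                          # basic statistics (subset only)
        imfs = EMD().emd(x)                                  # intrinsic mode functions
        for imf in imfs[:3]:                                 # first three IMFs
            feats.extend([imf.mean(), imf.std()])
        while len(feats) < 8:                                # pad if fewer than 3 IMFs found
            feats.append(0.0)
        return np.array(feats)

    # X stacks window_features() over labelled windows from one body position (random here).
    rng = np.random.default_rng(4)
    X = np.vstack([window_features(rng.normal(size=256)) for _ in range(40)])
    y = rng.integers(0, 5, 40)
    print(AdaBoostClassifier().fit(X, y).score(X, y))
    ```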

  1. Age differences in hippocampal activation during gist-based false recognition.

    PubMed

    Paige, Laura E; Cassidy, Brittany S; Schacter, Daniel L; Gutchess, Angela H

    2016-10-01

    Age-related increases in reliance on gist-based processes can cause increased false recognition. Understanding the neural basis for this increase helps to elucidate a mechanism underlying this vulnerability in memory. We assessed age differences in gist-based false memory by increasing image set size at encoding, thereby increasing the rate of false alarms. False alarms during a recognition test elicited increased hippocampal activity for older adults as compared to younger adults for the small set sizes, whereas the age groups had similar hippocampal activation for items associated with larger set sizes. Interestingly, younger adults had stronger connectivity between the hippocampus and posterior temporal regions relative to older adults during false alarms for items associated with large versus small set sizes. With increased gist, younger adults might rely more on additional processes (e.g., semantic associations) during recognition than older adults. Parametric modulation revealed that younger adults had greater anterior cingulate activity than older adults with decreasing set size, perhaps indicating difficulty in using monitoring processes in error-prone situations. PMID:27460152

  3. The research of multi-frame target recognition based on laser active imaging

    NASA Astrophysics Data System (ADS)

    Wang, Can-jin; Sun, Tao; Wang, Tin-feng; Chen, Juan

    2013-09-01

    Laser active imaging is suited to conditions such as no temperature difference between target and background, pitch-black night, and bad visibility. It can also be used to detect a faint target at long range or a small target in deep space, with the advantages of high definition and good contrast; in short, it is largely immune to the environment. However, due to the effects of long distance, limited laser energy and atmospheric backscatter, it is impossible to illuminate the whole scene at the same time. This means that the target in every single frame is unevenly or only partly illuminated, which makes recognition more difficult. At the same time, the speckle noise that is common in laser active imaging blurs the images. In this paper we study laser active imaging and propose a new target recognition method based on multi-frame images. Firstly, multiple laser pulses are used to obtain sub-images of different parts of the scene. A denoising method combining a homomorphic filter with wavelet-domain SURE is used to suppress speckle noise, and blind deconvolution is introduced to obtain low-noise, clear sub-images. These sub-images are then registered and stitched to form a completely and uniformly illuminated scene image. After that, a new target recognition method based on contour moments is proposed. A Canny operator is used to obtain contours, and for each contour seven invariant Hu moments are calculated to generate the feature vectors. Finally, the feature vectors are input into a BP neural network with two hidden layers for classification. Experimental results indicate that the proposed algorithm achieves a high recognition rate and satisfactory real-time performance for laser active imaging.
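
    A small sketch of the recognition stage only: Canny edges, contour extraction and seven Hu moments per contour with OpenCV, fed to a two-hidden-layer back-propagation network from scikit-learn. It assumes OpenCV 4.x (where findContours returns two values); the edge thresholds, log-scaling of the moments and network sizes are illustrative choices, not the paper's settings.

    ```python
    # Sketch: contour Hu-moment features + a BP neural network with two hidden layers.
    import cv2
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def contour_hu_features(gray_image):
        edges = cv2.Canny(gray_image, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        feats = []
        for c in contours:
            hu = cv2.HuMoments(cv2.moments(c)).flatten()     # seven invariant moments
            # Log-scale the moments, which span many orders of magnitude.
            feats.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-30))
        return np.array(feats)

    # Toy usage on a synthetic image containing a filled rectangle.
    img = np.zeros((100, 100), dtype=np.uint8)
    cv2.rectangle(img, (20, 30), (70, 80), 255, -1)
    print(contour_hu_features(img).shape)                    # (n_contours, 7)

    # Each row of a training matrix would be one contour's Hu vector with its class label.
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
    # clf.fit(X_train, y_train); clf.predict(contour_hu_features(test_image))
    ```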

  4. A Novel Wearable Sensor-Based Human Activity Recognition Approach Using Artificial Hydrocarbon Networks

    PubMed Central

    Ponce, Hiram; Martínez-Villaseñor, María de Lourdes; Miralles-Pechuán, Luis

    2016-01-01

    Human activity recognition has gained more interest in several research communities, given that understanding user activities and behavior helps to deliver proactive and personalized services. There are many examples of health systems improved by human activity recognition. Nevertheless, the human activity recognition classification process is not an easy task: different types of noise in wearable sensor data frequently hamper it. In order to develop a successful activity recognition system, it is necessary to use stable and robust machine learning techniques capable of dealing with noisy data. In this paper, we present the artificial hydrocarbon networks (AHN) technique to the human activity recognition community. Our novel artificial hydrocarbon networks approach is suitable for physical activity recognition, tolerant of noise from corrupted sensor data, and robust to other sensor data issues. We show that the AHN classifier is very competitive for physical activity recognition and very robust in comparison with other well-known machine learning methods. PMID:27399696

  6. Human activity recognition based on feature selection in smart home using back-propagation algorithm.

    PubMed

    Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei

    2014-09-01

    In this paper, the back-propagation (BP) algorithm is used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection over observed motion sensor events is discussed and tested. The activity recognition performance of the BP-trained neural network is then evaluated and compared with two probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy; selecting unsuitable feature datasets increases the computational complexity and degrades the activity recognition accuracy. Furthermore, the BP-trained neural network achieves relatively better activity recognition performance than the NB classifier and the HMM.
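
    A compact sketch of the two stages named above: features ranked by a simple inter-class distance criterion (a stand-in for the paper's exact measure), the top-k kept, and a back-propagation network trained on them. The scoring formula, k, and network size are illustrative assumptions.

    ```python
    # Sketch: inter-class distance feature ranking + a BP-trained neural network.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def interclass_distance_scores(X, y):
        classes = np.unique(y)
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        stds = np.array([X[y == c].std(axis=0) for c in classes]) + 1e-9
        # Average pairwise separation of class means, normalised by within-class spread.
        score = np.zeros(X.shape[1])
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                score += np.abs(means[i] - means[j]) / (stds[i] + stds[j])
        return score

    def select_and_train(X, y, k=10):
        top = np.argsort(interclass_distance_scores(X, y))[::-1][:k]
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(X[:, top], y)
        return clf, top
    ```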

  7. Active destabilization of base pairs by a DNA glycosylase wedge initiates damage recognition

    PubMed Central

    Kuznetsov, Nikita A.; Bergonzo, Christina; Campbell, Arthur J.; Li, Haoquan; Mechetin, Grigory V.; de los Santos, Carlos; Grollman, Arthur P.; Fedorova, Olga S.; Zharkov, Dmitry O.; Simmerling, Carlos

    2015-01-01

    Formamidopyrimidine-DNA glycosylase (Fpg) excises 8-oxoguanine (oxoG) from DNA but ignores normal guanine. We combined molecular dynamics simulation and stopped-flow kinetics with fluorescence detection to track the events in the recognition of oxoG by Fpg and its mutants with a key phenylalanine residue, which intercalates next to the damaged base, changed to either alanine (F110A) or fluorescent reporter tryptophan (F110W). Guanine was sampled by Fpg, as evident from the F110W stopped-flow traces, but less extensively than oxoG. The wedgeless F110A enzyme could bend DNA but failed to proceed further in oxoG recognition. Modeling of the base eversion with energy decomposition suggested that the wedge destabilizes the intrahelical base primarily through buckling both surrounding base pairs. Replacement of oxoG with abasic (AP) site rescued the activity, and calculations suggested that wedge insertion is not required for AP site destabilization and eversion. Our results suggest that Fpg, and possibly other DNA glycosylases, convert part of the binding energy into active destabilization of their substrates, using the energy differences between normal and damaged bases for fast substrate discrimination. PMID:25520195

  8. Accelerometer signal-based human activity recognition using augmented autoregressive model coefficients and artificial neural nets.

    PubMed

    Khan, A M; Lee, Y K; Kim, T S

    2008-01-01

    Automatic recognition of human activities is one of the important and challenging research areas in proactive and ubiquitous computing. In this work, we present some preliminary results on recognizing human activities using augmented features extracted from activity signals measured with a single triaxial accelerometer sensor and artificial neural nets. The features include autoregressive (AR) modeling coefficients of the activity signals, signal magnitude areas (SMA), and tilt angles (TA). We have recognized four human activities using AR coefficients (ARC) only, ARC with SMA, and ARC with SMA and TA. With the last augmented features, we achieved a recognition rate above 99% for all four activities: lying, standing, walking, and running. With our proposed technique, real-time recognition of some human activities is possible.
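
    A small sketch of how such an augmented feature vector could be assembled from one accelerometer window: per-axis AR coefficients from an ordinary least-squares fit, the signal magnitude area, and a tilt angle. The AR order, window handling and the particular tilt formula are illustrative assumptions, not the paper's exact definitions.

    ```python
    # Sketch: AR coefficients + signal magnitude area + tilt angle per window.
    import numpy as np

    def ar_coefficients(x, order=3):
        # Fit x[t] = a1*x[t-1] + ... + ap*x[t-p] by ordinary least squares.
        X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
        return np.linalg.lstsq(X, x[order:], rcond=None)[0]

    def window_features(ax, ay, az, order=3):
        arc = np.concatenate([ar_coefficients(a, order) for a in (ax, ay, az)])
        sma = np.mean(np.abs(ax) + np.abs(ay) + np.abs(az))          # signal magnitude area
        mean_vec = np.array([np.mean(ax), np.mean(ay), np.mean(az)])
        tilt = np.degrees(np.arccos(np.clip(
            mean_vec[2] / (np.linalg.norm(mean_vec) + 1e-9), -1.0, 1.0)))
        return np.concatenate([arc, [sma, tilt]])

    # Toy usage on random axis signals standing in for one accelerometer window.
    rng = np.random.default_rng(8)
    print(window_features(rng.normal(size=128), rng.normal(size=128),
                          rng.normal(size=128) + 1.0).shape)         # (3*order + 2,)
    ```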

  9. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera.

    PubMed

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D; Sclabassi, Robert J; Mao, Zhi-Hong; Sun, Mingui

    2011-06-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with the existing methods such as the Good Features detector and the Speed-up Robust Feature (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences are determined which define representative motion vectors, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing a camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques such as the K-Nearest Neighbor (KNN), Naive Bayesian and Support Vector Machine (SVM). The results show that many types of physical activities can be recognized from field acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances.
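
    A rough sketch of the indirect-motion idea: dense optical flow between consecutive frames stands in for the paper's multiscale pixel-correspondence step, and simple global motion statistics per frame form the feature vector for a classifier such as KNN, Naive Bayes or SVM. The flow parameters and the histogram binning are illustrative assumptions.

    ```python
    # Sketch: camera-motion statistics from dense optical flow between frames.
    import cv2
    import numpy as np

    def motion_features(prev_gray, curr_gray):
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        # Global motion statistics plus a coarse direction histogram.
        hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
        return np.concatenate([[mag.mean(), mag.std()], hist / (hist.sum() + 1e-9)])

    # Toy usage: a random frame and a horizontally shifted copy simulating camera motion.
    rng = np.random.default_rng(5)
    f1 = rng.integers(0, 255, (120, 160), dtype=np.uint8)
    f2 = np.roll(f1, 2, axis=1)
    print(motion_features(f1, f2))
    ```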

  10. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera

    PubMed Central

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D.; Sclabassi, Robert J.; Mao, Zhi-Hong; Sun, Mingui

    2011-01-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with the existing methods such as the Good Features detector and the Speed-up Robust Feature (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences are determined which define representative motion vectors, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing a camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques such as the K-Nearest Neighbor (KNN), Naive Bayesian and Support Vector Machine (SVM). The results show that many types of physical activities can be recognized from field acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances. PMID:21779142

  11. A triaxial accelerometer-based physical-activity recognition via augmented-signal features and a hierarchical recognizer.

    PubMed

    Khan, Adil Mehmood; Lee, Young-Koo; Lee, Sungyoung Y; Kim, Tae-Seong

    2010-09-01

    Physical-activity recognition via wearable sensors can provide valuable information regarding an individual's degree of functional ability and lifestyle. In this paper, we present an accelerometer sensor-based approach for human-activity recognition. Our proposed recognition method uses a hierarchical scheme. At the lower level, the state to which an activity belongs, i.e., static, transition, or dynamic, is recognized by means of statistical signal features and artificial-neural nets (ANNs). The upper level recognition uses the autoregressive (AR) modeling of the acceleration signals, thus, incorporating the derived AR-coefficients along with the signal-magnitude area and tilt angle to form an augmented-feature vector. The resulting feature vector is further processed by the linear-discriminant analysis and ANNs to recognize a particular human activity. Our proposed activity-recognition method recognizes three states and 15 activities with an average accuracy of 97.9% using only a single triaxial accelerometer attached to the subject's chest.

  12. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    PubMed Central

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-01-01

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital. PMID:24991942

  13. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    PubMed

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensor technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors, which produce depth or distance information. In this paper, a depth-based life-logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced, which are further used for activity recognition and generating life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life-logging features against principal component and independent component features and achieved satisfactory recognition rates compared with conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.

  14. A Fuzzy Logic Prompting Mechanism Based on Pattern Recognition and Accumulated Activity Effective Index Using a Smartphone Embedded Sensor.

    PubMed

    Liu, Chung-Tse; Chan, Chia-Tai

    2016-01-01

    Sufficient physical activity can reduce many adverse conditions and contribute to a healthy life. Nevertheless, inactivity is prevalent on an international scale, and improving physical activity is an essential concern for public health. Reminders that help people change their health behaviors are widely applied in health care services. However, time-based reminders that deliver periodic prompts suffer from flexibility and dependency issues, which may decrease prompt effectiveness. We propose a fuzzy logic prompting mechanism, the Accumulated Activity Effective Index Reminder (AAEIReminder), based on pattern recognition and activity effective analysis to manage physical activity. AAEIReminder recognizes activity levels using a smartphone-embedded sensor for pattern recognition and analyzes the amount of physical activity in the activity effective analysis. AAEIReminder can infer activity situations, such as the amount of physical activity and days spent exercising, through fuzzy logic, and decides whether a prompt should be delivered to the user. The prompting system was implemented on smartphones and used in a short-term real-world trial by seventeen participants for validation. The results demonstrated that AAEIReminder is feasible: the fuzzy logic prompting mechanism can deliver prompts automatically based on pattern recognition and activity effective analysis. AAEIReminder provides flexibility, which may increase the prompts' efficiency. PMID:27548184
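
    A toy sketch of the fuzzy inference idea: shoulder-shaped membership functions over an accumulated activity index and days exercised, with two hand-written rules deciding whether a prompt should be delivered. The membership shapes, rule set and decision threshold are illustrative assumptions, not AAEIReminder's actual design.

    ```python
    # Sketch: fuzzy memberships and rules for a prompt/no-prompt decision.
    import numpy as np

    def falling(x, a, b):
        """1 below a, 0 above b, linear in between (left shoulder)."""
        return float(np.clip((b - x) / (b - a), 0.0, 1.0))

    def rising(x, a, b):
        """0 below a, 1 above b, linear in between (right shoulder)."""
        return float(np.clip((x - a) / (b - a), 0.0, 1.0))

    def prompt_strength(activity_index, active_days):
        low_activity  = falling(activity_index, 30, 60)
        high_activity = rising(activity_index, 50, 80)
        few_days      = falling(active_days, 2, 4)
        many_days     = rising(active_days, 3, 5)
        fire_prompt = min(low_activity, few_days)            # rule 1: low AND few -> prompt
        fire_quiet  = max(high_activity, many_days)          # rule 2: high OR many -> stay quiet
        return fire_prompt / (fire_prompt + fire_quiet + 1e-9)

    print(prompt_strength(20, 1) > 0.5)    # sedentary week: deliver a prompt (True)
    print(prompt_strength(85, 5) > 0.5)    # active week: stay quiet (False)
    ```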

  15. A Fuzzy Logic Prompting Mechanism Based on Pattern Recognition and Accumulated Activity Effective Index Using a Smartphone Embedded Sensor.

    PubMed

    Liu, Chung-Tse; Chan, Chia-Tai

    2016-08-19

    Sufficient physical activity can reduce many adverse conditions and contribute to a healthy life. Nevertheless, inactivity is prevalent on an international scale, and improving physical activity is an essential concern for public health. Reminders that help people change their health behaviors are widely applied in health care services. However, time-based reminders that deliver periodic prompts suffer from flexibility and dependency issues, which may decrease prompt effectiveness. We propose a fuzzy logic prompting mechanism, the Accumulated Activity Effective Index Reminder (AAEIReminder), based on pattern recognition and activity effective analysis to manage physical activity. AAEIReminder recognizes activity levels using a smartphone-embedded sensor for pattern recognition and analyzes the amount of physical activity in the activity effective analysis. AAEIReminder can infer activity situations, such as the amount of physical activity and days spent exercising, through fuzzy logic, and decides whether a prompt should be delivered to the user. The prompting system was implemented on smartphones and used in a short-term real-world trial by seventeen participants for validation. The results demonstrated that AAEIReminder is feasible: the fuzzy logic prompting mechanism can deliver prompts automatically based on pattern recognition and activity effective analysis. AAEIReminder provides flexibility, which may increase the prompts' efficiency.

  17. Recognition of human activity characteristics based on state transitions modeling technique

    NASA Astrophysics Data System (ADS)

    Elangovan, Vinayak; Shirkhodaie, Amir

    2012-06-01

    Human Activity Discovery & Recognition (HADR) is a complex, diverse and challenging task, yet an active area of ongoing research in the Department of Defense. By detecting, tracking, and characterizing cohesive human interactional activity patterns, potential threats can be identified, which can significantly improve situation awareness, particularly in Persistent Surveillance Systems (PSS). Understanding the nature of such dynamic activities inevitably involves interpreting a collection of spatiotemporally correlated activities with respect to a known context. In this paper, we present a state transition model for recognizing the characteristics of human activities with a link to a prior context-based ontology. Modeling the state transitions between successive evidential events determines the activities' temperament. The proposed state transition model poses six categories of state transitions: object handling, visibility, entity-entity relation, human postures, human kinematics, and distance to target. The model generates semantic annotations describing the human interactional activities via a technique called Causal Event State Inference (CESI). The proposed approach uses a low-cost Kinect depth camera for indoor monitoring and a normal optical camera for outdoor monitoring. Experimental results are presented to demonstrate the effectiveness and efficiency of the proposed technique.

  18. Recognition of Daily Activity in Living Space based on Indoor Ambient Atmosphere and Acquiring Localized Information for Improvement of Recognition Accuracy

    NASA Astrophysics Data System (ADS)

    Hirasawa, Kazuki; Sawada, Shinya; Saitoh, Atsushi

    Systems that watch over the lives of elderly people are very important in a super-aged society such as Japan. In this paper, we describe a method to recognize a resident's daily activities using information about changes in the indoor ambient atmosphere. The measured environmental quantities are gas and smell, temperature, humidity, and brightness, all of which are closely related to the resident's daily activities. A measurement system with 7 sensors (4 gas sensors, a thermistor, a humidity sensor, and a CdS light sensor) was developed to capture indoor ambient atmosphere changes. Measurements were made in a one-room residential space, and 21-dimensional activity vectors were composed for each daily activity from the acquired data. These vectors were classified into 9 categories corresponding to the main activities using the Self-Organizing Map (SOM) method. The results show that recognition of the main daily activities based on indoor ambient atmosphere changes is possible. We also describe a method for acquiring information about local gas and smell changes. Gas and smell changes are related to daily activities, especially the very important actions of eating and drinking, and local information allows an activity to be associated with a place. For this purpose, a gas sensing module whose operation synchronizes with a human detection signal was developed and evaluated. The results show that the sensor module can acquire and emphasize local gas environment changes caused by a person's activity.
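
    A toy sketch of the SOM step, assuming the minisom package: 21-dimensional activity vectors are mapped onto a small self-organizing map, and each map node is labelled by majority vote of the training vectors it wins. The node-labelling step and all sizes are our illustrative additions, not the authors' procedure.

    ```python
    # Sketch: SOM clustering of 21-dimensional activity vectors with majority-vote node labels.
    import numpy as np
    from collections import Counter, defaultdict
    from minisom import MiniSom

    def train_som_classifier(X, y, grid=(4, 4), iterations=2000):
        som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0, learning_rate=0.5)
        som.random_weights_init(X)
        som.train_random(X, iterations)
        node_votes = defaultdict(list)
        for xi, yi in zip(X, y):
            node_votes[som.winner(xi)].append(yi)
        node_label = {n: Counter(v).most_common(1)[0][0] for n, v in node_votes.items()}
        return som, node_label

    def classify(som, node_label, x):
        return node_label.get(som.winner(x), None)           # None for unseen nodes

    # Toy usage with random stand-ins for the 21-dimensional activity vectors.
    rng = np.random.default_rng(6)
    X = rng.normal(size=(300, 21)); y = rng.integers(0, 9, 300)   # 9 main activity categories
    som, labels = train_som_classifier(X, y)
    print(classify(som, labels, X[0]))
    ```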

  19. Object recognition by active fusion

    NASA Astrophysics Data System (ADS)

    Prantl, Manfred; Kopp-Borotschnig, Hermann; Ganster, Harald; Sinclair, David; Pinz, Axel J.

    1996-10-01

    Today's computer vision applications often have to deal with multiple, uncertain, and incomplete visual information. In this paper, we apply a new method, termed 'active fusion', to the problem of generic object recognition. Active fusion provides a common framework for active selection and combination of information from multiple sources in order to arrive at a reliable result at reasonable costs. In our experimental setup we use a camera mounted on a 2m by 1.5m x/z-table observing objects placed on a rotating table. Zoom, pan, tilt, and aperture setting of the camera can be controlled by the system. We follow a part-based approach, trying to decompose objects into parts, which are modeled as geons. The active fusion system starts from an initial view of the objects placed on the table and is continuously trying to refine its current object hypotheses by requesting additional views. The implementation of active fusion on the basis of probability theory, Dempster-Shafer's theory of evidence and fuzzy set theory is discussed. First results demonstrating segmentation improvements by active fusion are presented.

  20. Function-based classification of carbohydrate-active enzymes by recognition of short, conserved peptide motifs.

    PubMed

    Busk, Peter Kamp; Lange, Lene

    2013-06-01

    Functional prediction of carbohydrate-active enzymes is difficult due to low sequence identity. However, similar enzymes often share a few short motifs, e.g., around the active site, even when the overall sequences are very different. To exploit this notion for functional prediction of carbohydrate-active enzymes, we developed a simple algorithm, peptide pattern recognition (PPR), that can divide proteins into groups of sequences that share a set of short conserved sequences. When this method was used on 118 glycoside hydrolase 5 proteins with 9% average pairwise identity and representing four characterized enzymatic functions, 97% of the proteins were sorted into groups correlating with their enzymatic activity. Furthermore, we analyzed 8,138 glycoside hydrolase 13 proteins including 204 experimentally characterized enzymes with 28 different functions. There was a 91% correlation between group and enzyme activity. These results indicate that the function of carbohydrate-active enzymes can be predicted with high precision by finding short, conserved motifs in their sequences. The glycoside hydrolase 61 family is important for fungal biomass conversion, but only a few proteins of this family have been functionally characterized. Interestingly, PPR divided 743 glycoside hydrolase 61 proteins into 16 subfamilies useful for targeted investigation of the function of these proteins and pinpointed three conserved motifs with putative importance for enzyme activity. Furthermore, the conserved sequences were useful for cloning of new, subfamily-specific glycoside hydrolase 61 proteins from 14 fungi. In conclusion, identification of conserved sequence motifs is a new approach to sequence analysis that can predict carbohydrate-active enzyme functions with high precision. PMID:23524681

  1. Function-based classification of carbohydrate-active enzymes by recognition of short, conserved peptide motifs.

    PubMed

    Busk, Peter Kamp; Lange, Lene

    2013-06-01

    Functional prediction of carbohydrate-active enzymes is difficult due to low sequence identity. However, similar enzymes often share a few short motifs, e.g., around the active site, even when the overall sequences are very different. To exploit this notion for functional prediction of carbohydrate-active enzymes, we developed a simple algorithm, peptide pattern recognition (PPR), that can divide proteins into groups of sequences that share a set of short conserved sequences. When this method was used on 118 glycoside hydrolase 5 proteins with 9% average pairwise identity and representing four characterized enzymatic functions, 97% of the proteins were sorted into groups correlating with their enzymatic activity. Furthermore, we analyzed 8,138 glycoside hydrolase 13 proteins including 204 experimentally characterized enzymes with 28 different functions. There was a 91% correlation between group and enzyme activity. These results indicate that the function of carbohydrate-active enzymes can be predicted with high precision by finding short, conserved motifs in their sequences. The glycoside hydrolase 61 family is important for fungal biomass conversion, but only a few proteins of this family have been functionally characterized. Interestingly, PPR divided 743 glycoside hydrolase 61 proteins into 16 subfamilies useful for targeted investigation of the function of these proteins and pinpointed three conserved motifs with putative importance for enzyme activity. Furthermore, the conserved sequences were useful for cloning of new, subfamily-specific glycoside hydrolase 61 proteins from 14 fungi. In conclusion, identification of conserved sequence motifs is a new approach to sequence analysis that can predict carbohydrate-active enzyme functions with high precision.

  2. Context based gait recognition

    NASA Astrophysics Data System (ADS)

    Bazazian, Shermin; Gavrilova, Marina

    2012-06-01

    Gait recognition has recently become a popular topic in the field of biometrics. However, the main hurdle is the insufficient recognition rate in the presence of low-quality samples. The main focus of this paper is to investigate how the performance of a gait recognition system can be improved using additional information about the behavioral patterns of users and the context in which samples have been taken. The obtained results show that combining context information with biometric data improves the performance of the system at very low cost. The amount of improvement depends on the distinctiveness of the behavioral patterns and the quality of the gait samples. With appropriately distinctive behavioral models, it is possible to achieve a 100% recognition rate.

  3. Physical activity recognition based on rotated acceleration data using quaternion in sedentary behavior: a preliminary study.

    PubMed

    Shin, Y E; Choi, W H; Shin, T M

    2014-01-01

    This paper suggests a quaternion-based physical activity assessment method. To reduce user inconvenience, activity was measured with a mobile device that was not placed at a fixed position. The recognition results were verified with various machine learning algorithms, such as a neural network (multilayer perceptron), a decision tree (J48), an SVM (support vector machine), and a naive Bayes classifier. All algorithms achieved over 97% accuracy, with the decision tree (J48) recognizing the activity with 98.35% accuracy. As a result, a physical activity assessment method based on acceleration data rotated using quaternions can classify sedentary behavior more accurately without regard to the device's position and orientation. PMID:25571109
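
    The core operation the abstract relies on, rotating raw acceleration vectors by a device-orientation quaternion so that features become independent of how the phone is carried, can be sketched as follows; the quaternion values, the sample, and the exact rotation convention are illustrative assumptions rather than the paper's implementation.

      import numpy as np

      def quat_rotate(q, v):
          """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
          w, x, y, z = q / np.linalg.norm(q)
          r = np.array([
              [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
              [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
              [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
          ])
          return r @ v

      # illustrative device-orientation quaternion and raw accelerometer sample (m/s^2)
      orientation = np.array([0.92, 0.10, 0.35, 0.15])
      raw_sample = np.array([0.3, -0.1, 9.7])
      rotated = quat_rotate(orientation, raw_sample)
      print(rotated)   # acceleration expressed independently of device orientation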

  4. Evaluation of a Smartphone-based Human Activity Recognition System in a Daily Living Environment.

    PubMed

    Lemaire, Edward D; Tundo, Marco D; Baddour, Natalie

    2015-01-01

    An evaluation method that includes continuous activities in a daily-living environment was developed for Wearable Mobility Monitoring Systems (WMMS) that attempt to recognize user activities. Participants performed a pre-determined set of daily living actions within a continuous test circuit that included mobility activities (walking, standing, sitting, lying, ascending/descending stairs), daily living tasks (combing hair, brushing teeth, preparing food, eating, washing dishes), and subtle environment changes (opening doors, using an elevator, walking on inclines, traversing staircase landings, walking outdoors). To evaluate WMMS performance on this circuit, fifteen able-bodied participants completed the tasks while wearing a smartphone at their right front pelvis. The WMMS application used smartphone accelerometer and gyroscope signals to classify activity states. A gold standard comparison data set was created by video-recording each trial and manually logging activity onset times. Gold standard and WMMS data were analyzed offline. Three classification sets were calculated for each circuit: (i) mobility or immobility, (ii) sit, stand, lie, or walking, and (iii) sit, stand, lie, walking, climbing stairs, or small standing movement. Sensitivities, specificities, and F-scores for activity categorization and changes-of-state were calculated. The mobile versus immobile classification set had a sensitivity of 86.30% ± 7.2% and specificity of 98.96% ± 0.6%, while the second classification set had a sensitivity of 88.35% ± 7.80% and specificity of 98.51% ± 0.62%. For the third classification set, sensitivity was 84.92% ± 6.38% and specificity was 98.17% ± 0.62%. F1 scores for the first, second and third classification sets were 86.17 ± 6.3, 80.19 ± 6.36, and 78.42 ± 5.96, respectively. This demonstrates that WMMS performance depends on the evaluation protocol in addition to the algorithms. The demonstrated protocol can be used and tailored for evaluating human activity recognition systems.

  5. Pattern activation/recognition theory of mind.

    PubMed

    du Castel, Bertrand

    2015-01-01

    In his 2012 book How to Create a Mind, Ray Kurzweil defines a "Pattern Recognition Theory of Mind" that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call "Pattern Activation/Recognition Theory of Mind." While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation.

  6. Pattern activation/recognition theory of mind

    PubMed Central

    du Castel, Bertrand

    2015-01-01

    In his 2012 book How to Create a Mind, Ray Kurzweil defines a “Pattern Recognition Theory of Mind” that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call “Pattern Activation/Recognition Theory of Mind.” While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation. PMID:26236228

  7. Pattern activation/recognition theory of mind.

    PubMed

    du Castel, Bertrand

    2015-01-01

    In his 2012 book How to Create a Mind, Ray Kurzweil defines a "Pattern Recognition Theory of Mind" that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call "Pattern Activation/Recognition Theory of Mind." While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation. PMID:26236228

  8. Activity recognition from video using layered approach

    NASA Astrophysics Data System (ADS)

    McPherson, Charles A.; Irvine, John M.; Young, Mon; Stefanidis, Anthony

    2012-01-01

    The adversary in current threat situations can no longer be identified by what they are, but by what they are doing. This has led to a large increase in the use of video surveillance systems for security and defense applications. With the quantity of video surveillance data at the disposal of organizations responsible for protecting military and civilian lives come issues of storing and screening the data for events and activities of interest. Activity recognition from video for such applications seeks to develop automated screening based upon the recognition of activities of interest, rather than merely the presence of specific persons or vehicle classes as developed for the Cold War problem of "Find the T72 Tank". This paper explores numerous approaches to activity recognition, all of which examine heuristic, semantic, and syntactic methods based upon tokens derived from the video. The proposed architecture uses a multi-level approach that divides the problem into three or more tiers of recognition, each employing the techniques best suited to that tier, using heuristics, syntactic recognition, and HMMs over token strings to form higher-level interpretations.

  9. Visual pattern recognition based on spatio-temporal patterns of retinal ganglion cells’ activities

    PubMed Central

    Jing, Wei; Liu, Wen-Zhong; Gong, Xin-Wei; Gong, Hai-Qing

    2010-01-01

    Neural information is processed based on integrated activities of relevant neurons. Concerted population activity is one of the important ways for retinal ganglion cells to efficiently organize and process visual information. In the present study, the spike activities of bullfrog retinal ganglion cells in response to three different visual patterns (checker-board, vertical gratings and horizontal gratings) were recorded using multi-electrode arrays. A measurement of subsequence distribution discrepancy (MSDD) was applied to identify the spatio-temporal patterns of retinal ganglion cells’ activities in response to different stimulation patterns. The results show that the population activity patterns were different in response to different stimulation patterns, such difference in activity pattern was consistently detectable even when visual adaptation occurred during repeated experimental trials. Therefore, the stimulus pattern can be reliably discriminated according to the spatio-temporal pattern of the neuronal activities calculated using the MSDD algorithm. PMID:21886670

  10. Making Activity Recognition Robust against Deceptive Behavior.

    PubMed

    Saeb, Sohrab; Körding, Konrad; Mohr, David C

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually do. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118

  11. Making Activity Recognition Robust against Deceptive Behavior

    PubMed Central

    Saeb, Sohrab; Körding, Konrad; Mohr, David C.

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually do. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118

  12. Feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients.

    PubMed

    Capela, Nicole A; Lemaire, Edward D; Baddour, Natalie

    2015-01-01

    Human activity recognition (HAR), using wearable sensors, is a growing area with the potential to provide valuable information on patient mobility to rehabilitation specialists. Smartphones with accelerometer and gyroscope sensors are a convenient, minimally invasive, and low cost approach for mobility monitoring. HAR systems typically pre-process raw signals, segment the signals, and then extract features to be used in a classifier. Feature selection is a crucial step in the process to reduce potentially large data dimensionality and provide viable parameters to enable activity classification. Most HAR systems are customized to an individual research group, including a unique data set, classes, algorithms, and signal features. These data sets are obtained predominantly from able-bodied participants. In this paper, smartphone accelerometer and gyroscope sensor data were collected from populations that can benefit from human activity recognition: able-bodied, elderly, and stroke patients. Data from a consecutive sequence of 41 mobility tasks (18 different tasks) were collected for a total of 44 participants. Seventy-six signal features were calculated and subsets of these features were selected using three filter-based, classifier-independent, feature selection methods (Relief-F, Correlation-based Feature Selection, Fast Correlation Based Filter). The feature subsets were then evaluated using three generic classifiers (Naïve Bayes, Support Vector Machine, j48 Decision Tree). Common features were identified for all three populations, although the stroke population subset had some differences from both able-bodied and elderly sets. Evaluation with the three classifiers showed that the feature subsets produced similar or better accuracies than classification with the entire feature set. Therefore, since these feature subsets are classifier-independent, they should be useful for developing and improving HAR systems across and within populations.

  13. Feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients.

    PubMed

    Capela, Nicole A; Lemaire, Edward D; Baddour, Natalie

    2015-01-01

    Human activity recognition (HAR), using wearable sensors, is a growing area with the potential to provide valuable information on patient mobility to rehabilitation specialists. Smartphones with accelerometer and gyroscope sensors are a convenient, minimally invasive, and low cost approach for mobility monitoring. HAR systems typically pre-process raw signals, segment the signals, and then extract features to be used in a classifier. Feature selection is a crucial step in the process to reduce potentially large data dimensionality and provide viable parameters to enable activity classification. Most HAR systems are customized to an individual research group, including a unique data set, classes, algorithms, and signal features. These data sets are obtained predominantly from able-bodied participants. In this paper, smartphone accelerometer and gyroscope sensor data were collected from populations that can benefit from human activity recognition: able-bodied, elderly, and stroke patients. Data from a consecutive sequence of 41 mobility tasks (18 different tasks) were collected for a total of 44 participants. Seventy-six signal features were calculated and subsets of these features were selected using three filter-based, classifier-independent, feature selection methods (Relief-F, Correlation-based Feature Selection, Fast Correlation Based Filter). The feature subsets were then evaluated using three generic classifiers (Naïve Bayes, Support Vector Machine, j48 Decision Tree). Common features were identified for all three populations, although the stroke population subset had some differences from both able-bodied and elderly sets. Evaluation with the three classifiers showed that the feature subsets produced similar or better accuracies than classification with the entire feature set. Therefore, since these feature subsets are classifier-independent, they should be useful for developing and improving HAR systems across and within populations. PMID:25885272

  14. Active place recognition using image signatures

    NASA Astrophysics Data System (ADS)

    Engelson, Sean P.

    1992-11-01

    For reliable navigation, a mobile robot needs to be able to recognize where it is in the world. We previously described an efficient and effective image-based representation of perceptual information for place recognition. Each place is associated with a set of stored image signatures, each a matrix of numbers derived by evaluating some measurement functions over large blocks of pixels. One difficulty, though, is the large number of inherently ambiguous signatures which bloats the database and makes recognition more difficult. Furthermore, since small differences in orientation can produce very different images, reliable recognition requires many images. These problems can be ameliorated by using active methods to select the best signatures to use for the recognition. Two criteria for good images are distinctiveness (is the scene distinguishable from others?) and stability (how much do small viewpoint motions change image recognizability?). We formulate several heuristic distinctiveness metrics which are good predictors of real image distinctiveness. These functions are then used to direct the motion of the camera to find locally distinctive views for use in recognition. This method also produces some modicum of stability, since it uses a form of local optimization. We present the results of applying this method with a camera mounted on a pan-tilt platform.

  15. A Lightweight Hierarchical Activity Recognition Framework Using Smartphone Sensors

    PubMed Central

    Han, Manhyung; Bang, Jae Hun; Nugent, Chris; McClean, Sally; Lee, Sungyoung

    2014-01-01

    Activity recognition for the purposes of recognizing a user's intentions using multimodal sensors is becoming a widely researched topic largely based on the prevalence of the smartphone. Previous studies have reported the difficulty in recognizing life-logs by only using a smartphone due to the challenges with activity modeling and real-time recognition. In addition, recognizing life-logs is difficult due to the absence of an established framework which enables the use of different sources of sensor data. In this paper, we propose a smartphone-based Hierarchical Activity Recognition Framework which extends the Naïve Bayes approach for the processing of activity modeling and real-time activity recognition. The proposed algorithm demonstrates higher accuracy than the Naïve Bayes approach and also enables the recognition of a user's activities within a mobile environment. The proposed algorithm has the ability to classify fifteen activities with an average classification accuracy of 92.96%. PMID:25184486

  16. Child activity recognition based on cooperative fusion model of a triaxial accelerometer and a barometric pressure sensor.

    PubMed

    Nam, Yunyoung; Park, Jung Wook

    2013-03-01

    This paper presents a child activity recognition approach using a single 3-axis accelerometer and a barometric pressure sensor worn on the waist to prevent child accidents such as unintentional injuries at home. Labeled accelerometer data were collected from children of both sexes aged 16 to 29 months. To recognize daily activities, the mean, standard deviation, and slope are calculated as time-domain features over sliding windows. In addition, FFT analysis is adopted to extract frequency-domain features of the aggregated data, from which the energy and correlation of the acceleration data are calculated. Child activities are classified into 11 daily activities: wiggling, rolling, standing still, standing up, sitting down, walking, toddling, crawling, climbing up, climbing down, and stopping. The overall accuracy of activity recognition was 98.43% using only a single wearable triaxial accelerometer and a barometric pressure sensor with a support vector machine.
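
    A minimal sketch of the kind of sliding-window feature extraction described above (mean, standard deviation, slope, spectral energy, and axis correlation per window, for the accelerometer only) is given below; the window length, overlap, and sampling rate are illustrative assumptions, not the paper's settings.

      import numpy as np

      def window_features(acc, win=128, step=64):
          """acc: (n_samples, 3) tri-axial signal; returns one feature row per window."""
          rows = []
          t = np.arange(win)
          for start in range(0, len(acc) - win + 1, step):
              w = acc[start:start + win]
              mean = w.mean(axis=0)
              std = w.std(axis=0)
              # slope of a least-squares line fitted to each axis
              slope = np.array([np.polyfit(t, w[:, i], 1)[0] for i in range(3)])
              # spectral energy per axis, excluding the DC term
              spec = np.abs(np.fft.rfft(w, axis=0)) ** 2
              energy = spec[1:].sum(axis=0) / win
              # pairwise correlations between axes
              c = np.corrcoef(w.T)
              corr = np.array([c[0, 1], c[0, 2], c[1, 2]])
              rows.append(np.concatenate([mean, std, slope, energy, corr]))
          return np.vstack(rows)

      acc = np.random.randn(500, 3)        # illustrative 10 s of 50 Hz tri-axial data
      print(window_features(acc).shape)    # (n_windows, 15)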

  17. Automated recognition and characterization of solar active regions based on the SOHO/MDI images

    NASA Technical Reports Server (NTRS)

    Pap, J. M.; Turmon, M.; Mukhtar, S.; Bogart, R.; Ulrich, R.; Froehlich, C.; Wehrli, C.

    1997-01-01

    The first results of a new method to identify and characterize the various surface structures on the sun, which may contribute to the changes in solar total and spectral irradiance, are shown. The full disk magnetograms (1024 x 1024 pixels) of the Michelson Doppler Imager (MDI) experiment onboard SOHO are analyzed. Use of a Bayesian inference scheme allows objective, uniform, automated processing of a long sequence of images. The main goal is to identify the solar magnetic features causing irradiance changes. The results presented are based on a pilot time interval of August 1996.

  18. Window Size Impact in Human Activity Recognition

    PubMed Central

    Banos, Oresti; Galvez, Juan-Manuel; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2014-01-01

    Signal segmentation is a crucial stage in the activity recognition process; however, this has been rarely and vaguely characterized so far. Windowing approaches are normally used for segmentation, but no clear consensus exists on which window size should be preferably employed. In fact, most designs normally rely on figures used in previous works, but with no strict studies that support them. Intuitively, decreasing the window size allows for a faster activity detection, as well as reduced resources and energy needs. On the contrary, large data windows are normally considered for the recognition of complex activities. In this work, we present an extensive study to fairly characterize the windowing procedure, to determine its impact within the activity recognition process and to help clarify some of the habitual assumptions made during the recognition system design. To that end, some of the most widely used activity recognition procedures are evaluated for a wide range of window sizes and activities. From the evaluation, the interval 1–2 s proves to provide the best trade-off between recognition speed and accuracy. The study, specifically intended for on-body activity recognition systems, further provides designers with a set of guidelines devised to facilitate the system definition and configuration according to the particular application requirements and target activities. PMID:24721766
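
    To make the trade-off described above concrete, the snippet below sweeps a set of window sizes and reports classification accuracy for each; it is a generic illustration using scikit-learn on synthetic data, not the authors' evaluation code, and the 50 Hz sampling rate and simple features are assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      FS = 50  # assumed sampling rate in Hz

      def segment(signal, labels, win):
          """Cut a labelled 1-D signal into non-overlapping windows with simple features."""
          X, y = [], []
          for start in range(0, len(signal) - win + 1, win):
              w = signal[start:start + win]
              X.append([w.mean(), w.std(), w.min(), w.max()])
              y.append(np.bincount(labels[start:start + win]).argmax())  # majority label
          return np.array(X), np.array(y)

      # synthetic two-activity signal: low-variance "sitting" vs high-variance "walking"
      rng = np.random.default_rng(0)
      labels = np.repeat([0, 1] * 20, FS * 5)                 # 5 s bouts of each activity
      signal = rng.normal(0, np.where(labels == 0, 0.2, 1.0))

      for seconds in (0.5, 1, 2, 4):
          X, y = segment(signal, labels, int(seconds * FS))
          acc = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                                X, y, cv=5).mean()
          print(f"window {seconds:>3} s -> accuracy {acc:.2f}")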

  19. Window size impact in human activity recognition.

    PubMed

    Banos, Oresti; Galvez, Juan-Manuel; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2014-01-01

    Signal segmentation is a crucial stage in the activity recognition process; however, this has been rarely and vaguely characterized so far. Windowing approaches are normally used for segmentation, but no clear consensus exists on which window size should be preferably employed. In fact, most designs normally rely on figures used in previous works, but with no strict studies that support them. Intuitively, decreasing the window size allows for a faster activity detection, as well as reduced resources and energy needs. On the contrary, large data windows are normally considered for the recognition of complex activities. In this work, we present an extensive study to fairly characterize the windowing procedure, to determine its impact within the activity recognition process and to help clarify some of the habitual assumptions made during the recognition system design. To that end, some of the most widely used activity recognition procedures are evaluated for a wide range of window sizes and activities. From the evaluation, the interval 1-2 s proves to provide the best trade-off between recognition speed and accuracy. The study, specifically intended for on-body activity recognition systems, further provides designers with a set of guidelines devised to facilitate the system definition and configuration according to the particular application requirements and target activities. PMID:24721766

  20. From estimating activation locality to predicting disorder: A review of pattern recognition for neuroimaging-based psychiatric diagnostics.

    PubMed

    Wolfers, Thomas; Buitelaar, Jan K; Beckmann, Christian F; Franke, Barbara; Marquand, Andre F

    2015-10-01

    Psychiatric disorders are increasingly being recognised as having a biological basis, but their diagnosis is made exclusively behaviourally. A promising approach for 'biomarker' discovery has been based on pattern recognition methods applied to neuroimaging data, which could yield clinical utility in the future. In this review we survey the literature on pattern recognition for making diagnostic predictions in psychiatric disorders, and evaluate progress made in translating such findings towards clinical application. We evaluate studies on many criteria, including the data modalities used, the types of features extracted and the algorithms applied. We identify problems common to many studies, such as a relatively small sample size and a primary focus on estimating generalisability within a single study. Furthermore, we highlight challenges that are not widely acknowledged in the field, including the importance of accommodating disease prevalence, the necessity of more extensive validation using large, carefully acquired samples, and the need for methodological innovations to improve accuracy and to discriminate between multiple disorders simultaneously. Finally, we identify specific clinical contexts in which pattern recognition can add value in the short to medium term.

  1. Active Finger Recognition from Surface EMG Signal Using Bayesian Filter

    NASA Astrophysics Data System (ADS)

    Araki, Nozomu; Hoashi, Yuki; Konishi, Yasuo; Mabuchi, Kunihiko; Ishigaki, Hiroyuki

    This paper proposes an active finger recognition method using a Bayesian filter in order to control a myoelectric hand. We previously proposed a finger joint angle estimation method based on measured surface electromyography (EMG) signals and a linear model. However, when two or more finger angles are estimated with that method, the estimated angle of the inactive finger is not accurate, owing to interference in the surface EMG signal. To solve this interference problem, we propose a method that recognizes the active finger from the amplitude spectrum of the surface EMG signal using a Bayesian filter. To confirm the effectiveness of the recognition method, we developed a myoelectric hand simulator that implements the proposed recognition algorithm and carried out real-time recognition experiments.
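
    A minimal sketch of a recursive Bayesian filter over a discrete set of "active finger" hypotheses is shown below; the transition matrix, the Gaussian observation model over EMG amplitude-spectrum features, and the feature values are illustrative assumptions rather than the authors' model.

      import numpy as np

      FINGERS = ["thumb", "index", "middle"]

      # assumed probability of the active finger staying the same vs. switching between frames
      TRANSITION = np.array([[0.90, 0.05, 0.05],
                             [0.05, 0.90, 0.05],
                             [0.05, 0.05, 0.90]])

      # assumed per-finger mean amplitude-spectrum feature and shared standard deviation
      MEANS = np.array([[0.8, 0.1], [0.1, 0.9], [0.5, 0.5]])
      STD = 0.2

      def likelihood(feature):
          """Gaussian likelihood of the observed feature under each finger hypothesis."""
          d2 = np.sum((MEANS - feature) ** 2, axis=1)
          return np.exp(-d2 / (2 * STD ** 2))

      def bayes_step(belief, feature):
          """One predict/update cycle of the discrete Bayesian filter."""
          predicted = TRANSITION.T @ belief             # prediction through the transition model
          posterior = predicted * likelihood(feature)   # measurement update
          return posterior / posterior.sum()

      belief = np.full(len(FINGERS), 1.0 / len(FINGERS))
      for feature in [np.array([0.75, 0.15]), np.array([0.70, 0.20]), np.array([0.15, 0.85])]:
          belief = bayes_step(belief, feature)
          print(FINGERS[int(np.argmax(belief))], np.round(belief, 3))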

  2. Handling Real-World Context Awareness, Uncertainty and Vagueness in Real-Time Human Activity Tracking and Recognition with a Fuzzy Ontology-Based Hybrid Method

    PubMed Central

    Díaz-Rodríguez, Natalia; Cadahía, Olmo León; Cuéllar, Manuel Pegalajar; Lilius, Johan; Calvo-Flores, Miguel Delgado

    2014-01-01

    Human activity recognition is a key task in ambient intelligence applications to achieve proper ambient assisted living. There has been remarkable progress in this domain, but some challenges still remain to obtain robust methods. Our goal in this work is to provide a system that allows the modeling and recognition of a set of complex activities in real life scenarios involving interaction with the environment. The proposed framework is a hybrid model that comprises two main modules: a low level sub-activity recognizer, based on data-driven methods, and a high-level activity recognizer, implemented with a fuzzy ontology to include the semantic interpretation of actions performed by users. The fuzzy ontology is fed by the sub-activities recognized by the low level data-driven component and provides fuzzy ontological reasoning to recognize both the activities and their influence in the environment with semantics. An additional benefit of the approach is the ability to handle vagueness and uncertainty in the knowledge-based module, which substantially outperforms the treatment of incomplete and/or imprecise data with respect to classic crisp ontologies. We validate these advantages with the public CAD-120 dataset (Cornell Activity Dataset), achieving an accuracy of 90.1% and 91.07% for low-level and high-level activities, respectively. This entails an improvement over fully data-driven or ontology-based approaches. PMID:25268914

  3. Frequency-Based Fingerprint Recognition

    NASA Astrophysics Data System (ADS)

    Aguilar, Gualberto; Sánchez, Gabriel; Toscano, Karina; Pérez, Héctor

    Fingerprint recognition is one of the most popular and successful methods used for identification. A fingerprint has unique characteristics called minutiae, which are points where a ridge ends, intersects, or branches off. In this chapter a fingerprint recognition method is proposed in which a combination of the Fast Fourier Transform (FFT) and Gabor filters is used for image enhancement. A novel recognition stage using local features is also proposed, together with a verification stage to be used when the system output contains more than one candidate person.
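
    A minimal illustration of FFT- and Gabor-based enhancement of a fingerprint image is sketched below with NumPy; the kernel size, orientation, frequency, and band-pass radii are illustrative assumptions and not the parameters of the proposed method.

      import numpy as np

      def gabor_kernel(size=21, theta=0.0, freq=0.12, sigma=4.0):
          """Real-valued Gabor kernel tuned to ridge orientation theta and spatial frequency freq."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)
          yr = -x * np.sin(theta) + y * np.cos(theta)
          return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

      def fft_bandpass(img, low=0.02, high=0.25):
          """Keep only the spatial-frequency band where ridge energy is expected."""
          spec = np.fft.fftshift(np.fft.fft2(img))
          fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))
          fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))
          radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
          spec[(radius < low) | (radius > high)] = 0
          return np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

      def convolve_same(img, kernel):
          """Naive 'same' convolution via FFT, adequate for a demonstration."""
          pad = np.zeros_like(img)
          kh, kw = kernel.shape
          pad[:kh, :kw] = kernel
          out = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))
          return np.roll(out, (-(kh // 2), -(kw // 2)), axis=(0, 1))

      img = np.random.rand(128, 128)       # placeholder for a grey-scale fingerprint image
      enhanced = convolve_same(fft_bandpass(img), gabor_kernel(theta=np.pi / 4))
      print(enhanced.shape)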

  4. Handwritten digits recognition based on immune network

    NASA Astrophysics Data System (ADS)

    Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe

    2011-11-01

    With the development of society, handwritten digit recognition has been widely applied in production and daily life, yet it remains a difficult task in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and their features extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm combines Jerne's immune network model for feature selection with the KNN method for classification, and its characteristic is a novel network with parallel computing and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms (KNN, ANN and SVM). The results show that the novel classification algorithm based on the immune network gives promising performance and stable behavior for handwritten digit recognition.

  5. Modal-Power-Based Haptic Motion Recognition

    NASA Astrophysics Data System (ADS)

    Kasahara, Yusuke; Shimono, Tomoyuki; Kuwahara, Hiroaki; Sato, Masataka; Ohnishi, Kouhei

    Motion recognition based on sensory information is important for providing assistance to humans using robots. Several studies have been carried out on motion recognition based on image information. However, human motions involving contact with an object cannot be evaluated precisely by image-based recognition, because force information is essential for describing contact motion. In this paper, modal-power-based haptic motion recognition is proposed; modal power reveals information on both position and force and is considered one of the defining features of human motion. A motion recognition algorithm based on linear discriminant analysis is proposed to distinguish between similar motions. Haptic information is extracted using a bilateral master-slave system, and the observed motion is decomposed in terms of primitive functions in a modal space. The experimental results show the effectiveness of the proposed method.

  6. Face recognition based tensor structure

    NASA Astrophysics Data System (ADS)

    Yang, De-qiang; Ye, Zhi-xia; Zhao, Yang; Liu, Li-mei

    2012-01-01

    Face recognition has broad applications, and it is a difficult problem since face images change with photographic conditions such as illumination, pose and camera angle. How to obtain invariant features for a face image is the key issue for a face recognition algorithm. In this paper, a novel tensor structure of the face image is proposed that represents the image features in eight directions for each pixel value. The invariant feature of the face image is then obtained from a gradient decomposition used to build the tensor structure. The singular value decomposition (SVD) and principal component analysis (PCA) of this tensor structure are then used for face recognition. The experimental results of this study show that many samples that are otherwise difficult to recognize can be recognized correctly, and the recognition rate is increased by 9%-11% in comparison with algorithms of the same type.

  7. Physical Human Activity Recognition Using Wearable Sensors.

    PubMed

    Attal, Ferhat; Mohammed, Samer; Dedabrishvili, Mariam; Chamroukhi, Faicel; Oukhellou, Latifa; Amirat, Yacine

    2015-12-11

    This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors' placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject.

  8. Physical Human Activity Recognition Using Wearable Sensors

    PubMed Central

    Attal, Ferhat; Mohammed, Samer; Dedabrishvili, Mariam; Chamroukhi, Faicel; Oukhellou, Latifa; Amirat, Yacine

    2015-01-01

    This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors’ placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject. PMID:26690450

  9. Physical Human Activity Recognition Using Wearable Sensors.

    PubMed

    Attal, Ferhat; Mohammed, Samer; Dedabrishvili, Mariam; Chamroukhi, Faicel; Oukhellou, Latifa; Amirat, Yacine

    2015-01-01

    This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors' placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject. PMID:26690450

  10. Implementation study of wearable sensors for activity recognition systems.

    PubMed

    Rezaie, Hamed; Ghassemian, Mona

    2015-08-01

    This Letter investigates and reports on a number of activity recognition methods for a wearable sensor system. The authors apply three methods for data transmission, namely 'stream-based', 'feature-based' and 'threshold-based' scenarios, to study accuracy against the energy efficiency of transmission and the processing power that affect the mote's battery lifetime. They also report on the impact of variation of the sampling frequency and data transmission rate on the energy consumption of motes for each method. This study leads us to propose a cross-layer optimisation of an activity recognition system for provisioning acceptable levels of accuracy and energy efficiency.

  11. Low Energy Physical Activity Recognition System on Smartphones

    PubMed Central

    Morillo, Luis Miguel Soria; Gonzalez-Abril, Luis; Ramirez, Juan Antonio Ortega; de la Concepcion, Miguel Angel Alvarez

    2015-01-01

    An innovative approach to physical activity recognition based on the use of discrete variables obtained from accelerometer sensors is presented. The system first performs a discretization process for each variable, which allows efficient recognition of activities performed by users using as little energy as possible. To this end, an innovative discretization and classification technique is presented based on the χ2 distribution. Furthermore, the entire recognition process is executed on the smartphone, which determines not only the activity performed, but also the frequency at which it is carried out. These techniques and the new classification system presented reduce energy consumption caused by the activity monitoring system. The energy saved increases smartphone usage time to more than 27 h without recharging while maintaining accuracy. PMID:25742171

  12. Teaching Sight Word Recognition to Preschoolers with Delays Using Activity-Based Intervention and Didactic Instruction: A Comparison Study

    ERIC Educational Resources Information Center

    Hong, Sung-Jin; Kemp, Coral

    2007-01-01

    An alternating treatments design was used to compare the effectiveness of activity-based intervention and didactic instruction to teach sight word reading to four young children with developmental delays attending an inclusive child care centre. Following the collection of baseline measures, the two interventions, counterbalanced for word lists…

  13. Ear recognition based on Gabor features and KFDA.

    PubMed

    Yuan, Li; Mu, Zhichun

    2014-01-01

    We propose an ear recognition system based on 2D ear images which includes three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach based on improved Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. Then Active Shape Model is applied to segment the ear part and normalize all the ear images to the same size. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. Kernel Fisher Discriminant Analysis (KFDA) is then applied for dimension reduction of the high-dimensional Gabor features. Finally distance based classifier is applied for ear recognition. Experimental results of ear recognition on two datasets (USTB and UND datasets) and the performance of the ear authentication system show the feasibility and effectiveness of the proposed approach.

  14. A Study of Web-Based Oral Activities Enhanced by Automatic Speech Recognition for EFL College Learning

    ERIC Educational Resources Information Center

    Chiu, Tsuo-Lin; Liou, Hsien-Chin; Yeh, Yuli

    2007-01-01

    Recently, a promising topic in computer-assisted language learning is the application of Automatic Speech Recognition (ASR) technology for assisting learners to engage in meaningful speech interactions. Simulated real-life conversation supported by the application of ASR has been suggested as helpful for speaking. In this study, a web-based…

  15. Fusion of smartphone motion sensors for physical activity recognition.

    PubMed

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2014-06-10

    For physical activity recognition, smartphone sensors, such as an accelerometer and a gyroscope, are being utilized in many research studies. So far, particularly, the accelerometer has been extensively studied. In a few recent studies, a combination of a gyroscope, a magnetometer (in a supporting role) and an accelerometer (in a lead role) has been used with the aim to improve the recognition performance. How and when are various motion sensors, which are available on a smartphone, best used for better recognition performance, either individually or in combination? This is yet to be explored. In order to investigate this question, in this paper, we explore how these various motion sensors behave in different situations in the activity recognition process. For this purpose, we designed a data collection experiment where ten participants performed seven different activities carrying smart phones at different positions. Based on the analysis of this data set, we show that these sensors, except the magnetometer, are each capable of taking the lead roles individually, depending on the type of activity being recognized, the body position, the used data features and the classification method employed (personalized or generalized). We also show that their combination only improves the overall recognition performance when their individual performances are not very high, so that there is room for performance improvement. We have made our data set and our data collection application publicly available, thereby making our experiments reproducible.

  16. Hand gesture recognition based on surface electromyography.

    PubMed

    Samadani, Ali-Akbar; Kulic, Dana

    2014-01-01

    Human hands are the most dexterous of human limbs and hand gestures play an important role in non-verbal communication. Underlying electromyograms associated with hand gestures provide a wealth of information based on which varying hand gestures can be recognized. This paper develops an inter-individual hand gesture recognition model based on Hidden Markov models that receives surface electromyography (sEMG) signals as inputs and predicts a corresponding hand gesture. The developed recognition model is tested with a dataset of 10 various hand gestures performed by 25 subjects in a leave-one-subject-out cross validation and an inter-individual recognition rate of 79% was achieved. The promising recognition rate demonstrates the efficacy of the proposed approach for discriminating between gesture-specific sEMG signals and could inform the design of sEMG-controlled prostheses and assistive devices. PMID:25570917

  17. Human Activity Recognition in AAL Environments Using Random Projections

    PubMed Central

    Damaševičius, Robertas; Vasiljevas, Mindaugas; Šalkevičius, Justas; Woźniak, Marcin

    2016-01-01

    Automatic human activity recognition systems aim to capture the state of the user and of the user's environment by exploiting heterogeneous sensors attached to the subject's body, permitting continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL) and for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose a method for activity recognition and subject identification based on random projections from a high-dimensional feature space to a low-dimensional projection space, where the classes are separated using the Jaccard distance between probability density functions of the projected data. Two HAR domain tasks are considered: activity identification and subject identification. Experimental results obtained with the proposed method on the Human Activity Dataset (HAD) are presented. PMID:27413392
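
    A minimal sketch of the pipeline the abstract describes (randomly project high-dimensional features to a low dimension, estimate a per-class density on the projection, and compare classes with a generalized Jaccard distance between the densities) is given below; the projection size, histogram binning, and data are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def random_projection(X, out_dim=2, seed=0):
          """Project (n, d) features to out_dim dimensions with a random Gaussian matrix."""
          rng = np.random.default_rng(seed)
          R = rng.normal(0, 1.0 / np.sqrt(out_dim), (X.shape[1], out_dim))
          return X @ R

      def density(Xp, edges):
          """Normalised histogram of 1-D projected values as a crude density estimate."""
          h, _ = np.histogram(Xp, bins=edges)
          return h / h.sum()

      def jaccard_distance(p, q):
          """Generalised Jaccard distance between two discrete densities."""
          return 1.0 - np.minimum(p, q).sum() / np.maximum(p, q).sum()

      rng = np.random.default_rng(1)
      walk = rng.normal(0.0, 1.0, (300, 50))    # illustrative high-dimensional features
      sit = rng.normal(1.5, 1.0, (300, 50))
      edges = np.linspace(-10, 10, 41)
      pw = density(random_projection(walk, 1), edges)
      ps = density(random_projection(sit, 1), edges)
      print(jaccard_distance(pw, ps))           # larger distance -> better class separation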

  18. Human Activity Recognition in AAL Environments Using Random Projections.

    PubMed

    Damaševičius, Robertas; Vasiljevas, Mindaugas; Šalkevičius, Justas; Woźniak, Marcin

    2016-01-01

    Automatic human activity recognition systems aim to capture the state of the user and of the user's environment by exploiting heterogeneous sensors attached to the subject's body, permitting continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL) and for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose a method for activity recognition and subject identification based on random projections from a high-dimensional feature space to a low-dimensional projection space, where the classes are separated using the Jaccard distance between probability density functions of the projected data. Two HAR domain tasks are considered: activity identification and subject identification. Experimental results obtained with the proposed method on the Human Activity Dataset (HAD) are presented. PMID:27413392

  19. Recognition of Short Time-Paired Activities

    NASA Astrophysics Data System (ADS)

    Chaminda, Hapugahage Thilak; Klyuev, Vitaly; Naruse, Keitaro; Osano, Minetada

    We undertake numerous activities in our daily life, and for some of them we forget to complete the action as originally intended. Significant aspects of most of these actions are the simultaneous pairing of both hands and their short duration. In this work an attempt is made to recognize these kinds of Paired Activities (PAs), which are easy to forget, and to provide a method for reminding users about uncompleted PAs. To represent PAs, a study was done on the opening and closing of various bottles. A model defining PAs that simulates the paired behavior of both hands is proposed, called the "Paired Activity Model" (PAM). To recognize PAs using PAM, the Paired Activity Recognition Algorithm (PARA) was implemented. Paired motions were captured with accelerometers worn by subjects on the wrists of both hands, and the individual and correlative behavior of both hands was used to recognize the exact PA among other activities. An Artificial Neural Network (ANN) algorithm was used for data categorization in PARA; the ANN significantly outperformed the support vector machine algorithm in real-time evaluations. In the user-independent case, PARA achieved recognition rates of 96% for target PAs alone and 91% for target PAs undertaken amidst unrelated activities.

  20. Manifold based methods in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Xie, Kun

    2013-07-01

    This paper describes a novel method for facial expression recognition based on non-linear manifold techniques. Graph-based algorithms are designed to treat structure in data and to regularize accordingly, a goal shared by several other algorithms, from the linear method of principal component analysis (PCA) to modern variants such as Laplacian eigenmaps. In this paper we focus on manifold learning for dimensionality reduction and clustering using Laplacian eigenmaps for facial expression recognition. We evaluate the algorithm using all the pixels and selected features, respectively, and compare the performance of the proposed non-linear manifold method with a previous linear manifold approach; the non-linear method produces a higher recognition rate than facial expression representations based on linear methods.
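
    A minimal sketch of Laplacian eigenmaps for dimensionality reduction, as referenced above, is shown below; the neighbourhood size, heat-kernel width, and the random data standing in for facial-expression features are illustrative assumptions.

      import numpy as np

      def laplacian_eigenmaps(X, n_neighbors=10, n_components=2, sigma=1.0):
          """Embed (n, d) data into n_components dimensions via the graph Laplacian."""
          n = X.shape[0]
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          W = np.zeros((n, n))
          # symmetric k-nearest-neighbour graph with heat-kernel weights
          for i in range(n):
              nn = np.argsort(d2[i])[1:n_neighbors + 1]
              W[i, nn] = np.exp(-d2[i, nn] / (2 * sigma ** 2))
          W = np.maximum(W, W.T)
          D = np.diag(W.sum(axis=1))
          L = D - W
          # generalised eigenproblem L v = lambda D v, solved via D^{-1/2} L D^{-1/2}
          d_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
          vals, vecs = np.linalg.eigh(d_inv_sqrt @ L @ d_inv_sqrt)
          # drop the trivial constant eigenvector and map back
          return (d_inv_sqrt @ vecs)[:, 1:n_components + 1]

      X = np.random.rand(100, 64)           # placeholder for facial-expression feature vectors
      print(laplacian_eigenmaps(X).shape)   # (100, 2)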

  1. Supramolecular polymers constructed by crown ether-based molecular recognition.

    PubMed

    Zheng, Bo; Wang, Feng; Dong, Shengyi; Huang, Feihe

    2012-03-01

    Supramolecular polymers, polymeric systems beyond the molecule, have attracted more and more attention from scientists due to their applications in various fields, including stimuli-responsive materials, healable materials, and drug delivery. Due to their good selectivity and convenient enviro-responsiveness, crown ether-based molecular recognition motifs have been actively employed to fabricate supramolecular polymers with interesting properties and novel applications in recent years. In this tutorial review, we classify supramolecular polymers based on their differences in topology and cover recent advances in the marriage between crown ether-based molecular recognition and polymer science.

  2. Robust Indoor Human Activity Recognition Using Wireless Signals

    PubMed Central

    Wang, Yi; Jiang, Xinli; Cao, Rongyu; Wang, Xiyang

    2015-01-01

    Wireless signal-based activity detection and recognition technology may be complementary to existing vision-based methods, especially under occlusion, viewpoint change, complex backgrounds, changing lighting conditions, and so on. This paper explores the properties of the channel state information (CSI) of Wi-Fi signals and presents a robust indoor daily human activity recognition framework requiring only one pair of transmission points (TP) and access points (AP). First, some indoor human actions are selected as primitive actions forming a training set. Then, an online filtering method is designed to make the actions' CSI curves smooth while retaining enough pattern information. Each primitive action pattern can be segmented from the outliers of its multi-input multi-output (MIMO) signals by a proposed segmentation method. Lastly, in online activity recognition, by selecting proper features and applying Support Vector Machine (SVM)-based multi-class classification, activities composed of primitive actions can be recognized regardless of location, orientation, and speed. PMID:26184231

  3. Learning person-person interaction in collective activity recognition.

    PubMed

    Chang, Xiaobin; Zheng, Wei-Shi; Zhang, Jianguo

    2015-06-01

    Collective activity is a collection of atomic activities (individual persons' activities) and can hardly be distinguished from an atomic activity in isolation. The interactions among people are important cues for recognizing collective activity. In this paper, we concentrate on modeling person-person interactions for collective activity recognition. Rather than relying on hand-crafted descriptions of person-person interactions, we propose a novel learning-based approach that is capable of computing class-specific person-person interaction patterns. In particular, we model each class of collective activity by an interaction matrix, which is designed to measure the connection between any pair of atomic activities in a collective activity instance. We then formulate an interaction response (IR) model by assembling all these measurements and making the IRs class-specific and distinct from each other. A multitask IR is further proposed to jointly learn different person-person interaction patterns simultaneously, in order to learn the relations between different person-person interactions while keeping a more distinct activity-specific factor for each interaction. Our model is able to exploit a discriminative low-rank representation of person-person interactions. Experimental results on two challenging data sets demonstrate that our proposed model is comparable with state-of-the-art models and show that learning person-person interactions plays a critical role in collective activity recognition. PMID:25769156

  4. Average Gait Differential Image Based Human Recognition

    PubMed Central

    Chen, Jinyan; Liu, Jiansheng

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the differences between silhouettes of adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition. PMID:24895648
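    A minimal sketch of the AGDI idea described above, assuming a pre-extracted array of binary silhouettes for one gait sequence:

        import numpy as np

        def average_gait_differential_image(silhouettes):
            """silhouettes: (n_frames, H, W) binary silhouettes of one walking sequence."""
            diffs = np.abs(np.diff(silhouettes.astype(np.float64), axis=0))
            return diffs.mean(axis=0)      # averaged frame-to-frame differences

        # The resulting AGDI images would then be passed to 2DPCA for feature extraction.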

  5. Cross-person activity recognition using reduced kernel extreme learning machine.

    PubMed

    Deng, Wan-Yu; Zheng, Qing-Hua; Wang, Zhong-Min

    2014-05-01

    Activity recognition based on mobile embedded accelerometer is very important for developing human-centric pervasive applications such as healthcare, personalized recommendation and so on. However, the distribution of accelerometer data is heavily affected by varying users. The performance will degrade when the model trained on one person is used to others. To solve this problem, we propose a fast and accurate cross-person activity recognition model, known as TransRKELM (Transfer learning Reduced Kernel Extreme Learning Machine) which uses RKELM (Reduced Kernel Extreme Learning Machine) to realize initial activity recognition model. In the online phase OS-RKELM (Online Sequential Reduced Kernel Extreme Learning Machine) is applied to update the initial model and adapt the recognition model to new device users based on recognition results with high confidence level efficiently. Experimental results show that, the proposed model can adapt the classifier to new device users quickly and obtain good recognition performance.

  6. Cross-person activity recognition using reduced kernel extreme learning machine.

    PubMed

    Deng, Wan-Yu; Zheng, Qing-Hua; Wang, Zhong-Min

    2014-05-01

    Activity recognition based on mobile embedded accelerometer is very important for developing human-centric pervasive applications such as healthcare, personalized recommendation and so on. However, the distribution of accelerometer data is heavily affected by varying users. The performance will degrade when the model trained on one person is used to others. To solve this problem, we propose a fast and accurate cross-person activity recognition model, known as TransRKELM (Transfer learning Reduced Kernel Extreme Learning Machine) which uses RKELM (Reduced Kernel Extreme Learning Machine) to realize initial activity recognition model. In the online phase OS-RKELM (Online Sequential Reduced Kernel Extreme Learning Machine) is applied to update the initial model and adapt the recognition model to new device users based on recognition results with high confidence level efficiently. Experimental results show that, the proposed model can adapt the classifier to new device users quickly and obtain good recognition performance. PMID:24513850

  7. Optical correlation recognition based on LCOS

    NASA Astrophysics Data System (ADS)

    Tang, Mingchuan; Wu, Jianhong

    2013-08-01

    The Vander-Lugt correlator [1] plays an important role in optical pattern recognition due to its accurate positioning and high signal-to-noise ratio. An ideal Vander-Lugt correlator should output a strong and sharp correlation peak for the true target. Among existing spatial light modulators [2], Liquid Crystal On Silicon (LCOS) has become the most competitive candidate for the matched filter owing to its continuous phase modulation. Because distortions of the target to be identified, including rotations, scaling changes, and perspective changes, can severely degrade correlation recognition results, we present a modified Vander-Lugt correlator based on LCOS that applies an iterative algorithm to the design of the filter, so that the correlator remains invariant to these distortions while maintaining good performance. Numerical simulation results demonstrate that the filter obtains similar recognition results for all training images, and experiments show that the modified correlator achieves a 180° rotation tolerance, significantly improving the recognition efficiency of the correlator.

  8. Simplified Pattern Recognition Based On Multiaperture Optics

    NASA Astrophysics Data System (ADS)

    Schneider, Richard T.; Lin, Shih-Chao

    1987-05-01

    Multiaperture optics systems are similar in design to the concepts applying to the insect eye. Digitizing at the detector level is inherent in these systems, and the fact that each eyelet forms one pixel of the overall image lends itself to optical preprocessing. Therefore, a simplified pattern recognition scheme can be used in connection with multiaperture optics systems. The pattern recognition system used is based on the conjecture that all shapes encountered can be dissected into a set of rectangles. This is accomplished by creating a binary image and comparing each row of numbers, starting at the top of the frame, with the next row below. A set of rules is established which decides whether the binary ones of the next row are to be incorporated into the present rectangle or should start a new rectangle. The number and aspect ratios of the rectangles formed constitute a recognition code; these codes are kept and updated in a library. Since the same shape may give rise to different recognition codes depending on the attitude of the shape with respect to the detector grid, all shapes are rotated and normalized prior to dissection. The rule is that the pattern is turned to maximize the number of straight edges which line up with the detector grid. The mathematical mechanism for rotation of the shape is described. Assuming a priori knowledge of the object's size exists, the normalization procedure can be used for distance determination. A description of the hardware for acquisition of the image is provided.

  9. Gait recognition based on Kinect sensor

    NASA Astrophysics Data System (ADS)

    Ahmed, Mohammed; Al-Jawad, Naseer; Sabir, Azhin T.

    2014-05-01

    This paper presents gait recognition based on the human skeleton and the trajectories of joint points captured by the Microsoft Kinect sensor. Two sets of dynamic features are extracted during one gait cycle: Horizontal Distance Features (HDF), based on the distances between the ankles, knees, hands, and shoulders, and Vertical Distance Features (VDF), which provide significant gait information from the heights of the hands, shoulders, and ankles above the ground during one gait cycle. Extracting these two feature sets is difficult and inaccurate with a traditional camera, so the Kinect sensor is used in this paper to obtain precise measurements. The two feature sets are tested separately and then fused into one feature vector. A database was created in house to perform our experiments, consisting of sixteen males and four females; for each individual, 10 videos were recorded, each containing on average two gait cycles. The Kinect sensor is used to extract all the skeleton points, which are used to build the feature vectors described above. K-nearest neighbor with the city-block distance function is used as the classification method. The experimental results show that the proposed method achieves a 56% recognition rate using HDF, while VDF provides 83.5% accuracy; fusing HDF and VDF into one feature vector increases the recognition rate to 92%, a significant result compared to existing methods.
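    An illustrative sketch of the classification stage: distance-based gait-cycle features classified with 1-NN under the city-block (L1) metric. The joint dictionaries, the exact feature list, and the variables cycles and subject_ids are assumptions.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def gait_cycle_features(joints):
            """joints: dict of (n_frames, 3) trajectories, e.g. joints['ankle_l'], for one cycle."""
            hdf = np.linalg.norm(joints['ankle_l'] - joints['ankle_r'], axis=1)  # horizontal distance
            vdf = joints['hand_l'][:, 1]                                         # joint height (y axis)
            return np.array([hdf.mean(), hdf.max(), vdf.mean(), vdf.max()])

        X = np.array([gait_cycle_features(j) for j in cycles])    # one row per gait cycle
        clf = KNeighborsClassifier(n_neighbors=1, metric='cityblock').fit(X, subject_ids)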

  10. Rate-invariant recognition of humans and their activities.

    PubMed

    Veeraraghavan, Ashok; Srivastava, Anuj; Roy-Chowdhury, Amit K; Chellappa, Rama

    2009-06-01

    Pattern recognition in video is a challenging task because of the multitude of spatio-temporal variations that occur in different videos capturing the exact same event. While traditional pattern-theoretic approaches account for the spatial changes that occur due to lighting and pose, very little has been done to address the effect of temporal rate changes in the executions of an event. In this paper, we provide a systematic model-based approach to learn the nature of such temporal variations (time warps) while simultaneously allowing for the spatial variations in the descriptors. We illustrate our approach for the problem of action recognition and provide experimental justification for the importance of accounting for rate variations in action recognition. The model is composed of a nominal activity trajectory and a function space capturing the probability distribution of activity-specific time warping transformations. We use the square-root parameterization of time warps to derive geodesics, distance measures, and probability distributions on the space of time warping functions. We then design a Bayesian algorithm which treats the execution rate function as a nuisance variable and integrates it out using Monte Carlo sampling, to generate estimates of class posteriors. This approach allows us to learn the space of time warps for each activity while simultaneously capturing other intra- and interclass variations. Next, we discuss a special case of this approach which assumes a uniform distribution on the space of time warping functions and show how computationally efficient inference algorithms may be derived for this special case. We discuss the relative advantages and disadvantages of both approaches and show their efficacy using experiments on gait-based person identification and activity recognition. PMID:19398409

  11. Laptop Computer - Based Facial Recognition System Assessment

    SciTech Connect

    R. A. Cain; G. B. Singleton

    2001-03-01

    The objective of this project was to assess the performance of the leading commercial-off-the-shelf (COTS) facial recognition software package when used as a laptop application. We performed the assessment to determine the system's usefulness for enrolling facial images in a database from remote locations and conducting real-time searches against a database of previously enrolled images. The assessment involved creating a database of 40 images and conducting 2 series of tests to determine the product's ability to recognize and match subject faces under varying conditions. This report describes the test results and includes a description of the factors affecting the results. After an extensive market survey and a review of the Facial Recognition Vendor Test 2000 (FRVT 2000), we selected Visionics' FaceIt® software package for evaluation. The FRVT 2000 was co-sponsored by the US Department of Defense (DOD) Counterdrug Technology Development Program Office, the National Institute of Justice, and the Defense Advanced Research Projects Agency (DARPA). Administered in May-June 2000, the FRVT 2000 assessed the capabilities of facial recognition systems that were then available for purchase on the US market. Our selection of this Visionics product does not indicate that it is the "best" facial recognition software package for all uses; it was the most appropriate package for this specific application and its requirements. In this assessment, the system configuration was evaluated for effectiveness in identifying individuals by searching facial images captured from video displays against those stored in a facial image database. An additional criterion was that the system be capable of operating discretely. For this application, an operational facial recognition system would consist of one central computer hosting the master image database with multiple standalone systems configured with duplicates of the master operating in

  12. Multimodal Physical Activity Recognition by Fusing Temporal and Cepstral Information

    PubMed Central

    Li, Ming; Rozgić, Viktor; Thatte, Gautam; Lee, Sangwon; Emken, Adar; Annavaram, Murali; Mitra, Urbashi; Spruijt-Metz, Donna; Narayanan, Shrikanth

    2015-01-01

    A physical activity (PA) recognition algorithm for a wearable wireless sensor network using both ambulatory electrocardiogram (ECG) and accelerometer signals is proposed. First, in the time domain, the cardiac activity mean and the motion artifact noise of the ECG signal are modeled by a Hermite polynomial expansion and principal component analysis, respectively. A set of time domain accelerometer features is also extracted. A support vector machine (SVM) is employed for supervised classification using these time domain features. Second, motivated by their potential for handling convolutional noise, cepstral features extracted from ECG and accelerometer signals based on a frame level analysis are modeled using Gaussian mixture models (GMMs). Third, to reduce the dimension of the tri-axial accelerometer cepstral features which are concatenated and fused at the feature level, heteroscedastic linear discriminant analysis is performed. Finally, to improve the overall recognition performance, fusion of the multi-modal (ECG and accelerometer) and multidomain (time domain SVM and cepstral domain GMM) subsystems at the score level is performed. The classification accuracy ranges from 79.3% to 97.3% for various testing scenarios and outperforms the state-of-the-art single accelerometer based PA recognition system by over 24% relative error reduction on our nine-category PA database. PMID:20699202
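    A hedged sketch of the final score-level fusion step only: a weighted sum of normalized per-class scores from the time-domain SVM subsystem and the cepstral-domain GMM subsystem (both score matrices are assumed to be computed elsewhere).

        import numpy as np

        def fuse_scores(svm_scores, gmm_loglik, w=0.5):
            """svm_scores, gmm_loglik: (n_windows, n_classes) per-class scores per subsystem."""
            norm = lambda s: (s - s.mean(axis=1, keepdims=True)) / (s.std(axis=1, keepdims=True) + 1e-9)
            fused = w * norm(svm_scores) + (1.0 - w) * norm(gmm_loglik)
            return fused.argmax(axis=1)    # predicted activity class per window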

  13. Photoswitchable gel assembly based on molecular recognition.

    PubMed

    Yamaguchi, Hiroyasu; Kobayashi, Yuichiro; Kobayashi, Ryosuke; Takashima, Yoshinori; Hashidzume, Akihito; Harada, Akira

    2012-01-03

    The formation of effective and precise linkages in bottom-up or top-down processes is important for the development of self-assembled materials. Self-assembly through molecular recognition events is a powerful tool for producing functionalized materials. Photoresponsive molecular recognition systems can permit the creation of photoregulated self-assembled macroscopic objects. Here we demonstrate that macroscopic gel assembly can be highly regulated through photoisomerization of an azobenzene moiety that interacts differently with two host molecules. A photoregulated gel assembly system is developed using polyacrylamide-based hydrogels functionalized with azobenzene (guest) or cyclodextrin (host) moieties. Reversible adhesion and dissociation of the host gel from the guest gel may be controlled by photoirradiation. The differential affinities of α-cyclodextrin or β-cyclodextrin for the trans-azobenzene and cis-azobenzene are employed in the construction of a photoswitchable gel assembly system.

  14. Photoswitchable gel assembly based on molecular recognition

    PubMed Central

    Yamaguchi, Hiroyasu; Kobayashi, Yuichiro; Kobayashi, Ryosuke; Takashima, Yoshinori; Hashidzume, Akihito; Harada, Akira

    2012-01-01

    The formation of effective and precise linkages in bottom-up or top-down processes is important for the development of self-assembled materials. Self-assembly through molecular recognition events is a powerful tool for producing functionalized materials. Photoresponsive molecular recognition systems can permit the creation of photoregulated self-assembled macroscopic objects. Here we demonstrate that macroscopic gel assembly can be highly regulated through photoisomerization of an azobenzene moiety that interacts differently with two host molecules. A photoregulated gel assembly system is developed using polyacrylamide-based hydrogels functionalized with azobenzene (guest) or cyclodextrin (host) moieties. Reversible adhesion and dissociation of the host gel from the guest gel may be controlled by photoirradiation. The differential affinities of α-cyclodextrin or β-cyclodextrin for the trans-azobenzene and cis-azobenzene are employed in the construction of a photoswitchable gel assembly system. PMID:22215078

  15. Photoswitchable gel assembly based on molecular recognition.

    PubMed

    Yamaguchi, Hiroyasu; Kobayashi, Yuichiro; Kobayashi, Ryosuke; Takashima, Yoshinori; Hashidzume, Akihito; Harada, Akira

    2012-01-01

    The formation of effective and precise linkages in bottom-up or top-down processes is important for the development of self-assembled materials. Self-assembly through molecular recognition events is a powerful tool for producing functionalized materials. Photoresponsive molecular recognition systems can permit the creation of photoregulated self-assembled macroscopic objects. Here we demonstrate that macroscopic gel assembly can be highly regulated through photoisomerization of an azobenzene moiety that interacts differently with two host molecules. A photoregulated gel assembly system is developed using polyacrylamide-based hydrogels functionalized with azobenzene (guest) or cyclodextrin (host) moieties. Reversible adhesion and dissociation of the host gel from the guest gel may be controlled by photoirradiation. The differential affinities of α-cyclodextrin or β-cyclodextrin for the trans-azobenzene and cis-azobenzene are employed in the construction of a photoswitchable gel assembly system. PMID:22215078

  16. Wavelet-based multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Dian-Ting; Zhou, Xiao-Dan; Wang, Cheng-Wen

    2008-09-01

    This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies the combination of Gabor and the Fisherfaces method to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to variations in expression and illumination. The classification performance is improved by combining the multispectral information coming from the subbands that individually attain a low equal error rate. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms previous multispectral image fusion methods as well as monospectral methods.

  17. Eye movement analysis for activity recognition using electrooculography.

    PubMed

    Bulling, Andreas; Ward, Jamie A; Gellersen, Hans; Tröster, Gerhard

    2011-04-01

    In this work, we investigate eye movement analysis as a new sensing modality for activity recognition. Eye movement data were recorded using an electrooculography (EOG) system. We first describe and evaluate algorithms for detecting three eye movement characteristics from EOG signals-saccades, fixations, and blinks-and propose a method for assessing repetitive patterns of eye movements. We then devise 90 different features based on these characteristics and select a subset of them using minimum redundancy maximum relevance (mRMR) feature selection. We validate the method using an eight participant study in an office environment using an example set of five activity classes: copying a text, reading a printed paper, taking handwritten notes, watching a video, and browsing the Web. We also include periods with no specific activity (the NULL class). Using a support vector machine (SVM) classifier and person-independent (leave-one-person-out) training, we obtain an average precision of 76.1 percent and recall of 70.5 percent over all classes and participants. The work demonstrates the promise of eye-based activity recognition (EAR) and opens up discussion on the wider applicability of EAR to other activities that are difficult, or even impossible, to detect using common sensing modalities.
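    A sketch of the person-independent evaluation protocol, using mutual-information ranking as a crude stand-in for mRMR; the feature matrix X (windows x 90 features), activity labels y, and per-window person identifiers groups are assumed.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.svm import SVC

        relevance = mutual_info_classif(X, y)
        top = np.argsort(relevance)[::-1][:30]           # keep the 30 most relevant features
        scores = cross_val_score(SVC(kernel='linear'), X[:, top], y,
                                 cv=LeaveOneGroupOut(), groups=groups)
        print(scores.mean())                             # leave-one-person-out accuracy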

  18. Spectral face recognition using orthogonal subspace bases

    NASA Astrophysics Data System (ADS)

    Wimberly, Andrew; Robila, Stefan A.; Peplau, Tansy

    2010-04-01

    We present an efficient method for facial recognition using hyperspectral imaging and orthogonal subspaces. Projecting the data into orthogonal subspaces has the advantage of compactness and reduction of redundancy. We focus on two approaches: Principal Component Analysis and Orthogonal Subspace Projection. Our work is separated in three stages. First, we designed an experimental setup that allowed us to create a hyperspectral image database of 17 subjects under different facial expressions and viewing angles. Second, we investigated approaches to employ spectral information for the generation of fused grayscale images. Third, we designed and tested a recognition system based on the methods described above. The experimental results show that spectral fusion leads to improvement of recognition accuracy when compared to regular imaging. The work expands on previous band extraction research and has the distinct advantage of being one of the first that combines spatial information (i.e. face characteristics) with spectral information. In addition, the techniques are general enough to accommodate differences in skin spectra.

  19. Feature quality-based multimodal unconstrained eye recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Zhi; Du, Eliza Y.; Lin, Yong; Thomas, N. Luke; Belcher, Craig; Delp, Edward J.

    2013-05-01

    Iris recognition has been shown to be among the most accurate biometrics when using high-resolution near-infrared images. However, it does not work well under visible-wavelength illumination. Sclera recognition, by contrast, has been shown to achieve reasonable recognition accuracy under visible wavelengths, and combining iris and sclera recognition can achieve better accuracy. However, image quality can significantly affect recognition accuracy, and in unconstrained situations the acquired eye images may not be frontally facing. In this research, we propose a feature quality-based multimodal unconstrained eye recognition method that combines the respective strengths of iris recognition and sclera recognition for human identification and can work with both frontal and off-angle eye images. The research results show that the proposed method is very promising.

  20. Remote Meta-C–H Activation Using a Pyridine-Based Template: Achieving Site-Selectivity via the Recognition of Distance and Geometry

    PubMed Central

    2015-01-01

    The pyridyl group has been extensively employed to direct transition-metal-catalyzed C–H activation reactions in the past half-century. The typical cyclic transition states involved in these cyclometalation processes have only enabled the activation of ortho-C–H bonds. Here, we report that pyridine is adapted to direct meta-C–H activation of benzyl and phenyl ethyl alcohols through engineering the distance and geometry of a directing template. This template takes advantage of a stronger σ-coordinating pyridine to recruit Pd catalysts to the desired site for functionalization. The U-shaped structure accommodates the otherwise highly strained cyclophane-like transition state. This development illustrates the potential of achieving site selectivity in C–H activation via the recognition of distal and geometric relationship between existing functional groups and multiple C–H bonds in organic molecules. PMID:27162997

  1. Novel biometrics based on nose pore recognition

    NASA Astrophysics Data System (ADS)

    Song, Shangling; Ohnuma, Kazuhiko; Liu, Zhi; Mei, Liangmo; Kawada, Akira; Monma, Tomoyuki

    2009-05-01

    We present a new member of the biometrics family, i.e., nose pores, which uses particularly interesting properties of nose pores as a basis for noninvasive biometric assessment. The pore distribution on the nose is stable and easily inspected. More importantly, nose pore distribution features are distinguishable between different persons, so these features can be used for personal identification. However, little work has been done on nose pores as a biometric identifier. We have developed an end-to-end recognition system based on nose pore features and made use of a database of nose pore images obtained over a long period to examine the performance of nose pores as a biometric identifier. This research showed that the nose pore is a promising candidate for biometric identification and deserves further research. The experimental results based on the unique nose pore database demonstrated that nose pores can give an 88.07% correct recognition rate for biometric identification, which shows this biometric identifier's feasibility and effectiveness.

  2. Passive and active recognition of one's own face.

    PubMed

    Sugiura, M; Kawashima, R; Nakamura, K; Okada, K; Kato, T; Nakamura, A; Hatano, K; Itoh, K; Kojima, S; Fukuda, H

    2000-01-01

    Facial identity recognition has been studied mainly with explicit discrimination requirement and faces of social figures in previous human brain imaging studies. We performed a PET activation study with normal volunteers in facial identity recognition tasks using the subject's own face as visual stimulus. Three tasks were designed so that the activation of the visual representation of the face and the effect of sustained attention to the representation could be separately examined: a control-face recognition task (C), a passive own-face recognition task (no explicit discrimination was required) (P), and an active own-face recognition task (explicit discrimination was required) (A). Increased skin conductance responses during recognition of own face were seen in both task P and task A, suggesting the occurrence of psychophysiological changes during recognition of one's own face. The left fusiform gyrus, the right supramarginal gyrus, the left putamen, and the right hypothalamus were activated in tasks P and A compared with task C. The left fusiform gyrus and the right supramarginal gyrus are considered to be involved in the representation of one's own face. The activation in the right supramarginal gyrus may be associated with the representation of one's own face as a part of one's own body. The prefrontal cortices, the right anterior cingulate, the right presupplementary motor area, and the left insula were specifically activated during task A compared with tasks C and P, indicating that these regions may be involved in the sustained attention to the representation of one's own face. PMID:10686115

  3. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method combines two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen feature-based method is known as adaptive principal-component analysis (APCA); the chosen color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One result of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  4. Accelerometer's position independent physical activity recognition system for long-term activity monitoring in the elderly.

    PubMed

    Khan, Adil Mehmood; Lee, Young-Koo; Lee, Sungyoung; Kim, Tae-Seong

    2010-12-01

    Mobility is a good indicator of health status and thus objective mobility data could be used to assess the health status of elderly patients. Accelerometry has emerged as an effective means for long-term physical activity monitoring in the elderly. However, the output of an accelerometer varies at different positions on a subject's body, even for the same activity, resulting in high within-class variance. Existing accelerometer-based activity recognition systems thus require firm attachment of the sensor to a subject's body. This requirement makes them impractical for long-term activity monitoring during unsupervised free-living, as it forces subjects into a fixed life pattern and impedes their daily activities. Therefore, we introduce a novel single-triaxial-accelerometer-based activity recognition system that reduces the high within-class variance significantly and allows subjects to carry the sensor freely in any pocket without its firm attachment. We validated our system using seven activities: resting (lying/sitting/standing), walking, walking-upstairs, walking-downstairs, running, cycling, and vacuuming, recorded from five positions: chest pocket, front left trousers pocket, front right trousers pocket, rear trousers pocket, and inner jacket pocket. Its simplicity, ability to perform activities unimpeded, and an average recognition accuracy of 94% make our system a practical solution for continuous long-term activity monitoring in the elderly.
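    A hedged sketch of one common way to reduce position/orientation sensitivity, computing features from the acceleration magnitude rather than the raw axes; this is a stand-in illustrating the idea, not the paper's actual feature set.

        import numpy as np

        def magnitude_features(window):
            """window: (n_samples, 3) tri-axial accelerometer data from any pocket."""
            mag = np.linalg.norm(window, axis=1)         # |a| is independent of sensor orientation
            return np.array([mag.mean(), mag.std(), mag.min(), mag.max(),
                             np.abs(np.fft.rfft(mag))[1:6].sum()])   # low-frequency energy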

  5. Sum Product Networks for Activity Recognition.

    PubMed

    Amer, Mohamed R; Todorovic, Sinisa

    2016-04-01

    This paper addresses detection and localization of human activities in videos. We focus on activities that may have variable spatiotemporal arrangements of parts, and numbers of actors. Such activities are represented by a sum-product network (SPN). A product node in SPN represents a particular arrangement of parts, and a sum node represents alternative arrangements. The sums and products are hierarchically organized, and grounded onto space-time windows covering the video. The windows provide evidence about the activity classes based on the Counting Grid (CG) model of visual words. This evidence is propagated bottom-up and top-down to parse the SPN graph for the explanation of the video. The node connectivity and model parameters of SPN and CG are jointly learned under two settings, weakly supervised, and supervised. For evaluation, we use our new Volleyball dataset, along with the benchmark datasets VIRAT, UT-Interactions, KTH, and TRECVID MED 2011. Our video classification and activity localization are superior to those of the state of the art on these datasets.

  6. A survey of online activity recognition using mobile phones.

    PubMed

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2015-01-19

    Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research.

  7. A Survey of Online Activity Recognition Using Mobile Phones

    PubMed Central

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J.M.

    2015-01-01

    Physical activity recognition using embedded sensors has enabled many context-aware applications in different areas, such as healthcare. Initially, one or more dedicated wearable sensors were used for such applications. However, recently, many researchers started using mobile phones for this purpose, since these ubiquitous devices are equipped with various sensors, ranging from accelerometers to magnetic field sensors. In most of the current studies, sensor data collected for activity recognition are analyzed offline using machine learning tools. However, there is now a trend towards implementing activity recognition systems on these devices in an online manner, since modern mobile phones have become more powerful in terms of available resources, such as CPU, memory and battery. The research on offline activity recognition has been reviewed in several earlier studies in detail. However, work done on online activity recognition is still in its infancy and is yet to be reviewed. In this paper, we review the studies done so far that implement activity recognition systems on mobile phones and use only their on-board sensors. We discuss various aspects of these studies. Moreover, we discuss their limitations and present various recommendations for future research. PMID:25608213

  8. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology showing that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region while reducing the feature vector dimension.
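    An illustrative sketch of the feature-extraction step: LBP computed on a gradient image of a selected face region and histogrammed into a feature vector (the MI-based region selection itself is omitted; the region array is assumed).

        import numpy as np
        from skimage.feature import local_binary_pattern
        from skimage.filters import sobel

        def lbp_gradient_histogram(region, P=8, R=1):
            grad = sobel(region.astype(float))                    # gradient image
            codes = local_binary_pattern(grad, P, R, method='uniform')
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist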

  9. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale-invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes (happiness, anger, sadness, disgust, surprise, fear, and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol; the seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if good localization of facial points and a good partitioning strategy are followed.

  10. Tracking and activity recognition through consensus in distributed camera networks.

    PubMed

    Song, Bi; Kamal, Ahmed T; Soto, Cristian; Ding, Chong; Farrell, Jay A; Roy-Chowdhury, Amit K

    2010-10-01

    Camera networks are being deployed for various applications like security and surveillance, disaster response and environmental modeling. However, there is little automated processing of the data. Moreover, most methods for multicamera analysis are centralized schemes that require the data to be present at a central server. In many applications, this is prohibitively expensive, both technically and economically. In this paper, we investigate distributed scene analysis algorithms by leveraging upon concepts of consensus that have been studied in the context of multiagent systems, but have had little applications in video analysis. Each camera estimates certain parameters based upon its own sensed data which is then shared locally with the neighboring cameras in an iterative fashion, and a final estimate is arrived at in the network using consensus algorithms. We specifically focus on two basic problems-tracking and activity recognition. For multitarget tracking in a distributed camera network, we show how the Kalman-Consensus algorithm can be adapted to take into account the directional nature of video sensors and the network topology. For the activity recognition problem, we derive a probabilistic consensus scheme that combines the similarity scores of neighboring cameras to come up with a probability for each action at the network level. Thorough experimental results are shown on real data along with a quantitative analysis.

  11. N-methyl-D-aspartate recognition site ligands modulate activity at the coupled glycine recognition site.

    PubMed

    Hood, W F; Compton, R P; Monahan, J B

    1990-03-01

    In synaptic plasma membranes from rat forebrain, the potencies of glycine recognition site agonists and antagonists for modulating [3H]1-[1-(2-thienyl)cyclohexyl]piperidine ([3H]TCP) binding and for displacing strychnine-insensitive [3H]glycine binding are altered in the presence of N-methyl-D-aspartate (NMDA) recognition site ligands. The NMDA competitive antagonist, cis-4-phosphonomethyl-2-piperidine carboxylate (CGS 19755), reduces [3H]glycine binding, and the reduction can be fully reversed by the NMDA recognition site agonist, L-glutamate. Scatchard analysis of [3H]glycine binding shows that in the presence of CGS 19755 there is no change in Bmax (8.81 vs. 8.79 pmol/mg of protein), but rather a decrease in the affinity of glycine (KD of 0.202 microM vs. 0.129 microM). Similar decreases in affinity are observed for the glycine site agonists, D-serine and 1-aminocyclopropane-1-carboxylate, in the presence of CGS 19755. In contrast, the affinity of glycine antagonists, 1-hydroxy-3-amino-2-pyrrolidone and 1-aminocyclobutane-1-carboxylate, at this [3H]glycine recognition site increases in the presence of CGS 19755. The functional consequence of this change in affinity was addressed using the modulation of [3H]TCP binding. In the presence of L-glutamate, the potency of glycine agonists for the stimulation of [3H]TCP binding increases, whereas the potency of glycine antagonists decreases. These data are consistent with NMDA recognition site ligands, through their interactions at the NMDA recognition site, modulating activity at the associated glycine recognition site.

  12. Face recognition based on fringe pattern analysis

    NASA Astrophysics Data System (ADS)

    Guo, Hong; Huang, Peisen

    2010-03-01

    Two-dimensional face-recognition techniques suffer from facial texture and illumination variations. Although 3-D techniques can overcome these limitations, the reconstruction and storage expenses of 3-D information are extremely high. We present a novel face-recognition method that directly utilizes 3-D information encoded in face fringe patterns without having to reconstruct 3-D geometry. In the proposed method, a digital video projector is employed to sequentially project three phase-shifted sinusoidal fringe patterns onto the subject's face. Meanwhile, a camera is used to capture the distorted fringe patterns from an offset angle. Afterward, the face fringe images are analyzed by the phase-shifting method and the Fourier transform method to obtain a spectral representation of the 3-D face. Finally, the eigenface algorithm is applied to the face-spectrum images to perform face recognition. Simulation and experimental results demonstrate that the proposed method achieved satisfactory recognition rates with reduced computational complexity and storage expenses.
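    As a worked example of the phase-shifting analysis step, assuming three fringe images captured with phase shifts of -2*pi/3, 0, and +2*pi/3 (the standard three-step formula; the rest of the pipeline is omitted):

        import numpy as np

        def wrapped_phase(I1, I2, I3):
            """I1, I2, I3: fringe images with phase shifts of -2pi/3, 0, +2pi/3."""
            return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

        # The phase map (or a spectral representation derived from it) would then be
        # fed to the eigenface algorithm for recognition.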

  13. Random-profiles-based 3D face recognition system.

    PubMed

    Kim, Joongrock; Yu, Sunjin; Lee, Sangyoun

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition depends highly on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation.

  14. Random-Profiles-Based 3D Face Recognition System

    PubMed Central

    Joongrock, Kim; Sunjin, Yu; Sangyoun, Lee

    2014-01-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition depends highly on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that the method achieves a reliable recognition rate under pose variation. PMID:24691101

  15. Recognition mechanisms for schema-based knowledge representations

    SciTech Connect

    Havens, W.S.

    1983-01-01

    The author considers generalizing formal recognition methods from parsing theory to schema-based knowledge representations. Within artificial intelligence, recognition tasks include aspects of natural language understanding, computer vision, episode understanding, speech recognition, and others. The notion of schemata as a suitable knowledge representation for these tasks is discussed, and a number of problems with current schema-based recognition systems are presented. To gain insight into alternative approaches, the formal context-free parsing method of Earley is examined; it is shown to suggest a useful control-structure model for integrating top-down and bottom-up search in schema representations. 46 references.

  16. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient characteristics, has received increasing attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, focusing on the theory and key techniques of the various preprocessing methods used in the face detection process and on how different preprocessing methods affect recognition results when the KPCA method is used. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed with erosion and dilation (the opening and closing operations) and an illumination compensation method, and then analyzed with a face recognition method based on kernel principal component analysis; the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm makes the extracted features represent the original image information better, because a nonlinear feature extraction method is used, and thus yields a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can lead to different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power of the polynomial kernel function can affect the recognition result.
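    A minimal sketch of the recognition stage only: kernel PCA with a polynomial kernel (whose degree is the "power of the polynomial function" mentioned above) followed by a nearest-neighbor classifier; the preprocessed training and test matrices are assumed.

        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        model = make_pipeline(
            KernelPCA(n_components=50, kernel='poly', degree=3),   # KPCA feature extraction
            KNeighborsClassifier(n_neighbors=1),
        )
        model.fit(X_train, y_train)     # X_*: flattened, preprocessed face images (assumed)
        print(model.score(X_test, y_test))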

  17. Robust facial expression recognition algorithm based on local metric learning

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.

  18. An Evaluation of PC-Based Optical Character Recognition Systems.

    ERIC Educational Resources Information Center

    Schreier, E. M.; Uslan, M. M.

    1991-01-01

    The review examines six personal computer-based optical character recognition (OCR) systems designed for use by blind and visually impaired people. Considered are OCR components and terms, documentation, scanning and reading, command structure, conversion, unique features, accuracy of recognition, scanning time, speed, and cost. (DB)

  19. Molecular Recognition: Detection of Colorless Compounds Based on Color Change

    ERIC Educational Resources Information Center

    Khalafi, Lida; Kashani, Samira; Karimi, Javad

    2016-01-01

    A laboratory experiment is described in which students measure the amount of cetirizine in allergy-treatment tablets based on molecular recognition. The basis of recognition is the competition of cetirizine with phenolphthalein to form an inclusion complex with ß-cyclodextrin. Phenolphthalein is pinkish under basic conditions, whereas its complex form…

  20. Implementation study of wearable sensors for activity recognition systems

    PubMed Central

    Ghassemian, Mona

    2015-01-01

    This Letter investigates and reports on a number of activity recognition methods for a wearable sensor system. The authors apply three methods for data transmission, namely ‘stream-based’, ‘feature-based’ and ‘threshold-based’ scenarios to study the accuracy against energy efficiency of transmission and processing power that affects the mote's battery lifetime. They also report on the impact of variation of sampling frequency and data transmission rate on energy consumption of motes for each method. This study leads us to propose a cross-layer optimisation of an activity recognition system for provisioning acceptable levels of accuracy and energy efficiency. PMID:26609413

  1. Facial expression recognition based on improved DAGSVM

    NASA Astrophysics Data System (ADS)

    Luo, Yuan; Cui, Ye; Zhang, Yi

    2014-11-01

    To address the cumulative error problem caused by the random ordering of classifiers in traditional DAGSVM (Directed Acyclic Graph Support Vector Machine) classification, this paper presents an improved DAGSVM expression recognition method. The method uses the between-class distance and the standard deviation as the measure for ordering the classifiers, which minimizes the error rate in the upper levels of the classification structure. At the same time, this paper combines the discrete cosine transform (DCT) with Local Binary Patterns (LBP) to extract expression features, which serve as the input to the improved DAGSVM classifier for recognition. Experimental results show that, compared with other multi-class support vector machine methods, the improved DAGSVM classifier achieves a higher recognition rate, and experiments on an intelligent wheelchair platform show that the method has better robustness.

  2. Feature Selection in Classification of Eye Movements Using Electrooculography for Activity Recognition

    PubMed Central

    Mala, S.; Latha, K.

    2014-01-01

    Activity recognition is needed in different applications, for example, reconnaissance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, Differential Evolution (DE), a very efficient evolutionary optimizer, is used to find informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interaction with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum redundancy maximum relevance features, and Differential Evolution-based features. This work concentrates mainly on the DE-based feature selection algorithm in order to improve the classification for faultless activity recognition. PMID:25574185
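    A hedged sketch of DE-based feature selection: each DE candidate is a real-valued vector thresholded into a binary feature mask and scored by cross-validated SVM accuracy. The EOG feature matrix X and activity labels y are assumed, and the SVM wrapper is one plausible fitness choice, not necessarily the paper's.

        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def fitness(mask_real):
            mask = mask_real > 0.5
            if not mask.any():
                return 1.0                                  # penalize the empty subset
            acc = cross_val_score(SVC(), X[:, mask], y, cv=3).mean()
            return 1.0 - acc                                # DE minimizes, so use error rate

        result = differential_evolution(fitness, bounds=[(0, 1)] * X.shape[1],
                                        maxiter=20, popsize=10, seed=0)
        selected = np.where(result.x > 0.5)[0]              # indices of informative features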

  3. Experimental study on GMM-based speaker recognition

    NASA Astrophysics Data System (ADS)

    Ye, Wenxing; Wu, Dapeng; Nucci, Antonio

    2010-04-01

    Speaker recognition plays a very important role in the field of biometric security. In order to improve recognition performance, many pattern recognition techniques have been explored in the literature. Among these techniques, the Gaussian Mixture Model (GMM) has proved to be an effective statistical model for speaker recognition and is used in most state-of-the-art speaker recognition systems. The GMM is used to represent the 'voice print' of a speaker by modeling the spectral characteristics of the speaker's speech signals. In this paper, we implement a speaker recognition system, which consists of preprocessing, Mel-Frequency Cepstrum Coefficient (MFCC) based feature extraction, and GMM-based classification. We test our system with the TIDIGITS data set (325 speakers) and our own recordings of more than 200 speakers; our system achieves a 100% correct recognition rate. Moreover, we also test our system under the scenario that training samples are from one language but test samples are from a different language; our system again achieves a 100% correct recognition rate, which indicates that our system is language independent.
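    A hedged sketch of the GMM "voice print" idea: one GMM per enrolled speaker fitted on MFCC frames, with a test utterance assigned to the speaker whose model gives the highest average log-likelihood. The enrollment dictionary (speaker -> list of wav paths), sample rate, and model sizes are assumptions.

        import numpy as np
        import librosa
        from sklearn.mixture import GaussianMixture

        def mfcc_frames(path):
            y, sr = librosa.load(path, sr=16000)
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T    # (n_frames, 13)

        models = {}
        for speaker, files in enrollment.items():
            feats = np.vstack([mfcc_frames(f) for f in files])
            models[speaker] = GaussianMixture(n_components=16, covariance_type='diag').fit(feats)

        def identify(path):
            test = mfcc_frames(path)
            return max(models, key=lambda s: models[s].score(test))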

  4. Focus-of-attention for human activity recognition from UAVs

    NASA Astrophysics Data System (ADS)

    Burghouts, G. J.; van Eekeren, A. W. M.; Dijk, J.

    2014-10-01

    This paper presents a system to extract metadata about human activities from full-motion video recorded from a UAV. The pipeline consists of these components: tracking, motion features, representation of the tracks in terms of their motion features, and classification of each track as one of the human activities of interest. We consider these activities: walk, run, throw, dig, wave. Our contribution is that we show how a robust system can be constructed for human activity recognition from UAVs, and that focus-of-attention is needed. We find that tracking and human detection are essential for robust human activity recognition from UAVs. Without tracking, the human activity recognition deteriorates. The combination of tracking and human detection is needed to focus the attention on the relevant tracks. The best performing system includes tracking, human detection and a per-track analysis of the five human activities. This system achieves an average accuracy of 93%. A graphical user interface is proposed to aid the operator or analyst during the task of retrieving the relevant parts of video that contain particular human activities. Our demo is available on YouTube.

  5. Sparse representation based face recognition using weighted regions

    NASA Astrophysics Data System (ADS)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired using different cameras under varying conditions. Even small noise or occlusion in the images can compromise the accuracy of recognition. Lately, sparse encoding based classification algorithms have given promising results for such uncontrollable scenarios. In this paper, we introduce a novel methodology that models the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform the comparison on the selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, i.e., SCface and MFPV, with results clearly superior to common state-of-the-art methodologies in different scenarios.

  6. Human motion recognition based on features and models selected HMM

    NASA Astrophysics Data System (ADS)

    Lu, Haixiang; Zhou, Hongjun

    2015-03-01

    This paper researches motion recognition based on HMM with Kinect. Kinect provides skeletal data consisting of 3D body joints, at a low price and with convenience. In this work, several methods are used to determine the optimal subset of features among Cartesian coordinates, distance to the hip center, velocity, angle, and angular velocity, in order to improve the recognition rate. K-means is used for vector quantization and an HMM is used as the recognition method. The HMM is an effective signal processing method which handles time alignment, provides a learning mechanism, and offers recognition ability. The number of K-means clusters and the structure and number of states of the HMM are optimized as well. The proposed methods are applied to the MSR Action3D dataset. Results show that the proposed methods obtain better recognition accuracy than state-of-the-art methods.
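
    A minimal sketch of the vector-quantization step plus HMM scoring described above, assuming skeleton feature frames have already been computed. K-means provides the codebook; per-action HMM parameters (initial, transition, and emission probabilities) are assumed to have been estimated beforehand (e.g., with Baum-Welch), so only the scaled forward algorithm used for classification is shown:

        import numpy as np
        from sklearn.cluster import KMeans

        def quantize(frames, kmeans):
            # Map each skeleton feature vector to its nearest K-means codeword index.
            return kmeans.predict(frames)

        def forward_loglik(obs, pi, A, B):
            """Log-likelihood of a discrete observation sequence under an HMM.
            pi: initial state probs (N,), A: transitions (N, N), B: emissions (N, M)."""
            alpha = pi * B[:, obs[0]]
            loglik = np.log(alpha.sum())
            alpha = alpha / alpha.sum()
            for t in range(1, len(obs)):
                alpha = (alpha @ A) * B[:, obs[t]]
                s = alpha.sum()
                loglik += np.log(s)
                alpha /= s
            return loglik

        # kmeans = KMeans(n_clusters=32, n_init=10).fit(all_training_frames)
        # symbols = quantize(test_frames, kmeans)
        # label = max(action_hmms, key=lambda a: forward_loglik(symbols, *action_hmms[a]))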

  7. Egocentric daily activity recognition via multitask clustering.

    PubMed

    Yan, Yan; Ricci, Elisa; Liu, Gaowen; Sebe, Nicu

    2015-10-01

    Recognizing human activities from videos is a fundamental research problem in computer vision. Recently, there has been a growing interest in analyzing human behavior from data collected with wearable cameras. First-person cameras continuously record several hours of their wearers' life. To cope with this vast amount of unlabeled and heterogeneous data, novel algorithmic solutions are required. In this paper, we propose a multitask clustering framework for activity of daily living analysis from visual data gathered from wearable cameras. Our intuition is that, even if the data are not annotated, it is possible to exploit the fact that the tasks of recognizing everyday activities of multiple individuals are related, since typically people perform the same actions in similar environments (e.g., people working in an office often read and write documents). In our framework, rather than clustering data from different users separately, we propose to look for clustering partitions which are coherent among related tasks. In particular, two novel multitask clustering algorithms, derived from a common optimization problem, are introduced. Our experimental evaluation, conducted both on synthetic data and on publicly available first-person vision data sets, shows that the proposed approach outperforms several single-task and multitask learning methods. PMID:26067371

  8. Adaptive Activity and Environment Recognition for Mobile Phones

    PubMed Central

    Parviainen, Jussi; Bojja, Jayaprasad; Collin, Jussi; Leppänen, Jussi; Eronen, Antti

    2014-01-01

    In this paper, an adaptive activity and environment recognition algorithm running on a mobile phone is presented. The algorithm makes inferences based on sensor and radio receiver data provided by the phone. A wide set of features that can be extracted from these data sources were investigated, and a Bayesian maximum a posteriori classifier was used for classifying between several user activities and environments. The accuracy of the method was evaluated on a dataset collected in a real-life trial. In addition, comparison to other state-of-the-art classifiers, namely support vector machines and decision trees, was performed. To make the system adaptive for individual user characteristics, an adaptation algorithm for context model parameters was designed. Moreover, a confidence measure for the classification correctness was designed. The proposed adaptation algorithm and confidence measure were evaluated on a second dataset obtained from another real-life trial, where the users were requested to provide binary feedback on the classification correctness. The results show that the proposed adaptation algorithm is effective at improving the classification accuracy. PMID:25372620

  9. Adaptive activity and environment recognition for mobile phones.

    PubMed

    Parviainen, Jussi; Bojja, Jayaprasad; Collin, Jussi; Leppänen, Jussi; Eronen, Antti

    2014-11-03

    In this paper, an adaptive activity and environment recognition algorithm running on a mobile phone is presented. The algorithm makes inferences based on sensor and radio receiver data provided by the phone. A wide set of features that can be extracted from these data sources were investigated, and a Bayesian maximum a posteriori classifier was used for classifying between several user activities and environments. The accuracy of the method was evaluated on a dataset collected in a real-life trial. In addition, comparison to other state-of-the-art classifiers, namely support vector machines and decision trees, was performed. To make the system adaptive for individual user characteristics, an adaptation algorithm for context model parameters was designed. Moreover, a confidence measure for the classification correctness was designed. The proposed adaptation algorithm and confidence measure were evaluated on a second dataset obtained from another real-life trial, where the users were requested to provide binary feedback on the classification correctness. The results show that the proposed adaptation algorithm is effective at improving the classification accuracy.

  10. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.

  11. 3D face recognition based on matching of facial surfaces

    NASA Astrophysics Data System (ADS)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and pose variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm is suggested, based on conformal mapping of the original facial surfaces onto a Riemannian manifold, followed by comparison of conformal and isometric invariants computed in the manifold. Experimental results are presented using common 3D face databases that contain a significant amount of expression and pose variations.

  12. Automatic emotion recognition based on body movement analysis: a survey.

    PubMed

    Zacharatos, Haris; Gatzoulis, Christos; Chrysanthou, Yiorgos L

    2014-01-01

    Humans are emotional beings, and their feelings influence how they perform and interact with computers. One of the most expressive modalities for humans is body posture and movement, which researchers have recently started exploiting for emotion recognition. This survey describes emerging techniques and modalities related to emotion recognition based on body movement, as well as recent advances in automatic emotion recognition. It also describes application areas and notation systems and explains the importance of movement segmentation. It then discusses unsolved problems and provides promising directions for future research. The Web extra (a PDF file) contains tables with additional information related to the article. PMID:25216477

  13. Facial expression recognition with facial parts based sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. The understanding of facial expressions is a basic requirement in the development of next generation human computer interaction systems. Research shows that the intrinsic facial features always hide in low dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with the constraint of a linear combination equation. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than other compared methods.
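
    A minimal sketch of a sparse representation classifier of the kind described above: the probe is expressed as a sparse linear combination of training face vectors via an l1-regularized solver, and the class with the smallest class-wise reconstruction residual wins. scikit-learn's Lasso stands in for the l1-norm minimization; the dictionary layout and regularization weight are illustrative:

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(D, labels, y, alpha=0.01):
            """D: (d, n) dictionary whose columns are training face vectors,
            labels: (n,) class of each column, y: (d,) probe vector."""
            solver = Lasso(alpha=alpha, max_iter=10000, fit_intercept=False)
            solver.fit(D, y)                              # approximate l1-norm minimization
            x = solver.coef_
            residuals = {}
            for c in np.unique(labels):
                xc = np.where(labels == c, x, 0.0)        # keep only class-c coefficients
                residuals[c] = np.linalg.norm(y - D @ xc)
            return min(residuals, key=residuals.get)      # smallest reconstruction error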

  14. On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.

    PubMed

    Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea

    2016-09-01

    A smart home is a home environment enriched with sensing, actuation, communication and computation capabilities which permits adapting it to inhabitants' preferences and requirements. Establishing a proper strategy of actuation on the home environment can require complex computational tasks on the sensed data. This is the case of activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, which offers basic abstraction entities that allow smart home applications to be designed and implemented easily. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details about how to implement activity recognition on the CASE architecture are supplied, focusing on the low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment. PMID:27468841

  15. On the Design of Smart Homes: A Framework for Activity Recognition in Home Environment.

    PubMed

    Cicirelli, Franco; Fortino, Giancarlo; Giordano, Andrea; Guerrieri, Antonio; Spezzano, Giandomenico; Vinci, Andrea

    2016-09-01

    A smart home is a home environment enriched with sensing, actuation, communication and computation capabilities which permits adapting it to inhabitants' preferences and requirements. Establishing a proper strategy of actuation on the home environment can require complex computational tasks on the sensed data. This is the case of activity recognition, which consists in retrieving high-level knowledge about what occurs in the home environment and about the behaviour of the inhabitants. The inherent complexity of this application domain calls for tools able to properly support the design and implementation phases. This paper proposes a framework for the design and implementation of smart home applications focused on activity recognition in home environments. The framework mainly relies on the Cloud-assisted Agent-based Smart home Environment (CASE) architecture, which offers basic abstraction entities that allow smart home applications to be designed and implemented easily. CASE is a three-layered architecture which exploits the distributed multi-agent paradigm and cloud technology for offering analytics services. Details about how to implement activity recognition on the CASE architecture are supplied, focusing on the low-level technological issues as well as the algorithms and methodologies useful for activity recognition. The effectiveness of the framework is shown through a case study consisting of daily activity recognition of a person in a home environment.

  16. Pattern recognition tool based on complex network-based approach

    NASA Astrophysics Data System (ADS)

    Casanova, Dalcimar; Backes, André Ricardo; Martinez Bruno, Odemir

    2013-02-01

    This work proposes a generalization of the method proposed by the authors in 'A complex network-based approach for boundary shape analysis'. Instead of modelling a contour as a graph and using complex network rules to characterize it, here we generalize the technique. In this way, the work proposes a mathematical tool for characterizing signals, curves and sets of points. To evaluate the pattern description power of the proposal, an experiment on plant identification based on leaf vein images is conducted. Leaf venation is a taxonomic characteristic used for plant identification purposes, and one of its characteristics is that these structures are complex and difficult to represent as signals or curves, and thus difficult to analyze in a classical pattern recognition approach. Here, we model the veins as a set of points and represent them as graphs. As features, we use the degree and joint degree measurements in a dynamic evolution. The results demonstrate that the technique has good discrimination power and can be used for plant identification, as well as other complex pattern recognition tasks.

  17. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed, which is based on a hybrid texture-edge local pattern coding feature extraction and integration of RGB and depth videos information. The paper mainly focuses on background subtraction on RGB and depth video sequences of behaviors, extracting and integrating historical images of the behavior outlines, feature extraction and classification. The new method of 3D human behavior recognition has achieved the rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and higher recognition rate. The recognition method has good robustness for different environmental colors, lightings and other factors. Meanwhile, the feature of mixed texture-edge uniform local binary pattern can be used in most 3D behavior recognition.
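
    A minimal sketch of a uniform LBP texture-edge descriptor in the spirit of the abstract: a uniform local binary pattern histogram is computed on the intensity image and on its gradient-magnitude (edge) image, and the two are concatenated. scikit-image and SciPy are assumed, and this is one plausible reading of the "texture-edge" feature rather than the authors' exact coding:

        import numpy as np
        from scipy import ndimage
        from skimage.feature import local_binary_pattern

        def uniform_lbp_histogram(img, P=8, R=1):
            # 'uniform' maps all non-uniform patterns to one bin, giving P + 2 bins in total.
            codes = local_binary_pattern(img, P, R, method='uniform')
            hist, _ = np.histogram(codes.ravel(), bins=np.arange(P + 3), density=True)
            return hist

        def texture_edge_feature(gray):
            g = gray.astype(float)
            edges = np.hypot(ndimage.sobel(g, axis=0), ndimage.sobel(g, axis=1))
            return np.concatenate([uniform_lbp_histogram(g), uniform_lbp_histogram(edges)])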

  18. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition

    PubMed Central

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed, which is based on a hybrid texture-edge local pattern coding feature extraction and integration of RGB and depth videos information. The paper mainly focuses on background subtraction on RGB and depth video sequences of behaviors, extracting and integrating historical images of the behavior outlines, feature extraction and classification. The new method of 3D human behavior recognition has achieved the rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method has faster speed and higher recognition rate. The recognition method has good robustness for different environmental colors, lightings and other factors. Meanwhile, the feature of mixed texture-edge uniform local binary pattern can be used in most 3D behavior recognition. PMID:25942404

  19. Skeleton-based human action recognition using multiple sequence alignment

    NASA Astrophysics Data System (ADS)

    Ding, Wenwen; Liu, Kai; Cheng, Fei; Zhang, Jin; Li, YunSong

    2015-05-01

    Human action recognition and analysis has been an active research topic in computer vision for many years. This paper presents a method to represent human actions based on trajectories consisting of 3D joint positions. The method first decomposes an action into a sequence of meaningful atomic actions (actionlets), and then labels actionlets with English alphabet symbols according to the Davies-Bouldin index value. Therefore, an action can be represented as a sequence of actionlet symbols, which preserves the temporal order of occurrence of each of the actionlets. Finally, we employ sequence comparison to classify multiple actions using string matching algorithms (Needleman-Wunsch). The effectiveness of the proposed method is evaluated on datasets captured by commodity depth cameras. Experiments of the proposed method on three challenging 3D action datasets show promising results.
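
    A minimal sketch of the string-matching step described above: each action is a string of actionlet symbols, and Needleman-Wunsch global alignment scores a probe string against labelled training strings. The scoring values and the nearest-neighbour decision rule are illustrative:

        def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
            """Global alignment score between two actionlet symbol strings."""
            n, m = len(a), len(b)
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap
            for j in range(1, m + 1):
                score[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            return score[n][m]

        def classify(test_string, train_pairs):
            # train_pairs: list of (actionlet_string, action_label); best alignment wins.
            best = max(train_pairs, key=lambda p: needleman_wunsch(test_string, p[0]))
            return best[1]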

  20. An Activation-Verification Model for Letter and Word Recognition: The Word-Superiority Effect.

    ERIC Educational Resources Information Center

    Paap, Kenneth R.; And Others

    1982-01-01

    An encoding algorithm uses empirically determined confusion matrices to activate units in an alphabetum and a lexicon to predict performance of word, orthographically regular nonword, or irregular nonword recognition. Performance is enhanced when decisions are based on lexical information which constrains test letter identity. Word prediction…

  1. Optimal Recognition Method of Human Activities Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Oniga, Stefan; József, Sütő

    2015-12-01

    The aim of this research is an exhaustive analysis of the various factors that may influence the recognition rate of human activity using wearable sensor data. We made a total of 1674 simulations on a publicly released human activity database collected by a group of researchers from the University of California at Berkeley. In a previous research, we analyzed the influence of the number of sensors and their placement. In the present research we have examined the influence of the number of sensor nodes, the type of sensor node, preprocessing algorithms, and the type of classifier and its parameters. The final purpose is to find the optimal setup for the best recognition rates with the lowest hardware and software costs.

  2. Cellular Phone Face Recognition System Based on Optical Phase Correlation

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Sayuri; Ohta, Maiko; Kodate, Kashiko

    We propose a high security facial recognition system using a cellular phone on the mobile network. This system is composed of a face recognition engine based on optical phase correlation, which uses phase information with emphasis on the Fourier domain, a control server, and the cellular phone with a compact camera for taking pictures as a portable terminal. Compared with various correlation methods, our face recognition engine achieved the most accurate EER of less than 1%. Using the JAVA interface of this system, we implemented a stable system for taking pictures, providing functions to prevent spoofing while transferring images. This recognition system was tested on 300 female students and the results proved the system effective.

  3. Parallel computing-based sclera recognition for human identification

    NASA Astrophysics Data System (ADS)

    Lin, Yong; Du, Eliza Y.; Zhou, Zhi

    2012-06-01

    Compared to iris recognition, sclera recognition using a line descriptor can achieve comparable recognition accuracy in visible wavelengths. However, this method is too time-consuming to be implemented in a real-time system. In this paper, we propose a GPU-based parallel computing approach to reduce the sclera recognition time. We define a new descriptor in which information about the KD tree structure and the sclera edge is added. The registration and matching task is divided into subtasks of various sizes according to their computational complexity. Affine transform parameters are generated by searching the KD tree. Texture memory, constant memory, and shared memory are used to store templates and transform matrices. The experimental results show that the proposed method executed on a GPU can dramatically improve the sclera matching speed by hundreds of times without decreasing accuracy.

  4. A recurrent dynamic model for correspondence-based face recognition.

    PubMed

    Wolfrum, Philipp; Wolff, Christian; Lücke, Jörg; von der Malsburg, Christoph

    2008-01-01

    Our aim here is to create a fully neural, functionally competitive, and correspondence-based model for invariant face recognition. By recurrently integrating information about feature similarities, spatial feature relations, and facial structure stored in memory, the system evaluates face identity ("what"-information) and face position ("where"-information) using explicit representations for both. The network consists of three functional layers of processing, (1) an input layer for image representation, (2) a middle layer for recurrent information integration, and (3) a gallery layer for memory storage. Each layer consists of cortical columns as functional building blocks that are modeled in accordance with recent experimental findings. In numerical simulations we apply the system to standard benchmark databases for face recognition. We find that recognition rates of our biologically inspired approach lie in the same range as recognition rates of recent and purely functionally motivated systems. PMID:19146266

  5. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.
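
    A minimal sketch of robust principal component analysis via the inexact augmented Lagrange multiplier method: the training matrix D (one vectorized iris image per column, for instance) is split into a low-rank part L used for feature extraction and a sparse error part S absorbing occlusions and reflections. The step-size heuristic and tolerances are common defaults, not the paper's exact settings:

        import numpy as np

        def shrink(M, tau):
            # Soft thresholding, the proximal operator of the l1 norm.
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

        def svd_threshold(M, tau):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(shrink(s, tau)) @ Vt

        def robust_pca(D, max_iter=500, tol=1e-7):
            """Decompose D into low-rank L plus sparse S, with D ~ L + S."""
            m, n = D.shape
            lam = 1.0 / np.sqrt(max(m, n))
            mu = 0.25 * m * n / np.abs(D).sum()     # common step-size heuristic
            Y = np.zeros_like(D)                    # Lagrange multipliers
            S = np.zeros_like(D)
            for _ in range(max_iter):
                L = svd_threshold(D - S + Y / mu, 1.0 / mu)
                S = shrink(D - L + Y / mu, lam / mu)
                residual = D - L - S
                Y = Y + mu * residual
                if np.linalg.norm(residual) <= tol * np.linalg.norm(D):
                    break
            return L, S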

  6. ERK Pathway Activation Bidirectionally Affects Visual Recognition Memory and Synaptic Plasticity in the Perirhinal Cortex

    PubMed Central

    Silingardi, Davide; Angelucci, Andrea; De Pasquale, Roberto; Borsotti, Marco; Squitieri, Giovanni; Brambilla, Riccardo; Putignano, Elena; Pizzorusso, Tommaso; Berardi, Nicoletta

    2011-01-01

    ERK 1,2 pathway mediates experience-dependent gene transcription in neurons and several studies have identified its pivotal role in experience-dependent synaptic plasticity and in forms of long term memory involving hippocampus, amygdala, or striatum. The perirhinal cortex (PRHC) plays an essential role in familiarity-based object recognition memory. It is still unknown whether ERK activation in PRHC is necessary for recognition memory consolidation. Most important, it is unknown whether by modulating the gain of the ERK pathway it is possible to bidirectionally affect visual recognition memory and PRHC synaptic plasticity. We have first pharmacologically blocked ERK activation in the PRHC of adult mice and found that this was sufficient to impair long term recognition memory in a familiarity-based task, the object recognition task (ORT). We have then tested performance in the ORT in Ras-GRF1 knock-out (KO) mice, which exhibit a reduced activation of ERK by neuronal activity, and in ERK1 KO mice, which have an increased activation of ERK2 and exhibit enhanced striatal plasticity and striatal mediated memory. We found that Ras-GRF1 KO mice have normal short term memory but display a long term memory deficit; memory reconsolidation is also impaired. On the contrary, ERK1 KO mice exhibit a better performance than WT mice at 72 h retention interval, suggesting a longer lasting recognition memory. In parallel with behavioral data, LTD was strongly reduced and LTP was significantly smaller in PRHC slices from Ras-GRF1 KO than in WT mice while enhanced LTP and LTD were found in PRHC slices from ERK1 KO mice. PMID:22232579

  7. A New Ligand-Based Method for Purifying Active Human Plasma-Derived Ficolin-3 Complexes Supports the Phenomenon of Crosstalk between Pattern-Recognition Molecules and Immunoglobulins

    PubMed Central

    Man-Kupisinska, Aleksandra; Michalski, Mateusz; Maciejewska, Anna; Swierzko, Anna S.; Cedzynski, Maciej; Lugowski, Czeslaw; Lukasiewicz, Jolanta

    2016-01-01

    Despite recombinant protein technology development, proteins isolated from natural sources remain important for structure and activity determination. Ficolins represent a class of proteins that are difficult to isolate. To date, three methods for purifying ficolin-3 from plasma/serum have been proposed, defined by most critical step: (i) hydroxyapatite absorption chromatography (ii) N-acetylated human serum albumin affinity chromatography and (iii) anti-ficolin-3 monoclonal antibody-based affinity chromatography. We present a new protocol for purifying ficolin-3 complexes from human plasma that is based on an exclusive ligand: the O-specific polysaccharide of Hafnia alvei PCM 1200 LPS (O-PS 1200). The protocol includes (i) poly(ethylene glycol) precipitation; (ii) yeast and l-fucose incubation, for depletion of mannose-binding lectin; (iii) affinity chromatography using O-PS 1200-Sepharose; (iv) size-exclusion chromatography. Application of this protocol yielded average 2.2 mg of ficolin-3 preparation free of mannose-binding lectin (MBL), ficolin-1 and -2 from 500 ml of plasma. The protein was complexed with MBL-associated serine proteases (MASPs) and was able to activate the complement in vitro. In-process monitoring of MBL, ficolins, and total protein content revealed the presence of difficult-to-remove immunoglobulin G, M and A, in some extent in agreement with recent findings suggesting crosstalk between IgG and ficolin-3. We demonstrated that recombinant ficolin-3 interacts with IgG and IgM in a concentration-dependent manner. Although this association does not appear to influence ficolin-3-ligand interactions in vitro, it may have numerous consequences in vivo. Thus our purification procedure provides Ig-ficolin-3/MASP complexes that might be useful for gaining further insight into the crosstalk and biological activity of ficolin-3. PMID:27232184

  8. A New Ligand-Based Method for Purifying Active Human Plasma-Derived Ficolin-3 Complexes Supports the Phenomenon of Crosstalk between Pattern-Recognition Molecules and Immunoglobulins.

    PubMed

    Man-Kupisinska, Aleksandra; Michalski, Mateusz; Maciejewska, Anna; Swierzko, Anna S; Cedzynski, Maciej; Lugowski, Czeslaw; Lukasiewicz, Jolanta

    2016-01-01

    Despite recombinant protein technology development, proteins isolated from natural sources remain important for structure and activity determination. Ficolins represent a class of proteins that are difficult to isolate. To date, three methods for purifying ficolin-3 from plasma/serum have been proposed, defined by most critical step: (i) hydroxyapatite absorption chromatography (ii) N-acetylated human serum albumin affinity chromatography and (iii) anti-ficolin-3 monoclonal antibody-based affinity chromatography. We present a new protocol for purifying ficolin-3 complexes from human plasma that is based on an exclusive ligand: the O-specific polysaccharide of Hafnia alvei PCM 1200 LPS (O-PS 1200). The protocol includes (i) poly(ethylene glycol) precipitation; (ii) yeast and l-fucose incubation, for depletion of mannose-binding lectin; (iii) affinity chromatography using O-PS 1200-Sepharose; (iv) size-exclusion chromatography. Application of this protocol yielded average 2.2 mg of ficolin-3 preparation free of mannose-binding lectin (MBL), ficolin-1 and -2 from 500 ml of plasma. The protein was complexed with MBL-associated serine proteases (MASPs) and was able to activate the complement in vitro. In-process monitoring of MBL, ficolins, and total protein content revealed the presence of difficult-to-remove immunoglobulin G, M and A, in some extent in agreement with recent findings suggesting crosstalk between IgG and ficolin-3. We demonstrated that recombinant ficolin-3 interacts with IgG and IgM in a concentration-dependent manner. Although this association does not appear to influence ficolin-3-ligand interactions in vitro, it may have numerous consequences in vivo. Thus our purification procedure provides Ig-ficolin-3/MASP complexes that might be useful for gaining further insight into the crosstalk and biological activity of ficolin-3. PMID:27232184

  9. Finger Vein Recognition Based on Personalized Weight Maps

    PubMed Central

    Yang, Gongping; Xiao, Rongyang; Yin, Yilong; Yang, Lu

    2013-01-01

    Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods were thoroughly studied in order to cope with the difficulties of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of feature codes derived from different image of various individuals as equally important and assign the same weight value to them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs). The different bits have different weight values according to their stabilities in a certain number of training samples from an individual. Firstly we present the concept of PWM, and then propose the finger vein recognition framework, which mainly consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance, but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition. PMID:24025556

  10. Correlation between the activity of digestive enzymes and nonself recognition in the gut of Eisenia andrei earthworms.

    PubMed

    Procházková, Petra; Šustr, Vladimír; Dvořák, Jiří; Roubalová, Radka; Škanta, František; Pižl, Václav; Bilej, Martin

    2013-11-01

    Earthworms Eisenia andrei, similarly to other invertebrates, rely on innate defense mechanisms based on the capability to recognize and respond to nonself. Here, we show a correlation between the expression of CCF, a crucial pattern-recognition receptor, and lysozyme, with enzyme activities in the gut of E. andrei earthworms following a microbial challenge. These data suggest that enzyme activities important for the release and recognition of molecular patterns by pattern-recognition molecules, as well as enzymes involved in effector pathways, are modulated during the microbial challenge. In particular, protease, laminarinase, and glucosaminidase activities were increased in parallel to up-regulated CCF and lysozyme expression.

  11. Advancing from offline to online activity recognition with wearable sensors.

    PubMed

    Ermes, Miikka; Parkka, Juha; Cluitmans, Luc

    2008-01-01

    Activity recognition with wearable sensors could motivate people to perform a variety of different sports and other physical exercises. We have earlier developed algorithms for offline analysis of activity data collected with wearable sensors. In this paper, we present our current progress in advancing the platform for the existing algorithms to an online version, onto a PDA. Acceleration data are obtained from wireless motion bands which send the 3D raw acceleration signals via a Bluetooth link to the PDA which then performs the data collection, feature extraction and activity classification. As a proof-of-concept, the online activity system was tested with three subjects. All of them performed at least 5 minutes of each of the following activities: lying, sitting, standing, walking, running and cycling with an exercise bike. The average second-by-second classification accuracies for the subjects were 99%, 97%, and 82 %. These results suggest that earlier developed offline analysis methods for the acceleration data obtained from wearable sensors can be successfully implemented in an online activity recognition application. PMID:19163702
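
    A minimal sketch of the windowed feature extraction and second-by-second classification described above, assuming a (T, 3) raw acceleration array sampled at a fixed rate; the feature set, window length, and decision tree classifier are illustrative stand-ins for the authors' offline algorithms:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        def window_features(acc, fs=50, win_s=1.0):
            """Split a (T, 3) acceleration stream into fixed windows and compute
            simple time-domain features per window (axis means/stds, magnitude stats)."""
            step = int(fs * win_s)
            feats = []
            for start in range(0, len(acc) - step + 1, step):
                w = acc[start:start + step]
                mag = np.linalg.norm(w, axis=1)
                feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                             [mag.mean(), mag.std()]]))
            return np.array(feats)

        # clf = DecisionTreeClassifier().fit(window_features(train_acc), train_window_labels)
        # second_by_second = clf.predict(window_features(test_acc))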

  12. Finger Vein Recognition Based on a Personalized Best Bit Map

    PubMed Central

    Yang, Gongping; Xi, Xiaoming; Yin, Yilong

    2012-01-01

    Finger vein patterns have recently been recognized as an effective biometric identifier. In this paper, we propose a finger vein recognition method based on a personalized best bit map (PBBM). Our method is rooted in a local binary pattern based method and then inclined to use the best bits only for matching. We first present the concept of PBBM and the generating algorithm. Then we propose the finger vein recognition framework, which consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PBBM achieves not only better performance, but also high robustness and reliability. In addition, PBBM can be used as a general framework for binary pattern based recognition. PMID:22438735

  13. Combining Users' Activity Survey and Simulators to Evaluate Human Activity Recognition Systems

    PubMed Central

    Azkune, Gorka; Almeida, Aitor; López-de-Ipiña, Diego; Chen, Liming

    2015-01-01

    Evaluating human activity recognition systems usually implies following expensive and time-consuming methodologies, where experiments with humans are run with the consequent ethical and legal issues. We propose a novel evaluation methodology to overcome the enumerated problems, which is based on surveys for users and a synthetic dataset generator tool. Surveys allow capturing how different users perform activities of daily living, while the synthetic dataset generator is used to create properly labelled activity datasets modelled with the information extracted from surveys. Important aspects, such as sensor noise, varying time lapses and user erratic behaviour, can also be simulated using the tool. The proposed methodology is shown to have very important advantages that allow researchers to carry out their work more efficiently. To evaluate the approach, a synthetic dataset generated following the proposed methodology is compared to a real dataset computing the similarity between sensor occurrence frequencies. It is concluded that the similarity between both datasets is more than significant. PMID:25856329

  14. Identification of base and backbone contacts used for DNA sequence recognition and high-affinity binding by LAC9, a transcription activator containing a C6 zinc finger

    SciTech Connect

    Halvorsen, Yuan-Di C.; Nandabalan, K.; Dickson, R.C.

    1991-04-01

    The LAC9 protein of Kluyveromyces lactis is a transcriptional regulator of genes in the lactose-galactose regulon. To regulate transcription, LAC9 must bind to 17-bp upstream activator sequences (UASs) located in front of each target gene. LAC9 is homologous to the GAL4 protein of Saccharomyces cerevisiae, and the two proteins must bind DNA in a very similar manner. In this paper the authors show that high-affinity, sequence-specific binding by LAC9 dimers is mediated primarily by 3 bp at each end of the UAS. In addition, at least one half of the UAS must have a GC or CG base pair at position 1 for high-affinity binding; LAC9 binds preferentially to the half containing the GC base pair. Hydroxyl radical footprinting shows that a LAC9 dimer binds an unusually broad region on one face of the DNA helix. Based on these data, they suggest that LAC9 contacts positions 6, 7, and 8, both plus and minus, of the UAS, which are separated by more than one turn of the DNA helix, and twists part way around the DNA, thus protecting the broad region of the minor groove between the major-groove contacts.

  15. Human suspicious activity recognition in thermal infrared video

    NASA Astrophysics Data System (ADS)

    Hossen, Jakir; Jacobs, Eddie; Chowdhury, Fahmida K.

    2014-10-01

    Detecting suspicious behaviors is important for surveillance and monitoring systems. In this paper, we investigate suspicious activity detection in thermal infrared imagery, where human motion can be easily detected from the background regardless of the lighting conditions and the colors of the human clothing and surfaces. We use locally adaptive regression kernels (LARK) as patch descriptors, which capture the underlying local structure of the data exceedingly well, even in the presence of significant distortions. Patch descriptors are generated for each query patch and for each database patch. A statistical approach is used to match the query activity with the database to make the decision about suspicious activity. Human activity videos under different conditions, such as walking, running, carrying a gun, crawling, and carrying a backpack over different terrains, were acquired using a thermal infrared camera. These videos are used for training and performance evaluation of the algorithm. Experimental results show that the proposed approach achieves good performance in suspicious activity recognition.

  16. Event Recognition Based on Deep Learning in Chinese Texts.

    PubMed

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%. PMID:27501231

  17. Event Recognition Based on Deep Learning in Chinese Texts

    PubMed Central

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%. PMID:27501231

  18. Event Recognition Based on Deep Learning in Chinese Texts.

    PubMed

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.

  19. Design of speaker recognition system based on artificial neural network

    NASA Astrophysics Data System (ADS)

    Chen, Yanhong; Wang, Li; Lin, Han; Li, Jinlong

    2012-10-01

    Speaker recognition is the task of recognizing a speaker's identity from his or her voice, which contains physiological and behavioral characteristics unique to each individual. In this paper, an artificial neural network model, which has a very good capacity for non-linear division of the characteristic space, is used for pattern matching. The speaker's sample characteristic domain is built from the mixed voice characteristic signals based on the K-means/LBG algorithm. Then the dimension of the input eigenvector is reduced, and redundant information is removed. On this basis, a BP neural network is used to divide the characteristic space nonlinearly, and the BP neural network acts as a classifier for the speaker. Finally, a speaker recognition system based on the neural network is realized and the experimental results validate the recognition performance and robustness of the system.

  20. Finger vein recognition based on local directional code.

    PubMed

    Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2012-01-01

    Finger vein patterns are considered one of the most promising biometric authentication methods for their security and convenience. Most of the currently available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods have been proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Weber Local Descriptor (WLD), this paper presents a new direction based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194
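
    A minimal sketch of a direction-coded local descriptor in the spirit of LDC: the local gradient orientation at every pixel is quantized into one of eight codes (an octonary digit) and the codes are histogrammed into a feature vector. This is a simplified reading of the descriptor, not the paper's exact coding scheme:

        import numpy as np
        from scipy import ndimage

        def directional_codes(gray, bins=8):
            """Quantize the local gradient orientation at every pixel into `bins` codes."""
            g = gray.astype(float)
            gx = ndimage.sobel(g, axis=1)
            gy = ndimage.sobel(g, axis=0)
            theta = np.arctan2(gy, gx)                             # orientation in (-pi, pi]
            return np.floor((theta + np.pi) / (2 * np.pi) * bins).astype(int) % bins

        def ldc_histogram(gray, bins=8):
            codes = directional_codes(gray, bins)
            hist, _ = np.histogram(codes, bins=np.arange(bins + 1), density=True)
            return hist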

  1. Multi-view indoor human behavior recognition based on 3D skeleton

    NASA Astrophysics Data System (ADS)

    Peng, Ling; Lu, Tongwei; Min, Feng

    2015-12-01

    To address the problems caused by viewpoint changes in activity recognition, a multi-view indoor human behavior recognition method based on a 3D skeleton framework is presented. First, Microsoft's Kinect device is used to obtain body motion video from the frontal perspective, an oblique angle, and the side perspective. Second, the method extracts skeleton joints and obtains global human features and local features of the arms and legs at the same time to form a 3D skeletal feature set. Third, online dictionary learning on the feature set is used to reduce the feature dimension. Finally, a linear support vector machine (LSVM) is used to obtain the behavior recognition results. The experimental results show that this method achieves a better recognition rate.

  2. Recognition of Activities of Daily Living with Egocentric Vision: A Review

    PubMed Central

    Nguyen, Thi-Hoa-Cuc; Nebel, Jean-Christophe; Florez-Revuelta, Francisco

    2016-01-01

    Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory. PMID:26751452

  3. A Human Activity Recognition System Using Skeleton Data from RGBD Sensors

    PubMed Central

    Gasparrini, Samuele

    2016-01-01

    The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed. PMID:27069469
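
    A minimal sketch of one plausible reading of the key-pose pipeline above: skeleton frames from the training sequences are clustered into key poses, each sequence is summarized by its key-pose assignments, and a multiclass SVM performs classification. The cluster count, kernel, and histogram summary are illustrative choices rather than the authors' exact design:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def key_pose_histogram(skeleton_frames, kmeans):
            # Assign every skeleton frame to its nearest key pose and histogram the assignments.
            assignments = kmeans.predict(skeleton_frames)
            k = kmeans.n_clusters
            hist, _ = np.histogram(assignments, bins=np.arange(k + 1), density=True)
            return hist

        # kmeans = KMeans(n_clusters=20, n_init=10).fit(np.vstack(train_sequences))
        # X_train = np.array([key_pose_histogram(seq, kmeans) for seq in train_sequences])
        # clf = SVC(kernel='rbf').fit(X_train, train_labels)
        # prediction = clf.predict([key_pose_histogram(test_sequence, kmeans)])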

  4. A Human Activity Recognition System Using Skeleton Data from RGBD Sensors.

    PubMed

    Cippitelli, Enea; Gasparrini, Samuele; Gambi, Ennio; Spinsante, Susanna

    2016-01-01

    The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.

  5. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.
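
    A minimal sketch of CWT-based detection of oscillatory EEG patterns, assuming PyWavelets; the band limits, wavelet, and threshold are illustrative, and the adaptive selection of CWT parameters described above would sit on top of this basic machinery:

        import numpy as np
        import pywt

        def cwt_band_energy(eeg, fs=250.0, band=(11.0, 16.0), wavelet='morl'):
            """Wavelet energy of a 1-D EEG trace inside a frequency band
            (e.g., a sleep-spindle band), as a function of time."""
            freqs = np.linspace(band[0], band[1], 20)
            scales = pywt.central_frequency(wavelet) * fs / freqs   # frequency -> scale
            coefs, _ = pywt.cwt(eeg, scales, wavelet, sampling_period=1.0 / fs)
            return (np.abs(coefs) ** 2).sum(axis=0)

        def detect_events(energy, threshold):
            # Candidate sleep-spindle / SWD samples: band energy above a threshold.
            return energy > threshold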

  6. A Star Pattern Recognition Method Based on Decreasing Redundancy Matching

    NASA Astrophysics Data System (ADS)

    Yao, Lu; Xiao-xiang, Zhang; Rong-yu, Sun

    2016-04-01

    During the optical observation of space objects, it is difficult to match the background stars when the telescope pointing error and tracking error are significant. Based on the idea of decreasing redundancy matching, an effective recognition method for background stars is proposed in this paper. Simulated images under different conditions and observed images are used to verify the proposed method. The experimental results show that the proposed method raises the recognition rate and reduces the time consumption; it can be used to match star patterns accurately and rapidly.

  7. Research on face recognition based on singular value decomposition

    NASA Astrophysics Data System (ADS)

    Liang, Yixiong; Gong, Weiguo; Pan, Yingjun; Liu, Jiamin; Li, Weihong; Zhang, Hongmei

    2004-08-01

    Singular value (SV) feature vectors of face images have recently been used as features for face recognition. Although SVs have important properties of algebraic and geometric invariance and insensitivity to noise, they are the representation of a face image in its own eigen-space spanned by the two orthogonal matrices of the singular value decomposition (SVD) and clearly contain little useful information for face recognition. This study concentrates on extracting more informative features from a frontal and upright view image based on SVD and proposes an improved method for face recognition. After being standardized by intensity normalization, all training and testing face images are projected onto a uniform eigen-space that is obtained from the SVD of a standard face image. To achieve more computational efficiency, the dimension of the uniform eigen-space is reduced by discarding the eigenvectors whose corresponding eigenvalues are close to zero. A Euclidean distance classifier is adopted for recognition. Two standard databases, from Yale University and the Olivetti Research Laboratory, are selected to evaluate the recognition accuracy of the proposed method. These databases include face images with different expressions, small occlusions, different illumination conditions and different poses. Experimental results on the two face databases show the effectiveness of the method and its insensitivity to facial expression, illumination and posture.
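
    A minimal sketch of the shared-eigen-space idea above: the SVD of a single "standard" face image defines a uniform eigen-space, near-zero singular directions are discarded, every normalized face is projected into that space, and a Euclidean nearest-neighbour rule classifies the probe. Dimensions and thresholds are illustrative:

        import numpy as np

        def build_eigenspace(standard_face, keep=50):
            # Left/right singular vectors of the standard face span the uniform eigen-space.
            U, s, Vt = np.linalg.svd(standard_face, full_matrices=False)
            k = min(keep, int((s > 1e-8 * s[0]).sum()))   # drop near-zero singular directions
            return U[:, :k], Vt[:k, :]

        def project(face, U, Vt):
            # Express the face image in the shared eigen-space and flatten to a feature vector.
            return (U.T @ face @ Vt.T).ravel()

        def nearest_neighbour(probe_feature, gallery_features, gallery_labels):
            d = np.linalg.norm(gallery_features - probe_feature, axis=1)
            return gallery_labels[int(np.argmin(d))]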

  8. Recognition technology research based on 3D fingerprint

    NASA Astrophysics Data System (ADS)

    Tian, Qianxiao; Huang, Shujun; Zhang, Zonghua

    2014-11-01

    Fingerprints have been widely studied and applied to personal recognition in both forensic and civilian applications. However, the currently widespread fingerprint systems identify 2D (two-dimensional) fingerprint images, and the mapping from 3D (three-dimensional) to 2D loses one dimension of information, which leads to low accuracy and even wrong recognition. This paper presents a 3D fingerprint recognition method based on the fringe projection technique. A series of fringe patterns generated by software are projected onto a finger surface through a projection system. From another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. The deformed fringe pattern images give the 3D shape data of the finger and the 3D fingerprint features. By converting the 3D fingerprints to 2D space, traditional 2D fingerprint recognition methods can be applied to 3D fingerprint recognition. Experimental results on measuring and recognizing some 3D fingerprints show the accuracy and feasibility of the developed 3D fingerprint system.

  9. Retrieval Failure Contributes to Gist-Based False Recognition

    ERIC Educational Resources Information Center

    Guerin, Scott A.; Robbins, Clifford A.; Gilmore, Adrian W.; Schacter, Daniel L.

    2012-01-01

    People often falsely recognize items that are similar to previously encountered items. This robust memory error is referred to as "gist-based false recognition". A widely held view is that this error occurs because the details fade rapidly from our memory. Contrary to this view, an initial experiment revealed that, following the same encoding…

  10. Micro-Based Speech Recognition: Instructional Innovation for Handicapped Learners.

    ERIC Educational Resources Information Center

    Horn, Carin E.; Scott, Brian L.

    A new voice based learning system (VBLS), which allows the handicapped user to interact with a microcomputer by voice commands, is described. Speech or voice recognition is the computerized process of identifying a spoken word or phrase, including those resulting from speech impediments. This new technology is helpful to the severely physically…

  11. Offline grammar-based recognition of handwritten sentences.

    PubMed

    Zimmermann, Matthias; Chappelier, Jean-Cédric; Bunke, Horst

    2006-05-01

    This paper proposes a sequential coupling of a Hidden Markov Model (HMM) recognizer for offline handwritten English sentences with a probabilistic bottom-up chart parser using Stochastic Context-Free Grammars (SCFG) extracted from a text corpus. Based on extensive experiments, we conclude that syntax analysis helps to improve recognition rates significantly.

  12. EEG-based emotion recognition in music listening.

    PubMed

    Lin, Yuan-Pin; Wang, Chi-Hong; Jung, Tzyy-Ping; Wu, Tien-Lin; Jeng, Shyh-Kang; Duann, Jeng-Ren; Chen, Jyh-Horng

    2010-07-01

    Ongoing brain activity can be recorded as an electroencephalogram (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subject self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. A support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an average classification accuracy of 82.29% +/- 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of emotional states in practical or clinical applications.
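
    A hedged sketch of the kind of band-power-plus-SVM pipeline the abstract describes follows; the band definitions, sampling rate and SVM settings are illustrative assumptions, not the paper's actual feature set:

      import numpy as np
      from scipy.signal import welch
      from sklearn.svm import SVC

      # Frequency bands are a common choice, not necessarily the paper's.
      BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

      def band_power_features(eeg, fs=256):
          # eeg: (n_channels, n_samples) for one music-listening trial
          freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)
          feats = []
          for lo, hi in BANDS.values():
              idx = (freqs >= lo) & (freqs < hi)
              feats.append(np.log(psd[:, idx].mean(axis=-1)))  # log band power per channel
          return np.concatenate(feats)

      def train_classifier(trials, labels, fs=256):
          # trials: list of EEG arrays, labels: emotions (joy/anger/sadness/pleasure)
          F = np.vstack([band_power_features(x, fs) for x in trials])
          return SVC(kernel="rbf", C=1.0).fit(F, labels)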

  13. Quaternion-Based Discriminant Analysis Method for Color Face Recognition

    PubMed Central

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict the function of a protein or the category of a cancer, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications, such as color face recognition and classification of ground targets shown in multispectral remote sensing images. The method first uses quaternion numbers to denote the pixels of a color image and exploits a quaternion vector to represent the color image. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method can obtain a very high accuracy for color face recognition. PMID:22937054

  14. Hand vein recognition based on orientation of LBP

    NASA Astrophysics Data System (ADS)

    Bu, Wei; Wu, Xiangqian; Gao, Enying

    2012-06-01

    Vein recognition is becoming an effective method for personal recognition. Vein patterns lie under the skin surface of the human body, and hence provide higher reliability than many other biometric traits and are hard to damage or fake. This paper proposes a novel vein feature representation method called orientation of local binary pattern (OLBP), which is an extension of the local binary pattern (LBP). OLBP can represent the orientation information of a vein pixel, which is an important characteristic of vein patterns. Moreover, OLBP can also indicate on which side of the vein centerline a pixel is located. The OLBP feature maps are encoded by 4-bit binary values, and an orientation distance is developed for efficient feature matching. Based on the OLBP feature representation, we construct a hand vein recognition system employing multiple hand vein patterns, including palm vein, dorsal vein, and three finger veins (index, middle, and ring finger). The experimental results on a large database demonstrate the effectiveness of the proposed approach.

  15. Quaternion-based discriminant analysis method for color face recognition.

    PubMed

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict the function of a protein or the category of a cancer, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications, such as color face recognition and classification of ground targets shown in multispectral remote sensing images. The method first uses quaternion numbers to denote the pixels of a color image and exploits a quaternion vector to represent the color image. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method can obtain a very high accuracy for color face recognition. PMID:22937054

  16. Visual-size molecular recognition based on gels.

    PubMed

    Tu, Tao; Fang, Weiwei; Sun, Zheming

    2013-10-01

    Since their discovery, stimuli-responsive organogels have garnered considerable and increasing attention from a broad range of research fields. Owing to the one-dimensional ordered relay in the anisotropic phase, the assembled gel networks can amplify various properties of the functional moieties possessed by the gelator molecules. Recently, substantial efforts have been focused on the development of facile, straightforward, and low-cost molecular recognition approaches that use nanostructured gel matrices as visual sensing platforms. In this research news, recent progress in macroscopic or visual-size molecular recognition of a number of homologues, isomers, and anions, as well as extremely challenging chiral enantiomers, using polymer and molecular gels is reviewed. Several strategies for visual discrimination--including guest molecular competition, hydrogen-bonding blocking, and metal coordination--are included. Finally, future trends and potential applications of facile visual-size molecular recognition based on organogel matrices are highlighted. PMID:24089348

  17. Clonal Selection Based Artificial Immune System for Generalized Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry

    2011-01-01

    The last two decades have seen a rapid increase in the application of AIS (Artificial Immune Systems), modeled after the human immune system, to a wide range of areas including network intrusion detection, job shop scheduling, classification, pattern recognition, and robot control. JPL (Jet Propulsion Laboratory) has developed an integrated pattern recognition/classification system called AISLE (Artificial Immune System for Learning and Exploration) based on biologically inspired models of B-cell dynamics in the immune system. When used for unsupervised or supervised classification, the method scales linearly with the number of dimensions, has performance that is relatively independent of the total size of the dataset, and has been shown to perform as well as traditional clustering methods. When used for pattern recognition, the method efficiently isolates the appropriate matches in the data set. The paper presents the underlying structure of AISLE and the results from a number of experimental studies.

  18. Improvements on EMG-based handwriting recognition with DTW algorithm.

    PubMed

    Li, Chengzhang; Ma, Zheren; Yao, Lin; Zhang, Dingguo

    2013-01-01

    Previous works have shown that the Dynamic Time Warping (DTW) algorithm is a proper method of feature extraction for electromyography (EMG)-based handwriting recognition. In this paper, several modifications are proposed to improve the classification process and enhance recognition accuracy. A two-phase template making approach has been introduced to generate templates with more salient features, and a modified Mahalanobis distance (mMD) approach is used in place of Euclidean distance (ED) in order to minimize the interclass variance. To validate the effectiveness of these modifications, experiments were conducted in which four subjects wrote lowercase letters at a normal speed and four-channel EMG signals from the forearms were recorded. Results of offline analysis show that the improvements increased the average recognition accuracy by 9.20%.
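
    The following is a minimal sketch of classic dynamic time warping between two EMG feature sequences; the paper's two-phase templates and modified Mahalanobis distance are not reproduced, and plain Euclidean distance is used as the per-frame cost:

      import numpy as np

      def dtw_distance(a, b):
          # a: (n, d) and b: (m, d) sequences of EMG feature frames
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def classify(sample, templates):
          # templates: dict mapping letter -> representative feature sequence
          return min(templates, key=lambda k: dtw_distance(sample, templates[k]))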

  19. Stimulus-based similarity and the recognition of spoken words

    NASA Astrophysics Data System (ADS)

    Auer, Edward T.

    2003-10-01

    Spoken word recognition has been hypothesized to be achieved via a competitive process amongst perceptually similar lexical candidates in the mental lexicon. In this process, lexical candidates are activated as a function of their perceived similarity to the spoken stimulus. The evidence supporting this hypothesis has largely come from studies of auditory word recognition. In this talk, evidence from our studies of visual spoken word recognition will be reviewed. Visual speech provides the opportunity to highlight the importance of stimulus-driven perceptual similarity because it presents a different pattern of segmental similarity than is afforded by auditory speech degraded by noise. Our results are consistent with stimulus-driven activation followed by competition as a general spoken word recognition mechanism. In addition, results will be presented from recent investigations of the direct prediction of perceptual similarity from measurements of spoken stimuli. High levels of correlation have been observed between the predicted and perceptually obtained distances for a large set of spoken consonants. These results support the hypothesis that the perceptual structure of English consonants and vowels is predicted by stimulus structure without the need for an intervening level of abstract linguistic representation. [Research supported by NSF IIS 9996088 and NIH DC04856.]

  20. Inertial Sensor-Based Gait Recognition: A Review.

    PubMed

    Sprager, Sebastijan; Juric, Matjaz B

    2015-01-01

    With the recent development of microelectromechanical systems (MEMS), inertial sensors have become widely used in the research of wearable gait analysis due to several factors, such as being easy to use and low-cost. Considering the fact that each individual has a unique way of walking, inertial sensors can be applied to the problem of gait recognition, where the assessed gait can be interpreted as a biometric trait. Thus, inertial sensor-based gait recognition has great potential to play an important role in many security-related applications. Since inertial sensors are included in smart devices that are nowadays present at every step, inertial sensor-based gait recognition has become a very attractive and emerging field of research that has provided many interesting discoveries recently. This paper provides a thorough and systematic review of the current state of the art in this field of research. The review procedure revealed that the latest advanced inertial sensor-based gait recognition approaches are able to sufficiently recognise users when relying on inertial data obtained during gait by a single commercially available smart device in controlled circumstances, including fixed placement and small variations in gait. Furthermore, these approaches have also achieved a considerable breakthrough in realistic use under uncontrolled circumstances, showing great potential for their further development and wide applicability. PMID:26340634

  1. Inertial Sensor-Based Gait Recognition: A Review

    PubMed Central

    Sprager, Sebastijan; Juric, Matjaz B.

    2015-01-01

    With the recent development of microelectromechanical systems (MEMS), inertial sensors have become widely used in the research of wearable gait analysis due to several factors, such as being easy to use and low-cost. Considering the fact that each individual has a unique way of walking, inertial sensors can be applied to the problem of gait recognition, where the assessed gait can be interpreted as a biometric trait. Thus, inertial sensor-based gait recognition has great potential to play an important role in many security-related applications. Since inertial sensors are included in smart devices that are nowadays present at every step, inertial sensor-based gait recognition has become a very attractive and emerging field of research that has provided many interesting discoveries recently. This paper provides a thorough and systematic review of the current state of the art in this field of research. The review procedure revealed that the latest advanced inertial sensor-based gait recognition approaches are able to sufficiently recognise users when relying on inertial data obtained during gait by a single commercially available smart device in controlled circumstances, including fixed placement and small variations in gait. Furthermore, these approaches have also achieved a considerable breakthrough in realistic use under uncontrolled circumstances, showing great potential for their further development and wide applicability. PMID:26340634

  2. Supervised Filter Learning for Representation Based Face Recognition

    PubMed Central

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation-based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been successfully developed for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation-based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation-based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030

  3. Supervised Filter Learning for Representation Based Face Recognition.

    PubMed

    Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue

    2016-01-01

    Representation-based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been successfully developed for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation-based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation-based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm. PMID:27416030

  4. Monitoring Activity for Recognition of Illness in Experimentally Infected Weaned Piglets Using Received Signal Strength Indication ZigBee-based Wireless Acceleration Sensor

    PubMed Central

    Ahmed, Sonia Tabasum; Mun, Hong-Seok; Islam, Md. Manirul; Yoe, Hyun; Yang, Chul-Ju

    2016-01-01

    In this experiment, we proposed and implemented a disease forecasting system using a received signal strength indication ZigBee-based wireless network with a 3-axis acceleration sensor to detect illness at an early stage by monitoring movement of experimentally infected weaned piglets. Twenty seven piglets were divided into control, Salmonella enteritidis (SE) infection, and Escherichia coli (EC) infection group, and their movements were monitored for five days using wireless sensor nodes on their backs. Data generated showed the 3-axis movement of piglets (X-axis: left and right direction, Y-axis: anteroposterior direction, and Z-axis: up and down direction) at five different time periods. Piglets in both infected groups had lower weight gain and feed intake, as well as higher feed conversion ratios than the control group (p<0.05). Infection with SE and EC resulted in reduced body temperature of the piglets at day 2, 4, and 5 (p<0.05). The early morning X-axis movement did not differ between groups; however, the Y-axis movement was higher in the EC group (day 1 and 2), and the Z-axis movement was higher in the EC (day 1) and SE group (day 4) during different experimental periods (p<0.05). The morning X and Y-axis movement did not differ between treatment groups. However, the Z-axis movement was higher in both infected groups at day 1 and lower at day 4 compared to the control (p<0.05). The midday X-axis movement was significantly lower in both infected groups (day 4 and 5) compared to the control (p<0.05), whereas the Y-axis movement did not differ. The Z-axis movement was highest in the SE group at day 1 and 2 and lower at day 4 and 5 (p<0.05). Evening X-axis movement was highest in the control group throughout the experimental period. During day 1 and 2, the Z-axis movement was higher in both of the infected groups; whereas it was lower in the SE group during day 3 and 4 (p<0.05). During day 1 and 2, the night X-axis movement was lower and the Z-axis movement

  5. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, electronic device unlocking and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that the subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).
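
    A small sketch of the size-reduction idea follows: keeping only the approximation coefficients of a K-level 2D wavelet decomposition shrinks each face image, and hence the database, by roughly a factor of 2^(2K). The wavelet choice and the matching rule below are assumptions for illustration:

      import numpy as np
      import pywt

      def wavelet_signature(face, K=3, wavelet="haar"):
          # K-level 2D wavelet decomposition; keep only the coarsest approximation.
          coeffs = pywt.wavedec2(face, wavelet=wavelet, level=K)
          approx = coeffs[0]
          return approx.ravel() / np.linalg.norm(approx)

      def recognise(test_face, gallery, K=3):
          # gallery: dict mapping identity -> stored signature
          q = wavelet_signature(test_face, K)
          return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - q))

      face = np.random.rand(128, 128)
      print(wavelet_signature(face).shape)   # (256,) = 16*16 = original size / 2**(2*3)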

  6. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    PubMed

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation.
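
    A compact PyTorch sketch of a convolutional-plus-LSTM activity recogniser in the spirit of the described framework is shown below; the layer sizes, kernel widths and window length are illustrative, not the authors' exact architecture:

      import torch
      import torch.nn as nn

      class ConvLSTMHAR(nn.Module):
          def __init__(self, n_channels, n_classes, hidden=128):
              super().__init__()
              # Temporal convolutions over the raw multimodal sensor window.
              self.conv = nn.Sequential(
                  nn.Conv1d(n_channels, 64, kernel_size=5), nn.ReLU(),
                  nn.Conv1d(64, 64, kernel_size=5), nn.ReLU(),
              )
              self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
              self.head = nn.Linear(hidden, n_classes)

          def forward(self, x):                     # x: (batch, channels, time)
              h = self.conv(x).permute(0, 2, 1)     # -> (batch, time, features)
              out, _ = self.lstm(h)
              return self.head(out[:, -1])          # classify from the last time step

      model = ConvLSTMHAR(n_channels=9, n_classes=5)
      logits = model(torch.randn(8, 9, 128))        # a batch of 8 windows of 128 samples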

  7. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    PubMed

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation. PMID:26797612

  8. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    PubMed Central

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise the influence of key architectural hyperparameters on performance to provide insights about their optimisation. PMID:26797612

  9. Feature-based syntactic and metric shape recognition

    NASA Astrophysics Data System (ADS)

    Prasad, Lakshman; Skourikhine, Alexei N.; Schlei, Bernd R.

    2000-10-01

    We present a syntactic and metric two-dimensional shape recognition scheme based on shape features. The principal features of a shape can be extracted and semantically labeled by means of the chordal axis transform (CAT), with the resulting generic features, namely torsos and limbs, forming the primitive segmented features of the shape. We introduce a context-free universal language for representing all connected planar shapes in terms of their external features, based on a finite alphabet of generic shape feature primitives. Shape exteriors are then syntactically represented as strings in this language. Although this representation of shapes is not complete, in that it only describes their external features, it effectively captures shape embeddings, which are important properties of shapes for purposes of recognition. The elements of the syntactic strings are associated with attribute feature vectors that capture the metrical attributes of the corresponding features. We outline a hierarchical shape recognition scheme, wherein the syntactical representation of shapes may be 'telescoped' to yield a coarser or finer description for hierarchical comparison and matching. We finally extend the syntactic representation and recognition to completely represent all planar shapes, albeit without a generative context-free grammar for this extension.

  10. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  11. A bacterial tyrosine phosphatase inhibits plant pattern recognition receptor activation.

    PubMed

    Macho, Alberto P; Schwessinger, Benjamin; Ntoukakis, Vardis; Brutus, Alexandre; Segonzac, Cécile; Roy, Sonali; Kadota, Yasuhiro; Oh, Man-Ho; Sklenar, Jan; Derbyshire, Paul; Lozano-Durán, Rosa; Malinovsky, Frederikke Gro; Monaghan, Jacqueline; Menke, Frank L; Huber, Steven C; He, Sheng Yang; Zipfel, Cyril

    2014-03-28

    Innate immunity relies on the perception of pathogen-associated molecular patterns (PAMPs) by pattern-recognition receptors (PRRs) located on the host cell's surface. Many plant PRRs are kinases. Here, we report that the Arabidopsis receptor kinase EF-TU RECEPTOR (EFR), which perceives the elf18 peptide derived from bacterial elongation factor Tu, is activated upon ligand binding by phosphorylation on its tyrosine residues. Phosphorylation of a single tyrosine residue, Y836, is required for activation of EFR and downstream immunity to the phytopathogenic bacterium Pseudomonas syringae. A tyrosine phosphatase, HopAO1, secreted by P. syringae, reduces EFR phosphorylation and prevents subsequent immune responses. Thus, host and pathogen compete to take control of PRR tyrosine phosphorylation used to initiate antibacterial immunity.

  12. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.
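
    The divide-and-conquer idea can be sketched as follows: decompose the acoustic signal onto its wavelet detail and approximation subspaces and summarise each subspace with a few statistics before any further analysis. The wavelet family, depth and statistics here are assumptions:

      import numpy as np
      import pywt

      def wavelet_subspace_features(signal, wavelet="db4", level=5):
          # coeffs = [approximation, detail_level, ..., detail_1]
          coeffs = pywt.wavedec(signal, wavelet=wavelet, level=level)
          feats = []
          for c in coeffs:
              energy = np.sum(c ** 2)              # energy content of the subspace
              feats.extend([energy, np.mean(c), np.std(c)])
          f = np.array(feats)
          return f / (np.linalg.norm(f) + 1e-9)    # normalised feature vector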

  13. Compound character recognition by run-number-based metric distance

    NASA Astrophysics Data System (ADS)

    Garain, Uptal; Chaudhuri, B. B.

    1998-04-01

    This paper concerns automatic OCR of Bangla, a major Indian language script and the fourth most popular script in the world. A Bangla OCR system has to recognize about 300 graphemic shapes, among which 250 compound characters have quite complex stroke patterns. For the recognition of such compound characters, feature-based approaches are less reliable and template-based approaches are less flexible to size and style variation of character fonts. We combine the positive aspects of feature-based and template-based approaches. Here we propose a run-number-based normalized template matching technique for compound character recognition. Run-number vectors for both horizontal and vertical scanning are computed. As the number of scans may vary from pattern to pattern, we normalize and abbreviate the vector. We prove that this normalized and abbreviated vector induces a metric distance. Moreover, this vector is invariant to scaling, insensitive to character style variation and more effective for complex-shaped characters than simple-shaped ones. We use this vector representation for matching within a group of compound characters. We notice that the matching is more efficient if the vector is reorganized with respect to the centroid of the pattern. We have tested our approach on a large set of segmented compound characters at different point sizes as well as different styles. Italic characters are subject to preprocessing. The overall correct recognition rate is 99.69 percent.
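
    A minimal sketch of a run-number representation is given below, assuming a binarised character image; counting runs per scan line, resampling to a fixed length and normalising are shown in their simplest form, which may differ from the paper's exact normalization and abbreviation scheme:

      import numpy as np

      def run_numbers(line):
          # number of 0->1 transitions = number of foreground runs on one scan line
          return int(np.sum(np.diff(np.concatenate(([0], line))) == 1))

      def run_number_vector(char_img, length=32):
          rows = [run_numbers(r) for r in char_img]        # horizontal scans
          cols = [run_numbers(c) for c in char_img.T]      # vertical scans
          v = np.array(rows + cols, dtype=float)
          idx = np.linspace(0, len(v) - 1, length).round().astype(int)
          v = v[idx]                                       # abbreviate to a fixed length
          return v / (np.linalg.norm(v) + 1e-9)            # normalise (scale invariance)

      def match(query, templates):
          # templates: dict label -> run-number vector; Euclidean metric distance
          return min(templates, key=lambda k: np.linalg.norm(templates[k] - query))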

  14. Wavelet-based ground vehicle recognition using acoustic signals

    NASA Astrophysics Data System (ADS)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. In this paper, we will

  15. A wavelet-based method for multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian

    2012-06-01

    A wavelet-based method is proposed for multispectral face recognition in this paper. The Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas the Hamming distance is an efficient distance measurement for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected, and then encoded in binary format to favor the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) by using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method originated with visible images. When two or more spectral images from the same subject are available, the identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of applying five recognition algorithms to the three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, the HMM yields the best fusion result, and the thermal dataset results in the best fusion performance compared to the other two datasets.
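
    The coding-and-matching step can be sketched as follows, assuming the per-orientation Gabor responses have already been computed (the filter bank and the order-statistics packing into the face pattern byte are omitted); the 4-bit encoding and normalised Hamming distance are illustrative:

      import numpy as np

      def orientation_code(responses):
          # responses: (n_orientations, H, W) Gabor magnitudes for one frequency band
          idx = np.argmax(responses, axis=0).astype(np.uint8)     # strongest orientation per pixel
          bits = np.unpackbits(idx[..., None], axis=-1)[..., 4:]  # keep the low 4 bits
          return bits.reshape(-1)

      def hamming_distance(code_a, code_b):
          # normalised Hamming distance in [0, 1]; smaller means more similar
          return np.mean(code_a != code_b)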

  16. Pattern recognition for electroencephalographic signals based on continuous neural networks.

    PubMed

    Alfaro-Ponce, M; Argüelles, A; Chairez, I

    2016-07-01

    This study reports the design and implementation of a pattern recognition algorithm to classify electroencephalographic (EEG) signals based on artificial neural networks (NN) described by ordinary differential equations (ODEs). The training method for this kind of continuous NN (CNN) was developed according to Lyapunov stability theory. A parallel structure with fixed weights was proposed to perform the classification stage. The pattern recognition efficiency was validated by two methods, generalization-regularization and k-fold cross validation (k=5). The classifier was applied to two different databases. The first one was made up of signals collected from patients suffering from epilepsy and is divided into five different classes. The second database was made up of 90 single EEG trials, divided into three classes. Each class corresponds to a different visual evoked potential. The pattern recognition algorithm achieved a maximum correct classification percentage of 97.2% using the information of the entire database. This value is similar to results previously reported when this database was used for testing pattern classification. However, those results were obtained when only two classes were considered for testing. The result reported in this study used the whole set of signals (five different classes). In comparison with similar pattern recognition methods that considered even fewer classes, the proposed CNN proved to achieve the same or even better correct classification results.

  17. Pattern recognition for electroencephalographic signals based on continuous neural networks.

    PubMed

    Alfaro-Ponce, M; Argüelles, A; Chairez, I

    2016-07-01

    This study reports the design and implementation of a pattern recognition algorithm to classify electroencephalographic (EEG) signals based on artificial neural networks (NN) described by ordinary differential equations (ODEs). The training method for this kind of continuous NN (CNN) was developed according to Lyapunov stability theory. A parallel structure with fixed weights was proposed to perform the classification stage. The pattern recognition efficiency was validated by two methods, generalization-regularization and k-fold cross validation (k=5). The classifier was applied to two different databases. The first one was made up of signals collected from patients suffering from epilepsy and is divided into five different classes. The second database was made up of 90 single EEG trials, divided into three classes. Each class corresponds to a different visual evoked potential. The pattern recognition algorithm achieved a maximum correct classification percentage of 97.2% using the information of the entire database. This value is similar to results previously reported when this database was used for testing pattern classification. However, those results were obtained when only two classes were considered for testing. The result reported in this study used the whole set of signals (five different classes). In comparison with similar pattern recognition methods that considered even fewer classes, the proposed CNN proved to achieve the same or even better correct classification results. PMID:27131469

  18. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    PubMed

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-01-01

    This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things)-based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, their performance was limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of the smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT-based smart home. PMID:26007738

  19. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    PubMed

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-01-01

    This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things)-based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been suggested, their performance was limited because they focused on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify such varied and complex user activities, we use a relevant and efficient unsupervised learning method called the K-pattern clustering algorithm. In the second step, the training of the smart environment for recognizing and predicting user activities inside his/her personal space is done by utilizing an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities, as compared with other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT-based smart home.

  20. New Robust Face Recognition Methods Based on Linear Regression

    PubMed Central

    Mi, Jian-Xun; Liu, Jin-Xing; Wen, Jiajun

    2012-01-01

    Nearest subspace (NS) classification based on the linear regression technique is a very straightforward and efficient method for face recognition. A recently developed NS method, namely linear regression-based classification (LRC), uses downsampled face images as features to perform face recognition. The basic assumption behind this kind of method is that samples from a certain class lie on their own class-specific subspace. Since there are only a few training samples for each individual class, the small sample size (SSS) problem arises and gives rise to misclassification in previous NS methods. In this paper, we propose two novel LRC methods using the idea that every class-specific subspace has its unique basis vectors. Thus, we consider that each class-specific subspace is spanned by two kinds of basis vectors: the common basis vectors shared by many classes and the class-specific basis vectors owned by one class only. Based on this concept, two classification methods, namely robust LRC 1 and 2 (RLRC 1 and 2), are given to achieve more robust face recognition. Unlike some previous methods which need to extract class-specific basis vectors, the proposed methods are developed merely based on the existence of the class-specific basis vectors, without actually calculating them. Experiments on three well known face databases demonstrate very good performance of the new methods compared with other state-of-the-art methods. PMID:22879992
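
    For reference, a minimal sketch of the baseline LRC decision rule the paper builds on is shown below: a probe is assigned to the class whose downsampled training images reconstruct it with the smallest least-squares residual. The proposed RLRC 1 and 2 variants are not reproduced here:

      import numpy as np

      def lrc_predict(y, class_matrices):
          # y: probe feature vector; class_matrices: dict label -> (d, n_i) matrix
          # whose columns are that class's downsampled training images.
          best, best_res = None, np.inf
          for label, X in class_matrices.items():
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares projection
              residual = np.linalg.norm(y - X @ beta)        # reconstruction residual
              if residual < best_res:
                  best, best_res = label, residual
          return best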

  1. Biometric verification based on grip-pattern recognition

    NASA Astrophysics Data System (ADS)

    Veldhuis, Raymond N.; Bazen, Asker M.; Kauffman, Joost A.; Hartel, Pieter

    2004-06-01

    This paper describes the design, implementation and evaluation of a user-verification system for a smart gun, which is based on grip-pattern recognition. An existing pressure sensor consisting of an array of 44 × 44 piezoresistive elements is used to measure the grip pattern. An interface has been developed to acquire pressure images from the sensor. The values of the pixels in the pressure-pattern images are used as inputs for a verification algorithm, which is currently implemented in software on a PC. The verification algorithm is based on a likelihood-ratio classifier for Gaussian probability densities. First results indicate that it is feasible to use grip-pattern recognition for biometric verification.
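
    A hedged sketch of a likelihood-ratio verifier for Gaussian densities follows, assuming a dimensionality-reduced grip-pattern feature vector, a background ("world") model estimated from other users, and an acceptance threshold; all of these are illustrative assumptions rather than the paper's settings:

      import numpy as np
      from scipy.stats import multivariate_normal

      def fit_gaussian(samples, reg=1e-3):
          # samples: (n, d) grip-pattern feature vectors for one model
          mu = samples.mean(axis=0)
          cov = np.cov(samples, rowvar=False) + reg * np.eye(samples.shape[1])
          return mu, cov

      def log_likelihood_ratio(x, user_model, world_model):
          mu_u, cov_u = user_model
          mu_w, cov_w = world_model
          return (multivariate_normal.logpdf(x, mu_u, cov_u)
                  - multivariate_normal.logpdf(x, mu_w, cov_w))

      def verify(x, user_model, world_model, threshold=0.0):
          # Accept the claimed identity when the user model explains the grip
          # pattern sufficiently better than the background model.
          return log_likelihood_ratio(x, user_model, world_model) > threshold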

  2. Production ready feature recognition based automatic group technology part coding

    SciTech Connect

    Ames, A.L.

    1990-01-01

    During the past four years, a feature recognition based expert system for automatically performing group technology part coding from solid model data has been under development. The system has become a production quality tool, capable of quickly generating the geometry-based portions of a part code with no human intervention. It has been tested on over 200 solid models, half of which are models of production Sandia designs. Its performance rivals that of humans performing the same task, often surpassing them in speed and uniformity. The feature recognition capability developed for part coding is being extended to support other applications, such as manufacturability analysis, automatic decomposition (for finite element meshing and machining), and assembly planning. Initial surveys of these applications indicate that the current capability will provide a strong basis for other applications and that extensions toward more global geometric reasoning and tighter coupling with solid modeler functionality will be necessary.

  3. Novel, ERP-based, concealed information detection: Combining recognition-based and feedback-evoked ERPs.

    PubMed

    Sai, Liyang; Lin, Xiaohong; Rosenfeld, J Peter; Sang, Biao; Hu, Xiaoqing; Fu, Genyue

    2016-02-01

    The present study introduced a novel variant of the concealed information test (CIT), called the feedback-CIT. By providing participants with feedback regarding their memory concealment performance during the CIT, we investigated the feedback-related neural activity underlying memory concealment. Participants acquired crime-relevant memories by enacting a lab crime, and were tested with the feedback-CIT while EEGs were recorded. We found that probes (e.g., crime-relevant memories) elicited larger recognition-P300s than irrelevants among guilty participants. Moreover, feedback-related negativity (FRN) and feedback-P300 could also discriminate probes from irrelevants among guilty participants. Both recognition- and feedback-ERPs were highly effective in distinguishing between guilty and innocent participants (recognition-P300: AUC=.73; FRN: AUC=.95; feedback-P300: AUC=.97). This study sheds new light on brain-based memory detection, showing that feedback-related neural signals can be employed to detect concealed memories.

  4. Speckle-learning-based object recognition through scattering media.

    PubMed

    Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun

    2015-12-28

    We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning. PMID:26832049

  5. A novel morphometry-based protocol of automated video-image analysis for species recognition and activity rhythms monitoring in deep-sea fauna.

    PubMed

    Aguzzi, Jacopo; Costa, Corrado; Fujiwara, Yoshihiro; Iwase, Ryoichi; Ramirez-Llorda, Eva; Menesatti, Paolo

    2009-01-01

    The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis in which animal movement tracking (by frame subtraction) is accompanied by species identification from the animals' outlines using Fourier descriptors and standard K-nearest neighbours methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analysed by a trained operator to increase the efficiency of the automated procedure. Error estimation of the automated and trained-operator procedures was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. The results are discussed considering that this technological bottleneck is, to date, deeply constraining the exploration of the deep sea.
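
    A minimal sketch of the outline-based identification step follows: translation-, scale- and rotation-invariant Fourier descriptors are computed from a traced contour and classified with k-nearest neighbours. The number of descriptors and k are illustrative assumptions:

      import numpy as np

      def fourier_descriptors(contour, n_desc=20):
          # contour: (N, 2) array of (x, y) boundary points, traced in order
          z = contour[:, 0] + 1j * contour[:, 1]
          F = np.fft.fft(z)
          F[0] = 0                                  # drop DC term -> translation invariance
          mags = np.abs(F)[1:n_desc + 1]            # magnitudes -> rotation/start-point invariance
          return mags / (mags[0] + 1e-12)           # normalise -> scale invariance

      def knn_predict(query, train_feats, train_labels, k=3):
          # train_feats: (n, n_desc) descriptors; train_labels: np.array of species names
          d = np.linalg.norm(train_feats - query, axis=1)
          votes = train_labels[np.argsort(d)[:k]]
          values, counts = np.unique(votes, return_counts=True)
          return values[np.argmax(counts)]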

  6. A Novel Morphometry-Based Protocol of Automated Video-Image Analysis for Species Recognition and Activity Rhythms Monitoring in Deep-Sea Fauna

    PubMed Central

    Aguzzi, Jacopo; Costa, Corrado; Fujiwara, Yoshihiro; Iwase, Ryoichi; Ramirez-Llorda, Eva; Menesatti, Paolo

    2009-01-01

    The understanding of ecosystem dynamics in deep-sea areas is to date limited by technical constraints on sampling repetition. We have elaborated a morphometry-based protocol for automated video-image analysis in which animal movement tracking (by frame subtraction) is accompanied by species identification from the animals' outlines using Fourier descriptors and standard K-nearest neighbours methods. One week of footage from a permanent video station located at 1,100 m depth in Sagami Bay (Central Japan) was analysed. Out of 150,000 frames (1 per 4 s), a subset of 10,000 was analysed by a trained operator to increase the efficiency of the automated procedure. Error estimation of the automated and trained-operator procedures was computed as a measure of protocol performance. Three displacing species were identified as the most recurrent: Zoarcid fishes (eelpouts), red crabs (Paralomis multispina), and snails (Buccinum soyomaruae). Species identification with KNN thresholding produced better results in automated motion detection. The results are discussed considering that this technological bottleneck is, to date, deeply constraining the exploration of the deep sea. PMID:22291517

  7. Part-based set matching for face recognition in surveillance

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Wang, Guijin; Lin, Xinggang

    2013-12-01

    Face recognition in surveillance is a hot topic in computer vision due to the strong demand for public security and remains a challenging task owing to large variations in camera viewpoint and illumination. In surveillance, image sets are the most natural form of input when tracking is incorporated. Recent advances in set-based matching also show its great potential for exploring the feature space for face recognition by making use of multiple samples of subjects. In this paper, we propose a novel method that exploits the salient features (such as eyes, nose, mouth) in set-based matching. To represent image sets, we adopt the affine hull model, which can generate unseen appearances in the form of affine combinations of sample images. In our proposal, a robust part detector is first used to find four salient parts for each face image: two eyes, nose, and mouth. For each part, we construct an affine hull model using the local binary pattern histograms of multiple samples of the part. We also construct an affine hull model for the whole face region. Then, we find the closest distance between the corresponding affine hull models to measure the similarity between parts/face regions, and a weighting scheme is introduced to combine the five distances (four parts and the whole face region) to obtain the final distance between two subjects. In the recognition phase, a nearest neighbor classifier is used. Experiments on the public ChokePoint dataset and our dataset demonstrate the superior performance of our method.
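
    The closest distance between two affine hulls, used above as the between-set similarity, can be sketched as a small least-squares problem; the rank threshold below is an illustrative assumption:

      import numpy as np

      def affine_hull(samples):
          # samples: (n, d) feature vectors (e.g. LBP histograms of one part for one subject)
          mu = samples.mean(axis=0)
          U, s, _ = np.linalg.svd((samples - mu).T, full_matrices=False)
          directions = U[:, s > 1e-8 * s.max()]       # basis of the hull's direction space
          return mu, directions

      def hull_distance(hull_a, hull_b):
          # minimise || (mu_a + U_a x) - (mu_b + U_b y) || over x, y
          mu_a, U_a = hull_a
          mu_b, U_b = hull_b
          A = np.hstack([U_a, -U_b])
          coef, *_ = np.linalg.lstsq(A, mu_b - mu_a, rcond=None)
          return np.linalg.norm(A @ coef - (mu_b - mu_a))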

  8. Emotion recognition based on the sample entropy of EEG.

    PubMed

    Jie, Xiang; Cao, Rui; Li, Li

    2014-01-01

    A sample entropy (SampEn)-based emotion recognition approach is presented. The SampEn results of notable EEG channels, screened by a K-S test, were fed to a support vector machine (SVM)-weight classifier for training, after which it was applied to two emotion recognition tasks. One is to distinguish positive from negative emotion under high arousal, and the other to distinguish negative emotion under different arousal levels. Results showed that the channels related to emotion were mostly located in the prefrontal region, i.e., F3, CP5, FP2, FZ, and FC2, and they were used to form the input vectors of the SVM-weight classifier. The accuracies of the present algorithm for the two tasks were 80.43% and 79.11%, respectively, as indicated by the leave-one-person-out validation procedure, demonstrating that the present algorithm has a reasonable generalization capability. PMID:24212012
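
    A minimal sketch of sample entropy for one EEG channel is given below; the template length m and tolerance r (as a fraction of the signal's standard deviation) follow common defaults, which may differ from the paper's settings:

      import numpy as np

      def sample_entropy(x, m=2, r_factor=0.2):
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()

          def count_matches(length):
              # pairs of templates of the given length whose Chebyshev distance
              # is within r (self-matches excluded); both lengths use N - m templates
              templates = np.array([x[i:i + length] for i in range(len(x) - m)])
              count = 0
              for i in range(len(templates)):
                  d = np.max(np.abs(templates - templates[i]), axis=1)
                  count += np.sum(d <= r) - 1
              return count

          B = count_matches(m)
          A = count_matches(m + 1)
          return -np.log(A / B) if A > 0 and B > 0 else np.inf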

  9. RNA structural motif recognition based on least-squares distance.

    PubMed

    Shen, Ying; Wong, Hau-San; Zhang, Shaohong; Zhang, Lin

    2013-09-01

    RNA structural motifs are recurrent structural elements occurring in RNA molecules. RNA structural motif recognition aims to find RNA substructures that are similar to a query motif, and it is important for RNA structure analysis and RNA function prediction. In view of this, we propose a new method known as RNA Structural Motif Recognition based on Least-Squares distance (LS-RSMR) to effectively recognize RNA structural motifs. A test set consisting of five types of RNA structural motifs occurring in Escherichia coli ribosomal RNA is compiled by us. Experiments are conducted for recognizing these five types of motifs. The experimental results fully reveal the superiority of the proposed LS-RSMR compared with four other state-of-the-art methods.

  10. Optical fingerprint recognition based on local minutiae structure coding.

    PubMed

    Yi, Yao; Cao, Liangcai; Guo, Wei; Luo, Yaping; Feng, Jianjiang; He, Qingsheng; Jin, Guofan

    2013-07-15

    A parallel volume holographic optical fingerprint recognition system robust to fingerprint translation, rotation and nonlinear distortion is proposed. The optical fingerprint recognition measures the similarity by using the optical filters of multiplexed holograms recorded in the holographic media. A fingerprint is encoded into multiple template data pages based on the local minutiae structure coding method after it is adapted for the optical data channel. An improved filter recording time schedule and a post-filtering calibration technology are combined to suppress the calculating error from the large variations in data page filling ratio. Experimental results tested on FVC2002 DB1 and a forensic database comprising 270,216 fingerprints demonstrate the robustness and feasibility of the system.

  11. Recognition of Human Activities Using Continuous Autoencoders with Wearable Sensors.

    PubMed

    Wang, Lukun

    2016-02-04

    This paper provides an approach for recognizing human activities with wearable sensors. The continuous autoencoder (CAE), a novel stochastic neural network model, is proposed to improve the ability to model continuous data. The CAE adds Gaussian random units into an improved sigmoid activation function to extract the features of nonlinear data. In order to shorten the training time, we propose a new fast stochastic gradient descent (FSGD) algorithm to update the gradients of the CAE. The reconstruction of a swiss-roll dataset experiment demonstrates that the CAE can fit continuous data better than the basic autoencoder, and the training time can be reduced by the FSGD algorithm. In the human activity recognition experiment, a time and frequency domain feature extraction (TFFE) method is proposed to extract features from the original sensor data. Then, the principal component analysis (PCA) method is applied for feature reduction. It can be noticed that the dimension of each data segment is reduced from 5625 to 42. The feature vectors extracted from the original signals are used as the input of a deep belief network (DBN), which is composed of multiple CAEs. The training results show that a correct differentiation rate of 99.3% has been achieved. Some contrast experiments, such as different sensor combinations, sensor units at different positions, and training with different numbers of epochs, were designed to validate our approach.
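
    A hedged sketch of the stochastic unit idea is shown below: Gaussian noise is injected into the sigmoid activation of an autoencoder's hidden layer so that it can better model continuous-valued sensor data. The sizes, noise scale and tied-weight decoder are illustrative assumptions, and the FSGD training rule is not reproduced:

      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      class ContinuousAutoencoder:
          def __init__(self, n_in, n_hidden, sigma=0.1):
              self.W = rng.normal(0, 0.01, size=(n_in, n_hidden))
              self.b = np.zeros(n_hidden)
              self.c = np.zeros(n_in)
              self.sigma = sigma                        # scale of the Gaussian random units

          def encode(self, x):
              noise = rng.normal(0, self.sigma, size=self.b.shape)
              return sigmoid(x @ self.W + self.b + noise)   # stochastic hidden activation

          def reconstruct(self, x):
              return self.encode(x) @ self.W.T + self.c     # tied-weight linear decoder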

  12. A Vocal-Based Analytical Method for Goose Behaviour Recognition

    PubMed Central

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves a good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and a reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect behaviour of conflict wildlife species, and as such, may be used as an integrated part of a wildlife management system. PMID:22737037
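
    A hedged sketch of the SVM classification stage only, assuming the GFCC feature vectors have already been extracted; the feature matrix, labels and RBF kernel settings below are synthetic placeholders, not the paper's configuration.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # Hypothetical precomputed GFCC vectors (one row per vocalisation segment)
        # with labels 0 = foraging, 1 = flushing, 2 = landing.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((300, 13))
        y = rng.integers(0, 3, size=300)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        print(cross_val_score(clf, X, y, cv=5).mean())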

  13. A vocal-based analytical method for goose behaviour recognition.

    PubMed

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves a good recognition of foraging behaviour (86-97% sensitivity, 89-98% precision) and a reasonable recognition of flushing (79-86%, 66-80%) and landing behaviour (73-91%, 79-92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect behaviour of conflict wildlife species, and as such, may be used as an integrated part of a wildlife management system.

  14. Three dimensional pattern recognition using feature-based indexing and rule-based search

    NASA Astrophysics Data System (ADS)

    Lee, Jae-Kyu

    In flexible automated manufacturing, robots can perform routine operations as well as recover from atypical events, provided that process-relevant information is available to the robot controller. Real time vision is among the most versatile sensing tools, yet the reliability of machine-based scene interpretation can be questionable. The effort described here is focused on the development of machine-based vision methods to support autonomous nuclear fuel manufacturing operations in hot cells. This thesis presents a method to efficiently recognize 3D objects from 2D images based on feature-based indexing. Object recognition is the identification of correspondences between parts of a current scene and stored views of known objects, using chains of segments or indexing vectors. To create indexed object models, characteristic model image features are extracted during preprocessing. Feature vectors representing model object contours are acquired from several points of view around each object and stored. Recognition is the process of matching stored views with features or patterns detected in a test scene. Two sets of algorithms were developed, one for preprocessing and indexed database creation, and one for pattern searching and matching during recognition. At recognition time, those indexing vectors with the highest match probability are retrieved from the model image database, using a nearest neighbor search algorithm. The nearest neighbor search predicts the best possible match candidates. Extended searches are guided by a search strategy that employs knowledge-base (KB) selection criteria. The knowledge-based system simplifies the recognition process and minimizes the number of iterations and memory usage. Novel contributions include the use of a feature-based indexing data structure together with a knowledge base. Both components improve the efficiency of the recognition process by improved structuring of the database of object features and reducing data base size

  15. New algorithm for iris recognition based on video sequences

    NASA Astrophysics Data System (ADS)

    Bourennane, Salah; Fossati, Caroline; Ketchantang, William

    2010-07-01

    Among existing biometrics, iris recognition systems are among the most accurate for personal identification. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve on existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the same computational load to identify the iris, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilatations than previous methods. Then, we develop a new iris localization algorithm that is robust to variations of quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.), and finally, we introduce a fast new criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.

  16. Image quality-based adaptive illumination normalisation for face recognition

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Automatic face recognition is a challenging task due to intra-class variations. Changes in lighting conditions during the enrolment and identification stages contribute significantly to these intra-class variations. A common approach to address the effects of such varying conditions is to pre-process the biometric samples in order to normalise intra-class variations. Histogram equalisation is a widely used illumination normalisation technique in face recognition. However, a recent study has shown that applying histogram equalisation to well-lit face images could lead to a decrease in recognition accuracy. This paper presents a dynamic approach to illumination normalisation, based on face image quality. The quality of a given face image is measured in terms of its luminance distortion by comparing the image against a known reference face image. Histogram equalisation is applied to a probe image only if its luminance distortion is higher than a predefined threshold. We tested the proposed adaptive illumination normalisation method on the widely used Extended Yale Face Database B. Identification results demonstrate that our adaptive normalisation produces better identification accuracy than the conventional approach, where every image is normalised irrespective of the lighting conditions under which it was acquired.
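
    A minimal sketch of the adaptive idea described above, assuming 8-bit grayscale images; the luminance term of the universal image quality index is used here as a stand-in for the paper's luminance-distortion measure, and the threshold is an arbitrary placeholder.

        import numpy as np

        def luminance_similarity(img, ref):
            """Luminance term of the universal image quality index (1 = identical means)."""
            mu_x, mu_y = img.mean(), ref.mean()
            return (2 * mu_x * mu_y) / (mu_x ** 2 + mu_y ** 2 + 1e-12)

        def equalize_hist(img):
            """Plain histogram equalisation for an 8-bit grayscale image."""
            hist = np.bincount(img.ravel(), minlength=256)
            cdf = hist.cumsum() / hist.sum()
            eq = np.interp(img.ravel(), np.arange(256), 255 * cdf)
            return eq.reshape(img.shape).astype(np.uint8)

        def adaptive_normalise(probe, reference, threshold=0.98):
            """Equalise the probe only when it deviates from the reference luminance."""
            if luminance_similarity(probe, reference) < threshold:
                return equalize_hist(probe)
            return probe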

  17. Improving representation-based classification for robust face recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhi; Zhang, Zheng; Li, Zhengming; Chen, Yan; Shi, Jian

    2014-06-01

    The sparse representation classification (SRC) method proposed by Wright et al. is considered a breakthrough in face recognition because of its good performance. Nevertheless, it still cannot perfectly address the face recognition problem. The main reason is that variations in pose, facial expression, and illumination can be rather severe, and the number of available facial images is smaller than the dimensionality of the facial image, so a linear combination of all the training samples is not able to fully represent the test sample. In this study, we proposed a novel framework to improve representation-based classification (RBC). The framework first runs the sparse representation algorithm and determines the unavoidable deviation between the test sample and the optimal linear combination of all the training samples used to represent it. It then exploits the deviation and all the training samples to resolve the linear combination coefficients. Finally, the classification rule, the training samples, and the renewed linear combination coefficients are used to classify the test sample. Generally, the proposed framework can work with most RBC methods. From the viewpoint of regression analysis, the proposed framework has solid theoretical soundness. Because it can, to an extent, identify the bias effect of the RBC method, it enables RBC to obtain more robust face recognition results. Experimental results on a variety of face databases demonstrate that the proposed framework can improve collaborative representation classification, SRC, and the nearest neighbor classifier.
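
    For orientation, the sketch below shows a generic representation-based classifier (ridge-regularised collaborative representation with class-wise residuals); it is not the paper's deviation framework, only the kind of baseline it builds on, and the regularisation weight is arbitrary.

        import numpy as np

        def rbc_classify(test_sample, train_samples, train_labels, lam=0.01):
            """Represent the test sample as a linear combination of all training
            samples and pick the class with the smallest reconstruction residual."""
            X = train_samples.T                              # columns = training samples
            A = X.T @ X + lam * np.eye(X.shape[1])
            coeffs = np.linalg.solve(A, X.T @ test_sample)   # regularised least squares

            residuals = {}
            for c in np.unique(train_labels):
                mask = (train_labels == c)
                recon = X[:, mask] @ coeffs[mask]            # class-c contribution only
                residuals[c] = np.linalg.norm(test_sample - recon)
            return min(residuals, key=residuals.get)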

  18. Hessian-Regularized Co-Training for Social Activity Recognition

    PubMed Central

    Liu, Weifeng; Li, Yang; Lin, Xu; Tao, Dacheng; Wang, Yanjiang

    2014-01-01

    Co-training is a major multi-view learning paradigm that alternately trains two classifiers on two distinct views and maximizes the mutual agreement on the two-view unlabeled data. Traditional co-training algorithms usually train a learner on each view separately and then force the learners to be consistent across views. Although many co-training algorithms have been developed, it is quite possible that a learner will receive erroneous labels for unlabeled data when the other learner has only mediocre accuracy. This usually happens in the first rounds of co-training, when there are only a few labeled examples. As a result, co-training algorithms often have unstable performance. In this paper, Hessian-regularized co-training is proposed to overcome these limitations. Specifically, each Hessian is obtained from a particular view of examples; Hessian regularization is then integrated into the learner training process of each view by penalizing the regression function along the potential manifold. Hessian can properly exploit the local structure of the underlying data manifold. Hessian regularization significantly boosts the generalizability of a classifier, especially when there are a small number of labeled examples and a large number of unlabeled examples. To evaluate the proposed method, extensive experiments were conducted on the unstructured social activity attribute (USAA) dataset for social activity recognition. Our results demonstrate that the proposed method outperforms baseline methods, including the traditional co-training and LapCo algorithms. PMID:25259945

  19. Recognition of Human Activities Using Continuous Autoencoders with Wearable Sensors

    PubMed Central

    Wang, Lukun

    2016-01-01

    This paper provides an approach for recognizing human activities with wearable sensors. The continuous autoencoder (CAE) is proposed as a novel stochastic neural network model which improves the ability to model continuous data. The CAE adds Gaussian random units into an improved sigmoid activation function to extract the features of nonlinear data. In order to shorten the training time, we propose a new fast stochastic gradient descent (FSGD) algorithm to update the gradients of the CAE. The reconstruction of a swiss-roll dataset experiment demonstrates that the CAE can fit continuous data better than the basic autoencoder, and the training time can be reduced by the FSGD algorithm. In the human activity recognition experiment, a time and frequency domain feature extraction (TFFE) method is proposed to extract features from the original sensors' data. Then, the principal component analysis (PCA) method is applied for feature reduction, reducing the dimension of each data segment from 5625 to 42. The feature vectors extracted from the original signals are used as the input of a deep belief network (DBN), which is composed of multiple CAEs. The training results show that a correct differentiation rate of 99.3% has been achieved. Contrast experiments, such as different sensor combinations, sensor units at different positions, and training with different numbers of epochs, are designed to validate our approach. PMID:26861319

  20. Recognition-Based Pedagogy: Teacher Candidates' Experience of Deficit

    ERIC Educational Resources Information Center

    Parkison, Paul T.; DaoJensen, Thuy

    2014-01-01

    This study seeks to introduce what we call "recognition-based pedagogy" as a conceptual frame through which teachers and instructors can collaboratively develop educative experiences with students. Recognition-based pedagogy connects the theories of critical pedagogy, identity politics, and the politics of recognition with the educative…

  1. Emotion recognition based on physiological changes in music listening.

    PubMed

    Kim, Jonghwa; André, Elisabeth

    2008-12-01

    Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to a feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme.

  2. Emotion recognition based on physiological changes in music listening.

    PubMed

    Kim, Jonghwa; André, Elisabeth

    2008-12-01

    Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to a feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95% and 70% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme. PMID:18988943

  3. A national knowledge-based crop recognition in Mediterranean environment

    NASA Astrophysics Data System (ADS)

    Cohen, Yafit; Shoshany, Maxim

    2002-08-01

    Population growth, urban expansion, land degradation, civil strife and war may place plant natural resources for food and agriculture at risk. Crop and yield monitoring provides basic information necessary for wise management of these resources. Satellite remote sensing techniques have proven to be cost-effective in widespread agricultural lands in Africa, America, Europe and Australia. However, they have had limited success in Mediterranean regions that are characterized by a high rate of spatio-temporal ecological heterogeneity and high fragmentation of farming lands. An integrative knowledge-based approach is needed for this purpose, which combines imagery and geographical data within the framework of an intelligent recognition system. This paper describes the development of such a crop recognition methodology and its application to an area that comprises approximately 40% of the cropland in Israel. This area contains eight crop types that represent 70% of Israeli agricultural production. Multi-date Landsat TM images representing seasonal vegetation cover variations were converted to normalized difference vegetation index (NDVI) layers. Field boundaries were delineated by merging Landsat data with SPOT-panchromatic images. Crop recognition was then achieved in two phases, by clustering multi-temporal NDVI layers using unsupervised classification, and then applying 'split-and-merge' rules to these clusters. These rules were formalized through comprehensive learning of relationships between crop types, imagery properties (spectral and NDVI) and auxiliary data including agricultural knowledge, precipitation and soil types. Assessment of the recognition results using ground data from the Israeli Agriculture Ministry indicated an average recognition accuracy exceeding 85%, which accounts for both omission and commission errors. The two-phase strategy implemented in this study is apparently successful for heterogeneous regions. This is due to the fact that it allows
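
    A small sketch of the unsupervised first phase described above: per-pixel NDVI time profiles from multi-date imagery are clustered before any split-and-merge rules are applied. The band arrays, image size and number of clusters are hypothetical, and KMeans stands in for whichever unsupervised classifier was actually used.

        import numpy as np
        from sklearn.cluster import KMeans

        def ndvi(red, nir):
            """Normalised difference vegetation index from red and near-infrared bands."""
            return (nir - red) / (nir + red + 1e-12)

        # Hypothetical multi-date NDVI stack: 4 acquisition dates, 200 x 200 pixels.
        rng = np.random.default_rng(0)
        ndvi_stack = rng.uniform(-0.1, 0.9, size=(4, 200, 200))

        # One row per pixel, one column per date; cluster the temporal profiles.
        profiles = ndvi_stack.reshape(4, -1).T
        labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(profiles)
        cluster_map = labels.reshape(200, 200)   # input to the split-and-merge rule base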

  4. ssDNA-Functionalized Nanoceria: A Redox-Active Aptaswitch for Biomolecular Recognition.

    PubMed

    Bülbül, Gonca; Hayat, Akhtar; Andreescu, Silvana

    2016-04-01

    Quantification of biomolecular binding events is a critical step for the development of biorecognition assays for diagnostics and therapeutic applications. This paper reports the design of redox-active switches based on aptamer-conjugated nanoceria for detection and quantification of biomolecular recognition. It is shown that the conformational transition state of the aptamer on nanoceria, combined with the redox properties of these particles, can be used to create surface-based structure-switchable aptasensing platforms. Changes in the redox properties at the nanoceria surface upon binding of the ssDNA and its target analyte enable rapid and highly sensitive measurement of biomolecular interactions. This concept is demonstrated as a generally applicable method for the colorimetric detection of DNA binding events. An example of a nanoceria aptaswitch for the colorimetric sensing of Ochratoxin A (OTA) and applicability to other targets is provided. The system can sensitively and selectively detect OTA at concentrations as low as 0.15 × 10(-9) m. This novel assay is simple in design and does not involve oligonucleotide labeling or elaborate nanoparticle modification steps. The mechanism discovered here opens up a new way of designing optical sensing methods based on aptamer recognition. This approach can be broadly applicable to many biomolecular recognition processes and related applications. PMID:26844813

  5. Activity and function recognition for moving and static objects in urban environments from wide-area persistent surveillance inputs

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Bobick, Aaron; Jones, Eric

    2010-04-01

    In this paper, we describe results from experimental analysis of a model designed to recognize activities and functions of moving and static objects from low-resolution wide-area video inputs. Our model is based on representing the activities and functions using three variables: (i) time; (ii) space; and (iii) structures. The activity and function recognition is achieved by imposing lexical, syntactic, and semantic constraints on the lower-level event sequences. In the reported research, we have evaluated the utility and sensitivity of several algorithms derived from natural language processing and pattern recognition domains. We achieved high recognition accuracy for a wide range of activity and function types in the experiments using Electro-Optical (EO) imagery collected by Wide Area Airborne Surveillance (WAAS) platform.

  6. M pathway and areas 44 and 45 are involved in stereoscopic recognition based on binocular disparity.

    PubMed

    Negawa, Tsuneo; Mizuno, Shinji; Hahashi, Tomoya; Kuwata, Hiromi; Tomida, Mihoko; Hoshi, Hiroaki; Era, Seiichi; Kuwata, Kazuo

    2002-04-01

    We characterized the visual pathways involved in the stereoscopic recognition of the random dot stereogram based on binocular disparity using functional magnetic resonance imaging (fMRI). The V2, V3, V4, V5, intraparietal sulcus (IPS) and the superior temporal sulcus (STS) were significantly activated during binocular stereopsis, but the inferotemporal gyrus (ITG) was not activated. Thus a human M pathway may be part of a network involved in the stereoscopic processing based on binocular disparity. It is intriguing that areas 44 (Broca's area) and 45 in the left hemisphere were also active during binocular stereopsis. However, it was reported that these regions were inactive during monocular stereopsis. To separate the specific responses directly caused by the stereoscopic recognition process from the nonspecific ones caused by memory load or intention, we designed a novel frequency-labeled tasks (FLT) sequence. The functional MRI using the FLT indicated that the activation of areas 44 and 45 is correlated with stereoscopic recognition based on binocular disparity but not with intention artifacts, suggesting that areas 44 and 45 play an essential role in processing binocular disparity. PMID:12139777

  7. M pathway and areas 44 and 45 are involved in stereoscopic recognition based on binocular disparity.

    PubMed

    Negawa, Tsuneo; Mizuno, Shinji; Hahashi, Tomoya; Kuwata, Hiromi; Tomida, Mihoko; Hoshi, Hiroaki; Era, Seiichi; Kuwata, Kazuo

    2002-04-01

    We characterized the visual pathways involved in the stereoscopic recognition of the random dot stereogram based on binocular disparity using functional magnetic resonance imaging (fMRI). The V2, V3, V4, V5, intraparietal sulcus (IPS) and the superior temporal sulcus (STS) were significantly activated during binocular stereopsis, but the inferotemporal gyrus (ITG) was not activated. Thus a human M pathway may be part of a network involved in the stereoscopic processing based on binocular disparity. It is intriguing that areas 44 (Broca's area) and 45 in the left hemisphere were also active during binocular stereopsis. However, it was reported that these regions were inactive during monocular stereopsis. To separate the specific responses directly caused by the stereoscopic recognition process from the nonspecific ones caused by memory load or intention, we designed a novel frequency-labeled tasks (FLT) sequence. The functional MRI using the FLT indicated that the activation of areas 44 and 45 is correlated with stereoscopic recognition based on binocular disparity but not with intention artifacts, suggesting that areas 44 and 45 play an essential role in processing binocular disparity.

  8. Gait recognition using spatio-temporal silhouette-based features

    NASA Astrophysics Data System (ADS)

    Sabir, Azhin; Al-jawad, Naseer; Jassim, Sabah

    2013-05-01

    This paper presents a new algorithm for human gait recognition based on spatio-temporal body biometric features using wavelet transforms. The proposed algorithm extracts the gait cycle from a sequence of silhouette images, depending on the width of the bounding box. Gait recognition is based on feature-level fusion of three feature vectors: the gait spatio-temporal features represented by the distances between feet, knees, hands and shoulders, and the height; the binary difference between consecutive silhouette frames for each leg, detected separately using the Hamming distance; and a vector of statistical parameters captured from the wavelet low-frequency domain. The fused feature vector is subjected to dimension reduction using linear discriminant analysis, and a nearest-neighbour classifier with a threshold is used for classification. The threshold is obtained experimentally from a set of data captured from the CASIA database. We demonstrate that our method provides non-traditional identification, using the threshold to classify outsiders as non-classified members.

  9. Learning and plan refinement in a knowledge-based system for automatic speech recognition

    SciTech Connect

    De Mori, R.; Lam, L.; Gilloux, M.

    1987-03-01

    This paper shows how a semiautomatic design of a speech recognition system can be done as a planning activity. Recognition performances are used for deciding plan refinement. Inductive learning is performed for setting action preconditions. Experimental results in the recognition of connected letters spoken by 100 speakers are presented.

  10. Cough Recognition Based on Mel Frequency Cepstral Coefficients and Dynamic Time Warping

    NASA Astrophysics Data System (ADS)

    Zhu, Chunmei; Liu, Baojun; Li, Ping

    Cough recognition provides important clinical information for the treatment of many respiratory diseases, but the assessment of cough frequency over a long period of time remains unsatisfactory for both clinical and research purposes. In this paper, given the advantages of dynamic time warping (DTW) and the characteristics of cough recognition, an attempt is made to adopt DTW as the recognition algorithm for cough recognition. The process of cough recognition based on mel frequency cepstral coefficients (MFCC) and DTW is introduced. Experimental results on testing samples from 3 subjects show that acceptable cough recognition performance is obtained by DTW with a small training set.
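
    A minimal sketch of MFCC-sequence matching with dynamic time warping, assuming the MFCC frames are already computed; the nearest-template classifier below is a generic illustration, with names and shapes chosen for clarity rather than taken from the paper.

        import numpy as np

        def dtw_distance(a, b):
            """DTW distance between two MFCC sequences of shape [n_frames, n_coeffs]."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])      # local frame distance
                    cost[i, j] = d + min(cost[i - 1, j],         # insertion
                                         cost[i, j - 1],         # deletion
                                         cost[i - 1, j - 1])     # match
            return cost[n, m]

        def classify(sample_mfcc, templates):
            """Label a segment by its nearest template (e.g. 'cough' vs 'non-cough')."""
            return min(templates, key=lambda lbl: dtw_distance(sample_mfcc, templates[lbl]))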

  11. Infrared target recognition based on improved joint local ternary pattern

    NASA Astrophysics Data System (ADS)

    Sun, Junding; Wu, Xiaosheng

    2016-05-01

    This paper presents a simple, efficient, yet robust approach, named joint orthogonal combination of local ternary pattern, for automatic forward-looking infrared target recognition. Compared with traditional LBP-based methods, it better describes both macroscopic and microscopic textures by fusing a variety of scales, and it can effectively reduce the feature dimensionality. Further, the rotation-invariant and uniform scheme, the robust LTP, and soft concave-convex partition are introduced to enhance its discriminative power. Experimental results demonstrate that the proposed method achieves competitive results compared with state-of-the-art methods.
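
    A rough sketch of a basic 3x3 local ternary pattern, split into its upper and lower binary codes; the joint orthogonal combination, multi-scale fusion and soft concave-convex partition from the paper are not reproduced here, and the threshold t is illustrative.

        import numpy as np

        def ltp_codes(img, t=5):
            """Return the upper/lower LTP code maps of a grayscale image."""
            img = img.astype(np.int32)
            center = img[1:-1, 1:-1]
            # Eight neighbours, ordered clockwise from the top-left pixel.
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            upper = np.zeros_like(center)
            lower = np.zeros_like(center)
            for k, (dy, dx) in enumerate(offsets):
                neigh = img[1 + dy: img.shape[0] - 1 + dy,
                            1 + dx: img.shape[1] - 1 + dx]
                upper |= (neigh >= center + t).astype(np.int32) << k
                lower |= (neigh <= center - t).astype(np.int32) << k
            return upper, lower   # histograms of these codes form the texture descriptor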

  12. Track-based event recognition in a realistic crowded environment

    NASA Astrophysics Data System (ADS)

    van Huis, Jasper R.; Bouma, Henri; Baan, Jan; Burghouts, Gertjan J.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Dijk, Judith; van Rest, Jeroen H.

    2014-10-01

    Automatic detection of abnormal behavior in CCTV cameras is important to improve the security in crowded environments, such as shopping malls, airports and railway stations. This behavior can be characterized at different time scales, e.g., by small-scale subtle and obvious actions or by large-scale walking patterns and interactions between people. For example, pickpocketing can be recognized by the actual snatch (small scale), when the pickpocket follows the victim, or when he interacts with an accomplice before and after the incident (longer time scale). This paper focusses on event recognition by detecting large-scale track-based patterns. Our event recognition method consists of several steps: pedestrian detection, object tracking, track-based feature computation and rule-based event classification. In the experiment, we focused on single-track actions (walk, run, loiter, stop, turn) and track interactions (pass, meet, merge, split). The experiment includes a controlled setup, where 10 actors perform these actions. The method is also applied to all tracks that are generated in a crowded shopping mall in a selected time frame. The results show that most of the actions can be detected reliably (on average 90%) at a low false positive rate (1.1%), and that the interactions obtain lower detection rates (70% at 0.3% FP). This method may become one of the components that assists operators to find threatening behavior and enrich the selection of videos that are to be observed.
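
    A toy sketch of the rule-based single-track classification step, assuming a track is a sequence of per-frame ground-plane positions in metres; all thresholds and the frame rate are illustrative placeholders rather than the paper's tuned values.

        import numpy as np

        def classify_track(track_xy, fps=12.5):
            """Label one pedestrian track as stop / run / turn / loiter / walk."""
            track_xy = np.asarray(track_xy, dtype=float)
            steps = np.diff(track_xy, axis=0)                  # assumes >= 3 positions
            speeds = np.linalg.norm(steps, axis=1) * fps       # m/s between frames
            headings = np.arctan2(steps[:, 1], steps[:, 0])
            turn = abs(np.unwrap(headings)[-1] - headings[0])  # net heading change

            if speeds.mean() < 0.2:
                return "stop"
            if speeds.mean() > 2.5:
                return "run"
            if turn > np.pi / 2:
                return "turn"
            if np.ptp(track_xy, axis=0).max() < 5.0:           # stays within a small area
                return "loiter"
            return "walk"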

  13. A novel polar-based human face recognition computational model.

    PubMed

    Zana, Y; Mena-Chalco, J P; Cesar, R M

    2009-07-01

    Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response pattern of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing. PMID:19578643

  14. Embedded knowledge-based system for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Aboutalib, A. O.

    1990-10-01

    The development of a reliable Automatic Target Recognition (ATR) system is considered a very critical and challenging problem. Existing ATR systems have inherent limitations in terms of recognition performance and the ability to learn and adapt. Artificial Intelligence techniques have the potential to improve the performance of ATR systems. In this paper, we presented a novel knowledge-engineering tool, termed the Automatic Reasoning Process (ARP), that can be used to automatically develop and maintain a Knowledge-Base (K-B) for ATR systems. In its learning mode, the ARP utilizes learning samples to automatically develop the ATR K-B, which consists of minimum-size sets of necessary and sufficient conditions for each target class. In its operational mode, the ARP infers the target class from sensor data using the ATR K-B system. The ARP also has the capability to reason under uncertainty, and can support both statistical and model-based approaches for ATR development. The capabilities of the ARP are compared and contrasted to those of another knowledge-engineering tool, termed the Automatic Rule Induction (ARI), which is based on maximizing the mutual information. The ARP has been implemented in LISP on a VAX-GPX workstation.

  15. Nonparametric Feature Matching Based Conditional Random Fields for Gesture Recognition from Multi-Modal Video.

    PubMed

    Chang, Ju Yong

    2016-08-01

    We present a new gesture recognition method that is based on the conditional random field (CRF) model using multiple feature matching. Our approach solves the labeling problem, determining gesture categories and their temporal ranges at the same time. A generative probabilistic model is formalized and probability densities are nonparametrically estimated by matching input features with a training dataset. In addition to the conventional skeletal joint-based features, the appearance information near the active hand in an RGB image is exploited to capture the detailed motion of fingers. The estimated likelihood function is then used as the unary term for our CRF model. The smoothness term is also incorporated to enforce the temporal coherence of our solution. Frame-wise recognition results can then be obtained by applying an efficient dynamic programming technique. To estimate the parameters of the proposed CRF model, we incorporate the structured support vector machine (SSVM) framework that can perform efficient structured learning by using large-scale datasets. Experimental results demonstrate that our method provides effective gesture recognition results for challenging real gesture datasets. By scoring 0.8563 in the mean Jaccard index, our method has obtained the state-of-the-art results for the gesture recognition track of the 2014 ChaLearn Looking at People (LAP) Challenge.

  16. Nonparametric Feature Matching Based Conditional Random Fields for Gesture Recognition from Multi-Modal Video.

    PubMed

    Chang, Ju Yong

    2016-08-01

    We present a new gesture recognition method that is based on the conditional random field (CRF) model using multiple feature matching. Our approach solves the labeling problem, determining gesture categories and their temporal ranges at the same time. A generative probabilistic model is formalized and probability densities are nonparametrically estimated by matching input features with a training dataset. In addition to the conventional skeletal joint-based features, the appearance information near the active hand in an RGB image is exploited to capture the detailed motion of fingers. The estimated likelihood function is then used as the unary term for our CRF model. The smoothness term is also incorporated to enforce the temporal coherence of our solution. Frame-wise recognition results can then be obtained by applying an efficient dynamic programming technique. To estimate the parameters of the proposed CRF model, we incorporate the structured support vector machine (SSVM) framework that can perform efficient structured learning by using large-scale datasets. Experimental results demonstrate that our method provides effective gesture recognition results for challenging real gesture datasets. By scoring 0.8563 in the mean Jaccard index, our method has obtained the state-of-the-art results for the gesture recognition track of the 2014 ChaLearn Looking at People (LAP) Challenge. PMID:26800528

  17. Polygon cluster pattern recognition based on new visual distance

    NASA Astrophysics Data System (ADS)

    Shuai, Yun; Shuai, Haiyan; Ni, Lin

    2007-06-01

    The pattern recognition of polygon clusters is one of the most attention-getting problems in spatial data mining. This paper studies the problem based on spatial cognition principles and the Gestalt principles of visual recognition, combined with spatial clustering methods, and makes two contributions. First, the paper substantially improves the concept of "visual distance". In the definition of this concept, not only are Euclidean distance, orientation difference and dimension discrepancy comprehensively considered, but the similarity of object shapes is also treated as crucial. In the calculation of visual distance, the distance model is built on a Delaunay triangulation structure. Second, the study adopts spatial clustering analysis based on a minimum spanning tree (MST). In the design of the pruning algorithm, the study introduces an automatic data-delamination mechanism and a simulated annealing optimization algorithm. This study provides a new research thread for GIS development: GIS is an interdisciplinary field whose research methods should be open and diverse. Any mature technology from related disciplines can be introduced into GIS research, but it needs to be adapted technically to the principles of GIS as a spatial cognition science. Only then can GIS develop on a higher and stronger plane.
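
    A compact sketch of MST-based spatial clustering with a simple edge-pruning rule, using plain Euclidean distances between point features as a stand-in for the paper's visual distance between polygons; the pruning factor is arbitrary, and the data-delamination and simulated-annealing steps are not reproduced.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
        from scipy.spatial.distance import pdist, squareform

        def mst_clusters(points, edge_factor=2.0):
            """Cluster points by cutting unusually long edges of the minimum spanning tree."""
            dist = squareform(pdist(points))
            mst = minimum_spanning_tree(dist).toarray()
            edges = mst[mst > 0]
            mst[mst > edge_factor * edges.mean()] = 0     # prune inconsistent long edges
            _, labels = connected_components(mst, directed=False)
            return labels

        rng = np.random.default_rng(0)
        pts = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(10, 1, (30, 2))])
        print(mst_clusters(pts))   # two well-separated groups -> two cluster labels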

  18. Wavelet-based moment invariants for pattern recognition

    NASA Astrophysics Data System (ADS)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

  19. Handwritten character recognition based on hybrid neural networks

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Sun, Guangmin; Zhang, Xinming

    2001-09-01

    A hybrid neural network system for the recognition of handwritten characters using SOFM, BP and fuzzy networks is presented. The horizontal and vertical projections of the preprocessed character and 4-directional edge projections are used as feature vectors. In order to improve the recognition performance, the GAT algorithm is applied. Through the hybrid neural network system, the recognition rate is noticeably improved.

  20. The Roles of Spreading Activation and Retrieval Mode in Producing False Recognition in the DRM Paradigm

    ERIC Educational Resources Information Center

    Meade, Michelle L.; Watson, Jason M.; Balota, David A.; Roediger, Henry L., III

    2007-01-01

    The nature of persisting spreading activation from list presentation in eliciting false recognition in the Deese-Roediger-McDermott (DRM) paradigm was examined in two experiments. We compared the time course of semantic priming in the lexical decision task (LDT) and false alarms in speeded recognition under identical study and test conditions. The…

  1. A multi-environment dataset for activity of daily living recognition in video streams.

    PubMed

    Borreo, Alessandro; Onofri, Leonardo; Soda, Paolo

    2015-08-01

    Public datasets have played a key role in the increasing level of interest that vision-based human action recognition has attracted in recent years. While the production of such datasets has been influenced by the variability introduced by various actors performing the actions, the different modalities of interaction with the environment introduced by varying the scenes around the actors have scarcely been taken into account. As a consequence, public datasets do not provide a proper test-bed for recognition algorithms that aim at achieving high accuracy irrespective of the environment where actions are performed. This is all the more so when systems are designed to recognize activities of daily living (ADL), which are characterized by a high level of human-environment interaction. For that reason, we present in this manuscript the MEA dataset, a new multi-environment ADL dataset, which allowed us to show how the change of scenario can affect the performance of state-of-the-art approaches for action recognition.

  2. Fast recognition of musical sounds based on timbre.

    PubMed

    Agus, Trevor R; Suied, Clara; Thorpe, Simon J; Pressnitzer, Daniel

    2012-05-01

    Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice-versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources. PMID:22559384

  3. A bacterial tyrosine phosphatase inhibits plant pattern recognition receptor activation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Perception of pathogen-associated molecular patterns (PAMPs) by surface-localised pattern-recognition receptors (PRRs) is a key component of plant innate immunity. Most known plant PRRs are receptor kinases and initiation of PAMP-triggered immunity (PTI) signalling requires phosphorylation of the PR...

  4. Infrared face recognition based on binary particle swarm optimization and SVM-wrapper model

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Liu, Guodong

    2015-10-01

    Infrared facial imaging, being light-independent and not vulnerable to variations in facial skin, expression and posture, can avoid or limit the drawbacks of face recognition in visible light. Robust feature selection and representation is a key issue in infrared face recognition research. This paper proposes a novel infrared face recognition method based on the local binary pattern (LBP). LBP can improve the robustness of infrared face recognition under different environmental conditions. How to make full use of the discriminative ability of LBP patterns is an important problem. A search algorithm combining binary particle swarm optimization with an SVM wrapper is used to find the most discriminative subset of LBP features. Experimental results show that the proposed method outperforms traditional LBP-based infrared face recognition methods and can significantly improve infrared face recognition performance.
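
    A minimal sketch of the LBP feature-extraction stage, using scikit-image's uniform LBP; the resulting histogram is the kind of vector a binary-PSO + SVM wrapper search would then select features from. The radius and number of sampling points are generic defaults, not the paper's configuration.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(face_img, radius=1, n_points=8):
            """Uniform LBP histogram of a grayscale (e.g. infrared) face image."""
            codes = local_binary_pattern(face_img, n_points, radius, method="uniform")
            n_bins = n_points + 2   # uniform patterns plus one "non-uniform" bin
            hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
            return hist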

  5. Step detection and activity recognition accuracy of seven physical activity monitors.

    PubMed

    Storm, Fabio A; Heller, Ben W; Mazzà, Claudia

    2015-01-01

    The aim of this study was to compare the seven following commercially available activity monitors in terms of step count detection accuracy: Movemonitor (Mc Roberts), Up (Jawbone), One (Fitbit), ActivPAL (PAL Technologies Ltd.), Nike+ Fuelband (Nike Inc.), Tractivity (Kineteks Corp.) and Sensewear Armband Mini (Bodymedia). Sixteen healthy adults consented to take part in the study. The experimental protocol included walking along an indoor straight walkway, descending and ascending 24 steps, free outdoor walking and free indoor walking. These tasks were repeated at three self-selected walking speeds. Angular velocity signals collected at both shanks using two wireless inertial measurement units (OPAL, ADPM Inc) were used as a reference for the step count, computed using previously validated algorithms. Step detection accuracy was assessed using the mean absolute percentage error computed for each sensor. The Movemonitor and the ActivPAL were also tested within a nine-minute activity recognition protocol, during which the participants performed a set of complex tasks. Posture classifications were obtained from the two monitors and expressed as a percentage of the total task duration. The Movemonitor, One, ActivPAL, Nike+ Fuelband and Sensewear Armband Mini underestimated the number of steps in all the observed walking speeds, whereas the Tractivity significantly overestimated step count. The Movemonitor was the best performing sensor, with an error lower than 2% at all speeds and the smallest error obtained in the outdoor walking. The activity recognition protocol showed that the Movemonitor performed best in the walking recognition, but had difficulty in discriminating between standing and sitting. Results of this study can be used to inform choice of a monitor for specific applications. PMID:25789630
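
    The accuracy metric used above is straightforward to reproduce; the sketch below computes the mean absolute percentage error of a monitor's step counts against the reference counts, with made-up numbers in place of the study's data.

        import numpy as np

        def step_count_mape(monitor_steps, reference_steps):
            """Mean absolute percentage error of monitor step counts vs. the reference."""
            monitor = np.asarray(monitor_steps, dtype=float)
            reference = np.asarray(reference_steps, dtype=float)
            return 100.0 * np.mean(np.abs(monitor - reference) / reference)

        print(step_count_mape([980, 1010, 995], [1000, 1000, 1000]))   # illustrative only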

  6. Step Detection and Activity Recognition Accuracy of Seven Physical Activity Monitors

    PubMed Central

    Storm, Fabio A.; Heller, Ben W.; Mazzà, Claudia

    2015-01-01

    The aim of this study was to compare the seven following commercially available activity monitors in terms of step count detection accuracy: Movemonitor (Mc Roberts), Up (Jawbone), One (Fitbit), ActivPAL (PAL Technologies Ltd.), Nike+ Fuelband (Nike Inc.), Tractivity (Kineteks Corp.) and Sensewear Armband Mini (Bodymedia). Sixteen healthy adults consented to take part in the study. The experimental protocol included walking along an indoor straight walkway, descending and ascending 24 steps, free outdoor walking and free indoor walking. These tasks were repeated at three self-selected walking speeds. Angular velocity signals collected at both shanks using two wireless inertial measurement units (OPAL, ADPM Inc) were used as a reference for the step count, computed using previously validated algorithms. Step detection accuracy was assessed using the mean absolute percentage error computed for each sensor. The Movemonitor and the ActivPAL were also tested within a nine-minute activity recognition protocol, during which the participants performed a set of complex tasks. Posture classifications were obtained from the two monitors and expressed as a percentage of the total task duration. The Movemonitor, One, ActivPAL, Nike+ Fuelband and Sensewear Armband Mini underestimated the number of steps in all the observed walking speeds, whereas the Tractivity significantly overestimated step count. The Movemonitor was the best performing sensor, with an error lower than 2% at all speeds and the smallest error obtained in the outdoor walking. The activity recognition protocol showed that the Movemonitor performed best in the walking recognition, but had difficulty in discriminating between standing and sitting. Results of this study can be used to inform choice of a monitor for specific applications. PMID:25789630

  7. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  8. Business model for sensor-based fall recognition systems.

    PubMed

    Fachinger, Uwe; Schöpke, Birte

    2014-01-01

    AAL systems require, in addition to sophisticated and reliable technology, adequate business models for their launch and sustainable establishment. This paper presents the basic features of alternative business models for a sensor-based fall recognition system which was developed within the context of the "Lower Saxony Research Network Design of Environments for Ageing" (GAL). The models were developed parallel to the R&D process with successive adaptation and concretization. An overview of the basic features (i.e. nine partial models) of the business model is given and the mutual exclusive alternatives for each partial model are presented. The partial models are interconnected and the combinations of compatible alternatives lead to consistent alternative business models. However, in the current state, only initial concepts of alternative business models can be deduced. The next step will be to gather additional information to work out more detailed models.

  9. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is captured from the manual annotations using principal component analysis; (2) model localization, in which a fast Euclidean distance transformation based method localizes the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validate our algorithm on a dataset of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.

  10. Dynamic detection of window starting positions and its implementation within an activity recognition framework.

    PubMed

    Ni, Qin; Patterson, Timothy; Cleland, Ian; Nugent, Chris

    2016-08-01

    Activity recognition is an intrinsic component of many pervasive computing and ambient intelligent solutions. This has been facilitated by an explosion of technological developments in the area of wireless sensor network, wearable and mobile computing. Yet, delivering robust activity recognition, which could be deployed at scale in a real world environment, still remains an active research challenge. Much of the existing literature to date has focused on applying machine learning techniques to pre-segmented data collected in controlled laboratory environments. Whilst this approach can provide valuable ground truth information from which to build recognition models, these techniques often do not function well when implemented in near real time applications. This paper presents the application of a multivariate online change detection algorithm to dynamically detect the starting position of windows for the purposes of activity recognition. PMID:27392647
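
    As a rough stand-in for the multivariate online change detection step (the specific algorithm in the paper is not reproduced here), the sketch below runs a simple CUSUM-style detector on standardised sensor samples and reports indices that could serve as dynamic window starting positions; the baseline length, drift and threshold are arbitrary.

        import numpy as np

        def online_change_points(samples, baseline_len=50, threshold=10.0, drift=0.5):
            """Flag indices where accumulated deviation from a baseline window grows large."""
            samples = np.asarray(samples, dtype=float)
            mu = samples[:baseline_len].mean(axis=0)
            sigma = samples[:baseline_len].std(axis=0) + 1e-9

            change_points, s = [], 0.0
            for i in range(baseline_len, len(samples)):
                deviation = np.linalg.norm((samples[i] - mu) / sigma)
                s = max(0.0, s + deviation - drift)
                if s > threshold:
                    change_points.append(i)                    # new window starts here
                    mu = samples[i - baseline_len:i].mean(axis=0)
                    sigma = samples[i - baseline_len:i].std(axis=0) + 1e-9
                    s = 0.0
            return change_points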

  11. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting relational information between static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model which is useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. Then we use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results. PMID:27656199

  12. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    PubMed Central

    2016-01-01

    Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting relational information between static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model which is useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. Then we use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results.

  13. Driving profile modeling and recognition based on soft computing approach.

    PubMed

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

    Advancements in biometrics-based authentication have led to its increasing prominence, and it is being incorporated into everyday tasks. Existing vehicle security systems rely only on alarms or smart cards as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security systems to form a multimodal identification system and offer a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressure were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with bus and truck drivers. PMID:19258199
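
    The abstract does not give the exact feature pipeline, so the sketch below illustrates the general idea with a plain GMM likelihood approach (in the style of GMM speaker identification) rather than the paper's GMM-feature-plus-fuzzy-neural-network design; the pedal-pressure data and model sizes are synthetic placeholders.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Hypothetical training data: per-driver arrays of (accelerator, brake)
        # pedal-pressure samples with shape (n_samples, 2).
        rng = np.random.default_rng(0)
        drivers = {
            "driver_a": rng.normal([0.3, 0.1], 0.05, size=(2000, 2)),
            "driver_b": rng.normal([0.5, 0.2], 0.08, size=(2000, 2)),
        }

        # One GMM per enrolled driver.
        models = {name: GaussianMixture(n_components=4, random_state=0).fit(x)
                  for name, x in drivers.items()}

        def identify(trip):
            """Return the enrolled driver whose GMM gives the highest average
            log-likelihood for the observed trip data."""
            scores = {name: gmm.score(trip) for name, gmm in models.items()}
            return max(scores, key=scores.get)

        test_trip = rng.normal([0.5, 0.2], 0.08, size=(500, 2))
        print(identify(test_trip))                   # expected: "driver_b"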

  14. All-optical multibit address recognition at 20 Gb/s based on TOAD

    NASA Astrophysics Data System (ADS)

    Yan, Yumei; Wu, Jian; Lin, Jintong

    2005-04-01

    All-optical multibit address recognition at 20 Gb/s is demonstrated based on a special AND logic of the terahertz optical asymmetric demultiplexer (TOAD). The semiconductor optical amplifier (SOA) used in the TOAD is biased at transparency to accelerate gain recovery. This is the highest bit rate at which multibit address recognition has been demonstrated with an SOA-based interferometer. The experimental results show low pattern dependency. With this method, address recognition can be performed without separating the address and payload beforehand.

  15. Gender-Based Prototype Formation in Face Recognition

    ERIC Educational Resources Information Center

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, individual faces were mixed with blended faces of the same or of different genders. The results of the 2 experiments showed…

  16. Hippocampal Activation of Rac1 Regulates the Forgetting of Object Recognition Memory.

    PubMed

    Liu, Yunlong; Du, Shuwen; Lv, Li; Lei, Bo; Shi, Wei; Tang, Yikai; Wang, Lianzhang; Zhong, Yi

    2016-09-12

    Forgetting is a universal feature for most types of memories. The best-defined and extensively characterized behaviors that depict forgetting are natural memory decay and interference-based forgetting [1, 2]. Molecular mechanisms underlying the active forgetting remain to be determined for memories in vertebrates. Recent progress has begun to unravel such mechanisms underlying the active forgetting [3-11] that is induced through the behavior-dependent activation of intracellular signaling pathways. In Drosophila, training-induced activation of the small G protein Rac1 mediates natural memory decay and interference-based forgetting of aversive conditioning memory [3]. In mice, the activation of photoactivable-Rac1 in recently potentiated spines in a motor learning task erases the motor memory [12]. These lines of evidence prompted us to investigate a role for Rac1 in time-based natural memory decay and interference-based forgetting in mice. The inhibition of Rac1 activity in hippocampal neurons through targeted expression of a dominant-negative Rac1 form extended object recognition memory from less than 72 hr to over 72 hr, whereas Rac1 activation accelerated memory decay within 24 hr. Interference-induced forgetting of this memory was correlated with Rac1 activation and was completely blocked by inhibition of Rac1 activity. Electrophysiological recordings of long-term potentiation provided independent evidence that further supported a role for Rac1 activation in forgetting. Thus, Rac1-dependent forgetting is evolutionarily conserved from invertebrates to vertebrates.

  17. Hippocampal Activation of Rac1 Regulates the Forgetting of Object Recognition Memory.

    PubMed

    Liu, Yunlong; Du, Shuwen; Lv, Li; Lei, Bo; Shi, Wei; Tang, Yikai; Wang, Lianzhang; Zhong, Yi

    2016-09-12

    Forgetting is a universal feature for most types of memories. The best-defined and extensively characterized behaviors that depict forgetting are natural memory decay and interference-based forgetting [1, 2]. Molecular mechanisms underlying the active forgetting remain to be determined for memories in vertebrates. Recent progress has begun to unravel such mechanisms underlying the active forgetting [3-11] that is induced through the behavior-dependent activation of intracellular signaling pathways. In Drosophila, training-induced activation of the small G protein Rac1 mediates natural memory decay and interference-based forgetting of aversive conditioning memory [3]. In mice, the activation of photoactivable-Rac1 in recently potentiated spines in a motor learning task erases the motor memory [12]. These lines of evidence prompted us to investigate a role for Rac1 in time-based natural memory decay and interference-based forgetting in mice. The inhibition of Rac1 activity in hippocampal neurons through targeted expression of a dominant-negative Rac1 form extended object recognition memory from less than 72 hr to over 72 hr, whereas Rac1 activation accelerated memory decay within 24 hr. Interference-induced forgetting of this memory was correlated with Rac1 activation and was completely blocked by inhibition of Rac1 activity. Electrophysiological recordings of long-term potentiation provided independent evidence that further supported a role for Rac1 activation in forgetting. Thus, Rac1-dependent forgetting is evolutionarily conserved from invertebrates to vertebrates. PMID:27593377

  18. Neural activity during emotion recognition after combined cognitive plus social cognitive training in schizophrenia.

    PubMed

    Hooker, Christine I; Bruce, Lori; Fisher, Melissa; Verosky, Sara C; Miyakawa, Asako; Vinogradov, Sophia

    2012-08-01

    Cognitive remediation training has been shown to improve both cognitive and social cognitive deficits in people with schizophrenia, but the mechanisms that support this behavioral improvement are largely unknown. One hypothesis is that intensive behavioral training in cognition and/or social cognition restores the underlying neural mechanisms that support targeted skills. However, there is little research on the neural effects of cognitive remediation training. This study investigated whether a 50 h (10-week) remediation intervention which included both cognitive and social cognitive training would influence neural function in regions that support social cognition. Twenty-two stable, outpatient schizophrenia participants were randomized to a treatment condition consisting of auditory-based cognitive training (AT) [Brain Fitness Program/auditory module ~60 min/day] plus social cognition training (SCT) which was focused on emotion recognition [~5-15 min per day] or a placebo condition of non-specific computer games (CG) for an equal amount of time. Pre and post intervention assessments included an fMRI task of positive and negative facial emotion recognition, and standard behavioral assessments of cognition, emotion processing, and functional outcome. There were no significant intervention-related improvements in general cognition or functional outcome. fMRI results showed the predicted group-by-time interaction. Specifically, in comparison to CG, AT+SCT participants had a greater pre-to-post intervention increase in postcentral gyrus activity during emotion recognition of both positive and negative emotions. Furthermore, among all participants, the increase in postcentral gyrus activity predicted behavioral improvement on a standardized test of emotion processing (MSCEIT: Perceiving Emotions). Results indicate that combined cognition and social cognition training impacts neural mechanisms that support social cognition skills. PMID:22695257

  19. Evidence for altered amygdala activation in schizophrenia in an adaptive emotion recognition task.

    PubMed

    Mier, Daniela; Lis, Stefanie; Zygrodnik, Karina; Sauer, Carina; Ulferts, Jens; Gallhofer, Bernd; Kirsch, Peter

    2014-03-30

    Deficits in social cognition seem to present an intermediate phenotype for schizophrenia, and are known to be associated with an altered amygdala response to faces. However, current results are heterogeneous with respect to whether this altered amygdala response in schizophrenia is hypoactive or hyperactive in nature. The present study used functional magnetic resonance imaging to investigate emotion-specific amygdala activation in schizophrenia using a novel adaptive emotion recognition paradigm. Participants comprised 11 schizophrenia outpatients and 16 healthy controls who viewed face stimuli expressing emotions of anger, fear, happiness, and disgust, as well as neutral expressions. The adaptive emotion recognition approach allows the assessment of group differences in both emotion recognition performance and associated neuronal activity while also ensuring a comparable number of correctly recognized emotions between groups. Schizophrenia participants were slower and had a negative bias in emotion recognition. In addition, they showed reduced differential activation during recognition of emotional compared with neutral expressions. Correlation analyses revealed an association of a negative bias with amygdala activation for neutral facial expressions that was specific to the patient group. We replicated previous findings of affected emotion recognition in schizophrenia. Furthermore, we demonstrated that altered amygdala activation in the patient group was associated with the occurrence of a negative bias. These results provide further evidence for impaired social cognition in schizophrenia and point to a central role of the amygdala in negative misperceptions of facial stimuli in schizophrenia.

  20. Design and Test of a Hybrid Foot Force Sensing and GPS System for Richer User Mobility Activity Recognition

    PubMed Central

    Zhang, Zelun; Poslad, Stefan

    2013-01-01

    Wearable and accompanied sensors and devices are increasingly being used for user activity recognition. However, typical GPS-based and accelerometer-based (ACC) methods face three main challenges: a low recognition accuracy; a coarse recognition capability, i.e., they cannot recognise both human posture (during travelling) and transportation mode simultaneously, and a relatively high computational complexity. Here, a new GPS and Foot-Force (GPS + FF) sensor method is proposed to overcome these challenges that leverages a set of wearable FF sensors in combination with GPS, e.g., in a mobile phone. User mobility activities that can be recognised include both daily user postures and common transportation modes: sitting, standing, walking, cycling, bus passenger, car passenger (including private cars and taxis) and car driver. The novelty of this work is that our approach provides a more comprehensive recognition capability in terms of reliably recognising both human posture and transportation mode simultaneously during travel. In addition, by comparing the new GPS + FF method with both an ACC method (62% accuracy) and a GPS + ACC based method (70% accuracy) as baseline methods, it obtains a higher accuracy (95%) with less computational complexity, when tested on a dataset obtained from ten individuals. PMID:24189333

  1. Design and test of a hybrid foot force sensing and GPS system for richer user mobility activity recognition.

    PubMed

    Zhang, Zelun; Poslad, Stefan

    2013-11-01

    Wearable and accompanied sensors and devices are increasingly being used for user activity recognition. However, typical GPS-based and accelerometer-based (ACC) methods face three main challenges: a low recognition accuracy; a coarse recognition capability, i.e., they cannot recognise both human posture (during travelling) and transportation mode simultaneously, and a relatively high computational complexity. Here, a new GPS and Foot-Force (GPS + FF) sensor method is proposed to overcome these challenges that leverages a set of wearable FF sensors in combination with GPS, e.g., in a mobile phone. User mobility activities that can be recognised include both daily user postures and common transportation modes: sitting, standing, walking, cycling, bus passenger, car passenger (including private cars and taxis) and car driver. The novelty of this work is that our approach provides a more comprehensive recognition capability in terms of reliably recognising both human posture and transportation mode simultaneously during travel. In addition, by comparing the new GPS + FF method with both an ACC method (62% accuracy) and a GPS + ACC based method (70% accuracy) as baseline methods, it obtains a higher accuracy (95%) with less computational complexity, when tested on a dataset obtained from ten individuals.

  2. Poka Yoke system based on image analysis and object recognition

    NASA Astrophysics Data System (ADS)

    Belu, N.; Ionescu, L. M.; Misztal, A.; Mazăre, A.

    2015-11-01

    Poka Yoke is a quality management method aimed at preventing faults from arising during production processes; it deals with “fail-safing” or “mistake-proofing”. The Poka Yoke concept was created and developed by Shigeo Shingo for the Toyota Production System and is used in many fields, especially in monitoring production processes. In many cases, identifying a fault in a production process costs more than simply disposing of the defective part. Poka Yoke solutions are usually based on multiple sensors that identify nonconformities, which means additional mechanical and electronic equipment on the production line; this, coupled with the fact that the method itself is invasive and affects the production process, increases the cost of diagnostics, and the machines on which a Poka Yoke system can be implemented become bulkier and more sophisticated. In this paper we propose a Poka Yoke solution based on image analysis and fault identification. The solution consists of an image acquisition module, mid-level processing, and an object recognition module using an associative memory (Hopfield network type). All are integrated into an embedded system with an AD (Analog to Digital) converter and a Zynq 7000 device (22 nm technology).
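
    As a hedged illustration of the associative-memory stage only (not the embedded Zynq implementation or the image acquisition and mid-level processing modules), the sketch below implements a minimal Hopfield network in NumPy; the reference patterns are random toy vectors standing in for binarized part silhouettes.

        import numpy as np

        class Hopfield:
            """Minimal Hopfield associative memory for ±1 binary patterns."""
            def __init__(self, n_units):
                self.W = np.zeros((n_units, n_units))

            def store(self, patterns):
                # Hebbian learning: W = (1/P) * sum_p p p^T, no self-connections.
                for p in patterns:
                    self.W += np.outer(p, p)
                self.W /= len(patterns)
                np.fill_diagonal(self.W, 0.0)

            def recall(self, x, n_iters=20):
                x = x.copy()
                for _ in range(n_iters):
                    x = np.where(self.W @ x >= 0, 1, -1)   # synchronous update
                return x

        # Toy usage: two reference patterns standing in for conforming/defective parts.
        rng = np.random.default_rng(0)
        ok_part = np.where(rng.random(64) > 0.5, 1, -1)
        defect_part = np.where(rng.random(64) > 0.5, 1, -1)

        net = Hopfield(64)
        net.store([ok_part, defect_part])

        noisy = ok_part.copy()
        noisy[:8] *= -1                              # corrupt 8 "pixels" of the acquired image
        print(np.array_equal(net.recall(noisy), ok_part))   # usually True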

  3. Finger vein recognition based on the hyperinformation feature

    NASA Astrophysics Data System (ADS)

    Xi, Xiaoming; Yang, Gongping; Yin, Yilong; Yang, Lu

    2014-01-01

    The finger vein is a promising biometric pattern for personal identification due to its advantages over other existing biometrics. In finger vein recognition, feature extraction is a critical step, and many feature extraction methods have been proposed to extract the gray, texture, or shape of the finger vein. We treat them as low-level features and present a high-level feature extraction framework. Under this framework, base attribute is first defined to represent the characteristics of a certain subcategory of a subject. Then, for an image, the correlation coefficient is used for constructing the high-level feature, which reflects the correlation between this image and all base attributes. Since the high-level feature can reveal characteristics of more subcategories and contain more discriminative information, we call it hyperinformation feature (HIF). Compared with low-level features, which only represent the characteristics of one subcategory, HIF is more powerful and robust. In order to demonstrate the potential of the proposed framework, we provide a case study to extract HIF. We conduct comprehensive experiments to show the generality of the proposed framework and the efficiency of HIF on our databases, respectively. Experimental results show that HIF significantly outperforms the low-level features.
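
    A minimal sketch of the correlation step is given below: each entry of the high-level feature is the Pearson correlation between an image's low-level descriptor and one base attribute. How the base attributes are built is not reproduced here; treating them as subcategory centres, and the random vectors in the toy example, are assumptions for illustration.

        import numpy as np

        def hyperinformation_feature(low_level, base_attributes):
            """Map a low-level feature vector to a high-level feature whose k-th
            entry is the Pearson correlation with the k-th base attribute."""
            low_level = np.asarray(low_level, dtype=float)
            hif = np.empty(len(base_attributes))
            for k, attr in enumerate(base_attributes):
                hif[k] = np.corrcoef(low_level, attr)[0, 1]
            return hif

        # Toy usage: 3 hypothetical base attributes (e.g., subcategory centres)
        # and one probe image's low-level descriptor.
        rng = np.random.default_rng(0)
        bases = [rng.normal(size=256) for _ in range(3)]
        probe = bases[1] + 0.1 * rng.normal(size=256)    # close to base attribute 1
        print(hyperinformation_feature(probe, bases).round(2))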

  4. Orthographic Activation in L2 Spoken Word Recognition Depends on Proficiency: Evidence from Eye-Tracking.

    PubMed

    Veivo, Outi; Järvikivi, Juhani; Porretta, Vincent; Hyönä, Jukka

    2016-01-01

    The use of orthographic and phonological information in spoken word recognition was studied in a visual world task where L1 Finnish learners of L2 French (n = 64) and L1 French native speakers (n = 24) were asked to match spoken word forms with printed words while their eye movements were recorded. In Experiment 1, French target words were contrasted with competitors having either a longer or a shorter word-initial phonological overlap and an identical orthographic overlap. In Experiment 2, target words were contrasted with competitors having either a longer or a shorter word-initial orthographic overlap and an identical phonological overlap. A general phonological effect was observed in the L2 listener group but not in the L1 control group. No general orthographic effects were observed in the L2 or L1 groups, but a significant effect of proficiency was observed for orthographic overlap over time: higher-proficiency L2 listeners also used orthographic information in the matching task in a time window from 400 to 700 ms, whereas no such effect was observed for lower-proficiency listeners. These results suggest that the activation of orthographic information in L2 spoken word recognition depends on proficiency in L2. PMID:27512381

  5. Orthographic Activation in L2 Spoken Word Recognition Depends on Proficiency: Evidence from Eye-Tracking

    PubMed Central

    Veivo, Outi; Järvikivi, Juhani; Porretta, Vincent; Hyönä, Jukka

    2016-01-01

    The use of orthographic and phonological information in spoken word recognition was studied in a visual world task where L1 Finnish learners of L2 French (n = 64) and L1 French native speakers (n = 24) were asked to match spoken word forms with printed words while their eye movements were recorded. In Experiment 1, French target words were contrasted with competitors having either a longer or a shorter word-initial phonological overlap and an identical orthographic overlap. In Experiment 2, target words were contrasted with competitors having either a longer or a shorter word-initial orthographic overlap and an identical phonological overlap. A general phonological effect was observed in the L2 listener group but not in the L1 control group. No general orthographic effects were observed in the L2 or L1 groups, but a significant effect of proficiency was observed for orthographic overlap over time: higher-proficiency L2 listeners also used orthographic information in the matching task in a time window from 400 to 700 ms, whereas no such effect was observed for lower-proficiency listeners. These results suggest that the activation of orthographic information in L2 spoken word recognition depends on proficiency in L2. PMID:27512381

  6. The Relative Success of Recognition-Based Inference in Multichoice Decisions

    ERIC Educational Resources Information Center

    McCloy, Rachel; Beaman, C. Philip; Smith, Philip T.

    2008-01-01

    The utility of an "ecologically rational" recognition-based decision rule in multichoice decision problems is analyzed, varying the type of judgment required (greater or lesser). The maximum size and range of a counterintuitive advantage associated with recognition-based judgment (the "less-is-more effect") is identified for a range of cue…

  7. Changes in brain electrical activity during extended continuous word recognition.

    PubMed

    Van Strien, Jan W; Hagenbeek, Rogier E; Stam, Cornelis J; Rombouts, Serge A R B; Barkhof, Frederik

    2005-07-01

    Twenty healthy subjects (10 men, 10 women) participated in an EEG study with an extended continuous recognition memory task, in which each of 30 words was randomly shown 10 times and subjects were required to make old vs. new decisions. Both event-related brain potentials (ERPs) and induced band power (IBP) were investigated. We hypothesized that repeated presentations affect recollection rather than familiarity. For the 300- to 500-ms time window, an 'old/new' ERP effect was found for the first vs. second word presentations. The correct recognition of an 'old' word was associated with a more positive waveform than the correct identification of a new word. The old/new effect was most pronounced at and around the midline parietal electrode position. For the 500- to 800-ms time window, a linear repetition effect was found for multiple word repetitions. Correct recognition after an increasing number of repetitions was associated with increasing positivity. The multiple repetitions effect was most pronounced at the midline central (Cz) and fronto-central (FCz) electrode positions and reflects a graded recollection process: the stronger the memory trace grows, the more positive the ERP in the 500- to 800-ms time window. The ERP results support a dual-processing model, with familiarity being discernable from a more graded recollection state that depends on memory strengths. For IBP, we found 'old/new' effects for the lower-2 alpha, theta, and delta bands, with higher bandpower during 'old' words. The lower-2 alpha 'old/new' effect most probably reflects attentional processes, whereas the theta and delta effects reflect encoding and retrieval processes. Upon repeated word presentations, the magnitude of induced delta power in the 375- to 750-ms time window diminished linearly. Correlation analysis suggests that decreased delta power is moderately associated with faster decision speed and higher accuracy.

  8. Fast traffic sign recognition with a rotation invariant binary pattern based feature.

    PubMed

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.
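
    Only the first, coarse-localization stage lends itself to a compact sketch; the version below uses OpenCV's standard Hough circle transform to propose candidate sign regions, with illustrative thresholds, and does not reproduce the RIBP descriptor or the ANN classifier described in the abstract.

        import cv2
        import numpy as np

        def candidate_sign_regions(bgr_image):
            """Coarse-grained localization of circular traffic-sign candidates
            with a standard Hough circle transform."""
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            gray = cv2.medianBlur(gray, 5)
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                                       param1=120, param2=40, minRadius=10, maxRadius=80)
            rois = []
            if circles is not None:
                for x, y, r in np.round(circles[0]).astype(int):
                    x0, y0 = max(x - r, 0), max(y - r, 0)
                    rois.append(bgr_image[y0:y + r, x0:x + r])
            return rois

        # Usage (the path is illustrative): each ROI would then be described with a
        # rotation-invariant binary pattern feature and classified by the ANN.
        # rois = candidate_sign_regions(cv2.imread("road_scene.jpg"))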

  9. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    PubMed Central

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed. PMID:25608217

  10. Call recognition and individual identification of fish vocalizations based on automatic speech recognition: An example with the Lusitanian toadfish.

    PubMed

    Vieira, Manuel; Fonseca, Paulo J; Amorim, M Clara P; Teixeira, Carlos J C

    2015-12-01

    The study of acoustic communication in animals often requires not only the recognition of species specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools to extract the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented inspired by successful results obtained in the most widely known and complex acoustical communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover this method also proved to be a powerful tool to assess signal durations in large data sets. However, the system failed in recognizing other sound types. PMID:26723348
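
    A hedged sketch of the HMM-based detection-and-identification idea is shown below, using the hmmlearn library (an assumption; the toolkit used in the paper is not specified here) with one Gaussian HMM per sound class and log-likelihood scoring; the MFCC-like features and the class set are synthetic placeholders.

        import numpy as np
        from hmmlearn import hmm

        def train_class_hmm(sequences, n_states=3):
            """Fit one Gaussian HMM per sound class from a list of
            (n_frames, n_coeffs) feature arrays."""
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                    n_iter=50, random_state=0)
            model.fit(X, lengths)
            return model

        def classify(segment, models):
            """Assign a detected segment to the class whose HMM scores it highest."""
            return max(models, key=lambda name: models[name].score(segment))

        # Synthetic stand-ins for per-individual boatwhistle features.
        rng = np.random.default_rng(0)
        models = {
            "male_1": train_class_hmm([rng.normal(0.0, 1.0, (80, 13)) for _ in range(5)]),
            "male_2": train_class_hmm([rng.normal(0.8, 1.0, (80, 13)) for _ in range(5)]),
        }
        print(classify(rng.normal(0.8, 1.0, (60, 13)), models))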

  11. Human activities recognition with RGB-Depth camera using HMM.

    PubMed

    Dubois, Amandine; Charpillet, François

    2013-01-01

    Fall detection remains an open issue for improving the safety of elderly people. It is all the more pertinent today as more and more elderly people stay at home for longer. In this paper, we propose a method to detect falls using a system made up of RGB-Depth cameras. The major benefit of our approach is its low cost and the fact that the system is easy to distribute and install. In brief, the method is based on detecting, in real time, the center of mass of any mobile object or person, accurately determining its position in 3D space and its velocity. We demonstrate in this paper that this information is adequate and robust enough for labeling the activity of a person among 8 possible situations. An evaluation was conducted within a real smart environment with 26 subjects, each performing the eight activities (sitting, walking, going up, squatting, lying on a couch, falling, bending and lying down). Seven of these eight activities were correctly detected, among them falling, which was detected without false positives.
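
    The sketch below illustrates the centre-of-mass idea: back-project foreground depth pixels to 3-D, track the centroid, and flag a fall when it drops quickly and ends up low. The camera intrinsics, thresholds and axis convention (image y pointing downwards) are illustrative assumptions, not the parameters used in the study.

        import numpy as np

        def centroid_3d(depth_frame, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
            """Centre of mass of foreground pixels in a depth frame, in metres."""
            v, u = np.nonzero(depth_frame > 0)
            z = depth_frame[v, u] / 1000.0                   # mm -> m
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy                            # y grows downwards here
            return np.array([x.mean(), y.mean(), z.mean()])

        def detect_fall(depth_frames, fps=30.0, v_thresh=1.0, y_low=0.5):
            """Flag a fall when the centre of mass moves down faster than v_thresh
            (m/s) and finishes below the height threshold (large y = low)."""
            cms = np.array([centroid_3d(f) for f in depth_frames])
            vertical_speed = np.diff(cms[:, 1]) * fps
            return bool((vertical_speed > v_thresh).any() and cms[-1, 1] > y_low)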

  12. Recognition of military-specific physical activities with body-fixed sensors.

    PubMed

    Wyss, Thomas; Mäder, Urs

    2010-11-01

    The purpose of this study was to develop and validate an algorithm for recognizing military-specific, physically demanding activities using body-fixed sensors. To develop the algorithm, the first group of study participants (n = 15) wore body-fixed sensors capable of measuring acceleration, step frequency, and heart rate while completing six military-specific activities: walking, marching with backpack, lifting and lowering loads, lifting and carrying loads, digging, and running. The accuracy of the algorithm was tested in these isolated activities in a laboratory setting (n = 18) and in the context of the daily military training routine (n = 24). The overall recognition rates during isolated activities and during daily military routine activities were 87.5% and 85.5%, respectively. We conclude that the algorithm adequately recognized six military-specific physical activities based on sensor data alone, both in a laboratory setting and in the military training environment. By recognizing the type of physical activity, this objective method provides additional information for military job descriptions. PMID:21121495

  13. Sunspot drawings handwritten character recognition method based on deep learning

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li

    2016-05-01

    High-accuracy recognition of the handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the drawings in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has been truly successful in training multi-layer network structures. A CNN is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences and obtain the daily full-disc sunspot numbers and sunspot areas from the sunspot drawings. The experimental results show that the proposed method achieves a high recognition accuracy.
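
    For orientation, a generic small CNN for such character crops might look like the Keras sketch below; the 28x28 input size, layer sizes and synthetic training data are assumptions for illustration and do not reproduce the architecture trained in the paper.

        import numpy as np
        from tensorflow import keras
        from tensorflow.keras import layers

        def build_digit_cnn(n_classes=10):
            """Small CNN for 28x28 grey-scale character crops."""
            return keras.Sequential([
                layers.Input(shape=(28, 28, 1)),
                layers.Conv2D(32, 3, activation="relu"),
                layers.MaxPooling2D(),
                layers.Conv2D(64, 3, activation="relu"),
                layers.MaxPooling2D(),
                layers.Flatten(),
                layers.Dropout(0.5),
                layers.Dense(n_classes, activation="softmax"),
            ])

        model = build_digit_cnn()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        # Synthetic stand-in data; real training would use labelled character crops
        # extracted from the scanned drawings.
        x = np.random.rand(64, 28, 28, 1).astype("float32")
        y = np.random.randint(0, 10, size=64)
        model.fit(x, y, epochs=1, batch_size=16, verbose=0)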

  14. A context-based approach to text recognition

    SciTech Connect

    Rose, T.G.; Evett, L.J.; Jobbins, A.C.

    1994-12-31

    The performance of text recognition systems may be improved by applying higher-level knowledge in the form of contextual information. However, the acquisition of such information for a realistically sized vocabulary presents a major problem, since hand-coding is feasible for only the smallest of vocabularies. This paper describes a number of methods for extracting contextual knowledge from text corpora, and compares the effect of each on the performance of text recognition systems.

  15. Zernike moments features for shape-based gait recognition

    NASA Astrophysics Data System (ADS)

    Qin, Huanfeng; Qin, Lan; Liu, Jun; Chao, Jiang

    2011-12-01

    The paper proposes a new spatio-temporal gait representation, called cycles gait Zernike moments (CGZM), to characterize human walking properties for individual recognition. Firstly, Zernike moments are used as shape descriptors to characterize the gait silhouette shape. Secondly, we generate CGZM from the Zernike moments of silhouette sequences. Finally, the phase and magnitude coefficients of CGZM are used to perform classification with the modified Hausdorff distance (MHD) classifier. Experimental results show that the proposed approach has an encouraging recognition performance.
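
    The classification step can be sketched compactly: treat each gait sequence as a set of per-cycle Zernike-moment vectors and assign a probe to the gallery subject with the smallest modified Hausdorff distance (Dubuisson–Jain form). The random feature vectors below are stand-ins for real CGZM descriptors.

        import numpy as np
        from scipy.spatial.distance import cdist

        def modified_hausdorff(A, B):
            """Modified Hausdorff distance between point sets A (n, d) and B (m, d)."""
            D = cdist(A, B)
            return max(D.min(axis=1).mean(), D.min(axis=0).mean())

        def classify_gait(probe, gallery):
            """Nearest-gallery-subject classification with the MHD.
            probe: (n_cycles, d) Zernike-moment vectors of the probe sequence;
            gallery: dict of subject id -> (m_cycles, d) enrolled vectors."""
            return min(gallery, key=lambda sid: modified_hausdorff(probe, gallery[sid]))

        # Toy usage with random stand-ins for cycle-level Zernike features.
        rng = np.random.default_rng(0)
        gallery = {"s1": rng.normal(0, 1, (10, 16)), "s2": rng.normal(2, 1, (10, 16))}
        print(classify_gait(rng.normal(2, 1, (8, 16)), gallery))     # expected: "s2"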

  16. Text vectorization based on character recognition and character stroke modeling

    NASA Astrophysics Data System (ADS)

    Fan, Zhigang; Zhou, Bingfeng; Tse, Francis; Mu, Yadong; He, Tao

    2014-03-01

    In this paper, a text vectorization method is proposed using OCR (Optical Character Recognition) and character stroke modeling. This is based on the observation that, for a particular character, its font glyphs may have different shapes but often share the same stroke structures. Like many other methods, the proposed algorithm contains two procedures, dominant point determination and data fitting. The first partitions the outlines into segments and the second fits a curve to each segment. In the proposed method, the dominant points are classified as "major" (specifying stroke structures) and "minor" (specifying serif shapes). A set of rules (parameters) is determined offline, specifying for each character the number of major and minor dominant points and, for each dominant point, the detection and fitting parameters (projection directions, boundary conditions and smoothness). For minor points, multiple sets of parameters can be used for different fonts. During operation, OCR is performed and the parameters associated with the recognized character are selected. Both major and minor dominant points are detected by a maximization process as specified by the parameter set. For minor points, an additional step can be performed to test competing hypotheses and detect degenerate cases.

  17. Recognition of chemical entities: combining dictionary-based and grammar-based approaches

    PubMed Central

    2015-01-01

    Background The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. Results The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. Conclusions We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition.

  18. Recent advances in molecular recognition based on nanoengineered platforms.

    PubMed

    Mu, Bin; Zhang, Jingqing; McNicholas, Thomas P; Reuel, Nigel F; Kruss, Sebastian; Strano, Michael S

    2014-04-15

    they are able to obtain loading curves similar to surface plasmon resonance measurements. They demonstrate the sensitivity and specificity of this platform with two higher-affined glycan-lectin pairs: fucose (Fuc) to PA-IIL and N-acetylglucosamine (GlcNAc) to GafD. Lastly, we discuss how developments in protein biomarker detection in general are benefiting specifically from label-free molecular recognition. Electrical field effect transistors, chemi-resistive and fluorometric nanosensors based on various nanomaterials have demonstrated substantial progress in recent years in addressing this challenging problem. In this Account, we compare the balance between sensitivity, selectivity, and nonspecific adsorption for various applications. In particular, our group has utilized SWNTs as fluorescence sensors for label-free protein-protein interaction measurements. In this assay, we have encapsulated each nanotube in a biocompatible polymer, chitosan, which has been further modified to conjugate nitrilotriacetic acid (NTA) groups. After Ni(2+) chelation, NTA Ni(2+) complexes bind to his-tagged proteins, resulting in a local environment change of the SWNT array, leading to optical fluorescence modulation with detection limit down to 100 nM. We have further engineered the platform to monitor single protein binding events, with an even lower detection limit down to 10 pM.

  19. Electrocorticography reveals the temporal dynamics of posterior parietal cortical activity during recognition memory decisions.

    PubMed

    Gonzalez, Alex; Hutchinson, J Benjamin; Uncapher, Melina R; Chen, Janice; LaRocque, Karen F; Foster, Brett L; Rangarajan, Vinitha; Parvizi, Josef; Wagner, Anthony D

    2015-09-01

    Theories of the neurobiology of episodic memory predominantly focus on the contributions of medial temporal lobe structures, based on extensive lesion, electrophysiological, and imaging evidence. Against this backdrop, functional neuroimaging data have unexpectedly implicated left posterior parietal cortex (PPC) in episodic retrieval, revealing distinct activation patterns in PPC subregions as humans make memory-related decisions. To date, theorizing about the functional contributions of PPC has been hampered by the absence of information about the temporal dynamics of PPC activity as retrieval unfolds. Here, we leveraged electrocorticography to examine the temporal profile of high gamma power (HGP) in dorsal PPC subregions as participants made old/new recognition memory decisions. A double dissociation in memory-related HGP was observed, with activity in left intraparietal sulcus (IPS) and left superior parietal lobule (SPL) differing in time and sign for recognized old items (Hits) and correctly rejected novel items (CRs). Specifically, HGP in left IPS increased for Hits 300-700 ms poststimulus onset, and decayed to baseline ∼200 ms preresponse. By contrast, HGP in left SPL increased for CRs early after stimulus onset (200-300 ms) and late in the memory decision (from 700 ms to response). These memory-related effects were unique to left PPC, as they were not observed in right PPC. Finally, memory-related HGP in left IPS and SPL was sufficiently reliable to enable brain-based decoding of the participant's memory state at the single-trial level, using multivariate pattern classification. Collectively, these data provide insights into left PPC temporal dynamics as humans make recognition memory decisions. PMID:26283375

  20. Recognition-Domain Focused (RDF) Chemosensors: Versatile and Efficient Reporters of Protein Kinase Activity

    PubMed Central

    Luković, Elvedin; González-Vera, Juan A.; Imperiali, Barbara

    2009-01-01

    Catalyzed by kinases, serine/threonine and tyrosine phosphorylation is a vital mechanism of intracellular regulation. Thus, assays that easily monitor kinase activity are critical in both academic and pharmaceutical settings. We previously developed sulfonamido-oxine (Sox)-based fluorescent peptides following a β-turn focused (BTF) design for the continuous assay of kinase activity in vitro and in cell lysates. Upon phosphorylation of the Sox-containing peptide, the chromophore binds Mg2+ and undergoes chelation-enhanced fluorescence (CHEF). While the design was applied successfully to the development of several kinase sensors, an intrinsic limitation was that only residues C- or N-terminal to the phosphorylated residue could be used to derive specificity for the target kinase. To address this limitation, a new, recognition-domain focused (RDF) strategy has been developed that also relies on CHEF. In this approach, the requirement for the constrained β-turn motif is obviated by alkylation of a cysteine residue with a Sox-based derivative to afford an amino acid termed C-Sox. The RDF design allows inclusion of extended binding determinants to maximize recognition by the cognate kinase, which has now permitted the construction of chemosensors for a variety of representative Ser/Thr (PKCα, PKCβI, PKCδ, Pim2, Akt1, MK2 and PKA), as well as receptor (IRK) and non-receptor (Src, Abl) Tyr kinases with greatly enhanced selectivity. The new sensors have up to 28-fold improved catalytic efficiency and up to 66-fold lower KM when compared to the corresponding BTF probes. The improved generality of the strategy is exemplified with the synthesis and analysis of Sox-based probes for PKCβI and PKCδ, which were previously unattainable using the BTF approach. PMID:18759402

  1. Ambient temperature normalization for infrared face recognition based on the second-order polynomial model

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi

    2015-08-01

    The influence of ambient temperature is a big challenge for robust infrared face recognition. This paper proposes a new ambient temperature normalization algorithm to improve the performance of infrared face recognition under variable ambient temperatures. Based on statistical regression theory, a second-order polynomial model is learned to describe the ambient temperature's impact on the infrared face image. Then, the infrared image is normalized to a reference ambient temperature using the second-order polynomial model. Finally, this normalization method is applied to infrared face recognition to verify its efficiency. The experiments demonstrate that the proposed temperature normalization method is feasible and can significantly improve the robustness of infrared face recognition.
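
    A minimal sketch of such a normalization is given below, assuming the second-order model is fitted per facial region (or per pixel) with numpy.polyfit and applied as an additive correction towards a chosen reference temperature; the reference value and the additive form are assumptions, not details taken from the paper.

        import numpy as np

        def fit_temperature_model(values, ambient_temps):
            """Fit a second-order polynomial g(T) describing how a region's mean
            intensity varies with ambient temperature T."""
            return np.polyfit(ambient_temps, values, deg=2)

        def normalize(value, ambient_temp, coeffs, ref_temp=25.0):
            """Shift an observed value to the reference temperature by the
            model-predicted offset g(T_ref) - g(T)."""
            g = np.poly1d(coeffs)
            return value + (g(ref_temp) - g(ambient_temp))

        # Toy usage: intensities of one facial region recorded at several ambient
        # temperatures during enrolment (synthetic behaviour).
        temps = np.array([15, 18, 21, 24, 27, 30], dtype=float)
        values = 120 + 0.8 * temps + 0.05 * temps**2
        coeffs = fit_temperature_model(values, temps)
        print(round(normalize(values[0], temps[0], coeffs), 2),
              round(normalize(values[-1], temps[-1], coeffs), 2))    # roughly equal after normalization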

  2. Mechanistic insights into metal ion activation and operator recognition by the ferric uptake regulator

    NASA Astrophysics Data System (ADS)

    Deng, Zengqin; Wang, Qing; Liu, Zhao; Zhang, Manfeng; Machado, Ana Carolina Dantas; Chiu, Tsu-Pei; Feng, Chong; Zhang, Qi; Yu, Lin; Qi, Lei; Zheng, Jiangge; Wang, Xu; Huo, Xinmei; Qi, Xiaoxuan; Li, Xiaorong; Wu, Wei; Rohs, Remo; Li, Ying; Chen, Zhongzhou

    2015-07-01

    Ferric uptake regulator (Fur) plays a key role in the iron homeostasis of prokaryotes, such as bacterial pathogens, but the molecular mechanisms and structural basis of Fur-DNA binding remain incompletely understood. Here, we report high-resolution structures of Magnetospirillum gryphiswaldense MSR-1 Fur in four different states: apo-Fur, holo-Fur, the Fur-feoAB1 operator complex and the Fur-Pseudomonas aeruginosa Fur box complex. Apo-Fur is a transition metal ion-independent dimer whose binding induces profound conformational changes and confers DNA-binding ability. Structural characterization, mutagenesis, biochemistry and in vivo data reveal that Fur recognizes DNA by using a combination of base readout through direct contacts in the major groove and shape readout through recognition of the minor-groove electrostatic potential by lysine. The resulting conformational plasticity enables Fur binding to diverse substrates. Our results provide insights into metal ion activation and substrate recognition by Fur that suggest pathways to engineer magnetotactic bacteria and antipathogenic drugs.

  3. Mechanistic insights into metal ion activation and operator recognition by the ferric uptake regulator

    PubMed Central

    Deng, Zengqin; Wang, Qing; Liu, Zhao; Zhang, Manfeng; Machado, Ana Carolina Dantas; Chiu, Tsu-Pei; Feng, Chong; Zhang, Qi; Yu, Lin; Qi, Lei; Zheng, Jiangge; Wang, Xu; Huo, XinMei; Qi, Xiaoxuan; Li, Xiaorong; Wu, Wei; Rohs, Remo; Li, Ying; Chen, Zhongzhou

    2015-01-01

    Ferric uptake regulator (Fur) plays a key role in the iron homeostasis of prokaryotes, such as bacterial pathogens, but the molecular mechanisms and structural basis of Fur–DNA binding remain incompletely understood. Here, we report high-resolution structures of Magnetospirillum gryphiswaldense MSR-1 Fur in four different states: apo-Fur, holo-Fur, the Fur–feoAB1 operator complex and the Fur–Pseudomonas aeruginosa Fur box complex. Apo-Fur is a transition metal ion-independent dimer whose binding induces profound conformational changes and confers DNA-binding ability. Structural characterization, mutagenesis, biochemistry and in vivo data reveal that Fur recognizes DNA by using a combination of base readout through direct contacts in the major groove and shape readout through recognition of the minor-groove electrostatic potential by lysine. The resulting conformational plasticity enables Fur binding to diverse substrates. Our results provide insights into metal ion activation and substrate recognition by Fur that suggest pathways to engineer magnetotactic bacteria and antipathogenic drugs. PMID:26134419

  4. Influence of music with different volumes and styles on recognition activity in humans.

    PubMed

    Pavlygina, R A; Sakharov, D S; Davydov, V I; Avdonkin, A V

    2010-10-01

    The efficiency of the recognition of masked visual images (Arabic numerals) increased when recognition was accompanied by classical music (62 dB) or rock music (25 dB). These changes were accompanied by increases in the coherence of potentials in the frontal areas compared with recognition without music. Changes in intercenter EEG relationships correlated with the formation of a dominant at the behavioral level. When loud music (85 dB) or music of other styles was used, these changes in behavior and the EEG were not seen; however, the coherence of potentials in the temporal and motor cortex of the right hemisphere increased and the latent periods of motor reactions of the hands decreased. These results provide evidence that the "recognition" dominant is formed when there are particular ratios of the levels of excitation in the corresponding centers, which should be considered when there is a need to increase the efficiency of recognition activity in humans.

  5. Computer-Based Voice Recognition: Characteristics, Applications, and Guidelines for Use.

    ERIC Educational Resources Information Center

    Milheim, William D.

    1993-01-01

    Describes computer-based voice recognition technology, including disadvantages; identifies vocabulary, training requirements, and ability to understand continuous speech as the basic characteristics of voice-recognition systems; describes applications in education and industry; suggests guidelines for design and implementation; and discusses…

  6. Effects of Bilateral Eye Movements on Gist Based False Recognition in the DRM Paradigm

    ERIC Educational Resources Information Center

    Parker, Andrew; Dagnall, Neil

    2007-01-01

    The effects of saccadic bilateral (horizontal) eye movements on gist based false recognition was investigated. Following exposure to lists of words related to a critical but non-studied word participants were asked to engage in 30s of bilateral vs. vertical vs. no eye movements. Subsequent testing of recognition memory revealed that those who…

  7. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send...

  8. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send...

  9. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send...

  10. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send...

  11. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send...

  12. 38 CFR 52.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Adult Day Health Care in State Homes § 52.20 Application for recognition based on certification. To apply for recognition and certification of a State home for adult day health care, a...

  13. A study of speech emotion recognition based on hybrid algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Ju-xia; Zhang, Chao; Lv, Zhao; Rao, Yao-quan; Wu, Xiao-pei

    2011-10-01

    To effectively improve the recognition accuracy of a speech emotion recognition system, a hybrid algorithm which combines the Continuous Hidden Markov Model (CHMM), the All-Class-in-One Neural Network (ACON) and the Support Vector Machine (SVM) is proposed. In the SVM and ACON methods, some global statistics are used as emotional features, while in the CHMM method, instantaneous features are employed. The recognition rate of the proposed method is 92.25%, with a rejection rate of 0.78%. Furthermore, it achieves relative improvements of 8.53%, 4.69% and 0.78% over the ACON, CHMM and SVM methods, respectively. The experimental results confirm the efficiency of distinguishing the anger, happiness, neutral and sadness emotional states.
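
    As a rough sketch of the global-statistics branch only (the SVM path; the CHMM and ACON components are not reproduced), the code below computes utterance-level MFCC statistics with librosa and feeds them to an RBF SVM; the file paths, labels and feature choice are hypothetical.

        import numpy as np
        import librosa
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        def global_stats(wav_path):
            """Utterance-level statistics of MFCCs as a simple global feature set."""
            y, sr = librosa.load(wav_path, sr=16000)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                                   mfcc.min(axis=1), mfcc.max(axis=1)])

        # Hypothetical labelled corpus: list of (path, emotion) pairs.
        # corpus = [("anger_01.wav", "anger"), ("happy_01.wav", "happiness"), ...]
        # X = np.vstack([global_stats(p) for p, _ in corpus])
        # y = [label for _, label in corpus]
        # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        # clf.fit(X, y)
        # print(clf.predict(global_stats("test.wav").reshape(1, -1)))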

  14. Episodic Reasoning for Vision-Based Human Action Recognition

    PubMed Central

    Martinez-del-Rincon, Jesus

    2014-01-01

    Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretations and video analysis applications, few have realized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by only analyzing body postures. On the contrary, this task should be supported by profound knowledge of human agency nature and its tight connection to the reasons and motivations that explain it. The combination of this knowledge and the knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has in improving the accuracy of a computer vision system for human action recognition. This work also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning. PMID:24959602

  15. Face Image Gender Recognition Based on Gabor Transform and SVM

    NASA Astrophysics Data System (ADS)

    Yan, Chunjuan

    To overcome the disturbance of non-essential information such as illumination variation and facial expression changes, a new algorithm for face image gender recognition is proposed in this paper. The 2-D Gabor transform is used to extract facial features; a new method is put forward to reduce the dimensionality of the Gabor transform output in order to speed up SVM training; finally, gender recognition is accomplished with an SVM classifier. Good gender classification performance is achieved on a relatively large-scale, low-resolution face database.
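
    The sketch below illustrates one plausible realization: a small Gabor filter bank built with OpenCV, crude down-sampling of the response magnitudes in place of the paper's (unspecified here) dimensionality-reduction method, and a linear SVM; filter parameters, image size and labels are illustrative assumptions.

        import numpy as np
        import cv2
        from sklearn.svm import LinearSVC

        def gabor_features(gray_face, size=(64, 64)):
            """Concatenate down-sampled magnitudes of a 4-scale x 6-orientation
            Gabor filter bank applied to a resized grey-scale face."""
            face = cv2.resize(gray_face, size).astype(np.float32) / 255.0
            feats = []
            for lambd in (4, 8, 12, 16):                      # wavelengths (scales)
                for theta in np.arange(0, np.pi, np.pi / 6):  # orientations
                    kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                              lambd=lambd, gamma=0.5, psi=0)
                    resp = cv2.filter2D(face, cv2.CV_32F, kern)
                    feats.append(cv2.resize(np.abs(resp), (8, 8)).ravel())
            return np.concatenate(feats)

        # Usage sketch (paths and labels are hypothetical):
        # X = np.vstack([gabor_features(cv2.imread(p, cv2.IMREAD_GRAYSCALE)) for p in paths])
        # clf = LinearSVC(C=1.0).fit(X, genders)              # genders: 0 = female, 1 = male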

  16. Robust recognition of handwritten numerals based on dual cooperative network

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan; Choi, Yeongwoo

    1992-01-01

    An approach to robust recognition of handwritten numerals using two operating parallel networks is presented. The first network uses inputs in Cartesian coordinates, and the second network uses the same inputs transformed into polar coordinates. How the proposed approach realizes the robustness to local and global variations of input numerals by handling inputs both in Cartesian coordinates and in its transformed Polar coordinates is described. The required network structures and its learning scheme are discussed. Experimental results show that by tracking only a small number of distinctive features for each teaching numeral in each coordinate, the proposed system can provide robust recognition of handwritten numerals.

  17. Detection and recognition of analytes based on their crystallization patterns

    DOEpatents

    Morozov, Victor; Bailey, Charles L.; Vsevolodov, Nikolai N.; Elliott, Adam

    2008-05-06

    The invention contemplates a method for the recognition of proteins and other biological molecules by imaging the morphology, size and distribution of crystalline and amorphous dry residues in droplets (further referred to as the "crystallization pattern") containing a predetermined amount of certain crystal-forming organic compounds (reporters) to which the protein to be analyzed is added. It has been shown that changes in the crystallization patterns of a number of amino acids can be used as a "signature" of the protein added. It was also found that both the character of the change in the crystallization pattern and the fact of such a change can be used as recognition elements in the analysis of protein molecules.

  18. Subauditory Speech Recognition based on EMG/EPG Signals

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles; Lee, Diana Dee; Agabon, Shane; Lau, Sonie (Technical Monitor)

    2003-01-01

    Sub-vocal electromyogram/electro palatogram (EMG/EPG) signal classification is demonstrated as a method for silent speech recognition. Recorded electrode signals from the larynx and sublingual areas below the jaw are noise filtered and transformed into features using complex dual quad tree wavelet transforms. Feature sets for six sub-vocally pronounced words are trained using a trust region scaled conjugate gradient neural network. Real time signals for previously unseen patterns are classified into categories suitable for primitive control of graphic objects. Feature construction, recognition accuracy and an approach for extension of the technique to a variety of real world application areas are presented.

  19. The structure of sulfated polysaccharides ensures a carbohydrate-based mechanism for species recognition during sea urchin fertilization.

    PubMed

    Vilela-Silva, Ana-Cristina E S; Hirohashi, Noritaka; Mourão, Paulo A S

    2008-01-01

    The evolution of barriers to inter-specific hybridization is a crucial step in the fertilization of free spawning marine invertebrates. In sea urchins, molecular recognition between sperm and egg ensures species recognition. Here we review the sulfated polysaccharide-based mechanism of sperm-egg recognition in this model organism. The jelly surrounding sea urchin eggs is not a simple accessory structure; it is molecularly complex and intimately involved in gamete recognition. It contains sulfated polysaccharides, sialoglycans and peptides. The sulfated polysaccharides have unique structures, composed of repetitive units of alpha-L-fucose or alpha-L-galactose, which differ among species in the sulfation pattern and/or the position of the glycosidic linkage. The egg jelly sulfated polysaccharides show species-specificity in inducing the sperm acrosome reaction, which is regulated by the structure of the saccharide chain and its sulfation pattern. Other components of the egg jelly do not possess acrosome reaction inducing activity, but sialoglycans act in synergy with the sulfated polysaccharide, potentiating its activity. The system we describe establishes a new view of cell-cell interaction in the sea urchin model system. Here, structural changes in egg jelly polysaccharides modulate cell-cell recognition and species-specificity leading to exocytosis of the acrosome. Therefore, sulfated polysaccharides, in addition to their known functions as growth factors, coagulation factors and selectin binding partners, also function in fertilization. The differentiation of these molecules may play a role in sea urchin speciation.

  20. Model-based automatic target recognition using hierarchical foveal machine vision

    NASA Astrophysics Data System (ADS)

    McKee, Douglas C.; Bandera, Cesar; Ghosal, Sugata; Rauss, Patrick J.

    1996-06-01

    This paper presents target detection and interrogation techniques for a foveal automatic target recognition (ATR) system based on the hierarchical scale-space processing of imagery from a rectilinear tessellated multiacuity retinotopology. Conventional machine vision captures imagery and applies early vision techniques with uniform resolution throughout the field-of-view (FOV). In contrast, foveal active vision features graded acuity imagers and processing coupled with context sensitive gaze control, analogous to that prevalent throughout vertebrate vision. Foveal vision can operate more efficiently in dynamic scenarios with localized relevance than uniform acuity vision because resolution is treated as a dynamically allocable resource. Foveal ATR exploits the difference between detection and recognition resolution requirements and sacrifices peripheral acuity to achieve a wider FOV (e.g., faster search), greater localized resolution where needed (e.g., more confident recognition at the fovea), and faster frame rates (e.g., more reliable tracking and navigation) without increasing processing requirements. The rectilinearity of the retinotopology supports a data structure that is a subset of the image pyramid. This structure lends itself to multiresolution and conventional 2-D algorithms, and features a shift invariance of perceived target shape that tolerates sensor pointing errors and supports multiresolution model-based techniques. The detection technique described in this paper searches for regions-of-interest (ROIs) using the foveal sensor's wide FOV peripheral vision. ROIs are initially detected using anisotropic diffusion filtering and expansion template matching to a multiscale Zernike polynomial-based target model. Each ROI is then interrogated to filter out false target ROIs by sequentially pointing a higher acuity region of the sensor at each ROI centroid and conducting a fractal dimension test that distinguishes targets from structured clutter.
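
    The fractal-dimension test mentioned above can be illustrated with a simple box-counting estimate on a binary ROI mask. This is a generic sketch under the assumption that a thresholded ROI is available, not the authors' implementation; the random mask and the 1.8 clutter cut-off are purely hypothetical.

      import numpy as np

      def box_counting_dimension(mask):
          """Estimate the fractal (box-counting) dimension of a binary 2-D mask."""
          # Pad the mask into a square whose side is a power of two.
          n = 1 << int(np.ceil(np.log2(max(mask.shape))))
          padded = np.zeros((n, n), dtype=bool)
          padded[:mask.shape[0], :mask.shape[1]] = mask.astype(bool)

          sizes, counts = [], []
          size = n
          while size >= 2:
              # Count boxes of the current size containing any foreground pixel.
              view = padded.reshape(n // size, size, n // size, size)
              occupied = int(view.any(axis=(1, 3)).sum())
              if occupied > 0:
                  sizes.append(size)
                  counts.append(occupied)
              size //= 2

          # Slope of log(count) against log(1/size) estimates the dimension.
          slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
          return slope

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          roi = rng.random((128, 128)) > 0.7          # stand-in for a thresholded ROI
          d = box_counting_dimension(roi)
          # A compact man-made target tends to give a lower estimate than
          # structured clutter; the 1.8 cut-off here is purely hypothetical.
          print("estimated dimension:", round(d, 2), "->", "clutter" if d > 1.8 else "target-like")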

  1. Research on pavement crack recognition methods based on image processing

    NASA Astrophysics Data System (ADS)

    Cai, Yingchun; Zhang, Yamin

    2011-06-01

    In order to briefly review and analyze pavement crack recognition methods and to identify the problems that currently exist in pavement crack image processing, popular crack image processing methods such as the neural network method, the morphology method, the fuzzy logic method and traditional image processing are discussed, and some effective solutions to those problems are presented.

  2. Evaluating a county-based Healthy nail Salon Recognition Program

    EPA Science Inventory

    To determine whether nail salons that participate in the SF recognition program have reduced measured levels of toluene, methyl methacrylate (MMA), and total volatile organic compounds (TVOC) as compared to nail salons that do not participate. We also evaluated changes in worker ...

  3. Comparison of computer-based and optical face recognition paradigms

    NASA Astrophysics Data System (ADS)

    Alorf, Abdulaziz A.

    The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection by using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection by using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition, and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB(c) software was used for simulating the models. PCA is a technique used for identifying patterns in data and representing the data in order to highlight any similarities or differences. The identification of patterns in data of high dimensions (more than three dimensions) is difficult because graphical representation of such data is impossible. Therefore, PCA is a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data. It uses information theory for improving PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency plane filter for coherent optical systems. The IPCA algorithm, in general, behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection due to the fact that it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers
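
    The PCA side of the comparison can be sketched as a plain eigenface-style nearest-neighbour recognizer; the IPCA variant and the optical JTC model are not reproduced here, and the random "gallery" below stands in for real face images.

      import numpy as np

      def pca_fit(X, k):
          """Fit PCA on row-vectorized images X (n_samples x n_pixels); keep k components."""
          mean = X.mean(axis=0)
          # Rows of Vt from the economy-size SVD are the principal directions ("eigenfaces").
          _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
          return mean, Vt[:k]

      def pca_project(X, mean, components):
          return (X - mean) @ components.T

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          gallery = rng.random((20, 32 * 32))          # stand-in for 20 enrolled face images
          mean, comps = pca_fit(gallery, k=10)
          gallery_feats = pca_project(gallery, mean, comps)

          probe = gallery[3] + 0.05 * rng.standard_normal(32 * 32)   # noisy copy of face 3
          probe_feat = pca_project(probe[None, :], mean, comps)
          # Nearest neighbour in the reduced space decides the identity.
          dists = np.linalg.norm(gallery_feats - probe_feat, axis=1)
          print("recognized as subject", int(np.argmin(dists)))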

  4. Recognition- and reactivity-based fluorescent probes for studying transition metal signaling in living systems.

    PubMed

    Aron, Allegra T; Ramos-Torres, Karla M; Cotruvo, Joseph A; Chang, Christopher J

    2015-08-18

    Metals are essential for life, playing critical roles in all aspects of the central dogma of biology (e.g., the transcription and translation of nucleic acids and synthesis of proteins). Redox-inactive alkali, alkaline earth, and transition metals such as sodium, potassium, calcium, and zinc are widely recognized as dynamic signals, whereas redox-active transition metals such as copper and iron are traditionally thought of as sequestered by protein ligands, including as static enzyme cofactors, in part because of their potential to trigger oxidative stress and damage via Fenton chemistry. Metals in biology can be broadly categorized into two pools: static and labile. In the former, proteins and other macromolecules tightly bind metals; in the latter, metals are bound relatively weakly to cellular ligands, including proteins and low molecular weight ligands. Fluorescent probes can be useful tools for studying the roles of transition metals in their labile forms. Probes for imaging transition metal dynamics in living systems must meet several stringent criteria. In addition to exhibiting desirable photophysical properties and biocompatibility, they must be selective and show a fluorescence turn-on response to the metal of interest. To meet this challenge, we have pursued two general strategies for metal detection, termed "recognition" and "reactivity". Our design of transition metal probes makes use of a recognition-based approach for copper and nickel and a reactivity-based approach for cobalt and iron. This Account summarizes progress in our laboratory on both the development and application of fluorescent probes to identify and study the signaling roles of transition metals in biology. In conjunction with complementary methods for direct metal detection and genetic and/or pharmacological manipulations, fluorescent probes for transition metals have helped reveal a number of principles underlying transition metal dynamics. In this Account, we give three recent

  5. Cueing vocabulary during sleep increases theta activity during later recognition testing.

    PubMed

    Schreiner, Thomas; Göldi, Maurice; Rasch, Björn

    2015-11-01

    Neural oscillations in the theta band have repeatedly been implicated in successful memory encoding and retrieval. Several recent studies have shown that memory retrieval can be facilitated by reactivating memories during their consolidation during sleep. However, it is still unknown whether reactivation during sleep also enhances subsequent retrieval-related neural oscillations. We have recently demonstrated that foreign vocabulary cues presented during sleep improve later recall of the associated translations. Here, we examined the effect of cueing foreign vocabulary during sleep on oscillatory activity during subsequent recognition testing after sleep. We show that those words that were replayed during sleep after learning (cued words) elicited stronger centroparietal theta activity during recognition as compared to noncued words. The reactivation-induced increase in theta oscillations during later recognition testing might reflect a strengthening of individual memory traces and the integration of the newly learned words into the mental lexicon by cueing during sleep.

  6. Robust and discriminating method for face recognition based on correlation technique and independent component analysis model.

    PubMed

    Alfalou, A; Brosseau, C

    2011-03-01

    We demonstrate a novel technique for face recognition. Our approach relies on the performances of a strongly discriminating optical correlation method along with the robustness of the independent component analysis (ICA) model. Simulations were performed to illustrate how this algorithm can identify a face with images from the Pointing Head Pose Image Database. While maintaining algorithmic simplicity, this approach based on ICA representation significantly increases the true recognition rate compared to that obtained using our previously developed all-numerical ICA identity recognition method and another method based on optical correlation and a standard composite filter. PMID:21368935

  7. Neural speech recognition: continuous phoneme decoding using spatiotemporal representations of human cortical activity

    NASA Astrophysics Data System (ADS)

    Moses, David A.; Mesgarani, Nima; Leonard, Matthew K.; Chang, Edward F.

    2016-10-01

    Objective. The superior temporal gyrus (STG) and neighboring brain regions play a key role in human language processing. Previous studies have attempted to reconstruct speech information from brain activity in the STG, but few of them incorporate the probabilistic framework and engineering methodology used in modern speech recognition systems. In this work, we describe the initial efforts toward the design of a neural speech recognition (NSR) system that performs continuous phoneme recognition on English stimuli with arbitrary vocabulary sizes using the high gamma band power of local field potentials in the STG and neighboring cortical areas obtained via electrocorticography. Approach. The system implements a Viterbi decoder that incorporates phoneme likelihood estimates from a linear discriminant analysis model and transition probabilities from an n-gram phonemic language model. Grid searches were used in an attempt to determine optimal parameterizations of the feature vectors and Viterbi decoder. Main results. The performance of the system was significantly improved by using spatiotemporal representations of the neural activity (as opposed to purely spatial representations) and by including language modeling and Viterbi decoding in the NSR system. Significance. These results emphasize the importance of modeling the temporal dynamics of neural responses when analyzing their variations with respect to varying stimuli and demonstrate that speech recognition techniques can be successfully leveraged when decoding speech from neural signals. Guided by the results detailed in this work, further development of the NSR system could have applications in the fields of automatic speech recognition and neural prosthetics.
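
    The decoding step described here, frame-wise phoneme likelihoods combined with bigram transition probabilities in a Viterbi search, can be sketched as follows. The likelihoods and transition matrix below are random stand-ins rather than LDA outputs or a trained phonemic language model.

      import numpy as np

      def viterbi(log_likelihoods, log_trans, log_prior):
          """
          log_likelihoods : (T, P) per-frame log-likelihood of each of P phonemes
          log_trans       : (P, P) log bigram transition probabilities (from -> to)
          log_prior       : (P,)   log prior over the initial phoneme
          Returns the most likely phoneme index for every frame.
          """
          T, P = log_likelihoods.shape
          delta = np.empty((T, P))
          back = np.zeros((T, P), dtype=int)
          delta[0] = log_prior + log_likelihoods[0]
          for t in range(1, T):
              scores = delta[t - 1][:, None] + log_trans       # (P, P): from -> to
              back[t] = scores.argmax(axis=0)
              delta[t] = scores.max(axis=0) + log_likelihoods[t]
          path = np.zeros(T, dtype=int)
          path[-1] = int(delta[-1].argmax())
          for t in range(T - 2, -1, -1):                       # backtrack
              path[t] = back[t + 1, path[t + 1]]
          return path

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          T, P = 50, 6                                         # 50 frames, 6 hypothetical phonemes
          log_like = np.log(rng.dirichlet(np.ones(P), size=T))
          log_trans = np.log(rng.dirichlet(np.ones(P), size=P))
          print(viterbi(log_like, log_trans, np.log(np.full(P, 1.0 / P))))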

  8. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadow and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains much important information regardless of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  9. Secondary iris recognition method based on local energy-orientation feature

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing

    2015-01-01

    This paper proposes a secondary iris recognition method based on local features. An energy-orientation feature (EOF) is first extracted from the iris with a two-dimensional Gabor filter and used for a first recognition stage with a similarity threshold, which divides the whole iris database into two categories: a correctly recognized class and a class still to be recognized. The former are accepted, while the latter are transformed by histogram into an energy-orientation histogram feature (EOHF), which is then used for a second recognition stage with the chi-square distance. Experiments show that, owing to its higher correct recognition rate, the proposed method is among the most efficient and effective of comparable iris recognition algorithms.
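
    The second-stage comparison, a chi-square distance between energy-orientation histograms, reduces to a few lines; the Gabor-based EOF/EOHF extraction is not reproduced, and the histograms below are synthetic.

      import numpy as np

      def chi_square_distance(h1, h2, eps=1e-12):
          """Chi-square distance between two histograms (normalized to unit mass first)."""
          h1 = h1 / (h1.sum() + eps)
          h2 = h2 / (h2.sum() + eps)
          return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          enrolled = rng.random(32)              # stand-in for an enrolled orientation histogram
          genuine = enrolled + 0.05 * rng.random(32)
          impostor = rng.random(32)
          print("genuine distance :", round(chi_square_distance(enrolled, genuine), 4))
          print("impostor distance:", round(chi_square_distance(enrolled, impostor), 4))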

  10. Long-term activity recognition from wristwatch accelerometer data.

    PubMed

    Garcia-Ceja, Enrique; Brena, Ramon F; Carrasco-Jimenez, Jose C; Garrido, Leonardo

    2014-01-01

    With the development of wearable devices that have several embedded sensors, it is possible to collect data that can be analyzed in order to understand the user's needs and provide personalized services. Examples of these types of devices are smartphones, fitness-bracelets, smartwatches, just to mention a few. In the last years, several works have used these devices to recognize simple activities like running, walking, sleeping, and other physical activities. There has also been research on recognizing complex activities like cooking, sporting, and taking medication, but these generally require the installation of external sensors that may become obtrusive to the user. In this work we used acceleration data from a wristwatch in order to identify long-term activities. We compare the use of Hidden Markov Models and Conditional Random Fields for the segmentation task. We also added prior knowledge into the models regarding the duration of the activities by coding them as constraints and sequence patterns were added in the form of feature functions. We also performed subclassing in order to deal with the problem of intra-class fragmentation, which arises when the same label is applied to activities that are conceptually the same but very different from the acceleration point of view. PMID:25436652

  11. The disruptive effects of processing fluency on familiarity-based recognition in amnesia.

    PubMed

    Ozubko, Jason D; Yonelinas, Andrew P

    2014-02-01

    Amnesia leads to a deficit in recollection that leaves familiarity-based recognition relatively spared. Familiarity is thought to be based on the fluent processing of studied items compared to novel items. However, whether amnesic patients respond normally to direct manipulations of processing fluency is not yet known. In the current study, we manipulated processing fluency by preceding each test item with a semantically related or unrelated prime item, and measured both recollection and familiarity using a remember-know recognition procedure. In healthy controls, enhancing processing fluency increased familiarity-based recognition responses for both old and new words, leaving familiarity-based accuracy constant. However, in patients with MTL damage, enhancing fluency only increased familiarity-based recognition responses for new items, resulting in decreased familiarity-based recognition accuracy. Importantly, this fluency-related decrease in recognition accuracy was not due to overall lower levels of performance or impaired recollection of studied items because it was not observed in healthy subjects that studied words under conditions that lowered performance by reducing recollection. The results indicate that direct manipulations of processing fluency can disrupt familiarity-based discrimination in amnesia. Potential accounts of these findings are discussed.

  12. Wavelet-based learning vector quantization for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Chan, Lipchen A.; Nasrabadi, Nasser M.; Mirelli, Vincent

    1996-06-01

    An automatic target recognition classifier is constructed that uses a set of dedicated vector quantizers (VQs). The background pixels in each input image are properly clipped out by a set of aspect windows. The extracted target area for each aspect window is then enlarged to a fixed size, after which a wavelet decomposition splits the enlarged extraction into several subbands. A dedicated VQ codebook is generated for each subband of a particular target class at a specific range of aspects. Thus, each codebook consists of a set of feature templates that are iteratively adapted to represent a particular subband of a given target class at a specific range of aspects. These templates are then further trained by a modified learning vector quantization (LVQ) algorithm that enhances their discriminatory characteristics. A recognition rate of 69.0 percent is achieved on a highly cluttered test set.
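
    A minimal sketch of the basic LVQ1 update that discriminative template training builds on; the wavelet subband codebooks and the authors' modified LVQ rule are not reproduced, and the two 2-D "target classes" below are synthetic.

      import numpy as np

      def lvq1_train(prototypes, proto_labels, X, y, lr=0.05, epochs=20):
          """Basic LVQ1: move the winning prototype toward same-class samples, away otherwise."""
          protos = prototypes.astype(float).copy()
          for _ in range(epochs):
              for x, label in zip(X, y):
                  winner = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
                  sign = 1.0 if proto_labels[winner] == label else -1.0
                  protos[winner] += sign * lr * (x - protos[winner])
          return protos

      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          # Two synthetic "target classes" in a 2-D feature space.
          X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
          y = np.array([0] * 50 + [1] * 50)
          protos = lvq1_train(np.array([[1.0, 1.0], [3.0, 3.0]]), np.array([0, 1]), X, y)
          preds = np.array([np.argmin(np.linalg.norm(protos - x, axis=1)) for x in X])
          print("training accuracy:", (preds == y).mean())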

  13. Model Based Object Recognition Using LORD LTS-300 Touch Sensor

    NASA Astrophysics Data System (ADS)

    Roach, J. W.; Paripati, P. K.; Wade, M.

    1988-03-01

    This paper reports the result of a model driven touch sensor recognition experiment. The touch sensor employed is a large field tactile array. Object features appropriate for touch sensor recognition are extracted from a geometric model of an object, the dual spherical image. Both geometric and dynamic features are used to identify objects and their position and orientation on the touch sensor. Experiments show that geometric features extracted from the model are effective but that dynamic features must be determined empirically. Correct object identification rates even for very similar objects exceed ninety percent, a success rate much higher than we would have expected from only two-dimensional contact patterns. Position and orientation of objects once identified are very reliable. We conclude that large field tactile sensors could prove very useful in the automatic palletizing problem when object models (from a CAD system, for example) can be utilized.

  14. Design and implementation of face recognition system based on Windows

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Liu, Ting; Li, Ailan

    2015-07-01

    Because the standard Windows password-based login is neither secure nor convenient to operate, we introduce a biometric technology, face recognition, into the computer login system. It not only secures access to the computer system but can also identify administrators at different permission levels. With the resulting improvement in system security, users no longer face cumbersome password entry or the risk of having confidential passwords stolen.

  15. Feature activation during word recognition: action, visual, and associative-semantic priming effects

    PubMed Central

    Lam, Kevin J. Y.; Dijkstra, Ton; Rueschemeyer, Shirley-Ann

    2015-01-01

    Embodied theories of language postulate that language meaning is stored in modality-specific brain areas generally involved in perception and action in the real world. However, the temporal dynamics of the interaction between modality-specific information and lexical-semantic processing remain unclear. We investigated the relative timing at which two types of modality-specific information (action-based and visual-form information) contribute to lexical-semantic comprehension. To this end, we applied a behavioral priming paradigm in which prime and target words were related with respect to (1) action features, (2) visual features, or (3) semantically associative information. Using a Go/No-Go lexical decision task, priming effects were measured across four different inter-stimulus intervals (ISI = 100, 250, 400, and 1000 ms) to determine the relative time course of the different features. Notably, action priming effects were found in ISIs of 100, 250, and 1000 ms whereas a visual priming effect was seen only in the ISI of 1000 ms. Importantly, our data suggest that features follow different time courses of activation during word recognition. In this regard, feature activation is dynamic, measurable in specific time windows but not in others. Thus the current study (1) demonstrates how multiple ISIs can be used within an experiment to help chart the time course of feature activation and (2) provides new evidence for embodied theories of language. PMID:26074836

  16. Recognition of user's activity for adaptive cooperative assistance in robotic surgery.

    PubMed

    Nessi, Federico; Beretta, Elisa; Ferrigno, Giancarlo; De Momi, Elena

    2015-01-01

    During hands-on robotic surgery it is advisable to know how and when to provide the surgeon with different assistance levels with respect to the currently performed activity. Gesteme-based on-line classification requires the definition of a complete set of primitives and the observation of a large percentage of the signal. In this work an on-line, gesteme-free activity recognition method is addressed. The algorithm models the guidance forces and the resulting trajectory of the manipulator with 26 low-level components of a Gaussian Mixture Model (GMM). Temporal switching among the components is modeled with a Hidden Markov Model (HMM). Tests were performed in a simplified scenario over a pool of 5 non-surgeon users. Classification accuracy was higher than 89% after the observation of a 300 ms-long signal. Future work will address the use of the currently detected activity to trigger, on-line, different strategies to control the manipulator and adapt the level of assistance. PMID:26737482

  17. Contextual action recognition and target localization with an active allocation of attention on a humanoid robot.

    PubMed

    Ognibene, Dimitri; Chinellato, Eris; Sarabia, Miguel; Demiris, Yiannis

    2013-09-01

    Exploratory gaze movements are fundamental for gathering the most relevant information regarding the partner during social interactions. Inspired by the cognitive mechanisms underlying human social behaviour, we have designed and implemented a system for a dynamic attention allocation which is able to actively control gaze movements during a visual action recognition task exploiting its own action execution predictions. Our humanoid robot is able, during the observation of a partner's reaching movement, to contextually estimate the goal position of the partner's hand and the location in space of the candidate targets. This is done while actively gazing around the environment, with the purpose of optimizing the gathering of information relevant for the task. Experimental results on a simulated environment show that active gaze control, based on the internal simulation of actions, provides a relevant advantage with respect to other action perception approaches, both in terms of estimation precision and of time required to recognize an action. Moreover, our model reproduces and extends some experimental results on human attention during an action perception.

  18. Segment-based acoustic models for continuous speech recognition

    NASA Astrophysics Data System (ADS)

    Ostendorf, Mari; Rohlicek, J. R.

    1994-02-01

    In this work, we are interested in the problem of large vocabulary, speaker-independent continuous speech recognition, and primarily in the acoustic modeling component of this problem. In developing acoustic models for speech recognition, we have conflicting goals. On one hand, the models should be robust to inter- and intra-speaker variability, to the use of a different vocabulary in recognition than in training, and to the effects of moderately noisy environments. In order to accomplish this, we need to model gross features and global trends. On the other hand, the models must be sensitive and detailed enough to detect fine acoustic differences between similar words in a large vocabulary task. To answer these opposing demands requires improvements in acoustic modeling at several levels: the frame level (e.g. signal processing), the phoneme level (e.g. modeling feature dynamics), and the utterance level (e.g. defining a structural context for representing the intra-utterance dependence across phonemes). This project addresses the problem of acoustic modeling, specifically focusing on modeling at the segment level and above.

  19. A Fast Goal Recognition Technique Based on Interaction Estimates

    NASA Technical Reports Server (NTRS)

    E-Martin, Yolanda; R-Moreno, Maria D.; Smith, David E.

    2015-01-01

    Goal Recognition is the task of inferring an actor's goals given some or all of the actor's observed actions. There is considerable interest in Goal Recognition for use in intelligent personal assistants, smart environments, intelligent tutoring systems, and monitoring user's needs. In much of this work, the actor's observed actions are compared against a generated library of plans. Recent work by Ramirez and Geffner makes use of AI planning to determine how closely a sequence of observed actions matches plans for each possible goal. For each goal, this is done by comparing the cost of a plan for that goal with the cost of a plan for that goal that includes the observed actions. This approach yields useful rankings, but is impractical for real-time goal recognition in large domains because of the computational expense of constructing plans for each possible goal. In this paper, we introduce an approach that propagates cost and interaction information in a plan graph, and uses this information to estimate goal probabilities. We show that this approach is much faster, but still yields high quality results.
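
    The cost-comparison idea attributed to Ramirez and Geffner can be sketched as turning per-goal plan-cost differences into a posterior over goals; the plan-graph interaction estimates introduced in this paper are not reproduced, and the costs below are invented.

      import math

      def goal_posteriors(cost_with_obs, cost_without_obs, beta=1.0, priors=None):
          """
          Turn plan-cost differences into a posterior over goals: the smaller the extra
          cost a goal incurs to accommodate the observed actions, the more likely it is.
          """
          goals = list(cost_with_obs)
          priors = priors or {g: 1.0 / len(goals) for g in goals}
          weights = {
              g: priors[g] * math.exp(-beta * (cost_with_obs[g] - cost_without_obs[g]))
              for g in goals
          }
          total = sum(weights.values())
          return {g: w / total for g, w in weights.items()}

      if __name__ == "__main__":
          # Hypothetical optimal plan costs for three candidate goals, with and without
          # requiring the plan to embed the observed actions.
          with_obs = {"goal_A": 12.0, "goal_B": 15.0, "goal_C": 22.0}
          without_obs = {"goal_A": 11.0, "goal_B": 10.0, "goal_C": 10.0}
          for goal, p in goal_posteriors(with_obs, without_obs).items():
              print(goal, round(p, 3))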

  20. Tautomerization-dependent recognition and excision of oxidation damage in base-excision DNA repair.

    PubMed

    Zhu, Chenxu; Lu, Lining; Zhang, Jun; Yue, Zongwei; Song, Jinghui; Zong, Shuai; Liu, Menghao; Stovicek, Olivia; Gao, Yi Qin; Yi, Chengqi

    2016-07-12

    NEIL1 (Nei-like 1) is a DNA repair glycosylase guarding the mammalian genome against oxidized DNA bases. As the first enzymes in the base-excision repair pathway, glycosylases must recognize the cognate substrates and catalyze their excision. Here we present crystal structures of human NEIL1 bound to a range of duplex DNA. Together with computational and biochemical analyses, our results suggest that NEIL1 promotes tautomerization of thymine glycol (Tg)-a preferred substrate-for optimal binding in its active site. Moreover, this tautomerization event also facilitates NEIL1-catalyzed Tg excision. To our knowledge, the present example represents the first documented case of enzyme-promoted tautomerization for efficient substrate recognition and catalysis in an enzyme-catalyzed reaction. PMID:27354518

  1. Tautomerization-dependent recognition and excision of oxidation damage in base-excision DNA repair.

    PubMed

    Zhu, Chenxu; Lu, Lining; Zhang, Jun; Yue, Zongwei; Song, Jinghui; Zong, Shuai; Liu, Menghao; Stovicek, Olivia; Gao, Yi Qin; Yi, Chengqi

    2016-07-12

    NEIL1 (Nei-like 1) is a DNA repair glycosylase guarding the mammalian genome against oxidized DNA bases. As the first enzymes in the base-excision repair pathway, glycosylases must recognize the cognate substrates and catalyze their excision. Here we present crystal structures of human NEIL1 bound to a range of duplex DNA. Together with computational and biochemical analyses, our results suggest that NEIL1 promotes tautomerization of thymine glycol (Tg)-a preferred substrate-for optimal binding in its active site. Moreover, this tautomerization event also facilitates NEIL1-catalyzed Tg excision. To our knowledge, the present example represents the first documented case of enzyme-promoted tautomerization for efficient substrate recognition and catalysis in an enzyme-catalyzed reaction.

  2. Activity recognition using Video Event Segmentation with Text (VEST)

    NASA Astrophysics Data System (ADS)

    Holloway, Hillary; Jones, Eric K.; Kaluzniacki, Andrew; Blasch, Erik; Tierno, Jorge

    2014-06-01

    Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity to compile related message and video clips for future interest. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.

  3. Phonological Activation during Visual Word Recognition in Deaf and Hearing Children

    ERIC Educational Resources Information Center

    Ormel, Ellen; Hermans, Daan; Knoors, Harry; Hendriks, Angelique; Verhoeven, Ludo

    2010-01-01

    Purpose: Phonological activation during visual word recognition was studied in deaf and hearing children under two circumstances: (a) when the use of phonology was not required for task performance and might even hinder it and (b) when the use of phonology was critical for task performance. Method: Deaf children mastering written Dutch and Sign…

  4. Dealing with the effects of sensor displacement in wearable activity recognition.

    PubMed

    Banos, Oresti; Toth, Mate Attila; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2014-06-06

    Most wearable activity recognition systems assume a predefined sensor deployment that remains unchanged during runtime. However, this assumption does not reflect real-life conditions. During the normal use of such systems, users may place the sensors in a position different from the predefined sensor placement. Also, sensors may move from their original location to a different one, due to a loose attachment. Activity recognition systems trained on activity patterns characteristic of a given sensor deployment may likely fail due to sensor displacements. In this work, we innovatively explore the effects of sensor displacement induced by both the intentional misplacement of sensors and self-placement by the user. The effects of sensor displacement are analyzed for standard activity recognition techniques, as well as for an alternate robust sensor fusion method proposed in a previous work. While classical recognition models show little tolerance to sensor displacement, the proposed method is proven to have notable capabilities to assimilate the changes introduced in the sensor position due to self-placement and provides considerable improvements for large misplacements.

  5. Dealing with the Effects of Sensor Displacement in Wearable Activity Recognition

    PubMed Central

    Banos, Oresti; Toth, Mate Attila; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2014-01-01

    Most wearable activity recognition systems assume a predefined sensor deployment that remains unchanged during runtime. However, this assumption does not reflect real-life conditions. During the normal use of such systems, users may place the sensors in a position different from the predefined sensor placement. Also, sensors may move from their original location to a different one, due to a loose attachment. Activity recognition systems trained on activity patterns characteristic of a given sensor deployment may likely fail due to sensor displacements. In this work, we innovatively explore the effects of sensor displacement induced by both the intentional misplacement of sensors and self-placement by the user. The effects of sensor displacement are analyzed for standard activity recognition techniques, as well as for an alternate robust sensor fusion method proposed in a previous work. While classical recognition models show little tolerance to sensor displacement, the proposed method is proven to have notable capabilities to assimilate the changes introduced in the sensor position due to self-placement and provides considerable improvements for large misplacements. PMID:24915181

  6. Noninvasive imaging of sialyltransferase activity in living cells by chemoselective recognition

    NASA Astrophysics Data System (ADS)

    Bao, Lei; Ding, Lin; Yang, Min; Ju, Huangxian

    2015-06-01

    To elucidate the biological and pathological functions of sialyltransferases (STs), intracellular ST activity evaluation is necessary. Focusing on the lack of noninvasive methods for obtaining the dynamic activity information, this work designs a sensing platform for in situ FRET imaging of intracellular ST activity and tracing of sialylation process. The system uses tetramethylrhodamine isothiocyanate labeled asialofetuin (TRITC-AF) as a ST substrate and fluorescein isothiocyanate labeled 3-aminophenylboronic acid (FITC-APBA) as the chemoselective recognition probe of sialylation product, both of which are encapsulated in a liposome vesicle for cellular delivery. The recognition of FITC-APBA to sialylated TRITC-AF leads to the FRET signal that is analyzed by FRET efficiency images. This strategy has been used to evaluate the correlation of ST activity with malignancy and cell surface sialylation, and the sialylation inhibition activity of inhibitors. This work provides a powerful noninvasive tool for glycan biosynthesis mechanism research, cancer diagnostics and drug development.

  7. The Activation of Embedded Words in Spoken Word Recognition

    PubMed Central

    Zhang, Xujin; Samuel, Arthur G.

    2015-01-01

    The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions. Facilitation of lexical decision responses to targets (e.g., pig) associated with words embedded in primes (e.g., hamster) indexed activation of the embedded words (e.g., ham). When the listening conditions were optimal, isolated embedded words (e.g., ham) primed their targets in all six conditions (Experiment 1a). Within carrier words (e.g., hamster), the same set of embedded words produced priming only when they were at the beginning or comprised a large proportion of the carrier word (Experiment 1b). When the listening conditions were made suboptimal by expanding or compressing the primes, significant priming was found for isolated embedded words (Experiment 2a), but no priming was produced when the carrier words were compressed/expanded (Experiment 2b). Similarly, priming was eliminated when the carrier words were presented with one segment replaced by noise (Experiment 3). When cognitive load was imposed, priming for embedded words was again found when they were presented in isolation (Experiment 4a), but not when they were embedded in the carrier words (Experiment 4b). The results suggest that both embedded position and proportion play important roles in the activation of embedded words, but that such activation only occurs under unusually good listening conditions. PMID:25593407

  8. Implementation of a Peltier-based cooling device for localized deep cortical deactivation during in vivo object recognition testing

    NASA Astrophysics Data System (ADS)

    Marra, Kyle; Graham, Brett; Carouso, Samantha; Cox, David

    2012-02-01

    While the application of local cortical cooling has recently become a focus of neurological research, extended localized deactivation deep within brain structures is still unexplored. Using a wirelessly controlled thermoelectric (Peltier) device and water-based heat sink, we have achieved inactivating temperatures (<20 °C) at greater depths (>8 mm) than previously reported. After implanting the device into Long Evans rats' basolateral amygdala (BLA), an inhibitory brain center that controls anxiety and fear, we ran an open field test during which anxiety-driven behavioral tendencies were observed to decrease during cooling, thus confirming the device's effect on behavior. Our device will next be implanted in the rats' temporal association cortex (TeA) and recordings from our signal-tracing multichannel microelectrodes will measure and compare activated and deactivated neuronal activity so as to isolate and study the TeA signals responsible for object recognition. Having already achieved a top performing computational face-recognition system, the lab will utilize this TeA activity data to generalize its computational efforts of face recognition to achieve general object recognition.

  9. Bilateral thalamic lesions affect recollection- and familiarity-based recognition memory judgments.

    PubMed

    Kishiyama, Mark M; Yonelinas, Andrew P; Kroll, Neal E A; Lazzara, Michele M; Nolan, Eric C; Jones, Edward G; Jagust, William J

    2005-12-01

    The contribution of the thalamus to different forms of explicit memory is poorly understood. In the current study, explicit memory performance was examined in a 40-year-old male (RG) with bilateral anterior and medial thalamic lesions. Standardized tests indicated that the patient exhibited more severe recall than recognition deficits and his performance was generally worse for verbal compared to nonverbal memory. Recognition memory tests using the remember-know (R/K) procedure and the confidence-based receiver operating characteristic (ROC) procedure were used to examine recollection- and familiarity-based recognition. These tests revealed that RG had deficits in recollection and smaller, but consistent deficits in familiarity. The results are in agreement with models indicating that the anteromedial thalamus is important for both recollection- and familiarity-based recognition memory. PMID:16353367

  10. When Passive Feels Active - Delusion-Proneness Alters Self-Recognition in the Moving Rubber Hand Illusion

    PubMed Central

    Louzolo, Anaïs; Kalckert, Andreas; Petrovic, Predrag

    2015-01-01

    Psychotic patients have problems with bodily self-recognition such as the experience of self-produced actions (sense of agency) and the perception of the body as their own (sense of ownership). While it has been shown that such impairments in psychotic patients can be explained by hypersalient processing of external sensory input it has also been suggested that they lack normal efference copy in voluntary action. However, it is not known how problems with motor predictions like efference copy contribute to impaired sense of agency and ownership in psychosis or psychosis-related states. We used a rubber hand illusion based on finger movements and measured sense of agency and ownership to compute a bodily self-recognition score in delusion-proneness (indexed by Peters’ Delusion Inventory - PDI). A group of healthy subjects (n=71) experienced active movements (involving motor predictions) or passive movements (lacking motor predictions). We observed a highly significant correlation between delusion-proneness and self-recognition in the passive conditions, while no such effect was observed in the active conditions. This was seen for both ownership and agency scores. The result suggests that delusion-proneness is associated with hypersalient external input in passive conditions, resulting in an abnormal experience of the illusion. We hypothesize that this effect is not present in the active condition because deficient motor predictions counteract hypersalience in psychosis proneness. PMID:26090797

  11. Robust and Effective Component-based Banknote Recognition for the Blind

    PubMed Central

    Hasanuzzaman, Faiz M.; Yang, Xiaodong; Tian, YingLi

    2012-01-01

    We develop a novel camera-based computer vision technology to automatically recognize banknotes for assisting visually impaired people. Our banknote recognition system is robust and effective with the following features: 1) high accuracy: high true recognition rate and low false recognition rate, 2) robustness: handles a variety of currency designs and bills in various conditions, 3) high efficiency: recognizes banknotes quickly, and 4) ease of use: helps blind users to aim the target for image capture. To make the system robust to a variety of conditions including occlusion, rotation, scaling, cluttered background, illumination change, viewpoint variation, and worn or wrinkled bills, we propose a component-based framework by using Speeded Up Robust Features (SURF). Furthermore, we employ the spatial relationship of matched SURF features to detect if there is a bill in the camera view. This process largely alleviates false recognition and can guide the user to correctly aim at the bill to be recognized. The robustness and generalizability of the proposed system is evaluated on a dataset including both positive images (with U.S. banknotes) and negative images (no U.S. banknotes) collected under a variety of conditions. The proposed algorithm, achieves 100% true recognition rate and 0% false recognition rate. Our banknote recognition system is also tested by blind users. PMID:22661884
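
    A rough sketch of the keypoint-matching core: the paper uses SURF, but since SURF sits in OpenCV's non-free contrib module, ORB is used here as a stand-in; the file names are placeholders, and the match-count threshold is hypothetical rather than the authors' component-based decision rule.

      import cv2

      def count_good_matches(reference_path, frame_path, ratio=0.75):
          """Match local keypoint descriptors between a reference bill image and a camera frame."""
          reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
          frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
          if reference is None or frame is None:
              raise FileNotFoundError("could not read one of the input images")
          orb = cv2.ORB_create(nfeatures=1000)       # ORB stands in for SURF here
          _, des1 = orb.detectAndCompute(reference, None)
          _, des2 = orb.detectAndCompute(frame, None)
          if des1 is None or des2 is None:
              return 0
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
          good = 0
          for pair in matcher.knnMatch(des1, des2, k=2):
              # Lowe-style ratio test keeps only distinctive matches.
              if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                  good += 1
          return good

      if __name__ == "__main__":
          # Placeholder paths; the decision threshold (20) would be tuned on real data.
          n = count_good_matches("reference_bill.png", "camera_frame.png")
          print("bill detected" if n > 20 else "no bill detected", "(", n, "good matches )")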

  12. Study on recognition algorithm for paper currency numbers based on neural network

    NASA Astrophysics Data System (ADS)

    Li, Xiuyan; Liu, Tiegen; Li, Yuanyao; Zhang, Zhongchuan; Deng, Shichao

    2008-12-01

    Because each serial number is unique, paper currency numbers can be put on record, and automatic identification equipment for paper currency numbers can be supplied to the currency circulation market, providing convenience for financial sectors to trace fiduciary circulation and effective supervision of paper currency. It is also helpful for identifying forged notes, blacklisting forged note numbers and addressing major social problems such as armored cash carrier robbery and money laundering. For the purpose of recognizing paper currency numbers, a recognition algorithm based on neural networks is presented in this paper. Number lines in original paper currency images are extracted through image processing steps such as image de-noising, skew correction, segmentation, and image normalization. According to the different characteristics of digits and letters in the serial number, two kinds of classifiers are designed. With its associative memory, optimization computation and rapid convergence, the Discrete Hopfield Neural Network (DHNN) is utilized to recognize the letters; with its simple structure, quick learning and global optimum, the Radial-Basis Function Neural Network (RBFNN) is adopted to identify the digits. The final recognition results are obtained by combining the two kinds of recognition results in their regular sequence. Simulation tests confirm that the combined recognition algorithm achieves both a high recognition rate and fast recognition, giving it broad application prospects.
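
    The associative-recall behaviour of the discrete Hopfield network used for the letter classifier can be sketched with tiny bipolar patterns; the random "character templates" and the synchronous update schedule below are illustrative only, and the RBF digit classifier is not reproduced.

      import numpy as np

      def hopfield_train(patterns):
          """Hebbian weights for a discrete Hopfield network storing bipolar (+1/-1) patterns."""
          n = patterns.shape[1]
          W = patterns.T @ patterns / n
          np.fill_diagonal(W, 0.0)
          return W

      def hopfield_recall(W, x, steps=20):
          """Synchronously update the state until it stops changing (or steps run out)."""
          state = x.copy()
          for _ in range(steps):
              new_state = np.where(W @ state >= 0, 1, -1)
              if np.array_equal(new_state, state):
                  break
              state = new_state
          return state

      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          # Two stored "character templates" of 64 bipolar pixels each.
          patterns = np.where(rng.random((2, 64)) > 0.5, 1, -1).astype(float)
          W = hopfield_train(patterns)
          noisy = patterns[0].copy()
          noisy[rng.choice(64, size=8, replace=False)] *= -1    # corrupt 8 pixels
          recalled = hopfield_recall(W, noisy)
          print("recovered pattern 0:", bool(np.array_equal(recalled, patterns[0])))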

  13. Automatic facial expression recognition based on features extracted from tracking of facial landmarks

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Lee, Joonwhoan

    2014-01-01

    In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking is performed by using an elastic bunch graph matching algorithm. The facial expression recognition is performed based on the features extracted from the tracking of not only individual landmarks, but also pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that our proposed set of features produces better results, because it utilizes time-varying graph information, as well as the motion of individual facial landmarks.
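
    A minimal sketch of geometric features from tracked landmarks fed to an SVM; the elastic bunch graph tracking and the CK+ data are not reproduced, so synthetic landmark displacements and hypothetical landmark pairs stand in, using scikit-learn's SVC.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      def geometric_features(neutral, apex, pairs=((0, 1), (2, 3), (4, 5))):
          """Per-landmark displacements plus distance changes for selected landmark pairs."""
          disp = (apex - neutral).ravel()
          dist_change = [
              np.linalg.norm(apex[i] - apex[j]) - np.linalg.norm(neutral[i] - neutral[j])
              for i, j in pairs
          ]
          return np.concatenate([disp, dist_change])

      if __name__ == "__main__":
          rng = np.random.default_rng(6)
          X, y = [], []
          for label in range(3):                       # three synthetic expression classes
              for _ in range(30):
                  neutral = rng.random((10, 2))        # 10 tracked landmarks (x, y)
                  apex = neutral + 0.02 * label + 0.01 * rng.standard_normal((10, 2))
                  X.append(geometric_features(neutral, apex))
                  y.append(label)
          scores = cross_val_score(SVC(kernel="rbf", C=1.0), np.array(X), np.array(y), cv=5)
          print("cross-validated accuracy:", round(scores.mean(), 3))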

  14. Bilingual Word Recognition in Deaf and Hearing Signers: Effects of Proficiency and Language Dominance on Cross-Language Activation

    ERIC Educational Resources Information Center

    Morford, Jill P.; Kroll, Judith F.; Piñar, Pilar; Wilkinson, Erin

    2014-01-01

    Recent evidence demonstrates that American Sign Language (ASL) signs are active during print word recognition in deaf bilinguals who are highly proficient in both ASL and English. In the present study, we investigate whether signs are active during print word recognition in two groups of unbalanced bilinguals: deaf ASL-dominant and hearing…

  15. The activation of visual face memory and explicit face recognition are delayed in developmental prosopagnosia.

    PubMed

    Parketny, Joanna; Towler, John; Eimer, Martin

    2015-08-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but the causes of this deficit are not well understood. We employed event-related brain potentials (ERPs) to study the time-course of neural processes involved in the recognition of previously unfamiliar faces in DPs and in age-matched control participants with normal face recognition abilities. Faces of different individuals were presented sequentially in one of three possible views, and participants had to detect a specific Target Face ("Joe"). EEG was recorded during task performance to Target Faces, Nontarget Faces, or the participants' Own Face (which had to be ignored). The N250 component was measured as a marker of the match between a seen face and a stored representation in visual face memory. The subsequent P600f was measured as an index of attentional processes associated with the conscious awareness and recognition of a particular face. Target Faces elicited reliable N250 and P600f in the DP group, but both of these components emerged later in DPs than in control participants. This shows that the activation of visual face memory for previously unknown learned faces and the subsequent attentional processing and conscious recognition of these faces are delayed in DP. N250 and P600f components to Own Faces did not differ between the two groups, indicating that the processing of long-term familiar faces is less affected in DP. However, P600f components to Own Faces were absent in two participants with DP who failed to recognize their Own Face during the experiment. These results provide new evidence that face recognition deficits in DP may be linked to a delayed activation of visual face memory and explicit identity recognition mechanisms.

  16. The activation of visual face memory and explicit face recognition are delayed in developmental prosopagnosia.

    PubMed

    Parketny, Joanna; Towler, John; Eimer, Martin

    2015-08-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but the causes of this deficit are not well understood. We employed event-related brain potentials (ERPs) to study the time-course of neural processes involved in the recognition of previously unfamiliar faces in DPs and in age-matched control participants with normal face recognition abilities. Faces of different individuals were presented sequentially in one of three possible views, and participants had to detect a specific Target Face ("Joe"). EEG was recorded during task performance to Target Faces, Nontarget Faces, or the participants' Own Face (which had to be ignored). The N250 component was measured as a marker of the match between a seen face and a stored representation in visual face memory. The subsequent P600f was measured as an index of attentional processes associated with the conscious awareness and recognition of a particular face. Target Faces elicited reliable N250 and P600f in the DP group, but both of these components emerged later in DPs than in control participants. This shows that the activation of visual face memory for previously unknown learned faces and the subsequent attentional processing and conscious recognition of these faces are delayed in DP. N250 and P600f components to Own Faces did not differ between the two groups, indicating that the processing of long-term familiar faces is less affected in DP. However, P600f components to Own Faces were absent in two participants with DP who failed to recognize their Own Face during the experiment. These results provide new evidence that face recognition deficits in DP may be linked to a delayed activation of visual face memory and explicit identity recognition mechanisms. PMID:26169316

  17. EMG-based facial gesture recognition through versatile elliptic basis function neural network

    PubMed Central

    2013-01-01

    Background Recently, the recognition of different facial gestures using facial neuromuscular activities has been proposed for human machine interfacing applications. Facial electromyograms (EMGs) analysis is a complicated field in biomedical signal processing where accuracy and low computational cost are significant concerns. In this paper, a very fast versatile elliptic basis function neural network (VEBFNN) was proposed to classify different facial gestures. The effectiveness of different facial EMG time-domain features was also explored to introduce the most discriminating. Methods In this study, EMGs of ten facial gestures were recorded from ten subjects using three pairs of surface electrodes in a bi-polar configuration. The signals were filtered and segmented into distinct portions prior to feature extraction. Ten different time-domain features, namely, Integrated EMG, Mean Absolute Value, Mean Absolute Value Slope, Maximum Peak Value, Root Mean Square, Simple Square Integral, Variance, Mean Value, Wave Length, and Sign Slope Changes were extracted from the EMGs. The statistical relationships between these features were investigated by Mutual Information measure. Then, the feature combinations including two to ten single features were formed based on the feature rankings appointed by Minimum-Redundancy-Maximum-Relevance (MRMR) and Recognition Accuracy (RA) criteria. In the last step, VEBFNN was employed to classify the facial gestures. The effectiveness of single features as well as the feature sets on the system performance was examined by considering the two major metrics, recognition accuracy and training time. Finally, the proposed classifier was assessed and compared with conventional methods support vector machines and multilayer perceptron neural network. Results The average classification results showed that the best performance for recognizing facial gestures among all single/multi-features was achieved by Maximum Peak Value with 87.1% accuracy
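
    A few of the listed time-domain features can be written out directly; the window length, the SSC threshold, and the synthetic EMG segment below are illustrative, and the VEBFNN classifier is not reproduced.

      import numpy as np

      def emg_time_domain_features(segment, ssc_threshold=0.01):
          """A few classical EMG time-domain features for one windowed segment."""
          diff = np.diff(segment)
          return {
              "MAV": np.mean(np.abs(segment)),               # mean absolute value
              "RMS": np.sqrt(np.mean(segment ** 2)),         # root mean square
              "WL": np.sum(np.abs(diff)),                    # waveform length
              "VAR": np.var(segment, ddof=1),                # variance
              # Sign slope changes: slope flips whose magnitude exceeds a small threshold.
              "SSC": int(np.sum((diff[:-1] * diff[1:] < 0)
                                & (np.abs(diff[:-1]) > ssc_threshold)
                                & (np.abs(diff[1:]) > ssc_threshold))),
          }

      if __name__ == "__main__":
          rng = np.random.default_rng(7)
          fs = 1000                                          # 1 kHz sampling, 200 ms window
          t = np.arange(0, 0.2, 1.0 / fs)
          segment = 0.5 * np.sin(2 * np.pi * 80 * t) + 0.1 * rng.standard_normal(t.size)
          for name, value in emg_time_domain_features(segment).items():
              print(f"{name}: {value:.4f}")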

  18. Toward an EEG-based recognition of music liking using time-frequency analysis.

    PubMed

    Hadjidimitriou, Stelios K; Hadjileontiadis, Leontios J

    2012-12-01

    Affective phenomena, as reflected through brain activity, could constitute an effective index for the detection of music preference. In this vein, this paper focuses on the discrimination between subjects' electroencephalogram (EEG) responses to self-assessed liked or disliked music, acquired during an experimental procedure, by evaluating different feature extraction approaches and classifiers to this end. Feature extraction is based on time-frequency (TF) analysis by implementing three TF techniques, i.e., spectrogram, Zhao-Atlas-Marks distribution and Hilbert-Huang spectrum (HHS). Feature estimation also accounts for physiological parameters that relate to EEG frequency bands, reference states, time intervals, and hemispheric asymmetries. Classification is performed by employing four classifiers, i.e., support vector machines, k-nearest neighbors (k-NN), quadratic and Mahalanobis distance-based discriminant analyses. According to the experimental results across nine subjects, best classification accuracy {86.52 (±0.76)%} was achieved using k-NN and HHS-based feature vectors (FVs) representing a bilateral average activity, referred to a resting period, in β (13-30 Hz) and γ (30-49 Hz) bands. Activity in these bands may point to a connection between music preference and emotional arousal phenomena. Furthermore, HHS-based FVs were found to be robust against noise corruption. The outcomes of this study provide early evidence and pave the way for the development of a generalized brain computer interface for music preference recognition. PMID:23033323
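
    A sketch of the simplest of the three pipelines, spectrogram band power in the beta and gamma bands followed by k-NN; referencing to a resting state, hemispheric asymmetries, and the ZAM/HHS features are omitted, and the EEG trials below are synthetic.

      import numpy as np
      from scipy.signal import spectrogram
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      def band_power_features(eeg, fs=128, bands=((13, 30), (30, 49))):
          """Mean log spectrogram power per channel in the beta and gamma bands."""
          feats = []
          for channel in eeg:
              freqs, _, Sxx = spectrogram(channel, fs=fs, nperseg=fs)
              for lo, hi in bands:
                  mask = (freqs >= lo) & (freqs < hi)
                  feats.append(np.log(Sxx[mask].mean() + 1e-12))
          return np.array(feats)

      if __name__ == "__main__":
          rng = np.random.default_rng(8)
          fs, n_channels, n_trials = 128, 4, 60
          X, y = [], []
          for trial in range(n_trials):
              label = trial % 2                              # 0 = "disliked", 1 = "liked"
              eeg = rng.standard_normal((n_channels, fs * 5)) * (1.0 + 0.3 * label)
              X.append(band_power_features(eeg, fs=fs))
              y.append(label)
          clf = KNeighborsClassifier(n_neighbors=5)
          scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
          print("cross-validated accuracy:", round(scores.mean(), 3))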

  19. Toward an EEG-based recognition of music liking using time-frequency analysis.

    PubMed

    Hadjidimitriou, Stelios K; Hadjileontiadis, Leontios J

    2012-12-01

    Affective phenomena, as reflected through brain activity, could constitute an effective index for the detection of music preference. In this vein, this paper focuses on the discrimination between subjects' electroencephalogram (EEG) responses to self-assessed liked or disliked music, acquired during an experimental procedure, by evaluating different feature extraction approaches and classifiers to this end. Feature extraction is based on time-frequency (TF) analysis by implementing three TF techniques, i.e., spectrogram, Zhao-Atlas-Marks distribution and Hilbert-Huang spectrum (HHS). Feature estimation also accounts for physiological parameters that relate to EEG frequency bands, reference states, time intervals, and hemispheric asymmetries. Classification is performed by employing four classifiers, i.e., support vector machines, k-nearest neighbors (k-NN), quadratic and Mahalanobis distance-based discriminant analyses. According to the experimental results across nine subjects, best classification accuracy {86.52 (±0.76)%} was achieved using k-NN and HHS-based feature vectors (FVs) representing a bilateral average activity, referred to a resting period, in β (13-30 Hz) and γ (30-49 Hz) bands. Activity in these bands may point to a connection between music preference and emotional arousal phenomena. Furthermore, HHS-based FVs were found to be robust against noise corruption. The outcomes of this study provide early evidence and pave the way for the development of a generalized brain computer interface for music preference recognition.

  20. The effect of gaze direction on three-dimensional face recognition in infant brain activity.

    PubMed

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K; Kakigi, Ryusuke

    2012-09-12

    In three-dimensional face recognition studies, it is well known that viewing rotating faces enhances face recognition. For infants, our previous study indicated that 8-month-old infants showed recognition of three-dimensional rotating faces with a direct gaze, and they did not learn with an averted gaze. This suggests that gaze direction may affect three-dimensional face recognition in infants. In this experiment, we used near-infrared spectroscopy to measure infants' hemodynamic responses to averted gaze and direct gaze. We hypothesized that infants would show different neural activity for averted and direct gazes. The responses were compared with the baseline activation during the presentation of non-face objects. We found that the concentration of oxyhemoglobin increased in the temporal cortex on both sides only during the presentation of averted gaze compared with that of the baseline period. This is the first study to show that infants' brain activity in three-dimensional face processing differs between averted and direct gaze.

  1. Gait-based person recognition using arbitrary view transformation model.

    PubMed

    Muramatsu, Daigo; Shiraishi, Akira; Makihara, Yasushi; Uddin, Md Zasim; Yagi, Yasushi

    2015-01-01

    Gait recognition is a useful biometric trait for person authentication because it is usable even with low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. The VTMs work well if the target views are the same as their discrete training views. However, the gait traits are observed from an arbitrary view in a real situation. Thus, the target views may not coincide with discrete training views, resulting in recognition accuracy degradation. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from an arbitrary view. To realize an AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend our AVTM by incorporating a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts, and sets part-dependent destination views for transformation. Because appropriate destination views may differ for different body parts, the part-dependent destination view selection can suppress transformation errors, leading to increased recognition accuracy. Experiments using data sets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, in particular, verification scenarios. PMID:25423652
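
    The core VTM idea, a learned mapping from source-view to destination-view gait features, can be sketched as a least-squares linear transform; the 3D gait volumes, the arbitrary-view projection, and the part-dependent view selection are not reproduced, and the feature vectors below are synthetic.

      import numpy as np

      def train_vtm(features_src, features_dst):
          """Least-squares linear map from source-view to destination-view gait features."""
          # Solve  features_src @ M ~= features_dst  for M.
          M, *_ = np.linalg.lstsq(features_src, features_dst, rcond=None)
          return M

      if __name__ == "__main__":
          rng = np.random.default_rng(10)
          n_train, dim = 40, 30
          # Disjoint training subjects observed from both views; the true map is hidden.
          true_map = 0.2 * rng.standard_normal((dim, dim)) + np.eye(dim)
          src = rng.random((n_train, dim))
          dst = src @ true_map + 0.01 * rng.standard_normal((n_train, dim))
          M = train_vtm(src, dst)

          # A probe seen from the source view is transformed before matching against
          # a gallery enrolled under the destination view.
          gallery = rng.random((10, dim)) @ true_map
          probe_src = gallery[4] @ np.linalg.inv(true_map) + 0.01 * rng.standard_normal(dim)
          probe_dst = probe_src @ M
          match = int(np.argmin(np.linalg.norm(gallery - probe_dst, axis=1)))
          print("probe matched to gallery subject", match)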

  2. Driver fatigue recognition based on supervised LPP and MKSVM

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Zhang, Wei

    2011-06-01

    Driver fatigue is a significant factor in many traffic accidents. In this paper, a novel approach is proposed to recognize driver fatigue. First, in order to extract effective features of fatigue expression from face images, supervised locality preserving projections (SLPP) is adopted, which solves the problem that LPP ignores the within-class local structure by incorporating prior class label information. Then, a multiple-kernel support vector machine (MKSVM) is employed to recognize fatigue expression; compared with a standard SVM, it improves both the interpretability of the decision function and the performance of fatigue recognition. Experimental results demonstrate the effectiveness of the proposed method.

  3. Robust Speaker Authentication Based on Combined Speech and Voiceprint Recognition

    NASA Astrophysics Data System (ADS)

    Malcangi, Mario

    2009-08-01

    Personal authentication is becoming increasingly important in many applications that have to protect proprietary data. Passwords and personal identification numbers (PINs) prove not to be robust enough to ensure that unauthorized people do not use them. Biometric authentication technology may offer a secure, convenient, accurate solution but sometimes fails due to its intrinsically fuzzy nature. This research aims to demonstrate that combining two basic speech processing methods, voiceprint identification and speech recognition, can provide a very high degree of robustness, especially if fuzzy decision logic is used.

  4. Low-quality fingerprint recognition using a limited ellipse-band-based matching method.

    PubMed

    He, Zaixing; Zhao, Xinyue; Zhang, Shuyou

    2015-06-01

    Current fingerprint recognition technologies are based mostly on minutiae algorithms, which cannot recognize fingerprint images in low-quality conditions. This paper proposes a novel recognition algorithm using a limited ellipse-band-based matching method. It uses the Fourier-Mellin transform to overcome the limitation of the original algorithm, which cannot resist rotation changes. Furthermore, an ellipse band on the frequency amplitude is used to suppress noise introduced by the high-frequency parts of images. Finally, the recognition result is obtained by considering both the contrast and position correlation peaks. The experimental results show that the proposed algorithm can increase the recognition accuracy, particularly for images in low-quality conditions. PMID:26367052
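    The sketch below is only a loose illustration of the elliptical frequency-band idea mentioned above: it builds an elliptical band-pass mask over the Fourier amplitude and scores two fingerprint images by correlating their masked spectra. The ellipse radii and the scoring function are assumptions; the paper's Fourier-Mellin rotation handling and correlation-peak analysis are not reproduced here.

```python
# Illustrative only: elliptical band-pass mask on the Fourier amplitude to
# suppress high-frequency noise before comparing two fingerprint images.
import numpy as np

def ellipse_band_mask(shape, a_outer, b_outer, a_inner, b_inner):
    """Keep frequencies inside the outer ellipse but outside the inner one."""
    h, w = shape
    y, x = np.ogrid[:h, :w]
    y = y - h / 2.0                               # centre the frequency grid
    x = x - w / 2.0
    outer = (x / a_outer) ** 2 + (y / b_outer) ** 2 <= 1.0
    inner = (x / a_inner) ** 2 + (y / b_inner) ** 2 <= 1.0
    return outer & ~inner

def masked_amplitude(img, mask):
    """Centred Fourier amplitude spectrum restricted to the elliptical band."""
    return np.abs(np.fft.fftshift(np.fft.fft2(img))) * mask

def band_similarity(img1, img2, mask):
    """Normalised correlation of the two masked spectra (assumed score)."""
    s1, s2 = masked_amplitude(img1, mask), masked_amplitude(img2, mask)
    s1, s2 = s1 - s1.mean(), s2 - s2.mean()
    return float((s1 * s2).sum() / (np.linalg.norm(s1) * np.linalg.norm(s2) + 1e-12))
```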

  5. Single-sample face recognition based on intra-class differences in a variation model.

    PubMed

    Cai, Jun; Chen, Jing; Liang, Xing

    2015-01-01

    In this paper, a novel random facial variation modeling system for sparse representation face recognition is presented. Although Sparse Representation-Based Classification (SRC) has recently represented a breakthrough in the field of face recognition due to its good performance and robustness, there is the critical problem that SRC needs sufficiently large training samples to achieve good performance. To address this issue, we tackle the single-sample face recognition problem with intra-class differences of variation in a facial image model based on random projection and sparse representation. We present a facial variation modeling system composed only of various facial variations, and further propose a novel facial random noise dictionary learning method that is invariant to different faces. The experimental results on the AR, Yale B, Extended Yale B, MIT and FEI databases validate that our method leads to substantial improvements, particularly in single-sample face recognition problems. PMID:25580904

  6. Low-quality fingerprint recognition using a limited ellipse-band-based matching method.

    PubMed

    He, Zaixing; Zhao, Xinyue; Zhang, Shuyou

    2015-06-01

    Current fingerprint recognition technologies are based mostly on minutiae algorithms, which cannot recognize fingerprint images in low-quality conditions. This paper proposes a novel recognition algorithm using a limited ellipse-band-based matching method. It uses the Fourier-Mellin transform to overcome the limitation of the original algorithm, which cannot resist rotation changes. Furthermore, an ellipse band on the frequency amplitude is used to suppress noise introduced by the high-frequency parts of images. Finally, the recognition result is obtained by considering both the contrast and position correlation peaks. The experimental results show that the proposed algorithm can increase the recognition accuracy, particularly for images in low-quality conditions.

  7. Single-Sample Face Recognition Based on Intra-Class Differences in a Variation Model

    PubMed Central

    Cai, Jun; Chen, Jing; Liang, Xing

    2015-01-01

    In this paper, a novel random facial variation modeling system for sparse representation face recognition is presented. Although Sparse Representation-Based Classification (SRC) has recently represented a breakthrough in the field of face recognition due to its good performance and robustness, there is the critical problem that SRC needs sufficiently large training samples to achieve good performance. To address this issue, we tackle the single-sample face recognition problem with intra-class differences of variation in a facial image model based on random projection and sparse representation. We present a facial variation modeling system composed only of various facial variations, and further propose a novel facial random noise dictionary learning method that is invariant to different faces. The experimental results on the AR, Yale B, Extended Yale B, MIT and FEI databases validate that our method leads to substantial improvements, particularly in single-sample face recognition problems. PMID:25580904

  8. [Low frequency-based non-uniform sampling strategy to improve Chinese recognition in cochlear implant].

    PubMed

    Ni, Saihua; Sun, Wenye; Sun, Baoyin; Zhou, Qiang; Wang, Qiang; Wang, Zhenming; Gu, Jihua; Tao, Zhi

    2014-06-01

    To enhance speech recognition, as well as Mandarin tone recognition in noise, we proposed a speech coding strategy for cochlear implants based on low-frequency non-uniform sampling, termed zero-crossing of the fine structure in low frequency (LFFS). Within the frequency range perceivable by the human ear, the zero-crossing times of the fine structure are used to generate the stimulus pulse sequences according to a frequency selection rule. Acoustic simulation results showed that although the performance of LFFS was similar to continuous interleaved sampling (CIS) in a quiet background, in a noisy background the performance of LFFS on Chinese tones, words and sentences was significantly better than that of CIS. In addition, a better distribution of Mandarin recognition factors was obtained using the improved index distribution model. LFFS preserves more tonal information, which can effectively improve Mandarin recognition with a cochlear implant. PMID:25219227

  9. Active recognition enhances the representation of behaviorally relevant information in single auditory forebrain neurons

    PubMed Central

    Knudsen, Daniel P.

    2013-01-01

    Sensory systems are dynamic. They must process a wide range of natural signals that facilitate adaptive behaviors in a manner that depends on an organism's constantly changing goals. A full understanding of the sensory physiology that underlies adaptive natural behaviors must therefore account for the activity of sensory systems in light of these behavioral goals. Here we present a novel technique that combines in vivo electrophysiological recording from awake, freely moving songbirds with operant conditioning techniques that allow control over birds' recognition of conspecific song, a widespread natural behavior in songbirds. We show that engaging in a vocal recognition task alters the response properties of neurons in the caudal mesopallium (CM), an avian analog of mammalian auditory cortex, in European starlings. Compared with awake, passive listening, active engagement of subjects in an auditory recognition task results in neurons responding to fewer song stimuli and a decrease in the trial-to-trial variability in their driven firing rates. Mean firing rates also change during active recognition, but not uniformly. Relative to nonengaged listening, active recognition causes increases in the driven firing rates in some neurons, decreases in other neurons, and stimulus-specific changes in other neurons. These changes lead to both an increase in stimulus selectivity and an increase in the information conveyed by the neurons about the animals' behavioral task. This study demonstrates the behavioral dependence of neural responses in the avian auditory forebrain and introduces the starling as a model for real-time monitoring of task-related neural processing of complex auditory objects. PMID:23303858

  10. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands and can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are formed from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier, and the combinations with the highest recognition accuracy in each group are chosen. An average accuracy of more than 90% for the chosen combinations demonstrates their suitability as command controllers.
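    A minimal sketch of the described preprocessing is given below: each facial-EMG channel is band-pass filtered and windowed root-mean-square features are extracted. The cut-off frequencies, sampling rate, and window length are assumptions; the fuzzy c-means classification stage from the abstract is not shown.

```python
# Sketch (assumed parameters): band-pass filtering of facial EMG channels
# followed by windowed root-mean-square (RMS) feature extraction.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo=20.0, hi=450.0, fs=1000.0, order=4):
    """Zero-phase band-pass filter for one EMG channel."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def rms_features(emg, win=200):
    """emg: channels x samples; returns one RMS value per channel per window."""
    filtered = np.array([bandpass(ch) for ch in emg])
    n_win = filtered.shape[1] // win
    segs = filtered[:, :n_win * win].reshape(filtered.shape[0], n_win, win)
    return np.sqrt((segs ** 2).mean(axis=2))     # shape: channels x windows
```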

  11. Smartphone-based recognition of states and state changes in bipolar disorder patients.

    PubMed

    Grünerbl, Agnes; Muaremi, Amir; Osmani, Venet; Bahle, Gernot; Ohler, Stefan; Tröster, Gerhard; Mayora, Oscar; Haring, Christian; Lukowicz, Paul

    2015-01-01

    Today's health care is difficult to imagine without the possibility to objectively measure various physiological parameters related to patients' symptoms (from temperature through blood pressure to complex tomographic procedures). Psychiatric care remains a notable exception that heavily relies on patient interviews and self-assessment. This is because mental illnesses manifest themselves mainly in the way patients behave throughout their daily life and, until recently, there were no "behavior measurement devices." This is now changing with the progress in wearable activity recognition and sensor-enabled smartphones. In this paper, we introduce a system which, based on smartphone sensing, is able to recognize depressive and manic states and detect state changes of patients suffering from bipolar disorder. Drawing upon a real-life dataset of ten patients, recorded over a period of 12 weeks (in total over 800 days of data tracing 17 state changes) with four different sensing modalities, we could extract features corresponding to all disease-relevant aspects of behavior. Using these features, we achieve a recognition accuracy of 76% by fusing all sensor modalities, and state-change detection precision and recall of over 97%. This paper furthermore outlines the applicability of this system to the physician-patient relationship in order to facilitate the life and treatment of bipolar patients.

  12. Multiple base-recognition sites in a biological nanopore – two heads are better than one

    PubMed Central

    Stoddart, David; Maglia, Giovanni; Mikhailova, Ellina; Heron, Andrew J.; Bayley, Hagan

    2011-01-01

    Ultra-rapid sequencing of DNA strands with nanopores is under intense investigation. The αHL protein nanopore is a leading candidate sensor for this approach. Multiple base-recognition sites have been identified in engineered αHL pores. By using immobilized synthetic oligonucleotides, we show here that additional sequence information can be gained when two recognition sites, rather than one, are employed within a single nanopore. PMID:20014084

  13. Towards a smart glove: arousal recognition based on textile Electrodermal Response.

    PubMed

    Valenza, Gaetano; Lanata, Antonio; Scilingo, Enzo Pasquale; De Rossi, Danilo

    2010-01-01

    This paper investigates the possibility of using Electrodermal Response, acquired by a sensing fabric glove with embedded textile electrodes, as reliable means for emotion recognition. Here, all the essential steps for an automatic recognition system are described, from the recording of physiological data set to a feature-based multiclass classification. Data were collected from 35 healthy volunteers during arousal elicitation by means of International Affective Picture System (IAPS) pictures. Experimental results show high discrimination after twenty steps of cross validation. PMID:21096840

  14. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  15. Parallel language activation and cognitive control during spoken word recognition in bilinguals.

    PubMed

    Blumenfeld, Henrike K; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals' parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300-500ms after word onset was associated with smaller Stroop effects; between 633-767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  16. Infrared image recognition based on structure sparse and atomic sparse parallel

    NASA Astrophysics Data System (ADS)

    Wu, Yalu; Li, Ruilong; Xu, Yi; Wang, Liping

    2015-12-01

    The redundancy of an overcomplete dictionary can capture the structural features of an image effectively and thereby achieve an effective representation of the image. However, the commonly used atomic sparse representation ignores the structure of the dictionary and produces unrelated non-zero terms during computation, while structured sparse representation, although it considers the structural features of the dictionary, may yield mostly non-zero coefficients within blocks, which can reduce identification efficiency. To address the disadvantages of these two sparse representations, a weighted parallel combination of atomic sparse and structured sparse representation is proposed, in which recognition efficiency is improved by adaptively computing the optimal weights. The atomic sparse representation and the structured sparse representation are computed in parallel, and the optimal weights are calculated adaptively as follows: a small portion of the identification samples is used for training, and the recognition rate is evaluated as the weights are increased by a fixed step size subject to the constraint between them. Taking the recognition rate as the Z axis and the two weight values as the X and Y axes, the resulting points can be connected into a line in the three-dimensional coordinate system, and the optimal weights are obtained by locating the highest recognition rate. Simulation experiments show that the optimal weights obtained by the adaptive method yield a better recognition rate; weights computed adaptively from a few samples are suitable for parallel recognition and can effectively improve the recognition rate of infrared images.

  17. Feature based recognition of submerged objects in holographic imagery

    NASA Astrophysics Data System (ADS)

    Ratto, Christopher R.; Beagley, Nathaniel; Baldwin, Kevin C.; Shipley, Kara R.; Sternberger, Wayne I.

    2014-05-01

    The ability to autonomously sense and characterize underwater objects in situ is desirable in applications of unmanned underwater vehicles (UUVs). In this work, underwater object recognition was explored using a digital holographic system. Two experiments were performed in which several objects of varying size, shape, and material were submerged in a 43,000 gallon test tank. Holograms were collected from each object at multiple distances and orientations, with the imager located either outside the tank (looking through a porthole) or submerged (looking downward). The resultant imagery from these holograms was preprocessed to improve dynamic range, mitigate speckle, and segment out the image of the object. A collection of feature descriptors were then extracted from the imagery to characterize various object properties (e.g., shape, reflectivity, texture). The features extracted from images of multiple objects, collected at different imaging geometries, were then used to train statistical models for object recognition tasks. The resulting classification models were used to perform object classification as well as estimation of various parameters of the imaging geometry. This information can then be used to inform the design of autonomous sensing algorithms for UUVs employing holographic imagers.

  18. A Comparative Study of 2D PCA Face Recognition Method with Other Statistically Based Face Recognition Methods

    NASA Astrophysics Data System (ADS)

    Senthilkumar, R.; Gnanamurthy, R. K.

    2016-09-01

    In this paper, two-dimensional principal component analysis (2D PCA) is compared with other algorithms such as 1D PCA, Fisher discriminant analysis (FDA), independent component analysis (ICA) and kernel PCA (KPCA), which are used for image representation and face recognition. As opposed to PCA, 2D PCA is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly from the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2D PCA and evaluate its performance, a series of experiments is performed on three face image databases: the ORL, Senthil, and Yale face databases. The recognition rate across all trials was higher using 2D PCA than with PCA, FDA, ICA and KPCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2D PCA than PCA.
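    The core of 2D PCA as summarised above can be written in a few lines; the sketch below builds the image covariance matrix directly from image matrices and projects a face image onto its leading eigenvectors. Function names and the eigen-solver choice are illustrative, not taken from the paper.

```python
# Minimal 2D PCA sketch: image covariance matrix built from image matrices,
# projection of a face image onto its top-d eigenvectors.
import numpy as np

def twod_pca(images, d):
    """images: array of equally sized 2-D face images (M x h x w); d: feature size."""
    A = np.asarray(images, dtype=float)
    mean = A.mean(axis=0)
    centered = A - mean
    # G = (1/M) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w)
    G = np.einsum('mhw,mhv->wv', centered, centered) / len(A)
    vals, vecs = np.linalg.eigh(G)
    X = vecs[:, np.argsort(vals)[::-1][:d]]      # top-d eigenvectors as columns
    return X, mean

def project(img, X):
    """Feature matrix Y = A X (h x d); no vectorisation of the image needed."""
    return img @ X
```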

  19. Transparent Stretchable Self-Powered Patchable Sensor Platform with Ultrasensitive Recognition of Human Activities.

    PubMed

    Hwang, Byeong-Ung; Lee, Ju-Hyuck; Trung, Tran Quang; Roh, Eun; Kim, Do-Il; Kim, Sang-Woo; Lee, Nae-Eung

    2015-09-22

    Monitoring of human activities can provide clinically relevant information pertaining to disease diagnostics, preventive medicine, care for patients with chronic diseases, rehabilitation, and prosthetics. The recognition of strains on human skin, induced by subtle movements of muscles in the internal organs, such as the esophagus and trachea, and by the motion of joints, was demonstrated using a self-powered patchable strain sensor platform composed of multifunctional nanocomposites of low-density silver nanowires with a conductive elastomer of poly(3,4-ethylenedioxythiophene):polystyrenesulfonate/polyurethane, offering high sensitivity, stretchability, and optical transparency. The ultra-low power consumption of the sensor, integrated with both a supercapacitor and a triboelectric nanogenerator into a single transparent stretchable platform based on the same nanocomposites, results in a self-powered monitoring system for skin strain. The capability of the sensor to recognize a wide range of strains on skin has potential for use in new areas of invisible stretchable electronics for human monitoring. In summary, a new type of transparent, stretchable, and ultrasensitive strain sensor based on an AgNW/PEDOT:PSS/PU nanocomposite was developed, and the concept of a self-powered patchable sensor system integrated with a supercapacitor and a triboelectric nanogenerator, usable universally as an autonomous invisible sensor system, was applied to detect a wide range of strains on human skin. PMID:26277994

  20. The Painful Face - Pain Expression Recognition Using Active Appearance Models.

    PubMed

    Ashraf, Ahmed Bilal; Lucey, Simon; Cohn, Jeffrey F; Chen, Tsuhan; Ambadar, Zara; Prkachin, Kenneth M; Solomon, Patricia E

    2009-10-01

    Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (i.e., young children and the severely ill) not even possible. To circumvent these problems behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared for several representations from the AAM and of ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?

  1. Determining optimally orthogonal discriminant vectors in DCT domain for multiscale-based face recognition

    NASA Astrophysics Data System (ADS)

    Niu, Yanmin; Wang, Xuchu

    2011-02-01

    This paper presents a new face recognition method that extracts multiple discriminant features based on a multiscale image enhancement technique and kernel-based orthogonal feature extraction improvements, with several interesting characteristics. First, it can extract more discriminative multiscale face features than traditional pixel-based or Gabor-based features. Second, it can effectively deal with the small sample size problem as well as the feature correlation problem by using eigenvalue decomposition on scatter matrices. Finally, the extractor handles nonlinearity efficiently by using the kernel trick. Multiple recognition experiments on an open face data set, with comparison to several related methods, show the effectiveness and superiority of the proposed method.

  2. Feature and score fusion based multiple classifier selection for iris recognition.

    PubMed

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature- and score-fusion-based iris recognition approach in which a voting method within a Multiple Classifier Selection technique is applied. The outputs of four Discrete Hidden Markov Model classifiers, namely a left-iris-based unimodal system, a right-iris-based unimodal system, a left-right iris feature-fusion-based multimodal system, and a left-right iris likelihood-ratio score-fusion-based multimodal system, are combined using the voting method to obtain the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system is compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single-level feature fusion approach proposed by Hollingsworth et al.
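    As a simple illustration of the voting step described above, the snippet below fuses the decisions of several classifiers by majority vote; the labels and the four-classifier setup are placeholders rather than the paper's actual implementation.

```python
# Illustrative majority-vote fusion of several classifier decisions.
from collections import Counter

def majority_vote(decisions):
    """decisions: one predicted identity per classifier (e.g. 4 HMM-based systems)."""
    label, _ = Counter(decisions).most_common(1)[0]
    return label

# e.g. majority_vote(["subject_07", "subject_07", "subject_12", "subject_07"])
# returns "subject_07"
```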

  3. Intent and error recognition as part of a knowledge-based cockpit assistant

    NASA Astrophysics Data System (ADS)

    Strohal, Michael; Onken, Reiner

    1998-03-01

    With the Crew Assistant Military Aircraft (CAMA), a knowledge-based cockpit assistant system for future military transport aircraft is developed and tested to enhance situation awareness. Human-centered automation was the central principle for the development of CAMA, an approach to achieve advanced man-machine interaction, mainly by enhancing situation awareness. The CAMA module Pilot Intent and Error Recognition (PIER) evaluates the pilot's activities and mission events in order to interpret and understand the pilot's actions in the context of the flight situation. Expected crew actions based on the flight plan are compared with the actual behavior shown by the crew. If discrepancies are detected, the PIER module tries to determine whether the deviation was caused by an error or by a sensible intent. By monitoring pilot actions as well as the mission context, the system is able to compare the pilot's actions with a set of behavioral hypotheses. In the case of an intentional deviation from the flight plan, the module checks whether the behavior matches the given set of behavior patterns of the pilot. Intent recognition can increase man-machine synergy by anticipating a need for assistance pertinent to the pilot's intent without requiring a pilot request. The interpretation of all possible situations with respect to intent recognition, in terms of a reasoning process, is based on a set of decision rules. To cope with the need for inference under uncertainty, a fuzzy-logic approach is used. A weakness of the fuzzy-logic approach lies in the possibly ill-defined boundaries of the fuzzy sets. Self-Organizing Maps (SOM), as introduced and elaborated by T. Kohonen, are applied to improve the fuzzy-set data and rule base so that they comply with observed pilot behavior. Hierarchical cluster analysis is used to locate clusters of similar patterns in the maps. As introduced by Pedrycz, every feature is evaluated using fuzzy sets for each designated cluster. This approach allows to

  4. Speaker-Adaptive Speech Recognition Based on Surface Electromyography

    NASA Astrophysics Data System (ADS)

    Wand, Michael; Schultz, Tanja

    We present our recent advances in silent speech interfaces using electromyographic signals that capture the movements of the human articulatory muscles at the skin surface for recognizing continuously spoken speech. Previous systems were limited to speaker- and session-dependent recognition tasks on small amounts of training and test data. In this article we present speaker-independent and speaker-adaptive training methods which allow us to use a large corpus of data from many speakers to train acoustic models more reliably. We use the speaker-dependent system as baseline, carefully tuning the data preprocessing and acoustic modeling. Then on our corpus we compare the performance of speaker-dependent and speaker-independent acoustic models and carry out model adaptation experiments.

  5. Mobile-based text recognition from water quality devices

    NASA Astrophysics Data System (ADS)

    Dhakal, Shanti; Rahnemoonfar, Maryam

    2015-03-01

    Measuring the water quality of bays, estuaries, and gulfs is a complicated and time-consuming process. The YSI Sonde is an instrument used to measure water quality parameters such as pH, temperature, salinity, and dissolved oxygen. The instrument is taken to water bodies on a boat trip, and researchers note down the different parameters shown on its display monitor. In this project, a mobile application was developed for the Android platform that allows a user to take a picture of the YSI Sonde monitor, extract the text from the image, and store it in a file on the phone. The image captured by the application is first processed to remove perspective distortion. The probabilistic Hough line transform is used to identify lines in the image, and the corners are then obtained by determining the intersections of the detected horizontal and vertical lines. The image is warped using the perspective transformation matrix, obtained from the corner points of the source and destination images, thereby removing the perspective distortion. The mathematical morphology black-hat operation is used to correct the shading of the image. The image is then binarized using Otsu's method and passed to Optical Character Recognition (OCR) software for character recognition. The extracted information is stored in a file on the phone and can be retrieved later for analysis. The algorithm was tested on 60 different images of the YSI Sonde with different perspective features and shading. Experimental results, in comparison to ground-truth results, demonstrate the effectiveness of the proposed method.
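    A condensed sketch of the later stages of this pipeline is shown below, assuming OpenCV, pytesseract, and already-detected display corners (the Hough-line intersection step is abbreviated). The target size and kernel size are assumptions; the production application may differ.

```python
# Sketch: perspective correction, black-hat shading correction, Otsu
# binarisation, and OCR of the instrument display (assumed parameters).
import cv2
import numpy as np
import pytesseract

def read_display(img, corners):
    """corners: 4 source points of the display, e.g. from Hough-line intersections."""
    w, h = 640, 480                                   # assumed target size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    warped = cv2.warpPerspective(img, M, (w, h))      # remove perspective distortion
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
    shading = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # shading correction
    _, binary = cv2.threshold(shading, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)        # OCR the binarised display
```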

  6. Recognition of grasp types through principal components of DWT based EMG features.

    PubMed

    Kakoty, Nayan M; Hazarika, Shyamanta M

    2011-01-01

    With the advancement in machine learning and signal processing techniques, electromyogram (EMG) signals have increasingly gained importance in man-machine interaction. Multifingered hand prostheses using surface EMG for control have appeared on the market. However, EMG-based control is still rudimentary, being limited to a few hand postures and relying on a higher number of EMG channels. Moreover, control is non-intuitive, in the sense that the user is required to learn to associate remnant muscle actions with unrelated postures of the prosthesis. Herein lies the promise of a low-channel EMG-based grasp classification architecture for the development of an embedded intelligent prosthetic controller. This paper reports the classification of six grasp types, used during 70% of daily living activities, based on two-channel forearm EMG. A feature vector is derived through principal component analysis of discrete wavelet transform coefficient-based features of the EMG signal. Classification is performed with a radial basis function kernel support vector machine, following preprocessing and maximum voluntary contraction normalization of the EMG signals, with 10-fold cross-validation. We have achieved an average recognition rate of 97.5%.
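    The feature and classification chain described above can be sketched as follows, with assumed wavelet, decomposition level, and SVM hyper-parameters; this is an outline of the approach, not the authors' code.

```python
# Sketch: DWT-coefficient statistics of two-channel forearm EMG -> PCA ->
# RBF-kernel SVM for grasp-type classification (assumed parameters).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dwt_features(emg, wavelet="db4", level=4):
    """emg: channels x samples; mean absolute value and std of each DWT sub-band."""
    fv = []
    for ch in emg:
        for coeffs in pywt.wavedec(ch, wavelet, level=level):
            fv += [np.mean(np.abs(coeffs)), np.std(coeffs)]
    return np.array(fv)

# Hypothetical usage, after MVC normalisation of the trials:
# X = np.array([dwt_features(trial) for trial in trials]); y = grasp_labels
# clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", gamma="scale")).fit(X, y)
```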

  7. Face recognition in simulated prosthetic vision: face detection-based image processing strategies

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Wu, Xiaobei; Lu, Yanyu; Wu, Hao; Kan, Han; Chai, Xinyu

    2014-08-01

    Objective. Given the limited visual percepts elicited by current prosthetic devices, it is essential to optimize image content in order to assist implant wearers to achieve better performance of visual tasks. This study focuses on recognition of familiar faces using simulated prosthetic vision. Approach. Combined with region-of-interest (ROI) magnification, three face extraction strategies based on a face detection technique were used: the Viola-Jones face region, the statistical face region (SFR) and the matting face region. Main results. These strategies significantly enhanced recognition performance compared to directly lowering resolution (DLR) with Gaussian dots. The inclusion of certain external features, such as hairstyle, was beneficial for face recognition. Given the high recognition accuracy achieved and applicable processing speed, SFR-ROI was the preferred strategy. DLR processing resulted in significant face gender recognition differences (i.e. females were more easily recognized than males), but these differences were not apparent with other strategies. Significance. Face detection-based image processing strategies improved visual perception by highlighting useful information. Their use is advisable for face recognition when using low-resolution prosthetic vision. These results provide information for the continued design of image processing modules for use in visual prosthetics, thus maximizing the benefits for future prosthesis wearers.

  8. Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1991-01-01

    A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct different physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency edge based recognition vs. low spatial frequency lightness (and color) based recognition. The former is more central and general while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.

  9. Human Activity Recognition from Smart-Phone Sensor Data using a Multi-Class Ensemble Learning in Home Monitoring.

    PubMed

    Ghose, Soumya; Mitra, Jhimli; Karunanithi, Mohan; Dowling, Jason

    2015-01-01

    Home monitoring of chronically ill or elderly patients can reduce frequent hospitalisations and hence provide improved quality of care at a reduced cost to the community, thereby reducing the burden on the healthcare system. Activity recognition of such patients is of high importance in such a design. In this work, a system for automatic human physical activity recognition from smart-phone inertial sensor data is proposed. An ensemble of decision trees framework is adopted to train and predict the multi-class human activity system. A comparison of our proposed method with a traditional multi-class support vector machine shows significant improvement in activity recognition accuracy.
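    A minimal sketch of such a comparison is given below, using a random forest as one concrete ensemble-of-decision-trees model against a multi-class RBF SVM; the estimator count and cross-validation setup are assumptions rather than the study's protocol.

```python
# Illustrative comparison: ensemble of decision trees vs. multi-class SVM on
# windowed smart-phone inertial features (assumed data layout).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def compare_models(X, y):
    """X: feature matrix (one row per window), y: activity labels."""
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    svm = SVC(kernel="rbf", gamma="scale")
    return {
        "decision-tree ensemble": cross_val_score(forest, X, y, cv=5).mean(),
        "multi-class SVM": cross_val_score(svm, X, y, cv=5).mean(),
    }
```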

  10. A Novel Word Based Arabic Handwritten Recognition System Using SVM Classifier

    NASA Astrophysics Data System (ADS)

    Khalifa, Mahmoud; Bingru, Yang

    Every language script has its own structure, characteristics, and features. Character-based word recognition depends on the features that can be extracted from individual characters. Word-based script recognition overcomes the problem of character segmentation and can be applied to several languages (Arabic, Urdu, Farsi, etc.). In this paper, Arabic handwriting is recognized with a word-based system. First, words are segmented and normalized in size to fit the DCT input. Features are then extracted by computing the Euclidean distance between pairs of objects in an n-by-m data matrix X, based on an extrema point operator. Support Vector Machines (SVMs) are then applied in a one-to-one scheme as a discriminative framework to address feature classification. The approach was tested on several public databases and achieved a high recognition rate.
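    As a loose, hedged illustration of DCT-based word features feeding an SVM, the sketch below keeps the low-frequency block of a 2-D DCT of a size-normalised word image; the block size, kernel, and feature scheme are assumptions and differ from the distance-based features described above.

```python
# Illustrative only: low-frequency 2-D DCT coefficients of a size-normalised
# word image as features for an SVM word classifier.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def dct_features(word_img, keep=16):
    """word_img: size-normalised grayscale word image; keep the top-left block."""
    coeffs = dctn(word_img.astype(float), norm="ortho")
    return coeffs[:keep, :keep].ravel()           # low frequencies carry most energy

# Hypothetical usage:
# X = np.array([dct_features(img) for img in word_images]); y = word_labels
# clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
```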

  11. Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data.

    PubMed

    Munoz-Organero, Mario; Lotfi, Ahmad

    2016-01-01

    Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Identified activities with time-stationary features are characterised inside a predefined temporal window by using different machine learning algorithms on extracted features from the measured data. Better accuracy, precision and recall levels could be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect human basic movements from wearable measured data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels based on characterising some particular points in the temporal series obtained from a single sensor. The underlying idea is that this algorithm would be implemented in the sensor device in order to pre-process the sensed data stream before sending the information to a central point combining the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection and fall detection and classification using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented. PMID:27618063
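    A minimal sketch of the single-step-detection case is shown below: characteristic peaks are located in the acceleration-magnitude series of a single tri-axial accelerometer. The sampling rate and thresholds are assumptions, and the paper's full characterisation and fall-classification logic is not reproduced.

```python
# Sketch (assumed thresholds): detect step peaks in the magnitude of a single
# tri-axial accelerometer signal.
import numpy as np
from scipy.signal import find_peaks

def detect_steps(acc, fs=50.0, min_height=11.0, min_gap_s=0.3):
    """acc: N x 3 samples in m/s^2; returns sample indices of detected step peaks."""
    magnitude = np.linalg.norm(acc, axis=1)        # orientation-independent signal
    peaks, _ = find_peaks(magnitude,
                          height=min_height,       # must clearly exceed gravity
                          distance=int(min_gap_s * fs))
    return peaks
```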

  12. Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data

    PubMed Central

    Munoz-Organero, Mario; Lotfi, Ahmad

    2016-01-01

    Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Identified activities with time-stationary features are characterised inside a predefined temporal window by using different machine learning algorithms on extracted features from the measured data. Better accuracy, precision and recall levels could be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect human basic movements from wearable measured data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels based on characterising some particular points in the temporal series obtained from a single sensor. The underlying idea is that this algorithm would be implemented in the sensor device in order to pre-process the sensed data stream before sending the information to a central point combining the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection and fall detection and classification using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented. PMID:27618063

  13. Human Movement Recognition Based on the Stochastic Characterisation of Acceleration Data.

    PubMed

    Munoz-Organero, Mario; Lotfi, Ahmad

    2016-09-09

    Human activity recognition algorithms based on information obtained from wearable sensors are successfully applied in detecting many basic activities. Identified activities with time-stationary features are characterised inside a predefined temporal window by using different machine learning algorithms on extracted features from the measured data. Better accuracy, precision and recall levels could be achieved by combining the information from different sensors. However, detecting short and sporadic human movements, gestures and actions is still a challenging task. In this paper, a novel algorithm to detect human basic movements from wearable measured data is proposed and evaluated. The proposed algorithm is designed to minimise computational requirements while achieving acceptable accuracy levels based on characterising some particular points in the temporal series obtained from a single sensor. The underlying idea is that this algorithm would be implemented in the sensor device in order to pre-process the sensed data stream before sending the information to a central point combining the information from different sensors to improve accuracy levels. Intra- and inter-person validation is used for two particular cases: single step detection and fall detection and classification using a single tri-axial accelerometer. Relevant results for the above cases and pertinent conclusions are also presented.

  14. A Smartwatch-Based Assistance System for the Elderly Performing Fall Detection, Unusual Inactivity Recognition and Medication Reminding.

    PubMed

    Deutsch, Markus; Burgsteiner, Harald

    2016-01-01

    The growing number of elderly people in our society makes it increasingly important to help them live an independent and self-determined life up until a high age. The aim was to implement a smartwatch-based assistance system capable of automatically detecting emergencies and helping elderly people adhere to their medical therapy. Using the acceleration data of a widely available smartwatch, we implemented fall detection and inactivity recognition based on a smartphone connected via Bluetooth. The resulting system is capable of performing fall detection and inactivity recognition, issuing medication reminders, and alerting relatives upon manual activation. Though some challenges, such as the dependence on a smartphone, remain, the resulting system is a promising approach to helping elderly people as well as their relatives live independently and with a feeling of safety. PMID:27139412

  15. Activity based video indexing and search

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Jiang, Qin; Medasani, Swarup; Allen, David; Lu, Tsai-ching

    2010-04-01

    We describe a method for searching videos in large video databases based on the activity content present in the videos. Being able to search videos based on their content (such as human activities) has many applications in security, surveillance, and commercial domains such as on-line video search. Conventional video content-based retrieval (CBR) systems are either feature based or semantics based, with the former trying to model the dynamic video content using the statistics of image features and the latter relying on automated scene understanding of the video content. Neither approach has been successful. Our approach is inspired by the success of the visual vocabulary of "Video Google" by Sivic and Zisserman, and the work of Nister and Stewenius, who showed that building a visual vocabulary tree can improve performance in both scalability and retrieval accuracy for 2-D images. We apply the visual vocabulary and vocabulary tree approach to spatio-temporal video descriptors for video indexing, taking advantage of the discrimination power of these descriptors as well as the scalability of the vocabulary tree for indexing. Furthermore, this approach does not rely on any model-based activity recognition; in fact, training of the vocabulary tree is done off-line on unlabeled data with unsupervised learning, so the approach is widely applicable. Experimental results using standard human activity recognition videos are presented that demonstrate the feasibility of this approach.
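    The sketch below flattens the vocabulary-tree idea to a single k-means visual vocabulary over pre-computed spatio-temporal descriptors and builds a bag-of-words histogram per video as its index entry; the vocabulary size and clustering settings are assumptions, and the hierarchical tree structure is omitted.

```python
# Sketch: k-means visual vocabulary over spatio-temporal descriptors and a
# bag-of-visual-words signature per video for indexing (assumed settings).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors, k=1000):
    """all_descriptors: descriptors stacked from unlabeled training videos."""
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_descriptors)

def video_signature(descriptors, vocab):
    """Normalised histogram of visual words; compare signatures with e.g. cosine similarity."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)
```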

  16. Facial recognition of happiness among older adults with active and remitted major depression.

    PubMed

    Shiroma, Paulo R; Thuras, Paul; Johns, Brian; Lim, Kelvin O

    2016-09-30

    Biased emotion processing in depression might be a trait characteristic independent of mood improvement and a vulnerability factor for developing further depressive episodes. This phenomenon has not been adequately examined among older adults with depression. In a 2-year cross-sectional study, 59 older patients who had either active or remitted major depression, or who were never depressed, completed a facial emotion recognition task (FERT) to probe perceptual bias for happiness. The results showed that depressed patients, compared with never-depressed subjects, had significantly lower sensitivity in identifying happiness, particularly at moderate intensities of facial stimuli. Patients in remission from a previous major depressive episode but with no or minimal symptoms had a similar sensitivity rate for identifying happy facial expressions as patients with an active depressive episode. Further studies are necessary to confirm whether recognition of happy expressions reflects a persistent perceptual bias of major depression in older adults. PMID:27428081

  17. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  18. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations. PMID:23948388

  19. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy. PMID:19366643

  20. Hierarchical Vision-based Algorithm for Vehicle Model Type Recognition from Time-sequence Road Images

    NASA Astrophysics Data System (ADS)

    Zheng, Mingxie; Gotoh, Toshiyuki; Shiohara, Morito

    This paper describes a vision-based algorithm for recognizing the vehicle model type from time-sequence road images. Many vehicle models are offered commercially, and some of them resemble each other in shape, which makes it difficult to discriminate one model type from another. To solve this problem, we propose a hierarchical recognition method with a training process, in which groups of similar models are first generated and the features effective for discriminating the models within each group are then selected using the subspace method. In the recognition process, the front area of the vehicle is first detected in each frame of the input time-sequence images, and then a hierarchical recognition consisting of group and category discrimination is performed. Finally, the per-frame recognition results are integrated to realize stable recognition. Experimental results using time-sequence road images show that the proposed method is effective: the recognition rate for registered model types is more than 99%, and the rejection rate for unregistered vehicle types is more than 92%.

  1. Emotion recognition in frontotemporal dementia and Alzheimer's disease: A new film-based assessment.

    PubMed

    Goodkind, Madeleine S; Sturm, Virginia E; Ascher, Elizabeth A; Shdo, Suzanne M; Miller, Bruce L; Rankin, Katherine P; Levenson, Robert W

    2015-08-01

    Deficits in recognizing others' emotions are reported in many psychiatric and neurological disorders, including autism, schizophrenia, behavioral variant frontotemporal dementia (bvFTD) and Alzheimer's disease (AD). Most previous emotion recognition studies have required participants to identify emotional expressions in photographs. This type of assessment differs from real-world emotion recognition in important ways: Images are static rather than dynamic, include only 1 modality of emotional information (i.e., visual information), and are presented absent a social context. Additionally, existing emotion recognition batteries typically include multiple negative emotions, but only 1 positive emotion (i.e., happiness) and no self-conscious emotions (e.g., embarrassment). We present initial results using a new task for assessing emotion recognition that was developed to address these limitations. In this task, respondents view a series of short film clips and are asked to identify the main characters' emotions. The task assesses multiple negative, positive, and self-conscious emotions based on information that is multimodal, dynamic, and socially embedded. We evaluate this approach in a sample of patients with bvFTD, AD, and normal controls. Results indicate that patients with bvFTD have emotion recognition deficits in all 3 categories of emotion compared to the other groups. These deficits were especially pronounced for negative and self-conscious emotions. Emotion recognition in this sample of patients with AD was indistinguishable from controls. These findings underscore the utility of this approach to assessing emotion recognition and suggest that previous findings that recognition of positive emotion was preserved in dementia patients may have resulted from the limited sampling of positive emotion in traditional tests.

  2. Emotion Recognition in Frontotemporal Dementia and Alzheimer's Disease: A New Film-Based Assessment

    PubMed Central

    Goodkind, Madeleine S.; Sturm, Virginia E.; Ascher, Elizabeth A.; Shdo, Suzanne M.; Miller, Bruce L.; Rankin, Katherine P.; Levenson, Robert W.

    2015-01-01

    Deficits in recognizing others' emotions are reported in many psychiatric and neurological disorders, including autism, schizophrenia, behavioral variant frontotemporal dementia (bvFTD) and Alzheimer's disease (AD). Most previous emotion recognition studies have required participants to identify emotional expressions in photographs. This type of assessment differs from real-world emotion recognition in important ways: Images are static rather than dynamic, include only 1 modality of emotional information (i.e., visual information), and are presented absent a social context. Additionally, existing emotion recognition batteries typically include multiple negative emotions, but only 1 positive emotion (i.e., happiness) and no self-conscious emotions (e.g., embarrassment). We present initial results using a new task for assessing emotion recognition that was developed to address these limitations. In this task, respondents view a series of short film clips and are asked to identify the main characters' emotions. The task assesses multiple negative, positive, and self-conscious emotions based on information that is multimodal, dynamic, and socially embedded. We evaluate this approach in a sample of patients with bvFTD, AD, and normal controls. Results indicate that patients with bvFTD have emotion recognition deficits in all 3 categories of emotion compared to the other groups. These deficits were especially pronounced for negative and self-conscious emotions. Emotion recognition in this sample of patients with AD was indistinguishable from controls. These findings underscore the utility of this approach to assessing emotion recognition and suggest that previous findings that recognition of positive emotion was preserved in dementia patients may have resulted from the limited sampling of positive emotion in traditional tests. PMID:26010574

  3. The effects of negative emotion on encoding-related neural activity predicting item and source recognition.

    PubMed

    Yick, Yee Ying; Buratto, Luciano Grüdtner; Schaefer, Alexandre

    2015-07-01

    We report here a study that obtained reliable effects of emotional modulation of a well-known index of memory encoding--the electrophysiological "Dm" effect--using a recognition memory paradigm followed by a source memory task. In this study, participants performed an old-new recognition test of emotionally negative and neutral pictures encoded 1 day before the test, and a source memory task involving the retrieval of the temporal context in which pictures had been encoded. Our results showed that Dm activity was enhanced for all emotional items on a late positivity starting at ~400 ms post-stimulus onset, although Dm activity for high arousal items was also enhanced at an earlier stage (200-400 ms). Our results also showed that emotion enhanced Dm activity for items that were recognised both with and without correct source information. Further, when only high arousal items were considered, larger Dm amplitudes were observed if source memory was accurate. Three main conclusions are drawn from these findings. First, negative emotion can enhance encoding processes predicting the subsequent recognition of central item information. Second, if emotion reaches high levels of arousal, the encoding of contextual details can also be enhanced over and above the effects of emotion on central item encoding. Third, the morphology of our ERPs is consistent with a hybrid model of the role of attention in emotion-enhanced memory (Pottage and Schaefer, 2012).

  4. The effects of negative emotion on encoding-related neural activity predicting item and source recognition.

    PubMed

    Yick, Yee Ying; Buratto, Luciano Grüdtner; Schaefer, Alexandre

    2015-07-01

    We report here a study that obtained reliable effects of emotional modulation of a well-known index of memory encoding--the electrophysiological "Dm" effect--using a recognition memory paradigm followed by a source memory task. In this study, participants performed an old-new recognition test of emotionally negative and neutral pictures encoded 1 day before the test, and a source memory task involving the retrieval of the temporal context in which pictures had been encoded. Our results showed that Dm activity was enhanced for all emotional items on a late positivity starting at ~400 ms post-stimulus onset, although Dm activity for high arousal items was also enhanced at an earlier stage (200-400 ms). Our results also showed that emotion enhanced Dm activity for items that were recognised both with and without correct source information. Further, when only high arousal items were considered, larger Dm amplitudes were observed if source memory was accurate. Three main conclusions are drawn from these findings. First, negative emotion can enhance encoding processes predicting the subsequent recognition of central item information. Second, if emotion reaches high levels of arousal, the encoding of contextual details can also be enhanced over and above the effects of emotion on central item encoding. Third, the morphology of our ERPs is consistent with a hybrid model of the role of attention in emotion-enhanced memory (Pottage and Schaefer, 2012). PMID:25936685

  5. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors.

    PubMed

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2016-03-24

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2-30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available.
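
    The pipeline described above (segmenting inertial streams into fixed windows, extracting per-window statistics from each sensor position, and training a classifier) can be sketched in a few lines. This is a minimal illustration rather than the authors' code: the 5 s window, the mean/standard-deviation features, the random-forest classifier and the synthetic wrist/pocket recordings are all assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(stream, fs=50, win_s=5):
    """Cut a (n_samples, 3) accelerometer stream into fixed-length windows
    and compute mean and standard deviation per axis as simple features."""
    win = int(fs * win_s)
    n = (len(stream) // win) * win
    windows = stream[:n].reshape(-1, win, stream.shape[1])
    return np.hstack([windows.mean(axis=1), windows.std(axis=1)])

# Hypothetical recordings: one accelerometer at the wrist, one in the pocket.
rng = np.random.default_rng(0)
wrist = rng.normal(size=(6000, 3))
pocket = rng.normal(size=(6000, 3))

X = np.hstack([window_features(wrist), window_features(pocket)])  # fuse both positions
y = np.tile([0, 1, 2], len(X) // 3)                               # placeholder activity labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=3).mean())
```

    Combining the two positions simply means concatenating their per-window feature vectors before training; evaluating several window lengths amounts to re-running the same pipeline with different values of win_s.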

  6. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors.

    PubMed

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2016-01-01

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2-30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available. PMID:27023543

  7. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors

    PubMed Central

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J. M.

    2016-01-01

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2–30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available. PMID:27023543

  8. Bimodal biometrics based on a representation and recognition approach

    NASA Astrophysics Data System (ADS)

    Xu, Yong; Zhong, Aini; Yang, Jian; Zhang, David

    2011-03-01

    It has been demonstrated that multibiometrics can produce higher accuracy than single biometrics. This is mainly because the use of multiple biometric traits of the subject enables more information to be used for identification or verification. In this paper, we focus on bimodal biometrics and propose a novel representation and recognition approach to bimodal biometrics. This approach first denotes each biometric trait sample by a complex vector. Then, it represents the test sample through the training samples and classifies it as follows: the test sample is expressed as a linear combination of all the training samples, each being a complex vector, and the combination coefficients are obtained by solving a linear system. After evaluating the contribution of each class to representing the test sample, the approach classifies the test sample into the class that makes the greatest contribution. The proposed approach is not only novel but also simple and computationally efficient. A large number of experiments show that our method can obtain promising results.
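
    A compact way to illustrate this classification rule is to solve the linear system in a least-squares sense and score each class by how well its training samples alone reconstruct the test sample. The sketch below uses real-valued vectors and random placeholder data; the complex-vector fusion of two biometric traits described in the paper is omitted for brevity.

```python
import numpy as np

def representation_classify(X_train, y_train, x_test):
    """Express x_test as a linear combination of all training samples,
    then assign the class whose samples contribute most to the reconstruction."""
    A = X_train.T                                     # columns are training samples
    coef, *_ = np.linalg.lstsq(A, x_test, rcond=None)
    residuals = {}
    for c in np.unique(y_train):
        contribution = A[:, y_train == c] @ coef[y_train == c]
        residuals[c] = np.linalg.norm(x_test - contribution)  # smaller = greater effect
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(20, 64))       # 20 training samples, 64-dimensional features
y_train = np.repeat(np.arange(4), 5)      # 4 subjects, 5 samples each
x_test = X_train[7] + 0.05 * rng.normal(size=64)    # noisy copy of a class-1 sample
print(representation_classify(X_train, y_train, x_test))   # expected output: 1
```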

  9. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    PubMed

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-01-01

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposes a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures in the target gesture set) suggested by two reference subjects, average recognition accuracies of (82.6 ± 13.2)% and (79.7 ± 13.4)% were obtained for the 110 words, respectively, and the average recognition accuracies climbed to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system. PMID:27104534

  10. Speech Recognition-based and Automaticity Programs to Help Students with Severe Reading and Spelling Problems

    ERIC Educational Resources Information Center

    Higgins, Eleanor L.; Raskind, Marshall H.

    2004-01-01

    This study was conducted to assess the effectiveness of two programs developed by the Frostig Center Research Department to improve the reading and spelling of students with learning disabilities (LD): a computer Speech Recognition-based Program (SRBP) and a computer and text-based Automaticity Program (AP). Twenty-eight LD students with reading…

  11. The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words

    ERIC Educational Resources Information Center

    Xu, Joe; Taft, Marcus

    2015-01-01

    A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…

  12. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    PubMed

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that our COX Face DB is a good benchmark database for evaluation.

  13. A neural-network appearance-based 3-D object recognition using independent component analysis.

    PubMed

    Sahambi, H S; Khorasani, K

    2003-01-01

    This paper presents results on appearance-based three-dimensional (3-D) object recognition (3DOR) accomplished by utilizing a neural-network architecture developed based on independent component analysis (ICA). ICA has already been applied for face recognition in the literature with encouraging results. In this paper, we are exploring the possibility of utilizing the redundant information in the visual data to enhance view-based object recognition. The underlying premise here is that since ICA uses high-order statistics, it should in principle outperform principal component analysis (PCA), which does not utilize statistics higher than second order, in the recognition task. Two databases of images captured by a CCD camera are used. It is demonstrated that ICA did perform better than PCA in one of the databases, but interestingly its performance was no better than PCA in the case of the second database. This suggests that the use of ICA may not necessarily always give better results than PCA, and that the application of ICA is highly data dependent. Various factors affecting the differences in the recognition performance using both methods are also discussed. PMID:18237997
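
    The PCA-versus-ICA comparison reported above is easy to reproduce in outline: project images onto each subspace and classify with a nearest-neighbour rule on the same train/test split. The snippet below is only a sketch; it uses scikit-learn's FastICA and the bundled Olivetti faces (downloaded on first use) as stand-in data, not the CCD databases of the original study.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()                      # stand-in appearance database
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

for name, model in [("PCA", PCA(n_components=40, random_state=0)),
                    ("ICA", FastICA(n_components=40, max_iter=1000, random_state=0))]:
    Z_tr = model.fit_transform(X_tr)                # learn the subspace on training images
    Z_te = model.transform(X_te)
    acc = KNeighborsClassifier(n_neighbors=1).fit(Z_tr, y_tr).score(Z_te, y_te)
    print(f"{name} nearest-neighbour accuracy: {acc:.3f}")
```

    Whether ICA beats PCA on a given run depends on the data, which is exactly the data dependence the abstract points out.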

  14. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662

  15. Locality Constrained Joint Dynamic Sparse Representation for Local Matching Based Face Recognition

    PubMed Central

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC. PMID:25419662

  16. Recognition of Physical Activities in Overweight Hispanic Youth Using KNOWME Networks

    PubMed Central

    Emken, BA; Li, M; Thatte, G; Lee, S; Annavaram, M; Mitra, U; Narayanan, S; Spruijt-Metz, D

    2011-01-01

    Background KNOWME Networks is a wireless body area network with two tri-axial accelerometers, a heart rate monitor, and a mobile phone that acts as the data collection hub. One function of KNOWME Networks is to detect physical activity (PA) in overweight Hispanic youth. The purpose of this study was to evaluate the in-lab recognition accuracy of KNOWME. Methods Twenty overweight Hispanic participants (10 males; age 14.6±1.8 years) underwent four data collection sessions consisting of nine activities/session: lying down, sitting, sitting fidgeting, standing, standing fidgeting, standing playing an active video game, slow walking, brisk walking, and running. These data were used to train activity recognition models. The accuracy of personalized and generalized models is reported. Results Overall accuracy for personalized models was 84%. The most accurately detected activity was running (96%). The models had difficulty distinguishing between the static and fidgeting categories of sitting and standing. When static and fidgeting activity categories were collapsed, the overall accuracy improved to 94%. Personalized models demonstrated higher accuracy than generalized models. Conclusions KNOWME Networks can accurately detect a range of activities. KNOWME has the ability to collect and process data in real-time, building the foundation for tailored, real-time interventions to increase PA or decrease sedentary time. PMID:21934162

  17. Aminobenzohydrazide based colorimetric and 'turn-on' fluorescence chemosensor for selective recognition of fluoride.

    PubMed

    Anand, Thangaraj; Sivaraman, Gandhi; Iniya, Murugan; Siva, Ayyanar; Chellappa, Duraisamy

    2015-05-30

    Chemosensors based on aminobenzohydrazide Schiff bases bearing pyrene/anthracene as fluorophores have been designed and synthesized for F(-) ion recognition. The addition of fluoride ions to the receptors causes a dramatic, readily observable colour change from pale yellow to brown/red. (1)H NMR studies confirm that the F(-) ion facilitates its recognition by forming hydrogen bonds with the amide and amine hydrogens. Moreover, these sensors have also been successfully applied to the detection of fluoride ions in commercial toothpaste solution. PMID:25998453

  18. High-Precise and Robust Face-Recognition System Based on Optical Parallel Correlator

    NASA Astrophysics Data System (ADS)

    Kodate, Kashiko

    2005-10-01

    Facial recognition is applied in a wide range of security systems and has been studied since the 1970s, with extensive research into and development of digital processing. However, only 1:1 verification systems combined with ID-card identification, or ID-less systems with a small number of images in the database, are currently available. The number of images that can be stored is limited, and recognition has to be improved to account for photos taken at different angles. Commercially available facial recognition systems for the most part utilize digital computers performing electronic pattern recognition. In contrast, optical analog operations can process two-dimensional images instantaneously in parallel using a lens-based Fourier transform function. In the 1960s two methods were proposed, the VanderLugt correlator and the joint transform correlator (JTC). We present a new scheme using a multi-channel parallel JTC to make better use of spatial parallelism, through the use of a diffraction-type multi-level zone-plate array to extend a single-channel JTC. Our project's objectives were: (i) to design a matched filter which equips the system with high recognition capability at a faster calculation speed by analyzing the spatial frequency of facial image elements, and (ii) to create a four-channel VanderLugt correlator for a super-high-speed (1000 frames/s) optical parallel facial recognition system, robust enough for 1:N identification over a large database of 4000 images. Automation was also achieved for the entire process via a practical control system. The resulting super-high-speed facial recognition system based on optical parallelism has a shorter processing time than the JTC optical correlator.
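
    The correlation itself is performed optically in the system above, but the underlying matched-filter idea can be illustrated digitally: correlate the query face with each enrolled image in the Fourier domain and take the height of the correlation peak as the match score. The sketch below is a numerical analogue only, with random arrays standing in for registered face images.

```python
import numpy as np

def correlation_peak(query, reference):
    """Cross-correlate two equal-size images via the FFT and
    return the height of the correlation peak as a match score."""
    Q = np.fft.fft2(query - query.mean())
    R = np.fft.fft2(reference - reference.mean())
    corr = np.fft.ifft2(Q * np.conj(R)).real
    return corr.max()

rng = np.random.default_rng(2)
database = [rng.normal(size=(64, 64)) for _ in range(5)]   # stand-in enrolled faces
probe = database[3] + 0.1 * rng.normal(size=(64, 64))      # noisy capture of person 3

scores = [correlation_peak(probe, ref) for ref in database]
print("best match:", int(np.argmax(scores)))               # expected output: 3
```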

  19. Many neighbors are not silent. fMRI evidence for global lexical activity in visual word recognition.

    PubMed

    Braun, Mario; Jacobs, Arthur M; Richlan, Fabio; Hawelka, Stefan; Hutzler, Florian; Kronbichler, Martin

    2015-01-01

    Many neurocognitive studies investigated the neural correlates of visual word recognition, some of which manipulated the orthographic neighborhood density of words and nonwords believed to influence the activation of orthographically similar representations in a hypothetical mental lexicon. Previous neuroimaging research failed to find evidence for such global lexical activity associated with neighborhood density. Rather, effects were interpreted to reflect semantic or domain general processing. The present fMRI study revealed effects of lexicality, orthographic neighborhood density and a lexicality by orthographic neighborhood density interaction in a silent reading task. For the first time we found greater activity for words and nonwords with a high number of neighbors. We propose that this activity in the dorsomedial prefrontal cortex reflects activation of orthographically similar codes in verbal working memory thus providing evidence for global lexical activity as the basis of the neighborhood density effect. The interaction of lexicality by neighborhood density in the ventromedial prefrontal cortex showed lower activity in response to words with a high number compared to nonwords with a high number of neighbors. In the light of these results the facilitatory effect for words and inhibitory effect for nonwords with many neighbors observed in previous studies can be understood as being due to the operation of a fast-guess mechanism for words and a temporal deadline mechanism for nonwords as predicted by models of visual word recognition. Furthermore, we propose that the lexicality effect with higher activity for words compared to nonwords in inferior parietal and middle temporal cortex reflects the operation of an identification mechanism based on local lexico-semantic activity. PMID:26257634

  20. Many neighbors are not silent. fMRI evidence for global lexical activity in visual word recognition.

    PubMed

    Braun, Mario; Jacobs, Arthur M; Richlan, Fabio; Hawelka, Stefan; Hutzler, Florian; Kronbichler, Martin

    2015-01-01

    Many neurocognitive studies investigated the neural correlates of visual word recognition, some of which manipulated the orthographic neighborhood density of words and nonwords believed to influence the activation of orthographically similar representations in a hypothetical mental lexicon. Previous neuroimaging research failed to find evidence for such global lexical activity associated with neighborhood density. Rather, effects were interpreted to reflect semantic or domain general processing. The present fMRI study revealed effects of lexicality, orthographic neighborhood density and a lexicality by orthographic neighborhood density interaction in a silent reading task. For the first time we found greater activity for words and nonwords with a high number of neighbors. We propose that this activity in the dorsomedial prefrontal cortex reflects activation of orthographically similar codes in verbal working memory thus providing evidence for global lexical activity as the basis of the neighborhood density effect. The interaction of lexicality by neighborhood density in the ventromedial prefrontal cortex showed lower activity in response to words with a high number compared to nonwords with a high number of neighbors. In the light of these results the facilitatory effect for words and inhibitory effect for nonwords with many neighbors observed in previous studies can be understood as being due to the operation of a fast-guess mechanism for words and a temporal deadline mechanism for nonwords as predicted by models of visual word recognition. Furthermore, we propose that the lexicality effect with higher activity for words compared to nonwords in inferior parietal and middle temporal cortex reflects the operation of an identification mechanism based on local lexico-semantic activity.

  1. Equivalent activation of the hippocampus by face-face and face-laugh paired associate learning and recognition.

    PubMed

    Holdstock, J S; Crane, J; Bachorowski, J-A; Milner, B

    2010-11-01

    The human hippocampus is known to play an important role in relational memory. Both patient lesion studies and functional-imaging studies have shown that it is involved in the encoding and retrieval from memory of arbitrary associations. Two recent patient lesion studies, however, have found dissociations between spared and impaired memory within the domain of relational memory. Recognition of associations between information of the same kind (e.g., two faces) was spared, whereas recognition of associations between information of different kinds (e.g., face-name or face-voice associations) was impaired by hippocampal lesions. Thus, recognition of associations between information of the same kind may not be mediated by the hippocampus. Few imaging studies have directly compared activation at encoding and recognition of associations between same and different types of information. Those that have have shown mixed findings and been open to alternative interpretation. We used fMRI to compare hippocampal activation while participants studied and later recognized face-face and face-laugh paired associates. We found no differences in hippocampal activation between our two types of stimulus materials during either study or recognition. Study of both types of paired associate activated the hippocampus bilaterally, but the hippocampus was not activated by either condition during recognition. Our findings suggest that the human hippocampus is normally engaged to a similar extent by study and recognition of associations between information of the same kind and associations between information of different kinds.

  2. Model-based vision system for automatic recognition of structures in dental radiographs

    NASA Astrophysics Data System (ADS)

    Acharya, Raj S.; Samarabandu, Jagath K.; Hausmann, E.; Allen, K. A.

    1991-07-01

    X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between the cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge-based system is proposed to automatically locate the two landmarks, which are the CEJ and the level of the alveolar crest at its junction with the periodontal ligament space. This work is part of an ongoing project to automatically measure the distance between the CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary using local edge detection and edge thresholding to establish a reference and then using model knowledge to process sub-regions in locating the landmarks. Segmentation techniques invoked around these regions consist of a neural-network-like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine and then extending these boundaries towards the periodontal ligament space and the tooth boundary respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection and a control structure based on the blackboard model to coordinate the activities of these tools.

  3. Form-based priming in spoken word recognition: the roles of competition and bias.

    PubMed

    Goldinger, S D; Luce, P A; Pisoni, D B; Marcario, J K

    1992-11-01

    Phonological priming of spoken words refers to improved recognition of targets preceded by primes that share at least one of their constituent phonemes (e.g., BULL-BEER). Phonetic priming refers to reduced recognition of targets preceded by primes that share no phonemes with targets but are phonetically similar to targets (e.g., BULL-VEER). Five experiments were conducted to investigate the role of bias in phonological priming. Performance was compared across conditions of phonological and phonetic priming under a variety of procedural manipulations. Ss in phonological priming conditions systematically modified their responses on unrelated priming trials in perceptual identification, and they were slower and more errorful on unrelated trials in lexical decision than were Ss in phonetic priming conditions. Phonetic and phonological priming effects display different time courses and also different interactions with changes in proportion of related priming trials. Phonological priming involves bias; phonetic priming appears to reflect basic properties of activation and competition in spoken word recognition.

  4. Development of young oil palm tree recognition using Haar-based rectangular windows

    NASA Astrophysics Data System (ADS)

    Daliman, S.; Abu-Bakar, S. A. R.; Nor Azam, S. H. Md

    2016-06-01

    This paper presents the development of Haar-based rectangular windows for the recognition of young oil palm trees based on WorldView-2 imagery data. Haar-based rectangular windows, also known as Haar-like rectangular features, have been popular in face recognition as used in the Viola-Jones object detection framework. As in face recognition, oil palm tree recognition needs a set of Haar-based rectangular windows that suits the characteristics of the oil palm tree. A set of seven Haar-based rectangular windows has been designed to specifically match young oil palm trees, whose crown size is much smaller than that of mature trees. Determining features for the oil palm tree is an essential task to ensure a high success rate of correct oil palm tree detection. Furthermore, features that reflect the identity of an oil palm tree indicate its distinctiveness from other objects in the image, such as buildings, roads and drainage. These features are used to train a support vector machine (SVM) that models the oil palm tree for classifying the testing set and sub-images of the WorldView-2 imagery data. The resulting classification of young oil palm trees, with a sensitivity of 98.58% and an accuracy of 92.73%, is a promising result showing that the method can be used for developing automatic young oil palm tree counting.
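
    Haar-based rectangular windows are differences of pixel sums over adjacent rectangles, which an integral image makes cheap to evaluate; the resulting feature values can then be fed to the SVM like any other feature vector. The sketch below computes one two-rectangle (left-versus-right) feature; the window geometry and the toy patch are illustrative assumptions, not the seven windows designed in the paper.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] equals the sum of img[:r, :c]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in img[r0:r1, c0:c1] from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def haar_left_right(img, r0, c0, h, w):
    """Two-rectangle Haar-like feature: left half minus right half of a window."""
    ii = integral_image(img)
    mid = c0 + w // 2
    return rect_sum(ii, r0, c0, r0 + h, mid) - rect_sum(ii, r0, mid, r0 + h, c0 + w)

patch = np.zeros((16, 16))
patch[:, :8] = 1.0                                  # bright left half, dark right half
print(haar_left_right(patch, 0, 0, 16, 16))         # strong positive response: 128.0
```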

  5. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for access control systems (ACSs). Compared with existing ACSs using low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and does not accept impostors carrying authorized cards. Therefore face recognition attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limitation on the quality of available cameras and image control. Therefore ACSs using face recognition are required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. This enables the system to evaluate and utilize only the reliable features among the trained ones during each authentication, and to achieve high recognition performance rates. The field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a consistently high recognition performance rate independent of face image quality, with an EER (equal error rate) about four times lower, under a variety of image conditions, than that of a system without any prior probability distributions. In contrast, using image-difference features without any prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but constant performance rates because of its general optimization over the overall data. Compared with PCA, Real AdaBoost without any prior distribution performs twice as well under good image conditions, but degrades to a performance as good as PCA under poor image conditions.

  6. Theory of simple biochemical ``shape recognition'' via diffusion from activator coated nanoshapes

    NASA Astrophysics Data System (ADS)

    Daniels, D. R.

    2008-09-01

    Inspired by recent experiments, we model the shape sensitivity, via a typical threshold initiation response, of an underlying complex biochemical reaction network to activator coated nanoshapes. Our theory re-emphasizes that shape effects can be vitally important for the onset of functional behavior in nanopatches and nanoparticles. For certain critical or particular shapes, activator coated nanoshapes do not evoke a threshold response in a complex biochemical network setting, while for different critical or specific shapes, the threshold response is rapidly achieved. The model thus provides a general theoretical understanding for how activator coated nanoshapes can enable a chemical system to perform simple "shape recognition," with an associated "all or nothing" response. The novel and interesting cases of the chemical response due to a nanoshape that shrinks with time are additionally considered, as well as activator coated nanospheres. Possible important applications of this work include the initiation of blood clotting by nanoshapes, nanoshape effects in nanocatalysis, physiological toxicity to nanoparticles, as well as nanoshapes in nanomedicine, drug delivery, and T cell immunological response. The aim of the theory presented here is to inspire further experimentation on simple biochemical shape recognition via diffusion from activator coated nanoshapes.

  7. Surface versus Edge-Based Determinants of Visual Recognition.

    ERIC Educational Resources Information Center

    Biederman, Irving; Ju, Ginny

    1988-01-01

    The latency at which objects could be identified by 126 subjects was compared through line drawings (edge-based) or color photography (surface depiction). The line drawing was identified about as quickly as the photograph; primal access to a mental representation of an object can be modeled from an edge-based description. (SLD)

  8. Gels based on anion recognition between triurea receptor and phosphate anion.

    PubMed

    Yang, Cuiling; Wu, Biao; Chen, Yongming; Zhang, Ke

    2015-04-01

    Anion recognition between the triurea receptor and phosphate anion is demonstrated as the cross-linkage to build supramolecular polymer gels for the first time. A novel multi-block copolymer (3) is designed to have functional triurea groups as cross-linking units along the polymer main chain. By virtue of anion coordination between the triurea receptor and phosphate anion with a binding mode of 2:1, supramolecular polymer gels are then prepared based on anion recognition using 3 as the building block. PMID:25694389

  9. Note: Gaussian mixture model for event recognition in optical time-domain reflectometry based sensing systems.

    PubMed

    Fedorov, A K; Anufriev, M N; Zhirnov, A A; Stepanov, K V; Nesterov, E T; Namiot, D E; Karasik, V E; Pnev, A B

    2016-03-01

    We propose a novel approach to the recognition of particular classes of non-conventional events in signals from phase-sensitive optical time-domain-reflectometry-based sensors. Our algorithmic solution has two main features: filtering aimed at the de-noising of signals and a Gaussian mixture model to cluster them. We test the proposed algorithm using experimentally measured signals. The results show that two classes of events can be distinguished with the best-case recognition probability close to 0.9 at sufficient numbers of training samples. PMID:27036840
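
    The two-stage recipe above (de-noise the traces, then cluster them with a Gaussian mixture model) can be sketched as follows. The moving-average filter, the two per-segment features (energy and peak amplitude) and the synthetic traces are assumptions for the example; the paper's filtering and features are specific to phase-sensitive OTDR signals.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def smooth(x, k=11):
    """Simple moving-average de-noising filter."""
    return np.convolve(x, np.ones(k) / k, mode="same")

# Synthetic traces: half are quiet background, half contain an injected disturbance.
rng = np.random.default_rng(3)
features = []
for i in range(200):
    x = 0.2 * rng.normal(size=500)
    if i % 2:
        x[200:260] += np.sin(np.linspace(0, 20, 60))        # the "event"
    x = smooth(x)
    features.append([np.sum(x ** 2), np.max(np.abs(x))])    # energy and peak amplitude

gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
print("cluster sizes:", np.bincount(gmm.predict(features)))  # roughly 100 / 100 expected
```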

  10. KD-tree based clustering algorithm for fast face recognition on large-scale data

    NASA Astrophysics Data System (ADS)

    Wang, Yuanyuan; Lin, Yaping; Yang, Junfeng

    2015-07-01

    This paper proposes an acceleration method for large-scale face recognition system. When dealing with a large-scale database, face recognition is time-consuming. In order to tackle this problem, we employ the k-means clustering algorithm to classify face data. Specifically, the data in each cluster are stored in the form of the kd-tree, and face feature matching is conducted with the kd-tree based nearest neighborhood search. Experiments on CAS-PEAL and self-collected database show the effectiveness of our proposed method.
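
    The acceleration strategy above amounts to clustering the gallery offline with k-means, storing each cluster in its own kd-tree, and searching only the tree of the query's nearest cluster centre at match time. The sketch below uses random vectors as stand-in face features; the dimensionality and cluster count are arbitrary choices for the example.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
gallery = rng.normal(size=(10000, 128))             # stand-in face feature vectors

# Offline: partition the gallery and build one kd-tree per cluster.
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(gallery)
trees = {c: (cKDTree(gallery[km.labels_ == c]), np.where(km.labels_ == c)[0])
         for c in range(km.n_clusters)}

def match(query):
    """Search only the kd-tree of the cluster whose centre is closest to the query."""
    c = int(np.argmin(np.linalg.norm(km.cluster_centers_ - query, axis=1)))
    tree, indices = trees[c]
    dist, local = tree.query(query)
    return indices[local], dist

probe = gallery[1234] + 0.01 * rng.normal(size=128)
print(match(probe))                                  # expected nearest index: 1234
```

    Restricting the search to one cluster is what yields the speed-up; the price is that a true nearest neighbour lying just across a cluster boundary can occasionally be missed.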

  11. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in static as well as in real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses an approach which is a wavelet decomposition based principal component analysis for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition refers to identifying a person from his or her facial features, and it bears some resemblance to factor analysis, i.e., extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images in both the spatial and frequency domains. The experimental results indicate that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
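
    A minimal prototype of this combination keeps only the low-frequency approximation sub-band of a wavelet decomposition as the feature image and then applies PCA and a nearest-neighbour classifier. The wavelet family, the decomposition level and the use of the Olivetti faces as data are assumptions for the sketch, which relies on PyWavelets and scikit-learn rather than MATLAB.

```python
import numpy as np
import pywt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()                      # stand-in 64x64 face images

def wavelet_feature(img):
    """Keep the approximation sub-band of a 2-level Haar decomposition."""
    coeffs = pywt.wavedec2(img, "haar", level=2)
    return coeffs[0].ravel()                        # low-frequency content only

X = np.array([wavelet_feature(im) for im in faces.images])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

pca = PCA(n_components=40, random_state=0).fit(X_tr)
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_tr), y_tr)
print("recognition accuracy:", clf.score(pca.transform(X_te), y_te))
```

    Working on the much smaller approximation sub-band is what reduces the eigenvector computation that the abstract identifies as PCA's main cost.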

  12. Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images

    NASA Astrophysics Data System (ADS)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung

    2010-06-01

    Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
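
    The feature-extraction step can be pictured as a difference-of-Gaussians band-pass filter whose output is binarised into an iris code and compared by Hamming distance; the fuzzy membership weighting of unreliable bits described in the paper is omitted here, and the normalised iris strips below are random placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_code(strip, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians filtering, binarised by sign, as a toy iris code."""
    dog = gaussian_filter(strip, sigma_small) - gaussian_filter(strip, sigma_large)
    return dog > 0

def hamming(code_a, code_b):
    """Fraction of disagreeing bits between two iris codes."""
    return float(np.mean(code_a != code_b))

rng = np.random.default_rng(5)
enrolled = rng.normal(size=(48, 256))                     # stand-in normalised iris strip
same_eye = enrolled + 0.3 * rng.normal(size=(48, 256))    # noisy capture of the same iris
other_eye = rng.normal(size=(48, 256))                    # a different iris

print("genuine distance: ", hamming(dog_code(enrolled), dog_code(same_eye)))
print("impostor distance:", hamming(dog_code(enrolled), dog_code(other_eye)))
```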

  13. Dialog-Based 3D-Image Recognition Using a Domain Ontology

    NASA Astrophysics Data System (ADS)

    Hois, Joana; Wünstel, Michael; Bateman, John A.; Röfer, Thomas

    The combination of vision and speech, together with the resulting necessity for formal representations, builds a central component of an autonomous system. A robot that is supposed to navigate autonomously through space must be able to perceive its environment as automatically as possible. But each recognition system has its own inherent limits. Especially a robot whose task is to navigate through unknown terrain has to deal with unidentified or even unknown objects, thus compounding the recognition problem still further. The system described in this paper takes this into account by trying to identify objects based on their functionality where possible. To handle cases where recognition is insufficient, we examine here two further strategies: on the one hand, the linguistic reference and labeling of the unidentified objects and, on the other hand, ontological deduction. This approach then connects the probabilistic area of object recognition with the logical area of formal reasoning. In order to support formal reasoning, additional relational scene information has to be supplied by the recognition system. Moreover, for a sound ontological basis for these reasoning tasks, it is necessary to define a domain ontology that provides for the representation of real-world objects and their corresponding spatial relations in linguistic and physical respects. Physical spatial relations and objects are measured by the visual system, whereas linguistic spatial relations and objects are required for interactions with a user.

  14. Genome filtering using methylation-sensitive restriction enzymes with six-base pair recognition sites

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The large fraction of repetitive DNA in many plant genomes has complicated all aspects of DNA sequencing and assembly, and thus techniques that enrich for genes and low-copy sequences have been employed to isolate gene space. Methyl sensitive restriction enzymes with six base pair recognition sites...

  15. A Computer-Based Gaming System for Assessing Recognition Performance (RECOG).

    ERIC Educational Resources Information Center

    Little, Glenn A.; And Others

    This report documents a computer-based gaming system for assessing recognition performance (RECOG). The game management system is programmed in a modular manner to: instruct the student on how to play the game, retrieve and display individual images, keep track of how well individuals play and provide them feedback, and link these components by…

  16. 38 CFR 52.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2014-07-01 2014-07-01 false Application for recognition based on certification. 52.20 Section 52.20 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining...

  17. 38 CFR 52.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2012-07-01 2012-07-01 false Application for recognition based on certification. 52.20 Section 52.20 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining...

  18. 38 CFR 52.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2013-07-01 2013-07-01 false Application for recognition based on certification. 52.20 Section 52.20 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining...

  19. 38 CFR 52.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2011-07-01 2011-07-01 false Application for recognition based on certification. 52.20 Section 52.20 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining...

  20. When Does Modality Matter? Perceptual versus Conceptual Fluency-Based Illusions in Recognition Memory

    ERIC Educational Resources Information Center

    Miller, Jeremy K.; Lloyd, Marianne E.; Westerman, Deanne L.

    2008-01-01

    Previous research has shown that illusions of recognition memory based on enhanced perceptual fluency are sensitive to the perceptual match between the study and test phases of an experiment. The results of the current study strengthen that conclusion, as they show that participants will not interpret enhanced perceptual fluency as a sign of…

  1. Evaluating Automatic Speech Recognition-Based Language Learning Systems: A Case Study

    ERIC Educational Resources Information Center

    van Doremalen, Joost; Boves, Lou; Colpaert, Jozef; Cucchiarini, Catia; Strik, Helmer

    2016-01-01

    The purpose of this research was to evaluate a prototype of an automatic speech recognition (ASR)-based language learning system that provides feedback on different aspects of speaking performance (pronunciation, morphology and syntax) to students of Dutch as a second language. We carried out usability reviews, expert reviews and user tests to…

  2. A photochromic supramolecular polymer based on bis-p-sulfonatocalix[4]arene recognition in aqueous solution.

    PubMed

    Yao, Xuyang; Li, Teng; Wang, Sheng; Ma, Xiang; Tian, He

    2014-07-11

    A photochromic supramolecular polymer based on bis-p-sulfonatocalix[4]arene recognition with a dithienylethene derivative in aqueous solution was fabricated. The resultant polymer showed good photochromic behaviour with obvious colour switching and a morphology change under alternating UV/Vis light stimuli. PMID:24853232

  3. 38 CFR 51.10 - Per diem based on recognition and certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.10 Per diem based on recognition and certification. VA will pay per diem to a State for providing nursing home care to eligible veterans in a facility if...

  4. 38 CFR 51.10 - Per diem based on recognition and certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.10 Per diem based on recognition and certification. VA will pay per diem to a State for providing nursing home care to eligible veterans in a facility if...

  5. 38 CFR 51.10 - Per diem based on recognition and certification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.10 Per diem based on recognition and certification. VA will pay per diem to a State for providing nursing home care to eligible veterans in a facility if...

  6. 38 CFR 51.10 - Per diem based on recognition and certification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.10 Per diem based on recognition and certification. VA will pay per diem to a State for providing nursing home care to eligible veterans in a facility if...

  7. 38 CFR 51.10 - Per diem based on recognition and certification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.10 Per diem based on recognition and certification. VA will pay per diem to a State for providing nursing home care to eligible veterans in a facility if...

  8. 38 CFR 52.10 - Per diem based on recognition and certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Adult Day Health Care in State Homes § 52.10 Per diem based on recognition and certification. VA will pay per diem to a State for providing adult day health care to eligible veterans in...

  9. Culture but not gender modulates amygdala activation during explicit emotion recognition

    PubMed Central

    2012-01-01

    Background Mounting evidence indicates that humans have significant difficulties in understanding emotional expressions from individuals of different ethnic backgrounds, leading to reduced recognition accuracy and stronger amygdala activation. However, the impact of gender on the behavioral and neural reactions during the initial phase of cultural assimilation has not been addressed. Therefore, we investigated 24 Asian students (12 females) and 24 age-matched European students (12 females) during an explicit emotion recognition task, using Caucasian facial expressions only, on a high-field MRI scanner. Results Analysis of functional data revealed bilateral amygdala activation to emotional expressions in Asian and European subjects. However, in the Asian sample, a stronger response of the amygdala emerged and was paralleled by reduced recognition accuracy, particularly for angry male faces. Moreover, no significant gender difference emerged. We also observed a significant inverse correlation between duration of stay and amygdala activation. Conclusion In this study we investigated the “alien-effect” as an initial problem during cultural assimilation and examined this effect on a behavioral and neural level. This study has revealed bilateral amygdala activation to emotional expressions in Asian and European females and males. In the Asian sample, a stronger response of the amygdala bilaterally was observed and this was paralleled by reduced performance, especially for anger and disgust depicted by male expressions. However, no gender difference occurred. Taken together, while gender exerts only a subtle effect, culture and duration of stay as well as gender of poser are shown to be relevant factors for emotion processing, influencing not only behavioral but also neural responses in female and male immigrants. PMID:22642400

  10. Robust Radar Emitter Recognition Based on the Three-Dimensional Distribution Feature and Transfer Learning

    PubMed Central

    Yang, Zhutian; Qiu, Wei; Sun, Hongjian; Nallanathan, Arumugam

    2016-01-01

    Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for radar emitter signal recognition. To address this challenge, multi-component radar emitter recognition under a complicated noise environment is studied in this paper. A novel radar emitter recognition approach based on the three-dimensional distribution feature and transfer learning is proposed. The cubic feature for the time-frequency-energy distribution is proposed to describe the intra-pulse modulation information of radar emitters. Furthermore, the feature is reconstructed by using transfer learning in order to obtain a robust feature against signal-to-noise ratio (SNR) variation. Last but not least, the relevance vector machine is used to classify radar emitter signals. Simulations demonstrate that the approach proposed in this paper has better performance in accuracy and robustness than existing approaches. PMID:26927111
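
    A rough stand-in for the time-frequency-energy feature is a spectrogram flattened into a fixed-length vector. The sketch below builds such features for two toy intra-pulse modulation types and separates them with a support vector machine; the SVM replaces the paper's relevance vector machine, no transfer-learning step is included, and the signal models and noise level are invented for the example.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
fs = 1000.0

def pulse(kind):
    """Toy intra-pulse modulations: a fixed tone versus a linear chirp, plus noise."""
    t = np.arange(0, 0.5, 1 / fs)
    if kind == 0:
        s = np.sin(2 * np.pi * 100 * t)                  # constant-frequency pulse
    else:
        s = np.sin(2 * np.pi * (50 * t + 200 * t ** 2))  # linear frequency modulation
    return s + 0.5 * rng.normal(size=t.size)

def tfe_feature(sig):
    """Flattened time-frequency-energy distribution (low-frequency part of a spectrogram)."""
    f, t, Sxx = spectrogram(sig, fs=fs, nperseg=64)
    Sxx = Sxx[:16]                                       # keep the lowest 16 frequency bins
    return (Sxx / Sxx.sum()).ravel()

X = np.array([tfe_feature(pulse(k % 2)) for k in range(100)])
y = np.array([k % 2 for k in range(100)])
print("cross-validated accuracy:", cross_val_score(SVC(), X, y, cv=5).mean())
```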

  11. Robust Radar Emitter Recognition Based on the Three-Dimensional Distribution Feature and Transfer Learning.

    PubMed

    Yang, Zhutian; Qiu, Wei; Sun, Hongjian; Nallanathan, Arumugam

    2016-02-25

    Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for radar emitter signal recognition. To address this challenge, multi-component radar emitter recognition under a complicated noise environment is studied in this paper. A novel radar emitter recognition approach based on the three-dimensional distribution feature and transfer learning is proposed. The cubic feature for the time-frequency-energy distribution is proposed to describe the intra-pulse modulation information of radar emitters. Furthermore, the feature is reconstructed by using transfer learning in order to obtain a robust feature against signal-to-noise ratio (SNR) variation. Last but not least, the relevance vector machine is used to classify radar emitter signals. Simulations demonstrate that the approach proposed in this paper has better performance in accuracy and robustness than existing approaches.

  12. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance, counting the number of males and females in a shopping mall, access control in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results with various feature extraction and fusion methods, compared against the recognition rates of conventional systems, show that our approach is effective for gender recognition. PMID:26828487

  13. Robust Radar Emitter Recognition Based on the Three-Dimensional Distribution Feature and Transfer Learning.

    PubMed

    Yang, Zhutian; Qiu, Wei; Sun, Hongjian; Nallanathan, Arumugam

    2016-01-01

    Due to the increasing complexity of electromagnetic signals, radar emitter signal recognition poses a significant challenge. To address this challenge, multi-component radar emitter recognition under a complicated noise environment is studied in this paper. A novel radar emitter recognition approach based on the three-dimensional distribution feature and transfer learning is proposed. The cubic feature for the time-frequency-energy distribution is proposed to describe the intra-pulse modulation information of radar emitters. Furthermore, the feature is reconstructed by using transfer learning in order to obtain a feature that is robust against signal-to-noise ratio (SNR) variation. Finally, the relevance vector machine is used to classify radar emitter signals. Simulations demonstrate that the proposed approach outperforms existing approaches in both accuracy and robustness. PMID:26927111

  14. Intensity Variation Normalization for Finger Vein Recognition Using Guided Filter Based Single Scale Retinex

    PubMed Central

    Xie, Shan Juan; Lu, Yu; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2015-01-01

    Finger vein recognition has been considered one of the most promising biometrics for personal authentication. However, the capacities and percentages of finger tissues (e.g., bone, muscle, ligament, water, fat, etc.) vary from person to person. This usually causes poor quality of finger vein images, thereby degrading the performance of finger vein recognition systems (FVRSs). In this paper, the intrinsic factors of finger tissue causing poor quality of finger vein images are analyzed, and an intensity variation (IV) normalization method using guided filter based single scale retinex (GFSSR) is proposed for finger vein image enhancement. The experimental results on two public datasets demonstrate the effectiveness of the proposed method in enhancing the image quality and finger vein recognition accuracy. PMID:26184226

  15. Study on the classification algorithm of degree of arteriosclerosis based on fuzzy pattern recognition

    NASA Astrophysics Data System (ADS)

    Ding, Li; Zhou, Runjing; Liu, Guiying

    2010-08-01

    The pulse wave of the human body contains a large amount of physiological and pathological information, so a classification algorithm for the degree of arteriosclerosis based on fuzzy pattern recognition is studied in this paper. Taking the human pulse wave as the research object, we extract time- and frequency-domain characteristics of the pulse signal and select the parameters with the best clustering effect for arteriosclerosis identification. Moreover, the validity of the characteristic parameters is verified by the fuzzy ISODATA clustering method (FISOCM). Finally, the fuzzy pattern recognition system quantitatively distinguishes the degree of arteriosclerosis in patients. Tests on 50 samples from the constructed pulse database show that the algorithm is practical and achieves good classification results.

  16. A Genetic-Algorithm-Based Explicit Description of Object Contour and its Ability to Facilitate Recognition.

    PubMed

    Wei, Hui; Tang, Xue-Song

    2015-11-01

    Shape representation is an extremely important and longstanding problem in the field of pattern recognition. The closed contour, i.e., the shape contour, plays a crucial role in the comparison of shapes. Because the shape contour is the most stable, distinguishable, and invariant feature of an object, it is useful to incorporate it into the recognition process. This paper proposes a method based on genetic algorithms that identifies the most common contour fragments, which can then be used to represent the contours of a shape category. These common fragments make explicit the structural regularities shared by the contours. This paper shows that such an explicit representation of the shape contour contributes significantly to shape representation and object recognition.

  17. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance, counting the number of males and females in a shopping mall, access control in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results with various feature extraction and fusion methods, compared against the recognition rates of conventional systems, show that our approach is effective for gender recognition.

  18. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance, counting the number of males and females in a shopping mall, access control in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results with various feature extraction and fusion methods, compared against the recognition rates of conventional systems, show that our approach is effective for gender recognition. PMID:26828487

  19. Impact of lead sub-chronic toxicity on recognition memory and motor activity of Wistar rat.

    PubMed

    Azzaoui, F Z; Ahami, A O T; Khadmaoui, A

    2009-01-15

    The aim of this research was to investigate the impact of lead nitrate administered in drinking water for 90 days (sub-chronic toxicity) on body weight gain, motor activity, brain lead accumulation and, especially, recognition memory of Wistar rats. Two groups of young female Wistar rats were used. Treated rats received 20 mg L(-1) of lead nitrate diluted in drinking water, while control rats received drinking water only, for 3 months. Changes in body weight, motor activity and object recognition memory were evaluated, and brain lead levels were measured. Body weight was recorded weekly, whereas motor activity and memory abilities were measured alternately once every fortnight by submitting rats to the Open Field (OF) test and the Novel Object Recognition (NOR) memory test. The results showed no significant effect on body weight gain. However, significant differences between the groups were found for horizontal activity (p<0.01), long-term memory (p<0.01) at the end of the testing period, and brain lead levels (p<0.05).

  20. Enhanced iris recognition method based on multi-unit iris images

    NASA Astrophysics Data System (ADS)

    Shin, Kwang Yong; Kim, Yeong Gon; Park, Kang Ryoung

    2013-04-01

    For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When obtaining an iris image, the iris's image is frequently rotated because of the user's head roll toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods use shifting of the iris feature codes to perform the matching. However, this increases the computational complexity and level of false acceptance error. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns in the left and right eyes in the same person are different, and they are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain the information of the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by the score fusion of the left and right irises. Experimental results on the iris open database of low-resolution images showed that the
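    The geometric step above, estimating the head-roll angle from the two detected pupil centres and compensating for it when matching iris codes, can be illustrated with a short sketch. The functions, the 720-bit code length, and the bits-per-degree factor below are illustrative assumptions, not the authors' implementation.

    ```python
    # Hypothetical sketch: estimate the head-roll angle from the left/right pupil
    # centres and undo it by circularly shifting a 1-D binary iris code, in the
    # spirit of the adaptive bit-shifting step described in the abstract.
    import numpy as np

    def head_roll_angle(left_pupil, right_pupil):
        """Angle (degrees) of the line joining the two pupil centres."""
        dx = right_pupil[0] - left_pupil[0]
        dy = right_pupil[1] - left_pupil[1]
        return np.degrees(np.arctan2(dy, dx))

    def compensate_iris_code(iris_code, roll_deg, bits_per_degree=2):
        """Circularly shift the iris code to cancel the measured head roll."""
        shift = int(round(roll_deg * bits_per_degree))
        return np.roll(iris_code, -shift)

    code = np.random.randint(0, 2, 720)              # assumed 720-bit iris code
    angle = head_roll_angle((120, 260), (340, 255))
    aligned = compensate_iris_code(code, angle)
    ```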

  1. Musical expertise affects neural bases of letter recognition.

    PubMed

    Proverbio, Alice Mado; Manfredi, Mirella; Zani, Alberto; Adorni, Roberta

    2013-02-01

    It is known that early music learning (playing of an instrument) modifies functional brain structure (both white and gray matter) and connectivity, especially callosal transfer, motor control/coordination and auditory processing. We compared visual processing of notes and words in 15 professional musicians and 15 controls by recording their synchronized bioelectrical activity (ERPs) in response to words and notes. We found that musical training in childhood (from age ~8 years) modifies the neural mechanisms of word reading, regardless of genetic predisposition (which was unknown). While letter processing was strongly left-lateralized in controls, the fusiform (BA37) and inferior occipital gyri (BA18) were activated in both hemispheres in musicians for both word and music processing. The evidence that the neural mechanism of letter processing differed in musicians and controls (being fully bilateral in musicians) suggests that musical expertise modifies the neural mechanisms of letter reading.

  2. A mixture of physicochemical and evolutionary-based feature extraction approaches for protein fold recognition.

    PubMed

    Dehzangi, Abdollah; Sharma, Alok; Lyons, James; Paliwal, Kuldip K; Sattar, Abdul

    2015-01-01

    Recent advances in the pattern recognition field have stimulated enormous interest in Protein Fold Recognition (PFR). PFR is considered a crucial step towards protein structure prediction and drug design. Despite all the recent achievements, PFR remains an unsolved issue in biological science and its prediction accuracy remains unsatisfactory. Furthermore, the impact of using a wide range of physicochemical-based attributes on PFR has not been adequately explored. In this study, we propose a novel mixture of physicochemical and evolutionary-based feature extraction methods based on the concepts of segmented distribution and density. We also explore the impact of 55 different physicochemical-based attributes on PFR. Our results show that by providing more local discriminatory information and benefiting from both physicochemical and evolutionary-based features simultaneously, we can enhance protein fold prediction accuracy by up to 5% over previously reported results in the literature.
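    As a rough illustration of what a segmented distribution-style feature can look like, the sketch below splits a per-residue physicochemical attribute (e.g., hydrophobicity) into equal segments and summarises each segment with simple statistics. The attribute values, segment count, and statistics are assumptions for illustration only, not the authors' feature definitions.

    ```python
    import numpy as np

    def segmented_distribution(values, n_segments=4):
        """Mean, std and mass fraction of each segment of a per-residue attribute."""
        segments = np.array_split(np.asarray(values, dtype=float), n_segments)
        total = sum(np.abs(s).sum() for s in segments) or 1.0
        return np.concatenate([[s.mean(), s.std(), np.abs(s).sum() / total]
                               for s in segments])

    # Toy protein: 120 residues with random hydrophobicity values
    hydrophobicity = np.random.default_rng(2).uniform(-4.5, 4.5, 120)
    feature_vector = segmented_distribution(hydrophobicity)   # length 12
    ```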

  3. Combining feature- and correspondence-based methods for visual object recognition.

    PubMed

    Westphal, Günter; Würtz, Rolf P

    2009-07-01

    We present an object recognition system built on a combination of feature- and correspondence-based pattern recognizers. The feature-based part, called preselection network, is a single-layer feedforward network weighted with the amount of information contributed by each feature to the decision at hand. For processing arbitrary objects, we employ small, regular graphs whose nodes are attributed with Gabor amplitudes, termed parquet graphs. The preselection network can quickly rule out most irrelevant matches and leaves only the ambiguous cases, so-called model candidates, to be verified by a rudimentary version of elastic graph matching, a standard correspondence-based technique for face and object recognition. According to the model, graphs are constructed that describe the object in the input image well. We report the results of experiments on standard databases for object recognition. The method achieved high recognition rates on identity and pose. Unlike many other models, it can also cope with varying background, multiple objects, and partial occlusion.

  4. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is proposed. The proposed methodology contains four stages, namely, (i) denoising, (ii) feature extraction, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as the peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are passed to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks of the genetic algorithm, and the IP-HMM performs the recognition; novelty is introduced through the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy. PMID:25478588
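    A minimal sketch of the front end described above is given below: median-filter denoising, followed by simple per-frame signal statistics and vector quantization. K-means stands in for the genetic-algorithm codebook search, the MFCC and pitch features are omitted, and the IPSO-HMM stage is not shown; all names and parameters are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import medfilt
    from sklearn.cluster import KMeans

    def frame_features(signal, frame_len=400, hop=200):
        """Per-frame statistics (max, mean, std, min, peak) of a denoised signal."""
        clean = medfilt(signal, kernel_size=5)
        feats = []
        for start in range(0, len(clean) - frame_len, hop):
            frame = clean[start:start + frame_len]
            feats.append([frame.max(), frame.mean(), frame.std(),
                          frame.min(), np.abs(frame).max()])
        return np.array(feats)

    rng = np.random.default_rng(0)
    speech = rng.standard_normal(16000)              # stand-in for a 1-s utterance
    features = frame_features(speech)
    codebook = KMeans(n_clusters=8, n_init=10).fit(features)   # VQ codebook
    symbols = codebook.predict(features)             # discrete observations for an HMM
    ```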

  5. Robust and Effective Component-based Banknote Recognition by SURF Features

    PubMed Central

    Hasanuzzaman, Faiz M.; Yang, Xiaodong; Tian, YingLi

    2013-01-01

    Camera-based computer vision technology is able to assist visually impaired people to automatically recognize banknotes. A good banknote recognition algorithm for blind or visually impaired people should have the following features: 1) 100% accuracy, and 2) robustness to various conditions in different environments and occlusions. Most existing banknote recognition algorithms work only under restricted conditions. In this paper we propose a component-based framework for banknote recognition by using Speeded Up Robust Features (SURF). The component-based framework is effective in collecting more class-specific information and robust in dealing with partial occlusion and viewpoint changes. Furthermore, the evaluation of SURF demonstrates its effectiveness in handling background noise, image rotation, scale, and illumination changes. To validate the robustness and generalizability of the proposed approach, we have collected a large dataset of banknotes from a variety of conditions including occlusion, cluttered background, rotation, and changes of illumination, scaling, and viewpoints. The proposed algorithm achieves a 100% recognition rate on our challenging dataset. PMID:25531008
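    The component-matching idea can be sketched with OpenCV keypoint matching. ORB is used here as a freely available stand-in for SURF (which requires the opencv-contrib xfeatures2d module); the distance threshold, match count, and file paths are assumptions, not the authors' settings.

    ```python
    import cv2

    def match_banknote(query_path, template_path, min_matches=25):
        """Accept the banknote only if enough consistent keypoint matches are found."""
        detector = cv2.ORB_create(nfeatures=1000)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        img_q = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
        img_t = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
        _, des_q = detector.detectAndCompute(img_q, None)
        _, des_t = detector.detectAndCompute(img_t, None)
        matches = matcher.match(des_q, des_t)
        good = [m for m in matches if m.distance < 40]   # assumed distance cut-off
        return len(good) >= min_matches
    ```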

  6. Enhancing Speech Recognition Using Improved Particle Swarm Optimization Based Hidden Markov Model

    PubMed Central

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper, a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is proposed. The proposed methodology contains four stages, namely, (i) denoising, (ii) feature extraction, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as the peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are passed to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks of the genetic algorithm, and the IP-HMM performs the recognition; novelty is introduced through the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy. PMID:25478588

  7. Word Recognition Reflects Dimension-Based Statistical Learning

    ERIC Educational Resources Information Center

    Idemaru, Kaori; Holt, Lori L.

    2011-01-01

    Speech processing requires sensitivity to long-term regularities of the native language yet demands listeners to flexibly adapt to perturbations that arise from talker idiosyncrasies such as nonnative accent. The present experiments investigate whether listeners exhibit "dimension-based statistical learning" of correlations between acoustic…

  8. Development of a PLATO Based Curriculum for Tactile Speech Recognition.

    ERIC Educational Resources Information Center

    Saunders, Frank A.; And Others

    1978-01-01

    Describes a PLATO-based curriculum for teaching profoundly deaf children to understand speech sounds, which are presented as touch patterns on the abdomen. PLATO's auditory disk output is used to speak words and phrases which are converted to touch patterns via a new sensory aid, the teletactor. (Author/JEG)

  9. Recognition-Based Physical Response to Facilitate EFL Learning

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Shih, Timothy K.; Yeh, Shih-Ching; Chou, Ke-Chien; Ma, Zhao-Heng; Sommool, Worapot

    2014-01-01

    This study, based on total physical response and cognitive psychology, proposed a Kinesthetic English Learning System (KELS), which utilized Microsoft's Kinect technology to build kinesthetic interaction with life-related contexts in English. A subject test with 39 tenth-grade students was conducted following empirical research method in…

  10. The effect of involuntary motor activity on myoelectric pattern recognition: a case study with chronic stroke patients

    NASA Astrophysics Data System (ADS)

    Zhang, Xu; Li, Yun; Chen, Xiang; Li, Guanglin; Zev Rymer, William; Zhou, Ping

    2013-08-01

    Objective. This study investigates the effect of the involuntary motor activity of paretic-spastic muscles on the classification of surface electromyography (EMG) signals. Approach. Two data collection sessions were designed for 8 stroke subjects to voluntarily perform 11 functional movements using their affected forearm and hand at relatively slow and fast speeds. For each stroke subject, the degree of involuntary motor activity present in the voluntary surface EMG recordings was qualitatively described from such slow and fast experimental protocols. Myoelectric pattern recognition analysis was performed using different combinations of voluntary surface EMG data recorded from the slow and fast sessions. Main results. Across all tested stroke subjects, our results revealed that when involuntary surface EMG is absent or present in both the training and testing datasets, high accuracies (>96%, >98%, respectively, averaged over all the subjects) can be achieved in the classification of different movements using surface EMG signals from paretic muscles. When involuntary surface EMG was solely involved in either the training or testing datasets, the classification accuracies were dramatically reduced (<89%, <85%, respectively). However, if both the training and testing datasets contained EMG signals with the presence and absence of involuntary EMG interference, high accuracies were still achieved (>97%). Significance. The findings of this study can be used to guide the appropriate design and implementation of myoelectric pattern recognition based systems or devices toward promoting robot-aided therapy for stroke rehabilitation.

  11. Tuning sensitivity of CAR to EGFR density limits recognition of normal tissue while maintaining potent anti-tumor activity

    PubMed Central

    Caruso, Hillary G.; Hurton, Lenka V.; Najjar, Amer; Rushworth, David; Ang, Sonny; Olivares, Simon; Mi, Tiejuan; Switzer, Kirsten; Singh, Harjeet; Huls, Helen; Lee, Dean A.; Heimberger, Amy B.; Champlin, Richard E.; Cooper, Laurence J. N.

    2015-01-01

    Many tumors overexpress tumor-associated antigens relative to normal tissue, such as epidermal growth factor receptor (EGFR). This limits targeting by human T cells modified to express chimeric antigen receptors (CARs), due to the potential for deleterious recognition of normal cells. We sought to generate CAR+ T cells capable of distinguishing malignant from normal cells based on the disparate density of EGFR expression, by generating two CARs from monoclonal antibodies that differ in affinity. T cells with the low-affinity Nimo-CAR selectively targeted cells overexpressing EGFR, but exhibited diminished effector function as the density of EGFR decreased. In contrast, the activation of T cells bearing the high-affinity Cetux-CAR was not impacted by the density of EGFR. In summary, we describe the generation of CARs able to tune T-cell activity to the level of EGFR expression, in which a CAR with reduced affinity enabled T cells to distinguish malignant from non-malignant cells. PMID:26330164

  12. Oxoanion Recognition by Benzene-based Tripodal Pyrrolic Receptors

    SciTech Connect

    Bill, Nathan; Kim, Dae-Sik; Kim, Sung Kuk; Park, Jung Su; Lynch, Vincent M.; Young, Neil J; Hay, Benjamin; Yang, Youjun; Anslyn, Eric; Sessler, Jonathan L.

    2012-01-01

    Two new tripodal receptors based on pyrrole- and dipyrromethane-functionalised derivatives of a sterically geared precursor, 1,3,5-tris(aminomethyl)-2,4,6-triethylbenzene, are reported; these systems, compounds 1 and 2, display high affinity and selectivity for tetrahedral anionic guests, in particular dihydrogen phosphate, pyrophosphate and hydrogen sulphate, in acetonitrile, as inferred from isothermal titration calorimetry measurements. Support for the anion-binding ability of these systems comes from theoretical calculations, and a single-crystal X-ray diffraction structure of the 2:2 (host:guest) dihydrogen phosphate complex was obtained in the case of the pyrrole-based receptor system, 1. Keywords: anion receptors, dihydrogen phosphate, hydrogen sulphate, X-ray structure, theoretical calculations.

  13. Activation of wingless targets requires bipartite recognition of DNA by TCF.

    PubMed

    Chang, Mikyung V; Chang, Jinhee L; Gangopadhyay, Anu; Shearer, Andrew; Cadigan, Ken M

    2008-12-01

    Specific recognition of DNA by transcription factors is essential for precise gene regulation. In Wingless (Wg) signaling in Drosophila, target gene regulation is controlled by T cell factor (TCF), which binds to specific DNA sequences through a high mobility group (HMG) domain. However, there is considerable variability in TCF binding sites, raising the possibility that they are not sufficient for target location. Some isoforms of human TCF contain a domain, termed the C-clamp, that mediates binding to an extended sequence in vitro. However, the significance of this extended sequence for the function of Wnt response elements (WREs) is unclear. In this report, we identify a cis-regulatory element that, to our knowledge, was previously unpublished. The element, named the TCF Helper site (Helper site), is essential for the activation of several WREs. This motif greatly augments the ability of TCF binding sites to respond to Wg signaling. Drosophila TCF contains a C-clamp that enhances in vitro binding to TCF-Helper site pairs and is required for transcriptional activation of WREs containing Helper sites. A genome-wide search for clusters of TCF and Helper sites identified two new WREs. Our data suggest that DNA recognition by fly TCF occurs through a bipartite mechanism, involving both the HMG domain and the C-clamp, which enables TCF to locate and activate WREs in the nucleus. PMID:19062282

  14. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of directly observing the colon and rectum to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because the appearance of the colonic mucosa varies considerably in UC inflammation. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of the colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
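    The colour-histogram half of the descriptor and the retrieval step can be illustrated as below: each image is represented by concatenated per-channel histograms and database images are ranked by histogram intersection with the query. The HLAC texture features and the mucosa enhancement step are omitted; bin counts and the similarity measure are assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def colour_histogram(img, bins=16):
        """Concatenated, normalised per-channel histograms of an RGB image."""
        hist = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
                for c in range(3)]
        h = np.concatenate(hist).astype(float)
        return h / h.sum()

    def retrieve(query, database, top_k=5):
        """Rank database images by histogram intersection with the query."""
        q = colour_histogram(query)
        scores = [np.minimum(q, colour_histogram(img)).sum() for img in database]
        return np.argsort(scores)[::-1][:top_k]
    ```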

  15. Robust Face Recognition via Minimum Error Entropy-Based Atomic Representation.

    PubMed

    Wang, Yulong; Tang, Yuan Yan; Li, Luoqing

    2015-12-01

    Representation-based classifiers (RCs) have attracted considerable attention in face recognition in recent years. However, most existing RCs use the mean square error (MSE) criterion as the cost function, which relies on the Gaussianity assumption of the error distribution and is sensitive to non-Gaussian noise. This may severely degrade the performance of MSE-based RCs in recognizing facial images with random occlusion and corruption. In this paper, we present a minimum error entropy-based atomic representation (MEEAR) framework for face recognition. Unlike existing MSE-based RCs, our framework is based on the minimum error entropy criterion, which does not depend on the error distribution and is shown to be more robust to noise. In particular, MEEAR can produce a discriminative representation vector by minimizing the atomic norm regularized Renyi's entropy of the reconstruction error. The optimality conditions are provided for the general atomic representation model. As a general framework, MEEAR can also be used as a platform to develop new classifiers. Two effective MEE-based RCs are proposed by defining appropriate atomic sets. The experimental results on popular face databases show that MEEAR can improve both the recognition accuracy and the reconstructed results compared with the state-of-the-art MSE-based RCs. PMID:26513784

  16. Robust Face Recognition via Minimum Error Entropy-Based Atomic Representation.

    PubMed

    Wang, Yulong; Tang, Yuan Yan; Li, Luoqing

    2015-12-01

    Representation-based classifiers (RCs) have attracted considerable attention in face recognition in recent years. However, most existing RCs use the mean square error (MSE) criterion as the cost function, which relies on the Gaussianity assumption of the error distribution and is sensitive to non-Gaussian noise. This may severely degrade the performance of MSE-based RCs in recognizing facial images with random occlusion and corruption. In this paper, we present a minimum error entropy-based atomic representation (MEEAR) framework for face recognition. Unlike existing MSE-based RCs, our framework is based on the minimum error entropy criterion, which does not depend on the error distribution and is shown to be more robust to noise. In particular, MEEAR can produce a discriminative representation vector by minimizing the atomic norm regularized Renyi's entropy of the reconstruction error. The optimality conditions are provided for the general atomic representation model. As a general framework, MEEAR can also be used as a platform to develop new classifiers. Two effective MEE-based RCs are proposed by defining appropriate atomic sets. The experimental results on popular face databases show that MEEAR can improve both the recognition accuracy and the reconstructed results compared with the state-of-the-art MSE-based RCs.

  17. Multi-class remote sensing object recognition based on discriminative sparse representation.

    PubMed

    Wang, Xin; Shen, Siqiu; Ning, Chen; Huang, Fengchen; Gao, Hongmin

    2016-02-20

    The automatic recognition of multi-class objects with various backgrounds is a big challenge in the field of remote sensing (RS) image analysis. In this paper, we propose a novel recognition framework for multi-class RS objects based on the discriminative sparse representation. In this framework, the recognition problem is implemented in two stages. In the first, or discriminative dictionary learning stage, considering the characterization of remote sensing objects, the scale-invariant feature transform descriptor is first combined with an improved bag-of-words model for multi-class objects feature extraction and representation. Then, information about each class of training samples is fused into the dictionary learning process; by using the K-singular value decomposition algorithm, a discriminative dictionary can be learned for sparse coding. In the second, or recognition, stage, to improve the computational efficiency, the phase spectrum of a quaternion Fourier transform model is applied to the test image to predict a small set of object candidate locations. Then, a multi-scale sliding window mechanism is utilized to scan the image over those candidate locations to obtain the object candidates (or objects of interest). Subsequently, the sparse coding coefficients of these candidates under the discriminative dictionary are mapped to the discriminative vectors that have a good ability to distinguish different classes of objects. Finally, multi-class object recognition can be accomplished by analyzing these vectors. The experimental results show that the proposed work outperforms a number of state-of-the-art methods for multi-class remote sensing object recognition.
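    The dictionary-learning and sparse-coding core of the recognition stage can be sketched as follows. scikit-learn's DictionaryLearning stands in for K-SVD, and a nearest-class-mean rule over the sparse codes replaces the paper's discriminative mapping; the feature dimensions, class count, and regularisation values are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning, sparse_encode

    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((200, 64))       # stand-in bag-of-words features
    y_train = rng.integers(0, 4, 200)              # four object classes

    dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=200,
                              random_state=0).fit(X_train)
    codes_train = sparse_encode(X_train, dico.components_, alpha=1.0)

    # Class templates: the mean sparse code of each class
    templates = np.stack([codes_train[y_train == c].mean(axis=0) for c in range(4)])

    def classify(x):
        """Assign x to the class whose mean sparse code is closest to its own."""
        code = sparse_encode(x.reshape(1, -1), dico.components_, alpha=1.0)
        return int(np.argmin(np.linalg.norm(templates - code, axis=1)))
    ```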

  18. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems, the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames; the single-frame analysis is performed every ten full-size images. The information model obtains information related to the object, such as the time to collision for the object vehicle and the relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system in real-time object detection and recognition over thousands of image frames.

  19. Multi-class remote sensing object recognition based on discriminative sparse representation.

    PubMed

    Wang, Xin; Shen, Siqiu; Ning, Chen; Huang, Fengchen; Gao, Hongmin

    2016-02-20

    The automatic recognition of multi-class objects with various backgrounds is a big challenge in the field of remote sensing (RS) image analysis. In this paper, we propose a novel recognition framework for multi-class RS objects based on the discriminative sparse representation. In this framework, the recognition problem is implemented in two stages. In the first, or discriminative dictionary learning stage, considering the characterization of remote sensing objects, the scale-invariant feature transform descriptor is first combined with an improved bag-of-words model for multi-class objects feature extraction and representation. Then, information about each class of training samples is fused into the dictionary learning process; by using the K-singular value decomposition algorithm, a discriminative dictionary can be learned for sparse coding. In the second, or recognition, stage, to improve the computational efficiency, the phase spectrum of a quaternion Fourier transform model is applied to the test image to predict a small set of object candidate locations. Then, a multi-scale sliding window mechanism is utilized to scan the image over those candidate locations to obtain the object candidates (or objects of interest). Subsequently, the sparse coding coefficients of these candidates under the discriminative dictionary are mapped to the discriminative vectors that have a good ability to distinguish different classes of objects. Finally, multi-class object recognition can be accomplished by analyzing these vectors. The experimental results show that the proposed work outperforms a number of state-of-the-art methods for multi-class remote sensing object recognition. PMID:26906591

  20. Speech Emotion Recognition Based on Parametric Filter and Fractal Dimension

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Chen, Lijiang

    In this paper, we propose a new method that employs two novel features, correlation density (Cd) and fractal dimension (Fd), to recognize emotional states contained in speech. The former feature, obtained by a set of parametric filters, reflects the broad frequency components and the fine structure of the lower frequency components, contributed by unvoiced phones and voiced phones, respectively; the latter feature indicates the non-linearity and self-similarity of a speech signal. Comparative experiments based on Hidden Markov Model and K Nearest Neighbor methods are carried out. The results show that Cd and Fd are much more closely related to emotional expression than the commonly used features.
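    The abstract does not specify how Fd is computed; one common estimator that could play this role is Higuchi's fractal dimension, sketched below under that assumption. The kmax value and the test signal are illustrative only.

    ```python
    import numpy as np

    def higuchi_fd(x, kmax=8):
        """Higuchi fractal dimension of a 1-D signal (assumed stand-in for Fd)."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        lengths = []
        for k in range(1, kmax + 1):
            Lk = []
            for m in range(k):
                idx = np.arange(m, N, k)
                n_int = len(idx) - 1
                if n_int < 1:
                    continue
                # Normalised curve length for this offset and scale
                Lk.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / (n_int * k) / k)
            lengths.append(np.mean(Lk))
        # Slope of log L(k) versus log (1/k) gives the fractal dimension
        return np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lengths), 1)[0]

    signal = np.random.default_rng(3).standard_normal(2000)
    print(higuchi_fd(signal))    # white noise gives a value close to 2
    ```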

  1. Defects' geometric feature recognition based on infrared image edge detection

    NASA Astrophysics Data System (ADS)

    Junyan, Liu; Qingju, Tang; Yang, Wang; Yumei, Lu; Zhiping, Zhang

    2014-11-01

    Edge detection is an important technology in image segmentation, feature extraction and other digital image processing areas. Boundaries carry a wealth of image information, so effectively extracting defect edges in infrared images enables the identification of defects' geometric features. This paper analyzes the detection performance of classic edge detection operators and proposes a fuzzy C-means (FCM) clustering-Canny operator algorithm to extract defect edges in infrared images. Results show that the proposed algorithm outperforms the classic edge detection operators and identifies the defects' geometric features more completely and clearly. The defects' diameters are then calculated from the edge detection results.
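    A minimal sketch of the segment-then-detect idea is shown below: the infrared image is clustered into defect and background intensities (plain k-means standing in for fuzzy C-means) and Canny is then applied to the resulting mask. OpenCV parameters, thresholds, and the synthetic test image are assumptions, not the authors' settings.

    ```python
    import cv2
    import numpy as np

    def defect_edges(ir_image):
        """Canny edges of the defect region in a grayscale infrared image."""
        pixels = ir_image.reshape(-1, 1).astype(np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, centers = cv2.kmeans(pixels, 2, None, criteria, 5,
                                        cv2.KMEANS_RANDOM_CENTERS)
        defect_label = int(np.argmax(centers))        # hotter cluster taken as defect
        mask = (labels.reshape(ir_image.shape) == defect_label).astype(np.uint8) * 255
        return cv2.Canny(mask, 50, 150)

    ir = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
    edges = defect_edges(ir)
    ```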

  2. From neural-based object recognition toward microelectronic eyes

    NASA Technical Reports Server (NTRS)

    Sheu, Bing J.; Bang, Sa Hyun

    1994-01-01

    Engineering neural network systems are best known for their abilities to adapt to the changing characteristics of the surrounding environment by adjusting system parameter values during the learning process. Rapid advances in analog current-mode design techniques have made possible the implementation of major neural network functions in custom VLSI chips. An electrically programmable analog synapse cell with large dynamic range can be realized in a compact silicon area. New designs of the synapse cells, neurons, and analog processor are presented. A synapse cell based on Gilbert multiplier structure can perform the linear multiplication for back-propagation networks. A double differential-pair synapse cell can perform the Gaussian function for radial-basis network. The synapse cells can be biased in the strong inversion region for high-speed operation or biased in the subthreshold region for low-power operation. The voltage gain of the sigmoid-function neurons is externally adjustable which greatly facilitates the search of optimal solutions in certain networks. Various building blocks can be intelligently connected to form useful industrial applications. Efficient data communication is a key system-level design issue for large-scale networks. We also present analog neural processors based on perceptron architecture and Hopfield network for communication applications. Biologically inspired neural networks have played an important role towards the creation of powerful intelligent machines. Accuracy, limitations, and prospects of analog current-mode design of the biologically inspired vision processing chips and cellular neural network chips are key design issues.

  3. An fMRI comparison of neural activity associated with recognition of familiar melodies in younger and older adults.

    PubMed

    Sikka, Ritu; Cuddy, Lola L; Johnsrude, Ingrid S; Vanstone, Ashley D

    2015-01-01

    Several studies of semantic memory in non-musical domains involving recognition of items from long-term memory have shown an age-related shift from the medial temporal lobe structures to the frontal lobe. However, the effects of aging on musical semantic memory remain unexamined. We compared activation associated with recognition of familiar melodies in younger and older adults. Recognition follows successful retrieval from the musical lexicon that comprises a lifetime of learned musical phrases. We used the sparse-sampling technique in fMRI to determine the neural correlates of melody recognition by comparing activation when listening to familiar vs. unfamiliar melodies, and to identify age differences. Recognition-related cortical activation was detected in the right superior temporal, bilateral inferior and superior frontal, left middle orbitofrontal, bilateral precentral, and left supramarginal gyri. Region-of-interest analysis showed greater activation for younger adults in the left superior temporal gyrus and for older adults in the left superior frontal, left angular, and bilateral superior parietal regions. Our study provides powerful evidence for these musical memory networks due to a large sample (N = 40) that includes older adults. This study is the first to investigate the neural basis of melody recognition in older adults and to compare the findings to younger adults. PMID:26500480

  4. An fMRI comparison of neural activity associated with recognition of familiar melodies in younger and older adults

    PubMed Central

    Sikka, Ritu; Cuddy, Lola L.; Johnsrude, Ingrid S.; Vanstone, Ashley D.

    2015-01-01

    Several studies of semantic memory in non-musical domains involving recognition of items from long-term memory have shown an age-related shift from the medial temporal lobe structures to the frontal lobe. However, the effects of aging on musical semantic memory remain unexamined. We compared activation associated with recognition of familiar melodies in younger and older adults. Recognition follows successful retrieval from the musical lexicon that comprises a lifetime of learned musical phrases. We used the sparse-sampling technique in fMRI to determine the neural correlates of melody recognition by comparing activation when listening to familiar vs. unfamiliar melodies, and to identify age differences. Recognition-related cortical activation was detected in the right superior temporal, bilateral inferior and superior frontal, left middle orbitofrontal, bilateral precentral, and left supramarginal gyri. Region-of-interest analysis showed greater activation for younger adults in the left superior temporal gyrus and for older adults in the left superior frontal, left angular, and bilateral superior parietal regions. Our study provides powerful evidence for these musical memory networks due to a large sample (N = 40) that includes older adults. This study is the first to investigate the neural basis of melody recognition in older adults and to compare the findings to younger adults. PMID:26500480

  5. An fMRI comparison of neural activity associated with recognition of familiar melodies in younger and older adults.

    PubMed

    Sikka, Ritu; Cuddy, Lola L; Johnsrude, Ingrid S; Vanstone, Ashley D

    2015-01-01

    Several studies of semantic memory in non-musical domains involving recognition of items from long-term memory have shown an age-related shift from the medial temporal lobe structures to the frontal lobe. However, the effects of aging on musical semantic memory remain unexamined. We compared activation associated with recognition of familiar melodies in younger and older adults. Recognition follows successful retrieval from the musical lexicon that comprises a lifetime of learned musical phrases. We used the sparse-sampling technique in fMRI to determine the neural correlates of melody recognition by comparing activation when listening to familiar vs. unfamiliar melodies, and to identify age differences. Recognition-related cortical activation was detected in the right superior temporal, bilateral inferior and superior frontal, left middle orbitofrontal, bilateral precentral, and left supramarginal gyri. Region-of-interest analysis showed greater activation for younger adults in the left superior temporal gyrus and for older adults in the left superior frontal, left angular, and bilateral superior parietal regions. Our study provides powerful evidence for these musical memory networks due to a large sample (N = 40) that includes older adults. This study is the first to investigate the neural basis of melody recognition in older adults and to compare the findings to younger adults.

  6. [Automated recognition of quasars based on adaptive radial basis function neural networks].

    PubMed

    Zhao, Mei-Fang; Luo, A-Li; Wu, Fu-Chao; Hu, Zhan-Yi

    2006-02-01

    Recognizing and certifying quasars through the study of their spectra is an important method in astronomy. This paper presents a novel adaptive method for the automated recognition of quasars based on radial basis function neural networks (RBFN). The proposed method is composed of three parts: (1) the feature space is reduced by principal component analysis (PCA) of the normalized input spectra; (2) an adaptive RBFN is constructed and trained in this reduced space: K-means clustering is used for initialization, and then, based on the sum of squared errors and a gradient descent optimization technique, the number of neurons in the hidden layer is adaptively increased to improve recognition performance; (3) quasar spectra recognition is carried out by the trained RBFN. The proposed adaptive RBFN is shown not only to overcome the difficulty of selecting the number of hidden-layer neurons in the traditional RBFN algorithm, but also to increase the stability and accuracy of quasar recognition. Moreover, due to its efficiency, the proposed method is particularly useful for automatically processing the voluminous spectra produced by a large-scale sky survey project, such as our LAMOST.
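    The adaptive growth of the hidden layer can be paraphrased in a few lines: reduce the spectra with PCA, initialise centres with k-means, and keep adding centres while the training sum of squared errors still improves. Ridge regression stands in for the output-layer training, and all dimensions and thresholds below are assumptions rather than the authors' algorithm.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.linear_model import Ridge

    def rbf_features(X, centres, gamma=1.0):
        """Gaussian RBF activations of X with respect to the hidden-layer centres."""
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def train_adaptive_rbfn(X, y, max_hidden=40, tol=1e-3):
        Xp = PCA(n_components=10).fit_transform(X)
        best, best_err = None, np.inf
        for k in range(2, max_hidden + 1, 2):        # adaptively grow the hidden layer
            centres = KMeans(n_clusters=k, n_init=5).fit(Xp).cluster_centers_
            model = Ridge(alpha=1e-3).fit(rbf_features(Xp, centres), y)
            err = ((model.predict(rbf_features(Xp, centres)) - y) ** 2).sum()
            if best_err - err < tol:                 # no useful improvement: stop
                break
            best, best_err = (centres, model), err
        return best

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 50))               # stand-in for normalised spectra
    y = rng.integers(0, 2, 100)                      # quasar / non-quasar labels
    centres, model = train_adaptive_rbfn(X, y)
    ```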

  7. Cold-Pressor Stress After Learning Enhances Familiarity-Based Recognition Memory in Men

    PubMed Central

    McCullough, Andrew M.; Yonelinas, Andrew P.

    2013-01-01

    Stress that is experienced after items have been encoded into memory can protect memories from the effects of forgetting. However, very little is known about how stress impacts recognition memory. The current study investigated how an aversive laboratory stressor (i.e., the cold-pressor test) that occurs after information has been encoded into memory affects subsequent recognition memory in an immediate and a delayed test (i.e., 2-hour and 3-month retention interval). Recognition was assessed for negative and neutral photographs using a hybrid remember/know confidence procedure in order to characterize overall performance and to separate recollection- and familiarity-based responses. The results indicated that relative to a non-stress control condition, post-encoding stress significantly improved familiarity but not recollection-based recognition memory or free recall. The beneficial effects of stress were observed in males for negative and neutral materials at both immediate and long-term delays, but were not significant in females. The results indicate that aversive stress can have long-lasting beneficial effects on the memory strength of information encountered prior to the stressful event. PMID:23823181

  8. Face Recognition Using Sparse Representation-Based Classification on K-Nearest Subspace

    PubMed Central

    Mi, Jian-Xun; Liu, Jin-Xing

    2013-01-01

    The sparse representation-based classification (SRC) has been proven to be a robust face recognition method. However, its computational complexity is very high due to solving a complex ℓ1-minimization problem. To improve the calculation efficiency, we propose a novel face recognition method, called sparse representation-based classification on k-nearest subspace (SRC-KNS). Our method first exploits the distance between the test image and the subspace of each individual class to determine the nearest subspaces and then performs SRC on the selected classes. Actually, SRC-KNS is able to reduce the scale of the sparse representation problem greatly and the computation to determine the nearest subspaces is quite simple. Therefore, SRC-KNS has a much lower computational complexity than the original SRC. In order to better recognize occluded face images, we propose the modular SRC-KNS. For this modular method, face images are first partitioned into a number of blocks, and we then propose an indicator to remove the contaminated blocks and choose the nearest subspaces. Finally, SRC is used to classify the occluded test sample in the new feature space. Compared to the approach used in the original SRC work, our modular SRC-KNS can greatly reduce the computational load. A number of face recognition experiments show that our methods achieve at least a five-fold speed-up over the original SRC, while achieving comparable or even better recognition rates. PMID:23555671
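    The k-nearest-subspace preselection step can be illustrated directly: the residual of projecting the test face onto each class's principal subspace ranks the classes, and only the k closest classes would then be passed to the sparse-representation classifier (not shown). The subspace dimension, class count, and synthetic data below are assumptions.

    ```python
    import numpy as np

    def subspace_residual(x, class_samples, dim=5):
        """Distance from x to the linear subspace spanned by one class's samples."""
        mean = class_samples.mean(axis=0)
        basis = np.linalg.svd(class_samples - mean, full_matrices=False)[2][:dim].T
        proj = mean + (x - mean) @ basis @ basis.T
        return np.linalg.norm(x - proj)

    def k_nearest_classes(x, samples_by_class, k=3):
        """Labels of the k classes whose subspaces are closest to the test sample."""
        residuals = {c: subspace_residual(x, s) for c, s in samples_by_class.items()}
        return sorted(residuals, key=residuals.get)[:k]

    rng = np.random.default_rng(0)
    samples_by_class = {c: rng.standard_normal((20, 100)) + c for c in range(5)}
    probe = rng.standard_normal(100) + 2             # should rank class 2 first
    print(k_nearest_classes(probe, samples_by_class))
    ```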

  9. Sensitivity based segmentation and identification in automatic speech recognition

    NASA Astrophysics Data System (ADS)

    Absher, R.

    1984-03-01

    This research program continued an investigation of sensitivity analysis, and its use in the segmentation and identification of the phonetic units of speech, that was initiated during the 1982 Summer Faculty Research Program. The elements of the sensitivity matrix, which express the relative change in each pole of the speech model to a relative change in each coefficient of the characteristic equation, were evaluated for an expanded set of data which consisted of six vowels contained in single words spoken in a simple carrier phrase by five males with differing dialects. The objectives were to evaluate the sensitivity matrix, interpret its changes during the production of the vowels, and to evaluate inter-speaker variations. It was determined that the sensitivity analysis (1) serves to segment the vowel interval, (2) provides a measure of when a vowel is on target, and (3) should provide sufficient information to identify each particular vowel. Based on the results presented, sensitivity analysis should result in more accurate segmentation and identification of phonemes and should provide a practicable framework for incorporation of acoustic-phonetic variance as well as time and talker normalization.
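    The sensitivity matrix described above follows from implicit differentiation of the characteristic polynomial: for P(p) = sum_j a_j p^(n-j) with poles p_i, the relative sensitivity is S[i, j] = (a_j / p_i) * dp_i/da_j = -a_j * p_i^(n-j-1) / P'(p_i). The sketch below evaluates it numerically; the second-order example polynomial is arbitrary, not speech data.

    ```python
    import numpy as np

    def pole_sensitivity(coeffs):
        """Relative pole-vs-coefficient sensitivity matrix for P(p) = sum a_j p^(n-j)."""
        coeffs = np.asarray(coeffs, dtype=complex)
        n = len(coeffs) - 1
        poles = np.roots(coeffs)
        dP = np.polyder(coeffs)                      # coefficients of P'(p)
        S = np.empty((n, n + 1), dtype=complex)
        for i, p in enumerate(poles):
            for j, a in enumerate(coeffs):
                S[i, j] = -a * p ** (n - j - 1) / np.polyval(dP, p)
        return S

    print(np.abs(pole_sensitivity([1.0, -1.2, 0.81])))   # one pole pair, three coefficients
    ```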

  10. Pattern Recognition-Based Approach for Identifying Metabolites in Nuclear Magnetic Resonance-Based Metabolomics.

    PubMed

    Dubey, Abhinav; Rangarajan, Annapoorni; Pal, Debnath; Atreya, Hanudatta S

    2015-07-21

    Identification and assignments of metabolites is an important step in metabolomics and is necessary for the discovery of new biomarkers. In nuclear magnetic resonance (NMR) spectroscopy-based studies, the conventional approach involves a database search, wherein chemical shifts are assigned to specific metabolites by use of a tolerance limit. This is inefficient because deviation in chemical shifts associated with pH or temperature variations, as well as missing peaks, impairs a robust comparison with the database. We propose here a novel method based on matching the pattern of peaks rather than absolute tolerance thresholds, using a combination of geometric hashing and similarity scoring techniques. Tests with 719 metabolites from the Human Metabolome Database (HMDB) show that 100% of the metabolites can be assigned correctly when accurate data are available. A high success rate is obtained even in the presence of large chemical shift deviations such as 0.5 ppm in (1)H and 3 ppm in (13)C and missing peaks (up to 50%), compared to nearly no assignments obtained under these conditions with existing methods that employ a direct database search approach. The method was evaluated on experimental data on a mixture of 16 metabolites at eight different combinations of pH and temperature conditions. The pattern recognition approach thus helps in identification and assignment of metabolites independent of the pH, temperature, and ionic strength used, thereby obviating the need for spectral calibration with internal or external standards.
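    A toy version of shift-tolerant peak-pattern matching conveys the idea: instead of full geometric hashing, each candidate global chemical-shift offset is scored by how many reference peaks it aligns within a small window, so a uniform pH- or temperature-induced shift does not break the match. The offsets, tolerance, and synthetic peak lists below are assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    def pattern_score(observed, reference, tol=0.02,
                      offsets=np.arange(-0.5, 0.5, 0.01)):
        """Best fraction of reference peaks matched under any global offset (ppm)."""
        best = 0.0
        for off in offsets:
            shifted = reference + off
            matched = sum(np.min(np.abs(observed - p)) <= tol for p in shifted)
            best = max(best, matched / len(reference))
        return best

    # Synthetic reference pattern vs. observed peaks with a global 0.08 ppm offset
    ref = np.array([3.24, 3.40, 3.46, 3.53, 3.72, 3.83, 4.64, 5.22])
    obs = ref + 0.08 + np.random.default_rng(0).normal(0, 0.005, ref.size)
    print(pattern_score(obs, ref))       # close to 1.0 despite the shift
    ```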

  11. Evolutionary bases of carbohydrate recognition and substrate discrimination in the ROK protein family.

    PubMed

    Conejo, Maria S; Thompson, Steven M; Miller, Brian G

    2010-06-01

    The ROK (repressor, open reading frame, kinase) protein family (Pfam 00480) is a large collection of bacterial polypeptides that includes sugar kinases, carbohydrate responsive transcriptional repressors, and many functionally uncharacterized gene products. ROK family sugar kinases phosphorylate a range of structurally distinct hexoses including the key carbon source D-glucose, various glucose epimers, and several acetylated hexosamines. The primary sequence elements responsible for carbohydrate recognition within different functional categories of ROK polypeptides are largely unknown due to a limited structural characterization of this protein family. In order to identify the structural bases for substrate discrimination in individual ROK proteins, and to better understand the evolutionary processes that led to the divergent evolution of function in this family, we constructed an inclusive alignment of 227 representative ROK polypeptides. Phylogenetic analyses and ancestral sequence reconstructions of the resulting tree reveal a discrete collection of active site residues that dictate substrate specificity. The results also suggest a series of mutational events within the carbohydrate-binding sites of ROK proteins that facilitated the expansion of substrate specificity within this family. This study provides new insight into the evolutionary relationship of ROK glucokinases and non-ROK glucokinases (Pfam 02685), revealing the primary sequence elements shared between these two protein families, which diverged from a common ancestor in ancient times. PMID:20512568

  12. Rho GTPase Recognition by C3 Exoenzyme Based on C3-RhoA Complex Structure.

    PubMed

    Toda, Akiyuki; Tsurumura, Toshiharu; Yoshida, Toru; Tsumori, Yayoi; Tsuge, Hideaki

    2015-08-01

    C3 exoenzyme is a mono-ADP-ribosyltransferase (ART) that catalyzes transfer of an ADP-ribose moiety from NAD(+) to Rho GTPases. C3 has long been used to study the diverse regulatory functions of Rho GTPases. How C3 recognizes its substrate and how ADP-ribosylation proceeds are still poorly understood. Crystal structures of C3-RhoA complex reveal that C3 recognizes RhoA via the switch I, switch II, and interswitch regions. In C3-RhoA(GTP) and C3-RhoA(GDP), switch I and II adopt the GDP and GTP conformations, respectively, which explains why C3 can ADP-ribosylate both nucleotide forms. Based on structural information, we successfully changed Cdc42 to an active substrate with combined mutations in the C3-Rho GTPase interface. Moreover, the structure reflects the close relationship among Gln-183 in the QXE motif (C3), a modified Asn-41 residue (RhoA) and NC1 of NAD(H), which suggests that C3 is the prototype ART. These structures show directly for the first time that the ARTT loop is the key to target protein recognition, and they also serve to bridge the gaps among independent studies of Rho GTPases and C3.

  13. EMD-Based Symbolic Dynamic Analysis for the Recognition of Human and Nonhuman Pyroelectric Infrared Signals

    PubMed Central

    Zhao, Jiaduo; Gong, Weiguo; Tang, Yuzhen; Li, Weihong

    2016-01-01

    In this paper, we propose an effective human and nonhuman pyroelectric infrared (PIR) signal recognition method to reduce PIR detector false alarms. First, using the mathematical model of the PIR detector, we analyze the physical characteristics of the human and nonhuman PIR signals; second, based on the analysis results, we propose an empirical mode decomposition (EMD)-based symbolic dynamic analysis method for the recognition of human and nonhuman PIR signals. In the proposed method, first, we extract the detailed features of a PIR signal into five symbol sequences using an EMD-based symbolization method, then, we generate five feature descriptors for each PIR signal through constructing five probabilistic finite state automata with the symbol sequences. Finally, we use a weighted voting classification strategy to classify the PIR signals with their feature descriptors. Comparative experiments show that the proposed method can effectively classify the human and nonhuman PIR signals and reduce PIR detector’s false alarms. PMID:26805837
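    A much-simplified sketch of the symbolic-dynamics feature is shown below: a signal (which in the method above would be an EMD intrinsic mode function) is quantised into a small alphabet and the empirical symbol-transition matrix is flattened into a descriptor. The EMD stage, the five-sequence design, and the weighted voting classifier are omitted; alphabet size and the test signal are assumptions.

    ```python
    import numpy as np

    def symbolize(signal, n_symbols=4):
        """Quantise a 1-D signal into n_symbols roughly equiprobable levels."""
        edges = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
        return np.digitize(signal, edges)

    def transition_descriptor(symbols, n_symbols=4):
        """Row-normalised symbol-transition matrix, flattened into a feature vector."""
        counts = np.zeros((n_symbols, n_symbols))
        for a, b in zip(symbols[:-1], symbols[1:]):
            counts[a, b] += 1
        rows = counts.sum(axis=1, keepdims=True)
        return (counts / np.maximum(rows, 1)).ravel()

    rng = np.random.default_rng(1)
    sig = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
    descriptor = transition_descriptor(symbolize(sig))
    ```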

  14. Protein−DNA binding in the absence of specific base-pair recognition

    PubMed Central

    Afek, Ariel; Schipper, Joshua L.; Horton, John; Gordân, Raluca; Lukatsky, David B.

    2014-01-01

    Until now, it has been reasonably assumed that specific base-pair recognition is the only mechanism controlling the specificity of transcription factor (TF)−DNA binding. Contrary to this assumption, here we show that nonspecific DNA sequences possessing certain repeat symmetries, when present outside of specific TF binding sites (TFBSs), statistically control TF−DNA binding preferences. We used high-throughput protein−DNA binding assays to measure the binding levels and free energies of binding for several human TFs to tens of thousands of short DNA sequences with varying repeat symmetries. Based on statistical mechanics modeling, we identify a new protein−DNA binding mechanism induced by DNA sequence symmetry in the absence of specific base-pair recognition, and experimentally demonstrate that this mechanism indeed governs protein−DNA binding preferences. PMID:25313048

  15. EMD-Based Symbolic Dynamic Analysis for the Recognition of Human and Nonhuman Pyroelectric Infrared Signals.

    PubMed

    Zhao, Jiaduo; Gong, Weiguo; Tang, Yuzhen; Li, Weihong

    2016-01-01

    In this paper, we propose an effective human and nonhuman pyroelectric infrared (PIR) signal recognition method to reduce PIR detector false alarms. First, using the mathematical model of the PIR detector, we analyze the physical characteristics of the human and nonhuman PIR signals; second, based on the analysis results, we propose an empirical mode decomposition (EMD)-based symbolic dynamic analysis method for the recognition of human and nonhuman PIR signals. In the proposed method, first, we extract the detailed features of a PIR signal into five symbol sequences using an EMD-based symbolization method, then, we generate five feature descriptors for each PIR signal through constructing five probabilistic finite state automata with the symbol sequences. Finally, we use a weighted voting classification strategy to classify the PIR signals with their feature descriptors. Comparative experiments show that the proposed method can effectively classify the human and nonhuman PIR signals and reduce PIR detector's false alarms. PMID:26805837

  16. Pose recognition of articulated target based on ladar range image with elastic shape analysis

    NASA Astrophysics Data System (ADS)

    Liu, Zheng-Jun; Li, Qi; Wang, Qi

    2014-10-01

    Elastic shape analysis is introduced for pose recognition of articulated targets based on small samples of ladar range images. Shape deformations caused by pose changes are represented as closed elastic curves; geodesics under the square-root velocity function are used to quantify shape differences, and the Karcher mean is used to build a model library. Three kinds of moments - Hu moment invariants, affine moment invariants, and Zernike moment invariants, each combined with support vector machines (SVMs) - are applied to evaluate this approach. The experimental results show that, whatever the azimuth angles of the testing samples, this approach achieves a high recognition rate using only 3 model samples under different carrier-to-noise ratios (CNR); its performance is much better than that of the three kinds of moments with SVM, especially under high noise conditions.
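
    As a rough illustration of the elastic-shape machinery mentioned above, the sketch below computes the square-root velocity function (SRVF) of a sampled closed contour and an elastic distance between two contours. It is a simplification: the optimization over rotations and reparameterizations and the Karcher mean used to build the model library are omitted, and all names are illustrative.

      import numpy as np

      def srvf(curve):
          # curve: (N, 2) array of points sampled along a closed contour.
          velocity = np.gradient(curve, axis=0)
          speed = np.linalg.norm(velocity, axis=1, keepdims=True)
          q = velocity / np.sqrt(np.maximum(speed, 1e-12))
          return q / np.linalg.norm(q)                   # scale-normalized SRVF

      def elastic_distance(curve_a, curve_b):
          # Geodesic (arc-length) distance between unit-norm SRVFs, ignoring the
          # alignment over rotations and reparameterizations.
          qa, qb = srvf(curve_a), srvf(curve_b)
          inner = np.clip(np.sum(qa * qb), -1.0, 1.0)
          return np.arccos(inner)

      # Example with two resampled contours of equal length
      t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      circle = np.c_[np.cos(t), np.sin(t)]
      ellipse = np.c_[1.5 * np.cos(t), np.sin(t)]
      print(elastic_distance(circle, ellipse))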

  17. The activation of segmental and tonal information in visual word recognition.

    PubMed

    Li, Chuchu; Lin, Candise Y; Wang, Min; Jiang, Nan

    2013-08-01

    Mandarin Chinese has a logographic script in which graphemes map onto syllables and morphemes. It is not clear whether Chinese readers activate phonological information during lexical access, although phonological information is not explicitly represented in Chinese orthography. In the present study, we examined the activation of phonological information, including segmental and tonal information in Chinese visual word recognition, using the Stroop paradigm. Native Mandarin speakers named the presentation color of Chinese characters in Mandarin. The visual stimuli were divided into five types: color characters (e.g., , hong2, "red"), homophones of the color characters (S+T+; e.g., , hong2, "flood"), different-tone homophones (S+T-; e.g., , hong1, "boom"), characters that shared the same tone but differed in segments with the color characters (S-T+; e.g., , ping2, "bottle"), and neutral characters (S-T-; e.g., , qian1, "leading through"). Classic Stroop facilitation was shown in all color-congruent trials, and interference was shown in the incongruent trials. Furthermore, the Stroop effect was stronger for S+T- than for S-T+ trials, and was similar between S+T+ and S+T- trials. These findings suggested that both tonal and segmental forms of information play roles in lexical constraints; however, segmental information has more weight than tonal information. We proposed a revised visual word recognition model in which the functions of both segmental and suprasegmental types of information and their relative weights are taken into account. PMID:23400856

  18. Soluble Collectin-12 (CL-12) Is a Pattern Recognition Molecule Initiating Complement Activation via the Alternative Pathway.

    PubMed

    Ma, Ying Jie; Hein, Estrid; Munthe-Fog, Lea; Skjoedt, Mikkel-Ole; Bayarri-Olmos, Rafael; Romani, Luigina; Garred, Peter

    2015-10-01

    Soluble defense collagens including the collectins play important roles in innate immunity. Recently, a new member of the collectin family named collectin-12 (CL-12 or CL-P1) has been identified. CL-12 is highly expressed in umbilical cord vascular endothelial cells as a transmembrane receptor and may recognize certain bacteria and fungi, leading to opsonophagocytosis. However, based on its structural and functional similarities with soluble collectins, we hypothesized the existence of a fluid-phase analog of CL-12 released from cells, which may function as a soluble pattern-recognition molecule. Using recombinant CL-12 full length or CL-12 extracellular domain, we determined the occurrence of soluble CL-12 shed from in vitro cultured cells. Western blot showed that soluble recombinant CL-12 migrated with a band corresponding to ∼ 120 kDa under reducing conditions, whereas under nonreducing conditions it presented multimeric assembly forms. Immunoprecipitation and Western blot analysis of human umbilical cord plasma enabled identification of a natural soluble form of CL-12 having an electrophoretic mobility pattern close to that of shed soluble recombinant CL-12. Soluble CL-12 could recognize Aspergillus fumigatus partially through the carbohydrate-recognition domain in a Ca(2+)-independent manner. This led to activation of the alternative pathway of complement exclusively via association with properdin on A. fumigatus as validated by detection of C3b deposition and formation of the terminal complement complex. These results demonstrate the existence of CL-12 in a soluble form and indicate a novel mechanism by which the alternative pathway of complement may be triggered directly by a soluble pattern-recognition molecule.

  19. Recognition of human activities using depth images of Kinect for biofied building

    NASA Astrophysics Data System (ADS)

    Ogawa, Ami; Mita, Akira

    2015-03-01

    These days, various functions are needed in living spaces because of the aging society, the promotion of energy conservation, and the diversification of lifestyles. To meet this requirement, we propose the "Biofied Building", a system learnt from living beings. As a key function of this system, various kinds of information are accumulated in a database using small sensor agent robots in order to control the living space. Among the various kinds of information about living spaces, human activities in particular can serve as triggers for lighting or air-conditioning control, making customized spaces possible. Human activities are divided into two groups: activities consisting of a single behavior and activities consisting of multiple behaviors. For example, "standing up" or "sitting down" consists of a single behavior; these activities are accompanied by large motions. On the other hand, "eating" consists of several behaviors - holding the chopsticks, picking up the food, putting it in the mouth, and so on - which are continuous motions. Considering the characteristics of these two types of human activities, we use two methods individually: R transformation and variance. In this paper, we focus on these two different types of human activities and propose two human activity recognition methods for constructing the database of living spaces for the "Biofied Building". Finally, we compare the results of both methods.
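
    The variance-based branch for large single-behavior motions can be pictured with a simple sliding-window statistic over skeleton coordinates extracted from the Kinect depth stream. The sketch below is only an assumed illustration of that idea; the window length and the joint-stream layout are choices made here, and the R-transformation branch is not reproduced.

      import numpy as np

      def windowed_variance(joints, window=30):
          # joints: (T, n_joints * 3) skeleton coordinates over time.
          T = joints.shape[0]
          out = np.zeros(T - window + 1)
          for t in range(T - window + 1):
              out[t] = joints[t:t + window].var(axis=0).mean()
          return out

      # Example: a burst of large motion embedded in an otherwise quiet recording
      rng = np.random.default_rng(0)
      stream = 0.01 * rng.standard_normal((300, 60))     # quiet recording
      stream[100:140] += np.linspace(0, 1, 40)[:, None]  # one large transition
      print(windowed_variance(stream).argmax())          # index near the burst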

  20. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    PubMed Central

    Islam, Md. Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers - that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system - are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing N Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
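
    Decision-level fusion by voting, as used above, reduces to taking the most frequent label across classifiers for each sample. A minimal, generic sketch (standing in for the paper's four HMM-based subsystems) follows; the label arrays are illustrative.

      from collections import Counter
      import numpy as np

      def vote(predictions):
          # predictions: list of 1-D label arrays, one per classifier.
          fused = []
          for labels in zip(*predictions):
              fused.append(Counter(labels).most_common(1)[0][0])
          return np.array(fused)

      # Example with three toy classifiers that already produced label arrays
      p1 = np.array([0, 1, 1, 2])
      p2 = np.array([0, 1, 2, 2])
      p3 = np.array([1, 1, 1, 2])
      print(vote([p1, p2, p3]))                          # -> [0 1 1 2]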

  1. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression

    PubMed Central

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms. PMID:27525734
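
    Two of the ingredients described above are easy to sketch in isolation: the nuclear norm of a 2-D reconstruction-error image, and the fusion of class scores obtained at several patch scales. The fragment below shows only those pieces under stated assumptions; the regression step that produces the per-class error images is omitted, and the fusion weights and scores are illustrative.

      import numpy as np

      def nuclear_norm(error_image):
          # Sum of singular values of a 2-D reconstruction-error image.
          return np.linalg.svd(error_image, compute_uv=False).sum()

      def fuse_scales(score_lists, weights=None):
          # score_lists: one per-class score array per patch scale; lower is better.
          scores = np.array(score_lists, dtype=float)
          weights = np.ones(len(scores)) if weights is None else np.asarray(weights, dtype=float)
          fused = (weights[:, None] * scores).sum(axis=0)
          return int(np.argmin(fused))

      # Example: three scales, four classes (scores stand in for nuclear-norm residuals)
      scale_scores = [np.array([3.1, 2.0, 4.2, 5.0]),
                      np.array([2.9, 2.2, 4.0, 4.8]),
                      np.array([3.0, 1.9, 4.5, 5.1])]
      print(fuse_scales(scale_scores))                   # -> 1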

  2. Feature and score fusion based multiple classifier selection for iris recognition.

    PubMed

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers - that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system - are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing N Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676

  3. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    PubMed

    Gao, Guangwei; Yang, Jian; Jing, Xiaoyuan; Huang, Pu; Hua, Juliang; Yue, Dong

    2016-01-01

    In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms. PMID:27525734

  4. Fast vision through frameless event-based sensing and convolutional processing: application to texture recognition.

    PubMed

    Perez-Carrasco, Jose Antonio; Acha, Begona; Serrano, Carmen; Camunas-Mesa, Luis; Serrano-Gotarredona, Teresa; Linares-Barranco, Bernabe

    2010-04-01

    Address-event representation (AER) is an emergent hardware technology which shows a high potential for providing in the near future a solid technological substrate for emulating brain-like processing structures. When used for vision, AER sensors and processors are not restricted to capturing and processing still image frames, as in commercial frame-based video technology, but sense and process visual information in a pixel-level, event-based, frameless manner. As a result, vision processing is practically simultaneous with vision sensing, since there is no need to wait for the sensing of full frames. Also, only meaningful information is sensed, communicated, and processed. Of special interest for brain-like vision processing are some already reported AER convolutional chips, which have revealed a very high computational throughput as well as the possibility of assembling large convolutional neural networks in a modular fashion. It is expected that in the near future we may witness the appearance of large-scale convolutional neural networks with hundreds or thousands of individual modules. In the meantime, some research is needed to investigate how to assemble and configure such large-scale convolutional networks for specific applications. In this paper, we analyze AER spiking convolutional neural networks for texture recognition hardware applications. Based on the performance figures of already available individual AER convolution chips, we emulate large-scale networks using a custom-made event-based behavioral simulator. We have developed a new event-based processing architecture that emulates with AER hardware Manjunath's frame-based feature recognition software algorithm, and have analyzed its performance using our behavioral simulator. Recognition rate performance is not degraded. However, regarding speed, we show that recognition can be achieved before an equivalent frame is fully sensed and transmitted.
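
    The frameless, event-driven style of convolution can be emulated in a few lines of software: each incoming address event adds a kernel neighborhood to a membrane accumulator, and output events fire wherever a threshold is crossed. The toy sketch below illustrates the principle only; it is not a model of the reported AER chips or of the authors' behavioral simulator, and all parameters are illustrative.

      import numpy as np

      def event_convolution(events, kernel, shape, threshold=3.0):
          # events: iterable of (x, y) addresses; kernel: 2-D array; shape: output grid.
          kh, kw = kernel.shape
          membrane = np.zeros(shape)
          out_events = []
          for x, y in events:                            # one event at a time, no frames
              x0, y0 = x - kh // 2, y - kw // 2
              for i in range(kh):
                  for j in range(kw):
                      xi, yj = x0 + i, y0 + j
                      if 0 <= xi < shape[0] and 0 <= yj < shape[1]:
                          membrane[xi, yj] += kernel[i, j]
                          if membrane[xi, yj] >= threshold:
                              out_events.append((xi, yj))
                              membrane[xi, yj] = 0.0     # reset after firing
          return out_events

      kernel = np.ones((3, 3))
      events = [(5, 5), (5, 6), (6, 5), (5, 5)]
      print(event_convolution(events, kernel, shape=(10, 10)))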

  5. 3D face recognition based on multiple keypoint descriptors and sparse representation.

    PubMed

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
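
    The sparse-representation-based classification (SRC) step can be sketched generically: code a probe descriptor over a gallery dictionary with a sparse solver and assign the class whose atoms give the smallest reconstruction residual. The fragment below assumes descriptors have already been extracted (meshSIFT is not reproduced) and uses scikit-learn's orthogonal matching pursuit as the sparse solver; it is an illustration, not the released 3DMKDSRC code.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def src_classify(probe, gallery, labels, n_nonzero=10):
          # gallery: (n_dims, n_atoms) dictionary of descriptors; labels: class id per atom.
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
          omp.fit(gallery, probe)
          coef = omp.coef_
          residuals = {}
          for c in np.unique(labels):
              mask = labels == c
              residuals[c] = np.linalg.norm(probe - gallery[:, mask] @ coef[mask])
          return min(residuals, key=residuals.get)       # class with smallest residual

      # Example with a random dictionary of two classes
      rng = np.random.default_rng(1)
      gallery = rng.standard_normal((64, 40))
      labels = np.repeat([0, 1], 20)
      probe = gallery[:, 3] + 0.05 * rng.standard_normal(64)   # close to a class-0 atom
      print(src_classify(probe, gallery, labels))               # -> 0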

  6. 3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation

    PubMed Central

    Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei

    2014-01-01

    Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876

  7. Anion recognition by simple chromogenic and chromo-fluorogenic salicylidene Schiff base or reduced-Schiff base receptors.

    PubMed

    Dalapati, Sasanka; Jana, Sankar; Guchhait, Nikhil

    2014-08-14

    This review covers the extensive application of the anion sensing ability of salicylidene-type Schiff bases and their reduced forms bearing various substituents relative to the phenolic OH group. Some of these molecular systems behave as receptors for the recognition or sensing of various anions in organic or aqueous-organic binary solvent mixtures as well as on solid-supported test kits. The development of Schiff base or reduced Schiff base receptors for anion recognition is commonly based on hydrogen bonding interaction with, or deprotonation of, the phenolic -OH group. Charge transfer (CT), inhibition of excited-state intramolecular proton transfer (ESIPT), or subsequent photo-induced electron transfer (PET) leads to naked-eye color changes, UV-vis spectral changes, chemical shifts in the NMR spectra, and fluorescence spectral modifications. In this review we discuss the anion sensing properties of Schiff base and reduced Schiff base receptors.

  8. Anion recognition by simple chromogenic and chromo-fluorogenic salicylidene Schiff base or reduced-Schiff base receptors

    NASA Astrophysics Data System (ADS)

    Dalapati, Sasanka; Jana, Sankar; Guchhait, Nikhil

    2014-08-01

    This review covers the extensive application of the anion sensing ability of salicylidene-type Schiff bases and their reduced forms bearing various substituents relative to the phenolic OH group. Some of these molecular systems behave as receptors for the recognition or sensing of various anions in organic or aqueous-organic binary solvent mixtures as well as on solid-supported test kits. The development of Schiff base or reduced Schiff base receptors for anion recognition is commonly based on hydrogen bonding interaction with, or deprotonation of, the phenolic -OH group. Charge transfer (CT), inhibition of excited-state intramolecular proton transfer (ESIPT), or subsequent photo-induced electron transfer (PET) leads to naked-eye color changes, UV-vis spectral changes, chemical shifts in the NMR spectra, and fluorescence spectral modifications. In this review we discuss the anion sensing properties of Schiff base and reduced Schiff base receptors.

  9. On the use of sensor fusion to reduce the impact of rotational and additive noise in human activity recognition.

    PubMed

    Banos, Oresti; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2012-01-01

    The main objective of fusion mechanisms is to increase the individual reliability of the systems through the use of collective knowledge. Moreover, fusion models are also intended to guarantee a certain level of robustness. This is particularly required for problems such as human activity recognition, where runtime changes in the sensor setup seriously disturb the reliability of the initially deployed systems. For commonly used recognition systems based on inertial sensors, these changes are primarily characterized as sensor rotations, displacements, or faults related to the batteries or calibration. In this work we show the robustness capabilities of a sensor-weighted fusion model when dealing with such disturbances under different circumstances. Using the proposed method, up to 60% better performance is obtained when a minority of the sensors are artificially rotated or degraded, independently of the level of disturbance (noise) imposed. These robustness capabilities also apply for any number of sensors affected by a low to moderate noise level. The presented fusion mechanism compensates for the poor performance that would otherwise be obtained when just a single sensor is considered.
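
    A sensor-weighted fusion of this kind can be sketched as a weighted average of per-sensor class probabilities, with weights reflecting an estimate of each sensor's current reliability. The weights and probability values below are assumptions for illustration, not the paper's trained values.

      import numpy as np

      def weighted_fusion(probas, weights):
          # probas: (n_sensors, n_classes) class probabilities for one window;
          # weights: per-sensor reliability estimates.
          weights = np.asarray(weights, dtype=float)
          weights = weights / weights.sum()
          fused = (weights[:, None] * np.asarray(probas, dtype=float)).sum(axis=0)
          return int(np.argmax(fused)), fused

      # Example: three sensors, one of them rotated or degraded (low weight)
      probas = [[0.7, 0.2, 0.1],
                [0.6, 0.3, 0.1],
                [0.1, 0.1, 0.8]]
      print(weighted_fusion(probas, weights=[0.45, 0.45, 0.10]))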

  10. On the Use of Sensor Fusion to Reduce the Impact of Rotational and Additive Noise in Human Activity Recognition

    PubMed Central

    Banos, Oresti; Damas, Miguel; Pomares, Hector; Rojas, Ignacio

    2012-01-01

    The main objective of fusion mechanisms is to increase the individual reliability of the systems through the use of collective knowledge. Moreover, fusion models are also intended to guarantee a certain level of robustness. This is particularly required for problems such as human activity recognition, where runtime changes in the sensor setup seriously disturb the reliability of the initially deployed systems. For commonly used recognition systems based on inertial sensors, these changes are primarily characterized as sensor rotations, displacements, or faults related to the batteries or calibration. In this work we show the robustness capabilities of a sensor-weighted fusion model when dealing with such disturbances under different circumstances. Using the proposed method, up to 60% better performance is obtained when a minority of the sensors are artificially rotated or degraded, independently of the level of disturbance (noise) imposed. These robustness capabilities also apply for any number of sensors affected by a low to moderate noise level. The presented fusion mechanism compensates for the poor performance that would otherwise be obtained when just a single sensor is considered. PMID:22969386

  11. Reading as Active Sensing: A Computational Model of Gaze Planning in Word Recognition

    PubMed Central

    Ferro, Marcello; Ognibene, Dimitri; Pezzulo, Giovanni; Pirrelli, Vito

    2010-01-01

    We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting. PMID:20577589

  12. Reading as active sensing: a computational model of gaze planning in word recognition.

    PubMed

    Ferro, Marcello; Ognibene, Dimitri; Pezzulo, Giovanni; Pirrelli, Vito

    2010-01-01

    We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting.

  13. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation

    PubMed Central

    Jirayucharoensak, Suwicha; Pan-Ngum, Setha; Israsena, Pasin

    2014-01-01

    Automatic emotion recognition is one of the most challenging tasks. To detect emotion from nonstationary EEG signals, a sophisticated learning algorithm that can represent high-level abstraction is required. This study proposes the utilization of a deep learning network (DLN) to discover unknown feature correlations between input signals that are crucial for the learning task. The DLN is implemented with a stacked autoencoder (SAE) using a hierarchical feature learning approach. Input features of the network are power spectral densities of 32-channel EEG signals from 32 subjects. To alleviate the overfitting problem, principal component analysis (PCA) is applied to extract the most important components of the initial input features. Furthermore, covariate shift adaptation of the principal components is implemented to minimize the nonstationary effect of EEG signals. Experimental results show that the DLN is capable of classifying three different levels of valence and arousal with accuracies of 49.52% and 46.03%, respectively. Principal component based covariate shift adaptation enhances the respective classification accuracies by 5.55% and 6.53%. Moreover, the DLN provides better performance compared to SVM and naive Bayes classifiers. PMID:25258728
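
    The preprocessing chain described above (PCA on power-spectral-density features followed by covariate shift adaptation of the principal components) can be outlined roughly as follows. The exponentially weighted running-mean correction used here is only an assumed stand-in for the adaptation step, and the stacked-autoencoder classifier is not reproduced.

      import numpy as np
      from sklearn.decomposition import PCA

      def adapt_components(components, alpha=0.05):
          # Remove slow nonstationary drift from the principal components, sample by
          # sample, by subtracting an exponentially weighted running mean.
          adapted = np.empty_like(components)
          running_mean = components[0].copy()
          for t, z in enumerate(components):
              running_mean = (1 - alpha) * running_mean + alpha * z
              adapted[t] = z - running_mean
          return adapted

      # Example with synthetic, slowly drifting PSD-like features
      rng = np.random.default_rng(0)
      features = rng.standard_normal((500, 160)) + np.linspace(0, 3, 500)[:, None]
      components = PCA(n_components=50).fit_transform(features)
      adapted = adapt_components(components)
      print(components[:100, 0].mean(), components[-100:, 0].mean())   # drift visible
      print(adapted[:100, 0].mean(), adapted[-100:, 0].mean())         # drift reduced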

  14. A new approach for modulation recognition based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Shu; Wang, Hongyuan

    2007-11-01

    A new approach based on the ant colony algorithm for the automatic modulation recognition of communications signals is presented. This approach can discriminate between continuous wave (CW), Amplitude Modulation (AM), Frequency Modulation (FM), Frequency Shift Keying (FSK), Binary Phase Shift Keying (BPSK), and Quaternary Phase Shift Keying (QPSK) modulations. Requirements for a priori knowledge of the signals are minimized by the inclusion of an efficient carrier frequency estimator and low sensitivity to variations in the sampling epochs. Computer simulations indicate good performance on an AWGN channel, even at signal-to-noise ratios as low as 5 dB. This compares favorably with the performance obtained with most algorithms based on pattern recognition techniques.

  15. Automatic target recognition algorithm based on statistical dispersion of infrared multispectral image

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Cao, Le-lin; Wu, Chun-feng; Hou, Qing-yu

    2009-07-01

    A novel automatic target recognition algorithm based on the statistical dispersion of infrared multispectral images (SDOIMI) is proposed. Firstly, the infrared multispectral characteristic matrix of the scenario is constructed based on the infrared multispectral characteristic information (such as radiation intensity and spectral distribution) of targets, background, and decoys. Then the infrared multispectral characteristic matrix of the targets is reconstructed after segmenting the image by the maximum distance method and fusing spatial and spectral information. Finally, an SDOIMI recognition criterion is formulated in terms of the spectral radiation differences of the targets of interest. In the simulation, nine sub-band multispectral images of a real ship target and a shipborne aerosol infrared decoy, modulated by a laser simulating the ship's geometric appearance, are obtained using spectral radiation curves. The digital simulation experiment verifies that the algorithm is effective and feasible.

  16. An improved poly(A) motifs recognition method based on decision level fusion.

    PubMed

    Zhang, Shanxin; Han, Jiuqiang; Liu, Jun; Zheng, Jiguang; Liu, Ruiling

    2015-02-01

    Polyadenylation is the process of addition of a poly(A) tail to mRNA 3' ends. Identification of the motifs controlling polyadenylation plays an essential role in improving genome annotation accuracy and in better understanding the mechanisms governing gene regulation. The bioinformatics methods used for poly(A) motif recognition have demonstrated that information extracted from the sequences surrounding candidate motifs can greatly differentiate true motifs from false ones. However, these methods depend on either domain features or string kernels, and methods combining information from different sources have not yet been reported. Here, we propose an improved poly(A) motif recognition method that combines different sources through decision level fusion. First, two novel prediction methods were developed based on support vector machines (SVM): one uses domain-specific features with principal component analysis (PCA) to eliminate redundancy (PCA-SVM); the other is based on the Oligo string kernel (Oligo-SVM). We then propose a machine-learning method for poly(A) motif prediction that combines four poly(A) motif recognition methods, including two state-of-the-art methods (Random Forest (RF) and HMM-SVM) and the two newly proposed methods (PCA-SVM and Oligo-SVM). A decision level information fusion method was employed to combine the decision values of the different classifiers by applying the DS evidence theory. We evaluated our method on a comprehensive poly(A) dataset that consists of 14,740 samples covering 12 variants of poly(A) motifs and 2750 samples containing none of these motifs. Our method achieved an accuracy of up to 86.13%. Compared with the four individual classifiers, our evidence theory based method reduces the average error rate by about 30%, 27%, 26%, and 16%, respectively. The experimental results suggest that the proposed method is more effective for poly(A) motif recognition. PMID:25594576
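
    The DS-evidence-theory step amounts to combining the classifiers' mass functions with Dempster's rule. A minimal sketch for two classifiers over a two-class frame (true motif vs. false motif) is shown below; the mapping from classifier decision values to masses is assumed here, not taken from the paper.

      def dempster_combine(m1, m2):
          # m1, m2: dicts mapping a hypothesis (frozenset of labels) to its mass.
          combined = {}
          conflict = 0.0
          for a, ma in m1.items():
              for b, mb in m2.items():
                  inter = a & b
                  if inter:
                      combined[inter] = combined.get(inter, 0.0) + ma * mb
                  else:
                      conflict += ma * mb
          return {h: v / (1.0 - conflict) for h, v in combined.items()}

      # Two classifiers, each giving masses over {motif}, {not_motif}, and the frame
      frame = frozenset({"motif", "not_motif"})
      m_svm = {frozenset({"motif"}): 0.6, frozenset({"not_motif"}): 0.3, frame: 0.1}
      m_rf = {frozenset({"motif"}): 0.7, frozenset({"not_motif"}): 0.2, frame: 0.1}
      print(dempster_combine(m_svm, m_rf))               # most mass ends up on {"motif"}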

  17. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.

    PubMed

    Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu

    2016-01-01

    Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the need for a more immersive human-computer interface (HCI) environment. To improve recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels, however, adds to the computational complexity and causes inconvenience to users. ReliefF-based channel selection methods were therefore systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels for classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, a support vector machine (SVM) was used as the classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods, and a comparison with similar strategies based on the F-score is given. Strategies that evaluate a channel as a unit gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting the channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy of 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels related to emotion processing was also implemented. The sensors, selected subject-independently from the frontal and parietal lobes, have been identified as providing more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a contribution to the
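
    ReliefF itself weights each feature by how well it separates nearest neighbors of different classes; channel scores can then be obtained by aggregating the weights of the features belonging to each channel. The following compact sketch (one nearest hit and miss per class, synthetic DEAP-like data, and an assumed grouping of feature columns into channels) is only an illustration of that idea, not the paper's implementation.

      import numpy as np

      def relieff_weights(X, y, n_iter=200, seed=None):
          # Simplified ReliefF: one nearest hit and one nearest miss per other class.
          rng = np.random.default_rng(seed)
          n, d = X.shape
          span = X.max(axis=0) - X.min(axis=0) + 1e-12
          w = np.zeros(d)
          for _ in range(n_iter):
              i = rng.integers(n)
              dists = np.abs(X - X[i]).sum(axis=1)
              dists[i] = np.inf
              same = np.where(y == y[i])[0]
              hit = same[np.argmin(dists[same])]
              w -= np.abs(X[i] - X[hit]) / span / n_iter
              for c in np.unique(y):
                  if c == y[i]:
                      continue
                  other = np.where(y == c)[0]
                  miss = other[np.argmin(dists[other])]
                  prior = len(other) / (n - len(same))
                  w += prior * np.abs(X[i] - X[miss]) / span / n_iter
          return w

      # Example: 32 channels x 5 band-power features each, 4 emotion classes
      rng = np.random.default_rng(0)
      X = rng.standard_normal((400, 32 * 5))
      y = rng.integers(0, 4, size=400)
      channel_scores = relieff_weights(X, y).reshape(32, 5).sum(axis=1)
      print(np.argsort(channel_scores)[::-1][:8])        # indices of the top-8 channels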

  18. An improved poly(A) motifs recognition method based on decision level fusion.

    PubMed

    Zhang, Shanxin; Han, Jiuqiang; Liu, Jun; Zheng, Jiguang; Liu, Ruiling

    2015-02-01

    Polyadenylation is the process of addition of a poly(A) tail to mRNA 3' ends. Identification of the motifs controlling polyadenylation plays an essential role in improving genome annotation accuracy and in better understanding the mechanisms governing gene regulation. The bioinformatics methods used for poly(A) motif recognition have demonstrated that information extracted from the sequences surrounding candidate motifs can greatly differentiate true motifs from false ones. However, these methods depend on either domain features or string kernels, and methods combining information from different sources have not yet been reported. Here, we propose an improved poly(A) motif recognition method that combines different sources through decision level fusion. First, two novel prediction methods were developed based on support vector machines (SVM): one uses domain-specific features with principal component analysis (PCA) to eliminate redundancy (PCA-SVM); the other is based on the Oligo string kernel (Oligo-SVM). We then propose a machine-learning method for poly(A) motif prediction that combines four poly(A) motif recognition methods, including two state-of-the-art methods (Random Forest (RF) and HMM-SVM) and the two newly proposed methods (PCA-SVM and Oligo-SVM). A decision level information fusion method was employed to combine the decision values of the different classifiers by applying the DS evidence theory. We evaluated our method on a comprehensive poly(A) dataset that consists of 14,740 samples covering 12 variants of poly(A) motifs and 2750 samples containing none of these motifs. Our method achieved an accuracy of up to 86.13%. Compared with the four individual classifiers, our evidence theory based method reduces the average error rate by about 30%, 27%, 26%, and 16%, respectively. The experimental results suggest that the proposed method is more effective for poly(A) motif recognition.

  19. Polymer-based separations: Synthesis and application of polymers for ionic and molecular recognition

    SciTech Connect

    Alexandratos, S.D.

    1992-01-01

    Polymer-based separations have utilized resins such as sulfonic, acrylic, and iminodiacetic acid resins and the XAD series. Selective polymeric reagents for reaction with a targeted metal ion were synthesized as polymers with two different types of functional groups, each operating on the ions through a different mechanism. There are 3 classes of DMBPs (dual mechanism bifunctional polymers). Research during this period dealing with metal ion recognition focused on two of these classes (reduction of metal ions to metal; selective complexation).

  20. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.

    PubMed

    Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu

    2016-01-01

    Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the need for a more immersive human-computer interface (HCI) environment. To improve recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels, however, adds to the computational complexity and causes inconvenience to users. ReliefF-based channel selection methods were therefore systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels for classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, a support vector machine (SVM) was used as the classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods, and a comparison with similar strategies based on the F-score is given. Strategies that evaluate a channel as a unit gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting the channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy of 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels related to emotion processing was also implemented. The sensors, selected subject-independently from the frontal and parietal lobes, have been identified as providing more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a contribution to the