Sample records for iris recognition algorithm

  1. An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM

    NASA Astrophysics Data System (ADS)

    Wang, Juan

    2018-03-01

    Iris images are easily degraded by noise and uneven illumination. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with a hybrid feature. 2D Gabor filters and the gray-level co-occurrence matrix (GLCM) are employed to generate a multi-granularity hybrid feature vector: the 2D Gabor filters capture low-to-intermediate-frequency texture information, while the GLCM features capture high-frequency texture information. Finally, an extreme learning machine performs the recognition. Experimental results show that the proposed ELM-based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower EER of 0.12% while maintaining real-time performance, outperforming other mainstream iris recognition algorithms.
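    One half of the hybrid feature above is the GLCM. As a rough illustration only (the abstract does not specify the quantization, offsets, or statistics used), a minimal plain-Python sketch of a GLCM and two classic Haralick-style statistics derived from it:

```python
# Minimal GLCM sketch: count how often gray level i occurs next to
# gray level j at a fixed offset (dy, dx). Assumes the image is already
# quantized to `levels` gray levels; all names are illustrative.

def glcm(image, levels=8, dy=0, dx=1):
    """Co-occurrence counts of gray-level pairs at offset (dy, dx)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[image[y][x]][image[ny][nx]] += 1
    return m

def glcm_features(m):
    """Two classic texture statistics: contrast and energy."""
    total = sum(sum(row) for row in m) or 1
    contrast = sum((i - j) ** 2 * m[i][j] / total
                   for i in range(len(m)) for j in range(len(m)))
    energy = sum((m[i][j] / total) ** 2
                 for i in range(len(m)) for j in range(len(m)))
    return contrast, energy
```

    In practice several offsets (horizontal, vertical, diagonal) would be combined into the feature vector alongside the Gabor responses.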

  2. The fast iris image clarity evaluation based on Tenengrad and ROI selection

    NASA Astrophysics Data System (ADS)

    Gao, Shuqin; Han, Min; Cheng, Xu

    2018-04-01

    In an iris recognition system, the clarity of the iris image is an important factor influencing recognition performance. During recognition, a blurred image may be rejected by the automatic iris recognition system, leading to identification failure, so it is necessary to evaluate iris image definition before recognition. Building on existing definition-evaluation methods, this paper proposes a fast algorithm to evaluate the definition of iris images. First, a region of interest (ROI) is extracted around a reference point determined from the light spots within the pupil; the Tenengrad operator is then used to evaluate the definition of the ROI. Experimental results show that the proposed algorithm accurately distinguishes iris images of different clarity, with low computational complexity and high effectiveness.
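    The Tenengrad operator is a standard focus measure: the sum of squared Sobel gradient magnitudes, optionally restricted to values above a threshold. A pure-Python sketch (a real system would vectorize this and run it only over the extracted ROI):

```python
# Tenengrad sharpness score: sum of squared Sobel gradient magnitudes.
# `image` is a 2-D list of gray values; a sharper image scores higher.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def tenengrad(image, threshold=0):
    h, w = len(image), len(image[0])
    score = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            g2 = gx * gx + gy * gy
            if g2 > threshold:
                score += g2
    return score
```

    A flat (defocused) region contributes nothing, while a sharp edge contributes strongly, which is why the score separates blurred from clear iris images.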

  3. Design method of ARM based embedded iris recognition system

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting

    2008-03-01

    With the advantages of non-invasiveness, uniqueness, stability, and a low false recognition rate, iris recognition has been successfully applied in many fields. Most iris recognition systems to date are PC-based; however, a PC is neither portable nor power-efficient. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of iris image acquisition and the recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm running on it, and finally realized the ARM-based iris imaging and recognition system. Experimental results show that the ARM platform is fast enough to run the iris recognition algorithm and that the data stream flows smoothly between the camera and the ARM chip under the embedded Linux system. This demonstrates an effective way to realize a portable embedded iris recognition system on ARM.

  4. An effective approach for iris recognition using phase-based image matching.

    PubMed

    Miyazawa, Kazuyuki; Ito, Koichi; Aoki, Takafumi; Kobayashi, Koji; Nakajima, Hiroshi

    2008-10-01

    This paper presents an efficient algorithm for iris recognition using phase-based image matching, an image matching technique based on the phase components of the 2D discrete Fourier transforms (DFTs) of the given images. Experimental evaluation using the CASIA iris image databases (versions 1.0 and 2.0) and the Iris Challenge Evaluation (ICE) 2005 database clearly demonstrates that using the phase components of iris images makes it possible to achieve highly accurate iris recognition with a simple matching algorithm. The paper also discusses major implementation issues of the algorithm. To reduce the size of the iris data and to prevent the visibility of iris images, we introduce the idea of a 2D Fourier phase code (FPC) for representing iris information. The 2D FPC is particularly useful for implementing compact iris recognition devices using state-of-the-art digital signal processing (DSP) technology.
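    The core idea can be illustrated with the classic phase-only correlation (POC) function: normalize the cross-spectrum of two images to unit magnitude so only phase survives, then inverse-transform; a sharp peak near 1 indicates a match. A NumPy sketch (actual systems typically band-limit the POC and work on normalized iris strips, details not shown here):

```python
# Phase-only correlation: keep only the phase of the cross-spectrum.
# For identical images the POC surface is a delta with peak value 1.0.

import numpy as np

def phase_only_correlation(f, g):
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12   # discard magnitude, keep phase
    return np.real(np.fft.ifft2(cross))

def match_score(f, g):
    """Peak height of the POC surface; near 1.0 for matching images."""
    return float(phase_only_correlation(f, g).max())
```

    Because only phase is used, the score is insensitive to global brightness and contrast changes, and a translation between the two images merely shifts the peak location.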

  5. Improving iris recognition performance using segmentation, quality enhancement, match score fusion, and indexing.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2008-08-01

    This paper proposes algorithms for iris segmentation, quality enhancement, match score fusion, and indexing to improve both the accuracy and the speed of iris recognition. A curve evolution approach is proposed to effectively segment a nonideal iris image using the modified Mumford-Shah functional. Different enhancement algorithms are concurrently applied on the segmented iris image to produce multiple enhanced versions of the iris image. A support-vector-machine-based learning algorithm selects locally enhanced regions from each globally enhanced image and combines these good-quality regions to create a single high-quality iris image. Two distinct features are extracted from the high-quality iris image. The global textural feature is extracted using the 1-D log polar Gabor transform, and the local topological feature is extracted using Euler numbers. An intelligent fusion algorithm combines the textural and topological matching scores to further improve the iris recognition performance and reduce the false rejection rate, whereas an indexing algorithm enables fast and accurate iris identification. The verification and identification performance of the proposed algorithms is validated and compared with other algorithms using the CASIA Version 3, ICE 2005, and UBIRIS iris databases.

  6. Iris recognition based on key image feature extraction.

    PubMed

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  7. An automatic iris occlusion estimation method based on high-dimensional density estimation.

    PubMed

    Li, Yung-Hui; Savvides, Marios

    2013-04-01

    Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglass frames, and specular reflections. The accuracy of the iris mask is extremely important: the performance of the iris recognition system decreases dramatically when the mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images, but the accuracy of masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian mixture models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions of iris images. We also explored possible features and found that a Gabor filter bank (GFB) provides the most discriminative information for this goal. Finally, we applied simulated annealing (SA) to optimize the GFB parameters for the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of the proposed method for iris occlusion estimation.

  8. Multispectral iris recognition based on group selection and game theory

    NASA Astrophysics Data System (ADS)

    Ahmad, Foysal; Roy, Kaushik

    2017-05-01

    Commercially available iris recognition systems use only a narrow band of the near-infrared spectrum (700-900 nm), while iris images captured across the wider range of 405 nm to 1550 nm offer potential benefits for enhancing the recognition performance of an iris biometric system. The novelty of this research is a group selection algorithm based on coalition game theory, explored to select the best patch subsets. In this algorithm, patches are divided into several groups based on their maximum contribution to the different groups, and Shapley values are used to evaluate the contribution of patches in different groups. Results show that this group selection based iris recognition
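    The Shapley value from coalition game theory averages each player's marginal contribution over all orderings. A minimal exact computation, feasible only for small patch sets (the paper's characteristic function would be some recognition-performance measure of a patch coalition; here `v` is a toy stand-in and all names are illustrative):

```python
# Exact Shapley values by enumerating all player orderings.
# v(coalition) -> real-valued worth of a set of players (patches).

from itertools import permutations

def shapley_values(players, v):
    """Average marginal contribution of each player over all orderings."""
    values = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            values[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: val / len(orderings) for p, val in values.items()}
```

    The values satisfy the efficiency property: they sum to the worth of the grand coalition, which is what makes them a principled way to rank patches by contribution.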

  9. Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition

    NASA Astrophysics Data System (ADS)

    Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.

    2015-02-01

    An efficient parallel architecture for the iris unwrapping process in a real-time iris recognition system using the Bresenham circle algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture is parallelized to increase system throughput and is suitable for processing an input image of 320 × 240 pixels in real time using field-programmable gate array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language (VHDL). The system's predicted processing time is faster than that of the modern iris unwrapping techniques in use today.
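    Unwrapping samples iris pixels along concentric circles, and the midpoint (Bresenham) circle algorithm generates those pixel coordinates using integer arithmetic only, which is what makes it attractive for an FPGA pipeline. A sketch of the point generator (Python for readability; the hardware version would be a fixed-point state machine):

```python
# Midpoint/Bresenham circle: integer coordinates of the circle of
# radius r centred at (cx, cy), built by mirroring the first octant.

def bresenham_circle(cx, cy, r):
    points = set()
    x, y, d = 0, r, 3 - 2 * r
    while x <= y:
        # mirror the computed octant point into all eight octants
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            points.add((cx + px, cy + py))
        if d < 0:
            d += 4 * x + 6
        else:
            d += 4 * (x - y) + 10
            y -= 1
        x += 1
    return points
```

    Iterating the radius from the pupil boundary to the iris boundary and reading pixels along each generated circle yields the unwrapped rectangular iris strip without any trigonometric evaluation.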

  10. An iris recognition algorithm based on DCT and GLCM

    NASA Astrophysics Data System (ADS)

    Feng, G.; Wu, Ye-qing

    2008-04-01

    As the range of human activity expands, personal identity verification becomes increasingly important, and many techniques have been proposed for this practical need. Conventional methods such as passwords and identification cards are not always reliable, and a wide variety of biometrics has been developed to address this challenge. Among biometric characteristics, the iris pattern has gained increasing attention for its stability, reliability, uniqueness, noninvasiveness, and resistance to counterfeiting; these merits give the iris high reliability for personal identification, and iris identification has become a hot research topic in recent years. This paper presents an efficient algorithm for iris recognition using the gray-level co-occurrence matrix (GLCM) and the discrete cosine transform (DCT). To obtain more representative iris features, features are extracted from both the spatial domain and the DCT transform domain: both GLCM and DCT are applied to the iris image to form the feature sequence, and their combination makes the iris feature more distinctive. From the GLCM and DCT, an eigenvector is extracted that reflects both spatial and frequency characteristics. Experimental results show that the algorithm is effective and feasible for iris recognition.
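    The DCT half of such a feature typically keeps only the low-frequency coefficients, which concentrate most of the texture energy. A naive O(n⁴) 2-D DCT-II sketch for illustration (the abstract does not specify block size or coefficient selection; real systems use a fast transform):

```python
# Naive 2-D DCT-II (orthonormal). Low-frequency coefficients sit in
# the top-left corner of the output, so a compact feature can keep
# just that corner. Illustrative only; use a fast DCT in practice.

import math

def dct2(block):
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def dct_feature(block, k=3):
    """Keep the k x k low-frequency corner as the feature vector."""
    c = dct2(block)
    return [c[u][v] for u in range(k) for v in range(k)]
```

    For a constant block only the DC coefficient is nonzero, which is the sanity check that the energy compaction works as described.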

  11. Novel Approaches to Improve Iris Recognition System Performance Based on Local Quality Evaluation and Feature Fusion

    PubMed Central

    2014-01-01

    To build a new iris template, this paper proposes a strategy that fuses different portions of the iris, using a machine learning method to evaluate the local quality of the iris. There are three novelties compared with previous work. First, the normalized segmented iris is divided into multiple tracks, and each track is evaluated individually to analyze its recognition accuracy rate (RAR). Second, six local quality evaluation parameters are adopted to analyze the texture information of each track, and particle swarm optimization (PSO) is employed to learn the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the track weights. Experimental results on subsets of three public and one private iris image databases demonstrate three contributions: (1) a partial iris image cannot fully replace the entire iris image in an iris recognition system; (2) the proposed quality evaluation algorithm is self-adaptive, automatically optimizing its parameters according to the characteristics of the iris image samples; and (3) the feature fusion strategy effectively improves the performance of the iris recognition system. PMID:24693243

  12. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    To build a new iris template, this paper proposes a strategy that fuses different portions of the iris, using a machine learning method to evaluate the local quality of the iris. There are three novelties compared with previous work. First, the normalized segmented iris is divided into multiple tracks, and each track is evaluated individually to analyze its recognition accuracy rate (RAR). Second, six local quality evaluation parameters are adopted to analyze the texture information of each track, and particle swarm optimization (PSO) is employed to learn the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the track weights. Experimental results on subsets of three public and one private iris image databases demonstrate three contributions: (1) a partial iris image cannot fully replace the entire iris image in an iris recognition system; (2) the proposed quality evaluation algorithm is self-adaptive, automatically optimizing its parameters according to the characteristics of the iris image samples; and (3) the feature fusion strategy effectively improves the performance of the iris recognition system.
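    The weight-learning step above uses PSO. A generic, minimal PSO minimizer is sketched below; in the paper the fitness would be a recognition-error measure over candidate weight vectors, whereas here a simple sphere function stands in, and the inertia/acceleration constants are common textbook defaults, not values from the paper:

```python
# Minimal particle swarm optimizer: each particle tracks its personal
# best; the swarm shares a global best; velocities blend inertia with
# attraction toward both bests.

import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    Swapping the sphere objective for a per-track recognition-error function would recover the shape of the weight-learning loop the abstract describes.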

  13. Iris Location Algorithm Based on the CANNY Operator and Gradient Hough Transform

    NASA Astrophysics Data System (ADS)

    Zhong, L. H.; Meng, K.; Wang, Y.; Dai, Z. Q.; Li, S.

    2017-12-01

    In an iris recognition system, the accuracy of localizing the inner and outer edges of the iris directly affects recognition performance, making iris localization an important research problem. Our iris data contain eyelids, eyelashes, light spots, and other noise, and the gray-level transitions in the images are weak, so general iris localization methods fail on them. A method of iris localization based on the Canny operator and a gradient Hough transform is therefore proposed. First, the images are preprocessed; then, using the gradient information of the images, the inner and outer edges of the iris are coarsely located with the Canny operator; finally, a gradient Hough transform refines the localization of the inner and outer edges. Experimental results show that the algorithm localizes the inner and outer iris edges well, has strong anti-interference ability, greatly reduces localization time, and offers high accuracy and stability.
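    A gradient Hough transform for circles votes only along each edge point's gradient direction instead of over the whole parameter space, which is where the speedup comes from. A generic sketch under that idea (the paper's exact voting and refinement scheme is not specified; edge points are assumed given as `(x, y, gx, gy)` tuples from the Canny stage):

```python
# Gradient-guided circle Hough: each edge point votes for candidate
# centres along its gradient ray, for radii in [r_min, r_max].

import math
from collections import Counter

def gradient_hough_centers(edges, r_min, r_max):
    votes = Counter()
    for x, y, gx, gy in edges:
        norm = math.hypot(gx, gy)
        if norm == 0:
            continue
        ux, uy = gx / norm, gy / norm
        for r in range(r_min, r_max + 1):
            for s in (1, -1):          # gradient may point in or out
                cx = round(x + s * r * ux)
                cy = round(y + s * r * uy)
                votes[(cx, cy, r)] += 1
    # strongest (cx, cy, r) bin is the detected circle
    return votes.most_common(1)[0] if votes else None
```

    On a clean circular edge every point's inward vote lands in the same (centre, radius) bin, so the true circle dominates the accumulator even when eyelash and eyelid edges add scattered votes.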

  14. Modeling IrisCode and its variants as convex polyhedral cones and its security implications.

    PubMed

    Kong, Adams Wai-Kin

    2013-03-01

    IrisCode, developed by Daugman in 1993, is the most influential iris recognition algorithm. A thorough understanding of IrisCode is essential, because over 100 million persons have been enrolled by this algorithm and many biometric personal identification and template protection methods have been developed based on it. This paper shows that a template produced by IrisCode or its variants is a convex polyhedral cone in a hyperspace. Its central ray, a rough representation of the original biometric signal, can be computed by a simple algorithm that can often be implemented in one Matlab command line. The central ray is an expected ray, and also an optimal ray, of an objective function on a group of distributions. The algorithm is derived from the geometric properties of a convex polyhedral cone and does not rely on any prior knowledge (e.g., iris images). The experimental results show that biometric templates, including iris and palmprint templates, produced by different recognition methods can be matched through the central rays of their convex polyhedral cones, and that templates protected by a method extended from IrisCode can be broken into. These results indicate that, without a thorough security analysis, convex polyhedral cone templates cannot be assumed secure. Additionally, the simplicity of the algorithm implies that even junior hackers without knowledge of advanced image processing and biometric databases can break into protected templates and reveal relationships among templates produced by different recognition methods.

  15. A novel iris localization algorithm using correlation filtering

    NASA Astrophysics Data System (ADS)

    Pohit, Mausumi; Sharma, Jitu

    2015-06-01

    Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. This paper presents a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.

  16. An Iris Segmentation Algorithm based on Edge Orientation for Off-angle Iris Recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    Iris recognition is known as one of the most accurate and reliable biometrics. However, the accuracy of iris recognition systems depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. In this paper, we present a segmentation algorithm for off-angle iris images that uses edge detection, edge elimination, edge classification, and ellipse fitting. In our approach, we first detect all candidate edges in the iris image using the Canny edge detector; this collection contains edges from the iris and pupil boundaries as well as eyelashes, eyelids, iris texture, etc. Edge orientation is used to eliminate edges that cannot be part of the iris or pupil. We then classify the remaining edge points into two sets, pupil edges and iris edges. Finally, we randomly generate subsets of iris and pupil edge points, fit an ellipse to each subset, select ellipses with similar parameters, and average them to form the resultant ellipses. Results from real experiments show that the proposed method segments off-angle iris images effectively.

  17. Secondary iris recognition method based on local energy-orientation feature

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing

    2015-01-01

    This paper proposes a secondary iris recognition method based on local features. First, an energy-orientation feature (EOF) is extracted from the iris using two-dimensional Gabor filters, and a first recognition stage applies a similarity threshold that splits the iris database into two categories: a correctly recognized class and a class still to be recognized. The former is accepted, while the latter is converted by histogramming into an energy-orientation histogram feature (EOHF), on which a second recognition stage is performed using the chi-square distance. Experiments show that, owing to its higher correct recognition rate, the proposed method ranks among the most efficient and effective of comparable iris recognition algorithms.
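    The second-stage comparison above is a chi-square distance between histograms, which is a one-liner once the histograms exist. A small sketch with illustrative helper names (bin count and normalization are assumptions, not values from the paper):

```python
# Chi-square distance between two histograms: small for similar
# distributions, larger as bins disagree; bins empty in both are skipped.

def chi_square_distance(h1, h2):
    return sum((a - b) ** 2 / (a + b)
               for a, b in zip(h1, h2) if a + b > 0)

def orientation_histogram(orientations, bins=8):
    """Toy EOHF-style histogram of quantized orientation labels."""
    h = [0] * bins
    for o in orientations:
        h[o % bins] += 1
    return h
```

    Matching then reduces to picking the enrolled template whose histogram minimizes this distance.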

  18. Iris double recognition based on modified evolutionary neural network

    NASA Astrophysics Data System (ADS)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

    To address multicategory iris recognition under illumination and noise interference, this paper proposes an iris double-recognition method based on a modified evolutionary neural network. Histogram equalization and a Laplacian-of-Gaussian operator are used to preprocess the iris and suppress illumination and noise interference, and a Haar wavelet converts the iris features into a binary feature encoding. The Hamming distance between the test iris and the template iris is computed and compared with a classification threshold to determine the iris class. If the iris cannot be classified this way, a secondary recognition is performed: the connection weights of a back-propagation (BP) neural network are trained adaptively by a modified evolutionary neural network, composed of particle swarm optimization with a mutation operator and the BP network. Experimental results on different iris databases under different conditions show that, under illumination and noise interference, the algorithm achieves a higher correct recognition rate, an ROC curve closer to the coordinate axes, shorter training and recognition times, and better stability and robustness.
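    The first-stage decision is a Hamming-distance test on binary iris codes. With codes packed into Python integers that test is one XOR plus a popcount; the 0.32 threshold below is a commonly cited operating point for iris codes, not a value from this paper:

```python
# Normalized Hamming distance between two packed binary iris codes,
# and a threshold-based accept/reject decision on it.

def hamming_distance(code_a, code_b, n_bits):
    """Fraction of bits on which the two codes disagree."""
    return bin(code_a ^ code_b).count("1") / n_bits

def same_iris(code_a, code_b, n_bits, threshold=0.32):
    """Accept when the normalized distance falls below the threshold."""
    return hamming_distance(code_a, code_b, n_bits) < threshold
```

    Codes whose distance lands near the threshold are exactly the ambiguous cases the abstract routes to the secondary, neural-network-based recognition stage.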

  19. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors

    PubMed Central

    Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung

    2018-01-01

    Recent advances in computer vision have opened new horizons for deploying biometric recognition algorithms on mobile and handheld devices, and accurate iris recognition is now needed in unconstrained scenarios. Such environments cause the acquired iris image to exhibit occlusion, low resolution, blur, unusual glint, ghost effects, and off-angle views, which prevailing segmentation algorithms cannot cope with. In addition, owing to the unavailability of near-infrared (NIR) light, iris segmentation in visible-light environments is further challenged by visible-light noise. Deep learning with convolutional neural networks (CNNs) has brought considerable breakthroughs in various applications. To address iris segmentation in these challenging visible-light and NIR settings, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by exploiting improved information and gradient flow between the dense blocks. Five datasets covering visible-light and NIR environments were used in the experiments. For visible light, the Noisy Iris Challenge Evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and Mobile Iris Challenge Evaluation (MICHE-I) datasets were used; for NIR, the Chinese Academy of Sciences Institute of Automation (CASIA) v4.0 Interval, CASIA v4.0 Distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed optimal segmentation by the proposed IrisDenseNet and its excellent performance over existing algorithms on all five datasets. PMID:29748495

  20. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors.

    PubMed

    Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung

    2018-05-10

    Recent advances in computer vision have opened new horizons for deploying biometric recognition algorithms on mobile and handheld devices, and accurate iris recognition is now needed in unconstrained scenarios. Such environments cause the acquired iris image to exhibit occlusion, low resolution, blur, unusual glint, ghost effects, and off-angle views, which prevailing segmentation algorithms cannot cope with. In addition, owing to the unavailability of near-infrared (NIR) light, iris segmentation in visible-light environments is further challenged by visible-light noise. Deep learning with convolutional neural networks (CNNs) has brought considerable breakthroughs in various applications. To address iris segmentation in these challenging visible-light and NIR settings, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even in inferior-quality images by exploiting improved information and gradient flow between the dense blocks. Five datasets covering visible-light and NIR environments were used in the experiments. For visible light, the Noisy Iris Challenge Evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and Mobile Iris Challenge Evaluation (MICHE-I) datasets were used; for NIR, the Chinese Academy of Sciences Institute of Automation (CASIA) v4.0 Interval, CASIA v4.0 Distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed optimal segmentation by the proposed IrisDenseNet and its excellent performance over existing algorithms on all five datasets.

  1. Iris Matching Based on Personalized Weight Map.

    PubMed

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu

    2011-09-01

    Iris recognition typically involves three steps, namely, iris image preprocessing, feature extraction, and feature matching. The first two steps have been well studied, but the last step is less addressed. Each human iris has its unique visual pattern, and local image features also vary from region to region, which leads to significant differences in robustness and distinctiveness among the feature codes derived from different iris regions. However, most state-of-the-art iris recognition methods use a uniform matching strategy, in which features extracted from different regions of the same person, or from the same region of different individuals, are considered equally important. This paper proposes a personalized iris matching strategy using a class-specific weight map learned from the training images of the same iris class. The weight map can be updated online during the recognition procedure, with successfully recognized iris images treated as new training data. The weight map reflects the robustness of an encoding algorithm on different iris regions by assigning an appropriate weight to each feature code for iris matching. Such a weight map, trained on sufficient iris templates, is convergent and robust against various types of noise. Extensive and comprehensive experiments demonstrate that the proposed personalized iris matching strategy achieves much better iris recognition performance than uniform strategies, especially for poor-quality iris images.
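    At matching time the personalized strategy amounts to a weighted Hamming distance: each code bit carries a class-specific weight. A sketch with codes and weights as lists; the online update rule below is a toy illustration of the idea (reinforce bits that agree with the template), not the paper's actual learning procedure:

```python
# Weighted Hamming distance: disagreeing bits are penalized in
# proportion to their learned per-bit weights.

def weighted_hamming(code_a, code_b, weights):
    """Weighted fraction of disagreeing bits."""
    total = sum(weights)
    disagree = sum(w for a, b, w in zip(code_a, code_b, weights)
                   if a != b)
    return disagree / total

def update_weights(weights, code_template, code_new, lr=0.1):
    """Toy online update: bits that match the template gain weight."""
    return [w + lr * (1 if a == b else -1) * w
            for w, a, b in zip(weights, code_template, code_new)]
```

    Bits that prove unstable across a person's images gradually lose weight, so their disagreements stop dominating the match score, which is the intuition behind the class-specific weight map.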

  2. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies.

    PubMed

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized using rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.

  3. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

    PubMed Central

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized using rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform. PMID:26401431

  4. Toward More Accurate Iris Recognition Using Cross-Spectral Matching.

    PubMed

    Nalla, Pattabhi Ramaiah; Kumar, Ajay

    2017-01-01

    Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions of people. However, with the availability of a variety of iris sensors deployed for imaging under different illuminations and environments, significant performance degradation is expected when matching iris images acquired in two different domains (whether sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random field model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest-neighbor classification, uses a real-valued feature representation capable of learning domain knowledge. Our approach, which estimates corresponding visible iris patterns from the synthesis of iris patches in near-infrared iris images, achieves superior results for cross-spectral iris recognition. A new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondence is also proposed and evaluated. Experimental results on three publicly available databases (the PolyU cross-spectral iris image database, IIITD CLI, and the UND database) show superior performance for both cross-sensor and cross-spectral iris matching.

  5. Towards online iris and periocular recognition under relaxed imaging constraints.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2013-10-01

    Online iris recognition using distantly acquired images in a less constrained imaging environment requires an efficient iris segmentation approach and a recognition strategy that can exploit the multiple features available for potential identification. This paper presents an effective solution to this problem. The developed iris segmentation approach exploits a random walker algorithm to efficiently estimate coarsely segmented iris images. These coarsely segmented iris images are postprocessed using a sequence of operations that effectively improve the segmentation accuracy. The robustness of the proposed iris segmentation approach is ascertained through comparison with other state-of-the-art algorithms on the publicly available UBIRIS.v2, FRGC, and CASIA.v4-distance databases. Our experimental results achieve improvements of 9.5%, 4.3%, and 25.7% in average segmentation accuracy, respectively, for the UBIRIS.v2, FRGC, and CASIA.v4-distance databases, as compared with most competing approaches. We also exploit the simultaneously extracted periocular features to achieve significant performance improvement. The joint segmentation and combination strategy suggests promising results, achieving average improvements of 132.3%, 7.45%, and 17.5% in recognition performance, respectively, on the UBIRIS.v2, FRGC, and CASIA.v4-distance databases, as compared with related competing approaches.

  6. Iris Recognition: The Consequences of Image Compression

    NASA Astrophysics Data System (ADS)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  7. Efficient iris recognition by characterizing key local variations.

    PubMed

    Ma, Li; Tan, Tieniu; Wang, Yunhong; Zhang, Dexin

    2004-06-01

    Unlike other biometrics such as fingerprints and face, the distinctiveness of the iris comes from its randomly distributed features. This leads to its high reliability for personal identification and, at the same time, to the difficulty of effectively representing such details in an image. This paper describes an efficient algorithm for iris recognition that characterizes key local variations. The basic idea is that local sharp variation points, denoting the appearance or disappearance of an important image structure, are utilized to represent the characteristics of the iris. The whole procedure of feature extraction includes two steps: 1) a set of one-dimensional intensity signals is constructed to effectively characterize the most important information of the original two-dimensional image; 2) using a particular class of wavelets, a position sequence of local sharp variation points in such signals is recorded as features. We also present a fast matching scheme based on the exclusive OR operation to compute the similarity between a pair of position sequences. Experimental results on 2255 iris images show that the performance of the proposed method is encouraging and comparable to the best iris recognition algorithms in the current literature.
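The fast matching scheme described above, comparing position sequences of sharp variation points with an exclusive OR, can be sketched as follows. Encoding the positions as a binary vector of fixed length is an illustrative simplification, not the authors' exact feature format.

```python
import numpy as np

def positions_to_bits(positions, length):
    """Encode a sequence of sharp-variation positions as a binary vector."""
    bits = np.zeros(length, dtype=np.uint8)
    bits[np.asarray(positions, dtype=int)] = 1
    return bits

def xor_dissimilarity(bits_a, bits_b):
    """Normalized Hamming distance via exclusive OR; 0 means identical."""
    return np.count_nonzero(np.bitwise_xor(bits_a, bits_b)) / bits_a.size

a = positions_to_bits([3, 10, 25], length=32)
b = positions_to_bits([3, 11, 25], length=32)
assert xor_dissimilarity(a, a) == 0.0       # identical sequences match perfectly
assert xor_dissimilarity(a, b) == 2 / 32    # two differing positions
```

Because XOR and the population count are single machine instructions on packed words, this style of matching is extremely fast in practice.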

  8. Improved Iris Recognition through Fusion of Hamming Distance and Fragile Bit Distance.

    PubMed

    Hollingsworth, Karen P; Bowyer, Kevin W; Flynn, Patrick J

    2011-12-01

    The most common iris biometric algorithm represents the texture of an iris using a binary iris code. Not all bits in an iris code are equally consistent. A bit is deemed fragile if its value changes across iris codes created from different images of the same iris. Previous research has shown that iris recognition performance can be improved by masking these fragile bits. Rather than ignoring fragile bits completely, we consider what beneficial information can be obtained from the fragile bits. We find that the locations of fragile bits tend to be consistent across different iris codes of the same eye. We present a metric, called the fragile bit distance, which quantitatively measures the coincidence of the fragile bit patterns in two iris codes. We find that score fusion of fragile bit distance and Hamming distance works better for recognition than Hamming distance alone. To our knowledge, this is the first and only work to use the coincidence of fragile bit locations to improve the accuracy of matches.
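A minimal sketch of the fusion idea above: compute the fractional Hamming distance over valid bits, measure how well the two fragile-bit location maps coincide, and fuse the two scores. The weighted-sum fusion rule and the weight `alpha` are illustrative assumptions, not the paper's exact fusion method.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask):
    """Fractional Hamming distance over mutually valid (unmasked) bits."""
    valid = mask.astype(bool)
    return np.count_nonzero(code_a[valid] != code_b[valid]) / np.count_nonzero(valid)

def fragile_bit_distance(fragile_a, fragile_b):
    """Fraction of positions where the two fragile-bit maps disagree;
    0 means the fragile-bit patterns coincide exactly."""
    return np.count_nonzero(fragile_a != fragile_b) / fragile_a.size

def fused_score(hd, fbd, alpha=0.6):
    """Hypothetical weighted-sum score fusion; lower means a better match."""
    return alpha * hd + (1 - alpha) * fbd
```

For a genuine pair, the fragile-bit maps tend to coincide (low fragile bit distance), pulling the fused score below what the Hamming distance alone would suggest.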

  9. Comparing an FPGA to a Cell for an Image Processing Application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.

    2010-12-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to discover the parallel nature of modern image processing algorithms. On the other hand, PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, Iris Recognition Systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Barstow, Del R; Karakaya, Mahmut

    Iris recognition has been proven to be an accurate and reliable biometric. However, the recognition of non-ideal iris images such as off-angle images is still an unsolved problem. We propose a new biometric targeted eye model and a method to reconstruct the off-axis eye to its frontal view allowing for recognition using existing methods and algorithms. This allows for existing enterprise level algorithms and approaches to be largely unmodified by using our work as a pre-processor to improve performance. In addition, we describe the 'Limbus effect' and its importance for an accurate segmentation of off-axis irides. Our method uses an anatomically accurate human eye model and ray-tracing techniques to compute a transformation function, which reconstructs the iris to its frontal, non-refracted state. Then, the same eye model is used to render a frontal view of the reconstructed iris. The proposed method is fully described and results from synthetic data are shown to establish an upper limit on performance improvement and establish the importance of the proposed approach over traditional linear elliptical unwrapping methods. Our results with synthetic data demonstrate the ability to perform an accurate iris recognition with an image taken as much as 70 degrees off-axis.

  11. Edge detection techniques for iris recognition system

    NASA Astrophysics Data System (ADS)

    Tania, U. T.; Motakabber, S. M. A.; Ibrahimy, M. I.

    2013-12-01

    Nowadays, security and authentication are major parts of our daily life. The iris is one of the most reliable parts of the human body that can be used for identification and authentication purposes. To develop an iris authentication algorithm for personal identification, this paper examines two edge detection techniques for an iris recognition system. Between the Sobel and Canny edge detection techniques, the experimental results show that the Canny technique has the better ability to detect points in a digital image where the gray level changes, even at a slow rate.
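The Sobel operator compared above estimates the gray-level gradient with two 3×3 kernels; Canny builds on such gradients with Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, which is why it responds even to slow gray-level changes. A minimal Sobel sketch (the tiny correlation loop is for illustration only):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def correlate3x3(img, kernel):
    """Tiny 'valid' 2-D correlation for 3x3 kernels (sign flip vs. true
    convolution does not affect the gradient magnitude)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_magnitude(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    gx = correlate3x3(img, SOBEL_X)
    gy = correlate3x3(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

On a vertical step edge the response is concentrated at the columns spanning the transition and zero in the flat regions.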

  12. Multiple local feature representations and their fusion based on an SVR model for iris recognition using optimized Gabor filters

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Dong, Hongxing

    2014-12-01

    Gabor descriptors have been widely used in iris texture representations. However, fixed basic Gabor functions cannot match the changing nature of diverse iris datasets. Furthermore, a single form of iris feature cannot overcome difficulties in iris recognition, such as illumination variations, environmental conditions, and device variations. This paper provides multiple local feature representations and their fusion scheme based on a support vector regression (SVR) model for iris recognition using optimized Gabor filters. In our iris system, a particle swarm optimization (PSO)- and a Boolean particle swarm optimization (BPSO)-based algorithm is proposed to provide suitable Gabor filters for each involved test dataset without predefinition or manual modulation. Several comparative experiments on JLUBR-IRIS, CASIA-I, and CASIA-V4-Interval iris datasets are conducted, and the results show that our work can generate improved local Gabor features by using optimized Gabor filters for each dataset. In addition, our SVR fusion strategy may make full use of their discriminative ability to improve accuracy and reliability. Other comparative experiments show that our approach may outperform other popular iris systems.
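A 2D Gabor filter of the kind being tuned above is a Gaussian envelope modulating a sinusoidal carrier. The sketch below builds the real part of one kernel; the parameter values are illustrative, whereas the paper selects them per dataset via the PSO/BPSO search.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=1.0):
    """Real part of a 2-D Gabor filter. `theta` is the orientation,
    `wavelength` the carrier period, `sigma` the envelope width, and
    `gamma` the envelope aspect ratio (all illustrative values here)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor_kernel(15, wavelength=6.0, theta=0.0, sigma=3.0)
```

Convolving a normalized iris image with a bank of such kernels at several orientations and wavelengths yields the local texture responses that are then encoded as features.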

  13. On techniques for angle compensation in nonideal iris recognition.

    PubMed

    Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A

    2007-10-01

    The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.

  14. Surface ablation with iris recognition and dynamic rotational eye tracking-based tissue saving treatment with the Technolas 217z excimer laser.

    PubMed

    Prakash, Gaurav; Agarwal, Amar; Kumar, Dhivya Ashok; Jacob, Soosan; Agarwal, Athiya; Maity, Amrita

    2011-03-01

    To evaluate the visual and refractive outcomes and expected benefits of Tissue Saving Treatment algorithm-guided surface ablation with iris recognition and dynamic rotational eye tracking. This prospective, interventional case series comprised 122 eyes (70 patients). Pre- and postoperative assessment included uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), refraction, and higher order aberrations. All patients underwent Tissue Saving Treatment algorithm-guided surface ablation with iris recognition and dynamic rotational eye tracking using the Technolas 217z 100-Hz excimer platform (Technolas Perfect Vision GmbH). Follow-up was performed up to 6 months postoperatively. Theoretical benefit analysis was performed to evaluate the algorithm's outcomes compared to others. Preoperative spherocylindrical power was sphere -3.62 ± 1.60 diopters (D) (range: 0 to -6.75 D), cylinder -1.15 ± 1.00 D (range: 0 to -3.50 D), and spherical equivalent -4.19 ± 1.60 D (range: -7.75 to -2.00 D). At 6 months, 91% (111/122) of eyes were within ± 0.50 D of attempted correction. Postoperative UDVA was comparable to preoperative CDVA at 1 month (P=.47) and progressively improved at 6 months (P=.004). Two eyes lost one line of CDVA at 6 months. Theoretical benefit analysis revealed that of 101 eyes with astigmatism, 29 would have had cyclotorsion-induced astigmatism of ≥ 10% if iris recognition and dynamic rotational eye tracking were not used. Furthermore, the mean percentage decrease in maximum depth of ablation by using the Tissue Saving Treatment was 11.8 ± 2.9% over Aspheric, 17.8 ± 6.2% over Personalized, and 18.2 ± 2.8% over Planoscan algorithms. Tissue saving surface ablation with iris recognition and dynamic rotational eye tracking was safe and effective in this series of eyes. Copyright 2011, SLACK Incorporated.

  15. Iris recognition using image moments and k-means algorithm.

    PubMed

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk-shaped area of the iris is transformed into a rectangular form. Moments are then extracted from the grayscale image, yielding a feature vector of scale-, rotation-, and translation-invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assigned to the cluster whose centroid is nearest to its feature vector in terms of Euclidean distance. The described model exhibits an accuracy of 98.5%.
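The clustering and nearest-centroid assignment described above can be sketched with a plain k-means. The 2-D feature vectors below stand in for the paper's scale-, rotation-, and translation-invariant image moments, and the first-k initialization is a simplification chosen for determinism.

```python
import numpy as np

def kmeans(features, k, iters=50):
    """Plain k-means on row-vector features, initialized from the
    first k samples for determinism (an illustrative choice)."""
    centroids = features[:k].astype(float).copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # Assign each feature vector to the nearest centroid (Euclidean).
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned vectors.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

def classify(vec, centroids):
    """Assign an unseen feature vector to the nearest-centroid cluster."""
    return int(np.linalg.norm(centroids - np.asarray(vec, dtype=float),
                              axis=1).argmin())
```

At enrollment the clusters are built from labeled iris feature vectors; at query time `classify` maps an unseen vector to the nearest cluster.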

  16. Iris Recognition Using Image Moments and k-Means Algorithm

    PubMed Central

    Khan, Yaser Daanial; Khan, Sher Afzal; Ahmad, Farooq; Islam, Saeed

    2014-01-01

    This paper presents a biometric technique for identification of a person using the iris image. The iris is first segmented from the acquired image of an eye using an edge detection algorithm. The disk-shaped area of the iris is transformed into a rectangular form. Moments are then extracted from the grayscale image, yielding a feature vector of scale-, rotation-, and translation-invariant moments. Images are clustered using the k-means algorithm and centroids for each cluster are computed. An arbitrary image is assigned to the cluster whose centroid is nearest to its feature vector in terms of Euclidean distance. The described model exhibits an accuracy of 98.5%. PMID:24977221

  17. A novel iris patterns matching algorithm of weighted polar frequency correlation

    NASA Astrophysics Data System (ADS)

    Zhao, Weijie; Jiang, Linhua

    2014-11-01

    Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method, Weighted Polar Frequency Correlation (WPFC), to match and evaluate two iris images; in fact, it can also be used to evaluate the similarity of any two images. WPFC is completely different from conventional matching methods. For instance, the classical John Daugman method of iris recognition uses 2D Gabor wavelets to extract features of the iris image into a compact bit stream, and then matches two bit streams with the Hamming distance. Our new method is based on correlation in the polar coordinate system in the frequency domain with regulated weights. It is motivated by the observation that the iris pattern containing far more information for recognition is the fine structure at high frequency, rather than the gross shape of the iris image. Therefore, we transform iris images into the frequency domain and assign different weights to frequencies. We then calculate the correlation of the two iris images in the frequency domain, sum the discrete correlation values with regulated weights, and compare the result with a preset threshold to tell whether the two iris images were captured from the same person. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable, and it provides a new prospect for iris recognition systems.
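The core WPFC idea, correlating two (already polar-unwrapped) images in the frequency domain while up-weighting the informative high frequencies, can be sketched as below. The radial high-pass weight map and the normalization are illustrative choices, not the paper's exact weighting scheme.

```python
import numpy as np

def highpass_weights(shape):
    """Radial weight map that grows with spatial frequency (DC weight = 0)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.hypot(fy, fx)

def wpfc_score(img_a, img_b, weights):
    """Weighted frequency-domain correlation of two images; equals 1.0 for
    identical images and drops toward 0 as they diverge."""
    fa = np.fft.fft2(img_a - img_a.mean())
    fb = np.fft.fft2(img_b - img_b.mean())
    # Weighted cross-power spectrum, brought back to the spatial domain so
    # the peak tolerates small translations (rotations, after unwrapping).
    corr = np.fft.ifft2(weights * fa * np.conj(fb)).real
    norm = np.sqrt(np.sum(weights * np.abs(fa) ** 2)
                   * np.sum(weights * np.abs(fb) ** 2))
    return img_a.size * corr.max() / norm if norm > 0 else 0.0
```

The score would then be compared against a preset threshold to decide whether the two images come from the same person.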

  18. Off-Angle Iris Correction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Thompson, Joseph T; Karakaya, Mahmut

    In many real world iris recognition systems obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal which makes them compatible with existing iris matchers. The methods include an affine correction, a retraced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal but it does not account for refractive distortions of the cornea. The other methods account for refraction. The retraced model simulates the optical properties of the cornea. The other two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming Distance scores between off-angle and frontal images. In this paper we hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and therefore the two data-driven approaches should yield better performance. Results are presented using the commercial VeriEye matcher that show that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.
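The affine correction listed first is essentially inverse foreshortening: an iris viewed at gaze angle θ appears compressed by cos θ along one axis, so stretching by 1/cos θ approximates the frontal view. A minimal sketch, using nearest-neighbor resampling and ignoring corneal refraction (which, as the chapter notes, the other three methods additionally account for):

```python
import numpy as np

def affine_frontal_correction(iris_img, gaze_deg):
    """Stretch the foreshortened (here: horizontal) axis by 1/cos(gaze) to
    approximate a frontal view of an off-angle iris image."""
    h, w = iris_img.shape
    scale = 1.0 / np.cos(np.radians(gaze_deg))
    new_w = int(round(w * scale))
    # Nearest-neighbor resampling: each output column samples the source
    # column it maps back to under the inverse stretch.
    src_cols = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    return iris_img[:, src_cols]
```

A frontal image (gaze of 0 degrees) passes through unchanged, while a 60-degree gaze doubles the width of the foreshortened axis.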

  19. Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images

    PubMed Central

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-01-01

    In current iris recognition systems, the noise-removing step is only used to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step, whereas depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removing approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  20. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removing step is only used to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step, whereas depending on the filter structure used in feature extraction, the noisy parts may influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removing approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image to decrease the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that using the shape-adaptive Gabor-wavelet technique improves the recognition rate.

  1. Frontal view reconstruction for iris recognition

    DOEpatents

    Santos-Villalobos, Hector J; Bolme, David S; Boehnen, Chris Bensing

    2015-02-17

    Iris recognition can be accomplished for a wide variety of eye images by correcting input images with an off-angle gaze. A variety of techniques can be employed, including limbus modeling, corneal refraction modeling, aspherical eye modeling, ray tracing, optical flows, and genetic algorithms. Precomputed transforms can enhance performance for use in commercial applications. With application of the technologies, images with significantly unfavorable gaze angles can be successfully recognized.

  2. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    PubMed Central

    Park, Keunyeol; Song, Minkyu

    2018-01-01

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage, and the maximum frame rate is 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273
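A guess at the spirit of the XOR edge-detection step: on a single-bit image, a pixel lies on an edge exactly where it differs from a neighbor, which an XOR gate detects directly. The particular neighborhood below (right and lower neighbors) is an assumption for illustration, not the sensor's actual circuit.

```python
import numpy as np

def xor_edge_map(binary_img):
    """Edge map of a single-bit image: mark a pixel where it differs (XOR)
    from its right or lower neighbor."""
    right = np.bitwise_xor(binary_img[:, :-1], binary_img[:, 1:])
    down = np.bitwise_xor(binary_img[:-1, :], binary_img[1:, :])
    edges = np.zeros_like(binary_img)
    edges[:, :-1] |= right
    edges[:-1, :] |= down
    return edges
```

On a binarized pupil/iris image this yields boundary contours with no multi-bit arithmetic at all, which is what makes the approach attractive in-sensor.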

  3. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    PubMed

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

    This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage, and the maximum frame rate is 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.

  4. Unified framework for automated iris segmentation using distantly acquired face images.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2012-09-01

    Remote human identification using iris biometrics has many civilian and surveillance applications, and its success requires the development of a robust segmentation algorithm to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment iris images acquired under near-infrared or visible illumination. The proposed approach exploits multiple higher-order local pixel dependencies to robustly classify the eye-region pixels into iris or non-iris regions. Face and eye detection modules have been incorporated into the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We develop a robust postprocessing algorithm to effectively mitigate the noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvement in average segmentation errors over previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on the UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.

  5. Toward accurate and fast iris segmentation for iris biometrics.

    PubMed

    He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao

    2009-09-01

    Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.
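The pulling-and-pushing idea above can be sketched as follows: each detected edge point exerts a Hooke-like restoring force on the candidate circle center, proportional to how far its radius deviates from the current mean radius, and the center moves iteratively until the points are equidistant. This is a simplified reading of the elastic model, not the paper's full formulation, and the step size is an illustrative choice.

```python
import numpy as np

def pull_and_push(edge_points, center, iters=200, step=0.5):
    """Iteratively refine a circle center from boundary edge points.
    Points farther than the mean radius pull the center toward them;
    nearer points push it away (Hooke-like restoring forces)."""
    c = np.asarray(center, dtype=float).copy()
    pts = np.asarray(edge_points, dtype=float)
    for _ in range(iters):
        vecs = pts - c
        radii = np.linalg.norm(vecs, axis=1)
        err = radii - radii.mean()
        forces = err[:, None] * (vecs / radii[:, None])
        c += step * forces.mean(axis=0)
    radii = np.linalg.norm(pts - c, axis=1)
    return c, radii.mean()
```

Starting from the rough center supplied by the Adaboost-cascade detector, the iteration settles on the center and radius that make the edge points most nearly equidistant.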

  6. Accurate Iris Recognition at a Distance Using Stabilized Iris Encoding and Zernike Moments Phase Features.

    PubMed

    Tan, Chun-Wei; Kumar, Ajay

    2014-07-10

    Accurate iris recognition from distantly acquired face or eye images requires effective strategies that can account for significant variations in segmented iris image quality. Such variations are highly correlated with the consistency of the encoded iris features, and knowledge of such fragile bits can be exploited to improve matching accuracy. A non-linear approach is proposed to simultaneously account for both the local consistency of the iris bits and the overall quality of the weight map. Our approach therefore more effectively penalizes the fragile bits while simultaneously rewarding the more consistent bits. In order to achieve a more stable characterization of local iris features, a Zernike moment-based phase encoding of iris features is proposed. These Zernike moment-based phase features are computed from partially overlapping regions to more effectively accommodate local pixel-region variations in the normalized iris images. A joint strategy is adopted to simultaneously extract and combine both the global and localized iris features. The superiority of the proposed iris matching strategy is ascertained through comparison with several state-of-the-art iris matching algorithms on three publicly available databases: UBIRIS.v2, FRGC, and CASIA.v4-distance. Our experimental results suggest that the proposed strategy achieves significant improvement in iris matching accuracy over competing approaches in the literature, i.e., average improvements of 54.3%, 32.7%, and 42.6% in equal error rate, respectively, for UBIRIS.v2, FRGC, and CASIA.v4-distance.

  7. Predictive factor analysis for successful performance of iris recognition-assisted dynamic rotational eye tracking during laser in situ keratomileusis.

    PubMed

    Prakash, Gaurav; Ashok Kumar, Dhivya; Agarwal, Amar; Jacob, Soosan; Sarvanan, Yoga; Agarwal, Athiya

    2010-02-01

To analyze the predictive factors associated with success of iris recognition and dynamic rotational eye tracking on a laser in situ keratomileusis (LASIK) platform with active assessment and correction of intraoperative cyclotorsion. Interventional case series. Two hundred seventy-five eyes of 142 consecutive candidates underwent LASIK with attempted iris recognition and dynamic rotational tracking on the Technolas 217z100 platform (Technolas Perfect Vision, St Louis, Missouri, USA) at a tertiary care ophthalmic hospital. The main outcome measures were age, gender, flap creation method (femtosecond, microkeratome, epi-LASIK), success of static rotational tracking, ablation algorithm, pulses, and depth; preablation and intraablation rotational activity were analyzed and evaluated using regression models. Preablation static iris recognition was successful in 247 eyes, with no difference among flap creation methods (P = .6). Age (partial correlation, -0.16; P = .014), number of pulses (partial correlation, 0.39; P = 1.6 x 10(-8)), and gender (P = .02) were significant predictive factors for the amount of intraoperative cyclodeviation. Tracking difficulties leading to linking the ablation with a newly acquired intraoperative iris image were more frequent with femtosecond-assisted flaps (P = 2.8 x 10(-7)) and with greater intraoperative cyclotorsion (P = .02). However, the number of cases with nonresolvable failure of intraoperative rotational tracking was similar across the 3 flap creation methods (P = .22). Intraoperative cyclotorsional activity depends on age, gender, and duration of ablation (pulses delivered). Femtosecond flaps do not appear to be at a disadvantage relative to microkeratome flaps as far as iris recognition and success of intraoperative dynamic rotational tracking are concerned. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  8. Improved iris localization by using wide and narrow field of view cameras for iris recognition

    NASA Astrophysics Data System (ADS)

    Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung

    2013-10-01

    Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between a user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which can inevitably decrease both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new as compared to previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data of the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple matrices of the transformation according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
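The second contribution above, estimating the Z distance from the iris size in the WFOV image, follows directly from the pinhole camera model. A minimal sketch, where the focal length and the 11.7 mm average adult iris diameter are assumed values, not figures from the paper:

```python
def estimate_z_mm(iris_diameter_px, focal_length_px, iris_diameter_mm=11.7):
    """Pinhole-camera estimate of the eye-to-camera distance Z from the
    iris diameter measured in the WFOV image. The anthropometric constant
    (an assumed ~11.7 mm adult iris diameter) anchors the absolute scale:
    Z = f * D_real / d_image."""
    return focal_length_px * iris_diameter_mm / iris_diameter_px

# With an assumed 2000 px focal length, a 60 px iris sits about 390 mm away:
z = estimate_z_mm(60, 2000)
```

The estimated Z can then index into the distance-specific transformation matrices mentioned in the abstract.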

  9. Iris biometric system design using multispectral imaging

    NASA Astrophysics Data System (ADS)

    Widhianto, Benedictus Yohanes Bagus Y. B.; Nasution, Aulia M. T.

    2016-11-01

An identity recognition system is a vital component that cannot be separated from daily life; iris biometrics is one of the biometric modalities with the best accuracy, reaching 99%. Iris biometric systems usually use infrared illumination to reduce the discomfort caused when the eye is exposed to direct light, while the eumelanin that forms the iris emits its strongest fluorescent radiation under visible light. This research detects the iris at wavelengths of 850 nm, 560 nm, and 590 nm, where the detection algorithm uses the Daugman algorithm with Gabor wavelet feature extraction and Hamming-distance matching. The results are analyzed to identify how large the differences are, to improve the accuracy of the multispectral biometric system, and to serve as a detector of iris authenticity. The analysis of the 850 nm, 560 nm, and 590 nm wavelengths yielded accuracies of 99.35%, 97.5%, and 64.5%, with matching scores of 0.26, 0.23, and 0.37, respectively.
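The Daugman pipeline named above quantises the phase of each Gabor response into two bits and compares codes by fractional Hamming distance. A toy sketch with hypothetical phase values (no actual Gabor filtering):

```python
import math

def phase_bits(phase):
    """Daugman-style 2-bit phase quantisation: one bit for the sign of the
    real part of the filter response, one for the imaginary part."""
    return (1 if math.cos(phase) >= 0 else 0,
            1 if math.sin(phase) >= 0 else 0)

def hamming(code_a, code_b):
    """Fractional Hamming distance between two equal-length bit codes."""
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

# Hypothetical Gabor response phases at four iris sample points:
phases = [0.3, 2.0, -1.2, 3.0]
code = [bit for p in phases for bit in phase_bits(p)]
```

Matching scores near 0 indicate the same iris; scores around 0.5 are what two independent random codes would produce.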

  10. Integrating image quality in 2nu-SVM biometric match score fusion.

    PubMed

    Vatsa, Mayank; Singh, Richa; Noore, Afzel

    2007-10-01

    This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.

  11. Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images

    NASA Astrophysics Data System (ADS)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung

    2010-06-01

    Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
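The two ingredients above, a difference-of-Gaussians band-pass and a fuzzy membership that discounts quantisation-fragile responses, can be sketched in one dimension. The sigmas and the saturation threshold are illustrative choices, not the paper's tuned parameters:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalised 1-D Gaussian kernel of the given radius."""
    k = [math.exp(-x * x / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    total = sum(k)
    return [v / total for v in k]

def dog_filter(signal, sigma1=1.0, sigma2=2.0, radius=4):
    """1-D difference-of-Gaussians: both kernels sum to 1, so their
    difference cancels slow illumination drift and keeps band-pass texture."""
    g1 = gaussian_kernel(sigma1, radius)
    g2 = gaussian_kernel(sigma2, radius)
    return [sum(s * (a - b) for s, a, b in
                zip(signal[i - radius:i + radius + 1], g1, g2))
            for i in range(radius, len(signal) - radius)]

def fuzzy_confidence(response, t=0.5):
    """Fuzzy membership for binarisation: responses near zero (where noise
    flips the quantised bit) get low confidence, saturating at |r| = t."""
    return min(abs(response) / t, 1.0)

flat = dog_filter([5.0] * 20)   # constant illumination -> ~zero response
```

A constant signal produces a near-zero DOG response, which is exactly the regime the fuzzy membership marks as unreliable before the code bits are formed.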

  12. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    PubMed Central

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113

  13. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    PubMed

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.
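The final classification step of the iPAD pipeline, an SVM over combined features, reduces at decision time to the sign of a linear score. A sketch with made-up weights and feature values (not a trained model, and the feature names are hypothetical):

```python
def svm_decision(features, weights, bias):
    """Decision side of a trained linear SVM, sign(w . x + b); in the iPAD
    pipeline x would be the concatenated deep-learning-based and
    handcrafted feature vector."""
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return "real" if score >= 0 else "attack"

deep_features = [0.8, 0.1]   # hypothetical CNN descriptors
handcrafted = [0.3]          # hypothetical texture measure
x = deep_features + handcrafted
label = svm_decision(x, weights=[1.0, -0.5, 0.6], bias=-0.4)
```

In practice the weights and bias come from training the SVM on labelled real and presentation-attack images; only the concatenate-then-score structure is shown here.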

  14. Use of iris recognition camera technology for the quantification of corneal opacification in mucopolysaccharidoses.

    PubMed

    Aslam, Tariq Mehmood; Shakir, Savana; Wong, James; Au, Leon; Ashworth, Jane

    2012-12-01

    Mucopolysaccharidoses (MPS) can cause corneal opacification that is currently difficult to objectively quantify. With newer treatments for MPS comes an increased need for a more objective, valid and reliable index of disease severity for clinical and research use. Clinical evaluation by slit lamp is very subjective and techniques based on colour photography are difficult to standardise. In this article the authors present evidence for the utility of dedicated image analysis algorithms applied to images obtained by a highly sophisticated iris recognition camera that is small, manoeuvrable and adapted to achieve rapid, reliable and standardised objective imaging in a wide variety of patients while minimising artefactual interference in image quality.

  15. [Comparative clinical study of wavefront-guided laser in situ keratomileusis with versus without iris recognition for myopia or myopic astigmatism].

    PubMed

    Wang, Wei-qun; Zhang, Jin-song; Zhao, Xiao-jin

    2011-10-01

To explore the postoperative visual acuity results of wavefront-guided LASIK with iris recognition for myopia or myopic astigmatism, and the changes in higher-order aberrations and contrast sensitivity function (CSF). In a series of prospective case studies, 158 eyes (85 cases) with myopia or myopic astigmatism were divided into two groups: one group underwent wavefront-guided LASIK with iris recognition (iris recognition group); the other underwent wavefront-guided LASIK without iris recognition, using a limbal marking point (non-iris recognition group). The postoperative visual acuity, residual refraction, root mean square (RMS) of higher-order aberrations, and CSF of the two groups were comparatively analyzed. There was no statistically significant difference between the two groups in average uncorrected visual acuity (t = 0.039, 0.058, 0.898; P = 0.844, 0.810, 0.343), best corrected visual acuity (t = 0.320, 0.440, 1.515; P = 0.572, 0.507, 0.218), or residual refraction [spherical equivalent (t = 0.027, 0.215, 0.238; P = 0.869, 0.643, 0.626), sphere (t = 0.145, 0.117, 0.038; P = 0.704, 0.732, 0.845), and cylinder (t = 1.676, 1.936, 0.334; P = 0.195, 0.164, 0.563)] at 10 days, 1 month, and 3 months postoperatively. At 3 months the safety index was 1.06 in the iris recognition group and 1.03 in the non-iris recognition group; the efficacy index was 1.01 and 1.00, respectively. At 3 months, 93.83% of eyes in the iris recognition group and 90.91% in the non-iris recognition group had a spherical equivalent within ±0.50 D (χ(2) = 0.479, P = 0.489); 98.77% and 97.40%, respectively, were within ±1.00 D (Fisher test, P = 0.613). There was no significant difference between the two groups in safety, efficacy, or predictability.
In the non-iris recognition group, the RMS of third-order aberrations, in particular coma, increased more than in the iris recognition group at 1 and 3 months postoperatively (t = 3.414, -2.870; P = 0.027, 0.045); total higher-order aberrations (t = 0.386, 1.132; P = 0.719, 0.321), fourth-order aberrations (t = 0.808, 2.720; P = 0.464, 0.063), and fifth-order aberrations (t = 0.148, -1.717; P = 0.890, 0.161) showed no statistically significant differences. Three months after surgery both groups had recovered CSF at all spatial frequencies; the iris recognition group was better than the non-iris recognition group at the 3.0 c/d (t = 3.209, P = 0.002) and 6.0 c/d (t = 2.997, P = 0.004) spatial frequencies under mesopic conditions, and its glare contrast sensitivity function (GCSF) was better at 3.0 c/d (t = 3.423, P = 0.001) and 6.0 c/d (t = 6.986, P = 0.000) under mesopic conditions and at 1.5 c/d (t = 9.839, P = 0.000) and 3.0 c/d (t = 7.367, P = 0.000) under photopic conditions; there were no significant differences between the two groups at the other spatial frequencies. Wavefront-guided LASIK both with and without iris recognition achieved good postoperative visual acuity, but compared with surgery without iris recognition, wavefront-guided LASIK with iris recognition is effective in reducing coma and enhancing postoperative contrast sensitivity.

  16. Iris recognition in the presence of ocular disease

    PubMed Central

    Aslam, Tariq Mehmood; Tan, Shi Zhuan; Dhillon, Baljean

    2009-01-01

    Iris recognition systems are among the most accurate of all biometric technologies with immense potential for use in worldwide security applications. This study examined the effect of eye pathology on iris recognition and in particular whether eye disease could cause iris recognition systems to fail. The experiment involved a prospective cohort of 54 patients with anterior segment eye disease who were seen at the acute referral unit of the Princess Alexandra Eye Pavilion in Edinburgh. Iris camera images were obtained from patients before treatment was commenced and again at follow-up appointments after treatment had been given. The principal outcome measure was that of mathematical difference in the iris recognition templates obtained from patients' eyes before and after treatment of the eye disease. Results showed that the performance of iris recognition was remarkably resilient to most ophthalmic disease states, including corneal oedema, iridotomies (laser puncture of iris) and conjunctivitis. Problems were, however, encountered in some patients with acute inflammation of the iris (iritis/anterior uveitis). The effects of a subject developing anterior uveitis may cause current recognition systems to fail. Those developing and deploying iris recognition should be aware of the potential problems that this could cause to this key biometric technology. PMID:19324690

  17. Iris recognition in the presence of ocular disease.

    PubMed

    Aslam, Tariq Mehmood; Tan, Shi Zhuan; Dhillon, Baljean

    2009-05-06

    Iris recognition systems are among the most accurate of all biometric technologies with immense potential for use in worldwide security applications. This study examined the effect of eye pathology on iris recognition and in particular whether eye disease could cause iris recognition systems to fail. The experiment involved a prospective cohort of 54 patients with anterior segment eye disease who were seen at the acute referral unit of the Princess Alexandra Eye Pavilion in Edinburgh. Iris camera images were obtained from patients before treatment was commenced and again at follow-up appointments after treatment had been given. The principal outcome measure was that of mathematical difference in the iris recognition templates obtained from patients' eyes before and after treatment of the eye disease. Results showed that the performance of iris recognition was remarkably resilient to most ophthalmic disease states, including corneal oedema, iridotomies (laser puncture of iris) and conjunctivitis. Problems were, however, encountered in some patients with acute inflammation of the iris (iritis/anterior uveitis). The effects of a subject developing anterior uveitis may cause current recognition systems to fail. Those developing and deploying iris recognition should be aware of the potential problems that this could cause to this key biometric technology.

  18. Enhanced iris recognition method based on multi-unit iris images

    NASA Astrophysics Data System (ADS)

    Shin, Kwang Yong; Kim, Yeong Gon; Park, Kang Ryoung

    2013-04-01

    For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When obtaining an iris image, the iris's image is frequently rotated because of the user's head roll toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods use shifting of the iris feature codes to perform the matching. However, this increases the computational complexity and level of false acceptance error. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns in the left and right eyes in the same person are different, and they are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain the information of the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by the score fusion of the left and right irises. 
Experimental results on the iris open database of low-resolution images showed that the averaged equal error rate of iris recognition using the proposed method was 4.3006%, which is lower than that of other methods.
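The adaptive bit-shifting step above replaces an exhaustive shift search with a single shift computed from the measured head-roll angle. A minimal sketch on a toy 1-D code, with an assumed angular resolution of 360/len(code) degrees per bit:

```python
def rotate_code(code, shift):
    """Circular shift of a 1-D iris code; a head roll of theta degrees at
    360/len(code) degrees per bit maps to shift = round(theta / step)."""
    shift %= len(code)
    if shift == 0:
        return list(code)
    return code[-shift:] + code[:-shift]

def hamming(a, b):
    """Fractional Hamming distance between two equal-length bit codes."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe = rotate_code(enrolled, 1)             # captured with the head rolled
theta = 45.0                                 # roll measured from pupil centres
shift = round(theta / (360 / len(enrolled))) # -> 1 bit of compensation
score = hamming(rotate_code(probe, -shift), enrolled)
```

Because the compensation is computed once from the angle between the two pupil centres, the matcher avoids both the extra comparisons and the extra false-acceptance risk of trying every shift.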

  19. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative approach using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.
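One simple preprocessing step of the kind the paragraph alludes to is a min-max contrast stretch to lift low-contrast texture in darkly pigmented irides; this sketch is a generic illustration, not the paper's actual preprocessing:

```python
def contrast_stretch(channel):
    """Min-max stretch of pixel values to the full [0, 255] range, a simple
    way to expose faint texture in a low-contrast colour channel."""
    lo, hi = min(channel), max(channel)
    if hi == lo:                    # flat region: nothing to stretch
        return [0] * len(channel)
    return [round(255 * (v - lo) / (hi - lo)) for v in channel]

stretched = contrast_stretch([10, 20, 30])
```

In practice such a stretch would be applied per channel (the red channel often retains the most iris texture under visible light) before running a NIR-oriented matcher on the result.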

  20. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification.

    PubMed

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

This research proposes a multimodal multifeature biometric system for human recognition using two traits, that is, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level are raw biometric data, which contain rich information compared to decision and matching score level fusion. Hence information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to diminish the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
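The concatenate-then-PCA-then-classify pipeline can be sketched as follows; the toy feature vectors and a KNN classifier stand in for the real palmprint and iris descriptors:

```python
import numpy as np

def fuse_and_reduce(palm_feats, iris_feats, n_components=2):
    """Feature-level fusion by concatenation, then PCA (via SVD of the
    centred data) to curb the dimensionality of the joined vectors."""
    X = np.hstack([palm_feats, iris_feats])   # one fused row per sample
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k nearest fused-feature neighbours."""
    dists = np.linalg.norm(train - query, axis=1)
    votes = [labels[i] for i in np.argsort(dists)[:k]]
    return max(set(votes), key=votes.count)

# Toy 2-D descriptors for two subjects, two samples each:
palm_samples = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
iris_samples = np.array([[0.0, 0.1], [0.0, 0.0], [5.0, 4.9], [5.0, 5.0]])
Z = fuse_and_reduce(palm_samples, iris_samples)
labels = ["subject_a", "subject_a", "subject_b", "subject_b"]
```

Fusing before reduction lets PCA exploit correlations across the two modalities, which decision-level or score-level fusion cannot see.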

  1. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    PubMed Central

    Rajagopal, Gayathri; Palaniswamy, Ramamoorthy

    2015-01-01

This research proposes a multimodal multifeature biometric system for human recognition using two traits, that is, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature level fusion. The features at the feature level are raw biometric data, which contain rich information compared to decision and matching score level fusion. Hence information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to diminish the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database. PMID:26640813

  2. Noisy Ocular Recognition Based on Three Convolutional Neural Networks.

    PubMed

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-12-17

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  3. Iris recognition using possibilistic fuzzy matching on local features.

    PubMed

    Tsai, Chung-Chih; Lin, Heng-Yi; Taur, Jinshiuh; Tao, Chin-Wang

    2012-02-01

    In this paper, we propose a novel possibilistic fuzzy matching strategy with invariant properties, which can provide a robust and effective matching scheme for two sets of iris feature points. In addition, the nonlinear normalization model is adopted to provide more accurate position before matching. Moreover, an effective iris segmentation method is proposed to refine the detected inner and outer boundaries to smooth curves. For feature extraction, the Gabor filters are adopted to detect the local feature points from the segmented iris image in the Cartesian coordinate system and to generate a rotation-invariant descriptor for each detected point. After that, the proposed matching algorithm is used to compute a similarity score for two sets of feature points from a pair of iris images. The experimental results show that the performance of our system is better than those of the systems based on the local features and is comparable to those of the typical systems.

  4. A gallery approach for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Yoldash, Rashiduddin; Boehnen, Christopher

    2015-05-01

It has been shown that, in an iris recognition system, the Hamming distance score between frontal and off-angle iris images of the same eye differs. This variation in the Hamming distance score is caused by many factors such as image acquisition angle, occlusion, pupil dilation, and the limbus effect. In this paper, we first study the effect of angle variations between the iris plane and the image acquisition system. We show how the Hamming distance changes for different off-angle iris images even when they come from the same iris, and observe that increasing the acquisition angle between compared iris images increases the Hamming distance. Second, we propose a new technique for off-angle iris recognition that creates a gallery of off-angle iris images at different angles (e.g., 0, 10, 20, 30, 40, and 50 degrees) and compares each probe image with these gallery images. We show the accuracy of the gallery approach for off-angle iris recognition.
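The gallery approach amounts to scoring the probe against every enrolled angle and keeping the best match. A minimal sketch on toy bit codes, with a plain Hamming distance standing in for the full matcher:

```python
def hamming(a, b):
    """Fractional Hamming distance between two equal-length bit codes."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def gallery_match(probe, gallery):
    """Score a probe code against one subject's gallery of codes enrolled at
    several acquisition angles and keep the best (smallest) distance, so an
    off-angle probe is judged against its closest-angle template."""
    return min(((angle, hamming(probe, code))
                for angle, code in gallery.items()),
               key=lambda item: item[1])

# Toy gallery: the same subject enrolled at 0, 30, and 50 degrees:
gallery = {0: [1, 0, 1, 0], 30: [1, 1, 1, 0], 50: [0, 1, 1, 0]}
angle, dist = gallery_match([1, 1, 1, 0], gallery)
```

Taking the minimum over angles absorbs the distance inflation that a single frontal template suffers when the probe is captured off-axis.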

  5. Noisy Ocular Recognition Based on Three Convolutional Neural Networks

    PubMed Central

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-01-01

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user’s eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods. PMID:29258217

  6. Iris Recognition Using Feature Extraction of Box Counting Fractal Dimension

    NASA Astrophysics Data System (ADS)

    Khotimah, C.; Juniati, D.

    2018-01-01

Biometrics is a science that is now growing rapidly. Iris recognition is a biometric modality that captures a photo of the eye's pattern. The markings of the iris are so distinctive that they have been proposed as a means of identification instead of fingerprints. Iris recognition was chosen for identification in this research because each individual's iris is unique, and because the iris is protected by the cornea and therefore keeps a fixed shape. This iris recognition consists of three steps: data pre-processing, feature extraction, and feature matching. The Hough transform is used in pre-processing to locate the iris area, and Daugman's rubber sheet model normalizes the iris data into rectangular blocks. To characterize the iris, the box counting method is used to obtain the fractal dimension value of the iris. Tests were carried out using the k-fold cross-validation method with k = 5; each test used 10 different values of K for K-Nearest Neighbor (KNN). The best iris recognition accuracy obtained was 92.63%, for K = 3 in the KNN method.
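The box counting step can be sketched directly: cover the point set with boxes at several scales and take the slope of log N(s) against log(1/s). This is a generic illustration on a toy point set, not the paper's iris pipeline:

```python
import math

def box_count(points, size):
    """Number of size-by-size boxes needed to cover a 2-D point set."""
    return len({(int(x // size), int(y // size)) for x, y in points})

def fractal_dimension(points, sizes=(1, 2, 4, 8)):
    """Box-counting dimension: least-squares slope of log N(s) vs log(1/s)."""
    xs = [math.log(1 / s) for s in sizes]
    ys = [math.log(box_count(points, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A straight line of pixels has box-counting dimension 1:
fd = fractal_dimension([(i, i) for i in range(64)])
```

Applied to the normalized iris texture, the resulting dimension values form the feature vector that the KNN classifier votes on.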

  7. Toward noncooperative iris recognition: a classification approach using multiple signatures.

    PubMed

    Proença, Hugo; Alexandre, Luís A

    2007-04-01

    This paper focuses on noncooperative iris recognition, i.e., the capture of iris images at large distances, under less controlled lighting conditions, and without active participation of the subjects. This increases the probability of capturing very heterogeneous images (regarding focus, contrast, or brightness) and with several noise factors (iris obstructions and reflections). Current iris recognition systems are unable to deal with noisy data and substantially increase their error rates, especially the false rejections, in these conditions. We propose an iris classification method that divides the segmented and normalized iris image into six regions, makes an independent feature extraction and comparison for each region, and combines each of the dissimilarity values through a classification rule. Experiments show a substantial decrease, higher than 40 percent, of the false rejection rates in the recognition of noisy iris images.

  8. Challenging ocular image recognition

    NASA Astrophysics Data System (ADS)

    Pauca, V. Paúl; Forkin, Michael; Xu, Xiao; Plemmons, Robert; Ross, Arun A.

    2011-06-01

    Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition performance in the presence of non-ideal data. There are several advantages for increasing the area beyond the iris, yet there are also key issues that must be addressed such as size of the ocular region, factors affecting performance, and appropriate corpora to study these factors in isolation. In this paper, we explore and identify some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is performed where iris recognition methods are contrasted with texture and point operators on existing iris and face datasets. The experimental results show a dramatic recognition performance gain when additional features are considered in the presence of poor quality iris data, offering strong evidence for extending interest beyond the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.

  9. Use of Biometrics within Sub-Saharan Refugee Communities

    DTIC Science & Technology

    2013-12-01

    Biometrics typically comprises fingerprint patterns, iris pattern recognition, and facial recognition as a means of establishing an individual’s identity. Biometrics creates and... authentication because it identifies an individual based on mathematical analysis of the random pattern visible within the iris. Facial recognition is...

  10. Iris recognition based on robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Karn, Pradeep; He, Xiao Hai; Yang, Shuai; Wu, Xiao Hong

    2014-11-01

    Iris images acquired under different conditions often suffer from blur, occlusion due to eyelids and eyelashes, specular reflection, and other artifacts. Existing iris recognition systems do not perform well on these types of images. To overcome these problems, we propose an iris recognition method based on robust principal component analysis. The proposed method decomposes all training images into a low-rank matrix and a sparse error matrix, where the low-rank matrix is used for feature extraction. The sparsity concentration index approach is then applied to validate the recognition result. Experimental results using the CASIA V4 and IIT Delhi V1 iris image databases showed that the proposed method achieved competitive performance in both recognition accuracy and computational efficiency.

  11. Comparison and evaluation of datasets for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

    In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle images are not captured at the same time. Comparison of frontal and off-angle iris images shows not only differences in gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of the gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. We therefore developed an iris image acquisition platform using two cameras, where one camera captures the frontal iris image and the other captures the iris image from an off-angle view. Comparing the Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is lower than in the one-camera setup, by margins ranging from 0.001 to 0.05. These results show that a two-camera setup is necessary to distinguish the challenging issues from each other and obtain accurate results in off-angle iris recognition research.
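    The Hamming distance scores reported above are conventionally fractional Hamming distances between binary iris codes, computed only over bits that are unoccluded in both images. A minimal sketch, assuming plain boolean arrays rather than any particular iris encoder:

```python
import numpy as np

def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    Only bits valid in both masks (not occluded by eyelids, lashes, or
    reflections) contribute; 0 means identical codes, while ~0.5 is the
    expected score for two unrelated irises.
    """
    valid = mask_a & mask_b
    disagreeing = (code_a ^ code_b) & valid
    return disagreeing.sum() / valid.sum()

a = np.array([0, 1, 1, 0, 1, 0, 1, 1], dtype=bool)
b = np.array([0, 1, 0, 0, 1, 1, 1, 1], dtype=bool)
m = np.ones(8, dtype=bool)
print(fractional_hamming_distance(a, b, m, m))  # → 0.25 (2 of 8 bits differ)
```

    Under this metric, a reduction of 0.001 to 0.05 in genuine-comparison scores, as observed with the two-camera setup, is a meaningful shift relative to typical decision thresholds.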

  12. A new approach for cancelable iris recognition

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Sui, Yan; Zhou, Zhi; Du, Yingzi; Zou, Xukai

    2010-04-01

    The iris is a stable and reliable biometric for positive human identification. However, the traditional iris recognition scheme raises several privacy concerns. One's iris pattern is permanently bound with him and cannot be changed. Hence, once it is stolen, this biometric is lost forever as well as all the applications where this biometric is used. Thus, new methods are desirable to secure the original pattern and ensure its revocability and alternatives when compromised. In this paper, we propose a novel scheme which incorporates iris features, non-invertible transformation and data encryption to achieve "cancelability" and at the same time increases iris recognition accuracy.

  13. Can we recognize horses by their ocular biometric traits using deep convolutional neural networks?

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Szadkowski, Mateusz

    2017-08-01

    This paper aims at determining the viability of horse recognition by means of ocular biometrics and deep convolutional neural networks (deep CNNs). Fast and accurate identification of race horses before racing is crucial for ensuring that exactly the horses that were declared are participating, using methods that are non-invasive and friendly to these delicate animals. As typical iris recognition methods require a lot of fine-tuning of method parameters and high-quality data, CNNs seem like a natural candidate for this recognition task, thanks to their potentially excellent abilities in describing texture, combined with ease of implementation in an end-to-end manner. Also, with such an approach we can easily utilize both iris and periocular features without constructing complicated algorithms for each. We thus present a simple CNN classifier, able to correctly identify almost 80% of the samples in an identification scenario, and giving an equal error rate (EER) of less than 10% in a verification scenario.

  14. Iris recognition via plenoptic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J.; Boehnen, Chris Bensing; Bolme, David S.

    Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.

  15. Using crypts as iris minutiae

    NASA Astrophysics Data System (ADS)

    Shen, Feng; Flynn, Patrick J.

    2013-05-01

    Iris recognition is one of the most reliable biometric technologies for identity recognition and verification, but it has not been used in a forensic context because the representation and matching of iris features are not straightforward for traditional iris recognition techniques. In this paper we concentrate on the iris crypt as a visible feature used to represent the characteristics of irises in a similar way to fingerprint minutiae. The matching of crypts is based on their appearances and locations. The number of matching crypt pairs found between two irises can be used for identity verification and the convenience of manual inspection makes iris crypts a potential candidate for forensic applications.

  16. Comparative analysis of classification based algorithms for diabetes diagnosis using iris images.

    PubMed

    Samant, Piyush; Agarwal, Ravinder

    2018-01-01

    Photo-diagnosis has always been an intriguing area for researchers, and with the advancement of image processing and computer vision techniques it has become more reliable and popular in recent years. The objective of this paper is to study changes in the features of the iris, particularly irregularities in the pigmentation of certain areas, with respect to the diabetic health of an individual. Whereas iris recognition concentrates on the overall structure of the iris, diagnostic techniques emphasize local variations in particular areas of the iris. Image pre-processing techniques were applied to extract the iris, and the region of interest was then cropped from the extracted iris. In order to observe changes in the tissue pigmentation of the region of interest, statistical, textural, and wavelet features were extracted. Finally, the accuracies of five different classifiers in separating diabetic and non-diabetic subject groups are compared. The best classification accuracy, 89.66%, was achieved by the random forest classifier. The results show the effectiveness and diagnostic significance of the proposed methodology, which offers a novel systemic perspective on non-invasive and automatic diabetic diagnosis.

  17. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    PubMed Central

    Islam, Md. Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676

  18. A multi-approach feature extractions for iris recognition

    NASA Astrophysics Data System (ADS)

    Sanpachai, H.; Settapong, M.

    2014-04-01

    Biometrics is a promising technique used to identify individual traits and characteristics. Iris recognition is one of the most reliable biometric methods. As iris texture and color are fully developed within a year of birth, they remain unchanged throughout a person's life, in contrast to fingerprints, which can be altered by several factors including accidental damage, dry or oily skin, and dust. Although iris recognition has been studied for more than a decade, there are limited commercial products available due to its arduous requirements, such as camera resolution, hardware size, expensive equipment, and computational complexity. However, at the present time, technology has overcome these obstacles. Iris recognition proceeds through several sequential steps: pre-processing, feature extraction, post-processing, and matching. In this paper, we adopted the directional high-low pass filter for feature extraction. A box-counting fractal dimension and iris code have been proposed as feature representations. Our approach has been tested on the CASIA Iris Image database and the results are considered successful.

  19. Feature and score fusion based multiple classifier selection for iris recognition.

    PubMed

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al.

  20. Cross-sensor iris recognition through kernel learning.

    PubMed

    Pillai, Jaishanker K; Puertas, Maria; Chellappa, Rama

    2014-01-01

    Due to the increasing popularity of iris biometrics, new sensors are being developed for acquiring iris images and existing ones are being continuously upgraded. Re-enrolling users every time a new sensor is deployed is expensive and time-consuming, especially in applications with a large number of enrolled users. However, recent studies show that cross-sensor matching, where the test samples are verified using data enrolled with a different sensor, often leads to reduced performance. In this paper, we propose a machine learning technique to mitigate the cross-sensor performance degradation by adapting the iris samples from one sensor to another. We first present a novel optimization framework for learning transformations on iris biometrics. We then utilize this framework for sensor adaptation, by reducing the distance between samples of the same class, and increasing it between samples of different classes, irrespective of the sensors acquiring them. Extensive evaluations on iris data from multiple sensors demonstrate that the proposed method leads to improvement in cross-sensor recognition accuracy. Furthermore, since the proposed technique requires minimal changes to the iris recognition pipeline, it can easily be incorporated into existing iris recognition systems.

  1. Evaluation of iris recognition system for wavefront-guided laser in situ keratomileusis for myopic astigmatism.

    PubMed

    Ghosh, Sudipta; Couper, Terry A; Lamoureux, Ecosse; Jhanji, Vishal; Taylor, Hugh R; Vajpayee, Rasik B

    2008-02-01

    To evaluate the visual and refractive outcomes of wavefront-guided laser in situ keratomileusis (LASIK) using an iris recognition system for the correction of myopic astigmatism. Centre for Eye Research Australia, Melbourne Excimer Laser Research Group, and Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia. A comparative analysis of wavefront-guided LASIK was performed with an iris recognition system (iris recognition group) and without iris recognition (control group). The main parameters were uncorrected visual acuity (UCVA), best spectacle-corrected visual acuity, amount of residual cylinder, manifest spherical equivalent (SE), and the index of success using the Alpins method of astigmatism analysis 1 and 3 months postoperatively. A P value less than 0.05 was considered statistically significant. Preoperatively, the mean SE was -4.32 diopters (D) +/- 1.59 (SD) in the iris recognition group (100 eyes) and -4.55 +/- 1.87 D in the control group (98 eyes) (P = .84). At 3 months, the mean SE was -0.05 +/- 0.21 D and -0.20 +/- 0.40 D, respectively (P = .001), and an SE within +/-0.50 D of emmetropia was achieved in 92.0% and 85.7% of eyes, respectively (P = .07). At 3 months, the UCVA was 20/20 or better in 90.0% and 76.5% of eyes, respectively. A statistically significant difference in the amount of astigmatic correction was seen between the 2 groups (P = .00 and P = .01 at 1 and 3 months, respectively). The index of success was 98.0% in the iris recognition group and 81.6% in the control group (P = .03). Iris recognition software may achieve better visual and refractive outcomes in wavefront-guided LASIK for myopic astigmatism.

  2. Limbus Impact on Off-angle Iris Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    The accuracy of iris recognition depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. Off-angle iris recognition is a new research focus in biometrics that tries to address several issues including corneal refraction, complex 3D iris texture, and blur. In this paper, we present an additional significant challenge that degrades the performance of off-angle iris recognition systems, called the limbus effect. The limbus is the region at the border of the cornea where the cornea joins the sclera. The limbus is a semitransparent tissue that occludes a side portion of the iris plane. The amount of occluded iris texture on the side nearest the camera increases as the image acquisition angle increases. Without considering the role of the limbus effect, it is difficult to design an accurate off-angle iris recognition system. To the best of our knowledge, this is the first work that investigates the limbus effect in detail from a biometrics perspective. Based on results from real images and simulated experiments with real iris texture, the limbus effect increases the Hamming distance score between frontal and off-angle iris images by 0.05 to 0.2, depending upon the limbus height.

  3. Extending the Capture Volume of an Iris Recognition System Using Wavefront Coding and Super-Resolution.

    PubMed

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen

    2016-12-01

    Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its application. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance in an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the blurring of the image caused by wavefront coding, on the software side, the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiment. The experimental results showed that such improvement cannot be achieved by a hardware-only or software-only design. The proposed system can increase the capture volume of a conventional iris recognition system by three times and maintain the system's high recognition rate.

  4. Ordinal measures for iris recognition.

    PubMed

    Sun, Zhenan; Tan, Tieniu

    2009-12-01

    Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.
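    The core idea above, keeping only qualitative relationships between iris regions, can be sketched very simply: each code bit records whether one image lobe is brighter than another, i.e. the sign rather than the magnitude of a dilobe differential filter response. The rectangular-lobe layout below is an illustrative assumption, not the paper's multilobe filters:

```python
import numpy as np

def ordinal_code(image, pairs):
    """Encode qualitative inter-region relationships as ordinal measures.

    For each pair of square lobes given as (r0, c0, r1, c1, size), the bit
    is 1 iff the mean intensity of the first lobe exceeds that of the
    second. Only the ordering survives, which is what makes the code
    largely invariant to monotonic illumination changes.
    """
    def lobe_mean(r, c, s):
        return image[r:r + s, c:c + s].mean()

    bits = [lobe_mean(r0, c0, s) > lobe_mean(r1, c1, s)
            for (r0, c0, r1, c1, s) in pairs]
    return np.array(bits, dtype=bool)

# Toy image whose intensity rises left to right.
grad = np.tile(np.arange(8.0), (8, 1))
pairs = [(0, 0, 0, 4, 2), (0, 4, 0, 0, 2)]
print(ordinal_code(grad, pairs))  # → [False  True]
```

    Scaling the image by any positive factor or adding a constant offset leaves every bit unchanged, which illustrates the illumination robustness the abstract claims for ordinal measures.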

  5. Optimal wavelength band clustering for multispectral iris recognition.

    PubMed

    Gong, Yazhuo; Zhang, David; Shi, Pengfei; Yan, Jingqi

    2012-07-01

    This work explores the possibility of clustering spectral wavelengths based on the maximum dissimilarity of iris textures. The eventual goal is to determine how many bands of spectral wavelengths will be enough for iris multispectral fusion and to find these bands that will provide higher performance of iris multispectral recognition. A multispectral acquisition system was first designed for imaging the iris at narrow spectral bands in the range of 420 to 940 nm. Next, a set of 60 human iris images that correspond to the right and left eyes of 30 different subjects were acquired for an analysis. Finally, we determined that 3 clusters were enough to represent the 10 feature bands of spectral wavelengths using the agglomerative clustering based on two-dimensional principal component analysis. The experimental results suggest (1) the number, center, and composition of clusters of spectral wavelengths and (2) the higher performance of iris multispectral recognition based on a three wavelengths-bands fusion.

  6. The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance.

    PubMed

    Proença, Hugo; Filipe, Sílvio; Santos, Ricardo; Oliveira, João; Alexandre, Luís A

    2010-08-01

    The iris is regarded as one of the most useful traits for biometric recognition and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore suitable only for evaluating methods designed to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on-the-move. This database is freely available to researchers concerned with visible wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.

  7. The Potential of Using Brain Images for Authentication

    PubMed Central

    Zhou, Zongtan; Shen, Hui; Hu, Dewen

    2014-01-01

    Biometric recognition (also known as biometrics) refers to the automated recognition of individuals based on their biological or behavioral traits. Examples of biometric traits include fingerprint, palmprint, iris, and face. The brain is the most important and complex organ in the human body. Can it be used as a biometric trait? In this study, we analyze the uniqueness of the brain and try to use the brain for identity authentication. The proposed brain-based verification system operates in two stages: gray matter extraction and gray matter matching. A modified brain segmentation algorithm is implemented for extracting gray matter from an input brain image. Then, an alignment-based matching algorithm is developed for brain matching. Experimental results on two data sets show that the proposed brain recognition system meets the high accuracy requirement of identity authentication. Though the acquisition of brain images is currently still time consuming and expensive, brain images are highly unique and have potential for authentication from a pattern recognition perspective. PMID:25126604

  8. The potential of using brain images for authentication.

    PubMed

    Chen, Fanglin; Zhou, Zongtan; Shen, Hui; Hu, Dewen

    2014-01-01

    Biometric recognition (also known as biometrics) refers to the automated recognition of individuals based on their biological or behavioral traits. Examples of biometric traits include fingerprint, palmprint, iris, and face. The brain is the most important and complex organ in the human body. Can it be used as a biometric trait? In this study, we analyze the uniqueness of the brain and try to use the brain for identity authentication. The proposed brain-based verification system operates in two stages: gray matter extraction and gray matter matching. A modified brain segmentation algorithm is implemented for extracting gray matter from an input brain image. Then, an alignment-based matching algorithm is developed for brain matching. Experimental results on two data sets show that the proposed brain recognition system meets the high accuracy requirement of identity authentication. Though the acquisition of brain images is currently still time consuming and expensive, brain images are highly unique and have potential for authentication from a pattern recognition perspective.

  9. Ocular biometrics by score-level fusion of disparate experts.

    PubMed

    Proença, Hugo

    2014-12-01

    The concept of periocular biometrics emerged to improve the robustness of iris recognition to degraded data. Being a relatively recent topic, most of the periocular recognition algorithms work in a holistic way and apply a feature encoding/matching strategy without considering each biological component in the periocular area. This not only augments the correlation between the components in the resulting biometric signature, but also increases the sensitivity to particular data covariates. The main novelty in this paper is to propose a periocular recognition ensemble made of two disparate components: 1) one expert analyses the iris texture and exhaustively exploits the multispectral information in visible-light data and 2) another expert parameterizes the shape of eyelids and defines a surrounding dimensionless region-of-interest, from where statistics of the eyelids, eyelashes, and skin wrinkles/furrows are encoded. Both experts work on disjoint regions of the periocular area and meet three important properties. First, they produce practically independent responses, which is behind the better performance of the ensemble when compared to the best individual recognizer. Second, they do not share particular sensitivity to any image covariate, which augments robustness against degraded data. Finally, it should be stressed that we disregard information in the periocular region that can be easily forged (e.g., shape of eyebrows), which constitutes an active anticounterfeit measure. An empirical evaluation was conducted on two public data sets (FRGC and UBIRIS.v2), and points to consistent improvements in the performance of the proposed ensemble over state-of-the-art periocular recognition algorithms.

  10. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from irises showed that not all vector elements are equally relevant. There are two characteristics which determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without respect to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently. Following this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.
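    The Gabor-filtering-plus-quantization scheme described above follows the classic iris code construction: convolve the normalized iris signal with a complex Gabor wavelet and keep only the quadrant of the response phase, one bit for the sign of the real part and one for the imaginary part. A minimal 1-D sketch (filter parameters and the synthetic input are illustrative assumptions, not the paper's optimized values):

```python
import numpy as np

def gabor_phase_bits(signal, wavelength=8.0, sigma=4.0):
    """Quantize 1-D complex Gabor responses into a two-bit phase code.

    bit 0 = sign of the real part, bit 1 = sign of the imaginary part;
    the response magnitude (the 'fragile' part near zero crossings)
    is discarded entirely.
    """
    n = len(signal)
    x = np.arange(-n // 2, n // 2)
    gabor = (np.exp(-x**2 / (2 * sigma**2))
             * np.exp(2j * np.pi * x / wavelength))
    response = np.convolve(signal - signal.mean(), gabor, mode='same')
    return np.stack([response.real > 0, response.imag > 0], axis=1)

row = np.sin(2 * np.pi * np.arange(64) / 8.0)  # synthetic normalized iris row
code = gabor_phase_bits(row)
print(code.shape)  # → (64, 2)
```

    Bits whose underlying response magnitude is close to zero are exactly the fragile bits the paper's thresholds are meant to suppress, since a small perturbation can flip their sign.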

  11. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method shows effective gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50 degree range.

  12. Trade off between variable and fixed size normalization in orthogonal polynomials based iris recognition system.

    PubMed

    Krishnamoorthi, R; Anna Poorani, G

    2016-01-01

    Iris normalization is an important stage in any iris biometric system, as it tends to reduce the consequences of iris distortion. To compensate for variation in iris size, owing to pupil stretching or enlargement during acquisition and to camera-to-eyeball distance, two normalization schemes have been proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model in order to avoid undersampling near the limbus border. In the second method, the iris region of interest is normalized by converting it into a fixed-size rectangular model in order to avoid dimensional discrepancies between eye images. The performance of the proposed normalization methods is evaluated with orthogonal-polynomials-based iris recognition in terms of FAR, FRR, GAR, CRR, and EER.

  13. Comparison between extreme learning machine and wavelet neural networks in data classification

    NASA Astrophysics Data System (ADS)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

    The extreme learning machine is a well-known learning algorithm in the field of machine learning: a feed-forward neural network with a single hidden layer. It is an extremely fast learning algorithm with good generalization performance. In this paper, we compare the extreme learning machine with wavelet neural networks, a widely used algorithm. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition, and Iris Plant. Experimental results show that both extreme learning machines and wavelet neural networks reach good results.
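The reason ELM training is so fast is visible in a few lines of code: the hidden-layer weights are drawn at random and never trained, and the output weights are obtained analytically with a pseudoinverse. A minimal sketch (not the authors' implementation):

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden=50, seed=0):
    # Single-hidden-layer ELM: random input weights and biases,
    # analytic least-squares solve for the output weights.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)               # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot  # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    # Class = argmax over the linear readout of the hidden activations.
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

Because the only "training" is one pseudoinverse, fitting takes a single linear-algebra call, which is what makes ELM competitive in speed with iterative backpropagation-style learners such as wavelet neural networks.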

  14. Reliability of automatic biometric iris recognition after phacoemulsification or drug-induced pupil dilation.

    PubMed

    Seyeddain, Orang; Kraker, Hannes; Redlberger, Andreas; Dexl, Alois K; Grabner, Günther; Emesz, Martin

    2014-01-01

    To investigate the reliability of a biometric iris recognition system for personal authentication after cataract surgery or iatrogenic pupil dilation. This was a prospective, nonrandomized, single-center cohort study evaluating the performance of an iris recognition system 2-24 hours after phacoemulsification and intraocular lens implantation (group 1) and before and after iatrogenic pupil dilation (group 2). Of the 173 eyes that could be enrolled before cataract surgery, 164 (94.8%) were easily recognized postoperatively, whereas in 9 (5.2%) this was not possible. However, these 9 eyes could be reenrolled and afterwards recognized successfully. In group 2, of a total of 184 eyes that were enrolled in miosis, 22 (11.9%) could not be recognized in mydriasis and therefore needed reenrollment. No case of false-positive acceptance occurred in either group. The results of this trial indicate that standard cataract surgery does not seem to be a limiting factor for iris recognition in the large majority of cases, although some patients (5.2% in this study) might need reenrollment after cataract surgery. Iris recognition was successful in nearly 9 out of 10 eyes with medically dilated pupils. It seems, therefore, that iris recognition is a valid biometric method in the majority of cases after cataract surgery or pupil dilation.

  15. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition

    PubMed Central

    Sánchez, Daniela; Melin, Patricia

    2017-01-01

    A grey wolf optimizer for modular neural networks (MNNs) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding optimal parameters of its architecture: the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their numbers of neurons. Within the evolutionary computing area there is a great variety of approaches and new techniques that have emerged to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm to determine which of these techniques provides better results when applied to human recognition. PMID:28894461

  16. A Grey Wolf Optimizer for Modular Granular Neural Networks for Human Recognition.

    PubMed

    Sánchez, Daniela; Melin, Patricia; Castillo, Oscar

    2017-01-01

    A grey wolf optimizer for modular neural networks (MNNs) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding optimal parameters of its architecture: the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and their numbers of neurons. Within the evolutionary computing area there is a great variety of approaches and new techniques that have emerged to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm to determine which of these techniques provides better results when applied to human recognition.
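For readers unfamiliar with the optimizer itself, the canonical grey wolf update (candidates guided by the three best "wolves" alpha, beta, delta, with a control parameter decreasing from 2 to 0) can be sketched on a toy continuous objective; the paper applies it to discrete MGNN architecture parameters instead.

```python
import numpy as np

def gwo(f, dim, bounds, n_wolves=12, iters=100, seed=0):
    # Canonical grey wolf optimizer minimizing f over a box (sketch).
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    best, best_fit = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        order = np.argsort(fit)
        if fit[order[0]] < best_fit:
            best, best_fit = X[order[0]].copy(), fit[order[0]]
        alpha, beta, delta = X[order[:3]]     # three leading wolves
        a = 2 * (1 - t / iters)               # control parameter: 2 -> 0
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                pos += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(pos / 3, lo, hi)   # average of the three pulls
    return best

best = gwo(lambda x: float(np.sum(x**2)), dim=3, bounds=(-5, 5))
```

In the architecture-search setting, each wolf position would encode the MGNN parameters (number of subgranules, layer sizes, etc.) and `f` would be the validation error of the trained network.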

  17. Iris-based medical analysis by geometric deformation features.

    PubMed

    Ma, Lin; Zhang, D; Li, Naimin; Cai, Yan; Zuo, Wangmeng; Wang, Kuanguan

    2013-01-01

    Iris analysis studies the relationship between human health and changes in the anatomy of the iris. Whereas iris recognition focuses on modeling the overall structure of the iris, iris diagnosis emphasizes the detection and analysis of local variations in iris characteristics. This paper focuses on studying the geometrical structure changes in irises caused by gastrointestinal diseases, and on measuring the observable deformations of the geometrical structures of irises related to the roundness, diameter, and other geometric forms of the pupil and the collarette. Pupil- and collarette-based features are defined and extracted. A series of experiments is conducted on our experimental pathological iris database, including manual clustering of both normal and pathological iris images, manual classification by non-specialists, manual classification by individuals with a medical background, verification of the classification ability of the proposed features, and disease recognition using the proposed features. The results demonstrate the effectiveness and clinical diagnostic significance of the proposed features and show reliable recognition performance for automatic disease diagnosis. Our research offers a novel systematic perspective for iridology studies and promotes both theoretical and practical progress in iris diagnosis.

  18. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.

  19. Effects of Iris Surface Curvature on Iris Recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Joseph T; Flynn, Patrick J; Bowyer, Kevin W

    To focus on objects at various distances, the lens of the eye must change shape to adjust its refractive power. This change in lens shape causes a change in the shape of the iris surface, which can be measured by examining the curvature of the iris. This work isolates the variable of iris curvature in the recognition process and shows that differences in iris curvature degrade matching ability. To our knowledge, no other work has examined the effects of varying iris curvature on matching ability. To examine this degradation, we conduct a matching experiment across pairs of images with varying degrees of iris curvature differences. The results show a statistically significant degradation in matching ability. Finally, the real-world impact of these findings is discussed.

  20. Iris recognition as a biometric method after cataract surgery

    PubMed Central

    Roizenblatt, Roberto; Schor, Paulo; Dante, Fabio; Roizenblatt, Jaime; Belfort, Rubens

    2004-01-01

    Background Biometric methods are security technologies, which use human characteristics for personal identification. Iris recognition systems use iris textures as unique identifiers. This paper presents an analysis of the verification of iris identities after intra-ocular procedures, when individuals were enrolled before the surgery. Methods Fifty-five eyes from fifty-five patients had their irises enrolled before a cataract surgery was performed. They had their irises verified three times before and three times after the procedure, and the Hamming (mathematical) distance of each identification trial was determined, in a controlled ideal biometric environment. The mathematical difference between the iris code before and after the surgery was also compared to a subjective evaluation of the iris anatomy alteration by an experienced surgeon. Results A correlation between visible subjective iris texture alteration and mathematical difference was verified. We found only six cases in which the eye was no longer recognizable, but these eyes were later reenrolled. The main anatomical changes that were found in the new impostor eyes are described. Conclusions Cataract surgeries change iris textures in such a way that iris recognition systems, which perform mathematical comparisons of textural biometric features, are able to detect these changes and sometimes even discard a pre-enrolled iris considering it an impostor. In our study, re-enrollment proved to be a feasible procedure. PMID:14748929

  1. Iris recognition as a biometric method after cataract surgery.

    PubMed

    Roizenblatt, Roberto; Schor, Paulo; Dante, Fabio; Roizenblatt, Jaime; Belfort, Rubens

    2004-01-28

    Biometric methods are security technologies, which use human characteristics for personal identification. Iris recognition systems use iris textures as unique identifiers. This paper presents an analysis of the verification of iris identities after intra-ocular procedures, when individuals were enrolled before the surgery. Fifty-five eyes from fifty-five patients had their irises enrolled before a cataract surgery was performed. They had their irises verified three times before and three times after the procedure, and the Hamming (mathematical) distance of each identification trial was determined, in a controlled ideal biometric environment. The mathematical difference between the iris code before and after the surgery was also compared to a subjective evaluation of the iris anatomy alteration by an experienced surgeon. A correlation between visible subjective iris texture alteration and mathematical difference was verified. We found only six cases in which the eye was no longer recognizable, but these eyes were later reenrolled. The main anatomical changes that were found in the new impostor eyes are described. Cataract surgeries change iris textures in such a way that iris recognition systems, which perform mathematical comparisons of textural biometric features, are able to detect these changes and sometimes even discard a pre-enrolled iris considering it an impostor. In our study, re-enrollment proved to be a feasible procedure.

  2. Real-time image restoration for iris recognition systems.

    PubMed

    Kang, Byung Jun; Park, Kang Ryoung

    2007-12-01

    In the field of biometrics, it has been reported that iris recognition techniques have shown high levels of accuracy because unique patterns of the human iris, which has very many degrees of freedom, are used. However, because conventional iris cameras have small depth-of-field (DOF) areas, input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores; therefore, parameter estimation is more accurate than it has been in previous research; 3) because the PSF parameter can be obtained by using a predetermined equation, iris image restoration can be done in real-time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight value of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors when using the proposed restoration method were greatly reduced as compared to those results achieved without restoration or those achieved using previous iris-restoration methods.
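The CLS restoration step (point 4 above) can be illustrated with a frequency-domain sketch. This assumes a known PSF and circular convolution, and uses a fixed regularization weight; the paper's method additionally estimates the PSF parameter from camera optics and focus scores and adapts the noise weight to the amount of blurring.

```python
import numpy as np

def cls_restore(blurred, psf, gamma=0.01):
    # Constrained least squares deconvolution in the frequency domain:
    # F = H* G / (|H|^2 + gamma |P|^2), where P is a Laplacian
    # smoothness constraint and gamma the noise-regularization weight.
    shape = blurred.shape
    H = np.fft.fft2(psf, shape)              # PSF zero-padded to image size
    lap = np.zeros(shape)
    lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
    P = np.fft.fft2(lap)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H)**2 + gamma * np.abs(P)**2)
    return np.real(np.fft.ifft2(F))
```

With `gamma = 0` this reduces to inverse filtering, which amplifies noise wherever `|H|` is small; the Laplacian term suppresses exactly those high-frequency components, which is the trade-off the paper tunes per image.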

  3. NATIONAL PREPAREDNESS: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy

    DTIC Science & Technology

    2002-06-07

    Continue to Develop and Refine Emerging Technology • Some of the emerging biometric devices, such as iris scans, facial recognition systems, and speaker verification systems. (976301)

  4. Extending the imaging volume for biometric iris recognition.

    PubMed

    Narayanswamy, Ramkumar; Johnson, Gregory E; Silveira, Paulo E X; Wach, Hans B

    2005-02-10

    The use of the human iris as a biometric has recently attracted significant interest in the area of security applications. The need to capture an iris without active user cooperation places demands on the optical system. Unlike a traditional optical design, in which a large imaging volume is traded off for diminished imaging resolution and capacity for collecting light, Wavefront Coded imaging is a computational imaging technology capable of expanding the imaging volume while maintaining an accurate and robust iris identification capability. We apply Wavefront Coded imaging to extend the imaging volume of the iris recognition application.

  5. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    NASA Astrophysics Data System (ADS)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    Iris recognition is among the most secure and fastest means of identification and authentication. However, iris recognition systems suffer from blurring, low contrast, and poor illumination due to low-quality images, which compromise the accuracy of the system. The acceptance or rejection rate of a verified user depends solely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. Therefore, this paper adopts the histogram equalization technique to address the false rejection rate (FRR) and false acceptance rate (FAR) by enhancing the contrast of the iris image. Histogram equalization enhances the image quality and neutralizes the low contrast of the image at the normalization stage. Experimental results show that the histogram equalization technique reduced FRR and FAR compared to existing techniques.
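The histogram equalization step itself is standard: each grey level is remapped through the normalized cumulative histogram, stretching a narrow intensity range across the full 8-bit scale. A minimal grayscale sketch:

```python
import numpy as np

def hist_equalize(img):
    # Classic histogram equalization for an 8-bit grayscale image:
    # map each grey level through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                 # first occupied grey level
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

Applied to a low-contrast iris strip before encoding, the remapping spreads the texture over the full dynamic range, which is what reduces the FRR/FAR sensitivity to capture conditions.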

  6. Extended depth of field system for long distance iris acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao

    2012-10-01

    Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition systems have attracted much attention due to their high accuracy and stability. The texture of the iris provides a signature that is unique to each subject. Currently, most commercial iris recognition systems acquire images at less than 50 cm, a serious constraint that must be overcome for applications such as airport access or entrances requiring a high turnover rate. In order to capture iris patterns from a distance, in this study we developed a telephoto imaging system with image processing techniques. By using a cubic phase mask positioned in front of the camera, the point spread function is kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image is restored, and a working distance between subject and camera of over 3 m can be achieved with a 500 mm focal length and aperture F/6.3. Simulation and experimental results validate the proposed scheme: the depth of focus of the iris camera was extended threefold over traditional optics while keeping sufficient recognition accuracy.

  7. Heterogeneous iris image hallucination using sparse representation on a learned heterogeneous patch dictionary

    NASA Astrophysics Data System (ADS)

    Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun

    2014-09-01

    Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between iris images from the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to attack this problem. The first method applies the latest sparse representation theory, while the second learns the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms of iris textures across different image sensors and build connections between them. After such connections are built, at test stage it is possible to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results are satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the EER is reduced by 39.4% (relative) by the proposed method.

  8. Iris recognition and what is next? Iris diagnosis: a new challenging topic for machine vision from image acquisition to image interpretation

    NASA Astrophysics Data System (ADS)

    Perner, Petra

    2017-03-01

    Molecular image-based techniques are widely used in medicine to detect specific diseases. Appearance-based diagnosis is an important issue, and analysis of the eye in particular plays an important role in detecting specific diseases. These are important topics in medicine, and their standardization by an automatic system is a new and challenging field for machine vision. Compared to iris recognition, iris diagnosis places much higher demands on image acquisition and interpretation of the iris. Iris diagnosis (iridology) is the investigation and analysis of the colored part of the eye, the iris, to discover factors that play an important role in the prevention and treatment of illnesses and in the preservation of optimum health. An automatic system would pave the way for a much wider use of iris diagnosis for the diagnosis of illnesses and for individual health protection. In this paper, we describe our work towards an automatic iris diagnosis system. We describe image acquisition and its problems, and explain different approaches to image acquisition and preprocessing. We describe the image analysis method for the detection of the iris, and the meta-model for image interpretation is given. Based on this model we show the many tasks for image analysis, ranging from image-object feature analysis and spatial image analysis to color image analysis. Our first results for the recognition of the iris are given. We describe how to detect the pupil and unwanted lamp spots, and explain how to recognize orange and blue spots in the iris and match them against the topological map of the iris. Finally, we give an outlook on further work.

  9. A field study of the accuracy and reliability of a biometric iris recognition system.

    PubMed

    Latman, Neal S; Herb, Emily

    2013-06-01

    The iris of the eye appears to satisfy the criteria for a good anatomical characteristic for use in a biometric system. The purpose of this study was to evaluate a biometric iris recognition system: Mobile-Eyes™. The enrollment, verification, and identification applications were evaluated in a field study for accuracy and reliability using both irises of 277 subjects. Independent variables included a wide range of subject demographics, ambient light, and ambient temperature. A sub-set of 35 subjects had alcohol-induced nystagmus. There were 2710 identification and verification attempts, which resulted in 1,501,340 and 5540 iris comparisons respectively. In this study, the system successfully enrolled all subjects on the first attempt. All 277 subjects were successfully verified and identified on the first day of enrollment. None of the current or prior eye conditions prevented enrollment, verification, or identification. All 35 subjects with alcohol-induced nystagmus were successfully verified and identified. There were no false verifications or false identifications. Two conditions were identified that potentially could circumvent the use of iris recognition systems in general. The Mobile-Eyes™ iris recognition system exhibited accurate and reliable enrollment, verification, and identification applications in this study. It may have special applications in subjects with nystagmus. Copyright © 2012 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  10. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. The personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, and iris texture; the other depends on individual behavioral attributes, such as signature, keystroke, voice, and gait style. Among these features, iris recognition is one of the most attractive approaches due to its randomness, texture stability over a lifetime, high entropy density, and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially data acquired in challenging conditions such as long working distance, dynamic movement of subjects, and uncontrolled illumination. There are three main contributions in this paper. First, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Second, the irradiance constraints were derived from the optical conservation theorem; through the relationship between the subject and the detector, we could estimate the limit of the working distance when the camera lens and CCD sensor are known. The working distance is set to 3 m in our system, with an 86 mm pupil diameter and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization, and post-signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and aperture F/6.3 optics. The simulation results as well as experiments validate the proposed coded-aperture imaging system, where the imaging volume was extended 2.57 times over traditional optics while keeping sufficient recognition accuracy.

  11. New methods in iris recognition.

    PubMed

    Daugman, John

    2007-10-01

    This paper presents the following four advances in iris recognition: 1) more disciplined methods for detecting and faithfully modeling the iris inner and outer boundaries with active contours, leading to more flexible embedded coordinate systems; 2) Fourier-based methods for solving problems in iris trigonometry and projective geometry, allowing off-axis gaze to be handled by detecting it and "rotating" the eye into orthographic perspective; 3) statistical inference methods for detecting and excluding eyelashes; and 4) exploration of score normalizations, depending on the amount of iris data that is available in images and the required scale of database search. Statistical results are presented based on 200 billion iris cross-comparisons that were generated from 632500 irises in the United Arab Emirates database to analyze the normalization issues raised in different regions of receiver operating characteristic curves.
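The cross-comparisons behind these statistics use the fractional Hamming distance between binary iris codes, computed only over bits that both masks mark as valid, with a rotation search to compensate eye rotation. A minimal sketch (the idea, not Daugman's exact implementation):

```python
import numpy as np

def hamming_distance(code_a, mask_a, code_b, mask_b):
    # Fraction of disagreeing bits among mutually valid (unmasked) bits.
    valid = mask_a & mask_b
    n = int(valid.sum())
    if n == 0:
        return 1.0          # no comparable bits: treat as maximally distant
    return float((code_a ^ code_b)[valid].sum()) / n

def best_distance(code_a, mask_a, code_b, mask_b, max_shift=8):
    # Cyclic shifts of the code emulate in-plane eye rotation; keep the best.
    return min(hamming_distance(np.roll(code_a, s), np.roll(mask_a, s),
                                code_b, mask_b)
               for s in range(-max_shift, max_shift + 1))
```

Codes from the same eye yield distances near 0, while codes from different eyes cluster tightly around 0.5, which is what makes the billions of cross-comparisons statistically tractable.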

  12. Iris Cryptography for Security Purpose

    NASA Astrophysics Data System (ADS)

    Ajith, Srighakollapu; Balaji Ganesh Kumar, M.; Latha, S.; Samiappan, Dhanalakshmi; Muthu, P.

    2018-04-01

    In today's world, security has become a major issue for everyone, and hacking is a major threat: even as technology develops, there remain many cases where it fails to provide security. Engineers and scientists have been developing new products for security purposes, such as biometric sensors for face recognition, pattern recognition, gesture recognition, voice authentication, and so on, but these devices fail to reach the expected results. In this work, we present an approach to generate a unique secure key from the iris template. The iris templates are processed using well-defined processing techniques, and through encryption and decryption they are stored, traversed, and utilized. From this work, we conclude that iris cryptography gives the expected results for securing data from eavesdroppers.

  13. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a very distinctive pattern that can be used for biometric recognition. To identify texture in an image, texture analysis methods can be used; one such method is the wavelet, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five mentioned wavelets was performed, and a comparative analysis was conducted from which conclusions were drawn. Several steps are involved. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition-rate percentage, with two samples per subject stored in the database as reference images. After finding the recognition rate, tests were conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate was achieved using Haar; for coefficient cutting with C(i) < 0.1, the Haar wavelet had the highest percentage, so the retention rate (significant coefficients retained) for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
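The energy feature described above can be sketched with a hand-rolled one-level Haar step (pairwise averages and differences), repeated to form a multi-level subband energy vector; this is an illustrative implementation, not the author's exact pipeline.

```python
import numpy as np

def haar2d(img):
    # One level of the 2D Haar transform (image sides must be even):
    # pairwise averages/differences along rows, then along columns.
    a = (img[::2] + img[1::2]) / 2
    d = (img[::2] - img[1::2]) / 2
    LL = (a[:, ::2] + a[:, 1::2]) / 2
    LH = (a[:, ::2] - a[:, 1::2]) / 2
    HL = (d[:, ::2] + d[:, 1::2]) / 2
    HH = (d[:, ::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def energy_features(img, levels=3):
    # Energy (sum of squared coefficients) of each detail subband
    # across levels; the LL band is recursed into at each level.
    feats = []
    for _ in range(levels):
        img, LH, HL, HH = haar2d(img)
        feats += [float((s**2).sum()) for s in (LH, HL, HH)]
    return np.array(feats)
```

Two iris strips would then be compared by the normalized Euclidean distance between their feature vectors, matching the recognition step the abstract describes.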

  14. Novel probabilistic neuroclassifier

    NASA Astrophysics Data System (ADS)

    Hong, Jiang; Serpen, Gursel

    2003-09-01

    A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.
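The abstract does not give the classifier's details; as a generic point of reference for the family it belongs to, a classical probabilistic (Parzen-window) neural network classifier, which naturally handles multi-modal classes made of disjoint clusters, can be sketched as follows.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    # Generic Parzen-window PNN (not the paper's exact algorithm):
    # estimate each class density as a mean of Gaussian kernels
    # centered on that class's training points, then pick the argmax.
    scores = {}
    for c in np.unique(train_y):
        pts = train_X[train_y == c]
        d2 = ((pts - x)**2).sum(axis=1)
        scores[c] = np.exp(-d2 / (2 * sigma**2)).mean()
    return max(scores, key=scores.get)
```

Because each training point contributes its own kernel, a class spread across several disjoint clusters still receives a high density near every one of its clusters, which is the property the benchmark problems above (Two-Spiral, Vowel Recognition, etc.) stress.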

  15. Iris Segmentation and Normalization Algorithm Based on Zigzag Collarette

    NASA Astrophysics Data System (ADS)

    Rizky Faundra, M.; Ratna Sulistyaningrum, Dwi

    2017-01-01

    In this paper, we propose an iris segmentation and normalization algorithm based on the zigzag collarette. First, iris images are processed using Canny edge detection to detect the pupil edge; the center and radius of the pupil are then found with the circular Hough transform. Next, the important part of the iris is isolated based on the zigzag collarette area. Finally, the Daugman rubber sheet model is applied to obtain fixed dimensions (normalization) by transforming the iris from Cartesian to polar format, and a thresholding technique removes the eyelid and eyelashes. The experiments are conducted on grayscale eye images from the Chinese Academy of Sciences Institute of Automation (CASIA) iris database, which is reliable and widely used in iris biometrics research. The results show that a threshold level of 0.3 gives better accuracy than the others, so the present algorithm can segment and normalize the zigzag collarette with an accuracy of 98.88%.
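The rubber sheet step can be sketched as follows, assuming (for simplicity) concentric circular pupil and iris boundaries with known center and radii; real irises require per-angle boundary radii and interpolation rather than nearest-pixel sampling.

```python
import numpy as np

def rubber_sheet(img, pupil_center, pupil_r, iris_r, out_h=32, out_w=256):
    # Daugman rubber sheet model: sample the annulus between the pupil
    # and iris boundaries onto a fixed-size polar (radius x angle) grid.
    cy, cx = pupil_center
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        # Radial coordinate interpolated linearly between the boundaries.
        r = pupil_r + (iris_r - pupil_r) * i / (out_h - 1)
        for j in range(out_w):
            theta = 2 * np.pi * j / out_w
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                out[i, j] = img[y, x]   # nearest-pixel sampling (sketch)
    return out
```

The fixed `out_h x out_w` output is what makes templates comparable across images regardless of pupil dilation, and eyelid/eyelash removal is then applied as a mask on this normalized strip.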

  16. Deep learning architecture for iris recognition based on optimal Gabor filters and deep belief network

    NASA Astrophysics Data System (ADS)

    He, Fei; Han, Ye; Wang, Han; Ji, Jinchao; Liu, Yuanning; Ma, Zhiqiang

    2017-03-01

    Gabor filters are widely utilized to detect iris texture information in several state-of-the-art iris recognition systems. However, the proper Gabor kernels and the generative pattern of iris Gabor features need to be predetermined in application. Traditional empirical Gabor filters and shallow iris encoding schemes are incapable of dealing with the complex variations in iris imaging, including illumination, aging, deformation, and device variations. Therefore, an adaptive Gabor filter selection strategy and a deep learning architecture are presented. We first employ the particle swarm optimization approach and its binary version to define a set of data-driven Gabor kernels that fit the most informative filtering bands, and then capture complex patterns from the optimal Gabor filtered coefficients with a trained deep belief network. A succession of comparative experiments validates that our optimal Gabor filters produce more distinctive Gabor coefficients and that our deep iris representations are more robust and stable than traditional iris Gabor codes. Furthermore, the depth and scales of the deep learning architecture are also discussed.
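
    The Gabor kernels at the heart of this record can be generated as follows. This is a generic sketch (real part only, with a simplified isotropic Gaussian envelope); the parameters sigma, theta, and wavelength are exactly the kind of quantities a PSO-style search could tune:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, wavelength, psi=0.0):
    """Real part of a 2D Gabor filter: a sinusoidal carrier at the given
    orientation and wavelength, modulated by a Gaussian envelope."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_rot / wavelength + psi)
    return envelope * carrier
```

    Convolving a normalized iris strip with a bank of such kernels at several orientations and wavelengths yields the coefficients that a deep belief network could then encode.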

  17. [Cyclorotation of the eye in wavefront-guided LASIK using a static eyetracker with iris recognition].

    PubMed

    Kohnen, T; Kühne, C; Cichocki, M; Strenger, A

    2007-01-01

    Centration of the ablation zone decisively influences the result of wavefront-guided LASIK. Cyclorotation of the eye occurs as the patient changes from the sitting position during aberrometry to the supine position during laser surgery and may lead to induction of lower and higher order aberrations. Twenty patients (40 eyes) underwent wavefront-guided LASIK (B&L 217z 100 excimer laser) with a static eyetracker driven by iris recognition (mean preoperative SE: -4.72+/-1.45 D; range: -1.63 to -7.00 D). The iris patterns of the patients' eyes were memorized during aberrometry and after flap creation. The mean measured cyclorotation was -1.5+/-4.2 degrees (range: -11.0 to 6.9 degrees). The mean absolute cyclorotation was 3.5+/-2.7 degrees (range: 0.1 to 11.0 degrees). In 65% of all eyes cyclorotation was >2 degrees. A static eyetracker driven by iris recognition demonstrated that cyclorotation of up to 11 degrees may occur in myopic and myopic astigmatic eyes when changing from a sitting to a supine position. Use of static eyetrackers with iris recognition may provide more precise positioning of the ablation profile as they detect and compensate for cyclorotation.

  18. Contralateral comparison of wavefront-guided LASIK surgery with iris recognition versus without iris recognition using the MEL80 Excimer laser system.

    PubMed

    Wu, Fang; Yang, Yabo; Dougherty, Paul J

    2009-05-01

    To compare outcomes in wavefront-guided LASIK performed with iris recognition software versus without iris recognition software in different eyes of the same patient. A randomised, prospective study of 104 myopic eyes of 52 patients undergoing LASIK surgery with the MEL80 excimer laser system was performed. Iris recognition software was used in one eye of each patient (study group) and not used in the other eye (control group). Higher order aberrations (HOAs), contrast sensitivity, uncorrected vision (UCV), visual acuity (VA) and corneal topography were measured and recorded pre-operatively and at one month and three months post-operatively for each eye. The mean post-operative sphere and cylinder between groups were similar; however, the post-operative angles of error (AE) by refraction were significantly smaller in the study group than in the control group in both arithmetic and absolute means (p = 0.03, p = 0.01). The mean logMAR UCV was significantly better in the study group than in the control group at one month (p = 0.01). The mean logMAR VA was significantly better in the study group than in the control group at both one and three months (p = 0.01, p = 0.03). In addition, mean trefoil, total third-order aberration, total fourth-order aberration and the total scotopic root-mean-square (RMS) HOAs were significantly less in the study group than in the control group at three months (p = 0.01, p = 0.05, p = 0.04, p = 0.02). By three months, the contrast sensitivity had recovered in both groups but the study group performed better at 2.6, 4.2 and 6.6 cpd (cycles per degree) than the control group (p = 0.01, p < 0.01, p = 0.01). LASIK performed with iris recognition results in better VA, lower mean higher-order aberrations, lower refractive post-operative angles of error and better contrast sensitivity at three months post-operatively than LASIK performed without iris recognition.

  19. Technical issues for the eye image database creation at distance

    NASA Astrophysics Data System (ADS)

    Oropesa Morales, Lester Arturo; Maldonado Cano, Luis Alejandro; Soto Aldaco, Andrea; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Rodríguez Vázquez, Manuel Antonio; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro

    2016-09-01

    Biometrics refers to identifying people through physical or behavioral characteristics such as fingerprints, face, DNA, hand geometry, and retina and iris patterns. Typically, the iris pattern is acquired at short distance to recognize a person; in the past few years, however, identifying a person by the iris pattern at a distance in non-cooperative environments has become a challenge. This challenge comprises: 1) acquiring a high quality iris image, 2) light variation, 3) blur reduction, 4) reduction of specular reflections, 5) the distance from the acquisition system to the user, and 6) standardizing the iris size and the pixel density of the iris texture. Solving these problems will add robustness and enhance iris recognition rates. For this reason, we describe the technical issues that must be considered during iris acquisition. Some of these considerations are the camera sensor, the lens, and the mathematical analysis of depth of field (DOF) and field of view (FOV) for iris recognition. Finally, based on these issues, we present an experiment comparing captures obtained with our camera at a distance against captures obtained with cameras at very short distance.

  20. Cataract influence on iris recognition performance

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Czajka, Adam; Maciejewicz, Piotr

    2014-11-01

    This paper presents an experimental study revealing weaker performance of automatic iris recognition methods for cataract-affected eyes when compared to healthy eyes. There is little research on the topic, mostly incorporating scarce databases that are often deficient in images representing more than one illness. We built our own database, acquiring 1288 eye images of 37 patients of the Medical University of Warsaw. Those images represent several common ocular diseases, such as cataract, along with less ordinary conditions, such as iris pattern alterations derived from illness or eye trauma. Images were captured in near-infrared light (used in biometrics) and for selected cases also in visible light (used in ophthalmological diagnosis). Since cataract is the disorder most populated by samples in the database, in this paper we focus solely on this illness. To assess the extent of the performance deterioration we use three iris recognition methodologies (commercial and academic solutions) to calculate genuine match scores for healthy eyes and those influenced by cataract. Results show a significant degradation in iris recognition reliability, manifested by worse genuine scores in all three matchers used in this study (a 12% genuine score increase for an academic matcher, and up to a 175% genuine score increase for an example commercial matcher). This increase in genuine scores affected the final false non-match rate in two matchers. To the best of our knowledge this is the only study of its kind that employs more than one iris matcher and analyzes the iris image segmentation as a potential source of decreased reliability.

  1. Secure and Robust Iris Recognition Using Random Projections and Sparse Representations.

    PubMed

    Pillai, Jaishanker K; Patel, Vishal M; Chellappa, Rama; Ratha, Nalini K

    2011-09-01

    Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations, that can simultaneously address all three issues mentioned above in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach includes enhancements to privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach.
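
    The random-projection component of such a framework can be sketched in a few lines. This illustrates the general technique only, not the paper's exact pipeline (which couples projections with sparse-representation-based matching); the function name and seeding scheme are ours:

```python
import numpy as np

def random_projection(features, out_dim, seed=0):
    """Project an iris feature vector to a lower dimension with a Gaussian
    random matrix. Pairwise distances are approximately preserved with high
    probability (Johnson-Lindenstrauss); issuing a new seed yields a new,
    revocable (cancelable) template."""
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(out_dim, features.shape[0])) / np.sqrt(out_dim)
    return P @ features
```

    Because the projection matrix, not the raw iris features, is what gets enrolled against, a compromised template can be revoked by re-enrolling with a fresh matrix.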

  2. Test of the Practicality and Feasibility of EDoF-Empowered Image Sensors for Long-Range Biometrics.

    PubMed

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao

    2016-11-25

    For many practical applications of image sensors, how to extend the depth-of-field (DoF) is an important research topic; if successfully implemented, it could be beneficial in various applications, from photography to biometrics. In this work, we want to examine the feasibility and practicability of a well-known "extended DoF" (EDoF) technique, or "wavefront coding," by building real-time long-range iris recognition and performing large-scale iris recognition. The key to the success of long-range iris recognition includes long DoF and image quality invariance toward various object distances, which is strict and harsh enough to test the practicality and feasibility of EDoF-empowered image sensors. Besides image sensor modification, we also explored the possibility of varying enrollment/testing pairs. With 512 iris images from 32 Asian people as the database, 400-mm focal length and F/6.3 optics over 3 m working distance, our results prove that a sophisticated coding design scheme plus homogeneous enrollment/testing setups can effectively overcome the blurring caused by phase modulation and omit Wiener-based restoration. In our experiments, which are based on 3328 iris images in total, the EDoF factor can achieve a result 3.71 times better than the original system without a loss of recognition accuracy.

  3. Effect of cataract surgery and pupil dilation on iris pattern recognition for personal authentication.

    PubMed

    Dhir, L; Habib, N E; Monro, D M; Rakshit, S

    2010-06-01

    The purpose of this study was to investigate the effect of cataract surgery and pupil dilation on iris pattern recognition for personal authentication. Prospective non-comparative cohort study. Images of 15 subjects were captured before (enrolment), and 5, 10, and 15 min after instillation of mydriatics before routine cataract surgery. After cataract surgery, images were captured 2 weeks thereafter. Enrolled and test images (after pupillary dilation and after cataract surgery) were segmented to extract the iris. This was then unwrapped onto a rectangular format for normalization and a novel method using the Discrete Cosine Transform was applied to encode the image into binary bits. The numerical difference between two iris codes (Hamming distance, HD) was calculated. The HD between identification and enrolment codes was used as a score and was compared with a confidence threshold for specific equipment, giving a match or non-match result. The Correct Recognition Rate (CRR) and Equal Error Rates (EERs) were calculated to analyse overall system performance. After cataract surgery, perfect identification and verification was achieved, with zero false acceptance rate, zero false rejection rate, and zero EER. After pupillary dilation, non-elastic deformation occurs and a CRR of 86.67% and EER of 9.33% were obtained. Conventional circle-based localization methods are inadequate. Matching reliability decreases considerably with increase in pupillary dilation. Cataract surgery has no effect on iris pattern recognition, whereas pupil dilation may be used to defeat an iris-based authentication system.
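
    The Hamming-distance comparison described above is straightforward to sketch. The following is a generic masked iris-code HD (function and argument names are ours); the study's threshold is equipment-specific, so none is hard-coded here:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes, counting
    only bits that are valid (unmasked, e.g. not eyelid/eyelash) in both."""
    code_a = np.asarray(code_a, dtype=bool)
    code_b = np.asarray(code_b, dtype=bool)
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= np.asarray(mask_a, dtype=bool)
    if mask_b is not None:
        valid &= np.asarray(mask_b, dtype=bool)
    disagreements = np.logical_xor(code_a, code_b) & valid
    return disagreements.sum() / valid.sum()
```

    A score of 0 means identical codes and 0.5 is the expectation for two unrelated irises; a match is declared when the HD falls below the system's confidence threshold.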

  4. Improving energy efficiency in handheld biometric applications

    NASA Astrophysics Data System (ADS)

    Hoyle, David C.; Gale, John W.; Schultz, Robert C.; Rakvic, Ryan N.; Ives, Robert W.

    2012-06-01

    With improved smartphone and tablet technology, it is becoming increasingly feasible to implement powerful biometric recognition algorithms on portable devices. Typical iris recognition algorithms, such as Ridge Energy Direction (RED), utilize two-dimensional convolution in their implementation. This paper explores the energy consumption implications of 12 different methods of implementing two-dimensional convolution on a portable device. Typically, convolution is implemented using floating point operations. If a given algorithm implemented integer convolution vice floating point convolution, it could drastically reduce the energy consumed by the processor. The 12 methods compared include 4 major categories: Integer C, Integer Java, Floating Point C, and Floating Point Java. Each major category is further divided into 3 implementations: variable size looped convolution, static size looped convolution, and unrolled looped convolution. All testing was performed using the HTC Thunderbolt with energy measured directly using a Tektronix TDS5104B Digital Phosphor oscilloscope. Results indicate that energy savings as high as 75% are possible by using Integer C versus Floating Point C. Considering the relative proportion of processing time that convolution is responsible for in a typical algorithm, the savings in energy would likely result in significantly greater time between battery charges.
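
    The integer-versus-floating-point idea can be illustrated language-agnostically: quantize the kernel to fixed point, accumulate in integer arithmetic, and rescale once at the end. A minimal Python sketch follows (function name and scale factor are ours; the paper's measured implementations are in C and Java):

```python
import numpy as np

def filter2d_int(image, kernel_float, scale=256):
    """Approximate floating-point 2D filtering with integer arithmetic:
    quantize the kernel to fixed point, accumulate in int64, rescale once.
    (This computes correlation; for symmetric kernels it equals convolution.)"""
    kernel_int = np.round(kernel_float * scale).astype(np.int64)
    img = image.astype(np.int64)
    kh, kw = kernel_int.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    acc = np.zeros((oh, ow), dtype=np.int64)
    for i in range(kh):
        for j in range(kw):
            acc += kernel_int[i, j] * img[i:i + oh, j:j + ow]
    return acc / scale
```

    The quantization error is bounded by the scale factor, which is why integer convolution can stand in for the floating-point version in an energy-constrained recognition pipeline.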

  5. Adaptive fuzzy leader clustering of complex data sets in pattern recognition

    NASA Technical Reports Server (NTRS)

    Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda

    1992-01-01

    A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from fuzzy C-means system equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
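
    The fuzzy C-means update that AFLC builds on can be sketched as a single iteration (a textbook FCM step, not the AFLC architecture itself; names are ours):

```python
import numpy as np

def fcm_step(X, centroids, m=2.0):
    """One fuzzy C-means iteration: recompute memberships from distances to
    the current centroids, then move centroids to membership-weighted means."""
    # (n_points, n_clusters) distance matrix; epsilon guards divide-by-zero
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
    inv = d ** (-2.0 / (m - 1.0))             # u_ik proportional to d_ik^(-2/(m-1))
    u = inv / inv.sum(axis=1, keepdims=True)  # each row of memberships sums to 1
    w = u ** m
    new_centroids = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, new_centroids
```

    Iterating this step to convergence yields soft cluster assignments; AFLC wraps such updates inside a competitive neural stage that also decides when to spawn new clusters.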

  7. Point spread function engineering for iris recognition system design.

    PubMed

    Ashok, Amit; Neifeld, Mark A

    2010-04-01

    Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 x 8 reduces the iris-recognition performance by nearly a factor of 4 (on CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask in conjunction with multiple subpixel shifted image measurements (frames) to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves a FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields a FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.

  8. Face-iris multimodal biometric scheme based on feature level fusion

    NASA Astrophysics Data System (ADS)

    Huo, Guang; Liu, Yuanning; Zhu, Xiaodong; Dong, Hongxing; He, Fei

    2015-11-01

    Unlike score level fusion, feature level fusion demands all the features extracted from unimodal traits with high distinguishability, as well as homogeneity and compatibility, which is difficult to achieve. Therefore, most multimodal biometric research focuses on score level fusion, whereas few investigate feature level fusion. We propose a face-iris recognition method based on feature level fusion. We build a special two-dimensional-Gabor filter bank to extract local texture features from face and iris images, and then transform them by histogram statistics into an energy-orientation variance histogram feature with lower dimensions and higher distinguishability. Finally, through a fusion-recognition strategy based on principal components analysis and support vector machine (FRSPS), feature level fusion and one-to-n identification are accomplished. The experimental results demonstrate that this method can not only effectively extract face and iris features but also provide higher recognition accuracy. Compared with some state-of-the-art fusion methods, the proposed method has a significant performance advantage.

  9. Joint Feature Extraction and Classifier Design for ECG-Based Biometric Recognition.

    PubMed

    Gutta, Sandeep; Cheng, Qi

    2016-03-01

    Traditional biometric recognition systems often utilize physiological traits such as fingerprint, face, iris, etc. Recent years have seen a growing interest in electrocardiogram (ECG)-based biometric recognition techniques, especially in the field of clinical medicine. In existing ECG-based biometric recognition methods, feature extraction and classifier design are usually performed separately. In this paper, a multitask learning approach is proposed, in which feature extraction and classifier design are carried out simultaneously. Weights are assigned to the features within the kernel of each task. We decompose the matrix consisting of all the feature weights into sparse and low-rank components. The sparse component determines the features that are relevant to identify each individual, and the low-rank component determines the common feature subspace that is relevant to identify all the subjects. A fast optimization algorithm is developed, which requires only the first-order information. The performance of the proposed approach is demonstrated through experiments using the MIT-BIH Normal Sinus Rhythm database.

  10. Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.

    PubMed

    Proença, Hugo

    2010-08-01

    Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions engender acquired noisy artifacts that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) to consider the sclera the most easily distinguishable part of the eye in degraded images, 2) to propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) to run the entire procedure in deterministically linear time with respect to the size of the image, making the procedure suitable for real-time applications.

  11. Iris segmentation using an edge detector based on fuzzy sets theory and cellular learning automata.

    PubMed

    Ghanizadeh, Afshin; Abarghouei, Amir Atapour; Sinaie, Saman; Saad, Puteh; Shamsuddin, Siti Mariyam

    2011-07-01

    Iris-based biometric systems identify individuals based on the characteristics of their iris, since they are proven to remain unique for a long time. An iris recognition system includes four phases, the most important of which is preprocessing in which the iris segmentation is performed. The accuracy of an iris biometric system critically depends on the segmentation system. In this paper, an iris segmentation system using edge detection techniques and Hough transforms is presented. The newly proposed edge detection system enhances the performance of the segmentation in a way that it performs much more efficiently than the other conventional iris segmentation methods.

  12. Survey of Technologies for the Airport Border of the Future

    DTIC Science & Technology

    2014-04-01

    hand geometry; handwriting recognition; ID cards; image classification; image enhancement; image fusion; image matching; image processing; image segmentation; iris...; tongue print; footstep recognition; odour recognition; retinal recognition; emotion recognition; periocular recognition; ear recognition; palmprint recognition; DNA matching; vein matching

  13. Comments on the CASIA version 1.0 iris data set.

    PubMed

    Phillips, P Jonathon; Bowyer, Kevin W; Flynn, Patrick J

    2007-10-01

    We note that the images in the CASIA version 1.0 iris dataset have been edited so that the pupil area is replaced by a circular region of uniform intensity. We recommend that this dataset no longer be used in iris biometrics research, unless there is a compelling reason that takes into account the nature of the images. In addition, based on our experience with the Iris Challenge Evaluation (ICE) 2005 technology development project, we make recommendations for reporting results of iris recognition experiments.

  14. Computational cameras for moving iris recognition

    NASA Astrophysics Data System (ADS)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  15. On a methodology for robust segmentation of nonideal iris images.

    PubMed

    Schmid, Natalia A; Zuo, Jinyu

    2010-06-01

    Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from remaining background. In this paper, a segmentation methodology that aims at compensating various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected with such factors as specular reflection, blur, lighting variation, occlusion, and off-angle images. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementation of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the formerly mentioned algorithms.

  16. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    PubMed Central

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  17. [Electronic Device for Retinal and Iris Imaging].

    PubMed

    Drahanský, M; Kolář, R; Mňuk, T

    This paper describes the design and construction of a new device for automatic capture of the eye retina and iris. The device has two possible uses: biometric purposes (recognition of persons on the basis of their eye characteristics) or medical purposes as a supporting diagnostic device. Keywords: eye retina, eye iris, device, acquisition, image.

  18. A Novel Anti-Spoofing Solution for Iris Recognition Toward Cosmetic Contact Lens Attack Using Spectral ICA Analysis.

    PubMed

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Wang, Wei; Tien, Chung-Hao

    2018-03-06

    In this study, we maneuvered a dual-band spectral imaging system to capture an iridal image from a cosmetic-contact-lens-wearing subject. By using the independent component analysis to separate individual spectral primitives, we successfully distinguished the natural iris texture from the cosmetic contact lens (CCL) pattern, and restored the genuine iris patterns from the CCL-polluted image. Based on a database containing 200 test image pairs from 20 CCL-wearing subjects as the proof of concept, the recognition accuracy (False Rejection Rate: FRR) was improved from FRR = 10.52% to FRR = 0.57% with the proposed ICA anti-spoofing scheme.

  19. Increasing the information acquisition volume in iris recognition systems.

    PubMed

    Barwick, D Shane

    2008-09-10

    A significant hurdle for the widespread adoption of iris recognition in security applications is that the typically small imaging volume for eye placement results in systems that are not user friendly. Separable cubic phase plates at the lens pupil have been shown to ameliorate this disadvantage by increasing the depth of field. However, these phase masks have limitations on how efficiently they can capture the information-bearing spatial frequencies in iris images. The performance gains in information acquisition that can be achieved by more general, nonseparable phase masks is demonstrated. A detailed design method is presented, and simulations using representative designs allow for performance comparisons.

  20. Video-based noncooperative iris image segmentation.

    PubMed

    Du, Yingzi; Arslanturk, Emrah; Zhou, Zhi; Belcher, Craig

    2011-02-01

    In this paper, we propose a video-based noncooperative iris image segmentation scheme that incorporates a quality filter to quickly eliminate images without an eye, employs a coarse-to-fine segmentation scheme to improve the overall efficiency, uses a direct least squares fitting of ellipses method to model the deformed pupil and limbic boundaries, and develops a window gradient-based method to remove noise in the iris region. A remote iris acquisition system is set up to collect noncooperative iris video images. An objective method is used to quantitatively evaluate the accuracy of the segmentation results. The experimental results demonstrate the effectiveness of this method. The proposed method would make noncooperative iris recognition or iris surveillance possible.
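
    The boundary-modeling step can be illustrated with a simplified linear least-squares conic fit. This is a stand-in for the direct ellipse-specific method the record cites (which additionally enforces an ellipse constraint on the coefficients); the function name and conic parameterization are ours:

```python
import numpy as np

def fit_conic(xs, ys):
    """Linear least-squares fit of a general conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to boundary points,
    e.g. edge points of a deformed pupil or limbic boundary."""
    A = np.column_stack([xs**2, xs * ys, ys**2, xs, ys])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(xs), rcond=None)
    return coeffs
```

    Fitting an ellipse rather than a circle is what lets the segmentation follow pupil and limbic boundaries that are deformed by off-angle, noncooperative capture.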

  2. Dynamic Features for Iris Recognition.

    PubMed

    da Costa, R M; Gonzaga, A

    2012-08-01

    The human eye is sensitive to visible light. Increasing illumination on the eye causes the pupil to contract, while decreasing illumination causes it to dilate. Visible light causes specular reflections inside the iris ring. On the other hand, the human retina is less sensitive to near-infrared (NIR) radiation in the wavelength range from 800 nm to 1400 nm, but iris detail can still be imaged with NIR illumination. In order to measure the dynamic movement of the human pupil and iris while keeping light-induced reflexes from affecting the quality of the digitized image, this paper describes a device based on the consensual reflex, the biological phenomenon by which the two pupils contract and dilate synchronously when one of the eyes is illuminated by visible light. We propose to capture images of the pupil of one eye using NIR illumination while illuminating the other eye with a visible-light pulse. This new approach extracts iris features called "dynamic features (DFs)," extracting information about the way the human eye reacts to light and using that information for biometric recognition. The results demonstrate that these features are discriminative and that, even using the Euclidean distance measure, an average recognition accuracy of 99.1% was obtained. The proposed methodology has the potential to be "fraud-proof," because these DFs can only be extracted from living irises.
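
    The classification step described above reduces to nearest-neighbour matching under the Euclidean distance. A minimal sketch follows, using hypothetical dynamic-feature vectors (e.g., normalized pupil-diameter samples after the light pulse); the actual DF extraction from the pupillary response is not reproduced here, and all names and values are illustrative:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe, gallery):
    """Return the enrolled identity whose dynamic-feature vector is
    closest to the probe under the Euclidean distance."""
    return min(gallery, key=lambda ident: euclidean(probe, gallery[ident]))

# Hypothetical dynamic-feature vectors: pupil diameter sampled over
# time after the visible-light pulse, normalized to [0, 1].
gallery = {
    "subject_A": [0.90, 0.62, 0.45, 0.41, 0.48],
    "subject_B": [0.88, 0.75, 0.66, 0.60, 0.63],
}
probe = [0.91, 0.64, 0.44, 0.40, 0.50]
print(match(probe, gallery))  # closest gallery template wins
```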

  3. A statistical investigation into the stability of iris recognition in diverse population sets

    NASA Astrophysics Data System (ADS)

    Howard, John J.; Etter, Delores M.

    2014-05-01

    Iris recognition is increasingly being deployed on population-wide scales for important applications such as border security, social service administration, criminal identification and general population management. The error rates for this incredibly accurate form of biometric identification are established using well-known, laboratory-quality datasets. However, it has long been acknowledged in biometric theory that not all individuals have the same likelihood of being correctly serviced by a biometric system. Typically, techniques for identifying clients that are likely to experience a false non-match or a false match error are carried out on a per-subject basis. This research makes the novel hypothesis that certain ethnic groups are more or less likely to experience a biometric error. Through established statistical techniques, we demonstrate this hypothesis to be true and document the notable effect that the ethnicity of the client has on iris similarity scores. Understanding the expected impact of ethnic diversity on iris recognition accuracy is crucial to the future success of this technology as it is deployed in areas where the target population consists of clientele from a range of geographic backgrounds, such as border crossings and immigration check points.

  4. Biometric Fusion Demonstration System Scientific Report

    DTIC Science & Technology

    2004-03-01

    verification and facial recognition, searching watchlist databases comprised of full or partial facial images or voice recordings. Multiple-biometric... [table-of-contents fragments: 2.2.1.1 Fingerprint and Facial Recognition; 2.2.1.2 Iris Recognition and Facial Recognition; DRDC Ottawa CR 2004-056]

  5. Advanced optical correlation and digital methods for pattern matching—50th anniversary of Vander Lugt matched filter

    NASA Astrophysics Data System (ADS)

    Millán, María S.

    2012-10-01

    On the verge of the 50th anniversary of Vander Lugt’s formulation for pattern matching based on matched filtering and optical correlation, we acknowledge the very intense research activity developed in the field of correlation-based pattern recognition during this period of time. The paper reviews some domains that appeared as emerging fields in the last years of the 20th century and have been developed later on in the 21st century. Such is the case of three-dimensional (3D) object recognition, biometric pattern matching, optical security and hybrid optical-digital processors. 3D object recognition is a challenging case of multidimensional image recognition because of its implications in the recognition of real-world objects independent of their perspective. Biometric recognition is essentially pattern recognition for which the personal identification is based on the authentication of a specific physiological characteristic possessed by the subject (e.g. fingerprint, face, iris, retina, and multifactor combinations). Biometric recognition often appears combined with encryption-decryption processes to secure information. The optical implementations of correlation-based pattern recognition processes still rely on the 4f-correlator, the joint transform correlator, or some of their variants. But the many applications developed in the field have been pushing the systems for a continuous improvement of their architectures and algorithms, thus leading towards merged optical-digital solutions.

  6. Three-dimensional Hessian matrix-based quantitative vascular imaging of rat iris with optical-resolution photoacoustic microscopy in vivo

    NASA Astrophysics Data System (ADS)

    Zhao, Huangxuan; Wang, Guangsong; Lin, Riqiang; Gong, Xiaojing; Song, Liang; Li, Tan; Wang, Wenjia; Zhang, Kunya; Qian, Xiuqing; Zhang, Haixia; Li, Lin; Liu, Zhicheng; Liu, Chengbo

    2018-04-01

    For the diagnosis and evaluation of ophthalmic diseases, imaging and quantitative characterization of the vasculature in the iris are very important. Recently developed photoacoustic imaging, which is ultrasensitive in imaging endogenous hemoglobin molecules, provides a highly efficient label-free method for imaging blood vasculature in the iris. However, advanced vascular quantification algorithms are still needed to enable accurate characterization of the underlying vasculature. We have developed a vascular information quantification algorithm based on a three-dimensional (3-D) Hessian matrix and applied it to iris vasculature images obtained with a custom-built optical-resolution photoacoustic microscopy (OR-PAM) system. For the first time, we demonstrate in vivo 3-D vascular structures of a rat iris with a label-free imaging method and accurately extract quantitative vascular information, such as vessel diameter, vascular density, and vascular tortuosity. Our results indicate that the developed algorithm is capable of quantifying the vasculature in 3-D photoacoustic images of the iris in vivo, thus enhancing the diagnostic capability of the OR-PAM system for vascular-related ophthalmic diseases.

  7. Performance and Usage of Biometrics in a Testbed Environment for Tactical Purposes

    DTIC Science & Technology

    2006-12-01

    ...geometry, iris recognition, and facial recognition (Layman's, 2005). Behavioral biometrics can be described not as a physical characteristic, but are... Deployment sites include correction facilities, Departments of Motor Vehicles, military checkpoints, and POW facilities.

  8. Efficient iris texture analysis method based on Gabor ordinal measures

    NASA Astrophysics Data System (ADS)

    Tajouri, Imen; Aydi, Walid; Ghorbel, Ahmed; Masmoudi, Nouri

    2017-07-01

    With the growing interest in security, iris recognition stands as one of the most versatile techniques for biometric identification and authentication, mainly owing to every individual's unique iris texture. An efficient approach to feature extraction is proposed. First, the iris zigzag "collarette" is extracted from the rest of the image by means of the circular Hough transform, as it includes the most significant regions of the iris texture. Second, the linear Hough transform is used to detect the eyelids, while a median filter is applied to remove the eyelashes. Then, a technique combining the richness of Gabor features and the compactness of ordinal measures is implemented for feature extraction, so that a discriminative feature representation for every individual can be achieved. Subsequently, a modified Hamming distance is used for matching. The proposed procedure proves reliable compared to some state-of-the-art approaches, with recognition rates of 99.98%, 98.12%, and 95.02% on the CASIA V1.0, CASIA V3.0, and IIT Delhi V1 iris databases, respectively.
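
    The matching step in entries like this one relies on a Hamming distance between binary iris codes. Below is a minimal sketch of a mask-aware fractional Hamming distance, a common form of such "modified" distances (the exact variant used in the paper may differ; codes and masks here are toy values):

```python
def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bit positions that both occlusion masks flag as
    valid. Returns a value in [0, 1]; lower means more similar."""
    disagree = valid = 0
    for a, b, ma, mb in zip(code_a, code_b, mask_a, mask_b):
        if ma and mb:
            valid += 1
            disagree += a != b
    return disagree / valid if valid else 1.0

code_a = [1, 0, 1, 1, 0, 0, 1, 0]
code_b = [1, 0, 0, 1, 0, 1, 1, 0]
mask_a = [1, 1, 1, 1, 1, 1, 0, 1]   # 0 marks occluded bits (eyelid/lash)
mask_b = [1, 1, 1, 1, 1, 1, 1, 1]
print(hamming_distance(code_a, code_b, mask_a, mask_b))
# 2 disagreements over 7 valid bits, roughly 0.286
```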

  9. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst-case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10^-4 on the available data sets.
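
    The coding idea, differencing DCT coefficients of neighbouring patches and keeping only the sign, can be sketched as follows. This is a toy illustration with a naive O(n^2) DCT-II and made-up 1-D patches; the paper's overlapped angular patches, windowing, and optimized bit/position parameters are omitted:

```python
import math

def dct(x):
    """Naive 1-D DCT-II (O(n^2)); adequate for short patch vectors."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def encode(patches, n_coeffs=4):
    """Binarize differences of low-order DCT coefficients between
    neighbouring patches: bit = 1 if the coefficient increased."""
    specs = [dct(p)[:n_coeffs] for p in patches]
    return [int(b > a)
            for prev, cur in zip(specs, specs[1:])
            for a, b in zip(prev, cur)]

# Three hypothetical intensity patches from a normalized iris strip.
patches = [[10, 12, 11, 9], [14, 15, 13, 12], [8, 7, 9, 10]]
print(encode(patches))  # (patches-1) * n_coeffs sign bits
```

The resulting bit string would then be compared with a Hamming-style distance, as in the paper's matching stage.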

  10. [Clinical analysis of real-time iris recognition guided LASIK with femtosecond laser flap creation for myopic astigmatism].

    PubMed

    Jie, Li-ming; Wang, Qian; Zheng, Lin

    2013-08-01

    To assess the safety, efficacy, stability and changes in cylindrical degree and axis after real-time iris recognition guided LASIK with femtosecond laser flap creation for the correction of myopic astigmatism. Retrospective case series. This observational case study comprised 136 patients (249 eyes) with myopic astigmatism in a 6-month trial. Patients were divided into 3 groups according to the preoperative cylindrical degree: Group 1, -0.75 to -1.25 D, 106 eyes; Group 2, -1.50 to -2.25 D, 89 eyes; and Group 3, -2.50 to -5.00 D, 54 eyes. They were also grouped by preoperative astigmatism axis: Group A, with-the-rule astigmatism (WTRA), 156 eyes; Group B, against-the-rule astigmatism (ATRA), 64 eyes; Group C, oblique-axis astigmatism, 29 eyes. After the femtosecond laser flap was created, real-time iris-recognition-guided excimer ablation was performed. The uncorrected visual acuity, the best-corrected visual acuity, and the degree and axis of astigmatism were analyzed and compared at 1, 3 and 6 months postoperatively. Static iris recognition detected a mean cyclotorsional misalignment of 2.37° ± 2.16°; dynamic iris recognition detected an intraoperative cyclotorsional misalignment range of 0-4.3°. Six months after operation, the uncorrected visual acuity was 0.5 or better in 100% of cases. No eye lost ≥ 1 line of best spectacle-corrected visual acuity (BSCVA). Six months after operation, the uncorrected vision of 227 eyes surpassed the BSCVA, and 87 eyes gained 1 line of BSCVA. The degree of astigmatism decreased from (-1.72 ± 0.77) D preoperatively to (-0.29 ± 0.25) D postoperatively. Six months after operation, WTRA decreased from 157 eyes to 43 eyes, ATRA decreased from 63 eyes to 28 eyes, oblique astigmatism increased from 29 eyes to 34 eyes, and 144 eyes became free of astigmatism. Real-time iris recognition guided LASIK with femtosecond laser flap creation can compensate for ocular cyclotorsion, decrease iatrogenic astigmatism, and provide more precise treatment of the degree and axis of astigmatism. It is an effective and safe procedure for the treatment of myopic astigmatism.

  11. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), and coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.

  12. HyspIRI Low Latency Concept and Benchmarks

    NASA Technical Reports Server (NTRS)

    Mandl, Dan

    2010-01-01

    Topics include HyspIRI low latency data ops concept, HyspIRI data flow, ongoing efforts, experiment with Web Coverage Processing Service (WCPS) approach to injecting new algorithms into SensorWeb, low fidelity HyspIRI IPM testbed, compute cloud testbed, open cloud testbed environment, Global Lambda Integrated Facility (GLIF) and OCC collaboration with Starlight, delay tolerant network (DTN) protocol benchmarking, and EO-1 configuration for preliminary DTN prototype.

  13. Low-level processing for real-time image analysis

    NASA Technical Reports Server (NTRS)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.

  14. Efficient Iris Recognition Based on Optimal Subfeature Selection and Weighted Subregion Fusion

    PubMed Central

    Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. First, we introduce the process of feature extraction and representation based on the scale invariant feature transform (SIFT) in detail. Second, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Third, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to learn the weights of the different subregions, and the weighted subregion matching scores are fused to generate the final decision. The experimental results on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1) demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity. PMID:24683317

  15. Efficient iris recognition based on optimal subfeature selection and weighted subregion fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; He, Fei; Wang, Hongye; Deng, Ning

    2014-01-01

    In this paper, we propose three discriminative feature selection strategies and a weighted subregion matching method to improve the performance of an iris recognition system. First, we introduce the process of feature extraction and representation based on the scale invariant feature transform (SIFT) in detail. Second, three strategies are described: an orientation probability distribution function (OPDF) based strategy to delete redundant feature keypoints, a magnitude probability distribution function (MPDF) based strategy to reduce the dimensionality of feature elements, and a compounded strategy combining OPDF and MPDF to further select an optimal subfeature. Third, to make matching more effective, this paper proposes a novel matching method based on weighted subregion matching fusion. Particle swarm optimization is utilized to learn the weights of the different subregions, and the weighted subregion matching scores are fused to generate the final decision. The experimental results on three public and renowned iris databases (CASIA-V3 Interval, Lamp, and MMU-V1) demonstrate that our proposed methods outperform some of the existing methods in terms of correct recognition rate, equal error rate, and computational complexity.

  16. Does Iris Change Over Time?

    PubMed Central

    Mehrotra, Hunny; Vatsa, Mayank; Singh, Richa; Majhi, Banshidhar

    2013-01-01

    Iris as a biometric identifier is assumed to be stable over a period of time. However, some researchers have observed that for long time lapses, the genuine match score distribution shifts towards the impostor score distribution and the performance of iris recognition reduces. The main purpose of this study is to determine whether the shift in genuine scores can be attributed to aging. The experiments are performed on the two publicly available iris aging databases, namely ND-Iris-Template-Aging-2008–2010 and ND-TimeLapseIris-2012, using a commercial matcher, VeriEye. While existing results are correct about the increase in false rejection over time, we observe that it is primarily due to the presence of other covariates such as blur, noise, occlusion, and pupil dilation. This claim is substantiated with a quality score comparison of the gallery and probe pairs. PMID:24244305

  17. Does iris change over time?

    PubMed

    Mehrotra, Hunny; Vatsa, Mayank; Singh, Richa; Majhi, Banshidhar

    2013-01-01

    Iris as a biometric identifier is assumed to be stable over a period of time. However, some researchers have observed that for long time lapses, the genuine match score distribution shifts towards the impostor score distribution and the performance of iris recognition reduces. The main purpose of this study is to determine whether the shift in genuine scores can be attributed to aging. The experiments are performed on the two publicly available iris aging databases, namely ND-Iris-Template-Aging-2008-2010 and ND-TimeLapseIris-2012, using a commercial matcher, VeriEye. While existing results are correct about the increase in false rejection over time, we observe that it is primarily due to the presence of other covariates such as blur, noise, occlusion, and pupil dilation. This claim is substantiated with a quality score comparison of the gallery and probe pairs.

  18. Ordinal feature selection for iris and palmprint recognition.

    PubMed

    Sun, Zhenan; Wang, Libin; Tan, Tieniu

    2014-09-01

    Ordinal measures have been demonstrated as an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis, and numerous variants with different parameter settings, such as location, scale, and orientation, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection with successful applications to both iris and palmprint recognition. The objective function of the proposed feature selection method has two parts, i.e., the misclassification error of intra- and interclass matching samples and the weighted sparsity of ordinal feature descriptors. The feature selection therefore aims to achieve an accurate and sparse representation of ordinal measures. The optimization is subject to a number of linear inequality constraints, which require that all intra- and interclass matching pairs are well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be efficiently obtained even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods, such as mRMR, ReliefF, Boosting, and Lasso for biometric recognition, reporting state-of-the-art accuracy on the CASIA and PolyU databases.
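
    An ordinal measure keeps only the qualitative order of two region statistics; each choice of location, scale, and orientation yields one candidate bit, which is why the feature pool grows huge and selection becomes necessary. A minimal sketch of the primitive on a toy intensity array (the paper's multi-lobe filters and LP selection are not reproduced):

```python
def ordinal_bit(image, region_a, region_b):
    """One ordinal measure: compare the average intensity of two image
    regions and keep only the sign (1 if region_a is brighter).
    Regions are lists of (row, col) pixel coordinates."""
    mean = lambda region: sum(image[r][c] for r, c in region) / len(region)
    return int(mean(region_a) > mean(region_b))

# Toy 2x4 intensity array: a bright left half, a dark right half.
image = [
    [90, 80, 20, 10],
    [85, 95, 15, 25],
]
left = [(0, 0), (0, 1), (1, 0), (1, 1)]
right = [(0, 2), (0, 3), (1, 2), (1, 3)]
print(ordinal_bit(image, left, right))  # left half is brighter -> 1
```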

  19. Democracy, Social Justice and Education: Feminist Strategies in a Globalising World

    ERIC Educational Resources Information Center

    Enslin, Penny

    2006-01-01

    Recognising the relevance of Iris Marion Young's work to education, this article poses the question: given Iris Young's commitment to both social justice and to recognition of the political and ethical significance of difference, to what extent does her position allow for transnational interventions in education to foster democracy? First, it…

  20. Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.

    PubMed

    Benaliouche, Houda; Touahria, Mohamed

    2014-01-01

    This research investigates the comparative performance of three different approaches for multimodal recognition of combined iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching score and decision levels. The score combination approach is applied after normalizing both scores using the min-max rule. Our experimental results suggest that fuzzy logic combination of the matching scores at the decision level performs best, followed by the weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented, followed by a comparison with related works in the current literature. The fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.
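
    The score-level pipeline described above, min-max normalization followed by a weighted sum rule, can be sketched as follows. The score ranges, weight, and decision threshold here are illustrative assumptions, not the paper's values:

```python
def min_max(score, lo, hi):
    """Min-max normalization of a raw matcher score to [0, 1],
    given the score range observed on training data."""
    return (score - lo) / (hi - lo)

def weighted_sum(iris_score, finger_score, w_iris=0.6):
    """Weighted sum rule on normalized similarity scores."""
    return w_iris * iris_score + (1.0 - w_iris) * finger_score

iris_s = min_max(72.0, 0.0, 100.0)   # raw iris matcher score and its range
finger_s = min_max(0.34, 0.0, 1.0)   # raw fingerprint matcher score
fused = weighted_sum(iris_s, finger_s)
decision = "accept" if fused >= 0.5 else "reject"
print(round(fused, 3), decision)     # 0.568 accept
```

The classical sum rule is the special case of equal weights; the paper's fuzzy logic variant replaces the fixed threshold with rule-based decision logic.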

  1. Comparative Study of Multimodal Biometric Recognition by Fusion of Iris and Fingerprint

    PubMed Central

    Benaliouche, Houda; Touahria, Mohamed

    2014-01-01

    This research investigates the comparative performance from three different approaches for multimodal recognition of combined iris and fingerprints: classical sum rule, weighted sum rule, and fuzzy logic method. The scores from the different biometric traits of iris and fingerprint are fused at the matching score and the decision levels. The scores combination approach is used after normalization of both scores using the min-max rule. Our experimental results suggest that the fuzzy logic method for the matching scores combinations at the decision level is the best followed by the classical weighted sum rule and the classical sum rule in order. The performance evaluation of each method is reported in terms of matching time, error rates, and accuracy after doing exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results prior to fusion and after fusion are presented followed by their comparison with related works in the current literature. The fusion by fuzzy logic decision mimics the human reasoning in a soft and simple way and gives enhanced results. PMID:24605065

  2. Rapid Word Recognition as a Measure of Word-Level Automaticity and Its Relation to Other Measures of Reading

    ERIC Educational Resources Information Center

    Frye, Elizabeth M.; Gosky, Ross

    2012-01-01

    The present study investigated the relationship between rapid recognition of individual words (Word Recognition Test) and two measures of contextual reading: (1) grade-level Passage Reading Test (IRI passage) and (2) performance on standardized STAR Reading Test. To establish if time of presentation on the word recognition test was a factor in…

  3. Multi-Patches IRIS Based Person Authentication System Using Particle Swarm Optimization and Fuzzy C-Means Clustering

    NASA Astrophysics Data System (ADS)

    Shekar, B. H.; Bhat, S. S.

    2017-05-01

    Locating the boundary parameters of the pupil and iris and segmenting the noise-free iris portion are the most challenging phases of an automated iris recognition system. In this paper, we present a person authentication framework which uses particle swarm optimization (PSO) to locate the iris region and the circular Hough transform (CHT) to derive the boundary parameters. To mitigate the effect of the noise present in the segmented iris region, we divide the candidate region into N patches and use Fuzzy c-means clustering (FCM) to classify the patches into best iris regions and not-so-best (noisy) iris regions based on the probability density function of each patch. A weighted mean Hamming distance is adopted to find the dissimilarity score between two candidate irises. We use Log-Gabor, Riesz and Taylor series expansion (TSE) filters, and combinations of these three, for iris feature extraction. To justify the feasibility of the proposed method, we experimented on the three publicly available data sets IITD, MMU v-2 and CASIA v-4 Distance.

  4. United States Homeland Security and National Biometric Identification

    DTIC Science & Technology

    2002-04-09

    security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to... technology to control access onto their military bases using a Defense Manpower Management Command developed software application. Facial recognition systems... installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists.

  5. Measuring Reading Performance Informally.

    ERIC Educational Resources Information Center

    Powell, William R.

    To improve the accuracy of the informal reading inventory (IRI), a differential set of criteria is necessary for both word recognition and comprehension scores for different levels and reading conditions. In initial evaluation, word recognition scores should reflect only errors of insertions, omissions, mispronunciations, substitutions, unknown…

  6. Ethics, Identity and Culture: Some Implications of the Moral Philosophy of Iris Murdoch.

    ERIC Educational Resources Information Center

    Weldhen, Margaret

    1986-01-01

    Reviews the work of Iris Murdoch, a moral philosopher who maintains that the work of moral education is the recognition of the phenomenology of inner moral experience and the struggle to express it. Shows how important this view of the inner meaning of moral experience, often expressed as metaphor, is to identity, culture, and education.…

  7. Posterior chamber phakic intraocular lens sizing based on iris pigment layer measurements by anterior segment optical coherence tomography.

    PubMed

    Malyugin, Boris E; Shpak, Alexander A; Pokrovskiy, Dmitry F

    2015-08-01

    To use anterior segment optical coherence tomography (AS-OCT) to evaluate the clinical effectiveness of Implantable Collamer Lens posterior chamber phakic intraocular lens (PC pIOL) sizing based on measurement of the distance from the iris pigment end to the iris pigment end. S. Fyodorov Eye Microsurgery Federal State Institution, Moscow, Russia. Evaluation of diagnostic test or technology. Stage 1 was a prospective study. The sulcus-to-sulcus (STS) distance was measured using ultrasound biomicroscopy (UBM) (Vumax 2), and the distance from iris pigment end to iris pigment end was assessed using a proposed AS-OCT algorithm. Stage 2 used retrospective data from patients after implantation of a PC pIOL with the size selected according to AS-OCT (Visante) measurements of the distance from iris pigment end to iris pigment end. The PC pIOL vault was measured by AS-OCT, and adverse events were assessed. Stage 1 comprised 32 eyes of 32 myopic patients (mean age 28.4 years ± 6.3 [SD]; mean spherical equivalent [SE] -13.11 ± 4.28 diopters [D]). Stage 2 comprised 29 eyes of 16 patients (mean age 27.7 ± 4.7 years; mean SE -16.55 ± 3.65 D). The mean STS distance (12.35 ± 0.47 mm) was similar to the mean distance from iris pigment end to iris pigment end (examiner 1: 12.36 ± 0.51 mm; examiner 2: 12.37 ± 0.53 mm). The PC pIOL sized using the new AS-OCT algorithm had a mean vault of 0.53 ± 0.18 mm and did not produce adverse events during the 12-month follow-up. In 16 of 29 eyes, the PC pIOL vault was within an optimum interval (0.35 to 0.70 mm). The new measurement algorithm can be effectively used for PC pIOL sizing. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  8. International Roughness Index (IRI) measurement using Hilbert-Huang transform

    NASA Astrophysics Data System (ADS)

    Zhang, Wenjin; Wang, Ming L.

    2018-03-01

    International Roughness Index (IRI) is an important metric for measuring the condition of roadways. This index is usually used to justify the maintenance priority and scheduling for roadways. Various inspection methods and algorithms are used to assess this index from road profiles. This study proposes to calculate IRI values using the Hilbert-Huang Transform (HHT) algorithm. In particular, road profile data is provided by surface radar attached to a vehicle driving at highway speed. HHT is used in this study because of its superior properties for nonstationary and nonlinear data. Empirical mode decomposition (EMD) processes the raw data into a set of intrinsic mode functions (IMFs) representing various dominating frequencies. These frequencies represent noise from the body of the vehicle, the sensor location, the excitation induced by the natural frequency of the vehicle, etc. IRI calculation can be achieved by eliminating noise that is not associated with the road profile, including the vehicle inertia effect. The resulting IRI values compare favorably with the field IRI values, as the filtered IMFs capture the main characteristics of the road profile while eliminating noise from the vehicle and the vehicle inertia effect. Therefore, HHT is an effective method for road profile analysis and IRI measurement. Furthermore, the application of the HHT method has the potential to eliminate the use of accelerometers attached to the vehicle as part of the displacement measurement used to offset the inertia effect.
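
    For reference, IRI itself is defined by a quarter-car ("Golden Car") simulation over the measured profile at 80 km/h, accumulating relative suspension travel per unit distance. The sketch below integrates the standard Golden Car equations with a plain Euler stepper; the reference algorithm uses an exact state-transition matrix, so treat this as an illustration of the definition, not a calibrated implementation:

```python
# Golden Car parameters (normalized per unit sprung mass), as used
# in the standard IRI definition.
K1, K2, C, MU = 653.0, 63.3, 6.0, 0.15
V = 80.0 / 3.6          # standard simulation speed, m/s

def iri(profile, dx=0.25, substeps=20):
    """Quarter-car sketch of IRI. profile: elevations in metres sampled
    every dx metres. Returns accumulated relative suspension travel per
    kilometre (m/km), using simple Euler integration."""
    zs = vs = zu = vu = 0.0      # sprung/unsprung displacements and velocities
    dt = dx / V / substeps
    travel = 0.0
    for i in range(len(profile) - 1):
        for k in range(substeps):
            # linear interpolation of the road input within the interval
            y = profile[i] + (profile[i + 1] - profile[i]) * k / substeps
            a_s = -K2 * (zs - zu) - C * (vs - vu)
            a_u = (K2 * (zs - zu) + C * (vs - vu) + K1 * (y - zu)) / MU
            zs += vs * dt
            zu += vu * dt
            vs += a_s * dt
            vu += a_u * dt
            travel += abs(vs - vu) * dt
    length_km = (len(profile) - 1) * dx / 1000.0
    return travel / length_km if length_km else 0.0

flat = [0.0] * 400               # 100 m of perfectly smooth road
print(iri(flat))                 # 0.0
```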

  9. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from the Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and the on-orbit space shuttle attitude controller.
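
    The FCM reclassification step referred to above alternates two closed-form updates, one for memberships and one for centroids. A minimal self-contained sketch on toy 2-D data (this is plain FCM, not the full AFLC control structure):

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fcm(points, k, m=2.0, iters=100):
    """Alternate the two closed-form fuzzy c-means updates.

    memberships: u[i][j] = 1 / sum_l (d_ij / d_lj)^(2/(m-1))
    centroids:   v_i = sum_j u[i][j]^m x_j / sum_j u[i][j]^m
    """
    # Deterministic initialization for this sketch: evenly spaced data points.
    centers = [list(points[i * (len(points) - 1) // max(k - 1, 1)]) for i in range(k)]
    u = [[0.0] * len(points) for _ in range(k)]
    for _ in range(iters):
        for j, x in enumerate(points):
            d = [max(_dist(x, c), 1e-12) for c in centers]
            for i in range(k):
                u[i][j] = 1.0 / sum((d[i] / d[l]) ** (2.0 / (m - 1.0)) for l in range(k))
        for i in range(k):
            w = [u[i][j] ** m for j in range(len(points))]
            tot = sum(w)
            centers[i] = [sum(w[j] * points[j][t] for j in range(len(points))) / tot
                          for t in range(len(points[0]))]
    return centers, u

# Two well-separated toy clusters
data = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (10.0, 10.0), (10.5, 10.0), (10.0, 10.5)]
centers, u = fcm(data, 2)
```

    Memberships for each point sum to one by construction, and the centroids converge to the two cluster means.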

  10. An inverse method to determine the mechanical properties of the iris in vivo

    PubMed Central

    2014-01-01

    Background Understanding the mechanical properties of the iris can provide insight into eye diseases involving abnormalities of iris morphology. Previously, material parameters of the iris were calculated only from ex vivo experiments. However, the mechanical response of the iris in vivo differs from that ex vivo; therefore, a method was put forward to determine the material parameters of the iris using an optimization method in combination with the finite element method, based on an in vivo experiment. Material and methods Ocular hypertension was induced by rapid perfusion of the anterior chamber. During perfusion, intraocular pressures in the anterior and posterior chambers were recorded by sensors, and images of the anterior segment were captured by an ultrasonic system. The displacement of characteristic points on the surface of the iris was calculated. A finite element model of the anterior chamber was developed from the ultrasonic image before perfusion, and a multi-island genetic algorithm was employed to determine the material parameters of the iris by minimizing the difference between the finite element simulation and the experimental measurements. Results Material parameters of the iris in vivo were identified with the iris modeled as a nearly incompressible second-order Ogden solid. Values of the parameters μ1, α1, μ2 and α2 were 0.0861 ± 0.0080 MPa, 54.2546 ± 12.7180, 0.0754 ± 0.0200 MPa, and 48.0716 ± 15.7796, respectively. The stability of the inverse finite element method was verified, and the sensitivity of the model parameters was investigated. Conclusion Material properties of the iris in vivo could be determined using the multi-island genetic algorithm coupled with the finite element method based on the in vivo experiment. PMID:24886660
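
    For a second-order Ogden solid under incompressible uniaxial stretch, the Cauchy stress has a closed form; below is a sketch evaluating it with the mean fitted values reported above. The stress expression assumes the common convention W = Σᵢ (2μᵢ/αᵢ²)(λ₁^αᵢ + λ₂^αᵢ + λ₃^αᵢ − 3), which the abstract does not state explicitly:

```python
# Mean fitted parameters reported in the abstract (MPa and dimensionless)
MU_ = [0.0861, 0.0754]
ALPHA = [54.2546, 48.0716]

def uniaxial_cauchy_stress(lam, mu=MU_, alpha=ALPHA):
    """Cauchy stress for incompressible uniaxial stretch lam under the Ogden
    convention W = sum_i (2*mu_i/alpha_i**2) * (l1**a_i + l2**a_i + l3**a_i - 3),
    with l1 = lam and l2 = l3 = lam**-0.5:

        sigma = sum_i (2*mu_i/alpha_i) * (lam**a_i - lam**(-a_i/2))
    """
    return sum((2.0 * m / a) * (lam ** a - lam ** (-a / 2.0))
               for m, a in zip(mu, alpha))
```

    At lam = 1 the stress vanishes, and the very large fitted α exponents make the response stiffen extremely quickly, consistent with the small strains of iris tissue.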

  11. Coastal and Inland Aquatic Data Products for the Hyperspectral Infrared Imager (HyspIRI)

    NASA Technical Reports Server (NTRS)

    Abelev, Andrei; Babin, Marcel; Bachmann, Charles; Bell, Thomas; Brando, Vittorio; Byrd, Kristin; Dekker, Arnold; Devred, Emmanuel; Forget, Marie-Helene; Goodman, James; et al.

    2015-01-01

    The HyspIRI Aquatic Studies Group (HASG) has developed a conceptual list of data products for the HyspIRI mission to support aquatic remote sensing of coastal and inland waters. These data products were based on mission capabilities, characteristics, and expected performance. The topic of coastal and inland water remote sensing is very broad. Thus, this report focuses on aquatic data products to keep the scope of this document manageable. The HyspIRI mission requirements already include the global production of surface reflectance and temperature. Atmospheric correction and surface temperature algorithms, which are critical to aquatic remote sensing, are covered in other mission documents. Hence, these algorithms and their products were not evaluated in this report. In addition, terrestrial products (e.g., land use land cover, dune vegetation, and beach replenishment) were not considered. It is recognized that coastal studies are inherently interdisciplinary across aquatic and terrestrial disciplines. However, products supporting the latter are expected to already be evaluated by other components of the mission. The coastal and inland water data products that were identified by the HASG, covered six major environmental and ecological areas for scientific research and applications: wetlands, shoreline processes, the water surface, the water column, bathymetry and benthic cover types. Accordingly, each candidate product was evaluated for feasibility based on the HyspIRI mission characteristics and whether it was unique and relevant to the HyspIRI science objectives.

  12. The International Reference Ionosphere: Rawer's IRI and its status today

    NASA Astrophysics Data System (ADS)

    Bilitza, D.

    2014-11-01

    When the Committee on Space Research (COSPAR) initiated the International Reference Ionosphere (IRI) project in 1968 it wisely selected K. Rawer as its first Chairperson. With a solid footing and good contacts in both the ground-based and space-based ionospheric communities he was ideally suited to pull together colleagues and data from both communities to help build the first version of the IRI. He assembled a team of 20+ international ionospheric experts in the IRI Working Group and chaired and directed the group from 1968 to 1984. The working group has now grown to 63 members and the IRI model has undergone many revisions as new data became available and new modeling techniques were applied. This paper was presented during a special session of the Kleinheubach Tagung 2013 in honor of K. Rawer's 100th birthday. It reviews the current status of the IRI model and project and the international recognition it has achieved. It is quite fitting that this year we not only celebrate K. Rawer's 100th birthday but also the exciting news that his favorite science endeavor, IRI, has been internationally recognized as an ISO (International Organization for Standardization) standard. The IRI homepage is at http://irimodel.org.

  13. The effects of navigator distortion and noise level on interleaved EPI DWI reconstruction: a comparison between image- and k-space-based method.

    PubMed

    Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua

    2018-03-23

    To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase-encoding direction. A POCS-based reconstruction using the image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerant of different navigator distortion levels than the original IRIS algorithm, although SPIRiT may be hindered by low SNR of the navigator. Multi-shot iEPI DWI reconstruction can thus be improved by reducing the 2D navigator distortion, and different reconstruction methods show variable sensitivity to navigator distortion and noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.

  14. IRIS family of IRCCD thermal imagers integrating long-life cryogenic coolers, sophisticated algorithms for image enhancement, and hot points detection

    NASA Astrophysics Data System (ADS)

    Dupuy, Pascal; Harter, Jean

    1995-09-01

    IRIS is a modular infrared thermal imager developed by SAGEM since 1988, based on a 288 by 4 IRCCD detector. The first section of the presentation describes the different modules of the IRIS thermal imager and their evolution in recent years. The second section covers the major evolution, namely the integrated detector cooler assembly (IDCA), using a SOFRADIR 288 by 4 detector and a SAGEM microcooler, now integrated in the IRIS thermal imagers. The third section describes two functions integrated in the IRIS thermal imager: (1) image enhancement, using a digital convolution filter, and (2) automatic hot-point detection and tracking, offering assistance in surveillance and automatic detection. The last section presents several navy, air force, and land programs for which IRIS has already been selected and delivered.

  15. Characterization of iris pattern stretches and application to the measurement of roll axis eye movements.

    PubMed

    Nishiyama, Junpei; Hashimoto, Tsutomu; Sakashita, Yusuke; Fujiyoshi, Hironobu; Hirata, Yutaka

    2008-01-01

    Eye movements are utilized in many scientific studies as a probe that reflects the neural representation of 3-dimensional extrapersonal space. This study proposes a method to accurately measure the roll component of eye movements under conditions in which the pupil diameter changes. Generally, iris pattern matching between a reference and a test iris image is performed to estimate the roll angle of the test image. However, iris patterns are subject to change when the pupil size changes, resulting in less accurate roll angle estimation if the pupil sizes in the test and reference images differ. We characterized the non-uniform iris pattern contraction/expansion caused by pupil dilation/constriction, and developed an algorithm to convert an iris pattern with an arbitrary pupil size into one with the same pupil size as the reference iris pattern. It was demonstrated that the proposed method improved the accuracy of roll eye movement measurement by up to 76.9%.
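
    The conversion step described above compensates for pupil-size change before pattern matching. As a baseline, the simple linear "rubber-sheet" remap (the uniform model that the paper's non-uniform characterization refines) maps a radial sample between two pupil radii while keeping the outer iris boundary fixed; a minimal sketch:

```python
def remap_radius(r, pupil_src, pupil_dst, iris_r):
    """Map a radial sample taken in an image with pupil radius pupil_src onto
    the geometry of a reference image with pupil radius pupil_dst, keeping the
    outer iris boundary iris_r fixed.

    rho is the normalized radial coordinate in [0, 1]: 0 at the pupil margin,
    1 at the iris/sclera boundary (a uniform-stretch assumption).
    """
    rho = (r - pupil_src) / (iris_r - pupil_src)
    return pupil_dst + rho * (iris_r - pupil_dst)
```

    The pupil margin maps to the reference pupil margin, the outer boundary stays put, and intermediate samples are interpolated linearly between them.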

  16. A Statistical Analysis of IrisCode and Its Security Implications.

    PubMed

    Kong, Adams Wai-Kin

    2015-03-01

    IrisCode has been used to gather iris data for 430 million people. Because of the huge impact of IrisCode, it is vital that it is completely understood. This paper first studies the relationship between bit probabilities and a mean of iris images (The mean of iris images is defined as the average of independent iris images.) and then uses the Chi-square statistic, the correlation coefficient and a resampling algorithm to detect statistical dependence between bits. The results show that the statistical dependence forms a graph with a sparse and structural adjacency matrix. A comparison of this graph with a graph whose edges are defined by the inner product of the Gabor filters that produce IrisCodes shows that partial statistical dependence is induced by the filters and propagates through the graph. Using this statistical information, the security risk associated with two patented template protection schemes that have been deployed in commercial systems for producing application-specific IrisCodes is analyzed. To retain high identification speed, they use the same key to lock all IrisCodes in a database. The belief has been that if the key is not compromised, the IrisCodes are secure. This study shows that even without the key, application-specific IrisCodes can be unlocked and that the key can be obtained through the statistical dependence detected.
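
    For a single pair of IrisCode bit positions observed across many images, the dependence test described above reduces to a Pearson chi-square on a 2×2 contingency table. A minimal sketch of that statistic (the paper's resampling procedure and dependence graph are omitted):

```python
def chi2_2x2(bits_a, bits_b):
    """Pearson chi-square on the 2x2 contingency table of two binary sequences;
    large values indicate statistical dependence between the two bit positions."""
    n = len(bits_a)
    n11 = sum(1 for x, y in zip(bits_a, bits_b) if x and y)
    n10 = sum(1 for x, y in zip(bits_a, bits_b) if x and not y)
    n01 = sum(1 for x, y in zip(bits_a, bits_b) if not x and y)
    n00 = n - n11 - n10 - n01
    rows = (n11 + n10, n01 + n00)
    cols = (n11 + n01, n10 + n00)
    chi2 = 0.0
    for obs, r, c in ((n11, rows[0], cols[0]), (n10, rows[0], cols[1]),
                      (n01, rows[1], cols[0]), (n00, rows[1], cols[1])):
        exp = r * c / n  # expected count under independence
        if exp:
            chi2 += (obs - exp) ** 2 / exp
    return chi2
```

    Perfectly correlated bit streams score n (the sequence length), while independent streams score near zero.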

  17. Efficient quantitative assessment of facial paralysis using iris segmentation and active contour-based key points detection with hybrid classifier.

    PubMed

    Barbosa, Jocelyn; Lee, Kyubum; Lee, Sunwon; Lodhi, Bilal; Cho, Jae-Gu; Seo, Woo-Keun; Kang, Jaewoo

    2016-03-12

    Facial palsy or paralysis (FP) is a symptom involving the loss of voluntary muscle movement on one side of the human face, which can be devastating for patients. Traditional assessment methods depend solely on the clinician's judgment and are therefore time-consuming and subjective. Hence, a quantitative assessment system is invaluable for physicians beginning the rehabilitation process, yet producing a reliable and robust method is challenging and still underway. We introduce a novel approach for quantitative assessment of facial paralysis that tackles the classification problem of FP type and degree of severity. Specifically, a novel method of quantitative assessment is presented: an algorithm that extracts the human iris and detects facial landmarks, and a hybrid approach combining a rule-based and a machine learning algorithm to analyze and prognosticate facial paralysis using the captured images. A method combining the optimized Daugman's algorithm and a Localized Active Contour (LAC) model is proposed to efficiently extract the iris and the facial landmarks or key points. To improve the performance of LAC, appropriate parameters of the initial evolving curve for facial feature segmentation are automatically selected. The symmetry score is measured by the ratio between features extracted from the two sides of the face. Hybrid classifiers (i.e., rule-based with regularized logistic regression) were employed for discriminating healthy and unhealthy subjects, for FP type classification, and for facial paralysis grading based on the House-Brackmann (H-B) scale. Quantitative analysis was performed to evaluate the performance of the proposed approach, and experiments show that the proposed method is efficient.
Facial movement feature extraction on facial images based on iris segmentation and LAC-based key point detection, together with a hybrid classifier, provides a more efficient way of addressing the classification problem of facial palsy type and degree of severity. Combining iris segmentation with the key point-based method has several merits that are essential for our real application. Aside from the facial key points, iris segmentation makes a significant contribution, as it describes the changes of iris exposure while performing facial expressions. It reveals the significant difference between the healthy side and the severely palsied side when raising the eyebrows with both eyes directed upward, and it can model the typical changes in the iris region.
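
    Daugman's algorithm, named above as the basis of the iris extraction step, is an integro-differential operator: it searches candidate centers and radii for the largest radial derivative of the circle-averaged image intensity. A stripped-down sketch on a synthetic image (integer sampling, no Gaussian smoothing, and a two-sample derivative baseline; the optimized variant used in the paper is not reproduced here):

```python
import math

def circle_mean(img, cx, cy, r, n=64):
    """Average image intensity sampled along a circle of radius r about (cx, cy)."""
    h, w = len(img), len(img[0])
    vals = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        x = int(round(cx + r * math.cos(t)))
        y = int(round(cy + r * math.sin(t)))
        if 0 <= x < w and 0 <= y < h:
            vals.append(img[y][x])
    return sum(vals) / len(vals)

def find_iris(img, centers, radii):
    """Maximize the radial derivative of the circle-averaged intensity."""
    best_score, best = -1.0, None
    for cx, cy in centers:
        means = [circle_mean(img, cx, cy, r) for r in radii]
        for i in range(2, len(means)):
            # two-sample finite difference: robust to pixel-rounding jitter
            score = abs(means[i] - means[i - 2])
            if score > best_score:
                best_score, best = score, (cx, cy, radii[i - 1])
    return best

# Synthetic test image: dark disk (pupil) of radius 20 on a bright field.
img = [[0 if (x - 50) ** 2 + (y - 50) ** 2 <= 400 else 255 for x in range(101)]
       for y in range(101)]
```

    On the synthetic disk the operator recovers the true center and boundary radius exactly, since the intensity jump across the boundary dominates every off-center candidate.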

  18. Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors

    PubMed Central

    Pham, Tuyen Danh; Park, Young Ho; Nguyen, Dat Tien; Kwon, Seung Yong; Park, Kang Ryoung

    2015-01-01

    Biometrics is a technology that enables an individual person to be identified based on human physiological and behavioral characteristics. Among biometrics technologies, face recognition has been widely used because of its advantages in terms of convenience and non-contact operation. However, its performance is affected by factors such as variation in the illumination, facial expression, and head pose. Therefore, fingerprint and iris recognitions are preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness. In addition, the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracies according to various factors has not received much attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near infrared (NIR) image sensor and analyze its accuracies considering various factors. The experimental results obtained with three databases showed that our system can be operated in real applications with high accuracy; and the dissimilarity of the finger-veins of different people is larger than that of the finger types and hands. PMID:26184214

  20. Reliability of Iris Recognition as a Means of Identity Verification and Future Impact on Transportation Worker Identification Credential

    DTIC Science & Technology

    2008-03-01

    [No abstract available; the indexed excerpt contains only table-of-contents fragments: "Time Frame of the Experiment", "Eyeglasses", "Observed Results", and "Eyeglasses are a Factor".]

  1. Association of iris surface features with iris parameters assessed by swept-source optical coherence tomography in Asian eyes.

    PubMed

    Tun, Tin A; Chua, Jacqueline; Shi, Yuan; Sidhartha, Elizabeth; Thakku, Sri Gowtham; Shei, William; Tan, Marcus Chiang Lee; Quah, Joanne Hui Min; Aung, Tin; Cheng, Ching-Yu

    2016-12-01

    To characterise the association of iris surface features (crypts, furrows and colour) with iris volume and curvature assessed by swept-source optical coherence tomography (SSOCT) in Asian eyes. Iris crypts (by number and size) and furrows (by number and circumferential extent) were graded from iris photographs. Iris colour was measured by a customised algorithm written in MATLAB (MathWorks, Natick, Massachusetts, USA). The iris was imaged by SSOCT (SS-1000, CASIA, Tomey, Nagoya, Japan). The associations of surface features with iris parameters were analysed using a generalised estimating equation. A total of 1704 subjects (3297 eyes) were included in the analysis. The majority were Chinese (86.4%), 63.2% were female, and the mean age (±SD) was 61.4±6.6 years. After adjusting for age, sex, ethnicity, pupil size and corneal arcus, a higher iris crypt grade was independently associated with smaller iris volume (β=-0.54, p<0.001), whereas darker irides and a higher iris furrow grade were associated with larger iris volume (β=-0.041, p<0.001 and β=0.233, p<0.001, respectively). Lighter coloured irides with more crypts and/or more furrows were also associated with less convexity (crypts: β=-0.003, p=0.03; furrows: β=-0.004, p=0.007; and colour: β=-0.001, p=0.005). Iris surface features were highly correlated with iris volume and curvature. Irides with more crypts have a smaller volume, and darker irides with more furrows have a larger volume. Lighter irides with more crypts and/or furrows have less convexity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  2. Dependency of Optimal Parameters of the IRIS Template on Image Quality and Border Detection Error

    NASA Astrophysics Data System (ADS)

    Matveev, I. A.; Novik, V. P.

    2017-05-01

    Generation of a template containing the spatial-frequency features of the iris is an important stage of identification. The template is obtained by a wavelet transform in an image region specified by the iris borders. One of the main characteristics of an identification system is its recognition error; the equal error rate (EER) is used as the criterion here. The optimal values (in the sense of minimizing the EER) of the wavelet transform parameters depend on many factors: image quality, sharpness, size of characteristic objects, etc. It is hard to isolate these factors and their influences. This work studies the influence of the following factors on the EER: iris segmentation precision, defocus level, and noise level. Several public-domain iris image databases were used in the experiments, with the images subjected to modelled distortions of these types. The dependences of the optimal wavelet parameters and EER values on the distortion levels were built. It is observed that increasing the segmentation error and image noise increases the optimal wavelength of the wavelets, whereas increasing the defocus level decreases it.
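
    The EER criterion used above is the operating point where the false accept rate equals the false reject rate as the decision threshold is swept. A minimal sketch over similarity scores (iris systems often use Hamming distance, a dissimilarity, which simply flips the comparison directions):

```python
def eer(genuine, impostor):
    """Equal error rate from genuine/impostor similarity scores: sweep the
    decision threshold and report the point where FAR and FRR meet."""
    best_gap, best_eer = float("inf"), None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(1 for s in impostor if s >= t) / len(impostor)  # false accepts
        frr = sum(1 for s in genuine if s < t) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer
```

    Perfectly separated score distributions give an EER of zero; each overlapping pair of scores raises it.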

  3. Score level fusion scheme based on adaptive local Gabor features for face-iris-fingerprint multimodal biometric

    NASA Astrophysics Data System (ADS)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying

    2014-05-01

    A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We introduce a fusion scheme to gain a better understanding of, and a fusion method for, a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to obtain the proper Gabor basis functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced from the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, in addition to producing powerful local Gabor features for each modality and obtaining better recognition performance through the fusion strategy, our architecture outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
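
    The adaptive Gabor filters tuned above are parameterized by wavelength, orientation, envelope width, aspect ratio, and phase. A plain 2-D Gabor kernel in its usual parameterization, showing those knobs (the PSO tuning and SVR fusion stages are not reproduced):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5, psi=0.0):
    """2-D Gabor kernel: an oriented sinusoid (wavelength, phase psi) under a
    Gaussian envelope (width sigma, aspect ratio gamma), rotated by theta."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's frame
            xt = x * math.cos(theta) + y * math.sin(theta)
            yt = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xt * xt + gamma * gamma * yt * yt)
                                / (2.0 * sigma * sigma))
            carrier = math.cos(2.0 * math.pi * xt / wavelength + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

    With psi = 0 the kernel is even-symmetric, peaking at 1 in the center; the odd-symmetric counterpart (psi = π/2) supplies the quadrature phase component used in texture coding.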

  4. Real-time image-guided nasogastric feeding tube placement: A case series using Kangaroo with IRIS Technology in an ICU.

    PubMed

    Mizzi, Anna; Cozzi, Silvano; Beretta, Luigi; Greco, Massimiliano; Braga, Marco

    2017-05-01

    Pulmonary misplacement during the blind insertion of enteral feeding tubes is frequent, particularly in ventilated and neurologically impaired patients. This is probably the first clinical study using the Kangaroo Feeding Tube with IRIS technology (IRIS), which incorporates a camera designed to provide anatomic landmark visualization during insertion. The study aim was to evaluate IRIS performance during bedside gastric placement. This is the first prospective study to collect data on the use of IRIS. Twenty consecutive unconscious patients requiring enteral nutrition were recruited at a single center. IRIS placement was considered complete when a clear image of the gastric mucosa appeared. Correct placement was confirmed using a contrast-enhanced abdominal X-ray. To evaluate the device performance over time, the camera was activated every other day up to 17 d postplacement. In 7 (35%) patients, the trachea was initially visualized, requiring a second placement attempt with the same tube. The IRIS camera allowed recognition of the gastric mucosa in 18 (90%) patients. The esophagogastric junction was identified in one patient, while in a second patient the quality of visualization was poor. Contrast-enhanced X-ray confirmed the gastric placement of IRIS in all patients. IRIS allowed identification of gastric mucosa in 14 (70%) patients 3 d after placement. Performance progressively declined with time (P = 0.006, chi-square for trend). IRIS placement could have spared X-ray confirmation in almost all patients and prevented misplacement into the airway in about one third. Visualization quality needs to be improved, particularly after the first week. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Extended depth of field in an intrinsically wavefront-encoded biometric iris camera

    NASA Astrophysics Data System (ADS)

    Bergkoetter, Matthew D.; Bentley, Julie L.

    2014-12-01

    This work describes a design process which greatly increases the depth of field of a simple three-element lens system intended for biometric iris recognition. The system is optimized to produce a point spread function which is insensitive to defocus, so that recorded images may be deconvolved without knowledge of the exact object distance. This is essentially a variation on the technique of wavefront encoding, however the desired encoding effect is achieved by aberrations intrinsic to the lens system itself, without the need for a pupil phase mask.

  6. Who Goes There?

    ERIC Educational Resources Information Center

    Vail, Kathleen

    1995-01-01

    Biometrics (hand geometry, iris and retina scanners, voice and facial recognition, signature dynamics, facial thermography, and fingerprint readers) identifies people based on physical characteristics. Administrators worried about kidnapping, vandalism, theft, and violent intruders might welcome these security measures when they become more…

  7. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may occur in the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue to resolve. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image under different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined from the local variance estimate of the respective region, which represents its significance. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image.
Experiments conducted on various popular face databases show promising performance of the proposed algorithm under varying lighting, expression, and partial occlusion conditions. Four databases were used for testing the performance of the proposed system: the Yale Face database, the Extended Yale Face database B, the Japanese Female Facial Expression database, and the CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system, and the computation cost is lower because of the simplified calculation steps. Research is progressing to investigate the effectiveness of the proposed face recognition method under pose-varying conditions as well. It is envisaged that a multilane approach of trained frameworks at different pose bins, with an appropriate voting strategy, would lead to a good recognition rate in such situations.
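
    The ELBP descriptor above builds on the basic local binary pattern, which encodes each pixel's relationship to its 3×3 neighborhood as an 8-bit code. A sketch of that basic operator (the "enhanced" variant of the paper is not specified here):

```python
def lbp_codes(img):
    """Basic 3x3 local binary pattern: each interior pixel gets an 8-bit code,
    one bit per neighbor, set when the neighbor is >= the center value."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    h, w = len(img), len(img[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out
```

    Because the code depends only on intensity ordering against the center pixel, it is invariant to monotonic illumination changes, which is the property the paper exploits across lighting conditions.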

  8. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    PubMed

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm of frame synchronization words is proposed to recognize the frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method for frame synchronization words based on hard decisions is deduced in detail, and the standards of parameter recognition are given. Compared with blind recognition based on hard decisions, utilizing soft decisions can improve the accuracy of blind recognition. Therefore, combining the characteristics of Quadrature Phase Shift Keying (QPSK) signals, an improved blind recognition algorithm based on soft decisions is proposed; the improved algorithm can also be extended to other signal modulation forms. The complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are then given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can blindly recognize the parameters of frame synchronization words. Moreover, the improved algorithm clearly enhances the accuracy of blind recognition.

  9. Assessing for domestic violence in sexual health environments: a qualitative study.

    PubMed

    Horwood, Jeremy; Morden, Andrew; Bailey, Jayne E; Pathak, Neha; Feder, Gene

    2018-03-01

    Domestic violence and abuse (DVA) is a major clinical challenge and public health issue. Sexual health services are an important potential site of DVA intervention. The Assessing for Domestic Violence in Sexual Health Environments (ADViSE) intervention aimed to improve identification and management of DVA in sexual healthcare settings and is a modified version of the Identification and Referral to Improve Safety (IRIS) general practice programme. Our qualitative evaluation aimed to explore the experiences of staff participating in an IRIS ADViSE pilot. Interviews were conducted with 17 sexual health clinic staff and DVA advocate workers. Interviews were audio recorded, transcribed, anonymised and analysed thematically. Staff prioritised enquiring about DVA and tailored their style of enquiry to the perceived characteristics of patients, current workload and individual clinical judgements. Responding to disclosures of abuse was divided between perceived low-risk cases (with quick onwards referral) and high-risk cases (requiring deployment of institution safeguarding procedures), which were viewed as time consuming and could create tensions with patients. Ongoing training and feedback, commissioner recognition, adequate service-level agreements and reimbursements are required to ensure sustainability and wider implementation of IRIS ADViSE. Challenges of delivering and sustaining IRIS ADViSE included the varied styles of enquiry, as well as tensions and additional time pressure arising from disclosure of abuse. These can be overcome by modifying initial training, providing regular updates and stronger recognition (and resources) at policy and commissioning levels. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  10. Analysis of physical layer performance of data center with optical wavelength switches based on advanced modulation formats

    NASA Astrophysics Data System (ADS)

    Ahmad, Iftikhar; Chughtai, Mohsan Niaz

    2018-05-01

In this paper, the IRIS (Integrated Router Interconnected spectrally) optical-domain architecture for data center networks is analyzed in combination with advanced modulation formats (M-QAM) and a coherent optical receiver. Channel impairments are compensated using DSP algorithms following the coherent receiver. The scheme allows N² multiplexed wavelengths for an N×N switch size. The performance of the N×N IRIS switch, with and without wavelength conversion, is analyzed for different baud rates over the M-QAM modulation formats, in terms of bit error rate (BER) versus OSNR curves.
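The BER-versus-OSNR behaviour summarized above follows standard M-QAM theory over an AWGN channel. The sketch below evaluates the textbook Gray-coded BER approximation; the formula is the generic one, not taken from this paper:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mqam_ber(M, ebn0_db):
    """Approximate BER of Gray-coded square M-QAM over AWGN.

    Uses the common approximation
    BER ~ (4/log2 M)(1 - 1/sqrt(M)) * Q(sqrt(3*log2(M)/(M-1) * Eb/N0)).
    """
    k = math.log2(M)
    ebn0 = 10 ** (ebn0_db / 10)      # dB to linear Eb/N0
    arg = math.sqrt(3 * k / (M - 1) * ebn0)
    return (4 / k) * (1 - 1 / math.sqrt(M)) * qfunc(arg)

# Higher-order formats need more SNR for the same BER:
for M in (4, 16, 64):
    print(M, mqam_ber(M, 15))
```

At a fixed Eb/N0 the approximation reproduces the expected ordering: denser constellations pay a BER penalty, which is the trade-off the abstract's BER-vs-OSNR curves quantify.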

  11. Face Recognition with the Karhunen-Loeve Transform

    DTIC Science & Technology

    1991-12-01

As part of this thesis, face recognition software is developed on the Silicon Graphics 4D Personal Iris...the anthropometry community. The most important performance criterion is classification accuracy, which is the percentage of correct...demonstrated by Tarr (24). [Residue of Figure 2.6: a network with a 64 by 64 pixel input image (x1 x2 ... x64), 16 hidden layer units, and a reconstructed output image (y1 y2 ... y64).]

  12. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume-the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is surprisingly simple, yet it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. 
The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
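The blending step of AFS (keep, per pixel, the in-focus frame of the registered stack) can be illustrated with a generic focus-stacking sketch; the Laplacian-energy focus measure used here is a common choice in focus stacking, not necessarily the authors' own:

```python
import numpy as np

def focus_stack(stack):
    """Blend a registered image stack by per-pixel sharpness.

    stack: (n, h, w) array of grayscale images, already registered.
    Returns an (h, w) image taking each pixel from the frame whose
    local Laplacian response (a standard focus measure) is largest.
    """
    n, h, w = stack.shape
    sharp = np.empty_like(stack)
    for i, img in enumerate(stack):
        # Discrete Laplacian via shifted copies (borders edge-padded).
        p = np.pad(img, 1, mode="edge")
        lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img
        sharp[i] = np.abs(lap)
    best = np.argmax(sharp, axis=0)               # index of sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Real implementations typically smooth the per-pixel decision map to avoid visible seams; the hard `argmax` above is the bare version of the "blend in-focus regions" step.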

  13. Study of the similarity function in Indexing-First-One hashing

    NASA Astrophysics Data System (ADS)

    Lai, Y.-L.; Jin, Z.; Goi, B.-M.; Chai, T.-Y.

    2017-06-01

The recently proposed Indexing-First-One (IFO) hashing is a technique adopted in particular for eye iris template protection, i.e., IrisCode. However, the measure IFO employs, the Jaccard Similarity (JS) originating in Min-hashing, has not yet been adequately discussed. In this paper, we explore the nature of JS in the binary domain and further propose a mathematical formulation to generalize the usage of JS, which is subsequently verified using the CASIA v3-Interval iris database. Our study reveals that the JS applied in IFO hashing is a generalized version of the measure between two input objects in Min-hashing, where the coefficient of JS is equal to one. With this understanding, IFO hashing can inherit the useful properties of Min-hashing, i.e., similarity preservation, making it favorable for similarity searching or recognition in binary space.
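For binary vectors, the Jaccard similarity discussed above is |A∩B|/|A∪B|, and Min-hashing estimates it because two sets agree on the minimum of a random permutation with probability equal to their JS. A minimal illustration of both (not the paper's IFO construction):

```python
import random

def jaccard(a, b):
    """Exact Jaccard similarity of two equal-length binary vectors."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 1.0

def minhash_estimate(a, b, n_hashes=256, seed=1):
    """Estimate JS: P[min of a random permutation agrees] = JS."""
    rng = random.Random(seed)
    idx = list(range(len(a)))
    hits = 0
    for _ in range(n_hashes):
        rng.shuffle(idx)                      # one random permutation = one hash
        ma = next(i for i in idx if a[i])     # first set bit under the permutation
        mb = next(i for i in idx if b[i])
        hits += (ma == mb)
    return hits / n_hashes
```

The estimate converges to the exact JS as the number of hashes grows, which is the similarity-preservation property the abstract refers to.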

  14. Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

    NASA Astrophysics Data System (ADS)

    Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2009-03-01

Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications, mainly because of user inconvenience during the image acquisition phase: users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom to move while standing, and capture of good-quality iris images in an acceptable time. The proposed system makes three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly through 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast, using the 3-D face position estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.

  15. Extending the depth of field with chromatic aberration for dual-wavelength iris imaging.

    PubMed

    Fitzgerald, Niamh M; Dainty, Christopher; Goncharov, Alexander V

    2017-12-11

We propose a method of extending the depth of field to twice that achievable by conventional lenses, for the purpose of a low-cost iris recognition front-facing camera in mobile phones. By introducing intrinsic primary chromatic aberration in the lens, the depth of field is doubled by means of dual-wavelength illumination. The lens parameters (radius of curvature, optical power) can be found analytically by paraxial raytracing. The effective range of distances covered increases with the dispersion of the chosen glass and with the distance of the near object point.

  16. Convolution Comparison Pattern: An Efficient Local Image Descriptor for Fingerprint Liveness Detection

    PubMed Central

    Gottschlich, Carsten

    2016-01-01

    We present a new type of local image descriptor which yields binary patterns from small image patches. For the application to fingerprint liveness detection, we achieve rotation invariant image patches by taking the fingerprint segmentation and orientation field into account. We compute the discrete cosine transform (DCT) for these rotation invariant patches and attain binary patterns by comparing pairs of two DCT coefficients. These patterns are summarized into one or more histograms per image. Each histogram comprises the relative frequencies of pattern occurrences. Multiple histograms are concatenated and the resulting feature vector is used for image classification. We name this novel type of descriptor convolution comparison pattern (CCP). Experimental results show the usefulness of the proposed CCP descriptor for fingerprint liveness detection. CCP outperforms other local image descriptors such as LBP, LPQ and WLD on the LivDet 2013 benchmark. The CCP descriptor is a general type of local image descriptor which we expect to prove useful in areas beyond fingerprint liveness detection such as biological and medical image processing, texture recognition, face recognition and iris recognition, liveness detection for face and iris images, and machine vision for surface inspection and material classification. PMID:26844544

  17. Digital signal processing algorithms for automatic voice recognition

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1987-01-01

Current digital signal analysis algorithms used in automatic voice recognition are investigated. Automatic voice recognition means the capability of a computer to recognize and interact with verbal commands. The focus is on digital signal, rather than linguistic, analysis of the speech signal. Several digital signal processing algorithms are available for voice recognition, among them Linear Predictive Coding (LPC), short-time Fourier analysis, and cepstrum analysis. Of these, LPC is the most widely used: it has a short execution time and does not require large memory storage. However, it has several limitations due to the assumptions used to develop it. The other two algorithms are frequency-domain algorithms with fewer assumptions, but they are not widely implemented or investigated. With recent advances in digital technology, namely signal processors, these two frequency-domain algorithms may be investigated for implementation in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.
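Of the algorithms named, cepstrum analysis is the most compact to state: the real cepstrum is the inverse FFT of the log magnitude spectrum, which turns a periodic excitation into a peak at the corresponding quefrency. A minimal sketch:

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.abs(np.fft.fft(signal))
    return np.real(np.fft.ifft(np.log(spectrum + 1e-12)))

# A pulse train with a 64-sample period yields cepstral peaks at the
# 64-sample quefrency and its rahmonics, the classic pitch-detection cue.
x = np.zeros(1024)
x[::64] = 1.0
c = real_cepstrum(x)
peak = np.argmax(c[20:200]) + 20   # skip the low-quefrency (envelope) region
```

The small constant added before the logarithm guards against log(0) on spectral nulls; real voice pipelines also window the signal and lifter the cepstrum.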

  18. Seasonal and Inter-Annual Patterns of Phytoplankton Community Structure in Monterey Bay, CA Derived from AVIRIS Data During the 2013-2015 HyspIRI Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.

    2015-12-01

    There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, to track energy flow through ecosystems, and to identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species, evaluating iron stress of phytoplankton, and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. As a consequence, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. However, the coastal marine environment has special atmospheric correction needs due to error that may be introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals for use in estimating chlorophyll (OC3 algorithm) and phytoplankton functional type (PHYDOTax algorithm) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons - upwelling and the warm, stratified oceanic period for 2013 and 2014. 
These two periods are dominated by either diatom blooms (occasionally toxic) or red tides. Results presented include chlorophyll and phytoplankton community structure and in-water validation data for these dates during these two seasons.

  19. IRIS Mariner 9 Data Revisited. 1; An Instrumental Effect

    NASA Technical Reports Server (NTRS)

    Formisano, V.; Grassi, D.; Piccioni, G.; Pearl, John; Bjoraker, G.; Conrath, B.; Hanel, R.

    1999-01-01

Small spurious features are present in data from the Mariner 9 Infrared Interferometer Spectrometer (IRIS). These represent a low amplitude replication of the spectrum with a doubled wavenumber scale. This replication arises principally from an internal reflection of the interferogram at the input window. An algorithm is provided to correct for the effect, which is at the 2% level. We believe that the small error in the uncorrected spectra does not materially affect previous results; however, it may be significant for some future studies at short wavelengths. The IRIS spectra are also affected by a coding error in the original calibration that results in only positive radiances. This reduces the effectiveness of averaging spectra to improve the signal-to-noise ratio at small signal levels.
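The described artifact, a ~2%-level copy of the spectrum on a doubled wavenumber scale, suggests a schematic correction: subtract a scaled version of the spectrum resampled at half the wavenumber. This is an illustration of that idea only, not the actual IRIS calibration algorithm; `alpha` and the mapping direction are assumptions:

```python
import numpy as np

def remove_replica(wavenumber, spectrum, alpha=0.02):
    """Schematic correction for a low-amplitude spectral self-replica.

    Assumes the artifact is alpha times the spectrum re-plotted on a
    doubled wavenumber scale (a feature at v reappears at 2v), so we
    subtract alpha * spectrum evaluated at v/2.  alpha=0.02 reflects
    the ~2% level quoted in the abstract.
    """
    replica = np.interp(wavenumber / 2, wavenumber, spectrum,
                        left=0.0, right=0.0)
    return spectrum - alpha * replica
```

Because the replica is only ~2% of the signal, subtracting it using the observed (rather than true) spectrum leaves a residual of order alpha squared, which is negligible here.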

  20. Eye contrast polarity is critical for face recognition by infants.

    PubMed

    Otsuka, Yumiko; Motoyoshi, Isamu; Hill, Harold C; Kobayashi, Megumi; Kanazawa, So; Yamaguchi, Masami K

    2013-07-01

    Just as faces share the same basic arrangement of features, with two eyes above a nose above a mouth, human eyes all share the same basic contrast polarity relations, with a sclera lighter than an iris and a pupil, and this is unique among primates. The current study examined whether this bright-dark relationship of sclera to iris plays a critical role in face recognition from early in development. Specifically, we tested face discrimination in 7- and 8-month-old infants while independently manipulating the contrast polarity of the eye region and of the rest of the face. This gave four face contrast polarity conditions: fully positive condition, fully negative condition, positive face with negated eyes ("negative eyes") condition, and negated face with positive eyes ("positive eyes") condition. In a familiarization and novelty preference procedure, we found that 7- and 8-month-olds could discriminate between faces only when the contrast polarity of the eyes was preserved (positive) and that this did not depend on the contrast polarity of the rest of the face. This demonstrates the critical role of eye contrast polarity for face recognition in 7- and 8-month-olds and is consistent with previous findings for adults. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Stress reaction process-based hierarchical recognition algorithm for continuous intrusion events in optical fiber prewarning system

    NASA Astrophysics Data System (ADS)

    Qu, Hongquan; Yuan, Shijiao; Wang, Yanping; Yang, Dan

    2018-04-01

To improve the recognition performance of optical fiber prewarning systems (OFPS), this study proposed a hierarchical recognition algorithm (HRA). Traditional methods increase the recognition rate by employing a single complex algorithm with multiple extracted features and complex classifiers, at a considerable cost in recognition speed. HRA instead takes advantage of the continuity of intrusion events, creating a staged recognition flow inspired by the stress reaction, and is thereby expected to achieve high recognition accuracy with less time consumption. First, this work analyzed the continuity of intrusion events and then presented the algorithm based on the mechanism of stress reaction. Finally, it verified the time consumption through theoretical analysis and experiments, and the recognition accuracy was obtained through experiments. Experimental results show that the processing speed of HRA is 3.3 times faster than that of a traditional complicated algorithm, with a similar recognition rate of 98%. The study is of great significance to fast intrusion event recognition in OFPS.
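The staged flow described, a cheap screening stage followed by the full classifier only when the screen fires, can be sketched generically; the energy feature and threshold below are placeholders, not the paper's actual features:

```python
def hierarchical_recognize(frames, energy_threshold, full_classifier):
    """Two-stage recognition in the spirit of a staged (stress-reaction) flow.

    Stage 1: a cheap mean-energy test rejects quiet frames outright.
    Stage 2: the expensive classifier runs only on frames that pass,
    which is where the speedup over a flat pipeline comes from.
    """
    results = []
    for frame in frames:
        energy = sum(x * x for x in frame) / len(frame)
        if energy < energy_threshold:
            results.append("no-event")        # fast path, classifier never called
        else:
            results.append(full_classifier(frame))
    return results
```

Because most fiber segments are quiet most of the time, the fraction of frames reaching stage 2 is small, which is consistent with the reported speedup without loss of recognition rate.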

  2. Reliability and Validity of Curriculum-Based Informal Reading Inventories.

    ERIC Educational Resources Information Center

    Fuchs, Lynn; And Others

    A study was conducted to explore the reliability and validity of three prominent procedures used in informal reading inventories (IRIs): (1) choosing a 95% word recognition accuracy standard for determining student instructional level, (2) arbitrarily selecting a passage to represent the difficulty level of a basal reader, and (3) employing…

  3. Monitoring Reading Behavior: Criteria for Performance.

    ERIC Educational Resources Information Center

    Powell, William R.

    Effective use of the informal reading inventory (IRI) depends upon the criteria used in determining the functional reading levels and more specifically the word recognition criteria employed in describing acceptable limits of oral reading behavior. The author of this paper looks at the diverse sets of criteria commonly used, the problems…

  4. Alexithymia and Mood: Recognition of Emotion in Self and Others.

    PubMed

    Lyvers, Michael; Kohlsdorf, Susan M; Edwards, Mark S; Thorberg, Fred Arne

    2017-01-01

The present study explored relationships between alexithymia-a trait characterized by difficulties identifying and describing feelings and an external thinking style-and negative moods, negative mood regulation expectancies, facial recognition of emotions, emotional empathy, and alcohol consumption. The sample consisted of 102 university (primarily psychology) students (13 men, 89 women) aged 18 to 50 years (M = 22.18 years). Participants completed the Toronto Alexithymia Scale (TAS-20), Negative Mood Regulation Scale (NMRS), Depression Anxiety Stress Scales (DASS-21), Reading the Mind in the Eyes Test (RMET), Interpersonal Reactivity Index (IRI), and Alcohol Use Disorders Identification Test (AUDIT). Results were consistent with previous findings of positive relationships of TAS-20 alexithymia scores with both alcohol use (AUDIT) and negative moods (DASS-21) and a negative relationship with emotional self-regulation as indexed by NMRS. Predicted negative associations of both overall TAS-20 alexithymia scores and the externally oriented thinking (EOT) subscale of the TAS-20 with both RMET facial recognition of emotions and the empathic concern (EC) subscale of the IRI were supported. The mood self-regulation index NMRS fully mediated the relationship between alexithymia and negative moods. Hierarchical linear regressions revealed that, after other relevant variables were controlled for, the EOT subscale of the TAS-20 predicted RMET and EC. The concrete thinking or EOT facet of alexithymia thus appears to be associated with diminished facial recognition of emotions and reduced emotional empathy. The negative moods associated with alexithymia appear to be linked to subjective difficulties in self-regulation of emotions.

  5. Recognition of strong earthquake-prone areas with a single learning class

    NASA Astrophysics Data System (ADS)

    Gvishiani, A. D.; Agayan, S. M.; Dzeboev, B. A.; Belov, I. O.

    2017-05-01

    This article presents a new Barrier recognition algorithm with learning, designed for recognition of earthquake-prone areas. In comparison to the Crust (Kora) algorithm, used by the classical EPA approach, the Barrier algorithm proceeds with learning just on one "pure" high-seismic class. The new algorithm operates in the space of absolute values of the geological-geophysical parameters of the objects. The algorithm is used for recognition of earthquake-prone areas with M ≥ 6.0 in the Caucasus region. Comparative analysis of the Crust and Barrier algorithms justifies their productive coherence.

  6. Immune Reconstitution Reactions in Human Immunodeficiency Virus–Negative Patients

    PubMed Central

    Scharschmidt, Tiffany C.; Amerson, Erin H.; Rosenberg, Oren S.; Jacobs, Richard A.; McCalmont, Timothy H.; Shinkai, Kanade

    2013-01-01

Background Immune reconstitution inflammatory syndrome (IRIS) is a phenomenon initially described in patients with human immunodeficiency virus. Upon initiation of combination antiretroviral therapy, recovery of cellular immunity triggers inflammation to a preexisting infection or antigen that causes paradoxical worsening of clinical disease. A similar phenomenon can occur in human immunodeficiency virus–negative patients, including pregnant women, neutropenic hosts, solid-organ or stem cell transplant recipients, and patients receiving tumor necrosis factor inhibitors. Observations We report a case of leprosy unmasking and downgrading reaction after stem cell transplantation that highlights some of the challenges inherent to the diagnosis of IRIS, especially in patients without human immunodeficiency virus infection, as well as review the spectrum of previously reported cases of IRIS reactions in this population. Conclusions The mechanism of immune reconstitution reactions is complex and variable, depending on the underlying antigen and the mechanism of immunosuppression or shift in immune status. Use of the term IRIS can aid our recognition of an important phenomenon that occurs in the setting of immunosuppression or shifts in immunity but should not deter us from thinking critically about the distinct processes that underlie this heterogeneous group of conditions. PMID:23324760

  7. Security enhanced BioEncoding for protecting iris codes

    NASA Astrophysics Data System (ADS)

    Ouda, Osama; Tsumura, Norimichi; Nakaguchi, Toshiya

    2011-06-01

Improving the security of biometric template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a recently proposed template protection scheme, based on the concept of cancelable biometrics, for protecting biometric templates represented as binary strings such as iris codes. The main advantage of BioEncoding over other template protection schemes is that it does not require user-specific keys and/or tokens during verification. In addition, it satisfies all the requirements of the cancelable biometrics construct without deteriorating the matching accuracy. However, although it has been shown that BioEncoding is secure enough against simple brute-force search attacks, the security of BioEncoded templates against smarter attacks, such as record multiplicity attacks, has not been sufficiently investigated. In this paper, a rigorous security analysis of BioEncoding is presented. First, the resistance of BioEncoded templates against brute-force attacks is revisited thoroughly. Second, we show that although the cancelable transformation employed in BioEncoding might be non-invertible for a single protected template, the original iris code could be inverted by correlating several templates used in different applications but created from the same iris. Accordingly, we propose an important modification to the BioEncoding transformation process in order to hinder attackers from exploiting this type of attack. The effectiveness of adopting the suggested modification is validated and its impact on the matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed approach and show that it preserves the matching accuracy of the unprotected iris recognition system.

  8. A Survey on Sentiment Classification in Face Recognition

    NASA Astrophysics Data System (ADS)

    Qian, Jingyu

    2018-01-01

    Face recognition has been an important topic for both industry and academia for a long time. K-means clustering, autoencoder, and convolutional neural network, each representing a design idea for face recognition method, are three popular algorithms to deal with face recognition problems. It is worthwhile to summarize and compare these three different algorithms. This paper will focus on one specific face recognition problem-sentiment classification from images. Three different algorithms for sentiment classification problems will be summarized, including k-means clustering, autoencoder, and convolutional neural network. An experiment with the application of these algorithms on a specific dataset of human faces will be conducted to illustrate how these algorithms are applied and their accuracy. Finally, the three algorithms are compared based on the accuracy result.
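Of the three algorithms compared, k-means clustering is the simplest to show in code; a minimal sketch on toy 2-D data, illustrative only and not the paper's face-image pipeline:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]   # init from data points
    for _ in range(iters):
        # squared distance of every point to every centroid, then nearest
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(0)
    return labels, centroids
```

For sentiment classification from face images, the same loop would run on feature vectors extracted from the images rather than raw 2-D points, with clusters mapped to sentiment labels afterwards.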

  9. Identifying injection drug use and estimating population size of people who inject drugs using healthcare administrative datasets.

    PubMed

    Janjua, Naveed Zafar; Islam, Nazrul; Kuo, Margot; Yu, Amanda; Wong, Stanley; Butt, Zahid A; Gilbert, Mark; Buxton, Jane; Chapinal, Nuria; Samji, Hasina; Chong, Mei; Alvarez, Maria; Wong, Jason; Tyndall, Mark W; Krajden, Mel

    2018-05-01

Large linked healthcare administrative datasets could be used to monitor programs providing prevention and treatment services to people who inject drugs (PWID). However, diagnostic codes in administrative datasets do not differentiate non-injection from injection drug use (IDU). We validated algorithms based on diagnostic codes and prescription records representing IDU in administrative datasets against interview-based IDU data. The British Columbia Hepatitis Testers Cohort (BC-HTC) includes ∼1.7 million individuals tested for HCV/HIV or reported HBV/HCV/HIV/tuberculosis cases in BC from 1990 to 2015, linked to administrative datasets including physician visit, hospitalization and prescription drug records. IDU, assessed through interviews as part of enhanced surveillance at the time of HIV or HCV/HBV diagnosis for a subset of cases included in the BC-HTC (n = 6559), was used as the gold standard. ICD-9/ICD-10 codes for IDU and injecting-related infections (IRI) were grouped with records of opioid substitution therapy (OST) into multiple IDU algorithms in administrative datasets. We assessed the performance of the IDU algorithms by calculating sensitivity, specificity, positive predictive, and negative predictive values. Sensitivity was highest (90-94%) and specificity lowest (42-73%) for algorithms based either on IDU or on IRI and drug misuse codes. Algorithms requiring both drug misuse and IRI had lower sensitivity (57-60%) and higher specificity (90-92%). An optimal sensitivity and specificity combination was found with two medical visits or a single hospitalization for injectable drugs, with OST (83%/82%) and without OST (78%/83%), respectively. Based on algorithms that included two medical visits, a single hospitalization or OST records, there were 41,358 recent PWID in BC (1.2% of individuals aged 11-65 years) based on health encounters during a 3-year period (2013-2015). 
Algorithms for identifying PWID using diagnostic codes in linked administrative data could be used for tracking the progress of programs aimed at PWID. With population-based datasets, this tool can be used to inform much-needed estimates of PWID population size. Copyright © 2018 Elsevier B.V. All rights reserved.
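The validation metrics reported above follow directly from the 2×2 confusion counts against the gold standard; a minimal sketch (the example counts are invented for illustration):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion counts,
    as used to validate case-finding algorithms against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),   # share of true cases the algorithm flags
        "specificity": tn / (tn + fp),   # share of non-cases correctly excluded
        "ppv": tp / (tp + fp),           # share of flagged records that are cases
        "npv": tn / (tn + fn),           # share of unflagged records that are not
    }

# e.g. an algorithm flagging 900 of 1000 true IDU cases and
# wrongly flagging 500 of 5000 non-cases (hypothetical numbers):
m = diagnostic_metrics(tp=900, fp=500, fn=100, tn=4500)
```

Note how PPV depends on prevalence: with the same sensitivity and specificity, a rarer condition yields a lower PPV, which is why the abstract reports all four values rather than accuracy alone.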

  10. Seasonal and Inter-Annual Patterns of Chlorophyll and Phytoplankton Community Structure in Monterey Bay, CA Derived from AVIRIS Data During the 2013-2015 HyspIRI Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Palacios, S. L.; Thompson, D. R.; Kudela, R. M.; Negrey, K.; Guild, L. S.; Gao, B. C.; Green, R. O.; Torres-Perez, J. L.

    2016-02-01

    There is a need in the ocean color community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand ocean biodiversity, track energy flow through ecosystems, and identify and monitor for harmful algal blooms. Imaging spectrometer measurements enable the use of sophisticated spectroscopic algorithms for applications such as differentiating among coral species and discriminating phytoplankton taxa. These advanced algorithms rely on the fine scale, subtle spectral shape of the atmospherically corrected remote sensing reflectance (Rrs) spectrum of the ocean surface. Consequently, these algorithms are sensitive to inaccuracies in the retrieved Rrs spectrum that may be related to the presence of nearby clouds, inadequate sensor calibration, low sensor signal-to-noise ratio, glint correction, and atmospheric correction. For the HyspIRI Airborne Campaign, flight planning considered optimal weather conditions to avoid flights with significant cloud/fog cover. Although best suited for terrestrial targets, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) has enough signal for some coastal chlorophyll algorithms and meets sufficient calibration requirements for most channels. The coastal marine environment has special atmospheric correction needs due to error introduced by aerosols and terrestrially sourced atmospheric dust and riverine sediment plumes. For this HyspIRI campaign, careful attention has been given to the correction of AVIRIS imagery of the Monterey Bay to optimize ocean Rrs retrievals to estimate chlorophyll (OC3) and phytoplankton functional type (PHYDOTax) data products. This new correction method has been applied to several image collection dates during two oceanographic seasons in 2013 and 2014. These two periods are dominated by either diatom blooms or red tides. 
Results to be presented include chlorophyll and phytoplankton community structure and in-water validation data for these dates during the two seasons.

  11. The Validity of the Instructional Reading Level.

    ERIC Educational Resources Information Center

    Powell, William R.

    Presented is a critical inquiry about the product of the informal reading inventory (IRI) and about some of the elements used in the process of determining that product. Recent developments on this topic are briefly reviewed. Questions are raised concerning what is a suitable criterion level for word recognition. The original criterion of 95…

  12. Handwritten digits recognition based on immune network

    NASA Astrophysics Data System (ADS)

    Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe

    2011-11-01

With the development of society, handwritten digit recognition techniques have been widely applied in production and daily life, yet digit recognition remains a difficult problem in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and features are extracted. Based on these features, a novel immune network classification algorithm is designed and applied to handwritten digit recognition. The proposed algorithm combines Jerne's immune network model for feature selection with the KNN method for classification. Its characteristic is a novel network with parallel computation and learning. The performance of the proposed method is evaluated on the handwritten digit dataset MNIST and compared with other recognition algorithms: KNN, ANN and SVM. The results show that the novel classification algorithm based on an immune network gives promising performance and stable behavior for handwritten digit recognition.

  13. Automatic Event Detection in Search for Inter-Moss Loops in IRIS Si IV Slit-Jaw Images

    NASA Technical Reports Server (NTRS)

    Fayock, Brian; Winebarger, Amy R.; De Pontieu, Bart

    2015-01-01

The high-resolution capabilities of the Interface Region Imaging Spectrometer (IRIS) mission have allowed the exploration of the finer details of the solar magnetic structure from the chromosphere to the lower corona that have previously been unresolved. Of particular interest to us are the relatively short-lived, low-lying magnetic loops that have foot points in neighboring moss regions. These inter-moss loops have also appeared in several AIA pass bands, which are generally associated with temperatures that are at least an order of magnitude higher than that of the Si IV emission seen in the 1400 angstrom pass band of IRIS. While the emission lines seen in these pass bands can be associated with a range of temperatures, the simultaneous appearance of these loops in IRIS 1400 and AIA 171, 193, and 211 suggests that they are not in ionization equilibrium. To study these structures in detail, we have developed a series of algorithms to automatically detect signal brightenings, or events, on a pixel-by-pixel basis and group them together as structures for each of the above data sets. These algorithms have successfully picked out all activity fitting certain adjustable criteria. The resulting groups of events are then statistically analyzed to determine which characteristics can be used to distinguish the inter-moss loops from all other structures. While a few characteristic histograms reveal that manually selected inter-moss loops lie outside the norm, a combination of several characteristics will need to be used to determine the statistical likelihood that a group of events can be automatically categorized as a loop of interest. The goal of this project is to be able to automatically pick out inter-moss loops from an entire data set and calculate the characteristics that have previously been determined manually, such as length, intensity, and lifetime. We will discuss the algorithms, preliminary results, and current progress of automatic characterization.
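The pixel-by-pixel event detection described can be sketched as a threshold-above-baseline test with a minimum-duration criterion; the specific threshold and duration below stand in for the adjustable criteria mentioned in the text:

```python
import numpy as np

def detect_events(cube, nsigma=3.0, min_len=2):
    """Flag brightening events per pixel in a (t, h, w) intensity cube.

    A pixel is 'on' when it exceeds its own temporal mean by nsigma
    standard deviations; runs shorter than min_len frames are dropped.
    Returns a boolean cube of the same shape marking event frames.
    """
    mean = cube.mean(axis=0)
    std = cube.std(axis=0) + 1e-12          # guard against zero variance
    on = cube > mean + nsigma * std
    # enforce a minimum duration along the time axis
    keep = np.zeros_like(on)
    t = cube.shape[0]
    for i in range(t - min_len + 1):
        run = on[i:i + min_len].all(axis=0)
        keep[i:i + min_len] |= run
    return keep
```

Grouping the surviving event pixels into spatial structures (the next step in the text) would then be a connected-component pass over each frame of the returned mask.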

  14. Onboard Science and Applications Algorithm for Hyperspectral Data Reduction

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel

    2012-01-01

    An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a Hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also has applications to vegetation, coastal flooding, dust, and snow/ice applications. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track biological activity such as harmful algal blooms. Measuring surface water extent to track flooding is another rapid response product leveraging VSWIR spectral information.
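
    The hot-pixel triggering and spectral subsampling concept can be illustrated with a short sketch. The fixed brightness-temperature threshold and array shapes below are hypothetical; the actual HyspIRI concept performs a table-driven temperature inversion over several TIR bands rather than a single-band cut.

```python
import numpy as np

def select_hot_spectra(tir_band, full_cube, t_thresh=400.0):
    """Bandwidth-reduction sketch: flag 'hot' pixels in a single TIR band
    and return only their coordinates and full spectra for downlink.

    tir_band:  (y, x) brightness temperatures in kelvin (illustrative)
    full_cube: (bands, y, x) co-registered full spectral cube
    """
    hot = tir_band > t_thresh                 # thermal-anomaly trigger
    ys, xs = np.nonzero(hot)
    spectra = full_cube[:, ys, xs].T          # (n_hot, n_bands)
    return list(zip(ys.tolist(), xs.tolist())), spectra
```

    Only the handful of triggered pixels carry full spectra to the ground, which is the point of the onboard reduction.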

  15. Human activity recognition based on feature selection in smart home using back-propagation algorithm.

    PubMed

    Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei

    2014-09-01

    In this paper, the back-propagation (BP) algorithm has been used to train a feedforward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection of observed motion sensor events is discussed and tested. The human activity recognition performance of the neural network trained with the BP algorithm has then been evaluated and compared with that of other probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy. The selection of unsuitable feature datasets increases the computational complexity and degrades the activity recognition accuracy. Furthermore, the neural network using the BP algorithm has relatively better human activity recognition performance than the NB classifier and HMM. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
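
    A minimal numpy sketch of back-propagation for a one-hidden-layer feedforward network is shown below. It is illustrative only: the network size, learning rate, and toy XOR data are assumptions, and the paper's sensor-event feature selection step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, y, hidden=8, lr=0.3, epochs=2000):
    """Train a one-hidden-layer network with plain back-propagation."""
    W1 = rng.normal(0.0, 1.0, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 1.0, (hidden, 1))
    losses = []
    for _ in range(epochs):
        h = sigmoid(X @ W1)                    # forward pass
        out = sigmoid(h @ W2)
        err = out - y
        losses.append(float(np.mean(err ** 2)))
        delta2 = err * out * (1.0 - out)       # backward pass
        delta1 = (delta2 @ W2.T) * h * (1.0 - h)
        W2 -= lr * (h.T @ delta2)              # gradient-descent updates
        W1 -= lr * (X.T @ delta1)
    return W1, W2, losses

def predict(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)
```

    In the paper's setting, X would hold selected motion-sensor features and y the activity labels; biases and a softmax output layer are left out for brevity.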

  16. The Pandora multi-algorithm approach to automated pattern recognition in LAr TPC detectors

    NASA Astrophysics Data System (ADS)

    Marshall, J. S.; Blake, A. S. T.; Thomson, M. A.; Escudero, L.; de Vries, J.; Weston, J.; MicroBooNE Collaboration

    2017-09-01

    The development and operation of Liquid Argon Time Projection Chambers (LAr TPCs) for neutrino physics has created a need for new approaches to pattern recognition, in order to fully exploit the superb imaging capabilities offered by this technology. The Pandora Software Development Kit provides functionality to aid the process of designing, implementing and running pattern recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition: individual algorithms each address a specific task in a particular topology; a series of many tens of algorithms then carefully builds up a picture of the event. The input to the Pandora pattern recognition is a list of 2D Hits. The output from the chain of over 70 algorithms is a hierarchy of reconstructed 3D Particles, each with an identified particle type, vertex and direction.
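
    The multi-algorithm philosophy, in which many narrowly-scoped algorithms successively refine a single event, can be caricatured in a few lines. The Event container, the proximity rule, and the algorithm names below are invented for illustration and are not the Pandora API.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Event:
    """Toy reconstruction state: 2-D hits and candidate clusters."""
    hits: List[Tuple[float, float]]
    clusters: List[List[Tuple[float, float]]] = field(default_factory=list)

def seed_clusters(event: Event) -> None:
    """One narrowly-scoped algorithm: group hits separated by < 1.0 in x."""
    for hit in sorted(event.hits):
        if event.clusters and hit[0] - event.clusters[-1][-1][0] < 1.0:
            event.clusters[-1].append(hit)
        else:
            event.clusters.append([hit])

def drop_isolated(event: Event) -> None:
    """A second algorithm: discard single-hit 'noise' clusters."""
    event.clusters = [c for c in event.clusters if len(c) > 1]

def run_chain(event: Event, algorithms: List[Callable[[Event], None]]) -> Event:
    """Driver: each algorithm performs one task and hands the event on."""
    for algorithm in algorithms:
        algorithm(event)
    return event
```

    The real chain contains tens of such steps, each safe to develop and tune in isolation, which is the design point of the multi-algorithm approach.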

  17. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. 
P.; Zennamo, J.; Zhang, C.

    2018-01-01

    The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  18. Automated Planning of Science Products Based on Nadir Overflights and Alerts for Onboard and Ground Processing

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; McLaren, David A.; Rabideau, Gregg R.; Mandl, Daniel; Hengemihle, Jerry

    2013-01-01

    A set of automated planning algorithms is the current operations baseline approach for the Intelligent Payload Module (IPM) of the proposed Hyperspectral Infrared Imager (HyspIRI) mission. For this operations concept, there are only local (i.e., non-depletable) operations constraints, such as real-time downlink and onboard memory, and the forward sweeping algorithm is optimal for determining which science products should be generated onboard and on the ground based on geographical overflights, science priorities, alerts, requests, and onboard and ground processing constraints. This automated planning approach was developed for the HyspIRI IPM concept. The HyspIRI IPM is proposed to use an X-band Direct Broadcast (DB) capability that would enable data to be delivered to ground stations virtually as it is acquired. However, the HyspIRI VSWIR and TIR instruments will produce approximately 1 Gbps of data, while the DB capability is 15 Mbps, an approximately 60X oversubscription. In order to address this mismatch, this innovation determines which data to downlink based on both the type of surface the spacecraft is overflying and the onboard processing of data to detect events. For example, when the spacecraft is overflying polar regions, it might downlink a snow/ice product. Additionally, the onboard software will search for thermal signatures indicative of a volcanic event or wildfire and downlink summary information (extent, spectra) when detected, thereby reducing data volume. The planning system described above automatically generated the IPM mission plan based on requested products, the overflight regions, and available resources.
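
    The core selection step, choosing which products fit within a fixed downlink budget in priority order, can be sketched as a greedy pass. This is a simplification of the paper's forward-sweeping planner, and the request fields below are illustrative.

```python
def plan_downlink(requests, capacity_mbit):
    """Greedy sketch of priority-ordered product selection under a fixed
    (non-depletable) downlink budget. Lower priority value = more urgent."""
    chosen, used = [], 0.0
    for req in sorted(requests, key=lambda r: r["priority"]):
        if used + req["size_mbit"] <= capacity_mbit:   # fits in the pass?
            chosen.append(req["name"])
            used += req["size_mbit"]
    return chosen, used
```

    The real planner sweeps forward over overflight windows and also respects onboard computation and setup constraints; only the budget-limited selection is shown here.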

  19. Alexithymia, empathy, emotion identification and social inference in anorexia nervosa: A case-control study.

    PubMed

    Gramaglia, Carla; Ressico, Francesca; Gambaro, Eleonora; Palazzolo, Anna; Mazzarino, Massimiliano; Bert, Fabrizio; Siliquini, Roberta; Zeppegno, Patrizia

    2016-08-01

    Alexithymia, difficulties in facial emotion recognition, and poor socio-relational skills are typical of anorexia nervosa (AN). We assessed patients with AN and healthy controls (HCs) with mixed stimuli: questionnaires (Toronto Alexithymia Scale-TAS, Interpersonal Reactivity Index-IRI), photographs (Facial Emotion Identification Test-FEIT) and dynamic images (The Awareness of Social Inference Test-TASIT). TAS and IRI Personal Distress (PD) scores were higher in AN than in HCs. Few or no differences emerged on the FEIT and TASIT, respectively. Larger effect sizes were found for the TAS results. Despite higher levels of alexithymia, patients with AN seem to properly acknowledge others' emotions while being inhibited in the expression of their own. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. A GPU-paralleled implementation of an enhanced face recognition algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Liu, Xiyang; Shao, Shuai; Zan, Jiguo

    2013-03-01

    Face recognition based on compressed sensing and sparse representation has attracted considerable attention in recent years. This scheme improves the recognition rate as well as the anti-noise capability. However, the computational cost is expensive and has become a main restricting factor for real-world applications. In this paper, we introduce a GPU-accelerated hybrid variant of a face recognition algorithm, named the parallel face recognition algorithm (pFRA). We describe here how to carry out a parallel optimization design to take full advantage of the many-core structure of a GPU. The pFRA is tested and compared with several other implementations under different data sample sizes. Our pFRA, implemented with an NVIDIA GPU and the Compute Unified Device Architecture (CUDA) programming model, achieves a significant speedup over traditional CPU implementations.
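
    The sparse-representation classification that such an algorithm accelerates can be approximated in a few lines. Below, a plain least-squares solve per class stands in for the sparsity-constrained solver used in the literature; it is exactly this independent per-class work that a GPU implementation would distribute across cores. This is a simplified sketch, not the paper's method.

```python
import numpy as np

def classify_by_residual(y, class_dicts):
    """Represent probe y with each class's training dictionary and pick the
    class with the smallest reconstruction residual.

    y: (dim,) probe feature vector
    class_dicts: list of (dim, n_samples) dictionaries, one per class
    """
    residuals = []
    for D in class_dicts:
        coeffs, *_ = np.linalg.lstsq(D, y, rcond=None)  # per-class solve
        residuals.append(np.linalg.norm(y - D @ coeffs))
    return int(np.argmin(residuals))
```

    Each loop iteration is independent, so a parallel variant can assign one class (or one probe) per GPU block.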

  1. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor for resolving this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. In order to improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in all four improved algorithms. In order to improve the rotation invariance of the slice image, three new feature descriptors (the feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied in the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively with respect to the three invariances, recognition rate, and execution time. The final experimental results show that the improvements in these four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the recognition systems, and that the four improved recognition systems perform differently under different conditions.
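
    A toy slice-image-style descriptor is sketched below. The construction, binning points into z-slices and rasterizing each slice into a small grid, is only a guess at the general flavor and not the paper's exact descriptor. The point of mean-value detection over maximum-value detection is that uniformly densifying the point cloud (higher sensor resolution) leaves a per-cell mean unchanged.

```python
import numpy as np

def slice_image(points, n_slices=2, grid=4):
    """Cut a point cloud into z-slices and rasterize each slice into a
    (grid x grid) image whose pixel value is the MEAN z of the points in
    that cell (rather than the max), for resolution tolerance."""
    pts = np.asarray(points, dtype=float)
    mins = pts.min(axis=0)
    maxs = pts.max(axis=0) + 1e-9            # keep max point inside the grid
    idx = ((pts - mins) / (maxs - mins) * [grid, grid, n_slices]).astype(int)
    acc = np.zeros((n_slices, grid, grid))
    cnt = np.zeros_like(acc)
    np.add.at(acc, (idx[:, 2], idx[:, 1], idx[:, 0]), pts[:, 2])
    np.add.at(cnt, (idx[:, 2], idx[:, 1], idx[:, 0]), 1.0)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1.0), 0.0)
```

    A max- or count-based cell value would change when every point is sampled twice, whereas the mean does not.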

  2. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  3. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2018-01-29

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  4. Hardware friendly probabilistic spiking neural network with long-term and short-term plasticity.

    PubMed

    Hsieh, Hung-Yi; Tang, Kea-Tiong

    2013-12-01

    This paper proposes a probabilistic spiking neural network (PSNN) with unimodal weight distribution, possessing long- and short-term plasticity. The proposed algorithm is derived from both an arithmetic gradient descent calculation and bioinspired algorithms. The algorithm is benchmarked on the Iris and Wisconsin breast cancer (WBC) data sets. The network features fast convergence speed and high accuracy. In the experiment, the PSNN took no more than 40 epochs to converge. The average testing accuracy for the Iris and WBC data is 96.7% and 97.2%, respectively. To test the usefulness of the PSNN in real-world applications, the PSNN was also tested with odor data collected by our self-developed electronic nose (e-nose). Compared with the algorithm (K-nearest neighbor) that has the highest classification accuracy in the e-nose for the same odor data, the classification accuracy of the PSNN is only 1.3% lower, but the memory requirement can be reduced by at least 40%. All the experiments suggest that the PSNN is hardware friendly. First, it requires only nine-bit weight resolution for training and testing. Second, the PSNN can learn complex data sets with a small number of neurons, which in turn reduces the cost of VLSI implementation. In addition, the algorithm is insensitive to synaptic noise and to the parameter variation induced by VLSI fabrication. Therefore, the algorithm can be implemented in either software or hardware, making it suitable for wider application.

  5. Score-Level Fusion of Phase-Based and Feature-Based Fingerprint Matching Algorithms

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Morita, Ayumi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper proposes an efficient fingerprint recognition algorithm combining phase-based image matching and feature-based matching. In our previous work, we have already proposed an efficient fingerprint recognition algorithm using Phase-Only Correlation (POC), and developed commercial fingerprint verification units for access control applications. The use of the Fourier phase information of fingerprint images makes it possible to achieve robust recognition for weakly impressed, low-quality fingerprint images. This paper presents an approach for improving the performance of POC-based fingerprint matching by combining it with feature-based matching, where the feature-based matching is introduced in order to improve recognition efficiency for images with nonlinear distortion. Experimental evaluation using two different types of fingerprint image databases demonstrates the efficient recognition performance of the combination of the POC-based algorithm and the feature-based algorithm.
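
    Phase-Only Correlation itself is compact enough to sketch directly: the cross-spectrum of the two images is normalized to unit magnitude so that only the Fourier phase contributes, and the inverse transform then concentrates into a sharp peak at the translation between them. This is a generic POC sketch, not the authors' implementation (which adds band limiting and sub-pixel peak fitting, among other refinements).

```python
import numpy as np

def poc(f, g):
    """Phase-Only Correlation surface of two equally-sized images."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    # keep only phase: unit-magnitude cross-spectrum
    return np.real(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))

def peak_shift(f, g):
    """Location of the POC peak: the circular shift between f and g."""
    r = poc(f, g)
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    return dy, dx
```

    For matching, the height of the peak (rather than its location) serves as the similarity score between two fingerprint images.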

  6. Model and algorithmic framework for detection and correction of cognitive errors.

    PubMed

    Feki, Mohamed Ali; Biswas, Jit; Tolstikov, Andrei

    2009-01-01

    This paper outlines an approach that we are taking for elder-care applications in the smart home, involving cognitive errors and their compensation. Our approach involves high level modeling of daily activities of the elderly by breaking down these activities into smaller units, which can then be automatically recognized at a low level by collections of sensors placed in the homes of the elderly. This separation allows us to employ plan recognition algorithms and systems at a high level, while developing stand-alone activity recognition algorithms and systems at a low level. It also allows the mixing and matching of multi-modality sensors of various kinds that go to support the same high level requirement. Currently our plan recognition algorithms are still at a conceptual stage, whereas a number of low level activity recognition algorithms and systems have been developed. Herein we present our model for plan recognition, providing a brief survey of the background literature. We also present some concrete results that we have achieved for activity recognition, emphasizing how these results are incorporated into the overall plan recognition system.

  7. Physical environment virtualization for human activities recognition

    NASA Astrophysics Data System (ADS)

    Poshtkar, Azin; Elangovan, Vinayak; Shirkhodaie, Amir; Chan, Alex; Hu, Shuowen

    2015-05-01

    Human activity recognition research relies heavily on extensive datasets to verify and validate the performance of activity recognition algorithms. However, obtaining real datasets is expensive and highly time consuming. A physics-based virtual simulation can accelerate the development of context-based human activity recognition algorithms and techniques by generating relevant training and testing videos simulating diverse operational scenarios. In this paper, we discuss in detail the requisite capabilities of a virtual environment to serve as a test bed for evaluating and enhancing activity recognition algorithms. To demonstrate the numerous advantages of virtual environment development, a newly developed virtual environment simulation modeling (VESM) environment is presented here to generate calibrated multisource imagery datasets suitable for the development and testing of recognition algorithms for context-based human activities. The VESM environment serves as a versatile test bed to generate a vast amount of realistic data for the training and testing of sensor processing algorithms. To demonstrate the effectiveness of the VESM environment, we present various simulated scenarios and processed results to infer proper semantic annotations from the high-fidelity imagery data for human-vehicle activity recognition under different operational contexts.

  8. Vehicle logo recognition using multi-level fusion model

    NASA Astrophysics Data System (ADS)

    Ming, Wei; Xiao, Jianli

    2018-04-01

    Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm. It has a hierarchical framework consisting of two fusion levels. At the first level, a feature fusion model is employed to map the original features to a higher-dimensional feature space, in which the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to improve the accuracy and robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, which demonstrate that the proposed algorithm can achieve high recognition accuracy and work robustly.
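
    The second-level fusion can be illustrated with a minimal weighted-voting sketch: per-classifier class-score vectors are combined with fixed weights and the highest combined score wins. The weights and score layout are illustrative; the paper's actual weighting scheme is not reproduced here.

```python
import numpy as np

def weighted_vote(score_vectors, weights):
    """Combine class-score vectors from several classifiers with fixed
    weights and return the winning class index."""
    combined = sum(w * np.asarray(s, dtype=float)
                   for w, s in zip(weights, score_vectors))
    return int(np.argmax(combined))
```

    Weights would typically reflect each classifier's validation accuracy, so a stronger classifier sways the vote more.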

  9. SOME DIFFERENCES BETWEEN SILENT AND ORAL READING RESPONSES ON A STANDARDIZED READING TEST.

    ERIC Educational Resources Information Center

    LEIBERT, ROBERT E.

    A STUDY DESIGNED TO IDENTIFY SOME OF THE DIFFERENCES BETWEEN THE RESPONSES ON THE GATES ADVANCED PRIMARY READING TEST AND THE KINDS OF RESPONSES OBTAINED FROM AN INFORMAL READING INVENTORY (IRI) IS REPORTED. SUBJECTS WERE 65 THIRD-GRADE PUPILS IN WEST BABYLON, NEW YORK. PUPILS AT THE SAME INSTRUCTIONAL LEVEL SCORED HIGHER IN THE RECOGNITION TEST…

  10. Towards Contactless Silent Speech Recognition Based on Detection of Active and Visible Articulators Using IR-UWB Radar

    PubMed Central

    Shin, Young Hoon; Seo, Jiwon

    2016-01-01

    People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker’s vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing. PMID:27801867
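
    The speech activity detection step can be illustrated with a short-time-energy sketch: frames whose energy exceeds a multiple of the median frame energy are marked as speech, and only those segments proceed to recognition. This threshold rule is an assumption for illustration; the paper's detector operating on radar returns is more elaborate.

```python
import numpy as np

def detect_speech_segments(signal, frame=256, thresh_ratio=2.0):
    """Flag frames whose short-time energy exceeds thresh_ratio times the
    median frame energy; returns one boolean per frame."""
    sig = np.asarray(signal, dtype=float)
    n = len(sig) // frame
    frames = sig[: n * frame].reshape(n, frame)
    energy = (frames ** 2).sum(axis=1)
    return energy > thresh_ratio * np.median(energy)
```

    Gating recognition on detected segments is what lets the system process continuous input without transcribing silence.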

  11. Towards Contactless Silent Speech Recognition Based on Detection of Active and Visible Articulators Using IR-UWB Radar.

    PubMed

    Shin, Young Hoon; Seo, Jiwon

    2016-10-29

    People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker's vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing.

  12. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system.

    PubMed

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

    An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is therefore needed to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
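
    The GMM classification stage can be illustrated with a minimal two-component EM fit on a 1-D feature (for example, per-cell area or intensity). This stands in for the paper's classifier and is not their implementation; the two-stage watershed detection is omitted.

```python
import numpy as np

def fit_gmm_1d(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])        # spread the initial means
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
            / np.sqrt(2.0 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

def classify(x, pi, mu, var):
    """Assign each value to the most probable mixture component."""
    p = pi * np.exp(-0.5 * (np.asarray(x, dtype=float)[:, None] - mu) ** 2
                    / var) / np.sqrt(2.0 * np.pi * var)
    return p.argmax(axis=1)
```

    A fitted GMM is cheap to evaluate per cell, which fits the throughput argument the paper makes against heavier classifiers.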

  13. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

    An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented high speed and resolution. A significant amount of raw image data is produced. A high-speed cell recognition algorithm is, therefore, highly demanded to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates distance transform and the watershed algorithm to separate clustered cells. Finally, the cells detected are classified by GMM. We compared the performance of our algorithm with support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing the recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform.

  14. The implementation of aerial object recognition algorithm based on contour descriptor in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Babayan, Pavel; Smirnov, Sergey; Strotov, Valery

    2017-10-01

    This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models. The proposed algorithm is based on building an outer contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects. Using the 3D models, we can build a database containing training images by rendering the 3D model from viewpoints evenly distributed on a sphere. The sphere point distribution is made by the geosphere principle. The gathered training image set is used for calculating descriptors, which will be used in the recognition stage of the algorithm. The recognition stage focuses on estimating the similarity of the captured object and the reference objects by matching an observed image descriptor and the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
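
    One classic way to build an outer-contour descriptor is via Fourier descriptors, sketched below; the paper's exact descriptor is not specified here, so this is only an analogous construction. Subtracting the centroid removes translation, and keeping coefficient magnitudes normalized by the first harmonic gives tolerance to rotation, scale, and starting point.

```python
import numpy as np

def contour_descriptor(contour, n_coeffs=8):
    """Fourier-descriptor sketch of a closed contour given as (N, 2) points."""
    pts = np.asarray(contour, dtype=float)
    z = pts[:, 0] + 1j * pts[:, 1]
    c = np.fft.fft(z - z.mean())             # centroid removal -> translation
    mags = np.abs(c)                         # drop phase -> rotation/start
    return mags[1:n_coeffs + 1] / (mags[1] + 1e-12)   # scale-normalize
```

    Recognition would then compare the observed descriptor against the rendered reference descriptors, e.g. by Euclidean distance.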

  15. Chinese License Plates Recognition Method Based on A Robust and Efficient Feature Extraction and BPNN Algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue

    2018-04-01

    The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed which is based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, a candidate-region detection and segmentation method for the license plate is developed. Secondly, a new feature extraction model is designed considering three sets of features in combination. Thirdly, the license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method has increased to 95.7% and the processing time has decreased to 51.4 ms.

  16. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
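    The basic operation that lets tensor methods like TRPDA avoid flattening images into long vectors is mode-n unfolding. A minimal sketch for a 3-way tensor stored as nested lists is below; note that the column-ordering convention of an unfolding varies between papers, so this is one common choice, not necessarily TRPDA's.

```python
def unfold(t, mode):
    """Mode-n unfolding of a 3-way tensor (nested lists) into a matrix:
    mode-0 rows are indexed by i, mode-1 by j, mode-2 by k; the other
    two indices are swept to form the columns."""
    I, J, K = len(t), len(t[0]), len(t[0][0])
    if mode == 0:
        return [[t[i][j][k] for k in range(K) for j in range(J)] for i in range(I)]
    if mode == 1:
        return [[t[i][j][k] for k in range(K) for i in range(I)] for j in range(J)]
    return [[t[i][j][k] for j in range(J) for i in range(I)] for k in range(K)]

# a 2 x 2 x 2 tensor with t[i][j][k] = 100*i + 10*j + k, so entries are readable
t = [[[100 * i + 10 * j + k for k in range(2)] for j in range(2)] for i in range(2)]
m0 = unfold(t, 0)  # 2 rows of 4 entries each
```

    Spectral analysis is then done on the unfolding (or directly on the tensor), so each pixel keeps its spatial neighborhood instead of becoming an anonymous vector coordinate.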

  17. Smoothness of pavements in Connecticut (phase 2-report) data analyses and trends

    DOT National Transportation Integrated Search

    2001-06-01

    The Connecticut Department of Transportation annually collects roughness data for the entire state highway system. The data are obtained via an ARAN system and are provided in the form of IRI units processed through a quarter-car-algorithm. Research ...

  18. Multifeature-based high-resolution palmprint recognition.

    PubMed

    Dai, Jifeng; Zhou, Jie

    2011-05-01

    Palmprint is a promising biometric feature for use in access control and forensic applications. Previous research on palmprint recognition mainly concentrates on low-resolution (about 100 ppi) palmprints. But for high-security applications (e.g., forensic usage), high-resolution palmprints (500 ppi or higher) are required, from which more useful information can be extracted. In this paper, we propose a novel recognition algorithm for high-resolution palmprint. The main contributions of the proposed algorithm include the following: 1) Use of multiple features, namely, minutiae, density, orientation, and principal lines, for palmprint recognition to significantly improve the matching performance of the conventional algorithm. 2) Design of a quality-based and adaptive orientation field estimation algorithm which performs better than the existing algorithm in case of regions with a large number of creases. 3) Use of a novel fusion scheme for an identification application which performs better than conventional fusion methods, e.g., weighted sum rule, SVMs, or Neyman-Pearson rule. Besides, we analyze the discriminative power of different feature combinations and find that density is very useful for palmprint recognition. Experimental results on the database containing 14,576 full palmprints show that the proposed algorithm has achieved a good performance. In the case of verification, the recognition system's False Rejection Rate (FRR) is 16 percent, which is 17 percent lower than the best existing algorithm at a False Acceptance Rate (FAR) of 10^-5, while in the identification experiment, the rank-1 live-scan partial palmprint recognition rate is improved from 82.0 to 91.7 percent.
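    The weighted-sum rule that the paper's fusion scheme is compared against can be sketched in a few lines. The feature names, weights, and scores below are illustrative; scores are assumed to be already normalized to [0, 1], higher meaning a better match.

```python
def fuse_scores(score_lists, weights):
    """Weighted-sum fusion: one list of per-gallery-identity match
    scores per feature (e.g. minutiae, density), combined with fixed
    weights into a single fused score per identity."""
    assert len(score_lists) == len(weights)
    n = len(score_lists[0])
    fused = [0.0] * n
    for scores, w in zip(score_lists, weights):
        for i, s in enumerate(scores):
            fused[i] += w * s
    return fused

# toy scores for three gallery identities from two features
minutiae = [0.9, 0.2, 0.4]
density = [0.7, 0.3, 0.5]
fused = fuse_scores([minutiae, density], [0.6, 0.4])
best = max(range(len(fused)), key=fused.__getitem__)  # identification decision
```

    Identification then returns the identity with the highest fused score; the paper's contribution is a fusion scheme that outperforms this fixed-weight baseline.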

  19. Manual limbal markings versus iris-registration software for correction of myopic astigmatism by laser in situ keratomileusis.

    PubMed

    Shen, Elizabeth P; Chen, Wei-Li; Hu, Fung-Rong

    2010-03-01

    To compare the efficacy and safety of manual limbal markings and wavefront-guided treatment with iris-registration software in laser in situ keratomileusis (LASIK) for myopic astigmatism. National Taiwan University Hospital, Taipei, Taiwan. Eyes with myopic astigmatism had LASIK with a Technolas 217z laser. Eyes in the limbal-marking group had conventional LASIK (PlanoScan or Zyoptix tissue-saving algorithm) with manual cyclotorsional-error adjustments according to 2 limbal marks. Eyes in the iris-registration group had wavefront-guided ablation (Zyoptix) in which cyclotorsional errors were automatically detected and adjusted. Refraction, corneal topography, and visual acuity data were compared between groups. Vector analysis was performed by the Alpins method. The mean preoperative spherical equivalent (SE) was -6.64 diopters (D) +/- 1.99 (SD) in the limbal-marking group and -6.72 +/- 1.86 D in the iris-registration group (P = .92). At 6 months, the mean SE was -0.42 +/- 0.63 D and -0.47 +/- 0.62 D, respectively (P = .08). There was no statistically significant difference between groups in the astigmatism correction, success, or flattening index values using 6-month postoperative refractive data. The angle of error was within +/-10 degrees in 73% of eyes in the limbal-marking group and 75% of eyes in the iris-registration group. Manual limbal markings and iris-registration software were equally effective and safe in LASIK for myopic astigmatism, showing that checking cyclotorsion by manual limbal markings is a safe alternative when automated systems are not available. Copyright 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  20. Application of IRI-Plas in Ionospheric Tomography and HF Communication Studies with Assimilation of GPS-TEC

    NASA Astrophysics Data System (ADS)

    Arikan, Feza; Gulyaeva, Tamara; Sezen, Umut; Arikan, Orhan; Toker, Cenk; Hakan Tuna, MR.; Erdem, Esra

    2016-07-01

    International Reference Ionosphere (IRI) is the most widely acknowledged climatic model of the ionosphere; it provides the electron density profile and hourly, monthly-median values of the critical layer parameters of the ionosphere for a desired location, date and time, between 60 and 2,000 km altitude. IRI is also accepted as the International Standard Ionosphere model. Recently, the IRI model was extended to the Global Positioning System (GPS) satellite orbital range of 20,000 km. The new version is called IRI-Plas and it can be obtained from http://ftp.izmiran.ru/pub/izmiran/SPIM/. A user-friendly online version is also provided at www.ionolab.org as a space weather service. Total Electron Content (TEC), which is defined as the line integral of electron density on a given ray path, is an observable parameter that can be estimated from earth-based GPS receivers in a cost-effective manner as GPS-TEC. One of the most important advantages of IRI-Plas is the possible input of GPS-TEC to update the background deterministic ionospheric model to the current ionospheric state. This option is highly useful in regional and global tomography studies and HF link assessments. The IONOLAB group currently implements IRI-Plas as a background model and updates the ionospheric state using GPS-TEC in the IONOLAB-CIT and IONOLAB-RAY algorithms. The improved state of the ionosphere allows the most reliable 4-D imaging of electron density profiles and HF and satellite communication link simulations. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR 14/001.
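    The TEC definition above (line integral of electron density along the ray path) can be made concrete with a toy vertical integration. The Chapman-layer profile and its parameters below are illustrative stand-ins, not IRI-Plas output; 1 TECU = 10^16 el/m^2.

```python
import math

def chapman_density(h, nmax=1e12, hmax=350.0, H=60.0):
    """Toy Chapman-layer electron density in el/m^3; h in km.
    nmax: peak density, hmax: peak height, H: scale height."""
    z = (h - hmax) / H
    return nmax * math.exp(0.5 * (1.0 - z - math.exp(-z)))

def vertical_tec(h0=60.0, h1=20000.0, steps=20000):
    """TEC as the trapezoidal-rule integral of Ne along a vertical ray
    from h0 to h1 km, returned in TEC units (1 TECU = 1e16 el/m^2)."""
    dh = (h1 - h0) / steps
    total = 0.5 * (chapman_density(h0) + chapman_density(h1))
    for i in range(1, steps):
        total += chapman_density(h0 + i * dh)
    return total * dh * 1000.0 / 1e16  # km -> m, then to TECU

tec = vertical_tec()  # roughly nmax * H * sqrt(2*pi*e) for a Chapman layer
```

    Slant TEC along a GPS ray is the same integral taken along the receiver-to-satellite path instead of the vertical.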

  1. Tomography Reconstruction of Ionospheric Electron Density with Empirical Orthonormal Functions Using Korea GNSS Network

    NASA Astrophysics Data System (ADS)

    Hong, Junseok; Kim, Yong Ha; Chung, Jong-Kyun; Ssessanga, Nicholas; Kwak, Young-Sil

    2017-03-01

    In South Korea, there are about 80 Global Positioning System (GPS) monitoring stations providing total electron content (TEC) every 10 min, which can be accessed through Korea Astronomy and Space Science Institute (KASI) for scientific use. We applied the computerized ionospheric tomography (CIT) algorithm to the TEC dataset from this GPS network for monitoring the regional ionosphere over South Korea. The algorithm utilizes multiplicative algebraic reconstruction technique (MART) with an initial condition of the latest International Reference Ionosphere-2016 model (IRI-2016). In order to reduce the number of unknown variables, the vertical profiles of electron density are expressed with a linear combination of empirical orthonormal functions (EOFs) that were derived from the IRI empirical profiles. Although the number of receiver sites is much smaller than that of Japan, the CIT algorithm yielded reasonable structure of the ionosphere over South Korea. We verified the CIT results with NmF2 from ionosondes in Icheon and Jeju and also with GPS TEC at the center of South Korea. In addition, the total time required for CIT calculation was only about 5 min, enabling the exploration of the vertical ionospheric structure in near real time.
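    The MART update at the heart of the CIT step can be sketched on a tiny toy system. The ray geometry and values below are illustrative, and the real algorithm solves for EOF coefficients rather than raw voxel densities.

```python
def mart(A, y, x0, sweeps=200, relax=0.2):
    """Multiplicative Algebraic Reconstruction Technique. A holds
    ray-path weights (one row per TEC measurement), y the measured
    TECs, x0 the initial densities (the CIT above starts from
    IRI-2016). Multiplicative updates keep the solution non-negative."""
    x = list(x0)
    for _ in range(sweeps):
        for row, yi in zip(A, y):
            proj = sum(a * xi for a, xi in zip(row, x))  # modeled TEC
            if proj <= 0.0:
                continue
            ratio = yi / proj
            amax = max(row)
            for j, a in enumerate(row):
                if a > 0.0:
                    # damped multiplicative correction per voxel
                    x[j] *= ratio ** (relax * a / amax)
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three "rays" over two voxels
y = [2.0, 3.0, 5.0]                       # consistent toy measurements
x = mart(A, y, x0=[1.0, 1.0])             # converges toward [2, 3]
```

    Expressing the vertical profile in a few EOFs, as the paper does, shrinks the unknown vector from one value per voxel to one coefficient per basis function per column, which is what makes the 5-minute reconstruction feasible.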

  2. Evaluation and application of new AVIRIS data for the study of coral reefs in Hawaiian Islands

    NASA Astrophysics Data System (ADS)

    Wei, J.; Lee, Z.

    2017-12-01

    During the HyspIRI Hawaii campaign in early 2017, we collected hyperspectral remote sensing reflectance over coral reef environments in Kaneohe Bay in Oahu and the coastal waters of Maui Island. Based on in-situ measurements, we evaluated the data quality of reflectance measurements by the Airborne Visible-Infrared Imaging Spectrometer (AVIRIS). Further, these data were used to refine the remote sensing algorithms for identification of live corals, water bathymetry, and water clarity for the entire flight lines. Our results suggested great improvement in our understanding and capabilities of using HyspIRI-like data to observe and monitor coral reef environments.

  3. Broadband infrared imaging spectroscopy for standoff detection of trace explosives

    NASA Astrophysics Data System (ADS)

    Kendziora, Christopher A.; Furstenberg, Robert; Papantonakis, Michael; Nguyen, Viet; McGill, R. Andrew

    2016-05-01

    This manuscript describes advancements toward a mobile platform for standoff detection of trace explosives on relevant substrates using broadband infrared spectroscopic imaging. In conjunction with this, we are developing a technology for detection based on photo-thermal infrared (IR) imaging spectroscopy (PT-IRIS). PT-IRIS leverages one or more IR quantum cascade lasers (QCL), tuned to strong absorption bands in the analytes and directed to illuminate an area on a surface of interest. An IR focal plane array is used to image the surface thermal emission upon laser illumination. The PT-IRIS signal is processed as a hyperspectral image cube comprised of spatial, spectral and temporal dimensions as vectors within a detection algorithm. Here we describe methods to increase both sensitivity to trace explosives and selectivity between different analyte types by exploiting a broader spectral range than in previous configurations. Previously we demonstrated PT-IRIS at several meters of standoff distance indoors and in field tests, while operating the lasers below the infrared eye-safe intensity limit (100 mW/cm2). Sensitivity to explosive traces as small as a single 10 μm diameter particle (~1 ng) has been demonstrated.

  4. Hybrid simulated annealing and its application to optimization of hidden Markov models for visual speech recognition.

    PubMed

    Lee, Jong-Seok; Park, Cheol Hoon

    2010-08-01

    We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.
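    The hybrid idea, annealing with a local-optimization operator applied to each candidate, can be sketched on a toy 1-D objective. The paper applies this to HMM parameters; the objective, neighborhood, and local optimizer below are illustrative stand-ins.

```python
import math, random

def hybrid_sa(f, neighbor, local_opt, x0, t0=1.0, alpha=0.95, iters=2000, seed=1):
    """Simulated annealing where each proposed neighbor is first
    improved by a local optimizer (the 'hybrid' step), then accepted
    by the usual Metropolis rule with a geometric cooling schedule."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(iters):
        cand = local_opt(f, neighbor(x, rng))  # local improvement first
        if f(cand) < f(x) or rng.random() < math.exp(-(f(cand) - f(x)) / t):
            x = cand
        if f(x) < f(best):
            best = x
        t *= alpha
    return best

f = lambda x: (x - 3.0) ** 2 + math.sin(5 * x)      # multimodal toy objective
neighbor = lambda x, rng: x + rng.gauss(0.0, 0.5)   # Gaussian proposal

def local_opt(f, x, step=0.05, n=20):
    """Crude coordinate descent: take small steps while they help."""
    for _ in range(n):
        for d in (+step, -step):
            if f(x + d) < f(x):
                x += d
    return x

x_star = hybrid_sa(f, neighbor, local_opt, x0=0.0)
```

    For HMM training, x would be the parameter set, f the negative training likelihood, and local_opt a few steps of a conventional optimizer such as expectation-maximization.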

  5. Research on autonomous identification of airport targets based on Gabor filtering and Radon transform

    NASA Astrophysics Data System (ADS)

    Yi, Juan; Du, Qingyu; Zhang, Hong jiang; Zhang, Yao lei

    2017-11-01

    Target recognition is a leading key technology in intelligent image processing and its applications. With the enhancement of computer processing ability, autonomous target recognition algorithms have gradually become more intelligent and have shown good adaptability. Taking the airport target as the research object, this paper analyzes airport layout characteristics, constructs a knowledge model, and designs an autonomous target recognition algorithm based on Gabor filtering and the Radon transform. The algorithm was verified on image processing and feature extraction of airport imagery and achieved good recognition results.

  6. Remote Sensing of Ionosphere by IONOLAB Group

    NASA Astrophysics Data System (ADS)

    Arikan, Feza

    2016-07-01

    Ionosphere is a temporally and spatially varying, dispersive, anisotropic and inhomogeneous medium that is characterized primarily by its electron density distribution. Electron density is a complex function of spatial and temporal variations of solar, geomagnetic, and seismic activities. The ionosphere is the main source of error for navigation and positioning systems and satellite communication. Therefore, characterization and constant monitoring of the variability of the ionosphere is of utmost importance for the performance improvement of these systems. Since ionospheric electron density is not a directly measurable quantity, an important derivable parameter is the Total Electron Content (TEC), which is widely used to characterize the ionosphere. TEC is proportional to the total number of electrons on a line crossing the atmosphere. IONOLAB is a research group formed by Hacettepe University, Bilkent University and Kastamonu University, Turkey, to handle the challenges of the ionosphere using state-of-the-art remote sensing and signal processing techniques. The IONOLAB group provides unique space weather services of IONOLAB-TEC, the International Reference Ionosphere extended to Plasmasphere (IRI-Plas) model based IRI-Plas-MAP, IRI-Plas-STEC and the online IRI-Plas-2015 model at www.ionolab.org. The IONOLAB group has been working on imaging and monitoring of the ionospheric structure for the last 15 years. TEC is estimated from dual-frequency GPS receivers as IONOLAB-TEC using IONOLAB-BIAS. For high spatio-temporal resolution 2-D imaging or mapping, the IONOLAB-MAP algorithm is developed, which uses automated Universal Kriging or Ordinary Kriging in which the experimental semivariogram is fitted to a Matern function with Particle Swarm Optimization (PSO). 
For 3-D imaging of ionosphere and 1-D vertical profiles of electron density, state-of-the-art IRI-Plas model based IONOLAB-CIT algorithm is developed for regional reconstruction that employs Kalman Filters for state/temporal transition. IONOLAB group contributes to remote sensing of upper atmosphere, ionosphere and plasmasphere with continuing TUBITAK projects. IONOLAB group is open to joint research and collaboration with researchers from all disciplines that investigate the challenges of ionosphere and space weather. This study is supported by TUBITAK 114E541, 115E915 and Joint TUBITAK 114E092 and AS CR 14/001 projects.

  7. Speech recognition for embedded automatic positioner for laparoscope

    NASA Astrophysics Data System (ADS)

    Chen, Xiaodong; Yin, Qingyun; Wang, Yi; Yu, Daoyin

    2014-07-01

    In this paper a novel speech recognition methodology based on the Hidden Markov Model (HMM) is proposed for an embedded Automatic Positioner for Laparoscope (APL), which includes a fixed-point ARM processor as its core. The APL system is designed to assist the doctor in laparoscopic surgery by implementing the specific doctor's vocal control of the laparoscope. Real-time response to voice commands calls for a more efficient speech recognition algorithm for the APL. In order to reduce computation cost without significant loss in recognition accuracy, both arithmetic and algorithmic optimizations are applied in the presented method. First, relying mostly on arithmetic optimizations, a fixed-point frontend for speech feature analysis is built according to the ARM processor's characteristics. Then a fast likelihood computation algorithm is used to reduce the computational complexity of the HMM-based recognition algorithm. The experimental results show that the method keeps the recognition time under 0.5 s while the accuracy remains higher than 99%, demonstrating its ability to achieve real-time vocal control of the APL.
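    One standard trick behind fast likelihood computation can be sketched as follows: for diagonal-covariance Gaussian HMM states, the normalization terms are folded into a precomputed constant, so each frame's log-likelihood is a handful of multiply-adds. This shows the flavor only; the APL's actual optimizations are fixed-point and more involved.

```python
import math

def precompute(means, variances):
    """Per-state constants for a diagonal Gaussian: the summed
    normalization term and 1/(2*variance) per dimension, computed once
    at load time instead of per frame."""
    const = sum(-0.5 * math.log(2.0 * math.pi * v) for v in variances)
    half_inv_var = [0.5 / v for v in variances]
    return const, half_inv_var

def log_likelihood(frame, means, const, half_inv_var):
    """Per-frame cost after precomputation: one subtract, one square,
    one multiply-accumulate per feature dimension."""
    s = const
    for x, m, c in zip(frame, means, half_inv_var):
        d = x - m
        s -= c * d * d
    return s

means, variances = [0.0, 1.0], [1.0, 2.0]     # toy 2-D state model
const, hiv = precompute(means, variances)
ll = log_likelihood([0.5, 0.0], means, const, hiv)
```

    Working in the log domain also replaces exponentials with additions, which is what makes a fixed-point implementation practical.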

  8. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. 
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
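    The object-linking step described above can be sketched as a nearest-within-radius chaining of features across successive 2-D slices. This is a minimal list-based version of the directed-graph linking; coordinates and the radius are illustrative.

```python
import math

def link_slices(slices, radius):
    """Link features in successive 2-D slices into chains when a
    feature lies within `radius` of the end of a chain from the
    previous slice. `slices` is a list of lists of (x, y) positions;
    unmatched features start new chains (new candidate objects)."""
    chains = [[p] for p in slices[0]]
    for layer in slices[1:]:
        next_chains = []
        unmatched = list(layer)
        for chain in chains:
            x0, y0 = chain[-1]
            match = None
            for p in unmatched:
                if math.hypot(p[0] - x0, p[1] - y0) <= radius:
                    match = p
                    break
            if match is not None:
                unmatched.remove(match)
                next_chains.append(chain + [match])
            else:
                next_chains.append(chain)  # chain ends here
        next_chains.extend([p] for p in unmatched)  # new objects appear
        chains = next_chains
    return chains

# a "pipe" drifting slowly across slices, plus an unrelated detection
slices = [[(0, 0), (10, 10)], [(0.2, 0.1)], [(0.3, 0.2), (5, 5)]]
chains = link_slices(slices, radius=1.0)  # the pipe becomes a 3-slice chain
```

    Long chains correspond to extended 3D objects such as pipes; isolated one-slice chains are candidates for rejection as clutter.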

  9. Tracking and recognition face in videos with incremental local sparse representation model

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed, employing local sparse appearance and a covariance pooling method. In the following face recognition stage, with a novel template update strategy that incorporates incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition in real-world noisy videos on the YouTube database, which includes 47 celebrities. Our proposed method achieves a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. On the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  10. Face sketch recognition based on edge enhancement via deep learning

    NASA Astrophysics Data System (ADS)

    Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    In this paper, we address the face sketch recognition problem. Firstly, we utilize the eigenface algorithm to convert a sketch into a synthesized sketch face image. Subsequently, considering the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual effect. Specifically, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (Linear Discriminant Analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super resolution not only better describes image details such as hair, nose and mouth, but also improves recognition accuracy effectively.
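    The eigenface step in the first stage can be sketched as PCA on flattened training images. Toy random data stands in for face images here, and the paper's pipeline adds CNN super-resolution and LDA on top of this projection.

```python
import numpy as np

def eigenfaces(images, k):
    """PCA 'eigenface' basis from flattened training images (one image
    per row). The SVD of the mean-centered data gives the principal
    directions without forming the full covariance matrix."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]  # top-k eigenfaces as rows

def project(image, mean, basis):
    """Coefficients of an image in the eigenface basis."""
    return basis @ (np.asarray(image, dtype=float) - mean)

rng = np.random.default_rng(0)
faces = rng.random((8, 16))        # 8 toy "images" of 16 pixels each
mean, basis = eigenfaces(faces, k=3)
coeffs = project(faces[0], mean, basis)
```

    Sketch-to-photo synthesis with eigenfaces amounts to projecting onto one domain's basis and reconstructing with the other's, coefficient for coefficient.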

  11. Cognitive object recognition system (CORS)

    NASA Astrophysics Data System (ADS)

    Raju, Chaitanya; Varadarajan, Karthik Mahesh; Krishnamurthi, Niyant; Xu, Shuli; Biederman, Irving; Kelley, Troy

    2010-04-01

    We have developed a framework, Cognitive Object Recognition System (CORS), inspired by current neurocomputational models and psychophysical research in which multiple recognition algorithms (shape based geometric primitives, 'geons,' and non-geometric feature-based algorithms) are integrated to provide a comprehensive solution to object recognition and landmarking. Objects are defined as a combination of geons, corresponding to their simple parts, and the relations among the parts. However, those objects that are not easily decomposable into geons, such as bushes and trees, are recognized by CORS using "feature-based" algorithms. The unique interaction between these algorithms is a novel approach that combines the effectiveness of both algorithms and takes us closer to a generalized approach to object recognition. CORS allows recognition of objects through a larger range of poses using geometric primitives and performs well under heavy occlusion - about 35% of object surface is sufficient. Furthermore, geon composition of an object allows image understanding and reasoning even with novel objects. With reliable landmarking capability, the system improves vision-based robot navigation in GPS-denied environments. Feasibility of the CORS system was demonstrated with real stereo images captured from a Pioneer robot. The system can currently identify doors, door handles, staircases, trashcans and other relevant landmarks in the indoor environment.

  12. Face recognition algorithm using extended vector quantization histogram features.

    PubMed

    Yan, Yan; Lee, Feifei; Wu, Xueqian; Chen, Qiu

    2018-01-01

    In this paper, we propose a face recognition algorithm based on a combination of vector quantization (VQ) and Markov stationary features (MSF). The VQ algorithm has been shown to be an effective method for generating features; it extracts a codevector histogram as a facial feature representation for face recognition. Still, the VQ histogram features are unable to convey spatial structural information, which to some extent limits their usefulness in discrimination. To alleviate this limitation of VQ histograms, we utilize Markov stationary features (MSF) to extend the VQ histogram-based features so as to add spatial structural information. We demonstrate the effectiveness of our proposed algorithm by achieving recognition results superior to those of several state-of-the-art methods on publicly available face databases.
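    The VQ histogram feature described above can be sketched directly: each image block is assigned to its nearest codevector and the assignments are counted. The codebook and blocks below are toy values; MSF extension would then model transitions between neighboring blocks' codevector indices.

```python
def vq_histogram(blocks, codebook):
    """Assign each image block to the nearest codevector (squared
    Euclidean distance) and count assignments per codevector -- the
    basic VQ histogram feature."""
    hist = [0] * len(codebook)
    for b in blocks:
        best = min(
            range(len(codebook)),
            key=lambda i: sum((x - c) ** 2 for x, c in zip(b, codebook[i])),
        )
        hist[best] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0)]          # two toy codevectors
blocks = [(0.1, 0.2), (0.9, 1.1), (0.0, 0.1)]  # three 2-D "blocks"
hist = vq_histogram(blocks, codebook)
```

    As the abstract notes, this histogram discards where each block sits in the image; the Markov stationary features re-introduce that spatial co-occurrence structure.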

  13. A Random Forest-based ensemble method for activity recognition.

    PubMed

    Feng, Zengtao; Mo, Lingfei; Li, Meng

    2015-01-01

    This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forests. We designed an ensemble learning algorithm that integrates several independent Random Forest classifiers based on different sensor feature sets to build a more stable, more accurate and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, were utilized for training and testing. The experimental results show that the algorithm is able to correctly recognize 19 PA types with an accuracy of 93.44%, while training is faster than with other methods. The ensemble classifier system based on the RF (Random Forest) algorithm achieves high recognition accuracy and fast calculation.
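    The ensemble decision can be sketched as a majority vote across per-sensor classifiers. Simple hand-written rules stand in for the trained Random Forests below, and all feature names and thresholds are illustrative.

```python
from collections import Counter

def ensemble_predict(classifiers, feature_sets):
    """Majority vote over independent classifiers, each seeing only its
    own sensor's feature set -- the combination step of the ensemble."""
    votes = [clf(feats) for clf, feats in zip(classifiers, feature_sets)]
    return Counter(votes).most_common(1)[0][0]

# toy per-sensor rules standing in for trained Random Forests
accel = lambda f: "walking" if f["step_rate"] > 1.0 else "sitting"
gyro = lambda f: "walking" if f["sway"] > 0.3 else "sitting"
hr = lambda f: "walking" if f["bpm"] > 90 else "sitting"

label = ensemble_predict(
    [accel, gyro, hr],
    [{"step_rate": 1.8}, {"sway": 0.1}, {"bpm": 110}],
)  # two of the three sensors vote "walking"
```

    Training the member classifiers on disjoint sensor feature sets is what makes their errors less correlated, which is where the stability gain comes from.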

  14. Joint Sparse Representation for Robust Multimodal Biometrics Recognition

    DTIC Science & Technology

    2012-01-01

    described in III. Experimental evaluations on a comprehensive multimodal dataset and a face database have been described in section V. Finally, in...WVU Multimodal Dataset The WVU multimodal dataset is a comprehensive collection of different biometric modalities such as fingerprint, iris, palmprint ...Martnez and R. Benavente, “The AR face database ,” CVC Technical Report, June 1998. [29] U. Park and A. Jain, “Face matching and retrieval using soft

  15. Joint Sparse Representation for Robust Multimodal Biometrics Recognition

    DTIC Science & Technology

    2014-01-01

    comprehensive multimodal dataset and a face database are described in section V. Finally, in section VI, we discuss the computational complexity of...fingerprint, iris, palmprint , hand geometry and voice from subjects of different age, gender and ethnicity as described in Table I. It is a...Taylor, “Constructing nonlinear discriminants from multiple data views,” Machine Learning and Knowl- edge Discovery in Databases , pp. 328–343, 2010

  16. Integrated Color Coding and Monochrome Multi-Spectral Fusion

    DTIC Science & Technology

    1999-01-01

    Suppl. pg s1002. [Katz 1987 ] Katz et al, "Application of Spectral Filtering to Missile Detection Using Staring Sensors at MWIR Wavelengths...34, Proceedings of the IRIS Conf. on Targets, Backgrounds, and Discrimination, Feb. 1987 [Morrone 1989], Morrone M.C. and D.C.,”Discrimination of spatial phase in...April) 1990, Orlando, FL. [Subramaniam 1997] Subramaniam and Biederman “Effect of Contrast Reversal on Object Recognition (ARVO 1997) Investigative

  17. Data Analysis of the Floating Potential Measurement Unit aboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Barjatya, Aroh; Swenson, Charles M.; Thompson, Donald C.; Wright, Kenneth H., Jr.

    2009-01-01

    We present data from the Floating Potential Measurement Unit (FPMU), which is deployed on the starboard (S1) truss of the International Space Station. The FPMU is a suite of instruments capable of redundant measurements of various plasma parameters. The instrument suite consists of a Floating Potential Probe, a wide-sweeping spherical Langmuir probe, a narrow-sweeping cylindrical Langmuir probe, and a Plasma Impedance Probe. This paper gives a brief overview of the instrumentation and the received data quality, and then presents the algorithm used to reduce I-V curves to plasma parameters. Several hours of data are presented from August 5th, 2006 and March 3rd, 2007. The FPMU-derived plasma density and temperatures are compared with the International Reference Ionosphere (IRI) and USU-Global Assimilation of Ionospheric Measurements (USU-GAIM) models. Our results show that the derived in-situ density matches the USU-GAIM model better than the IRI, and the derived in-situ temperatures are comparable to the average temperatures given by the IRI.
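    One piece of such an I-V reduction can be sketched simply: in the electron-retardation region of a Langmuir-probe sweep, ln(I) is linear in probe voltage with slope 1/Te (Te in eV), so a least-squares slope recovers the temperature. The synthetic sweep below is illustrative; the real FPMU reduction also handles ion current, sheath effects, and spacecraft charging.

```python
import math

def electron_temperature(volts, currents):
    """Electron temperature (eV) from the retardation region of a
    Langmuir-probe I-V curve: least-squares slope of ln(I) vs V,
    inverted to give Te."""
    ys = [math.log(i) for i in currents]
    n = len(volts)
    mx, my = sum(volts) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(volts, ys)) / sum(
        (x - mx) ** 2 for x in volts
    )
    return 1.0 / slope

te_true = 0.2  # eV, typical ionospheric value
volts = [v * 0.05 for v in range(10)]
currents = [1e-6 * math.exp(v / te_true) for v in volts]  # ideal exponential
te = electron_temperature(volts, currents)
```

    Density then follows from the saturation-current portion of the same sweep once Te is known.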

  18. Optimal pattern synthesis for speech recognition based on principal component analysis

    NASA Astrophysics Data System (ADS)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    The algorithm for building an optimal pattern for the purpose of automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern formation is based on decomposing an initial pattern into principal components, which makes it possible to reduce the dimension of the multi-parameter optimization problem. At the next step, training samples are introduced and the optimal estimates of the principal-component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.

  19. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    PubMed

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
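    The flavor of a cheap, no-reference image-quality cue of the kind such methods use can be sketched as follows: compare an image against its own blurred copy (a sharp, live capture differs more from its blur than an already-smooth printed or replayed fake). This single feature is illustrative only; it is not one of the paper's specific 25 features.

```python
import math

def box_blur(img):
    """3x3 mean filter with clamped edges; img is a list of rows."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
            ]
            out[y][x] = sum(vals) / 9.0
    return out

def blur_psnr(img, peak=255.0):
    """PSNR between an image and its blurred copy: low for sharp,
    detail-rich images; high (infinite for constant images) for
    smooth ones."""
    blurred = box_blur(img)
    mse = sum(
        (a - b) ** 2 for ra, rb in zip(img, blurred) for a, b in zip(ra, rb)
    ) / (len(img) * len(img[0]))
    return float("inf") if mse == 0 else 10 * math.log10(peak**2 / mse)

sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
sharp_q = blur_psnr(sharp)  # low: image differs strongly from its blur
flat_q = blur_psnr(flat)    # infinite: already smooth
```

    A classifier trained on a vector of many such cues is what separates legitimate captures from fake traits in this family of methods.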

  20. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion.

    PubMed

    Otero-Millan, Jorge; Roberts, Dale C; Lasker, Adrian; Zee, David S; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines.
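    The template-matching idea can be reduced to a hedged 1-D analogue: if the iris is summarized by an angular intensity profile, torsion appears as a circular shift recoverable by cross-correlation. This is only a sketch of the principle, not the paper's full-iris OpenCV matching; the function name and sampling are illustrative:

```python
import numpy as np

def estimate_torsion(reference, rotated, degrees_per_bin=1.0):
    """Estimate torsion between two 1-D angular intensity profiles
    sampled around the iris. Returns the shift k (in degrees) such
    that np.roll(reference, k) best matches `rotated`."""
    ref = reference - reference.mean()
    rot = rotated - rotated.mean()
    # Circular cross-correlation via FFT: corr[k] = sum_n rot[n+k]*ref[n].
    corr = np.fft.ifft(np.fft.fft(rot) * np.conj(np.fft.fft(ref))).real
    shift = int(np.argmax(corr))
    n = len(reference)
    if shift > n // 2:               # wrap to a signed shift
        shift -= n
    return shift * degrees_per_bin

rng = np.random.default_rng(2)
profile = rng.normal(size=360)       # one sample per degree of angle
torted = np.roll(profile, 7)         # simulate 7 degrees of torsion
angle = estimate_torsion(profile, torted)
```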

  1. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion

    PubMed Central

    Otero-Millan, Jorge; Roberts, Dale C.; Lasker, Adrian; Zee, David S.; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines. PMID:26587699

  2. Ear biometrics for patient identification in global health: a cross-sectional study to test the feasibility of a simplified algorithm.

    PubMed

    Ragan, Elizabeth J; Johnson, Courtney; Milton, Jacqueline N; Gill, Christopher J

    2016-11-02

    One of the greatest public health challenges in low- and middle-income countries (LMICs) is identifying people over time and space. Recent years have seen an explosion of interest in developing electronic approaches to addressing this problem, with mobile technology at the forefront of these efforts. We investigate the possibility of biometrics as a simple, cost-efficient, and portable solution. Common biometrics approaches include fingerprinting, iris scanning and facial recognition, but all are less than ideal due to complexity, infringement on privacy, cost, or portability. Ear biometrics, however, proved to be a unique and viable solution. We developed an identification algorithm and then conducted a cross-sectional study in which we photographed left and right ears from 25 consenting adults. We then conducted re-identification and statistical analyses to determine the accuracy and replicability of our approach. Through principal component analysis, we found the curve of the ear helix to be the most reliable anatomical structure and the basis for re-identification. Although an individual ear allowed for a high re-identification rate (88.3%), when both left and right ears were paired together, our rate of re-identification amidst the pool of potential matches was 100%. The results of this study have implications for future efforts toward building a biometrics solution for patient identification in LMICs. We provide a conceptual platform for further investigation into the development of an ear biometrics identification mobile application.

  3. Multi-frame knowledge based text enhancement for mobile phone captured videos

    NASA Astrophysics Data System (ADS)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-02-01

    In this study, we explore automated text recognition and enhancement using mobile phone captured videos of store receipts. We propose a method that combines Optical Character Recognition (OCR) with our proposed Row-Based Multiple Frame Integration (RB-MFI) and Knowledge Based Correction (KBC) algorithms. In this method, the trained OCR engine is first used for recognition; then, the RB-MFI algorithm is applied to the OCR output. The RB-MFI algorithm determines and combines the most accurate rows of the text outputs extracted by OCR from multiple frames of the video. After RB-MFI, the KBC algorithm is applied to these rows to correct erroneous characters. Experimental results show that the proposed video-based approach, which includes the RB-MFI and KBC algorithms, increases the word recognition rate to 95% and the character recognition rate to 98%.

  4. [A new peak detection algorithm of Raman spectra].

    PubMed

    Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing

    2014-01-01

    The authors propose a new Raman peak recognition method named the bi-scale correlation algorithm. The algorithm combines the correlation coefficient and the local signal-to-noise ratio at two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method in MATLAB, and then tested the algorithm on real Raman spectra. The results show that the average time to identify a Raman spectrum is 0.51 s with the proposed algorithm, versus 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of a Raman peak is at least 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy of the proposed algorithm is higher than 99%, versus less than 84% for the continuous wavelet transform method. Both the mean and the standard deviation of the peak-position identification error are smaller than those of the continuous wavelet transform method. Simulation analysis and experimental verification show that the new algorithm offers the following advantages: no need for human intervention, no need for de-noising or background-removal operations, and higher recognition speed and accuracy. The proposed algorithm is practical for Raman peak identification.
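    A rough sketch of the bi-scale idea, correlating the spectrum with zero-mean Gaussian templates at two widths and keeping local maxima that score well at both scales, might look as follows. The thresholds, template widths, and function names are assumptions for illustration, not the authors' exact algorithm:

```python
import numpy as np

def gaussian_template(width, half=None):
    half = half or 3 * width
    x = np.arange(-half, half + 1)
    g = np.exp(-0.5 * (x / width) ** 2)
    return g - g.mean()           # zero-mean: flat background scores 0

def correlate_norm(signal, template):
    """Normalized (Pearson-like) sliding correlation with `template`."""
    n = len(template)
    out = np.zeros(len(signal))
    t = template / np.linalg.norm(template)
    for i in range(len(signal) - n + 1):
        win = signal[i : i + n]
        win = win - win.mean()
        denom = np.linalg.norm(win)
        if denom > 0:
            out[i + n // 2] = float(win @ t) / denom
    return out

def find_peaks_biscale(signal, w1=2, w2=6, thresh=0.8):
    """Keep local maxima whose correlation with BOTH scale templates
    exceeds the threshold."""
    c1 = correlate_norm(signal, gaussian_template(w1))
    c2 = correlate_norm(signal, gaussian_template(w2))
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] >= signal[i - 1] and signal[i] >= signal[i + 1]
                and c1[i] > thresh and c2[i] > thresh):
            peaks.append(i)
    return peaks

x = np.arange(200, dtype=float)
spectrum = (np.exp(-0.5 * ((x - 60) / 3) ** 2)
            + np.exp(-0.5 * ((x - 140) / 4) ** 2))
peaks = find_peaks_biscale(spectrum)
```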

  5. Latent Ice Recrystallization Inhibition Activity in Nonantifreeze Proteins: Ca2+-Activated Plant Lectins and Cation-Activated Antimicrobial Peptides.

    PubMed

    Mitchell, Daniel E; Gibson, Matthew I

    2015-10-12

    Organisms living in polar regions have evolved a series of antifreeze (glyco) proteins (AFGPs) to enable them to survive by modulating the structure of ice. These proteins have huge potential for use in cellular cryopreservation, ice-resistant surfaces, frozen food, and cryosurgery, but they are limited by their relatively low availability and questions regarding their mode of action. This has triggered the search for biomimetic materials capable of reproducing this function. The identification of new structures and sequences capable of inhibiting ice growth is crucial to aid our understanding of these proteins. Here, we show that plant c-type lectins, which have similar biological function to human c-type lectins (glycan recognition) but no sequence homology to AFPs, display calcium-dependent ice recrystallization inhibition (IRI) activity. This IRI activity can be switched on/off by changing the Ca2+ concentration. To show that more (nonantifreeze) proteins may exist with the potential to display IRI, a second motif was considered, amphipathicity. All known AFPs have defined hydrophobic/hydrophilic domains, rationalizing this choice. The cheap, and widely used, antimicrobial Nisin was found to have cation-dependent IRI activity, controlled by either acid or addition of histidine-binding ions such as zinc or nickel, which promote its amphipathic structure. These results demonstrate a new approach in the identification of antifreeze protein mimetic macromolecules and may help in the development of synthetic mimics of AFPs.

  6. Latent Ice Recrystallization Inhibition Activity in Nonantifreeze Proteins: Ca2+-Activated Plant Lectins and Cation-Activated Antimicrobial Peptides

    PubMed Central

    2015-01-01

    Organisms living in polar regions have evolved a series of antifreeze (glyco) proteins (AFGPs) to enable them to survive by modulating the structure of ice. These proteins have huge potential for use in cellular cryopreservation, ice-resistant surfaces, frozen food, and cryosurgery, but they are limited by their relatively low availability and questions regarding their mode of action. This has triggered the search for biomimetic materials capable of reproducing this function. The identification of new structures and sequences capable of inhibiting ice growth is crucial to aid our understanding of these proteins. Here, we show that plant c-type lectins, which have similar biological function to human c-type lectins (glycan recognition) but no sequence homology to AFPs, display calcium-dependent ice recrystallization inhibition (IRI) activity. This IRI activity can be switched on/off by changing the Ca2+ concentration. To show that more (nonantifreeze) proteins may exist with the potential to display IRI, a second motif was considered, amphipathicity. All known AFPs have defined hydrophobic/hydrophilic domains, rationalizing this choice. The cheap, and widely used, antimicrobial Nisin was found to have cation-dependent IRI activity, controlled by either acid or addition of histidine-binding ions such as zinc or nickel, which promote its amphipathic structure. These results demonstrate a new approach in the identification of antifreeze protein mimetic macromolecules and may help in the development of synthetic mimics of AFPs. PMID:26407233

  7. An improved finger-vein recognition algorithm based on template matching

    NASA Astrophysics Data System (ADS)

    Liu, Yueyue; Di, Si; Jin, Jian; Huang, Daoping

    2016-10-01

    Finger-vein recognition has become one of the most popular biometric identification methods, and the recognition algorithm remains the key research topic in this field. Many applicable algorithms have been developed so far. However, some problems remain in practice: variance in finger position may lead to image distortion and shifting, and matching parameters chosen from experience during identification may reduce the adaptability of an algorithm. Focusing on these problems, this paper proposes an improved finger-vein recognition algorithm based on template matching. To enhance the robustness of the algorithm to image distortion, the least-squares error method is adopted to correct for oblique finger placement. During feature extraction, a local adaptive threshold method is adopted. For the matching scores, we optimize the translation preferences as well as the matching distance between input images and registered images on the basis of the Naoto Miura algorithm. Experimental results indicate that the proposed method effectively improves robustness under finger shifting and rotation.

  8. Blinking supervision in a working environment

    NASA Astrophysics Data System (ADS)

    Morcego, Bernardo; Argilés, Marc; Cabrerizo, Marc; Cardona, Genís; Pérez, Ramon; Pérez-Cabré, Elisabet; Gispets, Joan

    2016-02-01

    The health of the ocular surface requires blinks of the eye to be frequent in order to provide moisture and to renew the tear film. However, blinking frequency has been shown to decrease in certain conditions, such as when subjects are conducting tasks with high cognitive and visual demands. These conditions are becoming more common as people work or spend their leisure time in front of video display terminals. Supervision of blinking frequency in such environments is possible thanks to the availability of computer-integrated cameras. Therefore, the aim of the present study is to develop an algorithm for the detection of eye blinks and to test it on a number of videos captured while subjects conducted a variety of tasks in front of the computer. The sensitivity of the algorithm for blink detection was found to be 87.54% (range 30% to 100%), with a mean false-positive rate of 0.19% (range 0% to 1.7%), depending on the illumination conditions during capture and other computer-user spatial configurations. The automatic process is based on partly modified pre-existing eye detection and image processing algorithms and consists of four stages: eye detection, eye tracking, iris detection and segmentation, and iris height/width ratio assessment.

  9. HyspIRI Measurements of Agricultural Systems in California: 2013-2015

    NASA Astrophysics Data System (ADS)

    Townsend, P. A.; Kruger, E. L.; Singh, A.; Jablonski, A. D.; Kochaver, S.; Serbin, S.

    2015-12-01

    During 2013-2015, NASA collected high-altitude AVIRIS hyperspectral and MASTER thermal infrared imagery across large swaths of California in support of the HyspIRI planning and prototyping activities. During these campaigns, we made extensive measurements of photosynthetic capacity—Vcmax and Jmax—and their temperature sensitivities across a range of sites, crop types and environmental conditions. Our objectives were to characterize the physiological diversity of agricultural vegetation in California and develop generalizable algorithms to map these physiological parameters across several image acquisitions, regardless of crop type and canopy temperatures. We employed AVIRIS imagery to scale and estimate the vegetation parameters and MASTER surface temperature to provide context, since physiology responds exponentially to leaf temperature. We demonstrate a segmentation approach to disentangling leaf and background soil temperature, and then illustrate our retrievals of Vcmax and Jmax during overflight conditions across a large number of the 2013-2015 HyspIRI acquisitions. Our results show >80% repeatability (R2) across split sample jack-knifing, with RMSEs within 15% of the range of our data. The approach was robust across crop types (e.g., grape, almond, pistachio, avocado, pomegranate, oats, peppers, citrus, date palm, alfalfa, melons, beets) and leaf temperatures. A global imaging spectroscopy system such as HyspIRI will offer unprecedented ability to monitor agricultural crop performance under widely varying surface conditions.

  10. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images

    PubMed Central

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-01-01

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastics objects showed that the use of the proposed system resulted in an effective solution with a wide FOV, recognition of all objects and 0.32 mm and 0.4° maximal positional and angular errors when all the RGB (red, green and blue) for illumination and R channel image for recognition were used. Though all the RGB illumination and grey scale images also provided recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved by using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition. PMID:29786665

  11. Automated Field-of-View, Illumination, and Recognition Algorithm Design of a Vision System for Pick-and-Place Considering Colour Information in Illumination and Images.

    PubMed

    Chen, Yibing; Ogata, Taiki; Ueyama, Tsuyoshi; Takada, Toshiyuki; Ota, Jun

    2018-05-22

    Machine vision is playing an increasingly important role in industrial applications, and the automated design of image recognition systems has been a subject of intense research. This study has proposed a system for automatically designing the field-of-view (FOV) of a camera, the illumination strength and the parameters in a recognition algorithm. We formulated the design problem as an optimisation problem and used an experiment based on a hierarchical algorithm to solve it. The evaluation experiments using translucent plastics objects showed that the use of the proposed system resulted in an effective solution with a wide FOV, recognition of all objects and 0.32 mm and 0.4° maximal positional and angular errors when all the RGB (red, green and blue) for illumination and R channel image for recognition were used. Though all the RGB illumination and grey scale images also provided recognition of all the objects, only a narrow FOV was selected. Moreover, full recognition was not achieved by using only G illumination and a grey-scale image. The results showed that the proposed method can automatically design the FOV, illumination and parameters in the recognition algorithm and that tuning all the RGB illumination is desirable even when single-channel or grey-scale images are used for recognition.

  12. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  13. Innate Immune Regulations and Liver Ischemia Reperfusion Injury

    PubMed Central

    Lu, Ling; Zhou, Haoming; Ni, Ming; Wang, Xuehao; Busuttil, Ronald; Kupiec-Weglinski, Jerzy; Zhai, Yuan

    2016-01-01

    Liver ischemia reperfusion activates innate immune system to drive the full development of inflammatory hepatocellular injury. Damage-associated molecular patterns (DAMPs) stimulate myeloid and dendritic cells via pattern recognition receptors (PRRs) to initiate the immune response. Complex intracellular signaling network transduces inflammatory signaling to regulate both innate immune cell activation and parenchymal cell death. Recent studies have revealed that DAMPs may trigger not only proinflammatory, but also immune regulatory responses by activating different PRRs or distinctive intracellular signaling pathways or in special cell populations. Additionally, tissue injury milieu activates PRR-independent receptors which also regulate inflammatory disease processes. Thus, the innate immune mechanism of liver IRI involves diverse molecular and cellular interactions, subjected to both endogenous and exogenous regulation in different cells. A better understanding of these complicated regulatory pathways/network is imperative for us in designing safe and effective therapeutic strategy to ameliorate liver IRI in patients. PMID:27861288

  14. Eye movement identification based on accumulated time feature

    NASA Astrophysics Data System (ADS)

    Guo, Baobao; Wu, Qiang; Sun, Jiande; Yan, Hua

    2017-06-01

    Eye movement is a new kind of biometric feature with many advantages over features such as fingerprints, faces, and irises. It is not merely a static characteristic but a combination of brain activity and muscle behavior, which makes it effective for preventing spoofing attacks. In addition, eye movements can be combined with faces, irises, and other features recorded from the face region into multimodal systems. In this paper, we conduct an exploratory study of eye movement identification, based on the eye movement datasets provided by Komogortsev et al. in 2011, with different classification methods. The durations of saccades and fixations are extracted from the eye movement data as features. A performance analysis is then conducted on classification methods including BP, RBF, ELMAN, and SVM networks to provide a reference for future research in this field.
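    Extracting fixation and saccade durations of the kind described can be sketched with a simple velocity-threshold (I-VT) segmentation; the threshold value and the synthetic trace below are illustrative assumptions:

```python
import numpy as np

def ivt_segments(x, y, fs, vel_threshold):
    """Velocity-threshold (I-VT) segmentation of a gaze trace.

    x, y: gaze position in degrees; fs: sampling rate in Hz.
    Returns per-sample labels plus total time (s) spent in fixation
    and saccade, the kind of timing features used as identifiers."""
    vx = np.diff(x) * fs
    vy = np.diff(y) * fs
    speed = np.hypot(vx, vy)                  # deg/s between samples
    labels = np.where(speed > vel_threshold, "saccade", "fixation")
    t_sacc = float(np.sum(labels == "saccade")) / fs
    t_fix = float(np.sum(labels == "fixation")) / fs
    return labels, t_fix, t_sacc

# Synthetic trace at 1 kHz: 100 ms fixation, a fast 20 ms saccade of
# 10 degrees, then another 100 ms fixation.
fs = 1000.0
x = np.concatenate([np.zeros(100), np.linspace(0, 10, 20), np.full(100, 10.0)])
y = np.zeros_like(x)
labels, t_fix, t_sacc = ivt_segments(x, y, fs, vel_threshold=100.0)
```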

  15. Theory of Mind and Empathy in Children With ADHD.

    PubMed

    Maoz, Hagai; Gvirts, Hila Z; Sheffer, Maya; Bloch, Yuval

    2017-05-01

    The current study compared empathy and theory of mind (ToM) between children with ADHD and healthy controls, and assessed changes in ToM among children with ADHD following administration of methylphenidate (MPH). Twenty-four children with ADHD (mean age = 10.3 years) were compared with 36 healthy controls. All children completed the interpersonal reactivity index (IRI), a self-reported empathy questionnaire, and performed the "faux-pas" recognition task (FPR). Children with ADHD performed the task with and without MPH. Children with ADHD showed significantly lower levels of self-reported empathy on most IRI subscales. FPR scores were significantly lower in children with ADHD and were improved, following the administration of MPH, to a level equal to that found in healthy controls. Children with ADHD show impaired self-reported empathy and FPR when compared with healthy controls. Stimulants improve FPR in children with ADHD to a level equal to that in healthy controls.

  16. Medical information security in the era of artificial intelligence.

    PubMed

    Wang, Yufeng; Wang, Liwei; Xue, Chang-Ao

    2018-06-01

    In recent years, biometric technologies, such as iris, facial, and finger vein recognition, have reached consumers and are being increasingly applied. However, it remains unknown whether these highly specific biometric technologies are as safe as declared by their manufacturers. As three-dimensional (3D) reconstruction based on medical imaging and 3D printing are being developed, these biometric technologies may face severe challenges. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. A Human Activity Recognition System Using Skeleton Data from RGBD Sensors.

    PubMed

    Cippitelli, Enea; Gasparrini, Samuele; Gambi, Ennio; Spinsante, Susanna

    2016-01-01

    The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.

  18. Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images

    NASA Astrophysics Data System (ADS)

    Yao, Shoukui; Qin, Xiaojuan

    2018-02-01

    Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. First, the Hu moments of the ship target images are calculated, and the GMMs are trained on these moment features. Second, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large set of simulated images show that our approach is effective in distinguishing different ship types and achieves satisfactory ship recognition performance.
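    The moment-feature step can be sketched in plain NumPy. The first two Hu invariants below (the full set has seven) are exactly rotation invariant, which the 90-degree rotation demonstrates; the L-shaped test image is illustrative:

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a 2-D intensity image."""
    img = img.astype(float)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00
    ybar = (ys * img).sum() / m00

    def mu(p, q):                  # central moment (translation invariant)
        return (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()

    def eta(p, q):                 # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# An asymmetric blob: its Hu invariants survive a 90-degree rotation,
# which is what makes them usable as orientation-independent features
# for a downstream classifier such as a GMM.
img = np.zeros((32, 32))
img[8:20, 10:14] = 1.0
img[8:12, 10:24] = 1.0             # L-shaped target
h1a, h2a = hu_first_two(img)
h1b, h2b = hu_first_two(np.rot90(img))
```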

  19. Word recognition using a lexicon constrained by first/last character decisions

    NASA Astrophysics Data System (ADS)

    Zhao, Sheila X.; Srihari, Sargur N.

    1995-03-01

    In lexicon based recognition of machine-printed word images, the size of the lexicon can be quite extensive, and recognition performance drops quickly as lexicon size increases. Here, we present an algorithm that improves word recognition performance by reducing the size of the given lexicon. The algorithm utilizes the information provided by the first and last characters of a word. Given a word image and a lexicon that contains the word in the image, the first and last characters are segmented and then recognized by a character classifier. The possible candidates based on the classifier's results are selected, which gives us the sub-lexicon. Then a word shape analysis algorithm is applied to produce the final ranking of the given lexicon. The algorithm was tested on a set of machine-printed gray-scale word images that includes a wide range of print types and qualities.
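    The lexicon-reduction step can be sketched directly; the candidate sets below stand in for the character classifier's top outputs and are, like the sample lexicon, purely illustrative:

```python
def reduce_lexicon(lexicon, first_candidates, last_candidates):
    """Keep only words whose first character is among the classifier's
    candidates for the first segmented character, and whose last
    character is among the candidates for the last one."""
    return [w for w in lexicon
            if w and w[0] in first_candidates and w[-1] in last_candidates]

lexicon = ["street", "state", "smart", "court", "count", "coast"]
# Suppose the character classifier returned these candidate sets:
sub_lexicon = reduce_lexicon(lexicon,
                             first_candidates={"s"},
                             last_candidates={"t"})
```

    Only the sub-lexicon is then passed to the (more expensive) word shape analysis for final ranking.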

  20. Vision-based posture recognition using an ensemble classifier and a vote filter

    NASA Astrophysics Data System (ADS)

    Ji, Peng; Wu, Changcheng; Xu, Xiaonong; Song, Aiguo; Li, Huijun

    2016-10-01

    Posture recognition is an important means of Human-Robot Interaction (HRI). To segment an effective posture from an image, we propose an improved region-grow algorithm combined with a single-Gaussian color model. Experiments show that the improved region-grow algorithm extracts more complete and accurate postures than the traditional single-Gaussian model and region-grow algorithm, while also eliminating similar regions from the background. For posture recognition, in order to improve the recognition rate, we propose a CNN ensemble classifier, and to reduce misjudgments during continuous gesture control, we propose a vote filter applied to the sequence of recognition results. The proposed CNN ensemble classifier yields a 96.27% recognition rate, better than that of a single CNN classifier, and the proposed vote filter improves the recognition results and reduces misjudgments during consecutive gesture switches.
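    A vote filter of the kind described can be sketched as a sliding-window majority vote over classifier outputs; the window size and the gesture labels are illustrative assumptions:

```python
from collections import Counter, deque

class VoteFilter:
    """Majority vote over the last `window` classifier outputs: a new
    label is emitted only once it dominates the window, suppressing
    one-frame misjudgments during a continuous gesture stream."""
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)
        self.current = None

    def update(self, label):
        self.buf.append(label)
        winner, count = Counter(self.buf).most_common(1)[0]
        if count > len(self.buf) // 2:   # strict majority of the window
            self.current = winner
        return self.current

vf = VoteFilter(window=5)
stream = ["fist", "fist", "palm", "fist", "fist", "palm", "palm", "palm"]
out = [vf.update(s) for s in stream]
```

    The isolated "palm" at the third frame is filtered out, while the sustained "palm" run at the end switches the output.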

  1. An Indoor Pedestrian Positioning Method Using HMM with a Fuzzy Pattern Recognition Algorithm in a WLAN Fingerprint System

    PubMed Central

    Ni, Yepeng; Liu, Jianbo; Liu, Shan; Bai, Yaxin

    2016-01-01

    With the rapid development of smartphones and wireless networks, indoor location-based services have become more and more prevalent. Due to the sophisticated propagation of radio signals, the Received Signal Strength Indicator (RSSI) shows a significant variation during pedestrian walking, which introduces critical errors in deterministic indoor positioning. To solve this problem, we present a novel method to improve the indoor pedestrian positioning accuracy by embedding a fuzzy pattern recognition algorithm into a Hidden Markov Model. The fuzzy pattern recognition algorithm follows the rule that the RSSI fading has a positive correlation to the distance between the measuring point and the AP location even during a dynamic positioning measurement. Through this algorithm, we use the RSSI variation trend to replace the specific RSSI value to achieve a fuzzy positioning. The transition probability of the Hidden Markov Model is trained by the fuzzy pattern recognition algorithm with pedestrian trajectories. Using the Viterbi algorithm with the trained model, we can obtain a set of hidden location states. In our experiments, we demonstrate that, compared with the deterministic pattern matching algorithm, our method can greatly improve the positioning accuracy and shows robust environmental adaptability. PMID:27618053

  2. Optical character recognition of handwritten Arabic using hidden Markov models

    NASA Astrophysics Data System (ADS)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.

    2011-04-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not received a satisfactory solution yet. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in an improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step of the overall OCR trends being currently researched in the literature. The proposed approach exploits the structure of characters in the Arabic language in addition to their extracted features to achieve improved recognition rates. Useful statistical information of the Arabic language is initially extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, where the transition matrix is built based on the collected large corpus, and the emission matrix is built based on the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm which employs the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text raising the recognition rate from being linked to the worst recognition rate for any character to the overall structure of the Arabic language. Numerical results show that there is a potentially large recognition improvement by using the proposed algorithms.
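    The Viterbi decoding at the heart of the approach can be sketched on a toy two-state HMM (in the log domain to avoid underflow); the states, probabilities, and observation symbols below are illustrative, not the paper's Arabic sub-word model:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most probable hidden state sequence (log-domain Viterbi)."""
    # V[t][s] = (best log-probability of reaching s at time t, best predecessor)
    V = [{s: (math.log(start_p[s]) + math.log(emit_p[s][obs[0]]), None)
          for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            best_prev, best_lp = None, -math.inf
            for p in states:
                lp = V[-1][p][0] + math.log(trans_p[p][s])
                if lp > best_lp:
                    best_prev, best_lp = p, lp
            row[s] = (best_lp + math.log(emit_p[s][o]), best_prev)
        V.append(row)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(V) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Toy model: hidden states A and B emitting symbols "x" and "y".
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
path = viterbi(["x", "x", "y"], states, start_p, trans_p, emit_p)
```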

  3. Optical character recognition of handwritten Arabic using hidden Markov models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.

    2011-01-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not yet received a satisfactory solution. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents another step in the overall OCR trends currently being researched in the literature. The proposed approach exploits the structure of characters in the Arabic language, in addition to their extracted features, to achieve improved recognition rates. Useful statistical information about the Arabic language is first extracted and then used to estimate the probabilistic parameters of the HMM. A new custom implementation of the HMM is developed in this study, in which the transition matrix is built from a large collected corpus and the emission matrix is built from the results obtained via the extracted character features. The recognition process uses the Viterbi algorithm to select the most probable sequence of sub-words. The model was implemented to recognize the sub-word unit of Arabic text, tying the recognition rate to the overall structure of the Arabic language rather than to the worst recognition rate of any single character. Numerical results show a potentially large recognition improvement from using the proposed algorithms.

  4. Hyperspectral face recognition with spatiospectral information fusion and PLS regression.

    PubMed

    Uzair, Muhammad; Mahmood, Arif; Mian, Ajmal

    2015-03-01

    Hyperspectral imaging offers new opportunities for face recognition via improved discrimination along the spectral dimension. However, it poses new challenges, including low signal-to-noise ratio, interband misalignment, and high data dimensionality. Due to these challenges, the literature on hyperspectral face recognition is not only sparse but limited to ad hoc dimensionality reduction techniques, and it lacks comprehensive evaluation. We propose a hyperspectral face recognition algorithm using a spatiospectral covariance for band fusion and partial least squares regression for classification. Moreover, we extend 13 existing face recognition techniques, for the first time, to perform hyperspectral face recognition. We formulate hyperspectral face recognition as an image-set classification problem and evaluate the performance of seven state-of-the-art image-set classification techniques. We also test six state-of-the-art grayscale and RGB (color) face recognition algorithms after applying fusion techniques to hyperspectral images. Comparison with the 13 extended and five existing hyperspectral face recognition techniques on three standard data sets shows that the proposed algorithm outperforms all of them by a significant margin. Finally, we perform band selection experiments to find the most discriminative bands in the visible and near-infrared response spectrum.
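One way to picture covariance-driven band fusion is to project each pixel's spectrum onto the leading eigenvector of the band covariance, collapsing the cube to a single image. This is a minimal sketch of that idea only; the paper's actual pipeline (spatiospectral covariance plus partial least squares regression) is more involved, and the random cube below is synthetic.

```python
import numpy as np

def fuse_bands(cube):
    """Collapse an (H, W, B) hyperspectral cube to one band by projecting
    every pixel's spectrum onto the leading eigenvector of the B x B band
    covariance matrix (a simple covariance-based fusion sketch)."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)                 # pixels as rows
    C = np.cov(X, rowvar=False)             # band covariance
    vals, vecs = np.linalg.eigh(C)          # eigh: symmetric matrix
    v = vecs[:, -1]                         # leading eigenvector
    return (X @ v).reshape(H, W)

cube = np.random.default_rng(0).normal(size=(8, 8, 10))
fused = fuse_bands(cube)                    # shape (8, 8)
```

The fused image can then feed any grayscale face recognizer, which is the spirit of the fusion-then-classify experiments described above.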

  5. Human body as a set of biometric features identified by means of optoelectronics

    NASA Astrophysics Data System (ADS)

    Podbielska, Halina; Bauer, Joanna

    2005-09-01

    The human body possesses many unique features that are impossible to copy or forge. Nowadays, establishing and ensuring public security requires specially designed devices and systems. Biometrics is a field of science and technology that exploits human body characteristics for people recognition: it identifies the most characteristic and unique traits in order to design and construct systems capable of recognizing people. This paper gives an overview of achievements in biometrics. The verification and identification processes are explained, along with the evaluation of biometric recognition systems. The biometrics most frequently used in practice are briefly presented, including fingerprints, facial imaging (including thermal characteristics), hand geometry, and iris patterns.

  6. Frontal sinus recognition for human identification

    NASA Astrophysics Data System (ADS)

    Falguera, Juan Rogelio; Falguera, Fernanda Pereira Sartori; Marana, Aparecido Nilceu

    2008-03-01

    Many methods based on biometrics such as fingerprint, face, iris, and retina have been proposed for person identification. However, for deceased individuals, such biometric measurements are not available. In such cases, parts of the human skeleton can be used for identification, such as dental records, thorax, vertebrae, shoulder, and frontal sinus. It has been established in prior investigations that the radiographic pattern of the frontal sinus is highly variable and unique to every individual. This has stimulated the proposition of measurements of the frontal sinus pattern, obtained from x-ray films, for skeletal identification. This paper presents a frontal sinus recognition method for human identification based on the Image Foresting Transform and shape context. Experimental results (EER = 5.82%) have shown the effectiveness of the proposed method.

  7. Collegial Activity Learning between Heterogeneous Sensors.

    PubMed

    Feuz, Kyle D; Cook, Diane J

    2017-11-01

    Activity recognition algorithms have matured and become more ubiquitous in recent years. However, these algorithms are typically customized for a particular sensor platform. In this paper we introduce PECO, a Personalized activity ECOsystem, that transfers learned activity information seamlessly between sensor platforms in real time so that any available sensor can continue to track activities without requiring its own extensive labeled training data. We introduce a multi-view transfer learning algorithm that facilitates this information handoff between sensor platforms and provide theoretical performance bounds for the algorithm. In addition, we empirically evaluate PECO using datasets that utilize heterogeneous sensor platforms to perform activity recognition. These results indicate that not only can activity recognition algorithms transfer important information to new sensor platforms, but any number of platforms can work together as colleagues to boost performance.

  8. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO

    PubMed Central

    Zhu, Zhichuan; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced: Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which realizes rapid global optimization of the parameters. To obtain the global optimal solution, different inertia weights such as constant, linear, and nonlinear inertia weights are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages over 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and its average fitness value after convergence is much closer to the optimal fitness value, making it better than the linear inertia weight. Besides, a better nonlinear inertia weight is verified. PMID:29853983
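The PSO loop with a decreasing inertia weight can be sketched as below. The objective here is a toy sphere function rather than the paper's MKL-SVM fitness, and the quadratic decay schedule and constants are illustrative choices, not necessarily the paper's exact nonlinear weight.

```python
import numpy as np

def pso(f, dim, n=20, iters=60, w0=0.9, w1=0.4, seed=0):
    """Minimal particle swarm optimizer with a nonlinear (quadratically
    decaying) inertia weight. Minimizes f; returns the best position."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))        # particle positions
    v = np.zeros((n, dim))                  # particle velocities
    pbest = x.copy()                        # personal bests
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()         # global best
    for t in range(iters):
        # inertia weight decays nonlinearly from w0 to w1
        w = w1 + (w0 - w1) * (1 - t / iters) ** 2
        r1, r2 = rng.random((2, n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g

best = pso(lambda p: float(np.sum(p ** 2)), dim=2)
```

In the paper's use case `f` would train an MKL-SVM with the candidate kernel parameters and return a validation error, which is what makes PSO so much cheaper than an exhaustive grid.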

  9. Pulmonary Nodule Recognition Based on Multiple Kernel Learning Support Vector Machine-PSO.

    PubMed

    Li, Yang; Zhu, Zhichuan; Hou, Alin; Zhao, Qingdong; Liu, Liwei; Zhang, Lijuan

    2018-01-01

    Pulmonary nodule recognition is the core module of lung CAD. The Support Vector Machine (SVM) algorithm has been widely used in pulmonary nodule recognition, and the Multiple Kernel Learning Support Vector Machine (MKL-SVM) algorithm has achieved good results. Based on grid search, however, the MKL-SVM algorithm needs a long optimization time for parameter tuning, and its identification accuracy depends on the fineness of the grid. In this paper, swarm intelligence is introduced: Particle Swarm Optimization (PSO) is combined with the MKL-SVM algorithm to form the MKL-SVM-PSO algorithm, which realizes rapid global optimization of the parameters. To obtain the global optimal solution, different inertia weights such as constant, linear, and nonlinear inertia weights are applied to pulmonary nodule recognition. The experimental results show that the model training time of the proposed MKL-SVM-PSO algorithm is only 1/7 of the training time of the MKL-SVM grid search algorithm while achieving a better recognition effect. Moreover, the Euclidean norm of the normalized error vector is proposed to measure the proximity between the average fitness curve and the optimal fitness curve after convergence. Statistical analysis of the averages over 20 runs with different inertia weights shows that dynamic inertia weights are superior to the constant inertia weight in the MKL-SVM-PSO algorithm. Among the dynamic inertia weights, the parameter optimization time of the nonlinear inertia weight is shorter, and its average fitness value after convergence is much closer to the optimal fitness value, making it better than the linear inertia weight. Besides, a better nonlinear inertia weight is verified.

  10. Acoustic diagnosis of pulmonary hypertension: automated speech-recognition-inspired classification algorithm outperforms physicians

    NASA Astrophysics Data System (ADS)

    Kaddoura, Tarek; Vadlamudi, Karunakar; Kumar, Shine; Bobhate, Prashant; Guo, Long; Jain, Shreepal; Elgendi, Mohamed; Coe, James Y.; Kim, Daniel; Taylor, Dylan; Tymchak, Wayne; Schuurmans, Dale; Zemp, Roger J.; Adatia, Ian

    2016-09-01

    We hypothesized that an automated speech-recognition-inspired classification algorithm could differentiate between the heart sounds in subjects with and without pulmonary hypertension (PH) and outperform physicians. Heart sounds, electrocardiograms, and mean pulmonary artery pressures (mPAp) were recorded simultaneously. Heart sound recordings were digitized to train and test speech-recognition-inspired classification algorithms. We used mel-frequency cepstral coefficients to extract features from the heart sounds. Gaussian-mixture models classified the features as PH (mPAp ≥ 25 mmHg) or normal (mPAp < 25 mmHg). Physicians blinded to patient data listened to the same heart sound recordings and attempted a diagnosis. We studied 164 subjects: 86 with mPAp ≥ 25 mmHg (mPAp 41 ± 12 mmHg) and 78 with mPAp < 25 mmHg (mPAp 17 ± 5 mmHg) (p < 0.005). The correct diagnostic rate of the automated speech-recognition-inspired algorithm was 74%, compared to 56% for physicians (p = 0.005). The false positive rate for the algorithm was 34% versus 50% (p = 0.04) for clinicians. The false negative rate for the algorithm was 23% versus 68% (p = 0.0002) for physicians. We developed an automated speech-recognition-inspired classification algorithm for the acoustic diagnosis of PH that outperforms physicians and could be used to screen for PH and encourage earlier specialist referral.
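The classification stage can be sketched with per-class generative models scored by log-likelihood. For brevity a single diagonal-covariance Gaussian per class stands in for the paper's Gaussian mixture models, MFCC extraction is omitted, and the synthetic features below are invented.

```python
import numpy as np

def fit_diag_gauss(X):
    """Fit one diagonal-covariance Gaussian per class (a one-component
    stand-in for a Gaussian mixture model)."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_lik(x, mu, var):
    """Log-density of x under the diagonal Gaussian (mu, var)."""
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var))

rng = np.random.default_rng(1)
# synthetic "cepstral" feature vectors for the two diagnostic classes
X_ph = rng.normal(2.0, 1.0, (200, 4))       # PH recordings
X_norm = rng.normal(-2.0, 1.0, (200, 4))    # normal recordings
m_ph = fit_diag_gauss(X_ph)
m_no = fit_diag_gauss(X_norm)

def classify(x):
    """Assign the class whose model gives the higher likelihood."""
    return "PH" if log_lik(x, *m_ph) > log_lik(x, *m_no) else "normal"
```

A real system would extract MFCC frames from each heart-sound recording and accumulate frame log-likelihoods under each class's mixture before comparing.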

  11. Enhanced gene delivery in porcine vasculature tissue following incorporation of adeno-associated virus nanoparticles into porous silicon microparticles

    PubMed Central

    McConnell, Kellie I.; Rhudy, Jessica; Yokoi, Kenji; Gu, Jianhua; Mack, Aaron; Suh, Junghae; La Francesca, Saverio; Sakamoto, Jason; Serda, Rita E.

    2014-01-01

    There is an unmet clinical need to increase lung transplant success, improve patient satisfaction, and reduce mortality rates. We offer the development of a nanovector-based solution that will reduce the incidence of lung ischemic reperfusion injury (IRI) leading to graft organ failure through the successful ex vivo treatment of the lung prior to transplantation. The innovation is in the integrated application of our novel porous silicon (pSi) microparticles carrying adeno-associated virus (AAV) nanoparticles, and the use of our ex vivo lung perfusion/ventilation system for the modulation of pro-inflammatory cytokines initiated by ischemic pulmonary conditions prior to organ transplant that often lead to complications. Gene delivery of anti-inflammatory agents to combat the inflammatory cascade may be a promising approach to prevent IRI following lung transplantation. The rationale for the device is that the microparticle will deliver a large payload of virus to cells and serve to protect the AAV from immune recognition. The microparticle-nanoparticle hybrid device was tested both in vitro on cell monolayers and ex vivo using either porcine venous tissue or a pig lung transplantation model, which recapitulates the pulmonary IRI that occurs clinically post-transplantation. Remarkably, loading AAV vectors into pSi microparticles increases gene delivery to otherwise non-permissive endothelial cells. PMID:25180449

  12. Infrared photothermal imaging of trace explosives on relevant substrates

    NASA Astrophysics Data System (ADS)

    Kendziora, Christopher A.; Furstenberg, Robert; Papantonakis, Michael; Nguyen, Viet; Borchert, James; Byers, Jeff; McGill, R. Andrew

    2013-06-01

    We are developing a technique for the stand-off detection of trace explosives on relevant substrate surfaces using photo-thermal infrared (IR) imaging spectroscopy (PT-IRIS). This approach leverages one or more compact IR quantum cascade lasers, tuned to strong absorption bands in the analytes and directed to illuminate an area on a surface of interest. An IR focal plane array is used to image the surface and detect small increases in thermal emission upon laser illumination. The PT-IRIS signal is processed as a hyperspectral image cube comprised of spatial, spectral and temporal dimensions as vectors within a detection algorithm. The ability to detect trace analytes on relevant substrates is critical for stand-off applications, but is complicated by the optical and thermal analyte/substrate interactions. This manuscript describes recent PT-IRIS experimental results and analysis for traces of RDX, TNT, ammonium nitrate (AN) and sucrose on relevant substrates (steel, polyethylene, glass and painted steel panels). We demonstrate that these analytes can be detected on these substrates at relevant surface mass loadings (10 μg/cm2 to 100 μg/cm2) even at the single pixel level.

  13. A Model-Based Approach for the Measurement of Eye Movements Using Image Processing

    NASA Technical Reports Server (NTRS)

    Sung, Kwangjae; Reschke, Millard F.

    1997-01-01

    This paper describes a video eye-tracking algorithm which searches for the best fit of the pupil modeled as a circular disk. The algorithm is robust to common image artifacts such as the droopy eyelids and light reflections while maintaining the measurement resolution available by the centroid algorithm. The presented algorithm is used to derive the pupil size and center coordinates, and can be combined with iris-tracking techniques to measure ocular torsion. A comparison search method of pupil candidates using pixel coordinate reference lookup tables optimizes the processing requirements for a least square fit of the circular disk model. This paper includes quantitative analyses and simulation results for the resolution and the robustness of the algorithm. The algorithm presented in this paper provides a platform for a noninvasive, multidimensional eye measurement system which can be used for clinical and research applications requiring the precise recording of eye movements in three-dimensional space.

  14. A novel speech processing algorithm based on harmonicity cues in cochlear implant

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Chen, Yousheng; Zhang, Zongping; Chen, Yan; Zhang, Weifeng

    2017-08-01

    This paper proposes a novel speech processing algorithm for cochlear implants, which uses harmonicity cues to enhance tonal information in Mandarin Chinese speech recognition. The input speech was filtered by a 4-channel band-pass filter bank. The frequency ranges for the four bands were 300-621, 621-1285, 1285-2657, and 2657-5499 Hz. In each pass band, temporal envelope and periodicity cues (TEPCs) below 400 Hz were extracted by full-wave rectification and low-pass filtering. The TEPCs were modulated by a sinusoidal carrier whose frequency was the fundamental frequency (F0) harmonic closest to the center frequency of each band. Signals from all bands were combined to obtain the output speech. Mandarin tone, word, and sentence recognition in quiet listening conditions were tested for the widely used continuous interleaved sampling (CIS) strategy and the novel F0-harmonic algorithm. The results showed that the F0-harmonic algorithm performed consistently better than the CIS strategy in Mandarin tone, word, and sentence recognition. In addition, the sentence recognition rate was higher than the word recognition rate, as a result of contextual information in the sentences. Moreover, tones 3 and 4 performed better than tones 1 and 2, due to the easily identified features of the former. In conclusion, the F0-harmonic algorithm can enhance tonal information in cochlear implant speech processing through the use of harmonicity cues, thereby improving Mandarin tone, word, and sentence recognition. Further work will focus on testing the F0-harmonic algorithm in noisy listening conditions.
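The envelope-extraction step (full-wave rectification followed by low-pass filtering) can be sketched with a moving-average filter standing in for the 400 Hz low-pass. The carrier and modulator frequencies below are illustrative, not the paper's band edges.

```python
import numpy as np

def envelope(x, win):
    """Temporal envelope via full-wave rectification followed by a
    moving-average low-pass filter; win (in samples) sets the cutoff."""
    rect = np.abs(x)                      # full-wave rectification
    kernel = np.ones(win) / win           # crude low-pass (moving average)
    return np.convolve(rect, kernel, mode="same")

fs = 8000
t = np.arange(fs) / fs
mod = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)   # slow modulator (the "cue")
carrier = np.sin(2 * np.pi * 1000 * t)        # in-band carrier
env = envelope(mod * carrier, win=200)        # tracks mod, not the carrier
```

In the F0-harmonic strategy this recovered envelope would then be re-imposed on a sinusoid at the F0 harmonic nearest each band's center frequency before the bands are summed.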

  15. The convergence analysis of SpikeProp algorithm with smoothing L1∕2 regularization.

    PubMed

    Zhao, Junhong; Zurada, Jacek M; Yang, Jie; Wu, Wei

    2018-07-01

    Unlike the first- and second-generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel variant of the SpikeProp algorithm for SNNs that introduces a smoothing L1∕2 regularization term into the error function. This algorithm makes the network structure sparse, with some smaller weights that can eventually be removed. Meanwhile, the convergence of this algorithm is proved under some reasonable conditions. The proposed algorithms have been tested for convergence speed, convergence rate, and generalization on the classical XOR problem, the Iris problem, and Wisconsin Breast Cancer classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
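The "smoothing" in a smoothing L1∕2 term replaces |w| near zero with a polynomial that matches |w| and its derivative at a threshold a, so the regularizer stays differentiable. The polynomial below is one common choice from the smoothing-L1∕2 literature; whether it is this paper's exact smoothing function is not claimed, and the constants are illustrative.

```python
import numpy as np

def smoothed_abs(w, a=0.1):
    """Polynomial smoothing of |w| near zero: equals |w| for |w| >= a and
    a quartic polynomial for |w| < a, matching value and slope at |w| = a."""
    w = np.asarray(w, dtype=float)
    inner = -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8
    return np.where(np.abs(w) >= a, np.abs(w), inner)

def l_half_penalty(w, lam=0.01, a=0.1):
    """Smoothed L1/2 regularizer: lam * sum_i sqrt(smoothed |w_i|),
    added to the network's error function during training."""
    return lam * float(np.sum(np.sqrt(smoothed_abs(w, a))))
```

Because the square root grows steeply near zero, the penalty favors driving small weights all the way to (near) zero, which is what produces the sparse network structure described above.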

  16. Improving Protein Fold Recognition by Deep Learning Networks.

    PubMed

    Jo, Taeho; Hou, Jie; Eickholt, Jesse; Cheng, Jianlin

    2015-12-04

    For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict whether a given query-template protein pair belongs to the same structural fold. The input stemmed from the protein sequence and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75 consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold for Top 1 predictions is 84.5%, 61.5%, and 33.6%, and for Top 5 it is 91.2%, 76.5%, and 60.7% at the family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed comparable results at the family and superfamily levels relative to ensembled DN-Fold. Finally, we extended the binary classification problem of fold recognition to a real-valued regression task, which also shows promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.

  17. Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)

    NASA Astrophysics Data System (ADS)

    Iqtait, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Biometrics is a pattern recognition field used for the automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task and is usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localization of feature points is a complicated and difficult issue in face recognition. Cootes proposed the Multi-Resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement on ASM, the Active Appearance Model (AAM) algorithm extracts both the shape and the texture of a specified object simultaneously. In this paper we give more details about the two algorithms and report the results of experiments testing their performance on one dataset of faces. We found that the ASM is faster and achieves more accurate feature point localization than the AAM, but the AAM achieves a better match to the texture.

  18. Infrared vehicle recognition using unsupervised feature learning based on K-feature

    NASA Astrophysics Data System (ADS)

    Lin, Jin; Tan, Yihua; Xia, Haijiao; Tian, Jinwen

    2018-02-01

    Subject to the complex battlefield environment, it is difficult to establish a complete knowledge base in practical applications of vehicle recognition algorithms. Infrared vehicle recognition, which plays an important role in remote sensing, remains difficult and challenging. In this paper we propose a new unsupervised feature learning method based on K-feature to recognize vehicles in infrared images. First, we apply a saliency-based target detection algorithm to the initial image. Then, unsupervised feature learning based on K-feature, which is generated by a K-means clustering algorithm that extracts features by learning a visual dictionary from a large number of unlabeled samples, is used to suppress false alarms and improve accuracy. Finally, the vehicle recognition image is produced by post-processing. A large number of experiments demonstrate that the proposed method achieves satisfactory recognition effectiveness and robustness for vehicle recognition in infrared images under complex backgrounds, and that it also improves reliability.

  19. Wide-threat detection: recognition of adversarial missions and activity patterns in Empire Challenge 2009

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Shabarekh, Charlotte; Furjanic, Caitlin

    2011-06-01

    In this paper, we present results of adversarial activity recognition using data collected in the Empire Challenge (EC 09) exercise. The EC09 experiment provided an opportunity to evaluate our probabilistic spatiotemporal mission recognition algorithms using data from live airborne and ground sensors. Using ambiguous and noisy data about locations of entities and motion events on the ground, the algorithms inferred the types and locations of OPFOR activities, including reconnaissance, cache runs, IED emplacements, logistics, and planning meetings. In this paper, we present a detailed summary of the validation study and recognition accuracy results. Our algorithms were able to detect the locations and types of over 75% of hostile activities in EC09 while producing 25% false alarms.

  20. An adaptive deep Q-learning strategy for handwritten digit recognition.

    PubMed

    Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min

    2018-02-22

    Handwritten digit recognition has been a challenging problem in recent years. Although many deep learning-based classification algorithms have been studied for handwritten digit recognition, the recognition accuracy and running time still need to be further improved. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning and the decision-making of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of original images using an adaptive deep auto-encoder (ADAE), and the extracted features are taken as the current states of the Q-learning algorithm. Second, Q-ADBN receives the Q-function (reward signal) during recognition of the current states, and the final handwritten digit recognition is implemented by maximizing the Q-function using the Q-learning algorithm. Finally, experimental results on the well-known MNIST dataset show that the proposed Q-ADBN is superior to other similar methods in terms of accuracy and running time. Copyright © 2018 Elsevier Ltd. All rights reserved.
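The Q-learning update at the heart of the strategy can be shown in its tabular form. The deep auto-encoder feature extraction of Q-ADBN is omitted here, and the chain environment (move left/right, reward at the rightmost state) is a toy stand-in, not the paper's recognition task.

```python
import random

def q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy chain MDP. Actions: 0 = left, 1 = right;
    reward 1.0 on reaching the rightmost (terminal) state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: Q[s][i])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: move toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning()
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(4)]  # greedy policy
```

In Q-ADBN the tabular states are replaced by auto-encoder features and the greedy maximization over Q drives the final digit decision.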

  1. Appearance-based face recognition and light-fields.

    PubMed

    Gross, Ralph; Matthews, Iain; Baker, Simon

    2004-04-01

    Arguably the most important decision to be made when developing an object recognition algorithm is selecting the scene measurements or features on which to base the algorithm. In appearance-based object recognition, the features are chosen to be the pixel intensity values in an image of the object. These pixel intensities correspond directly to the radiance of light emitted from the object along certain rays in space. The set of all such radiance values over all possible rays is known as the plenoptic function or light-field. In this paper, we develop a theory of appearance-based object recognition from light-fields. This theory leads directly to an algorithm for face recognition across pose that uses as many images of the face as are available, from one upwards. All of the pixels, whichever image they come from, are treated equally and used to estimate the (eigen) light-field of the object. The eigen light-field is then used as the set of features on which to base recognition, analogously to how the pixel intensities are used in appearance-based face and object recognition.
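Estimating a subspace from pixel samples and matching in that subspace can be sketched with plain PCA. This is only a simplified analogue of the eigen light-field idea: flattened random vectors stand in for face images, and the data below are synthetic.

```python
import numpy as np

def pca_basis(X, k):
    """Mean and top-k principal directions of row-vector samples X."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(x, mu, V):
    """Coefficients of x in the learned subspace (the 'features')."""
    return V @ (x - mu)

rng = np.random.default_rng(0)
# two synthetic "identities", several noisy appearances each
id_a = rng.normal(0, 1, 20)
id_b = rng.normal(3, 1, 20)
gallery = np.vstack([id_a + rng.normal(0, 0.1, 20) for _ in range(5)] +
                    [id_b + rng.normal(0, 0.1, 20) for _ in range(5)])
labels = ["A"] * 5 + ["B"] * 5
mu, V = pca_basis(gallery, k=3)
feats = np.array([project(g, mu, V) for g in gallery])

def recognize(x):
    """Nearest-neighbor match in the subspace coefficients."""
    d = np.linalg.norm(feats - project(x, mu, V), axis=1)
    return labels[int(d.argmin())]
```

The paper's contribution goes further: the coefficients are estimated from however many images are available, across pose, against a light-field basis rather than a single-image basis.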

  2. A Fault Recognition System for Gearboxes of Wind Turbines

    NASA Astrophysics Data System (ADS)

    Yang, Zhiling; Huang, Haiyue; Yin, Zidong

    2017-12-01

    Costs of maintenance and loss of power generation caused by the faults of wind turbines gearboxes are the main components of operation costs for a wind farm. Therefore, the technology of condition monitoring and fault recognition for wind turbines gearboxes is becoming a hot topic. A condition monitoring and fault recognition system (CMFRS) is presented for CBM of wind turbines gearboxes in this paper. The vibration signals from acceleration sensors at different locations of gearbox and the data from supervisory control and data acquisition (SCADA) system are collected to CMFRS. Then the feature extraction and optimization algorithm is applied to these operational data. Furthermore, to recognize the fault of gearboxes, the GSO-LSSVR algorithm is proposed, combining the least squares support vector regression machine (LSSVR) with the Glowworm Swarm Optimization (GSO) algorithm. Finally, the results show that the fault recognition system used in this paper has a high rate for identifying three states of wind turbines’ gears; besides, the combination of date features can affect the identifying rate and the selection optimization algorithm presented in this paper can get a pretty good date feature subset for the fault recognition.

  3. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

    There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26–52 hours. The algorithm was then tested on data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake, and time-of-day information, with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, the accuracy of predictions for individuals categorized with low performance impairment was as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in response to sleep loss. PMID:22959616

  4. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    PubMed

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive-aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examine the effect of setting different parameter values in the proposed algorithms and evaluate the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition applications.
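The passive-aggressive principle the first algorithm builds on is: leave the model alone when the current example's margin is satisfied, otherwise make the smallest update that fixes it. Below is the classic PA-I step for a linear classifier; the paper applies the same idea to a feature-transformation matrix, but a weight vector keeps the sketch short, and the data stream is synthetic.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One passive-aggressive (PA-I) step for a linear classifier.
    Passive: zero hinge loss -> no change. Aggressive: otherwise take the
    minimal-norm update that satisfies the margin, with step clipped at C."""
    loss = max(0.0, 1.0 - y * float(w @ x))     # hinge loss
    if loss == 0.0:
        return w
    tau = min(C, loss / float(x @ x))           # clipped step size
    return w + tau * y * x

rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(200):
    x = rng.normal(0, 1, 2)
    y = 1.0 if x[0] + x[1] > 0 else -1.0        # linearly separable stream
    w = pa_update(w, x, y)
```

After the stream, `w` points roughly along (1, 1), the true separating direction; in the paper the analogous closed-form update adjusts the similarity metric instead of a single hyperplane.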

  5. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Reports in the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face matching is approximately the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
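    The matching rule lends itself to a compact sketch. The toy version below substitutes zlib for JPEG (an assumption made purely for illustration; the paper uses a standard image codec), and its CCR formula is one plausible composite, not necessarily the paper's exact definition. The principle is the same: a probe concatenated with a similar gallery image compresses better than with a dissimilar one.

```python
import zlib

def csize(data: bytes) -> int:
    """Size of the compressed byte string."""
    return len(zlib.compress(data, 9))

def ccr(probe: bytes, gallery: bytes) -> float:
    """Composite compression ratio from the three compressed sizes
    (probe, gallery, and the mixed probe+gallery data). Larger CCR
    means more shared structure, i.e. a better match."""
    return (csize(probe) + csize(gallery)) / csize(probe + gallery)

def match(probe: bytes, gallery: list) -> int:
    """Index of the gallery entry with the largest CCR."""
    return max(range(len(gallery)), key=lambda i: ccr(probe, gallery[i]))
```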

  6. Dose reduction in abdominal computed tomography: intraindividual comparison of image quality of full-dose standard and half-dose iterative reconstructions with dual-source computed tomography.

    PubMed

    May, Matthias S; Wüst, Wolfgang; Brand, Michael; Stahl, Christian; Allmendinger, Thomas; Schmidt, Bernhard; Uder, Michael; Lell, Michael M

    2011-07-01

    We sought to evaluate the image quality of iterative reconstruction in image space (IRIS) in half-dose (HD) datasets compared with full-dose (FD) and HD filtered back projection (FBP) reconstruction in abdominal computed tomography (CT). To acquire data with FD and HD simultaneously, contrast-enhanced abdominal CT was performed with a dual-source CT system, both tubes operating at 120 kV, 100 ref.mAs, and pitch 0.8. Three different image datasets were reconstructed from the raw data: standard FD images applying FBP, which served as reference, HD images applying FBP, and HD images applying IRIS. For the HD datasets, only data from one tube-detector system were used. Quantitative image quality analysis was performed by measuring image noise in tissue and air. Qualitative image quality was evaluated according to the European Guidelines on Quality Criteria for CT. Additional assessment of artifacts, lesion conspicuity, and edge sharpness was performed. Image noise in soft tissue was substantially decreased in HD-IRIS (-3.4 HU, -22%) and increased in HD-FBP (+6.2 HU, +39%) images when compared with the reference (mean noise, 15.9 HU). No significant differences between the FD-FBP and HD-IRIS images were found for visually sharp anatomic reproduction, overall diagnostic acceptability (P = 0.923), lesion conspicuity (P = 0.592), and edge sharpness (P = 0.589), while HD-FBP was rated inferior. Streak artifacts and beam hardening were significantly more prominent in HD-FBP, while HD-IRIS images exhibited a slightly different noise pattern. Direct intrapatient comparison of standard FD body protocols and HD-IRIS reconstruction suggests that the latest iterative reconstruction algorithms allow for approximately 50% dose reduction without deterioration of the high image quality necessary for confident diagnosis.

  7. Mars atmospheric dust properties: A synthesis of Mariner 9, Viking, and Phobos observations

    NASA Technical Reports Server (NTRS)

    Clancy, R. T.; Lee, S. W.; Gladstone, G. R.

    1993-01-01

    We have modified a doubling-and-adding code to reanalyze the Mariner 9 IRIS spectra of Mars atmospheric dust as well as Viking IRTM EPF sequences in the 7, 9, and 20 micron channels. The code is capable of accurate emission/absorption/scattering radiative transfer calculations over the 5-30 micron wavelength region for variable dust composition and particle size inputs, and incorporates both the Viking IRTM channel weightings and the Mariner 9 IRIS wavelength resolution for direct comparisons to these datasets. We adopt atmospheric temperature profiles according to the algorithm of Martin (1986) in the case of the Viking IRTM comparisons, and obtained Mariner 9 IRIS temperature retrievals from the 15 micron CO2 band for the case of the IRIS comparisons. We consider palagonite as the primary alternative to the montmorillonite composition of Mars atmospheric dust, based on several considerations. Palagonite absorbs in the ultraviolet and visible wavelength region due to its Fe content. Palagonite is also, in principle, consistent with the observed lack of clays on the Mars surface. Furthermore, palagonite does not display strong, structured absorption near 20 microns as does montmorillonite (in conflict with the IRIS observations). We propose that a palagonite composition with particle sizes roughly one-half those of the Toon et al. (1977) determination provides a much improved model of Mars atmospheric dust. Since palagonite is a common weathering product of terrestrial basalts, it would not be unreasonable for palagonite to be a major surface component for Mars. The lack of even a minor component of Al-rich clays on the surface of Mars could be consistent with a palagonite composition for Mars dust if the conditions for basalt weathering on Mars were sufficiently anhydrous. Variations in palagonite composition could also lead to the inability of the modeled palagonite to fit the details of the 9 micron absorption indicated by the IRIS observations.

  8. Sparse coding joint decision rule for ear print recognition

    NASA Astrophysics Data System (ADS)

    Guermoui, Mawloud; Melaab, Djamel; Mekhalfi, Mohamed Lamine

    2016-09-01

    Human ear recognition has been promoted as a promising biometric over the past few years. Compared with other modalities, such as the face and iris, that have undergone significant investigation in the literature, the ear pattern is still relatively uncommon. We put forth a sparse coding-induced decision-making scheme for ear recognition. It jointly involves the reconstruction residuals and the respective reconstruction coefficients pertaining to the input features (co-occurrence of adjacent local binary patterns) for a further fusion. We particularly show that combining both components (i.e., the residuals as well as the coefficients) yields better outcomes than when either of them is considered singly. The proposed method has been evaluated on two benchmark datasets, namely IITD1 (125 subjects) and IITD2 (221 subjects). The recognition rates of the suggested scheme amount to 99.5% and 98.95% on the two datasets, respectively, which suggests that our method compares favorably with reference state-of-the-art methodologies. Furthermore, experiments conclude that the presented scheme manifests a promising robustness under large-scale occlusion scenarios.

  9. Comparison of eye imaging pattern recognition using neural network

    NASA Astrophysics Data System (ADS)

    Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.

    2015-05-01

    The strength of an eye recognition system is that it can automatically identify and verify a person, whether from digital images or a video source. The eye has various characteristics, such as iris color, pupil size, and eye shape. This study presents the analysis, design, and implementation of a system for eye-image recognition. All eye images captured from the webcam in RGB format must pass through several processing techniques before they can serve as input to the pattern recognition process. The results show that the final weight and bias values obtained after training six eye images for one subject are memorized by the neural network system and serve as the reference weight and bias values for the testing stage. The targets are classified into 5 different types for 5 subjects. The eye images can then be matched to a subject based on the target set during the training process. When the values of a new eye image and an eye image in the database are almost equal, the eye image is considered a match.

  10. A Palmprint Recognition Algorithm Using Phase-Only Correlation

    NASA Astrophysics Data System (ADS)

    Ito, Koichi; Aoki, Takafumi; Nakajima, Hiroshi; Kobayashi, Koji; Higuchi, Tatsuo

    This paper presents a palmprint recognition algorithm using Phase-Only Correlation (POC). The use of phase components in 2D (two-dimensional) discrete Fourier transforms of palmprint images makes it possible to achieve highly robust image registration and matching. In the proposed algorithm, POC is used to align scaling, rotation, and translation between two palmprint images and to evaluate the similarity between them. Experimental evaluation using a palmprint image database clearly demonstrates the efficient matching performance of the proposed algorithm.
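    The core of POC matching is easy to sketch. The snippet below is an illustrative sketch, not the authors' implementation, and omits their scale/rotation alignment stage: it computes the POC surface of two images, which for a pure circular translation is a delta-like peak whose location encodes the displacement and whose height (up to 1.0) measures similarity.

```python
import numpy as np

def poc_surface(f: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Phase-Only Correlation of two same-size grayscale images:
    normalize the cross-power spectrum to unit magnitude (keeping
    only the phase) and transform back."""
    cross = np.fft.fft2(f) * np.conj(np.fft.fft2(g))
    cross /= np.abs(cross) + 1e-12     # discard magnitude, keep phase
    return np.real(np.fft.ifft2(cross))

def poc_match(f: np.ndarray, g: np.ndarray):
    """Peak height (similarity score) and peak location (translation)."""
    surface = poc_surface(f, g)
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    return surface[peak], peak
```

    For two images of the same palm the peak is sharp and high; for different palms the surface stays low and diffuse, which is what makes the peak height usable as a match score.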

  11. Simulation and performance of an artificial retina for 40 MHz track reconstruction

    DOE PAGES

    Abba, A.; Bedeschi, F.; Citterio, M.; ...

    2015-03-05

    We present the results of a detailed simulation of the artificial retina pattern-recognition algorithm, designed to reconstruct events with hundreds of charged-particle tracks in pixel and silicon detectors at LHCb at the LHC crossing frequency of 40 MHz. The performance of the artificial retina algorithm is assessed using the official Monte Carlo samples of the LHCb experiment. We found the performance of the retina pattern-recognition algorithm comparable with that of the full LHCb reconstruction algorithm.

  12. Reinterpretation of Mariner 9 IRIS data on the basis of a simulation of radiative-conductive convective transfer in the dust laden Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Pallman, A. J.

    1974-01-01

    Time dependent vertical distributions of atmospheric temperature and static stability were determined by a radiative-convective-conductive heat transfer model attuned to Mariner 9 IRIS radiance data. Of particular interest were conditions of both the dust-laden and dust-free atmosphere in the middle latitudes on Mars during the late S.H. summer season. The numerical model simulates at high spatial and temporal resolution (52 atmospheric and 30 subsurface levels; with a time-step of 7.5 min.) the heat transports in the ground-atmosphere system. The algorithm is based on the solution of the appropriate heating rate equation which includes radiative, molecular-conductive and convective heat transfer terms. Ground and atmosphere are coupled by an internal thermal boundary condition.

  13. DCL System Research Using Advanced Approaches for Land-based or Ship-based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2012-09-30

    Topics: recognition; algorithm design; statistical analysis and feature analysis (Post-Doctoral Associate, Cornell University, Bioacoustics Research Program). The HPC-ADA was designed based on fielded systems [1-4, 6] that offer a variety of desirable attributes, specifically dynamic resource... The DeLMA software package was designed to utilize parallel and distributed processing for running recognition and other advanced algorithms.

  14. Using an Improved SIFT Algorithm and Fuzzy Closed-Loop Control Strategy for Object Recognition in Cluttered Scenes

    PubMed Central

    Nie, Haitao; Long, Kehui; Ma, Jun; Yue, Dan; Liu, Jinguo

    2015-01-01

    Partial occlusions, large pose variations, and extreme ambient illumination conditions generally cause performance degradation of object recognition systems. Therefore, this paper presents a novel approach for fast and robust object recognition in cluttered scenes based on an improved scale invariant feature transform (SIFT) algorithm and a fuzzy closed-loop control method. First, a fast SIFT algorithm is proposed by classifying SIFT features into several clusters based on several attributes computed from the sub-orientation histogram (SOH); in the feature matching phase, only features that share nearly the same corresponding attributes are compared. Second, a feature matching step is performed following a prioritized order based on the scale factor, which is calculated between the object image and the target object image, guaranteeing robust feature matching. Finally, a fuzzy closed-loop control strategy is applied to increase the accuracy of the object recognition and is essential for the autonomous object manipulation process. Compared to the original SIFT algorithm for object recognition, the results of the proposed method show that the number of SIFT features extracted from an object increases significantly, and the computing speed of the object recognition process increases by more than 40%. The experimental results confirmed that the proposed method performs effectively and accurately in cluttered scenes. PMID:25714094

  15. Application synergies between the NASA Pre- Aerosol Cloud and ocean Ecosystem (PACE) and Hyperspectral Infrared Imager (HyspIRI) missions

    NASA Astrophysics Data System (ADS)

    Lee, C. M.; Omar, A. H.; Hook, S. J.; Tzortziou, M.; Luvall, J. C.; Turner, W. W.

    2016-02-01

    Observations from the Pre-Aerosol Cloud and ocean Ecosystem (PACE) and Hyperspectral InfraRed Imager (HyspIRI) satellite missions are highly complementary and have the potential to significantly advance understanding of various science and applications challenges in the ocean sciences and water quality communities. Scheduled for launch in the 2022 timeframe, PACE is designed to make climate-quality global measurements essential for understanding ocean biology, biogeochemistry and ecology, and determining the role of the ocean in global biogeochemical cycling and ocean ecology, and how it affects and is affected by climate change. PACE will provide high signal-to-noise, hyperspectral observations over an extended spectral range (UV to SWIR) and will have global coverage every 1-2 days, at approximately 1 km spatial resolution; furthermore, PACE is currently designed to include a polarimeter, which will vastly improve atmospheric correction algorithms over water bodies. The PACE mission will enable advances in applications across a range of areas, including oceans, climate, water resources, ecological forecasting, disasters, human health and air quality. HyspIRI, with contiguous measurements in VSWIR, and multispectral measurements in TIR, will be able to provide detailed spectral observations and higher spatial resolution (30 to 60-m) over aquatic systems, but at a temporal resolution that is approximately 5-16 days. HyspIRI would enable improved, detailed studies of aquatic ecosystems, including benthic communities, algal blooms, coral reefs, and wetland species distribution as well as studies of water quality indicators or pollutants such as oil spills, suspended sediment, and colored dissolved organic matter. Together, PACE and HyspIRI will be able to address numerous applications and science priorities, including improving and extending climate data records, and studies of inland, coastal and ocean environments.

  16. False match elimination for face recognition based on SIFT algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Xuyuan; Shi, Ping; Shao, Meide

    2011-06-01

    The SIFT (Scale Invariant Feature Transform) is a well-known algorithm used to detect and describe local features in images. It is invariant to image scale and rotation and robust to noise and illumination. In this paper, a novel SIFT-based method for face recognition is proposed, which combines the optimization of SIFT, mutual matching, and Progressive Sample Consensus (PROSAC), and can effectively eliminate false matches in face recognition. Experiments on the ORL face database show that many false matches can be eliminated and a better recognition rate is achieved.
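    Of the three filters, mutual matching (cross-checking) is the simplest to sketch. The descriptors below are hypothetical toy vectors, and the SIFT optimization and PROSAC stages are omitted; this is only an illustration of the cross-check idea:

```python
import numpy as np

def mutual_matches(desc_a: np.ndarray, desc_b: np.ndarray):
    """Keep a match (i, j) only if j is i's nearest neighbour in B
    *and* i is j's nearest neighbour in A; one-sided matches are
    discarded as likely false matches."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best partner in B for each row of A
    b_to_a = d.argmin(axis=0)   # best partner in A for each row of B
    return [(i, int(j)) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

    Surviving pairs can then be passed to a robust estimator such as PROSAC to reject any remaining geometric outliers.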

  17. Beat-ID: Towards a computationally low-cost single heartbeat biometric identity check system based on electrocardiogram wave morphology

    PubMed Central

    Paiva, Joana S.; Dias, Duarte

    2017-01-01

    In recent years, safer and more reliable biometric methods have been developed. Apart from the need for enhanced security, the media and entertainment sectors have also been applying biometrics in the emerging market of user-adaptable objects/systems to make these systems more user-friendly. However, the complexity of some state-of-the-art biometric systems (e.g., iris recognition) or their high false rejection rate (e.g., fingerprint recognition) is neither compatible with the simple hardware architecture required by reduced-size devices nor the new trend of implementing smart objects within the dynamic market of the Internet of Things (IoT). It was recently shown that an individual can be recognized by extracting features from their electrocardiogram (ECG). However, most current ECG-based biometric algorithms are computationally demanding and/or rely on relatively large (several seconds) ECG samples, which are incompatible with the aforementioned application fields. Here, we present a computationally low-cost method (patent pending), including simple mathematical operations, for identifying a person using only three ECG morphology-based characteristics from a single heartbeat. The algorithm was trained/tested using ECG signals of different duration from the Physionet database on more than 60 different training/test datasets. The proposed method achieved maximal averaged accuracy of 97.450% in distinguishing each subject from a ten-subject set and false acceptance and rejection rates (FAR and FRR) of 5.710±1.900% and 3.440±1.980%, respectively, placing Beat-ID in a very competitive position in terms of the FRR/FAR among state-of-the-art methods. Furthermore, the proposed method can identify a person using an average of 1.020 heartbeats. It therefore has FRR/FAR behavior similar to obtaining a fingerprint, yet it is simpler and requires less expensive hardware. 
This method targets low-computational/energy-cost scenarios, such as tiny wearable devices (e.g., a smart object that automatically adapts its configuration to the user). A hardware proof-of-concept implementation is presented as an annex to this paper. PMID:28719614

  18. Beat-ID: Towards a computationally low-cost single heartbeat biometric identity check system based on electrocardiogram wave morphology.

    PubMed

    Paiva, Joana S; Dias, Duarte; Cunha, João P S

    2017-01-01

    In recent years, safer and more reliable biometric methods have been developed. Apart from the need for enhanced security, the media and entertainment sectors have also been applying biometrics in the emerging market of user-adaptable objects/systems to make these systems more user-friendly. However, the complexity of some state-of-the-art biometric systems (e.g., iris recognition) or their high false rejection rate (e.g., fingerprint recognition) is neither compatible with the simple hardware architecture required by reduced-size devices nor the new trend of implementing smart objects within the dynamic market of the Internet of Things (IoT). It was recently shown that an individual can be recognized by extracting features from their electrocardiogram (ECG). However, most current ECG-based biometric algorithms are computationally demanding and/or rely on relatively large (several seconds) ECG samples, which are incompatible with the aforementioned application fields. Here, we present a computationally low-cost method (patent pending), including simple mathematical operations, for identifying a person using only three ECG morphology-based characteristics from a single heartbeat. The algorithm was trained/tested using ECG signals of different duration from the Physionet database on more than 60 different training/test datasets. The proposed method achieved maximal averaged accuracy of 97.450% in distinguishing each subject from a ten-subject set and false acceptance and rejection rates (FAR and FRR) of 5.710±1.900% and 3.440±1.980%, respectively, placing Beat-ID in a very competitive position in terms of the FRR/FAR among state-of-the-art methods. Furthermore, the proposed method can identify a person using an average of 1.020 heartbeats. It therefore has FRR/FAR behavior similar to obtaining a fingerprint, yet it is simpler and requires less expensive hardware. 
This method targets low-computational/energy-cost scenarios, such as tiny wearable devices (e.g., a smart object that automatically adapts its configuration to the user). A hardware proof-of-concept implementation is presented as an annex to this paper.

  19. Personal identification by eyes.

    PubMed

    Marinović, Dunja; Njirić, Sanja; Coklo, Miran; Muzić, Vedrana

    2011-09-01

    Identification of persons through the eyes belongs to the field of biometric science. Many security systems are based on biometric methods of personal identification, to determine whether a person is who they claim to be. The human eye contains an extremely large number of individual characteristics that make it particularly suitable for the process of identifying a person. Today, the eye is considered to be one of the most reliable body parts for human identification. Systems using iris recognition are among the most secure biometric systems.

  20. Implementation and preliminary evaluation of 'C-tone': A novel algorithm to improve lexical tone recognition in Mandarin-speaking cochlear implant users.

    PubMed

    Ping, Lichuan; Wang, Ningyuan; Tang, Guofang; Lu, Thomas; Yin, Li; Tu, Wenhe; Fu, Qian-Jie

    2017-09-01

    Because of limited spectral resolution, Mandarin-speaking cochlear implant (CI) users have difficulty perceiving fundamental frequency (F0) cues that are important to lexical tone recognition. To improve Mandarin tone recognition in CI users, we implemented and evaluated a novel real-time algorithm (C-tone) to enhance the amplitude contour, which is strongly correlated with the F0 contour. The C-tone algorithm was implemented in clinical processors and evaluated in eight users of the Nurotron NSP-60 CI system. Subjects were given 2 weeks of experience with C-tone. Recognition of Chinese tones, monosyllables, and disyllables in quiet was measured with and without the C-tone algorithm. Subjective quality ratings were also obtained for C-tone. After 2 weeks of experience with C-tone, there were small but significant improvements in recognition of lexical tones, monosyllables, and disyllables (P < 0.05 in all cases). Among lexical tones, the largest improvements were observed for Tone 3 (falling-rising) and the smallest for Tone 4 (falling). Improvements with C-tone were greater for disyllables than for monosyllables. Subjective quality ratings showed no strong preference for or against C-tone, except for perception of own voice, where C-tone was preferred. The real-time C-tone algorithm provided small but significant improvements for speech performance in quiet with no change in sound quality. Pre-processing algorithms to reduce noise and better real-time F0 extraction would improve the benefits of C-tone in complex listening environments. Chinese CI users' speech recognition in quiet can be significantly improved by modifying the amplitude contour to better resemble the F0 contour.

  1. Neural system for heartbeats recognition using genetically integrated ensemble of classifiers.

    PubMed

    Osowski, Stanislaw; Siwek, Krzysztof; Siroic, Robert

    2011-03-01

    This paper presents the application of a genetic algorithm for the integration of neural classifiers combined in an ensemble for the accurate recognition of heartbeat types on the basis of ECG registration. The idea presented in this paper is that using many classifiers arranged in an ensemble leads to increased recognition accuracy. In such an ensemble, the important problem is the integration of all classifiers into one effective classification system; this paper proposes the use of a genetic algorithm for that purpose. It was shown that application of the genetic algorithm is very efficient and significantly reduces the total error of heartbeat recognition. This was confirmed by numerical experiments performed on the MIT-BIH Arrhythmia Database. Copyright © 2011 Elsevier Ltd. All rights reserved.
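    The integration step can be illustrated with a toy genetic algorithm that evolves per-classifier voting weights. This is a generic sketch under assumed design choices (real-valued weight chromosomes, one-point crossover, point mutation, elitist selection), not the authors' exact scheme:

```python
import random

def ga_integrate(preds, labels, pop_size=30, gens=40, seed=1):
    """Evolve per-classifier weights so that the weighted vote of
    the ensemble agrees with the reference labels as often as
    possible. preds[c][k] is classifier c's label for sample k."""
    rng = random.Random(seed)
    n = len(preds)

    def vote(w, k):
        scores = {}
        for c in range(n):
            scores[preds[c][k]] = scores.get(preds[c][k], 0.0) + w[c]
        return max(scores, key=scores.get)

    def fitness(w):  # number of correctly voted samples
        return sum(vote(w, k) == labels[k] for k in range(len(labels)))

    population = [[rng.random() for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n) if n > 1 else 0  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)                       # point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.uniform(-0.2, 0.2)))
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)
```

    In this toy setting the fitness is validation accuracy of the weighted vote; a real system would evaluate on held-out beats to avoid overfitting the integration weights.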

  2. Virtual cohorts and face-to-face recruitment: Strategies for cultivating the next generation of the IRIS Community

    NASA Astrophysics Data System (ADS)

    Hubenthal, M.; Wysession, M. E.; Aster, R. C.

    2009-12-01

    Since 1998, the IRIS Consortium REU program has facilitated research opportunities and career development for 71 undergraduate students to work with leaders in seismological research, travel to exciting locations for fieldwork, and engage in significant research for presentation and recognition at major professional conferences. A principal program goal is to encourage more students, representing a more diverse population, to choose careers in Earth science. Of the forty-six internship alumni that have completed their undergraduate degrees thus far, 85% have attained or are currently pursuing a graduate degree in a geoscience field and an additional 6% are working in a geoscience career with an undergraduate degree. The IRIS Consortium’s program differs from traditional REUs in that students are hosted at IRIS member institutions that are geographically distributed. To capture the spirit of a traditional REU cohort, IRIS has developed and refined a model that bonds students into a cohort. Key to the model are: a) research projects that have a common focus within seismology, b) a weeklong orientation where students get to know one another, share common experiences and establish a “social presence” with the other interns, c) a cyber infrastructure to maintain their connectedness in a way that enables both learning and collaboration, d) an alumni mentor that supports the interns and serves both as a role model and an unbiased and experienced third-party to the mentor/mentee relationship, and e) an alumni reception, and scientific presentation, at the annual Fall AGU Meeting to reconnect and share experiences. Through their virtual community, interns offer each other assistance, share ideas, ask questions, and relate life experiences while conducting their own unique research. In addition to developing a model for encouraging virtual cohorts, IRIS has also carefully examined recruitment strategies to increase and diversify the applicant pool. 
Based on applicant surveys, we believe that the best method to advertise REU programs has shifted away from the traditional and expensive hardcopy fliers tacked to bulletin boards in the halls of science departments. Instead we have found that the two most common ways for students to learn about the program were by visiting the IRIS website to view video clips or slideshow presentations and via personal notification/encouragement from faculty or staff at their institutions. The importance of personal notification was even more pronounced for applicants from minority serving institutions. Given the importance of both the web and faculty advising in encouraging students to apply, the IRIS REU has adopted the following three recruitment strategies: 1) Engage students through the website by providing access to traditional text and photos, narrated video clips and other media, as well as links to previous interns’ blogs, 2) Empower and encourage faculty to recruit students by providing resources for easy use in classes such as annotated slideshows and narrated videos, and 3) Reach out to minority students personally through a speaker series featuring minority alumni of the IRIS REU program.

  3. Real-time polarization imaging algorithm for camera-based polarization navigation sensors.

    PubMed

    Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli

    2017-04-10

    Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes can benefit from their high spatial resolution but incur a heavy computation load. The pattern recognition step in most polarization imaging algorithms involves several nonlinear calculations that impose a significant computation burden. In this paper, the polarization imaging and pattern recognition algorithms are optimized through reduction to several linear calculations, without affecting precision, by exploiting the orthogonality of the Stokes parameters together with the features of the solar meridian and the patterns of the polarized skylight. The algorithm contains a pattern recognition algorithm with a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity. The test showed that the running time decreased from several thousand milliseconds to several tens of milliseconds. Through simulations and experiments, it was found that the algorithm can measure orientation without reducing precision. It can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
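    The linearization rests on standard Stokes-parameter relations. A minimal sketch using generic textbook formulas (not the paper's DSP code): with intensity readings behind polarizers at 0°, 45°, 90°, and 135°, the linear Stokes components follow from sums and differences, leaving a single atan2 for the angle of polarization.

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer orientations;
    only additions and subtractions are needed."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def angle_of_polarization(s1, s2):
    """Polarization angle in radians (the one nonlinear step)."""
    return 0.5 * math.atan2(s2, s1)
```

    Per-pixel angles of polarization computed this way form the sky pattern from which the solar meridian can then be extracted, e.g., with a Hough transform.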

  4. Image-algebraic design of multispectral target recognition algorithms

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    1994-06-01

    In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.

  5. Improving Automated Endmember Identification for Linear Unmixing of HyspIRI Spectral Data.

    NASA Astrophysics Data System (ADS)

    Gader, P.

    2016-12-01

    The size of data sets produced by imaging spectrometers is increasing rapidly, and there is already a processing bottleneck. Part of the reason for this bottleneck is the need for expert input using interactive software tools. This process can be very time consuming and laborious but is currently crucial to ensuring the quality of the analysis. Automated algorithms can mitigate this problem. Although it is unlikely that processing systems can become completely automated, there is an urgent need to increase the level of automation. Spectral unmixing is a key component of processing HyspIRI data. Algorithms such as MESMA have been demonstrated to achieve good results but require careful, expert construction of endmember libraries. Unfortunately, many endmembers found by automated endmember-finding algorithms are deemed unsuitable by experts because they are not physically reasonable. At the same time, endmembers that are not physically reasonable can achieve very low errors between the linear mixing model with those endmembers and the original data, so this error is not a reasonable way to resolve the problem of "non-physical" endmembers. There are many potential approaches for resolving these issues, including using Bayesian priors, but very little attention has been given to this problem. The study reported on here considers a modification of the Sparsity Promoting Iterated Constrained Endmember (SPICE) algorithm. SPICE finds endmembers and abundances and estimates the number of endmembers. The SPICE algorithm seeks to minimize a quadratic objective function with respect to endmembers E and fractions P. The modified SPICE algorithm, which we refer to as SPICED, is obtained by adding a term D to the objective function. The term D pressures the algorithm to minimize the sum of squared differences between each endmember and a weighted sum of the data. 
By appropriately modifying this term, the endmembers are pushed towards a subset of the data, with the potential to become exactly equal to data points. The algorithm has been applied to spectral data and the differences between the endmembers produced by the two algorithms were recorded. The results so far are that the endmembers found by SPICED are approximately 25% closer to the data, with indistinguishable reconstruction error, compared to those found using SPICE.
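The added term D, as described, can be read as a penalty tying each endmember to a weighted combination of data points. A minimal sketch of that reading follows; the weight matrix W, shapes, and names are assumptions for illustration, not the authors' code:

```python
import numpy as np

def spiced_penalty(E, X, W):
    """Illustrative SPICED-style term D: the sum of squared differences
    between each endmember and a weighted sum of the data points.

    E : (K, B) endmember spectra, X : (N, B) data spectra,
    W : (K, N) per-endmember weights over the data points.
    """
    reconstructed = W @ X              # each endmember as a weighted sum of data
    return float(np.sum((E - reconstructed) ** 2))

# Toy check: endmembers that ARE weighted sums of the data incur zero penalty.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
W = np.array([[1.0, 0.0, 0.0], [0.0, 0.5, 0.5]])
E = W @ X
penalty = spiced_penalty(E, X, W)
```

Minimizing such a term pulls endmembers toward the data cloud, which matches the reported behaviour of SPICED endmembers landing closer to the data.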

  6. Improved Hip-Based Individual Recognition Using Wearable Motion Recording Sensor

    NASA Astrophysics Data System (ADS)

    Gafurov, Davrondzhon; Bours, Patrick

In today's society the demand for reliable verification of a user's identity is increasing. Although biometric technologies based on fingerprint or iris can provide accurate and reliable recognition performance, they are inconvenient for periodic or frequent re-verification. In this paper we propose a hip-based user recognition method which can be suitable for implicit and periodic re-verification of identity. In our approach we use a wearable accelerometer sensor attached to the hip of the person, and the measured hip motion signal is then analysed for identity verification purposes. The main analysis steps consist of detecting gait cycles in the signal and matching two sets of detected gait cycles. Evaluating the approach on a hip data set consisting of 400 gait sequences (samples) from 100 subjects, we obtained an equal error rate (EER) of 7.5%, and the identification rate at rank 1 was 81.4%. These numbers are improvements of 37.5% and 11.2%, respectively, over a previous study using the same data set.
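The two analysis steps, gait-cycle detection and cycle matching, might be sketched as below. This is a deliberately simplified stand-in, not the authors' method: threshold-based peak picking and a mean-absolute-difference score are assumptions.

```python
def detect_cycles(signal, threshold):
    """Split an acceleration signal into gait cycles at prominent peaks.
    A sample is a cycle boundary if it exceeds `threshold` and its
    neighbours (a real system would smooth the signal and enforce a
    minimum cycle length first). The trailing partial cycle is dropped."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]
    return [signal[a:b] for a, b in zip(peaks, peaks[1:])]

def cycle_distance(c1, c2):
    """Mean absolute difference after truncating to the shorter cycle;
    matching two *sets* of cycles would aggregate many such scores."""
    n = min(len(c1), len(c2))
    return sum(abs(a - b) for a, b in zip(c1[:n], c2[:n])) / n

sig = [0, 1, 3, 1, 0] * 3          # three repeats of a simple step pattern
cycles = detect_cycles(sig, threshold=2)
```

A genuine-versus-impostor decision would threshold an aggregate of such distances; sweeping that threshold is what yields an EER like the 7.5% reported above.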

  7. Approximated mutual information training for speech recognition using myoelectric signals.

    PubMed

    Guo, Hua J; Chan, A D C

    2006-01-01

A new training algorithm called approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.

  8. A Review on State-of-the-Art Face Recognition Approaches

    NASA Astrophysics Data System (ADS)

    Mahmood, Zahid; Muhammad, Nazeer; Bibi, Nargis; Ali, Tauseef

Automatic Face Recognition (FR) presents a challenging task in the field of pattern recognition, and despite the huge amount of research in the past several decades, it still remains an open research problem. This is primarily due to the variability in facial images, such as non-uniform illumination, low resolution, occlusion, and/or variation in pose. Due to its non-intrusive nature, FR is an attractive biometric modality and has gained a lot of attention in the biometric research community. Driven by the enormous number of potential application domains, many algorithms have been proposed for FR. This paper presents an overview of state-of-the-art FR algorithms, focusing on their performance on publicly available databases. We highlight the conditions of the image databases with regard to the recognition rate of each approach. This is useful as a quick research overview and helps practitioners choose an algorithm for their specific FR application. To provide a comprehensive survey, the paper divides the FR algorithms into three categories: (1) intensity-based, (2) video-based, and (3) 3D-based FR algorithms. In each category, the most commonly used algorithms and their performance on standard face databases are reported, and a brief critical discussion is carried out.

  9. The Effect of Decomposition on the Efficacy of Biometrics for Positive Identification.

    PubMed

    Sauerwein, Kelly; Saul, Tiffany B; Steadman, Dawnie Wolfe; Boehnen, Chris B

    2017-11-01

Biometrics, unique measurable physiological and behavioral characteristics, are used to identify individuals in a variety of scenarios, including forensic investigations. However, data on the longevity of these indicators are incomplete. This study demonstrated that iris and fingerprint biometric data can be obtained up to four days postmortem in warmer seasons and 50+ days in the winter. It has been generally believed, but never studied, that iris recognition is only obtainable within the first 24 hours after death. However, this study showed that irises remain viable for longer (2-34 days), depending upon the environmental conditions. Temperature, precipitation, insects, and scavenger activity were the primary factors affecting the retention of biometrics in decomposing human remains. While this study is an initial step in determining the utility of physiological biometrics across postmortem time, biometric research has the potential to make important contributions to human identification and the law enforcement, military, and medicolegal communities. © 2017 American Academy of Forensic Sciences.

  10. Excimer Laser Surgery: Biometrical Iris Eye Recognition with Cyclorotational Control Eye Tracker System.

    PubMed

    Pajic, Bojan; Cvejic, Zeljka; Mijatovic, Zoran; Indjin, Dragan; Mueller, Joerg

    2017-05-25

A prospective comparative study assessing the importance of intra-operative dynamic rotational tracking, especially in the treatment of astigmatism in corneal refractive Excimer laser correction, with respect to clinical outcomes is presented. The cyclotorsion from upright to supine position was measured using iris image comparison. Group 1 patients were additionally treated with cyclorotational control and Group 2 only with X-Y control. Significant differences were observed between the groups regarding the mean postoperative cylinder refraction (p < 0.05). The mean cyclotorsion can be calculated to 3.75° with a standard deviation of 3.1°. The total range of torsion was from -14.9° to +12.6°. Re-treatment rate was 2.2% in Group 1 and 8.2% in Group 2, which is highly significant (p < 0.01). The investigation confirms that the dynamic rotational tracking system used for LASIK results in highly predictable refraction quality with significantly fewer postoperative re-treatments.

  11. Excimer Laser Surgery: Biometrical Iris Eye Recognition with Cyclorotational Control Eye Tracker System

    PubMed Central

    Pajic, Bojan; Cvejic, Zeljka; Mijatovic, Zoran; Indjin, Dragan; Mueller, Joerg

    2017-01-01

    A prospective comparative study assessing the importance of the intra-operative dynamic rotational tracking—especially in the treatment of astigmatisms in corneal refractive Excimer laser correction—concerning clinical outcomes is presented. The cyclotorsion from upright to supine position was measured using iris image comparison. The Group 1 of patients was additionally treated with cyclorotational control and Group 2 only with X-Y control. Significant differences were observed between the groups regarding the mean postoperative cylinder refraction (p < 0.05). The mean cyclotorsion can be calculated to 3.75° with a standard deviation of 3.1°. The total range of torsion was from −14.9° to +12.6°. Re-treatment rate was 2.2% in Group 1 and 8.2% in Group 2, which is highly significant (p < 0.01). The investigation confirms that the dynamic rotational tracking system used for LASIK results in highly predictable refraction quality with significantly less postoperative re-treatments. PMID:28587100

  12. Toward open set recognition.

    PubMed

    Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E

    2013-07-01

    To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
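The contrast with a closed-set classifier can be made concrete with a toy linear scorer: a plain binary SVM accepts any sample with positive score, while a 1-vs-set-style decision also rejects samples that land too far beyond the training data. The slab bounds below are invented for illustration; the paper obtains them from a constrained optimisation over the SVM's marginal distances.

```python
def linear_score(w, b, x):
    """Decision value of a linear classifier: w . x + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def one_vs_set_decide(score, near, far):
    """Accept as the known class only if the score falls in the slab
    between two parallel planes; scores beyond `far` are treated as
    unknown rather than as confident positives."""
    return near <= score <= far

w, b = [1.0, 0.0], 0.0
known_score = linear_score(w, b, [1.5, 0.0])     # typical positive sample
outlier_score = linear_score(w, b, [50.0, 0.0])  # far from all training data
```

A closed-set SVM would happily accept the outlier, since its score is strongly positive; the slab rejects it as unknown, which is the behaviour open-set evaluation rewards.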

  13. Upgrading CCIR's foF2 maps using available ionosondes and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gularte, Erika; Carpintero, Daniel D.; Jaen, Juliana

    2018-04-01

We have developed a new approach towards a new database of the ionospheric parameter foF2. This parameter, the frequency at the maximum of the ionospheric electron density profile and its main modeller, is of great interest not only in atmospheric studies but also in the realm of radio propagation. The current databases, generated by CCIR (International Radio Consultative Committee) and URSI (International Union of Radio Science), and used by the IRI (International Reference Ionosphere) model, are based on Fourier expansions and were built in the 1960s from the ionosondes available at that time. The main goal of this work is to upgrade the databases using newly available ionosonde data. To this end we used the IRI diurnal/spherical expansions to represent the foF2 variability, and computed its coefficients by means of a genetic algorithm (GA). In order to test the performance of the proposed methodology, we applied it to the South American region with data obtained by RAPEAS (Red Argentina para el Estudio de la Atmósfera Superior, i.e. Argentine Network for the Study of the Upper Atmosphere) during the years 1958-2009. The new GA coefficients provide a globally better fit of the IRI model to the observed foF2 than the CCIR coefficients. Since the same formulae and the same number of coefficients were used, the overall integrity of IRI's typical ionospheric feature representation was preserved. The best improvements with respect to CCIR are obtained at low solar activities, at large (in absolute value) modip latitudes, and at night-time. The new method is flexible in the sense that it can be applied either globally or regionally. It is also very easy to recompute the coefficients when new data become available. The computation of a third set of coefficients corresponding to days of medium solar activity, in order to avoid interpolation between low and high activities, is suggested. The same procedure as for foF2 can be performed to obtain the ionospheric parameter M(3000)F2.
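The coefficient-fitting idea can be sketched with a toy genetic algorithm minimising squared error for a one-dimensional polynomial standing in for the diurnal/spherical expansion. Population size, operators, and rates below are invented for illustration, not the paper's GA:

```python
import random

def sq_error(c, xs, ys):
    """Squared error of the polynomial y = sum_k c_k * x**k on the data."""
    return sum((y - sum(ck * x**k for k, ck in enumerate(c))) ** 2
               for x, y in zip(xs, ys))

def fit_ga(xs, ys, n_coef=2, pop=40, gens=150, seed=1):
    """Toy GA: keep the best quarter (elitism), breed children by
    averaging two elite parents (crossover) and perturbing one
    coefficient (mutation)."""
    rng = random.Random(seed)
    popn = [[rng.uniform(-2.0, 2.0) for _ in range(n_coef)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: sq_error(c, xs, ys))
        next_gen = popn[:pop // 4]                           # selection
        while len(next_gen) < pop:
            a, b = rng.sample(popn[:pop // 4], 2)
            child = [(u + v) / 2.0 for u, v in zip(a, b)]    # crossover
            child[rng.randrange(n_coef)] += rng.gauss(0, 0.1)  # mutation
            next_gen.append(child)
        popn = next_gen
    return min(popn, key=lambda c: sq_error(c, xs, ys))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]      # generated from y = 1 + 2x
best = fit_ga(xs, ys)
```

On this toy data the GA should recover coefficients near (1, 2); the real application does the same over the CCIR expansion, with ionosonde measurements as the data.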

  14. 80-kVp CT Using Iterative Reconstruction in Image Space Algorithm for the Detection of Hypervascular Hepatocellular Carcinoma: Phantom and Initial Clinical Experience

    PubMed Central

    Hur, Saebeom; Kim, Soo Jin; Park, Ji Hoon; Han, Joon Koo; Choi, Byung Ihn

    2012-01-01

Objective To investigate whether low-tube-voltage (80-kVp), intermediate-tube-current (340-mAs) MDCT using the Iterative Reconstruction in Image Space (IRIS) algorithm improves lesion-to-liver contrast at reduced radiation dosage while maintaining acceptable image noise in the detection of hepatocellular carcinoma (HCC) in thin (mean body mass index, 24 ± 0.4 kg/m2) adults. Subjects and Methods A phantom simulating the liver with HCC was scanned at 50-400 mAs for 80, 100, 120 and 140-kVp. In addition, fifty patients with HCC who underwent multiphasic liver CT using dual-energy (80-kVp and 140-kVp) arterial scans were enrolled. Virtual 120-kVp scans (protocol A) and 80-kVp scans (protocol B) of the late arterial phase were reconstructed with filtered back-projection (FBP), while corresponding 80-kVp scans were reconstructed with IRIS (protocol C). Contrast-to-noise ratios (CNR) of HCCs and abdominal organs were assessed quantitatively, whereas lesion conspicuity, image noise, and overall image quality were assessed qualitatively. Results IRIS effectively reduced image noise and yielded 29% higher CNR than FBP at equivalent tube voltage and current in the phantom study. In the quantitative patient study, protocol C improved CNR by 51% and 172% relative to protocols A and B (p < 0.001), respectively, at equivalent radiation dosage. In the qualitative study, protocol C obtained the highest score for lesion conspicuity, albeit with an inferior score to protocol A for overall image quality (p < 0.001). Mean effective dose was 2.63-mSv with protocol A and 1.12-mSv with protocols B and C. Conclusion CT using low tube voltage, intermediate tube current and IRIS helps improve the lesion-to-liver CNR of HCC in thin adults during the arterial phase at a lower radiation dose when compared with the standard technique using 120-kVp and FBP. PMID:22438682

  15. Automated facial acne assessment from smartphone images

    NASA Astrophysics Data System (ADS)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

A smartphone mobile medical application is presented that analyzes the health of facial skin using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  16. A Neural Network based Early Earthquake Warning model in the California region

    NASA Astrophysics Data System (ADS)

    Xiao, H.; MacAyeal, D. R.

    2016-12-01

Early earthquake warning systems can reduce loss of life and other economic impacts resulting from natural disasters. Current systems could be further enhanced by neural network methods. A 3-layer neural network model combined with an onsite method was deployed in this paper to improve the recognition time and detection time for large-scale earthquakes. The 3-layer neural network early earthquake warning model adopted a feature-vector design for sample events occurring within a 150 km radius of the epicenters. The dataset used in this paper contained both destructive events and small-scale events. All the data were extracted from the IRIS database to properly train the model. In the training process, the backpropagation algorithm was used to adjust the weight matrices and bias matrices during each iteration. The information in all three channels of the seismometers served as the input to this model. In the designed tests, the model classified the scale of approximately 90 percent of the events correctly, and the early detection could provide informative evidence for public authorities to make further decisions. This indicates that neural network models have the potential to strengthen current early warning systems, since the onsite method may greatly reduce response time and save more lives in such disasters.
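The described pipeline (a 3-layer network trained by backpropagation, adjusting weight and bias matrices each iteration) can be illustrated on a toy two-class problem. The architecture sizes, learning rate, and XOR data below are stand-ins, not the paper's seismic features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem (XOR) standing in for "large event vs. small
# event"; the real model would take features from three seismometer channels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # hidden -> output

mse_start = None
for step in range(4000):                    # backpropagation loop
    h = sigmoid(X @ W1 + b1)                # hidden activations
    out = sigmoid(h @ W2 + b2)              # network output
    if step == 0:
        mse_start = float(np.mean((out - y) ** 2))
    d_out = (out - y) * out * (1 - out)     # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)      # hidden-layer delta
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

mse_end = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))
```

Monitoring the mean squared error across iterations, as done here, is the usual way to check that the weight and bias updates are actually converging.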

  17. Object Recognition and Localization: The Role of Tactile Sensors

    PubMed Central

    Aggarwal, Achint; Kirchner, Frank

    2014-01-01

    Tactile sensors, because of their intrinsic insensitivity to lighting conditions and water turbidity, provide promising opportunities for augmenting the capabilities of vision sensors in applications involving object recognition and localization. This paper presents two approaches for haptic object recognition and localization for ground and underwater environments. The first approach called Batch Ransac and Iterative Closest Point augmented Particle Filter (BRICPPF) is based on an innovative combination of particle filters, Iterative-Closest-Point algorithm, and a feature-based Random Sampling and Consensus (RANSAC) algorithm for database matching. It can handle a large database of 3D-objects of complex shapes and performs a complete six-degree-of-freedom localization of static objects. The algorithms are validated by experimentation in ground and underwater environments using real hardware. To our knowledge this is the first instance of haptic object recognition and localization in underwater environments. The second approach is biologically inspired, and provides a close integration between exploration and recognition. An edge following exploration strategy is developed that receives feedback from the current state of recognition. A recognition by parts approach is developed which uses the BRICPPF for object sub-part recognition. Object exploration is either directed to explore a part until it is successfully recognized, or is directed towards new parts to endorse the current recognition belief. This approach is validated by simulation experiments. PMID:24553087

  18. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors

    PubMed Central

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to obtain the recognition result quickly. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  19. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

In order to enhance the robustness and accelerate the recognition speed of star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the space geometry information of the observed stars is used to form the one-dimensional vector pattern of the observed star. The one-dimensional vector pattern of the same observed star remains unchanged when the stellar image rotates, so the problem of star identification is simplified to the comparison of two feature vectors. The one-dimensional vector pattern is adopted to build the feature vector of the star pattern, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern make it possible to obtain the recognition result quickly. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition accuracy and robustness of the proposed algorithm are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.

  20. Food Quality and Phytoplankton Community Composition in San Francisco Bay using Imaging Spectroscopy Data from the California HyspIRI Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Palacios, S. L.; Peacock, M. B.; Golini, A. N.; Cloern, J. E.; Senn, D. B.; Guild, L. S.; Kudela, R. M.

    2016-12-01

    The San Francisco Bay (SFB) is the largest estuary on the west coast of the United States. It is an important transition zone between marine, freshwater, and inland terrestrial watersheds. The SFB is an important region for the cycling of nutrients and pollutants and it supports nurseries of ecologically and commercially important fisheries, including some threatened species. Phytoplankton community structure influences food web dynamics, and the taxonomy of the phytoplankton may be more important in determining primary "food quality" than environmental factors. As such, estimating food quality from phytoplankton community composition can be a robust tool to understand trophic transfer of energy. Recent work explores phytoplankton "food quality" in SFB through the use of microscopy and phytoplankton chemotaxonomy to evaluate how changes in phytoplankton composition may have influenced the recent trophic collapse of pelagic fishes in the northern part of the SFB. The objective of this study is to determine if the approach can also be applied to imaging spectroscopy data in order to quantify phytoplankton "food quality" from space. Imaging spectroscopy data of SFB from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) was collected during the Hyperspectral Infrared (HyspIRI) Airborne Campaign in California (2013 - 2015) and used in this study. Estimates of ocean chlorophyll and phytoplankton community structure were determined using standard ocean chlorophyll algorithms and the PHYtoplankton Detection with Optics (PHYDOTax) algorithms. These were validated using in situ observations of phytoplankton composition using microscopic cell counts and phytoplankton chemotaxonomy from the US Geological Survey's ship surveys of the SFB. The findings from this study may inform the use of future high spectral resolution satellite sensors with the spatial resolution appropriate for coastal systems (e.g., HyspIRI) to assess "food quality" from space.

  1. Information Product Development: Data product life cycle links engineering, science, and applications

    NASA Astrophysics Data System (ADS)

    Stavros, E. N.; Owen, S. E.

    2016-12-01

Information products are assimilated and used to: a) conduct scientific research and b) provide decision support for management and policy. For example, aboveground biomass (i.e. an information product) can be integrated into Earth system models to test hypotheses about the changing world, or used to inform decision-making with respect to natural resource management and policy. Production and dissemination of an information product is referred to as the data product life cycle, which includes: 1) identifying needed information from decision-makers and researchers, 2) engineering an instrument and collecting the raw physical measurements (e.g., number of photons returned), 3) the scientific algorithm(s) for processing the data into an observable (e.g., number of dying trees), and 4) the integration and utilization of those observables by researchers and decision-makers. In this talk, I will discuss the data product life cycle in detail and provide examples from the pre-Hyperspectral Infrared Imager (HyspIRI) airborne campaign and the upcoming NASA-ISRO Synthetic Aperture Radar (NISAR) mission. Examples will focus on information products related to terrestrial ecosystems and natural resource management and will demonstrate that the key to providing information products for advancing scientific understanding and informing decision-makers is the interdisciplinary integration of science, engineering, and applied science, noting that applied science defines the wider impact and adoption of scientific principles by the broader community. As pre-HyspIRI airborne data is for research and development and NISAR is not yet launched, examples will include current plans for developing exemplar data products (from pre-HyspIRI) and the mission Applications Plan (for NISAR). Copyright 2016 California Institute of Technology. All Rights Reserved. We acknowledge support of the US Government, NASA, the Earth Science Division and Terrestrial Ecology program.

  2. Design and performance evaluation of a high resolution IRI-microPET preclinical scanner

    NASA Astrophysics Data System (ADS)

    Islami rad, S. Z.; Peyvandi, R. Gholipour; lehdarboni, M. Askari; Ghafari, A. A.

    2015-05-01

A PET scanner for small animals, IRI-microPET, was designed and built at the NSTRI. The scanner is made of four detectors positioned on a rotating gantry at a distance of 50 mm from the center. Each detector consists of a 10×10 matrix of 2×2×10 mm3 crystals directly coupled to a PS-PMT. A position encoding circuit for the specific PS-PMT was designed, built and tested with a PD-MFS-2MS/s-8/14 data acquisition board. After implementing reconstruction algorithms (FBP, MLEM and SART) on sinograms, image quality and system performance were evaluated using energy resolution, timing resolution, spatial resolution, scatter fraction, sensitivity, RMS contrast and SNR parameters. The energy spectra were obtained for the crystals with an energy window of 300-700 keV. The energy resolution at 511 keV, averaged over all modules, detectors, and crystals, was 23.5%. A timing resolution of 2.4 ns FWHM was measured from the coincidence timing spectrum with LYSO crystals. The radial and tangential resolutions for 18F (1.15-mm inner diameter) at the center of the field of view were 1.81 mm and 1.90 mm, respectively. At a radial offset of 5 mm, the FWHM values were 1.96 and 2.06 mm. The system scatter fraction was 7.1% for the mouse phantom. The sensitivity was measured for different energy windows, leading to a sensitivity of 1.74% at the center of the FOV. Image quality was also evaluated using RMS contrast and SNR, and the results show that images reconstructed by the MLEM algorithm have the best RMS contrast and SNR. The IRI-microPET presents high image resolution, low scatter fraction and improved SNR for animal studies.
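Of the three reconstruction algorithms mentioned, MLEM has a particularly compact form: each iteration multiplies the current image estimate by the back-projected ratio of measured to predicted counts. A sketch on an invented 3-bin, 2-pixel system (not the scanner's actual system matrix):

```python
import numpy as np

def mlem(A, y, n_iter=300):
    """Maximum-Likelihood Expectation-Maximisation reconstruction.
    A : (n_bins, n_pixels) system matrix, y : measured counts.
    Illustrative of the MLEM option named above, not the scanner's code."""
    x = np.ones(A.shape[1])                    # flat initial image
    sens = A.sum(axis=0)                       # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        ratio = np.where(proj > 0, y / np.where(proj > 0, proj, 1.0), 0.0)
        x *= (A.T @ ratio) / np.where(sens > 0, sens, 1.0)  # multiplicative update
    return x

# Toy system: 2 pixels viewed by 3 detector bins.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 2.0])
y = A @ x_true                                 # noiseless measurements
x_hat = mlem(A, y)
```

With noiseless, consistent data the iterates converge to the true activity; with real counts the same update maximises the Poisson log-likelihood, which is why MLEM tends to give the contrast and SNR behaviour reported above.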

  3. Theoretical Aspects of the Patterns Recognition Statistical Theory Used for Developing the Diagnosis Algorithms for Complicated Technical Systems

    NASA Astrophysics Data System (ADS)

    Obozov, A. A.; Serpik, I. N.; Mihalchenko, G. S.; Fedyaeva, G. A.

    2017-01-01

In this article, the problem of applying pattern recognition (a relatively young area of engineering cybernetics) to the analysis of complicated technical systems is examined. It is shown that the application of a statistical approach to hard-to-distinguish situations could be the most effective. The different recognition algorithms are based on the Bayes approach, which estimates the posterior probability of a certain event and the assumed error. Application of the statistical approach to pattern recognition makes it possible to solve the problem of technical diagnosis of complicated systems, particularly large, high-powered marine diesel engines.
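The Bayes approach referred to above reduces to computing posterior probabilities of each technical state from prior base rates and observation likelihoods. A minimal sketch with invented diagnostic numbers:

```python
def posterior(priors, likelihoods):
    """Bayes' rule: P(state | observation) is proportional to
    P(observation | state) * P(state), normalised over all states."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Two diagnostic states for an engine: "normal" and "faulty".
priors = [0.9, 0.1]           # assumed base rates of each state
likelihoods = [0.2, 0.8]      # P(observed vibration pattern | state)
post = posterior(priors, likelihoods)
diagnosis = max(range(2), key=lambda i: post[i])
```

Note how the prior dominates here: even though the observed pattern is four times likelier under the fault state, the diagnosis stays "normal" because faults are assumed rare, exactly the trade-off a statistical diagnosis scheme must weigh against the cost of a missed fault.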

  4. Automatic speech recognition research at NASA-Ames Research Center

    NASA Technical Reports Server (NTRS)

    Coler, Clayton R.; Plummer, Robert P.; Huff, Edward M.; Hitchcock, Myron H.

    1977-01-01

A trainable acoustic pattern recognizer manufactured by Scope Electronics is presented. The voice command system (VCS) encodes speech by sampling 16 bandpass filters with center frequencies in the range from 200 to 5000 Hz. Variations in speaking rate are compensated for by a compression algorithm that subdivides each utterance into eight subintervals in such a way that the amount of spectral change within each subinterval is the same. The recorded filter values within each subinterval are then reduced to a 15-bit representation, giving a 120-bit encoding for each utterance. The VCS incorporates a simple recognition algorithm that utilizes five training samples of each word in a vocabulary of up to 24 words. The recognition rate of approximately 85 percent correct for untrained speakers and 94 percent correct for trained speakers was not considered adequate for flight-systems use. Therefore, the built-in recognition algorithm was disabled, and the VCS was modified to transmit 120-bit encodings to an external computer for recognition.
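The rate-compensation step can be sketched as follows: accumulate frame-to-frame spectral change and cut the utterance where the cumulative change crosses equal fractions of the total. The details (absolute-difference change measure, boundary handling) are assumptions, since the abstract does not specify them:

```python
def segment_equal_change(frames, n_seg=8):
    """Split a sequence of spectral frames (lists of filter values) into
    n_seg subintervals so that each contains the same amount of
    cumulative spectral change, mimicking the VCS time-normalisation."""
    # Spectral change between consecutive frames (sum of absolute diffs).
    change = [sum(abs(a - b) for a, b in zip(f1, f2))
              for f1, f2 in zip(frames, frames[1:])]
    cum = [0.0]
    for c in change:
        cum.append(cum[-1] + c)          # cumulative change up to each frame
    total = cum[-1]
    bounds = [0]
    for k in range(1, n_seg):
        target = total * k / n_seg       # equal fractions of total change
        bounds.append(next(i for i, v in enumerate(cum) if v >= target))
    bounds.append(len(frames))
    return [frames[a:b] for a, b in zip(bounds, bounds[1:])]

frames = [[float(i)] for i in range(17)]   # toy "spectra" with uniform change
segments = segment_equal_change(frames, n_seg=8)
```

Each of the eight subintervals then covers the same amount of spectral change, so fast and slow renditions of the same word line up before the per-subinterval 15-bit encoding.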

  5. Mandarin Chinese Tone Identification in Cochlear Implants: Predictions from Acoustic Models

    PubMed Central

    Morton, Kenneth D.; Torrione, Peter A.; Throckmorton, Chandra S.; Collins, Leslie M.

    2015-01-01

    It has been established that current cochlear implants do not supply adequate spectral information for perception of tonal languages. Comprehension of a tonal language, such as Mandarin Chinese, requires recognition of lexical tones. New strategies of cochlear stimulation such as variable stimulation rate and current steering may provide the means of delivering more spectral information and thus may provide the auditory fine structure required for tone recognition. Several cochlear implant signal processing strategies are examined in this study, the continuous interleaved sampling (CIS) algorithm, the frequency amplitude modulation encoding (FAME) algorithm, and the multiple carrier frequency algorithm (MCFA). These strategies provide different types and amounts of spectral information. Pattern recognition techniques can be applied to data from Mandarin Chinese tone recognition tasks using acoustic models as a means of testing the abilities of these algorithms to transmit the changes in fundamental frequency indicative of the four lexical tones. The ability of processed Mandarin Chinese tones to be correctly classified may predict trends in the effectiveness of different signal processing algorithms in cochlear implants. The proposed techniques can predict trends in performance of the signal processing techniques in quiet conditions but fail to do so in noise. PMID:18706497

  6. HWDA: A coherence recognition and resolution algorithm for hybrid web data aggregation

    NASA Astrophysics Data System (ADS)

    Guo, Shuhang; Wang, Jian; Wang, Tong

    2017-09-01

    To address the problem of object conflict recognition and resolution in hybrid distributed data stream aggregation, a distributed data stream object coherence technology is proposed. First, a framework for object coherence conflict recognition and resolution, named HWDA, is defined. Second, an object coherence recognition technique is proposed based on formal-language description logic and the hierarchical dependency relationships between logic rules. Third, a conflict traversal recognition algorithm is proposed based on the defined dependency graph. Next, a conflict resolution technique based on resolution pattern matching is presented, covering the definitions of the three conflict types, the conflict resolution matching patterns, and the arbitration resolution method. Finally, experiments on two kinds of web test data sets validate the effectiveness of the HWDA conflict recognition and resolution technology.

  7. Online graphic symbol recognition using neural network and ARG matching

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Li, Changhua; Xie, Weixing

    2001-09-01

    This paper proposes a novel method for on-line recognition of line-based graphic symbols. The input strokes are often warped into cursive forms by varied drawing styles, which makes them difficult to classify. To deal with this, an ART-2 neural network is used to classify the input strokes; it offers a high recognition rate, short recognition time, and forms classes in a self-organized manner. Symbol recognition is achieved by an Attributed Relational Graph (ARG) matching algorithm. The ARG is very efficient for representing complex objects, but its computation cost is very high. To overcome this, we suggest a fast graph matching algorithm that uses symbol structure information. The experimental results show that the proposed method is effective for recognizing symbols with hierarchical structure.

  8. Infrared photothermal imaging spectroscopy for detection of trace explosives on surfaces.

    PubMed

    Kendziora, Christopher A; Furstenberg, Robert; Papantonakis, Michael; Nguyen, Viet; Byers, Jeff; Andrew McGill, R

    2015-11-01

    We are developing a technique for the standoff detection of trace explosives on relevant substrate surfaces using photothermal infrared (IR) imaging spectroscopy (PT-IRIS). This approach leverages one or more compact IR quantum cascade lasers, which are tuned to strong absorption bands in the analytes and directed to illuminate an area on a surface of interest. An IR focal plane array is used to image the surface and detect increases in thermal emission upon laser illumination. The PT-IRIS signal is processed as a hyperspectral image cube composed of spatial, spectral, and temporal dimensions as vectors within a detection algorithm. The ability to detect trace analytes at standoff on relevant substrates is critical for security applications but is complicated by the optical and thermal analyte/substrate interactions. This manuscript describes a series of PT-IRIS experimental results and analysis for traces of RDX, TNT, ammonium nitrate, and sucrose on steel, polyethylene, glass, and painted steel panels. We demonstrate detection at surface mass loadings comparable to fingerprint depositions (approximately 10 μg/cm² to 100 μg/cm²) from an area corresponding to a single pixel within the thermal image.

  9. A New Method of Facial Expression Recognition Based on SPE Plus SVM

    NASA Astrophysics Data System (ADS)

    Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei

    A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for dimensionality reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER, and better performance is obtained compared with traditional algorithms such as PCA and LDA. The results further prove the effectiveness of the proposed algorithm.

  10. Container-code recognition system based on computer vision and deep neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Tianjian; Jiang, Li; Liang, Xiaoyao

    2018-04-01

    An automatic container-code recognition system has become a crucial requirement for the ship transportation industry in recent years. In this paper, an automatic container-code recognition system based on computer vision and deep neural networks is proposed. The system consists of two modules: a detection module and a recognition module. The detection module applies both computer-vision-based and neural-network-based algorithms, and generates a better detection result by combining the two to avoid the drawbacks of either method. The combined detection results are also collected for online training of the neural networks. The recognition module exploits both character segmentation and end-to-end recognition, and outputs the recognition result that passes verification. When the recognition module generates a false recognition, the result is corrected and collected for online training of the end-to-end recognition sub-module. By combining several algorithms, the system is able to deal with more situations, and the online training mechanism improves the performance of the neural networks at runtime. The proposed system achieves 93% overall recognition accuracy.

  11. Automatic Classification of Station Quality by Image Based Pattern Recognition of Ppsd Plots

    NASA Astrophysics Data System (ADS)

    Weber, B.; Herrnkind, S.

    2017-12-01

    The number of seismic stations is growing, and it has become common practice to share station waveform data in real time with the main data centers such as IRIS, GEOFON, ORFEUS and RESIF. This makes analyzing station performance increasingly important for automatic real-time processing and station selection. The value of a station depends on several factors, such as the quality and quantity of the data, the location of the site, the general station density in the surrounding area, and the type of application it can be used for. The approach described by McNamara and Boaz (2006) became standard in the last decade. It uses a probability density function (PDF) to display the distribution of seismic power spectral density (PSD). The low noise model (LNM) and high noise model (HNM) introduced by Peterson (1993) are also displayed in the PPSD plots introduced by McNamara and Boaz, allowing an estimation of station quality. Here we describe how we established an automatic station quality classification module using image-based pattern recognition on PPSD plots. The plots were split into 4 period bands: short-period characteristics (0.1-0.8 s), body-wave characteristics (0.8-5 s), microseismic characteristics (5-12 s) and long-period characteristics (12-100 s). The module sqeval connects to a SeedLink server, checks available stations, and requests PPSD plots through the Mustang service from IRIS, from PQLX/SQLX, or from GIS (gempa Image Server), a module that generates different kinds of images such as trace plots, map plots, helicorder plots or PPSD plots. It compares the image-based quality patterns for the different period bands with the retrieved PPSD plot. The quality of a station is divided into 5 classes for each of the 4 bands: classes A, B, C and D denote regular quality between the LNM and HNM, while the fifth class represents out-of-order stations with gain problems, missing data, etc.
    Across all period bands, about 100 different patterns are required to classify most of the stations available on the IRIS server. The results are written to a file, and stations can be filtered by quality; AAAA represents the best quality in all 4 bands. A differentiation between instrument types, such as broadband and short-period stations, is also possible. Regular checks using the IRIS SeedLink and Mustang services allow users to be informed about new stations with a specific quality.
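    The four-band split used for the per-band quality letters can be expressed as a simple lookup (band edges taken from the abstract; the function name is illustrative):

```python
def period_band(period_s):
    """Assign a PSD period (in seconds) to one of the four PPSD
    quality bands described in the abstract."""
    if 0.1 <= period_s < 0.8:
        return "short-period"
    if 0.8 <= period_s < 5:
        return "body-wave"
    if 5 <= period_s < 12:
        return "microseismic"
    if 12 <= period_s <= 100:
        return "long-period"
    return "out-of-range"
```

    A four-letter code such as AAAA then simply concatenates the class assigned in each of these bands.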

  12. Improving Protein Fold Recognition by Deep Learning Networks

    NASA Astrophysics Data System (ADS)

    Jo, Taeho; Hou, Jie; Eickholt, Jesse; Cheng, Jianlin

    2015-12-01

    For accurate recognition of protein folds, a deep learning network method (DN-Fold) was developed to predict whether a given query-template protein pair belongs to the same structural fold. The input stemmed from the protein sequences and structural features extracted from the protein pair. We evaluated the performance of DN-Fold along with 18 different methods on Lindahl's benchmark dataset and on a large benchmark set extracted from SCOP 1.75, consisting of about one million protein pairs, at three different levels of fold recognition (i.e., protein family, superfamily, and fold) depending on the evolutionary distance between protein sequences. The correct recognition rate of ensembled DN-Fold for Top 1 predictions is 84.5%, 61.5%, and 33.6%, and for Top 5 predictions is 91.2%, 76.5%, and 60.7%, at the family, superfamily, and fold levels, respectively. We also evaluated the performance of single DN-Fold (DN-FoldS), which showed comparable results at the family and superfamily levels. Finally, we extended the binary classification problem of fold recognition to a real-valued regression task, which also shows promising performance. DN-Fold is freely available through a web server at http://iris.rnet.missouri.edu/dnfold.

  13. South American foF2 database using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gularte, Erika; Bilitza, Dieter; Carpintero, Daniel; Jaen, Juliana

    2016-07-01

    We present the first step towards a new database of the ionospheric parameter foF2 for the South American region. The foF2 parameter, being the maximum of the ionospheric electron density profile and its main sculptor, is of great interest not only in atmospheric studies but also in the realm of radio propagation. Due to its importance, its large variability and the difficulty of modeling it in time and space, it has been the subject of intense study for decades. The current database, used by the IRI (International Reference Ionosphere) model and based on Fourier expansions, was built in the 1960s from the ionosondes available at that time; it is therefore still short of South American data. The main goal of this work is to upgrade the database by incorporating the data now compiled by the RAPEAS (Red Argentina para el Estudio de la Atmósfera Superior, Argentine Network for the Study of the Upper Atmosphere) network. We also developed an algorithm to study the foF2 variability, based on the modern technique of genetic algorithms, which has been applied successfully in other disciplines. One of the main advantages of this technique is its ability to work with many variables and with unfavorable samples. The results are compared with the IRI database, and improvements to the latter are suggested. Finally, it is important to note that the new database is designed so that newly available data can be incorporated easily.
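    The abstract does not give the details of its genetic algorithm, so the following is only a generic sketch of the scheme (real-valued chromosomes, elitist selection, one-point crossover, Gaussian mutation) that such a variability fit could use; all names and parameters are illustrative.

```python
import random

def genetic_minimize(fitness, bounds, pop_size=30, generations=60,
                     mutation=0.1, seed=0):
    """Minimal elitist genetic algorithm minimizing `fitness` over
    real-valued chromosomes constrained to per-gene (lo, hi) bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness)[: pop_size // 2]  # keep better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0  # one-point crossover
            child = a[:cut] + b[cut:]
            # Gaussian mutation, clipped back into bounds.
            child = [min(max(g + rng.gauss(0, mutation), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

    In the paper's setting, `fitness` would measure the misfit between a candidate foF2 model and the ionosonde observations.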

  14. Membership-degree preserving discriminant analysis with applications to face recognition.

    PubMed

    Yang, Zhangjing; Liu, Chuancai; Huang, Pu; Qian, Jianjun

    2013-01-01

    In pattern recognition, feature extraction techniques have been widely employed to reduce the dimensionality of high-dimensional data. In this paper, we propose a novel feature extraction algorithm called membership-degree preserving discriminant analysis (MPDA), based on the Fisher criterion and fuzzy set theory, for face recognition. In the proposed algorithm, the membership degree of each sample to particular classes is first calculated by the fuzzy k-nearest neighbor (FKNN) algorithm to characterize the similarity between each sample and the class centers, and the membership degree is then incorporated into the definitions of the between-class scatter and the within-class scatter. The feature extraction criterion of maximizing the ratio of the between-class scatter to the within-class scatter is applied. Experimental results on the ORL, Yale, and FERET face databases demonstrate the effectiveness of the proposed algorithm.
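    The FKNN membership-degree step can be sketched as below. This follows the common Keller-style fuzzy k-NN weighting (inverse-distance class votes); the paper's exact formulation may differ.

```python
import math
from collections import Counter

def fknn_membership(x, samples, labels, k=5, m=2.0):
    """Fuzzy k-NN membership degrees: weight each of the k nearest
    neighbors' class votes by inverse distance (fuzzifier m > 1),
    then normalize so the degrees sum to 1."""
    dists = sorted((math.dist(x, s), c) for s, c in zip(samples, labels))[:k]
    weights = Counter()
    for d, c in dists:
        weights[c] += 1.0 / (d ** (2.0 / (m - 1.0)) + 1e-12)
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}
```

    In MPDA these per-class degrees would then weight each sample's contribution to the between-class and within-class scatter matrices.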

  15. The program complex for vocal recognition

    NASA Astrophysics Data System (ADS)

    Konev, Anton; Kostyuchenko, Evgeny; Yakimuk, Alexey

    2017-01-01

    This article discusses the possibility of applying a pitch-frequency determination algorithm to note recognition problems. A preliminary study of analogous programs offering a "music recognition" function was carried out. A software package based on the pitch-frequency calculation algorithm was implemented and tested. It was shown that the algorithm can recognize the notes in a user's vocal performance. The sound source can be a single musical instrument, a set of musical instruments, or a human voice humming a tune. The input file is initially presented in the .wav format or is recorded in this format from a microphone. Processing is performed by sequentially determining the pitch frequency and converting its values to notes. Based on the test results, modifications to the algorithms used in the complex were planned.
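    The pitch-frequency-to-note conversion step can be illustrated with the standard equal-temperament mapping (A4 = 440 Hz, MIDI note 69); the paper's own conversion is not specified, so this is only the textbook formula.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def pitch_to_note(freq_hz):
    """Map a pitch frequency (Hz) to the nearest equal-tempered note
    name, using A4 = 440 Hz (MIDI note 69) as the reference."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1
    return f"{name}{octave}"
```

    For example, a detected pitch of 261.63 Hz maps to middle C (C4).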

  16. Research on gait-based human identification

    NASA Astrophysics Data System (ADS)

    Li, Youguo

    Gait recognition refers to the automatic identification of individuals based on their style of walking. This paper proposes a gait recognition method based on a Continuous Hidden Markov Model with a mixture of Gaussians (G-CHMM). First, we initialize a Gaussian mixture model for the training image sequence with the K-means algorithm, then train the HMM parameters using the Baum-Welch algorithm. The gait feature sequences are trained to obtain a continuous HMM for every person, so the 7 key frames and the obtained HMM represent each person's gait sequence. Finally, recognition is achieved with the Forward algorithm. Experiments on the CASIA gait databases achieve a comparatively high correct identification rate and comparatively strong robustness to variations in body angle.
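    The Forward-algorithm scoring used for recognition can be sketched as follows (a generic scaled forward pass; the per-state observation likelihoods would come from each person's trained Gaussian mixtures, which are abstracted away here as a precomputed table):

```python
import math

def forward_loglik(obs_lik, trans, init):
    """Forward algorithm: log-likelihood of an observation sequence under
    an HMM. obs_lik[t][j] = p(o_t | state j); trans[i][j] and init[j] are
    the transition and initial-state probabilities. Scaling at each step
    avoids numerical underflow on long sequences."""
    n = len(init)
    alpha = [init[j] * obs_lik[0][j] for j in range(n)]
    log_lik = 0.0
    for t in range(1, len(obs_lik)):
        scale = sum(alpha)
        log_lik += math.log(scale)
        alpha = [a / scale for a in alpha]
        alpha = [obs_lik[t][j] * sum(alpha[i] * trans[i][j] for i in range(n))
                 for j in range(n)]
    return log_lik + math.log(sum(alpha))
```

    At recognition time, the sequence is scored against every person's HMM and the highest log-likelihood wins.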

  17. Artificial intelligence tools for pattern recognition

    NASA Astrophysics Data System (ADS)

    Acevedo, Elena; Acevedo, Antonio; Felipe, Federico; Avilés, Pedro

    2017-06-01

    In this work, we present a pattern recognition system that combines the problem-solving power of genetic algorithms with the efficiency of morphological associative memories. We use a set of 48 tire prints divided into 8 brands of tires. The images have dimensions of 200 x 200 pixels. We applied the Hough transform to obtain lines as the main features; the number of lines obtained is 449. The genetic algorithm reduces the number of features to ten suitable lines, which yield 100% recognition. Morphological associative memories were used as the evaluation function. The selection algorithms were tournament and roulette-wheel selection. For reproduction, we applied one-point, two-point and uniform crossover.

  18. Exercise recognition for Kinect-based telerehabilitation.

    PubMed

    Antón, D; Goñi, A; Illarramendi, A

    2015-01-01

    An aging population and people's higher survival of diseases and traumas that leave physical consequences are challenging aspects of efficient health management. This is why telerehabilitation systems are being developed: to allow monitoring and support of physiotherapy sessions at home, which could reduce healthcare costs while also improving the quality of life of the users. Our goal is the development of a Kinect-based algorithm that provides very accurate real-time monitoring of physical rehabilitation exercises and that also provides a friendly interface oriented both to users and physiotherapists. The two main constituents of our algorithm are the posture classification method and the exercise recognition method. The exercises consist of series of movements. Each movement is composed of an initial posture, a final posture and the angular trajectories of the limbs involved in the movement. The algorithm was designed and tested with datasets of real movements performed by volunteers. We also explain in the paper how we obtained the optimal trade-off values for posture and trajectory recognition. Two relevant aspects of the algorithm were evaluated in our tests: classification accuracy and real-time data processing. We achieved 91.9% accuracy in posture classification and 93.75% accuracy in trajectory recognition. We also checked whether the algorithm was able to process the data in real time, and found that it could process more than 20,000 postures per second and all the required trajectory data series in real time, which in practice guarantees no perceptible delays. Later on, we carried out two clinical trials with real patients who suffered from shoulder disorders, obtaining an exercise monitoring accuracy of 95.16%. We present an exercise recognition algorithm that handles the data provided by Kinect efficiently. The algorithm has been validated in a real scenario where we have verified its suitability. Moreover, we received positive feedback from both the users and the physiotherapists who took part in the tests.

  19. SU-F-T-20: Novel Catheter Lumen Recognition Algorithm for Rapid Digitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dise, J; McDonald, D; Ashenafi, M

    Purpose: Manual catheter recognition remains a time-consuming aspect of high-dose-rate brachytherapy (HDR) treatment planning. In this work, a novel catheter lumen recognition algorithm was created for accurate and rapid digitization. Methods: MatLab v8.5 was used to create the catheter recognition algorithm. Initially, the algorithm searches the patient CT dataset using an intensity-based k-means filter designed to locate catheters. Once the catheters have been located, seed points are manually selected to initialize digitization of each catheter. From each seed point, the algorithm searches locally in order to automatically digitize the remaining catheter. This digitization is accomplished by finding pixels with similar image curvature and divergence parameters compared to the seed pixel. Newly digitized pixels are treated as new seed positions, and Hessian image analysis is used to direct the algorithm toward neighboring catheter pixels, and to make the algorithm insensitive to adjacent catheters that are unresolvable on CT, air pockets, and high-Z artifacts. The algorithm was tested using 11 HDR treatment plans, including the Syed template, tandem and ovoid applicator, and multi-catheter lung brachytherapy. Digitization error was calculated by comparing manually determined catheter positions to those determined by the algorithm. Results: The digitization error was 0.23 mm ± 0.14 mm axially and 0.62 mm ± 0.13 mm longitudinally at the tip. The time of digitization, following initial seed placement, was less than 1 second per catheter. The maximum total time required to digitize all tested applicators was 4 minutes (Syed template with 15 needles). Conclusion: This algorithm successfully digitizes HDR catheters for a variety of applicators with or without CT markers. The minimal axial error demonstrates the accuracy of the algorithm, and its insensitivity to image artifacts and challenging catheter positioning.
    Future work to automatically place initial seed positions would improve the algorithm speed.

  20. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    NASA Astrophysics Data System (ADS)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to extract the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. In the training stage, the system classifies all training expressions of the 6 emotions into 6 classes (one for each emotion). In the recognition phase, it applies the Gabor bank to a face image, finds the fiducial points, and feeds the resulting features to the trained neural architecture to recognize the emotion.
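    A Gabor filter of the kind applied at each fiducial point can be sketched as below (the standard real-part kernel: a Gaussian envelope modulating an oriented cosine carrier; the paper's exact bank parameters are not given, so the arguments here are illustrative):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2-D Gabor kernel of odd side length `size`:
    a Gaussian envelope (std `sigma`) times a cosine carrier of the
    given wavelength, oriented at angle `theta` (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the carrier runs along theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            row.append(math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
                       * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

    A filter bank is obtained by instantiating this kernel over several orientations and wavelengths; the response magnitudes at the fiducial points then form the feature vector.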

  1. Face recognition algorithm based on Gabor wavelet and locality preserving projections

    NASA Astrophysics Data System (ADS)

    Liu, Xiaojie; Shen, Lin; Fan, Honghui

    2017-07-01

    In order to reduce the effects of illumination changes and differences in personal features on the face recognition rate, this paper presents a new face recognition algorithm based on Gabor wavelets and Locality Preserving Projections (LPP). The problem of the high dimensionality of Gabor filter banks is solved effectively, and the weakness of LPP under illumination changes is also overcome. Firstly, global image features are extracted, exploiting the good spatial locality and orientation selectivity of Gabor wavelet filters. Then the dimensionality is reduced using LPP, which preserves the local information of the image well. The experimental results show that this algorithm can effectively extract features relating to facial expression, attitude and other information. Besides, it effectively reduces the influence of illumination changes and differences in personal features, improving the face recognition rate to 99.2%.

  2. Analysis of objects in binary images. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.

    1991-01-01

    Digital image processing techniques are typically used to produce improved digital images through the application of successive enhancement techniques to a given image or to generate quantitative data about the objects within that image. In support of and to assist researchers in a wide range of disciplines, e.g., interferometry, heavy rain effects on aerodynamics, and structure recognition research, it is often desirable to count objects in an image and compute their geometric properties. Therefore, an image analysis application package, focusing on a subset of image analysis techniques used for object recognition in binary images, was developed. This report describes the techniques and algorithms utilized in three main phases of the application and are categorized as: image segmentation, object recognition, and quantitative analysis. Appendices provide supplemental formulas for the algorithms employed as well as examples and results from the various image segmentation techniques and the object recognition algorithm implemented.
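    Counting objects in a binary image, as described above, reduces to connected-component labeling. A minimal 4-connectivity sketch (illustrative only, not the report's implementation) that counts the objects and returns each one's area:

```python
def count_objects(binary):
    """Label 4-connected foreground objects in a binary image (a list of
    rows of 0/1 values) and return the pixel count (area) of each object."""
    rows, cols = len(binary), len(binary[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # Flood-fill one component with an explicit stack.
                stack, area = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas
```

    Other geometric properties (centroid, bounding box, perimeter) can be accumulated in the same flood-fill pass.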

  3. A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.

    PubMed

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is solved as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98% while the Delaunay identification method is only 94%. The identification time of this method is up to 50 ms.
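    The angular distance between star points, which the AAC method treats as path lengths, is computed in the standard way from unit direction vectors; the sketch below shows only that geometric step (in radians, from right ascension/declination), not the ant-colony search itself.

```python
import math

def angular_distance(ra1, dec1, ra2, dec2):
    """Angular distance (radians) between two stars, given right ascension
    and declination in radians, via the unit-vector dot product."""
    v1 = (math.cos(dec1) * math.cos(ra1),
          math.cos(dec1) * math.sin(ra1),
          math.sin(dec1))
    v2 = (math.cos(dec2) * math.cos(ra2),
          math.cos(dec2) * math.sin(ra2),
          math.sin(dec2))
    dot = sum(a * b for a, b in zip(v1, v2))
    # Clamp against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, dot)))
```

    In the recognition step, these pairwise distances within each circle are matched against the catalog's angular-distance database.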

  4. Automatic voice recognition using traditional and artificial neural network approaches

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1989-01-01

    The main objective of this research is to develop an algorithm for isolated-word recognition. This research is focused on digital signal analysis rather than linguistic analysis of speech. Feature extraction is carried out by applying a Linear Predictive Coding (LPC) algorithm of order 10. Continuous-word and speaker-independent recognition will be considered in a future study after this isolated-word research is accomplished. To examine the similarity between the reference and the training sets, two approaches are explored. The first implements traditional pattern recognition techniques, where a dynamic time warping algorithm is applied to align the two sets and to calculate the probability of matching by measuring the Euclidean distance between them. The second implements a backpropagation artificial neural network model with three layers as the pattern classifier. The adaptation rule implemented in this network is the generalized least mean square (LMS) rule. The first approach has been accomplished: a vocabulary of 50 words was selected and tested, and the accuracy of the algorithm was found to be around 85 percent. The second approach is in progress at the present time.
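    The dynamic time warping alignment used in the first approach can be sketched with the classic cumulative-cost recurrence over Euclidean frame distances (here over generic feature-vector sequences such as LPC frames):

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping: minimum cumulative Euclidean distance
    aligning two feature-vector sequences, allowing insertions,
    deletions, and one-to-one matches."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

    A test utterance is assigned the vocabulary word whose reference template yields the smallest DTW distance.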

  5. A fingerprint classification algorithm based on combination of local and global information

    NASA Astrophysics Data System (ADS)

    Liu, Chongjin; Fu, Xiang; Bian, Junjie; Feng, Jufu

    2011-12-01

    Fingerprint recognition is one of the most important technologies in biometric identification and has been widely applied in commercial and forensic areas. Fingerprint classification, as the fundamental procedure in fingerprint recognition, can sharply decrease the number of candidates for fingerprint matching and improve the efficiency of fingerprint recognition. Most fingerprint classification algorithms are based on the number and position of singular points. Because singular point detection commonly considers only local information, such classification algorithms are sensitive to noise. In this paper, we propose a novel fingerprint classification algorithm combining the local and global information of the fingerprint. Firstly, we use local information to detect singular points and measure their quality, considering the orientation structure and image texture in adjacent areas. Furthermore, a global orientation model is adopted to measure the reliability of the group of singular points. Finally, the local quality and global reliability are weighted to classify the fingerprint. Experiments demonstrate the accuracy and effectiveness of our algorithm, especially for poor-quality fingerprint images.

  6. Face recognition using total margin-based adaptive fuzzy support vector machines.

    PubMed

    Liu, Yi-Hung; Chen, Yen-Ting

    2007-01-01

    This paper presents a new classifier called total margin-based adaptive fuzzy support vector machines (TAF-SVM) that deals with several problems that may occur in support vector machines (SVMs) when applied to face recognition. The proposed TAF-SVM not only solves the overfitting problem resulting from outliers by fuzzifying the penalty, but also corrects the skew of the optimal separating hyperplane caused by very imbalanced data sets by using a different-cost algorithm. In addition, by introducing the total margin algorithm to replace the conventional soft margin algorithm, a lower generalization error bound can be obtained. These three functions are embodied in the traditional SVM, and the TAF-SVM is formulated for both the linear and the nonlinear case. Using two databases, the Chung Yuan Christian University (CYCU) multiview and the facial recognition technology (FERET) face databases, and using the kernel Fisher discriminant analysis (KFDA) algorithm to extract discriminating face features, experimental results show that the proposed TAF-SVM is superior to SVM in terms of face-recognition accuracy. The results also indicate that the proposed TAF-SVM achieves smaller error variances than SVM over a number of tests, so that better recognition stability can be obtained.

  7. Pattern recognition for passive polarimetric data using nonparametric classifiers

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.

    2005-08-01

    Passive polarization-based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization-based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to one of a given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of the class-conditional probability density functions (likelihoods) and the prior probabilities. A probabilistic neural network (PNN), a nonparametric method that can approximate Bayes-optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
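    The PNN decision rule described above amounts to a Parzen-window estimate of each class-conditional density followed by a maximum-a-posteriori choice. A minimal sketch (Gaussian kernels, generic feature vectors; kernel width and names are illustrative):

```python
import math

def pnn_classify(x, samples, labels, sigma=1.0, priors=None):
    """Probabilistic neural network: estimate each class-conditional
    density with a Gaussian Parzen window over that class's training
    samples, then return the class maximizing prior * likelihood."""
    classes = sorted(set(labels))
    priors = priors or {c: labels.count(c) / len(labels) for c in classes}
    scores = {}
    for c in classes:
        pts = [s for s, l in zip(samples, labels) if l == c]
        lik = sum(math.exp(-math.dist(x, p) ** 2 / (2 * sigma ** 2))
                  for p in pts) / len(pts)
        scores[c] = priors[c] * lik
    return max(scores, key=scores.get)
```

    The KNN alternative replaces the kernel density with the local density implied by the k nearest training samples.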

  8. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial gestures; it resembles factor analysis in some sense, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains. From the experimental results, it is envisaged that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
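    The wavelet step shrinks each face image before PCA sees it, which directly attacks the eigenvector-cost drawback noted above. The paper does not name the wavelet family; as an illustration, one level of a Haar-style decomposition keeping only the low-frequency approximation subband (each output pixel is the mean of a 2x2 block) quarters the pixel count fed to PCA:

```python
def haar_approximation(image):
    """One level of a Haar-style 2-D decomposition, keeping only the
    approximation subband: each output pixel is the mean of a 2x2 block,
    so the image fed to PCA has a quarter of the original pixels."""
    rows, cols = len(image) // 2 * 2, len(image[0]) // 2 * 2  # drop odd edge
    return [[(image[r][c] + image[r][c + 1]
              + image[r + 1][c] + image[r + 1][c + 1]) / 4.0
             for c in range(0, cols, 2)]
            for r in range(0, rows, 2)]
```

    Applying the function repeatedly gives deeper decomposition levels, trading spatial detail for a smaller covariance matrix in the PCA stage.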

  9. Bag-of-visual-phrases and hierarchical deep models for traffic sign detection and recognition in mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yu, Yongtao; Li, Jonathan; Wen, Chenglu; Guan, Haiyan; Luo, Huan; Wang, Cheng

    2016-03-01

    This paper presents a novel algorithm for detection and recognition of traffic signs in mobile laser scanning (MLS) data for intelligent transportation-related applications. The traffic sign detection task is accomplished based on 3-D point clouds by using bag-of-visual-phrases representations; whereas the recognition task is achieved based on 2-D images by using a Gaussian-Bernoulli deep Boltzmann machine-based hierarchical classifier. To exploit high-order feature encodings of feature regions, a deep Boltzmann machine-based feature encoder is constructed. For detecting traffic signs in 3-D point clouds, the proposed algorithm achieves an average recall, precision, quality, and F-score of 0.956, 0.946, 0.907, and 0.951, respectively, on the four selected MLS datasets. For on-image traffic sign recognition, a recognition accuracy of 97.54% is achieved by using the proposed hierarchical classifier. Comparative studies with the existing traffic sign detection and recognition methods demonstrate that our algorithm obtains promising, reliable, and high performance in both detecting traffic signs in 3-D point clouds and recognizing traffic signs on 2-D images.

  10. Indonesian Sign Language Number Recognition using SIFT Algorithm

    NASA Astrophysics Data System (ADS)

    Mahfudi, Isa; Sarosa, Moechammad; Andrie Asmara, Rosa; Azrino Gustalika, M.

    2018-04-01

    Indonesian sign language (ISL) is generally used by deaf individuals for communication. They use sign language as their primary language, which consists of two types of action: signs and finger spelling. However, not all people understand their sign language, which makes it difficult for them to communicate with normal people and contributes to their social isolation. A solution is needed that can help them interact with normal people. Much research offers a variety of methods for solving the problem of sign language recognition based on image processing. The SIFT (Scale Invariant Feature Transform) algorithm is one of the methods that can be used to identify an object. SIFT is claimed to be very resistant to scaling, rotation, illumination and noise. Using the SIFT algorithm for Indonesian sign language number recognition yields a recognition rate of 82% on a dataset of 100 images, consisting of 50 samples for training and 50 samples for testing. Changing the threshold value affects the recognition result; the best threshold value is 0.45, with a recognition rate of 94%.
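    The abstract reports that a threshold of 0.45 works best but does not spell out which quantity is thresholded; a common choice in SIFT matching is Lowe's nearest/second-nearest distance-ratio test, sketched here on toy descriptors (interpreting the paper's threshold this way is an assumption):

```python
import math

def ratio_test_matches(desc_a, desc_b, threshold=0.45):
    """Match descriptors from image A to image B, keeping a match only
    when the nearest/second-nearest distance ratio falls below
    `threshold` (Lowe's ratio test): stricter thresholds keep fewer,
    surer matches."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((math.dist(da, db), j) for j, db in enumerate(desc_b))
        (d1, j), (d2, _) = dists[0], dists[1]
        if d2 > 0 and d1 / d2 < threshold:
            matches.append((i, j))
    return matches

# Toy 2-D "descriptors" (real SIFT descriptors are 128-D).
a = [(1.0, 0.0), (0.0, 1.0)]
b = [(1.0, 0.05), (0.0, 0.9), (5.0, 5.0)]
print(ratio_test_matches(a, b))
```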

  11. Probabilistic Open Set Recognition

    NASA Astrophysics Data System (ADS)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds that existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize that the cause is weak ad hoc assumptions combined with the closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms.
Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
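    The W-SVM's EVT calibration rests on fitting a Weibull distribution to extreme scores; a minimal sketch of the final calibration step, mapping a raw score through a Weibull CDF (the shape and scale here are hypothetical, whereas the real algorithm fits them to extreme training scores):

```python
import math

def weibull_cdf(x, shape, scale):
    """Weibull CDF F(x) = 1 - exp(-(x/scale)^shape) for x >= 0; used in
    EVT-based calibration to map raw SVM scores onto bounded [0, 1]
    inclusion probabilities."""
    if x <= 0:
        return 0.0
    return 1.0 - math.exp(-((x / scale) ** shape))

# Hypothetical parameters; W-SVM fits them to the tail of the score
# distribution near the decision boundary.
shape, scale = 2.0, 1.5
for score in (0.2, 1.0, 3.0):
    print(round(weibull_cdf(score, shape, scale), 3))
```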

  12. An Effective 3D Ear Acquisition System

    PubMed Central

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition. PMID:26061553

  13. Biometric Challenges for Future Deployments: A Study of the Impact of Geography, Climate, Culture, and Social Conditions on the Effective Collection of Biometrics

    DTIC Science & Technology

    2011-04-01

    to iris or face recognition. In addition, breathing the dust is prevented with something covering the mouth, which will cause problems with face...headgear that "obscures the hair or hairline". [58] One database of face images has images of people with and without scarves over their mouth. [59... Electronic Systems Magazine. Vol 19, Number 9, September 2004. [58] U.S. Department of State. How to Apply [for a Passport] in Person. [On-line

  14. An Effective 3D Ear Acquisition System.

    PubMed

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age and facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.

  15. On the applicability of STDP-based learning mechanisms to spiking neuron network models

    NASA Astrophysics Data System (ADS)

    Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.

    2016-11-01

    Ways of creating a practically effective method for spiking neural network learning, one that would be appropriate for implementation in neuromorphic hardware while remaining based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability under different STDP rules is evaluated. The usability of alternative combined learning schemes involving artificial and spiking neuron models is demonstrated on the iris benchmark task and on the practical task of gender recognition.

  16. Extending the depth of field in a fixed focus lens using axial colour

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Niamh; Dainty, Christopher; Goncharov, Alexander V.

    2017-11-01

    We propose a method of extending the depth of field (EDOF) of conventional lenses for a low-cost iris recognition front-facing smartphone camera. Longitudinal chromatic aberration (LCA) can be induced in the lens by means of dual-wavelength illumination. The EDOF region is then constructed from the sum of the adjacent depths of field of each illumination wavelength. The lens parameters can be found analytically with paraxial raytracing. The extended depth of field depends on the glass chosen and the position of the near object point.
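    The chromatic focus spread such a design exploits can be estimated to first order from the standard thin-lens relation Δf ≈ f/V, where V is the glass's Abbe number; a quick sketch with hypothetical smartphone-lens numbers, not values from the paper:

```python
def chromatic_focal_shift(f_mm, abbe_v):
    """First-order longitudinal chromatic aberration of a thin lens:
    the focal-length spread across the visible band (F to C lines) is
    roughly f / V, where V is the glass's Abbe number."""
    return f_mm / abbe_v

# Hypothetical smartphone-like lens: f = 4 mm, BK7-like glass (V ~ 64).
print(chromatic_focal_shift(4.0, 64.0))  # mm of focus spread
```

A lower-dispersion glass (larger V) shrinks this spread, which is why the abstract notes that the achievable EDOF depends on the glass chosen.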

  17. Developing professional caregivers' empathy and emotional competencies through mindfulness-based stress reduction (MBSR): results of two proof-of-concept studies.

    PubMed

    Lamothe, Martin; McDuff, Pierre; Pastore, Yves D; Duval, Michel; Sultan, Serge

    2018-01-05

    To assess the feasibility and acceptability of a mindfulness-based stress reduction (MBSR)-based intervention and determine if the intervention is associated with a significant signal on empathy and emotional competencies. Two pre-post proof-of-concept studies. Participants were recruited at the University of Montreal's Psychology Department (Study 1) and the CHU Sainte-Justine Department of Hematology-Oncology (Study 2). Study 1: 12 students completed the 8-week programme (mean age 24, range 18-34). Study 2: 25 professionals completed the 8-week programme (mean age 48, range 27-63). Standard MBSR programme including 8-week mindfulness programme consisting of 8 consecutive weekly 2-hour sessions and a full-day silent retreat. Mindfulness as measured by the Mindful Attention Awareness Scale; empathy as measured by the Interpersonal Reactivity Index (IRI)'s Perspective Taking and Empathic Concern subscales; identification of one's own emotions and those of others as measured by the Profile of Emotional Competence (PEC)'s Identify my Emotions and Identify Others' Emotions subscales; emotional acceptance as measured by the Acceptance and Action Questionnaire-II (AAQ-II) and the Emotion Regulation Scale (ERQ)'s Expressive Suppression subscale; and recognition of emotions in others as measured by the Geneva Emotion Recognition Test (GERT). In both studies, retention rates (80%-81%) were acceptable. Participants who completed the programme improved on all measures except the PEC's Identify Others' Emotions and the IRI's Empathic Concern (Cohen's d median=0.92, range 0.45-1.72). In Study 2, favourable effects associated with the programme were maintained over 3 months on the PEC's Identify my Emotions, the AAQ-II, the ERQ's Expressive Suppression and the GERT. The programme was feasible and acceptable.
It was associated with a significant signal on the following outcomes: perspective taking, the identification of one's own emotions and emotional acceptance, thus, justifying moving towards efficacy trials using these outcomes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. Automated target recognition and tracking using an optical pattern recognition neural network

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1991-01-01

    The ongoing development of an automatic target recognition and tracking system at the Jet Propulsion Laboratory is presented. This system is an optical pattern recognition neural network (OPRNN) that is an integration of an innovative optical parallel processor and a feature extraction based neural net training algorithm. The parallel optical processor provides high speed and vast parallelism as well as full shift invariance. The neural network algorithm enables simultaneous discrimination of multiple noisy targets in spite of their scales, rotations, perspectives, and various deformations. This fully developed OPRNN system can be effectively utilized for the automated spacecraft recognition and tracking that will lead to success in the Automated Rendezvous and Capture (AR&C) of the unmanned Cargo Transfer Vehicle (CTV). One of the most powerful optical parallel processors for automatic target recognition is the multichannel correlator. With the inherent advantages of parallel processing capability and shift invariance, multiple objects can be simultaneously recognized and tracked using this multichannel correlator. This target tracking capability can be greatly enhanced by utilizing a powerful feature extraction based neural network training algorithm such as the neocognitron. The OPRNN, currently under investigation at JPL, is constructed with an optical multichannel correlator where holographic filters have been prepared using the neocognitron training algorithm. The computation speed of the neocognitron-type OPRNN is up to 10^14 analog connections/sec, enabling the OPRNN to outperform its state-of-the-art electronic counterpart by at least two orders of magnitude.
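    A digital analogue of the correlator's behaviour (a sketch of the principle only, not JPL's optical implementation): slide a template over the scene and take the location of the correlation peak. Detection is shift invariant because moving the target moves the peak location, not its height:

```python
def cross_correlate(scene, template):
    """Slide `template` over `scene`; the location of the maximum
    correlation score is the detected target position."""
    sh, sw = len(scene), len(scene[0])
    th, tw = len(template), len(template[0])
    best, best_pos = float("-inf"), None
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            score = sum(scene[r + i][c + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

scene = [[0, 0, 0, 0],
         [0, 0, 1, 2],
         [0, 0, 3, 4],
         [0, 0, 0, 0]]
template = [[1, 2],
            [3, 4]]
print(cross_correlate(scene, template))  # (1, 2): where the pattern sits
```

The optical multichannel version evaluates many such correlations in parallel, one per holographic filter, which is where its speed advantage comes from.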

  19. Recognizing Age-Separated Face Images: Humans and Machines

    PubMed Central

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200

  20. Recognizing age-separated face images: humans and machines.

    PubMed

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components--facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario.

  1. Key features for ATA / ATR database design in missile systems

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2017-05-01

    Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having a robust detection and recognition algorithm is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the database of images in the training and testing steps of the algorithm. This directly affects the recognition performance, since the training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easy way is by using a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database by using a scene generator is inexpensive, and it allows creating many different scenarios quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and difficult way is designing it using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspective of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared to each other with regard to their pros and cons.

  2. A 2D range Hausdorff approach to 3D facial recognition.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koch, Mark William; Russ, Trina Denise; Little, Charles Quentin

    2004-11-01

    This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N^2) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
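    A hedged sketch of the range-image idea (a simplified reading, not the authors' implementation): because the data live on a 2D grid, each probe pixel only needs to search a small window of the template instead of all N template points, which is what collapses the cost toward O(N):

```python
def directed_hausdorff_range(probe, template, window=1):
    """Directed Hausdorff-style distance between two aligned range
    images: for each probe pixel, find the smallest absolute range
    difference within a small template search window, then take the
    maximum over pixels.  The fixed window keeps the per-pixel work
    constant, giving O(N) overall."""
    h, w = len(probe), len(probe[0])
    worst = 0.0
    for r in range(h):
        for c in range(w):
            best = min(abs(probe[r][c] - template[rr][cc])
                       for rr in range(max(0, r - window), min(h, r + window + 1))
                       for cc in range(max(0, c - window), min(w, c + window + 1)))
            worst = max(worst, best)
    return worst

a = [[1.0, 1.1], [0.9, 1.0]]
b = [[1.0, 1.0], [1.0, 1.0]]
print(directed_hausdorff_range(a, b))
```

Taking the maximum over pixels is what makes the measure sensitive to the single worst-matched region; robust variants replace it with a high percentile to tolerate outliers.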

  3. Autoregressive statistical pattern recognition algorithms for damage detection in civil structures

    NASA Astrophysics Data System (ADS)

    Yao, Ruigen; Pakzad, Shamim N.

    2012-08-01

    Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi-degree-of-freedom system is generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of proposed algorithms.
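    A minimal sketch of the autoregressive control-chart idea (illustrative only, not the authors' models): fit an AR(1) model to the baseline response, then flag future residuals that escape the ±3σ control limits:

```python
import random
import statistics

def ar1_fit(series):
    """Least-squares fit of x[t] = a * x[t-1] + e[t]; returns a."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def residuals(series, a):
    return [series[t] - a * series[t - 1] for t in range(1, len(series))]

# Simulated baseline (undamaged) response: AR(1) with a = 0.8.
random.seed(0)
x, healthy = 0.0, []
for _ in range(500):
    x = 0.8 * x + random.gauss(0, 1)
    healthy.append(x)

a_hat = ar1_fit(healthy)
limit = 3 * statistics.stdev(residuals(healthy, a_hat))  # control limit
print(round(a_hat, 2), round(limit, 2))
```

Damage changes the structure's dynamics, so new data modelled with the baseline coefficient produce residuals that systematically exceed the control limits.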

  4. Wheezing recognition algorithm using recordings of respiratory sounds at the mouth in a pediatric population.

    PubMed

    Bokov, Plamen; Mahut, Bruno; Flaud, Patrice; Delclaux, Christophe

    2016-03-01

    Respiratory diseases in children are a common reason for physician visits. A diagnostic difficulty arises when parents hear wheezing that is no longer present during the medical consultation. Thus, an outpatient objective tool for recognition of wheezing is of clinical value. We developed a wheezing recognition algorithm from respiratory sounds recorded with a smartphone placed near the mouth. A total of 186 recordings were obtained in a pediatric emergency department, mostly in toddlers (mean age 20 months). After exclusion of recordings with artefacts and those with a single clinical operator auscultation, 95 recordings with the agreement of two operators on auscultation diagnosis (27 with wheezing and 68 without) were subjected to a two-phase algorithm (signal analysis and pattern classification using machine learning algorithms) to classify records. The best performance (71.4% sensitivity and 88.9% specificity) was observed with a Support Vector Machine-based algorithm. We further tested the algorithm on a set of 39 recordings with a single operator and found fair agreement (kappa=0.28, CI95% [0.12, 0.45]) between the algorithm and the operator. The main advantage of such an algorithm is its use in contact-free sound recording, which is valuable in the pediatric population. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology.

    PubMed

    Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J

    2011-02-26

    Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings.
With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert.

  6. Aided target recognition processing of MUDSS sonar data

    NASA Astrophysics Data System (ADS)

    Lau, Brian; Chao, Tien-Hsin

    1998-09-01

    The Mobile Underwater Debris Survey System (MUDSS) is a collaborative effort by the Navy and the Jet Propulsion Laboratory to demonstrate multi-sensor, real-time survey of underwater sites for ordnance and explosive waste (OEW). We describe the sonar processing algorithm, a novel target recognition algorithm incorporating wavelets, morphological image processing, expansion by Hermite polynomials, and neural networks. This algorithm has found all planted targets in MUDSS tests and has achieved spectacular success on another Coastal Systems Station (CSS) sonar image database.

  7. Classifier dependent feature preprocessing methods

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M., II; Peterson, Gilbert L.

    2008-04-01

    In mobile applications, computational complexity is an issue that limits sophisticated algorithms from being implemented on these devices. This paper provides an initial solution to applying pattern recognition systems on mobile devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to properly apply feature preprocessing tools prior to training classification models in an attempt to reduce computational complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile environment are feature ranking, feature extraction, data preparation and outlier removal. Most desktop systems today are capable of processing a majority of the available classification algorithms without concern for processing time, while the same is not true on mobile platforms. As an application of pattern recognition for mobile devices, the recognition system targets the problem of steganalysis: determining whether an image contains hidden information. The measure of performance shows that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The methods in this paper are tested on a workstation and a Nokia 6620 (Symbian operating system) camera phone with similar results.

  8. Recognition of plant parts with problem-specific algorithms

    NASA Astrophysics Data System (ADS)

    Schwanke, Joerg; Brendel, Thorsten; Jensch, Peter F.; Megnet, Roland

    1994-06-01

    Automatic micropropagation is necessary to produce high amounts of biomass cost-effectively. Juvenile plants are dissected in a clean-room environment at particular points on the stem or the leaves. A vision system detects possible cutting points and controls a specialized robot. This contribution is directed to the pattern-recognition algorithms used to detect structural parts of the plant.

  9. Enhanced facial texture illumination normalization for face recognition.

    PubMed

    Luo, Yong; Guan, Ye-Peng

    2015-08-01

    An uncontrolled lighting condition is one of the most critical challenges for practical face recognition applications. An enhanced facial texture illumination normalization method is put forward to resolve this challenge. An adaptive relighting algorithm is developed to improve the brightness uniformity of face images. Facial texture is extracted by using an illumination estimation difference algorithm. An anisotropic histogram-stretching algorithm is proposed to minimize the intraclass distance of facial skin and maximize the dynamic range of facial texture distribution. Compared with the existing methods, the proposed method can more effectively eliminate the redundant information of facial skin and illumination. Extensive experiments show that the proposed method has superior performance in normalizing illumination variation and enhancing facial texture features for illumination-insensitive face recognition.
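    The anisotropic histogram-stretching step builds on plain linear contrast stretching; a minimal sketch of that basic operation (the paper's anisotropic, region-dependent variant is not reproduced here):

```python
def stretch(values, lo=0.0, hi=255.0):
    """Linearly map the observed value range onto [lo, hi], maximizing
    the dynamic range of the texture values.  The paper's anisotropic
    variant additionally treats facial-skin and texture distributions
    differently (not modelled here)."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo for _ in values]
    return [lo + (v - vmin) / (vmax - vmin) * (hi - lo) for v in values]

print(stretch([50, 100, 150]))  # [0.0, 127.5, 255.0]
```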

  10. Computer Recognition of Facial Profiles

    DTIC Science & Technology

    1974-08-01

    facial recognition 20. ABSTRACT (Continue on reverse side if necessary and identify by block number) A system for the recognition of human faces from...21 2.6 Classification Algorithms ........... ... 32 III FACIAL RECOGNITION AND AUTOMATIC TRAINING . . . 37 3.1 Facial Profile Recognition...provide a fair test of the classification system. The work of Goldstein, Harmon, and Lesk [8] indicates, however, that for facial recognition, a ten class

  11. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technique that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
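    The square-root-of-N scaling corresponds to Grover's algorithm needing about (π/4)·√N oracle iterations, versus roughly N/2 expected probes for a classical linear scan; a quick classical computation of the iteration counts (illustrative only, not a quantum implementation):

```python
import math

def grover_iterations(n):
    """Optimal number of Grover iterations to find one marked item
    among n: floor((pi / 4) * sqrt(n))."""
    return math.floor(math.pi / 4 * math.sqrt(n))

for n in (10_000, 1_000_000):
    print(n, grover_iterations(n))  # vs ~n/2 expected classical probes
```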

  12. Face Recognition Using Local Quantized Patterns and Gabor Filters

    NASA Astrophysics Data System (ADS)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. A lot of methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition by using local quantized patterns and Gabor filters. The estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from a standardized FERET database shows that our method is invariant to the general variations of lighting, expression, occlusion and aging. The proposed approach allows about 20% correct recognition accuracy increase compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve the robustness to changes in lighting conditions.
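
    The Gabor filtering stage mentioned above can be sketched generically: a Gabor kernel is a Gaussian envelope modulating an oriented cosine carrier, and a bank sweeps the orientation. The function names and parameter defaults below are illustrative assumptions, not the authors' code:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2D Gabor filter: Gaussian envelope times a cosine
    carrier oriented at angle theta (gamma controls envelope ellipticity)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotate coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def gabor_bank(size, wavelength, sigma, n_orientations=8):
    """A filter bank covering n_orientations evenly spaced angles."""
    return [gabor_kernel(size, wavelength, k * math.pi / n_orientations, sigma)
            for k in range(n_orientations)]
```

Convolving a face image with such a bank yields orientation-selective texture responses, which is what makes the descriptor robust to lighting changes.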

  13. A Lightweight Hierarchical Activity Recognition Framework Using Smartphone Sensors

    PubMed Central

    Han, Manhyung; Bang, Jae Hun; Nugent, Chris; McClean, Sally; Lee, Sungyoung

    2014-01-01

    Activity recognition for the purposes of recognizing a user's intentions using multimodal sensors is becoming a widely researched topic largely based on the prevalence of the smartphone. Previous studies have reported the difficulty in recognizing life-logs by only using a smartphone due to the challenges with activity modeling and real-time recognition. In addition, recognizing life-logs is difficult due to the absence of an established framework which enables the use of different sources of sensor data. In this paper, we propose a smartphone-based Hierarchical Activity Recognition Framework which extends the Naïve Bayes approach for the processing of activity modeling and real-time activity recognition. The proposed algorithm demonstrates higher accuracy than the Naïve Bayes approach and also enables the recognition of a user's activities within a mobile environment. The proposed algorithm has the ability to classify fifteen activities with an average classification accuracy of 92.96%. PMID:25184486
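
    Since the framework above extends the Naïve Bayes approach, a minimal Gaussian Naive Bayes classifier over sensor feature vectors may clarify the baseline it builds on; the feature values and function names here are hypothetical:

```python
import math
from collections import defaultdict

def train_gaussian_nb(samples):
    """samples: list of (feature_vector, label). Returns per-class prior
    and per-feature mean/variance estimates."""
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append(x)
    model, total = {}, len(samples)
    for y, xs in by_label.items():
        n, dim = len(xs), len(xs[0])
        means = [sum(x[d] for x in xs) / n for d in range(dim)]
        var = [max(sum((x[d] - means[d]) ** 2 for x in xs) / n, 1e-9)
               for d in range(dim)]
        model[y] = (n / total, means, var)
    return model

def classify(model, x):
    """Pick the label maximizing log prior + sum of log Gaussian likelihoods."""
    best, best_score = None, float("-inf")
    for y, (prior, means, var) in model.items():
        score = math.log(prior)
        for d, v in enumerate(x):
            score += (-0.5 * math.log(2 * math.pi * var[d])
                      - (v - means[d]) ** 2 / (2 * var[d]))
        if score > best_score:
            best, best_score = y, score
    return best
```

A hierarchical variant, as in the paper, would first classify a coarse activity group and then run a second classifier within that group.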

  14. Measurement and design of refractive corrections using ultrafast laser-induced intra-tissue refractive index shaping in live cats

    NASA Astrophysics Data System (ADS)

    Brooks, Daniel R.; Wozniak, Kaitlin T.; Knox, Wayne; Ellis, Jonathan D.; Huxlin, Krystel R.

    2018-02-01

    Intra-Tissue Refractive Index Shaping (IRIS) uses a 405 nm femtosecond laser focused into the stromal region of the cornea to induce a local refractive index change through multiphoton absorption. This refractive index change can be tailored through scanning of the focal region and variations in laser power to create refractive structures, such as gradient index lenses for visual refractive correction. Previously, IRIS was used to create 2.5 mm wide, square, -1 D cylindrical refractive structures in living cats. In the present work, we first wrote 400 μm wide bars of refractive index change at varying powers in enucleated cat globes using a custom flexure-based scanning system. The cornea and surrounding sclera were then removed and mounted into a wet cell. The induced optical phase change was measured with a Mach-Zehnder Interferometer (MZI), and appeared as fringe displacement whose magnitude was proportional to the refractive index change. The interferograms produced by the MZI were analyzed with a Fourier-transform-based algorithm in order to extract the phase change. This provided a phase change versus laser power calibration, which was then used to design the scanning and laser power distribution required to create -1.5 D cylindrical Fresnel lenses in cat cornea covering an area 6 mm in diameter. This prescription was inscribed into the corneas of one eye each of two living cats, under surgical anesthesia. It was then verified in vivo by contrasting wavefront aberration measurements collected pre-IRIS with those obtained over six months post-IRIS using a Shack-Hartmann wavefront sensor.
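
    The fringe-analysis step can be illustrated on a 1-D signal: given a fringe pattern with a known carrier frequency, demodulation against the carrier recovers the phase term. This is a deliberate simplification of the paper's 2-D Fourier-transform method, and `extract_phase` is an assumed name:

```python
import cmath
import math

def extract_phase(signal, carrier_freq):
    """Demodulate a fringe signal I(x) = A + B*cos(2*pi*f0*x + phi):
    multiply by the conjugate carrier exp(-i*2*pi*f0*x) and average,
    which low-passes away the DC and double-frequency terms and leaves
    (B/2)*exp(i*phi), whose angle is the sought phase."""
    n = len(signal)
    acc = 0j
    for x, val in enumerate(signal):
        acc += val * cmath.exp(-2j * math.pi * carrier_freq * x)
    return cmath.phase(acc / n)
```

In the experiment, the recovered phase shift (fringe displacement) is proportional to the laser-induced refractive index change, giving the phase-versus-power calibration described above.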

  15. Real-Time Detection and Measurement of Eye Features from Color Images

    PubMed Central

    Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu

    2016-01-01

    The accurate extraction and measurement of eye features is crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. On the first stage the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections, and on the second stage the external shape of the eye (of the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on a different dataset demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly-available database. PMID:27438838

  16. Approach to recognition of flexible form for credit card expiration date recognition as example

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Nikolaev, Dmitry P.; Ingacheva, Anastasia; Skoryukina, Natalya

    2015-12-01

    In this paper we consider the task of locating information fields within documents that have a flexible layout, using the credit card expiration date field as an example. We discuss the main difficulties and suggest possible solutions. In our case the task must be solved on mobile devices, so computational complexity has to be as low as possible. We also provide results of an analysis of the suggested algorithm: the error distribution of the recognition system shows that the algorithm solves the task with the required accuracy.

  17. Comparison of crisp and fuzzy character networks in handwritten word recognition

    NASA Technical Reports Server (NTRS)

    Gader, Paul; Mohamed, Magdi; Chiang, Jung-Hsien

    1992-01-01

    Experiments involving handwritten word recognition on words taken from images of handwritten address blocks from the United States Postal Service mailstream are described. The word recognition algorithm relies on the use of neural networks at the character level. The neural networks are trained using crisp and fuzzy desired outputs. The fuzzy outputs were defined using a fuzzy k-nearest neighbor algorithm. The crisp networks slightly outperformed the fuzzy networks at the character level but the fuzzy networks outperformed the crisp networks at the word level.
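
    The fuzzy k-nearest neighbor algorithm used above to define fuzzy desired outputs assigns soft class memberships rather than a crisp label; here is a minimal sketch in the spirit of Keller et al.'s fuzzy k-NN (the function name and the small smoothing constant are assumptions):

```python
def fuzzy_knn_memberships(train, query, k=3, m=2.0):
    """Membership in each class is a distance-weighted vote over the k
    nearest neighbours, so the output is a soft score per class that sums
    to 1, not a crisp decision. train: list of (vector, label)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    neighbours = sorted(train, key=lambda t: dist2(t[0], query))[:k]
    weights, total = {}, 0.0
    for x, y in neighbours:
        # inverse-distance weight; exponent follows the fuzzifier m
        w = 1.0 / (dist2(x, query) ** (1.0 / (m - 1)) + 1e-12)
        weights[y] = weights.get(y, 0.0) + w
        total += w
    return {y: w / total for y, w in weights.items()}
```

Such membership vectors can serve directly as the "fuzzy desired outputs" when training a character-level neural network, as the abstract describes.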

  18. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.

    PubMed

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-07-23

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other.

  19. Study on recognition algorithm for paper currency numbers based on neural network

    NASA Astrophysics Data System (ADS)

    Li, Xiuyan; Liu, Tiegen; Li, Yuanyao; Zhang, Zhongchuan; Deng, Shichao

    2008-12-01

    Because each note's serial number is unique, paper currency numbers can be put on record, and automatic identification equipment for these numbers can serve the currency circulation market: it gives financial sectors a convenient way to trace fiduciary circulation and provides effective supervision of paper currency. It also helps in identifying forged notes, blacklisting forged note numbers, and addressing major crimes such as armored cash carrier robbery and money laundering. For the purpose of recognizing paper currency numbers, a recognition algorithm based on neural networks is presented in this paper. Number lines in original paper currency images are extracted through image processing steps such as de-noising, skew correction, segmentation, and normalization. According to the differing characteristics of digits and letters in the serial number, two kinds of classifiers are designed: with its associative memory, optimizing computation and rapid convergence, a Discrete Hopfield Neural Network (DHNN) is utilized to recognize the letters; with its simple structure, quick learning and global optimality, a Radial-Basis-Function Neural Network (RBFNN) is adopted to identify the digits. The final recognition result is obtained by combining the two partial results in their regular sequence. Simulation tests confirm that the combined algorithm achieves both a high recognition rate and fast recognition, making it promising for broad application.
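
    A Discrete Hopfield Neural Network like the one used here for letter recognition stores bipolar patterns through Hebbian weights and recalls them from noisy inputs via its associative memory; a minimal sketch (synchronous updates, illustrative only, not the paper's network):

```python
def hopfield_train(patterns):
    """Hebbian learning for bipolar (+1/-1) patterns:
    W[i][j] = sum over patterns of p[i]*p[j], with zero diagonal."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def hopfield_recall(w, state, steps=20):
    """Synchronous updates: threshold each neuron's weighted input until a
    fixed point (synchronous dynamics can oscillate in general; fine here)."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        new = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
               for i in range(n)]
        if new == s:
            break
        s = new
    return s
```

Feeding in a corrupted letter template converges to the nearest stored template, which is the "associative memory" property the abstract cites.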

  20. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems

    PubMed Central

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-01-01

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other. PMID:26213932

  1. Basics of identification measurement technology

    NASA Astrophysics Data System (ADS)

    Klikushin, Yu N.; Kobenko, V. Yu; Stepanov, P. P.

    2018-01-01

    No available pattern recognition algorithm offers a 100% guarantee, so research in this direction remains active and such studies are relevant. It is proposed to develop existing pattern recognition technologies through the application of identification measurements. The purpose of the study is to assess the possibility of recognizing images using identification measurement technologies. Problems of pattern recognition are mainly solved with neural networks and hidden Markov models. A fundamentally new approach to pattern recognition, based on the technology of identification signal measurements (IIS), is proposed. The essence of IIS technology is the quantitative evaluation of the shape of images using special tools and algorithms.

  2. A study of speech emotion recognition based on hybrid algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Ju-xia; Zhang, Chao; Lv, Zhao; Rao, Yao-quan; Wu, Xiao-pei

    2011-10-01

    To effectively improve the recognition accuracy of a speech emotion recognition system, a hybrid algorithm combining the Continuous Hidden Markov Model (CHMM), the All-Class-in-One Neural Network (ACON) and the Support Vector Machine (SVM) is proposed. In the SVM and ACON methods, global statistics are used as emotional features, while the CHMM method employs instantaneous features. The recognition rate of the proposed method is 92.25%, with a rejection rate of 0.78%, corresponding to relative improvements of 8.53%, 4.69% and 0.78% over the ACON, CHMM and SVM methods, respectively. The experimental results confirm its efficiency in distinguishing the anger, happiness, neutral and sadness emotional states.

  3. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    When an object in an image is analyzed further, it shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition in an image can be color, shape, pattern, texture and spatial information that represent objects in the digital image. A method has recently been developed for image feature extraction that analyzes objects through their curve characteristics (simple curves) and searches features using the object's chain code. This study develops an algorithm for the analysis and recognition of curve types as the basis for object recognition in images, proposing the addition of complex-curve characteristics with at most four branches for the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detection images, the algorithm was able to analyze and recognize complex curve shapes well.
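
    The chain-code feature mentioned above is conventionally the 8-direction Freeman chain code, which encodes a boundary as a sequence of step directions; this generic sketch is not the authors' implementation:

```python
# 8-direction Freeman chain code: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
# (with y increasing upward).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a connected pixel path (list of (x, y), each step moving to
    an 8-neighbour) as its Freeman chain code."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return code
```

Curve analysis then works on this direction sequence: straight segments give constant runs, corners give jumps, and a branch point (where a complex curve intersects) is where a single chain no longer suffices.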

  4. Automatic detection and recognition of traffic signs in stereo images based on features and probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Sheng, Yehua; Zhang, Ka; Ye, Chun; Liang, Cheng; Li, Jian

    2008-04-01

    Considering the problem of automatic traffic sign detection and recognition in stereo images captured under motion conditions, a new algorithm for traffic sign detection and recognition based on features and probabilistic neural networks (PNN) is proposed in this paper. Firstly, global statistical color features of the left image are computed based on statistics theory; then, for red, yellow and blue traffic signs, the left image is segmented into three binary images by a self-adaptive color segmentation method. Secondly, gray-value projection and shape analysis are used to confirm traffic sign regions in the left image, and stereo image matching is used to locate the corresponding traffic signs in the right image. Thirdly, self-adaptive image segmentation is used to extract the binary inner core shapes of the detected traffic signs, and one-dimensional feature vectors of the inner core shapes are computed by central projection transformation. Fourthly, these vectors are input to the trained probabilistic neural networks for traffic sign recognition. Lastly, recognition results in the left image are compared with those in the right image; if the results agree, they are confirmed as final recognition results. The new algorithm was applied to 220 real images of natural scenes taken by a vehicle-borne mobile photogrammetry system in Nanjing at different times. Experimental results show a detection and recognition rate of over 92%. The algorithm is thus not only simple but also reliable and fast for real traffic sign detection and recognition, and it obtains the geometrical information of traffic signs at the same time as recognizing their types.

  5. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology

    PubMed Central

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767
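
    The ReliefF feature selection used above can be illustrated with the simpler two-class Relief scheme it generalizes: a feature is rewarded when it separates a sample from its nearest miss (other class) and penalized when it differs from its nearest hit (same class). This is a sketch of the principle, not the study's code:

```python
def relief_weights(samples, n_features):
    """Two-class Relief-style feature weighting.
    samples: list of (feature_vector, label)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    weights = [0.0] * n_features
    for i, (x, y) in enumerate(samples):
        hits = [s for j, s in enumerate(samples) if j != i and s[1] == y]
        misses = [s for s in samples if s[1] != y]
        hit = min(hits, key=lambda s: dist2(s[0], x))[0]
        miss = min(misses, key=lambda s: dist2(s[0], x))[0]
        for d in range(n_features):
            # reward separation from the miss, penalize disagreement with the hit
            weights[d] += (x[d] - miss[d]) ** 2 - (x[d] - hit[d]) ** 2
    return weights
```

Ranking the 129 texture, color and shape features by such weights and keeping the top-scoring ones is the spirit of selecting the 45 features used for the optimal SVM model.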

  6. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology.

    PubMed

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease.

  7. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.

  8. Improving a HMM-based off-line handwriting recognition system using MME-PSO optimization

    NASA Astrophysics Data System (ADS)

    Hamdani, Mahdi; El Abed, Haikal; Hamdani, Tarek M.; Märgner, Volker; Alimi, Adel M.

    2011-01-01

    One of the non-trivial steps in the development of a classifier is the design of its architecture. This paper presents a new algorithm, Multi Models Evolvement (MME) using Particle Swarm Optimization (PSO). This algorithm is a modified version of the basic PSO, used for the unsupervised design of Hidden Markov Model (HMM) based architectures. The proposed algorithm is applied to an Arabic handwriting recognizer based on discrete probability HMMs. After the optimization of their architectures, the HMMs are trained with the Baum-Welch algorithm. The validation of the system is based on the IfN/ENIT database. The performance of the developed approach is compared to the systems participating in the 2005 competition on Arabic handwriting recognition organized at the International Conference on Document Analysis and Recognition (ICDAR). The final system is a combination of an optimized HMM with 6 other HMMs obtained by simple variation of the number of states. An absolute improvement of 6% in word recognition rate over the baseline system (ARAB-IfN) is achieved, reaching about 81%. The proposed recognizer also outperforms most of the known state-of-the-art systems.
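
    The PSO component that MME modifies can be sketched in its basic form, here minimizing an arbitrary objective. The inertia and acceleration constants are conventional assumed values, not those of the paper:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Basic particle swarm optimization: each particle is pulled toward its
    own best-seen position (pbest) and the swarm's global best (gbest)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In MME the "position" would encode an HMM architecture (e.g., state counts) and the objective would score the trained model, rather than a simple numeric function as here.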

  9. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  10. What does voice-processing technology support today?

    PubMed Central

    Nakatsu, R; Suzuki, Y

    1995-01-01

    This paper describes the state of the art in applications of voice-processing technologies. In the first part, technologies concerning the implementation of speech recognition and synthesis algorithms are described. Hardware technologies such as microprocessors and DSPs (digital signal processors) are discussed. The software development environment, a key technology in developing application software ranging from DSP software to support software, is also described. In the second part, the state of the art of algorithms from the standpoint of applications is discussed. Several issues concerning the evaluation of speech recognition/synthesis algorithms are covered, as well as issues concerning the robustness of algorithms in adverse conditions. PMID:7479720

  11. Detection of trace explosives on relevant substrates using a mobile platform for photothermal infrared imaging spectroscopy (PT-IRIS)

    NASA Astrophysics Data System (ADS)

    Kendziora, Christopher A.; Furstenberg, Robert; Papantonakis, Michael; Nguyen, Viet; Byers, Jeff; McGill, R. Andrew

    2015-05-01

    This manuscript describes the results of recent tests regarding standoff detection of trace explosives on relevant substrates using a mobile platform. We are developing a technology for detection based on photo-thermal infrared (IR) imaging spectroscopy (PT-IRIS). This approach leverages one or more microfabricated IR quantum cascade lasers, tuned to strong absorption bands in the analytes and directed to illuminate an area on a surface of interest. An IR focal plane array is used to image the surface thermal emission upon laser illumination. The PT-IRIS signal is processed as a hyperspectral image cube comprising spatial, spectral and temporal dimensions as vectors within a detection algorithm. Increased sensitivity to explosives and selectivity between different analyte types is achieved by narrow bandpass IR filters in the collection path. We have previously demonstrated the technique at several meters of stand-off distance indoors and in field tests, while operating the lasers below the infrared eye-safe intensity limit (100 mW/cm²). Sensitivity to explosive traces as small as a single 10 μm diameter particle (~1 ng) has been demonstrated. Analytes tested here include RDX, TNT, ammonium nitrate and sucrose. The substrates tested in this current work include metal, plastics, glass and painted car panels.

  12. On the creation of high spatial resolution imaging spectroscopy data from multi-temporal low spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Yao, Wei; van Aardt, Jan; Messinger, David

    2017-05-01

    The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data to the benefit of especially ecosystem studies. The onboard spectrometer will collect radiance spectra from the visible to short wave infrared (VSWIR) regions (400-2500 nm). The mission calls for fine spectral resolution (10 nm band width) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30m), which restricts applications to objects of similar scales. We therefore have focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the development or reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets to address the drawbacks implicit in low spatial resolution imagery. The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies that we have access to as many as six data sets for an area over the course of a growth season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to be from a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm then can be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images. 
Ideally, if we have r^2 images of a scene which all meet the following requirements - i) they are captured with the same camera configurations; ii) the pixel size of each image is x; and iii) the images are aligned on a grid of x/r - then a high-resolution image, with a pixel size of x/r, can be reconstructed from the multi-temporal set. The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18m), collected between 2013-2015 (summer and fall) over our study area (NEON's Pacific Southwest Domain; Fresno, CA) to generate higher spatial resolution imagery (GSD 9m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to the HyspIRI data.
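    The reconstruction idea, interleaving r^2 sub-pixel-shifted low-resolution images onto a grid r times finer, can be illustrated with a minimal Python sketch. This is the idealized, noise-free case with exact alignment, which real multi-temporal data would only approximate:

```python
def interleave(lr_images, r):
    """Merge r*r sub-pixel-shifted low-res images into one high-res image.

    lr_images[dy][dx] is the low-res image whose sampling grid is offset
    by (dy, dx) high-res pixels.
    """
    h, w = len(lr_images[0][0]), len(lr_images[0][0][0])
    hr = [[0] * (w * r) for _ in range(h * r)]
    for dy in range(r):
        for dx in range(r):
            img = lr_images[dy][dx]
            for i in range(h):
                for j in range(w):
                    hr[i * r + dy][j * r + dx] = img[i][j]
    return hr

# Demo: sample a known 4x4 scene at four half-pixel offsets, then rebuild it.
hr_true = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
lr = [[[[hr_true[2 * i + dy][2 * j + dx] for j in range(2)]
        for i in range(2)] for dx in range(2)] for dy in range(2)]
hr = interleave(lr, 2)
```

Under exact alignment the fine grid is recovered perfectly; in practice registration error and scene change between acquisitions limit the gain.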

  13. A multifaceted independent performance analysis of facial subspace recognition algorithms.

    PubMed

    Bajwa, Usama Ijaz; Taj, Imtiaz Ahmad; Anwar, Muhammad Waqas; Wang, Xuan

    2013-01-01

    Face recognition has emerged as the fastest growing biometric technology and has expanded considerably in the last few years. Many new algorithms and commercial systems have been proposed and developed, most of which use Principal Component Analysis (PCA) as a basis for their techniques. Different and even conflicting results have been reported by researchers comparing these algorithms. The purpose of this study is to provide an independent comparative analysis, considering both performance and computational complexity, of six appearance-based face recognition algorithms, namely PCA, 2DPCA, A2DPCA, (2D)^2-PCA, LPP and 2DLPP, under equal working conditions. This study was motivated by the lack of unbiased, comprehensive comparative analyses of recent subspace methods with diverse distance-metric combinations. For comparability with other studies, the FERET, ORL and YALE databases were used, with evaluation criteria as in the FERET evaluations, which closely simulate real-life scenarios. A comparison of results with previous studies is performed and anomalies are reported. An important contribution of this study is that it presents the suitable performance conditions for each of the algorithms under consideration.

  14. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    PubMed Central

    Jung, Jaehoon; Yoon, Inhye; Paik, Joonki

    2016-01-01

    This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978

  15. A novel rotational invariants target recognition method for rotating motion blurred images

    NASA Astrophysics Data System (ADS)

    Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen

    2017-11-01

    Imagery from a sensor on a rotating carrier is blurred by the rotational motion, which greatly reduces the target recognition rate. The traditional approach, which restores the image first and then identifies the target, can improve the recognition rate but takes a long time. To solve this problem, a rotating-blur-invariant feature extraction model was constructed that recognizes targets directly. The model comprises three metric layers whose metric algorithms (a gray-value statistical algorithm, an improved round projection transformation algorithm, and rotation-convolution moment invariants) describe the object with ability ranging from low to high; the metric layer with the lowest descriptive ability serves as the input, and the layers gradually eliminate non-target pixels from the degraded image. Experimental results show that the proposed model improves the correct target recognition rate for blurred images and achieves a good trade-off between computational complexity and performance.
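    The round projection transformation mentioned above relies on ring statistics being unchanged by rotation about the image centre. A minimal Python sketch of such ring features follows; it is a generic illustration, not the authors' improved algorithm:

```python
import math

def ring_features(img, n_rings):
    """Mean intensity in concentric rings about the image centre.

    Because each ring is rotationally symmetric, the feature vector is
    (approximately) invariant to rotation of the image about its centre.
    """
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rmax = math.hypot(cy, cx) + 1e-9
    sums, counts = [0.0] * n_rings, [0] * n_rings
    for y in range(h):
        for x in range(w):
            k = min(int(math.hypot(y - cy, x - cx) / rmax * n_rings),
                    n_rings - 1)
            sums[k] += img[y][x]
            counts[k] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
rot90 = [list(row) for row in zip(*img[::-1])]   # the same image rotated 90°
```

Rotating the image by 90° permutes pixels within each ring, so the feature vector is unchanged.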

  16. An evolution based biosensor receptor DNA sequence generation algorithm.

    PubMed

    Kim, Eungyeong; Lee, Malrey; Gatton, Thomas M; Lee, Jaewan; Zang, Yupeng

    2010-01-01

    A biosensor is composed of a bioreceptor, an associated recognition molecule, and a signal transducer that can selectively detect target substances for analysis. DNA based biosensors utilize receptor molecules that allow hybridization with the target analyte. However, most DNA biosensor research uses oligonucleotides as the target analytes and does not address the potential problems of real samples. The identification of recognition molecules suitable for real target analyte samples is an important step towards further development of DNA biosensors. This study examines the characteristics of DNA used as bioreceptors and proposes a hybrid evolution-based DNA sequence generating algorithm, based on DNA computing, to identify suitable DNA bioreceptor recognition molecules for stable hybridization with real target substances. The Traveling Salesman Problem (TSP) approach is applied in the proposed algorithm to evaluate the safety and fitness of the generated DNA sequences. This approach improves efficiency and stability for enhanced and variable-length DNA sequence generation and allows extension to generation of variable-length DNA sequences with diverse receptor recognition requirements.

  17. Gabor filter based fingerprint image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Jin-Xiang

    2013-03-01

    Fingerprint recognition has become the most reliable biometric technology due to the uniqueness and invariance of fingerprints, and it is among the most convenient and dependable techniques for personal authentication. The development of Automated Fingerprint Identification Systems is an urgent need of modern information security, and the fingerprint preprocessing stage plays an important part in such systems. This article introduces the general steps in fingerprint recognition, namely image input, preprocessing, feature recognition, and fingerprint image enhancement. As the key to fingerprint identification technology, image enhancement affects the accuracy of the whole system. The article focuses on the characteristics of the fingerprint image, the Gabor filter algorithm for fingerprint image enhancement, the theoretical basis of Gabor filters, and a demonstration of the filter. The enhancement algorithm is demonstrated on the Windows XP platform with Matlab 6.5 as the development tool. The results show that the Gabor filter is effective for fingerprint image enhancement.
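    A Gabor filter bank for ridge enhancement is built from oriented kernels of the standard form (Gaussian envelope times cosine carrier). This Python sketch constructs the real (even-symmetric) Gabor kernel; the parameter values are illustrative, not those used in the article:

```python
import math

def gabor_kernel(size, theta, lam, sigma, gamma=0.5):
    """Real (even-symmetric) Gabor kernel.

    theta: ridge orientation (radians), lam: carrier wavelength in pixels,
    sigma: Gaussian envelope width, gamma: spatial aspect ratio.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + (gamma * yr) ** 2)
                                / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xr / lam))
        kernel.append(row)
    return kernel

k = gabor_kernel(5, 0.0, 4.0, 2.0)
```

For enhancement, the fingerprint is convolved with the kernel whose orientation matches the local ridge direction estimated in each block.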

  18. An Energy-Efficient and Scalable Deep Learning/Inference Processor With Tetra-Parallel MIMD Architecture for Big Data Applications.

    PubMed

    Park, Seong-Wook; Park, Junyoung; Bong, Kyeongryeol; Shin, Dongjoo; Lee, Jinmook; Choi, Sungpill; Yoo, Hoi-Jun

    2015-12-01

    Deep learning algorithms are widely used for various pattern recognition applications such as text, object and action recognition because of their best-in-class recognition accuracy compared to hand-crafted and shallow-learning-based algorithms. The long learning time caused by their complex structure, however, has so far limited their use to high-cost servers or many-core GPU platforms. On the other hand, the demand for customized pattern recognition within personal devices will grow gradually as more deep learning applications are developed. This paper presents an SoC implementation that enables deep learning applications to run on low-cost platforms such as mobile or portable devices. Unlike conventional works that have adopted massively parallel architectures, this work adopts a task-flexible architecture and exploits multiple forms of parallelism to cover the complex functions of the convolutional deep belief network, one of the popular deep learning/inference algorithms. In this paper, we implement the most energy-efficient deep learning and inference processor for wearable systems. The implemented 2.5 mm × 4.0 mm deep learning/inference processor is fabricated in a 65 nm 8-metal CMOS technology for a battery-powered platform with real-time deep inference and deep learning operation. It consumes 185 mW average power, and 213.1 mW peak power, at a 200 MHz operating frequency and 1.2 V supply voltage. It achieves 411.3 GOPS peak performance and 1.93 TOPS/W energy efficiency, which is 2.07× higher than the state-of-the-art.

  19. Products recognition on shop-racks from local scale-invariant features

    NASA Astrophysics Data System (ADS)

    Zawistowski, Jacek; Kurzejamski, Grzegorz; Garbat, Piotr; Naruniec, Jacek

    2016-04-01

    This paper presents a system designed for multi-object detection and adapted to searching for products on market shelves. The system uses well-known binary keypoint detection algorithms to find characteristic points in the image. One of its main ideas is object recognition based on the Implicit Shape Model method, to which the authors propose many improvements. In the original algorithm, fiducial points are matched with a very simple function, which limits the number of object parts that can be successfully separated; various classification methods may be validated in order to achieve higher performance. Such an extension implies research on a training procedure able to deal with many object categories. The proposed solution opens new possibilities for many algorithms demanding fast and robust multi-object recognition.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barstow, Del R; Patlolla, Dilip Reddy; Mann, Christopher J

    The data captured by existing standoff biometric systems typically yields lower biometric recognition performance than their close-range counterparts due to imaging challenges, pose challenges, and other factors. To help overcome these limitations, systems typically operate in a multi-modal capacity, such as Honeywell's Combined Face and Iris (CFAIRS) [21] system. While this improves performance, standoff systems have yet to be proven as accurate as their close-range equivalents. We present a standoff system capable of operating at up to 7 meters in range. Unlike many systems such as the CFAIRS, our system captures high-quality 12 MP video, allowing for multi-sample as well as multi-modal comparison. We found that for standoff systems, multi-sample comparison improved performance more than multi-modal comparison. For a small test group of 50 subjects we were able to achieve 100% rank-one recognition performance with our system.

  1. Quest Hierarchy for Hyperspectral Face Recognition

    DTIC Science & Technology

    2011-03-01

    numerous face recognition algorithms available, several very good literature surveys are available that include Abate [29], Samal [110], Kong [18], Zou...Perception, Japan (January 1994). [110] Samal, Ashok and P. Iyengar, Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey

  2. Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.

    PubMed

    Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich

    2017-08-01

    Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms (Wild-ID, I3S Pattern+, APHIS, and AmphIdent) using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are infeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.

  3. A Genetic Algorithm That Exchanges Neighboring Centers for Fuzzy c-Means Clustering

    ERIC Educational Resources Information Center

    Chahine, Firas Safwan

    2012-01-01

    Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithm, but has a major…

  4. ORAC-DR -- spectroscopy data reduction

    NASA Astrophysics Data System (ADS)

    Hirst, Paul; Cavanagh, Brad

    ORAC-DR is a general-purpose automatic data-reduction pipeline environment. This document describes its use to reduce spectroscopy data collected at the United Kingdom Infrared Telescope (UKIRT) with the CGS4, UIST and Michelle instruments, at the Anglo-Australian Telescope (AAT) with the IRIS2 instrument, and from the Very Large Telescope with ISAAC. It outlines the algorithms used and how to make minor modifications of them, and how to correct for errors made at the telescope.

  5. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality

    PubMed Central

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque

    2018-01-01

    Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. Non-verbal cues such as gestures, body movements, and facial expressions convey feelings and feedback to the user. This discipline of Human–Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam. PMID:29389845

  6. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality.

    PubMed

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y

    2018-02-01

    Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. Non-verbal cues such as gestures, body movements, and facial expressions convey feelings and feedback to the user. This discipline of Human-Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam.

  7. Feature Selection in Classification of Eye Movements Using Electrooculography for Activity Recognition

    PubMed Central

    Mala, S.; Latha, K.

    2014-01-01

    Activity recognition is needed in many applications, for example surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, Differential Evolution (DE), a very efficient evolutionary optimizer, is used to find informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interaction with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum-redundancy maximum-relevance features, and Differential Evolution based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for faultless activity recognition. PMID:25574185
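    The DE optimizer at the heart of the feature selection can be sketched generically. The following Python implementation of the standard DE/rand/1/bin scheme minimizes a toy sphere function; it is a textbook form of DE, not the authors' feature-selection setup:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimal DE/rand/1/bin minimizer over box-constrained variables."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Three distinct donors, none equal to the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)      # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            f_trial = f(trial)
            if f_trial <= fit[i]:            # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

# Toy stand-in objective; the paper instead optimizes a feature-subset
# criterion for EOG classification.
best_x, best_f = differential_evolution(lambda x: sum(v * v for v in x),
                                        [(-5.0, 5.0)] * 3)
```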

  8. Feature selection in classification of eye movements using electrooculography for activity recognition.

    PubMed

    Mala, S; Latha, K

    2014-01-01

    Activity recognition is needed in many applications, for example surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, Differential Evolution (DE), a very efficient evolutionary optimizer, is used to find informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interaction with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum-redundancy maximum-relevance features, and Differential Evolution based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for faultless activity recognition.

  9. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.

  10. Influence of time and length size feature selections for human activity sequences recognition.

    PubMed

    Fang, Hongqing; Chen, Long; Srinivasan, Raghavendiran

    2014-01-01

    In this paper, the Viterbi algorithm based on a hidden Markov model is applied to recognize activity sequences from observed sensor events. Alternative selections of the time feature values of sensor events and of the activity length feature values are tested, and the resulting activity sequence recognition performance of the Viterbi algorithm is evaluated. The results show that selecting larger time feature values of sensor events and/or smaller activity length feature values produces relatively better activity sequence recognition performance.
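    The Viterbi decoding step can be sketched as follows. The two-state model below (states, transition and emission probabilities) is entirely hypothetical and far smaller than a real activity model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence."""
    # V[t][s] = (best probability of any path ending in s at time t, prev state)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    path = [max(states, key=lambda s: V[-1][s][0])]   # best final state
    for t in range(len(obs) - 1, 0, -1):              # backtrack
        path.append(V[t][path[-1]][1])
    return path[::-1]

# Hypothetical two-state activity model (not the paper's sensor model)
states = ('Active', 'Idle')
start_p = {'Active': 0.6, 'Idle': 0.4}
trans_p = {'Active': {'Active': 0.7, 'Idle': 0.3},
           'Idle': {'Active': 0.4, 'Idle': 0.6}}
emit_p = {'Active': {'motion': 0.5, 'door': 0.4, 'none': 0.1},
          'Idle': {'motion': 0.1, 'door': 0.3, 'none': 0.6}}
best_path = viterbi(('motion', 'door', 'none'), states,
                    start_p, trans_p, emit_p)
```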

  11. Stereo vision with distance and gradient recognition

    NASA Astrophysics Data System (ADS)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition, and movement to a target spot. With sensors that use infrared rays or ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot much more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed for it to proceed without failure. This study developed an algorithm that recognizes the distance and gradient of the environment through stereo matching.
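    For a rectified stereo pair, distance recognition reduces to triangulation from disparity. A minimal sketch with hypothetical camera parameters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched point in a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 10 cm baseline, 35 px disparity
z = depth_from_disparity(700.0, 0.10, 35.0)   # 2.0 m
```

The gradient of an incline can then be estimated from the depths of vertically separated matches on the surface.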

  12. Near-infrared face recognition utilizing OpenCV software

    NASA Astrophysics Data System (ADS)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and software developed by the authors are combined to detect and recognize subjects in an environment without visible light. This project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, an NIR-capable camera, and a software algorithm capable of performing image manipulation, facial detection and recognition. Focusing our efforts on the near-infrared spectrum allows the low-budget system to operate covertly while still allowing for accurate face recognition. In doing so, a valuable capability has been developed that presents potential benefits in future civilian and military security and surveillance operations.

  13. FAQs about IRIs.

    ERIC Educational Resources Information Center

    Paris, Scott G.; Carpenter, Robert D.

    2003-01-01

    Provides information about informal reading inventories (IRI), an early reading assessment. Explains what IRIs are; who should administer an IRI; when and how to administer an IRI; the reliability of data from IRIs; and the limitations of IRIs. (PM)

  14. Fusing face-verification algorithms and humans.

    PubMed

    O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon

    2007-10-01

    It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least square regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
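    The score-fusion idea can be sketched with a simpler stand-in: ordinary least squares in place of PLSR, learning weights for two matchers' similarity scores from labeled face pairs. All scores below are hypothetical:

```python
def fusion_weights(scores_a, scores_b, labels):
    """Least-squares weights for combining two matchers' similarity scores.

    A simple ordinary-least-squares stand-in for the paper's partial least
    squares regression; it solves the 2x2 normal equations directly.
    """
    saa = sum(a * a for a in scores_a)
    sbb = sum(b * b for b in scores_b)
    sab = sum(a * b for a, b in zip(scores_a, scores_b))
    say = sum(a * y for a, y in zip(scores_a, labels))
    sby = sum(b * y for b, y in zip(scores_b, labels))
    det = saa * sbb - sab * sab
    return (sbb * say - sab * sby) / det, (saa * sby - sab * say) / det

# Hypothetical scores from two matchers; labels: 1 = same face, 0 = different
scores_a = [0.9, 0.2, 0.8, 0.1]
scores_b = [0.7, 0.4, 0.9, 0.3]
labels = [1, 0, 1, 0]
wa, wb = fusion_weights(scores_a, scores_b, labels)
fused = [wa * a + wb * b for a, b in zip(scores_a, scores_b)]
```

On this toy data the fused scores separate the matched pairs from the non-matched pairs more cleanly than either matcher alone.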

  15. A DFT-Based Method of Feature Extraction for Palmprint Recognition

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Karungaru, Stephen G.; Tsuge, Satoru; Fukumi, Minoru

    Over the last quarter century, research in biometric systems has developed at a breathtaking pace and what started with the focus on the fingerprint has now expanded to include face, voice, iris, and behavioral characteristics such as gait. Palmprint is one of the most recent additions, and is currently the subject of great research interest due to its inherent uniqueness, stability, user-friendliness and ease of acquisition. This paper describes an effective and procedurally simple method of palmprint feature extraction specifically for palmprint recognition, although verification experiments are also conducted. This method takes advantage of the correspondences that exist between prominent palmprint features or objects in the spatial domain with those in the frequency or Fourier domain. Multi-dimensional feature vectors are formed by extracting a GA-optimized set of points from the 2-D Fourier spectrum of the palmprint images. The feature vectors are then used for palmprint recognition, before and after dimensionality reduction via the Karhunen-Loeve Transform (KLT). Experiments performed using palmprint images from the ‘PolyU Palmprint Database’ indicate that using a compact set of DFT coefficients, combined with KLT and data preprocessing, produces a recognition accuracy of more than 98% and can provide a fast and effective technique for personal identification.
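    The feature extraction can be sketched as sampling the magnitude of the 2-D Fourier spectrum at selected frequency points. In the paper the point set is GA-optimized; the sketch below uses a fixed set and a tiny hypothetical image:

```python
import cmath

def dft2_coeff(img, u, v):
    """A single 2-D DFT coefficient F(u, v) of a grayscale image."""
    h, w = len(img), len(img[0])
    return sum(img[y][x] * cmath.exp(-2j * cmath.pi * (u * y / h + v * x / w))
               for y in range(h) for x in range(w))

def dft_features(img, points):
    """Feature vector of spectrum magnitudes |F(u, v)| at selected points."""
    return [abs(dft2_coeff(img, u, v)) for u, v in points]

# Hypothetical 4x4 "palmprint" patch and a fixed low-frequency point set
img = [[10, 12, 11, 13], [9, 11, 10, 12], [10, 13, 12, 11], [8, 10, 9, 12]]
features = dft_features(img, [(0, 0), (0, 1), (1, 0), (1, 1)])
```

The (0, 0) coefficient is the DC term, i.e. the sum of all pixel values; the remaining points capture coarse texture structure.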

  16. The MITLL NIST LRE 2015 Language Recognition System

    DTIC Science & Technology

    2016-05-06

    The MITLL NIST LRE 2015 Language Recognition System Pedro Torres-Carrasquillo, Najim Dehak*, Elizabeth Godoy, Douglas Reynolds, Fred Richardson...most recent MIT Lincoln Laboratory language recognition system developed for the NIST 2015 Language Recognition Evaluation (LRE). The submission...Task The National Institute of Standards and Technology (NIST) has conducted formal evaluations of language detection algorithms since 1994. In

  17. The MITLL NIST LRE 2015 Language Recognition system

    DTIC Science & Technology

    2016-02-05

    The MITLL NIST LRE 2015 Language Recognition System Pedro Torres-Carrasquillo, Najim Dehak*, Elizabeth Godoy, Douglas Reynolds, Fred Richardson...recent MIT Lincoln Laboratory language recognition system developed for the NIST 2015 Language Recognition Evaluation (LRE). The submission features a...National Institute of Standards and Technology (NIST) has conducted formal evaluations of language detection algorithms since 1994. In previous

  18. Effectiveness of feature and classifier algorithms in character recognition systems

    NASA Astrophysics Data System (ADS)

    Wilson, Charles L.

    1993-04-01

    At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for a large number of character recognition systems. Most systems were tested on the recognition of isolated digits and upper- and lower-case alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits and 12,000 upper- and lower-case alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included Multi-Layer Perceptrons, Learned Vector Quantization, Neocognitrons, and cascaded neural networks. In this paper 11 different systems are compared using correlations between the answers of different systems, comparing the decrease in error rate as a function of confidence of recognition, and comparing the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority to more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.

  19. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved by information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face to the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores produced by these algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
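    The score-vector classification stage can be sketched with a tiny k-nearest-neighbour classifier over three-score vectors. All scores below are hypothetical; the paper also evaluates SVM and BLR classifiers:

```python
def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k nearest training score vectors."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, query)), y)
                   for x, y in zip(train_x, train_y))
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical 3-algorithm score vectors; 1 = genuine match, 0 = impostor
train_x = [[0.90, 0.80, 0.85], [0.88, 0.90, 0.80], [0.92, 0.85, 0.90],
           [0.20, 0.30, 0.25], [0.10, 0.20, 0.15], [0.30, 0.25, 0.20]]
train_y = [1, 1, 1, 0, 0, 0]
```

A query score vector near the genuine cluster votes 1; one near the impostor cluster votes 0.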

  20. [Algorithm for the automated processing of rheosignals].

    PubMed

    Odinets, G S

    1988-01-01

    An algorithm for rheosignal recognition was examined for a microprocessor device with a display unit and both automated and manual cursor control. The algorithm makes it possible to automate the registration and processing of rheosignals while taking their variability into account.

  1. A dynamical pattern recognition model of gamma activity in auditory cortex

    PubMed Central

    Zavaglia, M.; Canolty, R.T.; Schofield, T.M.; Leff, A.P.; Ursino, M.; Knight, R.T.; Penny, W.D.

    2012-01-01

    This paper describes a dynamical process which serves both as a model of temporal pattern recognition in the brain and as a forward model of neuroimaging data. This process is considered at two separate levels of analysis: the algorithmic and implementation levels. At an algorithmic level, recognition is based on the use of Occurrence Time features. Using a speech digit database we show that for noisy recognition environments, these features rival standard cepstral coefficient features. At an implementation level, the model is defined using a Weakly Coupled Oscillator (WCO) framework and uses a transient synchronization mechanism to signal a recognition event. In a second set of experiments, we use the strength of the synchronization event to predict the high gamma (75–150 Hz) activity produced by the brain in response to word versus non-word stimuli. Quantitative model fits allow us to make inferences about parameters governing pattern recognition dynamics in the brain. PMID:22327049
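    The paper's weakly coupled oscillator model is not specified in this abstract. As a generic illustration of how coupling strength can produce the kind of synchronization event used to signal recognition, here is a minimal two-oscillator Kuramoto-style sketch; the frequencies, coupling values, and step sizes are arbitrary choices, not the authors' parameters.

```python
import numpy as np

def simulate(coupling, omega=(10.0, 10.5), dt=1e-3, steps=5000):
    """Euler-integrate two coupled phase oscillators and return the
    final absolute phase difference, wrapped into [0, pi]."""
    th = np.array([0.0, 1.0])            # initial phases
    w = np.array(omega)                  # natural frequencies (rad/s)
    for _ in range(steps):
        th = th + dt * (w + coupling * np.sin(th[::-1] - th))
    return abs(((th[0] - th[1]) + np.pi) % (2 * np.pi) - np.pi)

weak = simulate(coupling=0.0)    # uncoupled: phases drift apart
strong = simulate(coupling=5.0)  # strong coupling: phases lock
print(weak, strong)              # strong gap is much smaller than weak gap
```

    With coupling above the detuning threshold the phase gap settles near arcsin(Δω/2K); the strength of such locking is what the paper correlates with high-gamma activity.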

  2. Structural model constructing for optical handwritten character recognition

    NASA Astrophysics Data System (ADS)

    Khaustov, P. A.; Spitsyn, V. G.; Maksimova, E. I.

    2017-02-01

    The article is devoted to the development of algorithms for optical handwritten character recognition based on the construction of structural models. The main advantage of these algorithms is their low requirement on the number of reference images. A one-pass approach to thinning the binary character representation is proposed, based on the joint use of the Zhang-Suen and Wu-Tsai algorithms. The effectiveness of the proposed approach is confirmed by experimental results. The article includes a detailed description of the steps of the structural model construction algorithm. The proposed algorithm has been implemented in a character processing application and evaluated on the MNIST handwritten character database. Algorithms suitable for a limited number of reference images were used for the comparison.
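    The thinning step can be sketched with the standard two-subiteration Zhang-Suen algorithm; note this is the textbook formulation only, without the Wu-Tsai component or the one-pass combination the authors propose.

```python
def zhang_suen_thin(img):
    """Iteratively peel boundary pixels of a binary image (1 = foreground)
    until no pixel can be removed, yielding a one-pixel-wide skeleton."""
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise from north
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_del = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    p = neighbours(y, x)
                    b = sum(p)
                    # number of 0 -> 1 transitions around the neighbourhood
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0] * p[2] * p[4] == 0 and p[2] * p[4] * p[6] == 0
                    else:
                        cond = p[0] * p[2] * p[6] == 0 and p[0] * p[4] * p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_del.append((y, x))
            for y, x in to_del:
                img[y][x] = 0
            changed = changed or bool(to_del)
    return img

# A 3-pixel-thick bar thins toward a single-pixel-wide stroke.
bar = [[0] * 7,
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0] * 7]
before = sum(map(sum, bar))
skeleton = zhang_suen_thin(bar)
after = sum(map(sum, skeleton))
```

    The two subiterations delete south-east and north-west boundary pixels alternately; the transition-count condition preserves connectivity, which matters for the structural model built from the skeleton.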

  3. Unsupervised learning in persistent sensing for target recognition by wireless ad hoc networks of ground-based sensors

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    In previous work by the author, effective persistent and pervasive sensing for recognition and tracking of battlefield targets was shown to be achievable using intelligent algorithms implemented by distributed mobile agents over a composite system of unmanned aerial vehicles (UAVs) for persistence and a wireless network of unattended ground sensors for pervasive coverage of the mission environment. While simulated performance results for the supervised algorithms of the composite system provide satisfactory target recognition over relatively brief periods of system operation, this performance can degrade by as much as 50% as target dynamics in the environment evolve beyond the period of system operation in which the training data are representative. To overcome this limitation, this paper applies the distributed approach using mobile agents to the network of ground-based wireless sensors alone, without the UAV subsystem, to provide persistent as well as pervasive sensing for target recognition and tracking. The supervised algorithms used in the earlier work are supplanted by unsupervised routines, including competitive-learning neural networks (CLNNs) and new versions of support vector machines (SVMs), for characterization of an unknown target environment. To capture the same physical phenomena from battlefield targets as the composite system, the suite of ground-based sensors can be expanded to include imaging and video capabilities. The spatial density of deployed sensor nodes is increased to allow more precise ground-based location and tracking of detected targets by active nodes. The "swarm" of mobile agents enabling WSN intelligence is organized into three processing stages: detection, recognition, and sustained tracking of ground targets. Features formed from the compressed sensor data are down-selected according to an information-theoretic algorithm that reduces redundancy within the feature set, reducing the dimension of samples used in the target recognition and tracking routines. Target tracking is based on simplified versions of Kalman filtering. Accuracy of recognition and tracking of the implemented versions of the proposed suite of unsupervised algorithms is somewhat degraded from the ideal. Target recognition and tracking by supervised routines and by unsupervised SVM and CLNN routines in the ground-based WSN are evaluated in simulations using published system values and sensor data from vehicular targets in ground-surveillance scenarios. Results are compared with previously published performance for the system of the ground-based sensor network (GSN) and UAV swarm.

  4. Discriminating Phytoplankton Functional Types (PFTs) in the Coastal Ocean Using the Inversion Algorithm Phydotax and Airborne Imaging Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Palacios, Sherry L.; Schafer, Chris; Broughton, Jennifer; Guild, Liane S.; Kudela, Raphael M.

    2013-01-01

    There is a need in the Biological Oceanography community to discriminate among phytoplankton groups within the bulk chlorophyll pool to understand energy flow through ecosystems, to track the fate of carbon in the ocean, and to detect and monitor for harmful algal blooms (HABs). The ocean color community has responded to this demand with the development of phytoplankton functional type (PFT) discrimination algorithms. These PFT algorithms fall into one of three categories depending on the science application: size-based, biogeochemical function, and taxonomy. The new PFT algorithm Phytoplankton Detection with Optics (PHYDOTax) is an inversion algorithm that discriminates taxon-specific biomass to differentiate among six taxa found in the California Current System: diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes, and cyanophytes. PHYDOTax was developed and validated in Monterey Bay, CA for the high resolution imaging spectrometer, Spectroscopic Aerial Mapping System with On-board Navigation (SAMSON - 3.5 nm resolution). PHYDOTax exploits the high spectral resolution of an imaging spectrometer and the improved spatial resolution that airborne data provide for coastal areas. The objective of this study was to apply PHYDOTax to a relatively lower resolution imaging spectrometer to test the algorithm's sensitivity to atmospheric correction, to evaluate capability with other sensors, and to determine whether down-sampling spectral resolution would degrade its ability to discriminate among phytoplankton taxa. This study is a part of the larger Hyperspectral Infrared Imager (HyspIRI) airborne simulation campaign, which is collecting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery aboard NASA's ER-2 aircraft during three seasons in each of two years over terrestrial and marine targets in California. Our aquatic component seeks to develop and test algorithms to retrieve water quality properties (e.g., HABs and river plumes) in both marine and inland water bodies. Results presented are from the 10 April 2013 overflight of the Monterey Bay region and focus primarily on the first objective: sensitivity to atmospheric correction. On-going and future work will continue to evaluate whether PHYDOTax can be applied to historical (SeaWiFS and MERIS), existing (MODIS, VIIRS, and HICO), and future (PACE, GEO-CAPE, and HyspIRI) satellite sensors. Demonstration of cross-platform continuity may aid in calibration and validation efforts for these sensors.

  5. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    PubMed

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g., smile) despite the variability among individuals as well as in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expressions. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  6. The Maximum Likelihood Estimation of Signature Transformation /MLEST/ algorithm. [for affine transformation of crop inventory data

    NASA Technical Reports Server (NTRS)

    Thadani, S. G.

    1977-01-01

    The Maximum Likelihood Estimation of Signature Transformation (MLEST) algorithm is used to obtain maximum likelihood estimates (MLE) of the affine transformation. The algorithm has been evaluated for three sets of data: simulated (training and recognition segment pairs), consecutive-day (data gathered from Landsat images), and geographical-extension (large-area crop inventory experiment) data sets. For each set, MLEST signature extension runs were made to determine MLE values, and the affine-transformed training segment signatures were used to classify the recognition segments. The classification results were used to estimate wheat proportions at 0 and 1% threshold values.

  7. Tracking Small Artists

    NASA Astrophysics Data System (ADS)

    Russell, James C.; Klette, Reinhard; Chen, Chia-Yen

    Tracks of small animals are important in environmental surveillance, where pattern recognition algorithms allow species identification of the individuals creating tracks. These individuals can also be seen as artists, presented in their natural environments with a canvas upon which they can make prints. We present tracks of small mammals and reptiles which have been collected for identification purposes, and re-interpret them from an esthetic point of view. We re-classify these tracks not by their geometric qualities as pattern recognition algorithms would, but through interpreting the 'artist', their brush strokes and intensity. We describe the algorithms used to enhance and present the work of the 'artists'.

  8. Three-dimensional fingerprint recognition by using convolution neural network

    NASA Astrophysics Data System (ADS)

    Tian, Qianyu; Gao, Nan; Zhang, Zonghua

    2018-01-01

    With the development of science and technology and the growth of social informatization, fingerprint recognition technology has become a hot research direction and has been widely applied in many practical fields because of its feasibility and reliability. The traditional two-dimensional (2D) fingerprint recognition method relies on matching feature points. This method is not only time-consuming but also loses the three-dimensional (3D) information of the fingerprint; under fingerprint rotation, scaling, damage, and other issues, its robustness declines seriously. To solve these problems, 3D fingerprints have been used for human recognition. Because this is a new research field, there are still many challenging problems in 3D fingerprint recognition. This paper presents a new 3D fingerprint recognition method using a convolutional neural network (CNN). The 2D fingerprint image and the fingerprint depth map are each fed into a CNN, their features are then fused by another CNN, and the fused features are classified to complete 3D fingerprint recognition. This method not only preserves the 3D information of fingerprints but also solves the problem of CNN input. Moreover, the recognition process is simpler than the traditional feature point matching algorithm. The 3D fingerprint recognition rate obtained with the CNN is compared with that of other fingerprint recognition algorithms. The experimental results show that the proposed 3D fingerprint recognition method has a good recognition rate and robustness.

  9. Atmospheric turbulence and sensor system effects on biometric algorithm performance

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy

    2015-05-01

    Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, in the case of biometric technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation of biometric sensor systems.

  10. Real-time skeleton tracking for embedded systems

    NASA Astrophysics Data System (ADS)

    Coleca, Foti; Klement, Sascha; Martinetz, Thomas; Barth, Erhardt

    2013-03-01

    Touch-free gesture technology is beginning to become more popular with consumers and may have a significant future impact on interfaces for digital photography. However, almost every commercial software framework for gesture and pose detection is aimed at either desktop PCs or high-powered GPUs, making mobile implementations for gesture recognition an attractive area for research and development. In this paper we present an algorithm for hand skeleton tracking and gesture recognition that runs on an ARM-based platform (Pandaboard ES, OMAP 4460 architecture). The algorithm uses self-organizing maps to fit a given topology (skeleton) into a 3D point cloud. This is a novel way of approaching the problem of pose recognition, as it does not employ complex optimization techniques or data-based learning. After an initial background segmentation step, the algorithm is run in parallel with heuristics, which detect and correct artifacts arising from insufficient or erroneous input data. We then optimize the algorithm for the ARM platform using fixed-point computation and the NEON SIMD architecture that the OMAP4460 provides. We tested the algorithm with two different depth-sensing devices (Microsoft Kinect, PMD Camboard). For both input devices we were able to accurately track the skeleton at the native framerate of the cameras.
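    The core idea of fitting a skeleton topology into a 3D point cloud with a self-organizing map can be illustrated with a minimal sketch. Assumptions here: a simple 1-D chain topology, a synthetic straight "finger" point cloud, and hand-picked learning-rate and neighborhood schedules; the paper's actual topology and parameters are not given in the abstract.

```python
import numpy as np

def fit_chain_som(points, n_nodes=8, iters=2000, lr=0.5, sigma=2.0):
    """Fit a 1-D chain of SOM nodes to a point cloud: for each sample, pull the
    winning node and its chain neighbours toward the sample."""
    rng = np.random.default_rng(1)
    nodes = rng.uniform(points.min(0), points.max(0), size=(n_nodes, points.shape[1]))
    idx = np.arange(n_nodes)
    for t in range(iters):
        frac = t / iters
        p = points[rng.integers(len(points))]
        winner = np.argmin(np.linalg.norm(nodes - p, axis=1))
        # neighbourhood width and learning rate both decay over time
        h = np.exp(-((idx - winner) ** 2) / (2 * (sigma * (1 - frac) + 0.1) ** 2))
        nodes += (lr * (1 - frac) + 0.01) * h[:, None] * (p - nodes)
    return nodes

# Synthetic "finger": points along a 3-D segment with a little spread.
rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=(500, 1))
cloud = t * np.array([10.0, 0.0, 0.0]) + rng.normal(0, 0.1, size=(500, 3))
chain = fit_chain_som(cloud)
```

    After training, the chain nodes spread out along the segment, so the node positions act as a skeleton estimate without any gradient-based optimization or learned model.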

  11. Antigen-Specific Interferon-Gamma Responses and Innate Cytokine Balance in TB-IRIS

    PubMed Central

    Goovaerts, Odin; Jennes, Wim; Massinga-Loembé, Marguerite; Ceulemans, Ann; Worodria, William; Mayanja-Kizza, Harriet; Colebunders, Robert; Kestens, Luc

    2014-01-01

    Background Tuberculosis-associated immune reconstitution inflammatory syndrome (TB-IRIS) remains a poorly understood complication in HIV-TB patients receiving antiretroviral therapy (ART). TB-IRIS could be associated with an exaggerated immune response to TB-antigens. We compared the recovery of IFNγ responses to recall and TB-antigens and explored in vitro innate cytokine production in TB-IRIS patients. Methods In a prospective cohort study of HIV-TB co-infected patients treated for TB before ART initiation, we compared 18 patients who developed TB-IRIS with 18 non-IRIS controls matched for age, sex and CD4 count. We analyzed IFNγ ELISpot responses to CMV, influenza, TB and LPS before ART and during TB-IRIS. CMV and LPS stimulated ELISpot supernatants were subsequently evaluated for production of IL-12p70, IL-6, TNFα and IL-10 by Luminex. Results Before ART, all responses were similar between TB-IRIS patients and non-IRIS controls. During TB-IRIS, IFNγ responses to TB and influenza antigens were comparable between TB-IRIS patients and non-IRIS controls, but responses to CMV and LPS remained significantly lower in TB-IRIS patients. Production of innate cytokines was similar between TB-IRIS patients and non-IRIS controls. However, upon LPS stimulation, IL-6/IL-10 and TNFα/IL-10 ratios were increased in TB-IRIS patients compared to non-IRIS controls. Conclusion TB-IRIS patients did not display excessive IFNγ responses to TB-antigens. In contrast, the reconstitution of CMV and LPS responses was delayed in the TB-IRIS group. For LPS, this was linked with a pro-inflammatory shift in the innate cytokine balance. These data are in support of a prominent role of the innate immune system in TB-IRIS. PMID:25415590

  12. Sensor-Aware Recognition and Tracking for Wide-Area Augmented Reality on Mobile Phones

    PubMed Central

    Chen, Jing; Cao, Ruochen; Wang, Yongtian

    2015-01-01

    Wide-area registration in outdoor environments on mobile phones is a challenging task in mobile augmented reality. We present a sensor-aware large-scale outdoor augmented reality system for recognition and tracking on mobile phones. GPS and gravity information is used to improve the VLAD performance for recognition. A sensor-aware VLAD algorithm, self-adaptive to scenes at different scales, is utilized to recognize complex scenes. Because vision-based registration algorithms are fragile and tend to drift, data coming from inertial sensors and vision are fused together by an extended Kalman filter (EKF) to achieve considerable improvements in tracking stability and robustness. Experimental results show that our method greatly enhances the recognition rate and eliminates tracking jitter. PMID:26690439
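    As a rough illustration of the filtering involved in such sensor fusion, here is a minimal linear Kalman filter tracking 1-D position from noisy position fixes (an EKF reduces to this form for a linear model). The constant-velocity motion model and the noise levels are invented for the example; the paper's state vector and measurement models are not given in the abstract.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.01]])                  # measurement noise covariance

x = np.zeros((2, 1))                    # state estimate [position, velocity]
P = np.eye(2)                           # state covariance

rng = np.random.default_rng(0)
for k in range(1, 101):
    true_pos = 1.0 * k * dt             # target moves at 1 unit/s
    z = true_pos + rng.normal(0, 0.1)   # noisy "vision" position fix
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(x.ravel())  # approximately [10, 1]
```

    In the paper's system the state would include camera pose, with inertial data driving the prediction step and visual registration providing the update; the jitter reduction comes from this smoothing.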

  13. Sensor-Aware Recognition and Tracking for Wide-Area Augmented Reality on Mobile Phones.

    PubMed

    Chen, Jing; Cao, Ruochen; Wang, Yongtian

    2015-12-10

    Wide-area registration in outdoor environments on mobile phones is a challenging task in mobile augmented reality. We present a sensor-aware large-scale outdoor augmented reality system for recognition and tracking on mobile phones. GPS and gravity information is used to improve the VLAD performance for recognition. A sensor-aware VLAD algorithm, self-adaptive to scenes at different scales, is utilized to recognize complex scenes. Because vision-based registration algorithms are fragile and tend to drift, data coming from inertial sensors and vision are fused together by an extended Kalman filter (EKF) to achieve considerable improvements in tracking stability and robustness. Experimental results show that our method greatly enhances the recognition rate and eliminates tracking jitter.

  14. Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems

    DTIC Science & Technology

    2010-02-21

    …RKF45] and Adams variable step-size predictor-corrector methods). While such algorithms naturally are usually used to numerically solve differential… verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive… Contract number: N00014-09-C-0394. Authors: Dr. Dimitri N. Mavris, Dr. Yongchang Li.
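    The predictor-corrector idea mentioned in this fragment can be sketched with the classic two-step Adams-Bashforth predictor followed by an Adams-Moulton (trapezoidal) corrector in PECE form. This is a textbook illustration, not the IRIS implementation.

```python
import math

def pece(f, y0, t0, t1, n):
    """Two-step Adams-Bashforth predictor with Adams-Moulton corrector (PECE)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    f_prev = f(t, y)
    # bootstrap the first step with Heun's method
    y = y + h / 2 * (f_prev + f(t + h, y + h * f_prev))
    t += h
    for _ in range(n - 1):
        f_curr = f(t, y)
        # predict: 2-step Adams-Bashforth
        yp = y + h / 2 * (3 * f_curr - f_prev)
        # correct: trapezoidal Adams-Moulton, evaluated at the prediction
        y = y + h / 2 * (f_curr + f(t + h, yp))
        f_prev = f_curr
        t += h
    return y

# dy/dt = -y, y(0) = 1  ->  y(1) = e^-1
approx = pece(lambda t, y: -y, 1.0, 0.0, 1.0, 200)
print(approx, math.exp(-1))
```

    The corrector "verifies" the prediction with another function evaluation, which is exactly why these methods cost more per step than a pure explicit multistep scheme.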

  15. Investigating the performance of wavelet neural networks in ionospheric tomography using IGS data over Europe

    NASA Astrophysics Data System (ADS)

    Ghaffari Razin, Mir Reza; Voosoghi, Behzad

    2017-04-01

    Ionospheric tomography is a very cost-effective method that is frequently used for modeling electron density distributions. In this paper, the residual minimization training neural network (RMTNN) is used in voxel-based ionospheric tomography. Because a wavelet neural network (WNN) with the back-propagation (BP) algorithm is used in the RMTNN method, the new method is named modified RMTNN (MRMTNN). To train the WNN with the BP algorithm, two cost functions are defined: a total and a vertical cost function. Using minimization of the cost functions, temporal and spatial ionospheric variations are studied. The GPS measurements of the International GNSS Service (IGS) in central Europe have been used for constructing a 3-D image of the electron density. Three days (2009.04.15, 2011.07.20 and 2013.06.01) with different solar activity indices were used for the processing. To validate and better assess the reliability of the proposed method, 4 ionosonde and 3 testing stations have been used. Also, the results of MRMTNN have been compared with those of the RMTNN method, the international reference ionosphere model 2012 (IRI-2012), and the spherical cap harmonic (SCH) method as a local ionospheric model. The comparison of MRMTNN results with the RMTNN, IRI-2012 and SCH models shows that the root mean square error (RMSE) and standard deviation of the proposed approach are superior to those of the traditional methods.

  16. Iris reconstruction combined with iris-claw intraocular lens implantation for the management of iris-lens injured patients.

    PubMed

    Hu, Shufang; Wang, Mingling; Xiao, Tianlin; Zhao, Zhenquan

    2016-03-01

    To study the efficiency and safety of iris reconstruction combined with iris-claw intraocular lens (IOL) implantation in patients with iris-lens injuries. Retrospective, noncomparative consecutive case series study. Eleven patients (11 eyes) with iris-lens injuries underwent iris reconstruction combined with iris-claw IOL implantation. Clinical data, such as cause and time of injury, visual acuity (VA), iris and lens injuries, surgical intervention, follow-up period, corneal endothelial cell count, and optical coherence tomography, were collected. Uncorrected VA (UCVA) in all injured eyes before the combined surgery was equal to or worse than 20/1000. Within a 1.1-4.2-year follow-up period, a significant increase in UCVA, to 20/66 or better, was observed in six cases (55%), and in best-corrected VA (BCVA) in nine cases (82%). Postoperative BCVA was 20/40 or better in seven cases (64%). After the combined surgery, the iris returned to its natural round shape or a smaller pupil, and the iris-claw IOLs in the 11 eyes were well positioned on the anterior surface of the reconstructed iris. No complications occurred in these patients. Iris reconstruction combined with iris-claw IOL implantation is a safe and efficient procedure for an eye with iris-lens injury in the absence of capsular support.

  17. Selection of Norway spruce somatic embryos by computer vision

    NASA Astrophysics Data System (ADS)

    Hamalainen, Jari J.; Jokinen, Kari J.

    1993-05-01

    A computer vision system was developed for the classification of plant somatic embryos. The embryos are in a Petri dish that is transferred with constant speed, and they are recognized as they pass a line scan camera. A classification algorithm needs to be installed for every plant species. This paper describes an algorithm for the recognition of Norway spruce (Picea abies) embryos. A short review of conifer micropropagation by somatic embryogenesis is also given. The recognition algorithm is based on features calculated from the boundary of the object. Only the part of the boundary corresponding to the developing cotyledons (2-15) and the straight sides of the embryo are used for recognition. An index of the length of the cotyledons describes the developmental stage of the embryo. The testing set for classifier performance consisted of 118 embryos and 478 nonembryos. With the classification tolerances chosen, 69% of the objects classified as embryos by a human classifier were selected and 31% rejected. Less than 1% of the nonembryos were classified as embryos. The basic features developed can probably be easily adapted for the recognition of other conifer somatic embryos.

  18. A fusion approach for coarse-to-fine target recognition

    NASA Astrophysics Data System (ADS)

    Folkesson, Martin; Grönwall, Christina; Jungert, Erland

    2006-04-01

    A fusion approach in a query based information system is presented. The system is designed for querying multimedia databases and is here applied to target recognition using heterogeneous data sources. The recognition process is coarse-to-fine, with an initial attribute estimation step and a following matching step. Several sensor types and algorithms are involved in each of these two steps. The matching results are observed to be independent of the origin of the estimation results. This allows data to be distributed between algorithms in an intermediate fusion step, without risk of data incest, and increases the overall chance of recognising the target. An implementation of the system is described.

  19. Document Form and Character Recognition using SVM

    NASA Astrophysics Data System (ADS)

    Park, Sang-Sung; Shin, Young-Geun; Jung, Won-Kyo; Ahn, Dong-Kyu; Jang, Dong-Sik

    2009-08-01

    With the development of computers and information communication, EDI (Electronic Data Interchange) has been advancing. OCR (Optical Character Recognition) is a pattern recognition technology used for EDI, and it has helped turn much of the manual work of the past into automation. However, to build a more complete document database, considerable manual work is still needed to exclude erroneous recognitions. To resolve this problem, we propose a document form based character recognition method in this study. The proposed method is divided into a document form recognition part and a character recognition part. In particular, in character recognition, characters are converted to binary form and the SVM algorithm is used to extract more accurate feature values.
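    The abstract's description of the SVM step is terse; as a generic illustration of SVM classification of character feature vectors, here is a minimal linear SVM trained by subgradient descent on the regularized hinge loss. The 2-D features are invented stand-ins, not real character data, and this is not the authors' pipeline.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Full-batch subgradient descent on the regularised hinge loss.
    Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # samples inside the margin
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n)
        b -= lr * (-y[viol].sum() / n)
    return w, b

# Invented 2-D "feature vectors" for two character classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.3, size=(40, 2)),
               rng.normal([-2, -2], 0.3, size=(40, 2))])
y = np.array([1] * 40 + [-1] * 40)
w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
print(acc)  # → 1.0
```

    A real system would use higher-dimensional features extracted from binarized character images, and typically a kernelized SVM rather than this bare linear sketch.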

  20. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    NASA Astrophysics Data System (ADS)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
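    The regularization described above, which blends each class covariance with the pooled covariance so a single parameter interpolates between QDA and LDA, can be sketched as follows. The shrinkage form Sigma_k(lam) = (1 - lam) * Sigma_k + lam * Sigma_pooled is a standard RDA formulation; the Gabor features, boosting, and PSO parameter search of the paper are omitted, and the toy data are invented.

```python
import numpy as np

def rda_fit(X, y, lam=0.5):
    """Per-class Gaussians with covariances shrunk toward the pooled covariance.
    lam = 0 gives QDA-style covariances, lam = 1 gives LDA-style ones."""
    pooled = np.cov(X.T, bias=True)
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        S = (1 - lam) * np.cov(Xc.T, bias=True) + lam * pooled
        model[c] = (Xc.mean(axis=0), np.linalg.inv(S),
                    np.log(np.linalg.det(S)), np.log(len(Xc) / len(X)))
    return model

def rda_predict(model, x):
    def score(c):
        mu, Sinv, logdet, logprior = model[c]
        d = x - mu
        return -0.5 * (d @ Sinv @ d) - 0.5 * logdet + logprior
    return max(model, key=score)

# Two invented 2-D classes; queries near each cluster centre.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, size=(60, 2)),
               rng.normal([3, 3], 0.5, size=(60, 2))])
y = np.array([0] * 60 + [1] * 60)
model = rda_fit(X, y, lam=0.5)
print(rda_predict(model, np.array([0.1, -0.2])),
      rda_predict(model, np.array([2.9, 3.2])))  # → 0 1
```

    The shrinkage is what stabilizes the per-class covariance estimates when the sample size is small relative to the feature dimension, which is the ill-posedness the paper addresses; the paper additionally tunes the regularization parameters with PSO.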

  1. Performance improvement of multi-class detection using greedy algorithm for Viola-Jones cascade selection

    NASA Astrophysics Data System (ADS)

    Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.

    2018-04-01

    This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is shown on the problem of detection and recognition of bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
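    The greedy bandit formulation can be illustrated with a minimal epsilon-greedy sketch in which each arm is a candidate Viola-Jones cascade and the reward is a successful detection. The per-cascade success rates, epsilon, and step count here are invented; the paper's exact selection rule is not given in the abstract.

```python
import random

def epsilon_greedy(cascade_rates, epsilon=0.1, steps=2000, seed=0):
    """N-armed bandit: each arm is a detector cascade; reward is a successful
    detection, modelled as Bernoulli with an unknown per-cascade rate."""
    rng = random.Random(seed)
    n = len(cascade_rates)
    counts = [0] * n
    values = [0.0] * n
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                        # explore a random cascade
        else:
            arm = max(range(n), key=lambda i: values[i])  # exploit best so far
        reward = 1.0 if rng.random() < cascade_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return counts, values

counts, values = epsilon_greedy([0.2, 0.5, 0.8])
```

    Over time the scheduler concentrates frames on the cascade that actually detects objects in the current stream, which is the adaptivity the paper exploits for multi-class detection.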

  2. [State Recognition of Solid Fermentation Process Based on Near Infrared Spectroscopy with Adaboost and Spectral Regression Discriminant Analysis].

    PubMed

    Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui

    2016-01-01

    In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein by means of Fourier transform near infrared (FT-NIR) spectroscopy. More specifically, FT-NIR spectroscopy combined with the Adaboost-SRDA-NN integrated learning algorithm was used as an analysis tool to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. First, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near infrared spectrometer (Antaris II) and preprocessed with the standard normal variate transformation (SNV) algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as a basic classifier, and a state recognition model was built to identify the different fermentation samples in the validation set. Experimental results showed that the SRDA-NN model revealed superior performance compared with two other NN models, which were developed using feature information from principal component analysis (PCA) and linear discriminant analysis (LDA); the correct recognition rate of the SRDA-NN model reached 94.28% in the validation set. To further improve the recognition accuracy of the final model, the Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and the presented algorithm was used to construct an online monitoring model of the process state of SSF of feed protein. Experimental results showed that the prediction performance of the SRDA-NN model was further enhanced by the Adaboost lifting algorithm, and the correct recognition rate of the Adaboost-SRDA-NN model reached 100% in the validation set. The overall results demonstrate that the SRDA algorithm can effectively extract spectral feature information for dimension reduction in the calibration of qualitative NIR spectroscopy models. In addition, the Adaboost lifting algorithm can improve the classification accuracy of the final model. The results obtained in this work can provide a research foundation for developing online monitoring instruments for SSF processes.
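    The Adaboost lifting step can be illustrated with a minimal AdaBoost sketch over one-dimensional threshold stumps, which stand in for the SRDA-NN base classifier (not reproduced here). The tiny interval-labelled dataset is invented to show boosting combining weak learners that are individually imperfect.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=3):
    """AdaBoost with axis-aligned threshold stumps; labels y in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()      # weighted training error
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # weak learner weight
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)            # up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    agg = np.zeros(len(X))
    for alpha, j, thr, sign in ensemble:
        agg += alpha * sign * np.where(X[:, j] >= thr, 1, -1)
    return np.sign(agg)

# Interval labels: no single stump is perfect, but three boosted rounds are.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, 1, 1, -1])
ens = adaboost_stumps(X, y, rounds=3)
print(adaboost_predict(ens, X))  # → [-1.  1.  1. -1.]
```

    The reweighting that concentrates later rounds on previously misclassified samples is the "lifting" effect that raised the SRDA-NN recognition rate in the paper.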

  3. Three-Dimensional Morphometric Analysis of the Iris by Swept-Source Anterior Segment Optical Coherence Tomography in a Caucasian Population.

    PubMed

    Invernizzi, Alessandro; Giardini, Piero; Cigada, Mario; Viola, Francesco; Staurenghi, Giovanni

    2015-07-01

    We used swept-source anterior segment optical coherence tomography (SS-ASOCT) to analyze three-dimensional iris morphology in a Caucasian population, and correlated the findings with iris color, iris sector, subject age, and sex. One eye each from consecutive healthy emmetropic (refractive spherical equivalent ± 3 diopters) volunteers was selected for the study. The enrolled eye underwent standardized anterior segment photography to assess iris color. Iris images were assessed by SS-ASOCT for volume, thickness, width, and pupil size. Sectoral variations of morphometric data among the superior, nasal, inferior, and temporal sectors were recorded. A total of 135 eyes from 57 males and 78 females, age 49 ± 17 years, fulfilled the inclusion criteria. All iris morphometric parameters varied significantly among the different sectors (all P < 0.0001). Iris total volume and thickness were significantly correlated with increasingly darker pigmentation (P < 0.0001 and P = 0.0384, respectively). Neither width nor pupil diameter was influenced by iris color. Age did not affect iris volume or thickness; iris width increased and pupil diameter decreased with age (rs = 0.52 and rs = -0.58, respectively). There was no effect of sex on iris volume, thickness, or pupil diameter; iris width was significantly greater in males (P = 0.007). Morphology of the iris varied by iris sector, and iris color was associated with differences in iris volume and thickness. Morphological parameter variations associated with iris color, sector, age, and sex can be used to identify pathological changes in suspect eyes. To be effective in clinical settings, construction of iris morphological databases for different ethnic and racial populations is essential.

  4. An improved silhouette for human pose estimation

    NASA Astrophysics Data System (ADS)

    Hawes, Anthony H.; Iftekharuddin, Khan M.

    2017-08-01

    We propose a novel method for analyzing images that exploits the natural lines of a human pose to find areas where self-occlusion could be present. Self-occlusion causes several modern human pose estimation methods to misidentify body parts, which reduces the performance of most action recognition algorithms. Our method is motivated by the observation that, in several cases, occlusion can be reasoned about using only the boundary lines of limbs. An intelligent edge detection algorithm based on this principle could be used to augment the silhouette with information useful for pose estimation algorithms and push forward progress on occlusion handling for human action recognition. The algorithm described is applicable to computer vision scenarios involving 2D images and (appropriately flattened) 3D images.

  5. Optimization of internet content filtering-Combined with KNN and OCAT algorithms

    NASA Astrophysics Data System (ADS)

    Guo, Tianze; Wu, Lingjing; Liu, Jiaming

    2018-04-01

    Faced with rampant illegal content on the Internet, traditional filtering approaches based on keyword recognition and manual screening perform ever more poorly. This paper therefore uses the OCAT algorithm, nested with the KNN classification algorithm, to construct a corpus training library that can dynamically learn and update. This keeps the filter corpus current against the constantly updated illegal content of the network, including text and pictures, and thus better filters illegal content and traces its sources. Future work will focus on simplifying and updating the recognition and comparison algorithms and on optimizing the corpus learning ability, in order to improve filtering efficiency and save time and resources.
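The KNN voting step the abstract relies on can be sketched over bag-of-words vectors. The vocabulary, training snippets, and labels below are invented for illustration, and the OCAT nesting is omitted:

```python
import numpy as np

def bag_of_words(texts, vocab):
    """Term-count vectors over a fixed vocabulary."""
    return np.array([[t.split().count(w) for w in vocab] for t in texts], float)

def knn_label(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training vectors (Euclidean distance)."""
    idx = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    votes = [train_y[i] for i in idx]
    return max(set(votes), key=votes.count)

vocab = ["win", "prize", "free", "meeting", "report", "schedule"]
train = ["win free prize now", "free prize win", "free win",
         "meeting schedule report", "report for the meeting", "schedule the report"]
labels = ["illegal", "illegal", "illegal", "ok", "ok", "ok"]

X = bag_of_words(train, vocab)
x = bag_of_words(["claim your free prize"], vocab)[0]
print(knn_label(X, labels, x))  # illegal
```

A dynamically learning corpus, as described, would append newly confirmed samples to `train`/`labels` between queries.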

  6. A Fuzzy Approach For Facial Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Gîlcă, Gheorghe; Bîzdoacă, Nicu-George

    2015-09-01

    This article deals with an emotion recognition system based on fuzzy sets. Human faces are detected in images with the Viola-Jones algorithm, and the Camshift algorithm is used to track them in video sequences. The detected human faces are passed to the decisional fuzzy system, which is based on fuzzified measurements of facial variables: eyebrow, eyelid, and mouth. The system can easily determine the emotional state of a person.
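A minimal sketch of the fuzzy-inference idea, with triangular membership functions and made-up thresholds for two facial measurements (the article's actual membership functions and rule base are not reproduced):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def emotion(mouth_open, eyebrow_raise):
    """Tiny two-rule Mamdani-style inference over normalised measurements."""
    happy = min(tri(mouth_open, 0.4, 0.7, 1.0), tri(eyebrow_raise, 0.0, 0.2, 0.5))
    surprised = min(tri(mouth_open, 0.5, 0.9, 1.2), tri(eyebrow_raise, 0.4, 0.8, 1.2))
    return "surprised" if surprised > happy else "happy"

print(emotion(0.9, 0.9))  # surprised
```

A real system would fire many such rules and defuzzify the aggregated memberships rather than take a hard maximum.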

  7. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    NASA Astrophysics Data System (ADS)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  8. Automated thematic mapping and change detection of ERTS-A images

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. In the first part of the investigation, spatial and spectral features were developed and employed to automatically recognize terrain features through a clustering algorithm. In this part of the investigation, the size of the cell, i.e., the number of digital picture elements used for computing the spatial and spectral features, was varied. It was determined that the accuracy of terrain recognition decreases slowly as the cell size is reduced, coinciding with increased cluster diffuseness. It was also shown that a cell size of 17 x 17 pixels, when used with the clustering algorithm, results in high recognition rates for major terrain classes. ERTS-1 data from five diverse geographic regions of the United States were processed through the clustering algorithm with 17 x 17 pixel cells. Simple land use maps were produced, and the average terrain recognition accuracy was 82 percent.
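The clustering step can be illustrated with plain k-means over per-cell feature vectors; the two-dimensional "spectral/spatial" features and class geometry below are hypothetical:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: assign each cell's feature vector to the nearest centroid."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]      # initial centroids
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(0)        # move centroid to cluster mean
    return labels, C

# hypothetical per-cell features (e.g. mean band intensity, texture energy)
rng = np.random.default_rng(1)
water = rng.normal([0.2, 0.1], 0.03, (30, 2))
urban = rng.normal([0.7, 0.8], 0.03, (30, 2))
X = np.vstack([water, urban])
labels, _ = kmeans(X, 2)
```

In the study each "point" would be a feature vector computed from one 17 x 17 pixel cell, and each cluster a terrain class.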

  9. Component Pin Recognition Using Algorithms Based on Machine Learning

    NASA Astrophysics Data System (ADS)

    Xiao, Yang; Hu, Hong; Liu, Ze; Xu, Jiangchang

    2018-04-01

    The purpose of machine vision for a plug-in machine is to improve the machine's stability and accuracy, and recognition of the component pin is an important part of that vision. This paper focuses on component pin recognition using three different techniques. The first involves traditional image processing, with binary large object (BLOB) analysis as the core algorithm. The second uses the histogram of oriented gradients (HOG) and experimentally compares support vector machine (SVM) and adaptive boosting (AdaBoost) classifiers. The third is a deep learning method, the convolutional neural network (CNN), which identifies the pin by comparing samples against its training data. The main purpose of the research presented in this paper is to increase the knowledge of learning methods used in the plug-in machine industry in order to achieve better results.

  10. A new computerized ionosphere tomography model using the mapping function and an application to the study of seismic-ionosphere disturbance

    NASA Astrophysics Data System (ADS)

    Kong, Jian; Yao, Yibin; Liu, Lei; Zhai, Changzhi; Wang, Zemin

    2016-08-01

    A new algorithm for ionosphere tomography using the mapping function is proposed in this paper. First, the new solution splits the integration process into four layers along the observation ray; then, the single-layer model (SLM) is applied to each integration part using a mapping function. Next, the model parameters are estimated layer by layer with the Kalman filtering method, introducing a scale factor (SF) γ to solve the ill-posed problem. Finally, the inverted images of the different layers are combined into the final CIT image. We utilized simulated data from 23 IGS GPS stations around Europe to verify the estimation accuracy of the new algorithm; the results show that the new CIT model has better accuracy than the SLM in dense data areas and that the CIT residuals are more closely grouped. The stability of the new algorithm is discussed by analyzing model accuracy under different error levels (maximum errors of 5, 10, and 15 TECU). In addition, the key preset parameter, the SF γ, is given by the International Reference Ionosphere model (IRI2012), and an experiment is designed to test the sensitivity of the new algorithm to SF variations; the results show that IRI2012 is capable of providing initial SF values. This paper also studies the seismic-ionosphere disturbance (SID) of the 2011 Japan earthquake using the new CIT algorithm. Combined with the TEC time sequence of Sat.15, we find that the SID occurrence time and reaction area are highly related to the main shock time and epicenter. According to the CIT images, there is a clear upward movement of electron density (from the 150-km layer to the 450-km layer) during this SID event; however, the peak value areas differ among the layers, which means that the horizontal movement velocity is not consistent among the layers. The potential physical triggering mechanism is also discussed in this paper.
Compared with the SLM, the RMS of the new CIT model is improved by 16.78%, and the CIT model can provide the three-dimensional variation of the ionosphere.
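The single-layer mapping function at the heart of the approach can be sketched directly. The thin-shell geometry below is the standard one; the per-layer VTEC values and elevation angle are made up, and the Kalman filtering and SF estimation are not reproduced:

```python
import math

R_E = 6371.0  # mean Earth radius, km

def slant_factor(elev_deg, shell_height_km):
    """Single-layer mapping function: ratio of slant to vertical TEC
    for a thin ionospheric shell at the given height."""
    z = math.asin(R_E / (R_E + shell_height_km) * math.cos(math.radians(elev_deg)))
    return 1.0 / math.cos(z)

# Split one slant ray across four shells (e.g. 150/250/350/450 km) and
# recombine: the slant TEC is the sum of each layer's VTEC times its factor.
layers = [150, 250, 350, 450]
vtec = [3.0, 8.0, 6.0, 2.0]   # hypothetical per-layer vertical TEC (TECU)
stec = sum(v * slant_factor(30.0, h) for v, h in zip(vtec, layers))
print(round(stec, 2))
```

Inverting this relation per layer (estimating each layer's VTEC from many such slant observations) is what the Kalman filter does in the paper.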

  11. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily decrease learning ability and recognition precision. Aiming at this problem, we propose a new optimized RBF neural network algorithm based on the genetic algorithm (GA-RBF), which uses a genetic algorithm to optimize the weights and structure of the RBF network simultaneously through a new hybrid encoding scheme: binary encoding for the number of hidden-layer neurons and real encoding for the connection weights. Because the connection-weight optimization is not complete, the least mean square (LMS) algorithm is then used for further learning, yielding the final model. Tests on two UCI standard data sets show that the new algorithm improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
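A toy version of the hybrid GA idea: binary genes choose which candidate RBF centres are active, ordinary least squares stands in for the LMS weight-learning stage, and the fitness penalises network size. The candidate centres, rates, and target function are all invented:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 60)
y = np.sin(X)

cands = np.linspace(-3, 3, 10)                    # candidate RBF centres

def design(mask):
    """Hidden-layer outputs for the centres selected by the binary mask."""
    C = cands[mask.astype(bool)]
    return np.exp(-(X[:, None] - C[None, :]) ** 2)

def fitness(mask):
    if mask.sum() == 0:
        return -1e9                               # empty network is invalid
    Phi = design(mask)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares stand-in for LMS
    mse = np.mean((Phi @ w - y) ** 2)
    return -mse - 0.001 * mask.sum()              # penalise larger networks

pop = rng.integers(0, 2, (20, cands.size))        # binary-encoded population
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]       # truncation selection
    kids = parents[rng.integers(0, 10, 20)].copy()
    flip = rng.random(kids.shape) < 0.1           # bit-flip mutation
    kids[flip] ^= 1
    pop = kids

best = max(pop, key=fitness)
print(fitness(best) > -0.1)
```

The paper's scheme additionally evolves real-coded connection weights alongside the binary structure genes; here the weights are simply re-solved for each candidate structure.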

  12. Adaptive signal processing at NOSC

    NASA Astrophysics Data System (ADS)

    Albert, T. R.

    1992-03-01

    Adaptive signal processing work at the Naval Ocean Systems Center (NOSC) dates back to the late 1960s. It began as an IR/IED project by John McCool, who made use of an adaptive algorithm that had been developed by Professor Bernard Widrow of Stanford University. In 1972, a team led by McCool built the first hardware implementation of the algorithm that could process in real time at acoustic bandwidths. Early tests with the two units that were built were extremely successful and attracted much attention. Sponsors from different commands provided funding to develop hardware for submarine, surface ship, airborne, and other systems. In addition, an effort was initiated to analyze the performance and behavior of the algorithm. Most of the hardware development and analysis efforts were active through the 1970s, and a few into the 1980s. One of the original programs continues to this date.
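The Widrow LMS algorithm referred to above is compact enough to sketch as an adaptive noise canceller; the filter length, step size, and toy signals are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 50)                   # wanted signal
noise = rng.normal(size=n)
d = signal + np.convolve(noise, [0.6, 0.3], 'same')   # primary: signal + filtered noise
x = noise                                             # reference: correlated noise only

# LMS: adapt FIR weights so y tracks the noise component of d; e = d - y -> signal
w = np.zeros(4)
mu = 0.01
e = np.zeros(n)
for k in range(3, n):
    xk = x[k - 3:k + 1][::-1]                         # most recent 4 reference samples
    e[k] = d[k] - w @ xk
    w += 2 * mu * e[k] * xk                           # Widrow-Hoff update

err_before = np.mean((d[1000:] - signal[1000:]) ** 2)
err_after = np.mean((e[1000:] - signal[1000:]) ** 2)
print(err_after < err_before)  # True: noise power is reduced after adaptation
```

This is the same structure that the NOSC hardware implemented in real time at acoustic bandwidths.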

  13. Iris reconstruction combined with iris-claw intraocular lens implantation for the management of iris-lens injured patients

    PubMed Central

    Hu, Shufang; Wang, Mingling; Xiao, Tianlin; Zhao, Zhenquan

    2016-01-01

    Aim: To study the efficiency and safety of iris reconstruction combined with iris-claw intraocular lens (IOL) implantation in the patients with iris-lens injuries. Settings and Design: Retrospective, noncomparable consecutive case series study. Materials and Methods: Eleven patients (11 eyes) following iris-lens injuries underwent iris reconstructions combined with iris-claw IOL implantations. Clinical data, such as cause and time of injury, visual acuity (VA), iris and lens injuries, surgical intervention, follow-up period, corneal endothelial cell count, and optical coherence tomography, were collected. Results: Uncorrected VA (UCVA) in all injured eyes before combined surgery was equal to or <20/1000. Within a 1.1–4.2-year follow-up period, a significant increase, equal to or better than 20/66, in UCVA was observed in six (55%) cases, and in best-corrected VA (BCVA) was observed in nine (82%) cases. Postoperative BCVA was 20/40 or better in seven cases (64%). After combined surgery, the iris returned to its natural round shape or smaller pupil, and the iris-claw IOLs in the 11 eyes were well-positioned on the anterior surface of reconstructed iris. No complications occurred in those patients. Conclusions: Iris reconstruction combined with iris-claw IOL implantation is a safe and efficient procedure for an eye with iris-lens injury in the absence of capsular support. PMID:27146932

  14. Image preprocessing study on KPCA-based face recognition

    NASA Astrophysics Data System (ADS)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper researches a face recognition system comprising face detection, feature extraction, and recognition, mainly by studying the theory and key technology of various preprocessing methods in the face detection process; using the KPCA method, it focuses on how recognition results differ across preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with erosion and dilation (opening and closing operations) and an illumination compensation method, and then apply face recognition based on kernel principal component analysis, with experiments carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel method based on the PCA algorithm makes the extracted features represent the original image information better through nonlinear feature extraction, thereby obtaining a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power of the polynomial kernel function can affect the recognition result.
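The kernel PCA core, with the polynomial kernel whose power the abstract says affects recognition, can be sketched as follows (random vectors stand in for face features; this is not the paper's pipeline):

```python
import numpy as np

def kpca(X, degree=2, n_components=2):
    """Kernel PCA with the polynomial kernel k(a, b) = (a.b + 1)^degree."""
    K = (X @ X.T + 1.0) ** degree
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                    # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]     # largest eigenvalues first
    alphas = vecs[:, order] / np.sqrt(np.abs(vals[order]) + 1e-12)
    return Kc @ alphas                                # projected training points

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))                          # stand-in for face feature vectors
Z = kpca(X, degree=3, n_components=2)                 # nonlinear embedding
print(Z.shape)  # (40, 2)
```

With `degree=1` this reduces to ordinary PCA, which is a convenient sanity check; higher powers add the nonlinear cross-terms the abstract refers to.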

  15. User Activity Recognition in Smart Homes Using Pattern Clustering Applied to Temporal ANN Algorithm.

    PubMed

    Bourobou, Serge Thomas Mickala; Yoo, Younghwan

    2015-05-21

    This paper discusses the possibility of recognizing and predicting user activities in an IoT (Internet of Things) based smart environment. Activity recognition is usually done in two steps: activity pattern clustering and activity type decision. Although many related works have been proposed, their performance is limited because they focus on only one of the two steps. This paper tries to find the best combination of a pattern clustering method and an activity decision algorithm among various existing works. For the first step, in order to classify varied and complex user activities, we use a relevant and efficient unsupervised learning method, the K-pattern clustering algorithm. In the second step, the smart environment is trained to recognize and predict user activities inside the user's personal space using an artificial neural network based on Allen's temporal relations. The experimental results show that our combined method provides higher recognition accuracy for various activities than other data mining classification algorithms. Furthermore, it is more appropriate for a dynamic environment like an IoT-based smart home.

  16. The software peculiarities of pattern recognition in track detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starkov, N.

    Different kinds of nuclear track recognition algorithms are presented, and several complicated examples of their use in physical experiments are considered. Some processing methods for complicated images are also described.

  17. Optical Coherence Tomography Angiography Features of Iris Racemose Hemangioma in 4 Cases.

    PubMed

    Chien, Jason L; Sioufi, Kareem; Ferenczy, Sandor; Say, Emil Anthony T; Shields, Carol L

    2017-10-01

    Optical coherence tomography angiography (OCTA) allows visualization of iris racemose hemangioma course and its relation to the normal iris microvasculature. To describe OCTA features of iris racemose hemangioma. Descriptive, noncomparative case series at a tertiary referral center (Ocular Oncology Service of Wills Eye Hospital). Patients diagnosed with unilateral iris racemose hemangioma were included in the study. Features of iris racemose hemangioma on OCTA. Four eyes of 4 patients with unilateral iris racemose hemangioma were included in the study. Mean patient age was 50 years, all patients were white, and Snellen visual acuity was 20/20 in each case. All eyes had sectoral iris racemose hemangioma without associated iris or ciliary body solid tumor on clinical examination and ultrasound biomicroscopy. By anterior segment OCT, the racemose hemangioma was partially visualized in all cases. By OCTA, the hemangioma was clearly visualized as a uniform large-caliber vascular tortuous loop with intense flow characteristics superimposed over small-caliber radial iris vessels against a background of low-signal iris stroma. The vascular course on OCTA resembled a light bulb filament (filament sign), arising from the peripheral iris (base of light bulb) and forming a tortuous loop on reaching its peak (midfilament) near the pupil (n = 3) or midzonal iris (n = 1), before returning to the peripheral iris (base of light bulb). Intravenous fluorescein angiography performed in 1 eye depicted the iris hemangioma; however, small-caliber radial iris vessels were more distinct on OCTA than intravenous fluorescein angiography. Optical coherence tomography angiography is a noninvasive vascular imaging modality that clearly depicts the looping course of iris racemose hemangioma. Optical coherence tomography angiography depicted fine details of radial iris vessels, not distinct on intravenous fluorescein angiography.

  18. Iris Crypts Influence Dynamic Changes of Iris Volume.

    PubMed

    Chua, Jacqueline; Thakku, Sri Gowtham; Tun, Tin A; Nongpiur, Monisha E; Tan, Marcus Chiang Lee; Girard, Michael J A; Wong, Tien Yin; Quah, Joanne Hui Min; Aung, Tin; Cheng, Ching-Yu

    2016-10-01

    To determine the association of iris surface features with iris volume change after physiologic pupil dilation in adults. Cross-sectional observational study. Chinese adults aged ≥ 50 years without ocular diseases. Digital iris photographs were taken from eyes of each participant and graded for crypts (by number and size) and furrows (by number and circumferential extent) following a standardized grading scheme. Iris color was measured objectively, using the Commission Internationale de l'Eclairage (CIE) L* color parameter (higher value denoting lighter iris). The anterior segment was imaged by swept-source optical coherence tomography (SS-OCT) (Casia; Tomey, Nagoya, Japan) under bright light and dark room conditions. Iris volumes in light and dark conditions were measured with custom semiautomated software, and the change in iris volume was quantified. Associations of the change in iris volume after pupil dilation with underlying iris surface features in right eyes were assessed using linear regression analysis. Iris volume change after physiologic pupil dilation from light to dark condition. A total of 65 Chinese participants (mean age, 59.8±5.7 years) had gradable data for iris surface features. In light condition, higher iris crypt grade was associated independently with smaller iris volume (β [change in iris volume in millimeters per crypt grade increment] = -1.43, 95% confidence interval [CI], -2.26 to -0.59; P = 0.001) and greater reduction of iris volume on pupil dilation (β [change in iris volume in millimeters per crypt grade increment] = 0.23, 95% CI, 0.06-0.40; P = 0.010), adjusting for age, gender, presence of corneal arcus, and change in pupil size. Iris furrows and iris color were not associated with iris volume in light condition or change in iris volume (all P > 0.05). Although few Chinese persons have multiple crypts on their irides, irides with more crypts were significantly thinner and lost more volume on pupil dilation. 
Given that the latter feature is known to be protective against acute angle-closure attack, the macroscopic and microscopic composition of the iris is likely a contributing factor in angle-closure disease. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  19. Log-Gabor Weber descriptor for face recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Sang, Nong; Gao, Changxin

    2015-09-01

    The Log-Gabor transform, which is suitable for analyzing gradually changing data such as in iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or phase information of the Log-Gabor transform is considered. However, the complementary effect obtained by combining magnitude and phase information simultaneously for image-feature extraction has not been systematically explored in existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) To fully utilize the information from the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response. (2) The encoded Log-Gabor magnitude and phase information are fused at the feature level by a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.
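The Log-Gabor radial frequency response mentioned above has a standard closed form, a Gaussian on a log-frequency axis with zero DC response; the centre frequency and bandwidth ratio below are arbitrary choices:

```python
import numpy as np

def log_gabor_radial(size, f0=0.1, sigma_ratio=0.55):
    """1-D Log-Gabor frequency response: Gaussian on a log-frequency axis,
    G(f) = exp(-ln(f/f0)^2 / (2 ln(sigma_ratio)^2)), zero at DC."""
    f = np.fft.fftfreq(size)
    G = np.zeros(size)
    pos = f > 0                                  # defined for positive frequencies only
    G[pos] = np.exp(-np.log(f[pos] / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
    return G

G = log_gabor_radial(256)
sig = np.random.default_rng(0).normal(size=256)
resp = np.fft.ifft(np.fft.fft(sig) * G)          # filter a toy 1-D signal
mag, phase = np.abs(resp), np.angle(resp)        # the magnitude and phase features
print(G[0] == 0.0)  # True: no DC response
```

A descriptor such as LGWD would then encode `mag` and `phase` (per scale and orientation, in 2-D) with a local-pattern operator before fusion.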

  20. Neural CMOS-integrated circuit and its application to data classification.

    PubMed

    Göknar, Izzet Cem; Yildiz, Merih; Minaei, Shahram; Deniz, Engin

    2012-05-01

    Implementation and new applications of a tunable complementary metal-oxide-semiconductor integrated circuit (CMOS-IC) of a recently proposed classifier core-cell (CC) are presented and tested with two different datasets. With two algorithms, one based on Fisher's linear discriminant analysis and the other on perceptron learning, used to obtain the CCs' tunable parameters, the Haberman and Iris datasets are classified. The parameters so obtained are used for hard classification of the datasets with a neural-network-structured circuit. Classification performance and coefficient calculation times for both algorithms are given. The CC has a 6-ns response time and 1.8-mW power consumption. The fabrication parameters used for the IC are taken from CMOS AMS 0.35-μm technology.
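Of the two training algorithms mentioned, perceptron learning is simple to sketch; the four-feature samples below are merely in the style of the Iris data, and the analog CC circuit is of course not modelled:

```python
import numpy as np

# A few illustrative 4-feature samples in the style of the Iris data
# (sepal length, sepal width, petal length, petal width); labels +1 / -1.
X = np.array([[5.1, 3.5, 1.4, 0.2], [4.9, 3.0, 1.4, 0.2], [5.0, 3.6, 1.4, 0.2],
              [7.0, 3.2, 4.7, 1.4], [6.4, 3.2, 4.5, 1.5], [6.9, 3.1, 4.9, 1.5]])
y = np.array([1, 1, 1, -1, -1, -1])

w = np.zeros(4)
b = 0.0
for _ in range(400):                      # perceptron learning rule
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:        # misclassified: nudge the boundary
            w += yi * xi
            b += yi

print((np.sign(X @ w + b) == y).all())  # True: the classes are linearly separable
```

The learned `w` and `b` would correspond to the tunable parameters loaded into a core-cell for hard classification.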

  1. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system that uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This introduces an increased computational burden, but also a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation, and they map readily onto parallel digital architectures, which in turn imply new architectures for optical processors. These filters exploit the circulant-symmetric matrix structure of the training set data representing a variety of distortions. By using the same mathematical basis as the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern, whose orientation is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for applying other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulty implementing full complex filter structures.
Typically, optical systems (like 4f correlators) are limited to phase-only implementations with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter bank implementation is possible, and it has the advantage of time-averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  2. Results in Combined Cataract Surgery With Prosthetic Iris Implantation in Patients With Previous Iridocyclectomy for Iris Melanoma.

    PubMed

    Snyder, Michael E; Osher, Robert H; Wladecki, Trisha M; Perez, Mauricio A; Augsburger, James J; Corrêa, Zélia

    2017-03-01

    To present visual and functional results following implantation of an iris prosthesis combined with cataract surgery in eyes with previous iridocyclectomy for iris melanoma or presumed iris melanoma. Retrospective noncomparative case series. Sixteen patients (16 eyes) with iris defects after iridocyclectomy for iris melanoma in 15 cases and iris adenoma in 1 case underwent prosthetic iris device implantation surgery. Prosthetic iris implantation was combined with phacoemulsification and intraocular lens (IOL) implantation. The visual acuity, subjective glare and photophobia reduction, anatomic outcome, and complications were reviewed. Best-corrected visual acuity improved in 13 eyes (81.25%), remained stable in 2 eyes (12.5%), and decreased in 1 eye (6.25%). Photophobia and glare improved in every case except for 1 (93.75%). Notably, after surgery 12 patients (75.00%) reported no photophobia and 10 patients (62.50%) reported no glare. The median postoperative follow-up was 29.5 months, with a minimum of 5 months and a maximum of 189 months. All iris devices were in the correct position, and all eyes achieved the desired anatomic result. The IOL optic edges were covered in all areas by either residual iris or opaque portions of a prosthetic iris device. In patients who have undergone previous iridocyclectomy for presumed iris melanoma, combined cataract surgery and iris prosthesis placement, with or without iris reconstruction, can lead to visual improvement as well as reduction of both glare and photophobia. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Application of an auditory model to speech recognition.

    PubMed

    Cohen, J R

    1989-06-01

    Some aspects of auditory processing are incorporated in a front end for the IBM speech-recognition system [F. Jelinek, "Continuous speech recognition by statistical methods," Proc. IEEE 64 (4), 532-556 (1976)]. This new process includes adaptation, loudness scaling, and mel warping. Tests show that the design is an improvement over previous algorithms.

  4. Analysis of contour images using optics of spiral beams

    NASA Astrophysics Data System (ADS)

    Volostnikov, V. G.; Kishkin, S. A.; Kotova, S. P.

    2018-03-01

    An approach is outlined to the recognition of contour images using computer technology based on coherent optics principles. A mathematical description of the recognition process algorithm and the results of numerical modelling are presented. The developed approach to the recognition of contour images using optics of spiral beams is described and justified.

  5. Gate-Level Commercial Microelectronics Verification with Standard Cell Recognition

    DTIC Science & Technology

    2015-03-26

    Excerpts from the report's table of contents: 2.2.1.4 Algorithm Insufficiencies as Applied to DARPA's Circuit Verification Efforts; 4.2 Discussion of SCR Algorithm and Code; 4.2.1 Explication of SCR Algorithm; 4.2.2 Algorithm Attributes; 4.3 Advantages of Transistor-level Verification with SCR.

  6. ECG Sensor Card with Evolving RBP Algorithms for Human Verification.

    PubMed

    Tseng, Kuo-Kun; Huang, Huang-Nan; Zeng, Fufu; Tu, Shu-Yi

    2015-08-21

    It is known that cardiac and respiratory rhythms in electrocardiograms (ECGs) are highly nonlinear and non-stationary. As a result, most traditional time-domain algorithms are inadequate for characterizing the complex dynamics of the ECG. This paper proposes a new ECG sensor card and a statistical ECG algorithm based on a reduced binary pattern (RBP), with the aim of achieving faster ECG-based human identity recognition with high accuracy. The proposed algorithm has one advantage that previous ECG algorithms lack: waveform complexity information and de-noising preprocessing can be bypassed, making it more suitable for non-stationary ECG signals. Experimental results tested on two public MIT-BIH ECG databases confirm that the proposed scheme is feasible, with excellent accuracy, low complexity, and speedy processing. To be more specific, the advanced RBP algorithm achieves high accuracy in human identity recognition and executes at least nine times faster than previous algorithms. Moreover, based on test results from a long-term ECG database, the evolving RBP algorithm also demonstrates superior capability in handling long-term and non-stationary ECG signals.
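One plausible reading of the reduced binary pattern idea (not necessarily the authors' exact scheme): threshold successive sample differences into bits, histogram the overlapping bit words, and match identities by histogram distance. The "rhythms" below are toy sinusoids:

```python
import numpy as np

def rbp_histogram(sig, word_len=4):
    """Reduced binary pattern: threshold successive differences to bits,
    then histogram all overlapping word_len-bit words."""
    bits = (np.diff(sig) > 0).astype(int)
    hist = np.zeros(2 ** word_len, int)
    for i in range(len(bits) - word_len + 1):
        code = int("".join(map(str, bits[i:i + word_len])), 2)
        hist[code] += 1
    return hist / hist.sum()

t = np.linspace(0, 4 * np.pi, 400)
person_a = np.sin(t)                 # toy stand-ins for two people's rhythms
person_b = np.sin(2.7 * t)
probe = np.sin(t + 0.3)              # later recording of "person A"

ha, hb, hp = map(rbp_histogram, (person_a, person_b, probe))
match = "A" if np.abs(ha - hp).sum() < np.abs(hb - hp).sum() else "B"
print(match)  # A
```

Because the bits encode only the sign of local change, no amplitude normalisation or de-noising stage is required, which is the speed advantage claimed above.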

  7. Development and testing of incident detection algorithms. Vol. 2, research methodology and detailed results.

    DOT National Transportation Integrated Search

    1976-04-01

    The development and testing of incident detection algorithms was based on Los Angeles and Minneapolis freeway surveillance data. Algorithms considered were based on time series and pattern recognition techniques. Attention was given to the effects o...

  8. Ambulance Clinical Triage for Acute Stroke Treatment: Paramedic Triage Algorithm for Large Vessel Occlusion.

    PubMed

    Zhao, Henry; Pesavento, Lauren; Coote, Skye; Rodrigues, Edrich; Salvaris, Patrick; Smith, Karen; Bernard, Stephen; Stephenson, Michael; Churilov, Leonid; Yassi, Nawaf; Davis, Stephen M; Campbell, Bruce C V

    2018-04-01

    Clinical triage scales for prehospital recognition of large vessel occlusion (LVO) are limited by low specificity when applied by paramedics. We created the 3-step ambulance clinical triage for acute stroke treatment (ACT-FAST) as the first algorithmic LVO identification tool, designed to improve specificity by recognizing only severe clinical syndromes and optimizing paramedic usability and reliability. The ACT-FAST algorithm consists of (1) unilateral arm drift to stretcher <10 seconds, (2) severe language deficit (if right arm is weak) or gaze deviation/hemineglect assessed by simple shoulder tap test (if left arm is weak), and (3) eligibility and stroke mimic screen. ACT-FAST examination steps were retrospectively validated, and then prospectively validated by paramedics transporting culturally and linguistically diverse patients with suspected stroke in the emergency department, for the identification of internal carotid or proximal middle cerebral artery occlusion. The diagnostic performance of the full ACT-FAST algorithm was then validated for patients accepted for thrombectomy. In retrospective (n=565) and prospective paramedic (n=104) validation, ACT-FAST displayed higher overall accuracy and specificity, when compared with existing LVO triage scales. Agreement of ACT-FAST between paramedics and doctors was excellent (κ=0.91; 95% confidence interval, 0.79-1.0). The full ACT-FAST algorithm (n=60) assessed by paramedics showed high overall accuracy (91.7%), sensitivity (85.7%), specificity (93.5%), and positive predictive value (80%) for recognition of endovascular-eligible LVO. The 3-step ACT-FAST algorithm shows higher specificity and reliability than existing scales for clinical LVO recognition, despite requiring just 2 examination steps. The inclusion of an eligibility step allowed recognition of endovascular-eligible patients with high accuracy. Using a sequential algorithmic approach eliminates scoring confusion and reduces assessment time. Future studies will test whether field application of ACT-FAST by paramedics to bypass suspected patients with LVO directly to endovascular-capable centers can reduce delays to endovascular thrombectomy. © 2018 American Heart Association, Inc.
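
    The three examination steps lend themselves to an algorithmic reading. The sketch below is purely illustrative (not a clinical tool); the parameter names are assumptions, and it only encodes the sequential pass/fail structure described in the abstract:

    ```python
    def act_fast_screen(arm_drift_under_10s, weak_side,
                        severe_language_deficit, gaze_deviation_or_neglect,
                        passes_eligibility_and_mimic_screen):
        """Illustrative sketch of the 3-step ACT-FAST triage logic.
        Step 1: unilateral arm drift to stretcher in < 10 s.
        Step 2: severe language deficit (right arm weak) or gaze
                deviation/hemineglect (left arm weak).
        Step 3: eligibility and stroke-mimic screen.
        Returns True only if all three steps are positive."""
        if not arm_drift_under_10s:
            return False
        if weak_side == "right":
            step2 = severe_language_deficit
        elif weak_side == "left":
            step2 = gaze_deviation_or_neglect
        else:
            return False
        return bool(step2 and passes_eligibility_and_mimic_screen)
    ```

    The sequential structure is the point: each step can terminate the assessment early, which is how the algorithm avoids scoring arithmetic and keeps assessment time short.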

  9. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary aim of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature extraction, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. To accomplish the training process, the extracted characteristics are given to genetic-algorithm-based codebook generation in vector quantization; the initial populations are created by selecting random code vectors from the training set, and IP-HMM performs the recognition. The novelty lies in the crossover step of the genetic operations. The proposed speech recognition technique offers 97.14% accuracy.
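
    Vector quantization, the third stage above, maps each feature vector to its nearest codeword. A minimal sketch, assuming Euclidean distance and a fixed codebook (the paper evolves the codebook with a genetic algorithm, which is not reproduced here):

    ```python
    import math

    def quantize(vectors, codebook):
        """Assign each feature vector to the index of the nearest
        codeword (Euclidean distance) and report total distortion.
        Illustrative stand-in for the paper's VQ stage."""
        def dist(u, v):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
        indices, distortion = [], 0.0
        for v in vectors:
            d, i = min((dist(v, c), i) for i, c in enumerate(codebook))
            indices.append(i)
            distortion += d
        return indices, distortion
    ```

    The resulting index sequences are the discrete observation symbols an HMM can be trained on; the genetic algorithm's role is to pick a codebook that minimizes the total distortion.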

  10. Illumination-invariant hand gesture recognition

    NASA Astrophysics Data System (ADS)

    Mendoza-Morales, América I.; Miramontes-Jaramillo, Daniel; Kober, Vitaly

    2015-09-01

    In recent years, human-computer interaction (HCI) has received a lot of interest in industry and science because it provides new ways to interact with modern devices through voice, body, and facial/hand gestures. The applications of HCI range from easy control of home appliances to entertainment. Hand gesture recognition is a particularly interesting problem because the shape and movement of hands are complex and flexible enough to encode many different signs. In this work we propose a three-step algorithm: first, hands are detected in the current frame; second, hands are tracked across the video sequence; finally, gestures are robustly recognized across subsequent frames. Recognition rate depends strongly on non-uniform illumination of the scene and occlusion of hands. To overcome these issues we use two Microsoft Kinect devices, combining information from the RGB and infrared sensors. The algorithm's performance is tested in terms of recognition rate and processing time.

  11. [Research progress of multi-model medical image fusion and recognition].

    PubMed

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging and treatment-effect assessment. Multi-model medical image fusion and recognition are analyzed and summarized in this paper. Firstly, the problem of multi-model medical image fusion and recognition is formulated, and its advantages and key steps are outlined. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion-recognition structures are discussed. Thirdly, difficulties, challenges and possible future research directions are discussed.

  12. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work we develop a graphical user interface in MATLAB that lets users check information about books in real time. A photo of the book cover is taken through the GUI; the MSER algorithm then automatically detects candidate features in the input image, after which non-text features are filtered out based on morphological differences between text and non-text regions. We implemented a text-character alignment algorithm that improves the accuracy of the original text detection. We also compare the built-in MATLAB OCR engine with a commonly used open-source OCR, and apply a post-detection algorithm with natural language processing to perform word correction and suppress false detections. Finally, the detection result is matched against online sources. More than 86% accuracy can be obtained by this algorithm.

  13. An improved CS-LSSVM algorithm-based fault pattern recognition of ship power equipments.

    PubMed

    Yang, Yifei; Tan, Minjia; Dai, Yuewei

    2017-01-01

    In practical situations, a ship power equipment fault-monitoring signal usually provides few samples, and the data's features are non-linear. This paper adopts the least squares support vector machine (LSSVM) to deal with fault pattern identification in the case of small-sample data. Meanwhile, in order to avoid the local extrema and poor convergence precision induced by optimizing the kernel-function parameter and penalty factor of the LSSVM, an improved Cuckoo Search (CS) algorithm is proposed for parameter optimization. Based on a dynamic adaptive strategy, the newly proposed algorithm improves the recognition probability and the search step length, which effectively addresses the slow search speed and low calculation accuracy of the standard CS algorithm. A benchmark example demonstrates that the CS-LSSVM algorithm can accurately and effectively identify the fault pattern types of ship power equipment.
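
    The abstract does not specify the paper's adaptive CS variant; as a point of reference, a minimal standard Cuckoo Search (Lévy-flight moves plus abandonment of a fraction pa of the worst nests) can be sketched as follows. The step scale, pa value and bounds are assumptions, and a toy objective stands in for the cross-validated LSSVM error over its kernel parameter and penalty factor:

    ```python
    import math
    import random

    def levy_step(rnd, beta=1.5):
        """Mantegna's algorithm for a Levy-distributed step length."""
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                 (math.gamma((1 + beta) / 2) * beta *
                  2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rnd.gauss(0, sigma)
        v = rnd.gauss(0, 1)
        return u / abs(v) ** (1 / beta)

    def cuckoo_search(objective, dim=2, n_nests=15, pa=0.25,
                      iters=300, lo=-5.0, hi=5.0, seed=0):
        """Standard Cuckoo Search sketch: Levy-flight moves with
        greedy acceptance, then abandonment of the worst nests."""
        rnd = random.Random(seed)
        nests = [[rnd.uniform(lo, hi) for _ in range(dim)]
                 for _ in range(n_nests)]
        fit = [objective(n) for n in nests]
        best = min(range(n_nests), key=lambda i: fit[i])
        for _ in range(iters):
            # Levy-flight move, biased by distance to the best nest
            for i in range(n_nests):
                step = 0.01 * levy_step(rnd)
                cand = [min(hi, max(lo, nests[i][d] + step *
                            (nests[i][d] - nests[best][d] + rnd.gauss(0, 1))))
                        for d in range(dim)]
                f = objective(cand)
                if f < fit[i]:
                    nests[i], fit[i] = cand, f
            # Abandon a fraction pa of the worst nests
            order = sorted(range(n_nests), key=lambda i: fit[i], reverse=True)
            for i in order[:int(pa * n_nests)]:
                nests[i] = [rnd.uniform(lo, hi) for _ in range(dim)]
                fit[i] = objective(nests[i])
            best = min(range(n_nests), key=lambda i: fit[i])
        return nests[best], fit[best]
    ```

    In the paper's setting the two search dimensions would be the LSSVM kernel parameter and penalty factor, with the objective being validation error rather than the sphere function used here for illustration.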

  14. Real time biometric surveillance with gait recognition

    NASA Astrophysics Data System (ADS)

    Mohapatra, Subasish; Swain, Anisha; Das, Manaswini; Mohanty, Subhadarshini

    2018-04-01

    Biometric surveillance has become indispensable for every system in recent years. Biometric authentication, identification, and screening are widely used across domains to prevent unauthorized access, and a large amount of data needs to be updated, segregated and safeguarded from malicious software and misuse. Biometrics are the intrinsic characteristics of each individual. Currently, fingerprints, iris scans, passwords, unique keys, and cards are commonly used for authentication, but these methods raise various issues of security and confidentiality, and such systems are not yet automated enough to provide safety and security. Gait recognition is an alternative that overcomes the drawbacks of these authentication systems. It is comparatively new, not yet having been deployed in real-world scenarios, and it is an unintrusive approach that requires no knowledge or cooperation from the subject. Gait is a unique behavioral characteristic of every human being that is hard to imitate: an individual's walking style, together with the orientation of the joints in the skeletal structure and the inclinations between them, imparts the unique characteristic. A person can alter their external appearance but not their skeletal structure. Such systems are real-time and automatic, and can process even low-resolution images and video frames. In this paper, we propose a gait recognition system and compare its performance with conventional biometric identification systems.

  15. Tongue prints: A novel biometric and potential forensic tool.

    PubMed

    Radhika, T; Jeddy, Nadeem; Nithya, S

    2016-01-01

    The tongue is a vital internal organ well encased within the oral cavity and protected from the environment. It has unique features which differ from individual to individual and even between identical twins. The color, shape, and surface features are characteristic of every individual, and this serves as a tool for identification. Many modes of biometric systems have come into existence, such as fingerprint, iris scan, skin color, signature verification, voice recognition, and face recognition. The search for a new, secure personal identification method has led to the use of the lingual impression, or tongue print, as a method of biometric authentication. Tongue characteristics exhibit sexual dimorphism, thus aiding in the identification of the person. Emerging as a novel biometric tool, tongue prints also hold the promise of a potential forensic tool. This review highlights the uniqueness of tongue prints and its superiority over other biometric identification systems. The various methods of tongue print collection and the classification of tongue features are also elucidated.

  16. Training Spiking Neural Models Using Artificial Bee Colony

    PubMed Central

    Vazquez, Roberto A.; Garro, Beatriz A.

    2015-01-01

    Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been shown that this type of neuron can be applied to solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models prevents their use in several pattern recognition problems. On the other hand, several bioinspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). Artificial bee colony (ABC) is a novel algorithm based on the behavior of bees exploring their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron to solve pattern recognition problems, and the proposed approach is tested on several such problems. It is important to remark that, to demonstrate the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models improves under this kind of learning strategy. PMID:25709644
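
    The ABC loop (employed, onlooker and scout bee phases) fits in a few dozen lines. The sketch below minimizes a generic objective; the colony size, abandonment limit and bounds are assumptions, and a toy objective stands in for the spiking neuron's training error over its parameter vector:

    ```python
    import random

    def abc_optimize(objective, dim=2, n_sources=10, limit=20,
                     iters=200, lo=-5.0, hi=5.0, seed=0):
        """Minimal artificial bee colony sketch (assumes cost >= 0).
        Employed phase: perturb one coordinate of each food source.
        Onlooker phase: same move, sources picked fitness-proportionally.
        Scout phase: re-initialize sources unimproved for `limit` trials."""
        rnd = random.Random(seed)
        src = [[rnd.uniform(lo, hi) for _ in range(dim)]
               for _ in range(n_sources)]
        cost = [objective(s) for s in src]
        trials = [0] * n_sources

        def try_neighbor(i):
            k = rnd.randrange(n_sources)
            while k == i:
                k = rnd.randrange(n_sources)
            d = rnd.randrange(dim)
            cand = src[i][:]
            cand[d] = min(hi, max(lo, cand[d] +
                          rnd.uniform(-1, 1) * (cand[d] - src[k][d])))
            c = objective(cand)
            if c < cost[i]:
                src[i], cost[i], trials[i] = cand, c, 0
            else:
                trials[i] += 1

        for _ in range(iters):
            for i in range(n_sources):          # employed bees
                try_neighbor(i)
            fitness = [1.0 / (1.0 + c) for c in cost]
            total = sum(fitness)
            for _ in range(n_sources):          # onlooker bees
                r, acc, i = rnd.uniform(0, total), 0.0, 0
                for j, f in enumerate(fitness):
                    acc += f
                    if r <= acc:
                        i = j
                        break
                try_neighbor(i)
            for i in range(n_sources):          # scout bees
                if trials[i] >= limit:
                    src[i] = [rnd.uniform(lo, hi) for _ in range(dim)]
                    cost[i], trials[i] = objective(src[i]), 0
        best = min(range(n_sources), key=lambda i: cost[i])
        return src[best], cost[best]
    ```

    Training the single neuron then means encoding its free parameters as the position of a food source and using classification error on the training set as the cost; no gradient information is required, which is what makes ABC attractive for spiking models.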

  17. Two-dimensional statistical linear discriminant analysis for real-time robust vehicle-type recognition

    NASA Astrophysics Data System (ADS)

    Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.

    2007-02-01

    Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are based solely on Automatic License Plate Recognition (ALPR). Several car MMR systems have been proposed in the literature; however, these approaches rely on feature-detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance-based car MMR approach using Two-Dimensional Linear Discriminant Analysis (2D-LDA) that addresses this limitation. We provide experimental results analysing the proposed algorithm's robustness under varying illumination and occlusion conditions. We show that the best performance of the proposed 2D-LDA-based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For a database of 200 car images spanning 25 make-model classes, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR, and conclude that in general the 2D-LDA-based algorithm surpasses the PCA-based approach.

  18. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  19. Plastic surgery and the biometric e-passport: implications for facial recognition.

    PubMed

    Ologunde, Rele

    2015-04-01

    This correspondence comments on the challenges that plastic, reconstructive and aesthetic surgery poses for the facial recognition algorithms employed by biometric passports. The limitations of facial recognition technology in patients who have undergone facial plastic surgery are also discussed. Finally, the advice of the UK HM Passport Office to people who undergo facial surgery is reported.

  20. Pattern recognition: A basis for remote sensing data analysis

    NASA Technical Reports Server (NTRS)

    Swain, P. H.

    1973-01-01

    The theoretical basis for the pattern-recognition-oriented algorithms used in the multispectral data analysis software system is discussed. A model of a general pattern recognition system is presented. The receptor or sensor is usually a multispectral scanner. For each ground resolution element the receptor produces n numbers or measurements corresponding to the n channels of the scanner.
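
    The measurement model above (n numbers per ground resolution element) maps naturally onto a per-pixel classifier. The nearest-class-mean rule below is only an illustrative stand-in for the system's actual decision rule, with made-up class signatures:

    ```python
    import math

    def nearest_mean_classify(pixel, class_means):
        """Classify one n-channel measurement vector by assigning it
        to the class whose mean spectral signature is closest in
        Euclidean distance. Illustrative decision rule only."""
        def dist(u, v):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
        return min(class_means, key=lambda label: dist(pixel, class_means[label]))

    # Hypothetical 3-channel mean signatures; each ground resolution
    # element's n measurements are classified independently.
    means = {"water": [10.0, 8.0, 5.0], "vegetation": [30.0, 60.0, 25.0]}
    ```

    Pattern recognition systems of this kind replace the simple distance rule with a statistically derived discriminant, but the per-element structure of the decision is the same.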
