Sample records for image pattern recognition

  1. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  2. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and then extracts the discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms some other sub-pattern based face recognition methods.

  3. New Optical Transforms For Statistical Image Recognition

    NASA Astrophysics Data System (ADS)

    Lee, Sing H.

    1983-12-01

    In optical implementation of statistical image recognition, new optical transforms on large images for real-time recognition are of special interest. Several important linear transformations frequently used in statistical pattern recognition have now been optically implemented, including the Karhunen-Loeve transform (KLT), the Fukunaga-Koontz transform (FKT) and the least-squares linear mapping technique (LSLMT). The KLT performs principal components analysis on one class of patterns for feature extraction. The FKT performs feature extraction for separating two classes of patterns. The LSLMT separates multiple classes of patterns by maximizing the interclass differences and minimizing the intraclass variations.
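
A minimal numpy sketch of the KLT/principal-components step described above (digital, not the optical implementation from the paper; the data are made up):

```python
import numpy as np

def klt(patterns, k):
    """Karhunen-Loeve transform: project patterns onto the top-k
    principal components of their sample covariance (a PCA sketch)."""
    X = patterns - patterns.mean(axis=0)      # center the data
    cov = X.T @ X / (len(X) - 1)              # sample covariance
    vals, vecs = np.linalg.eigh(cov)          # eigenvalues ascending
    basis = vecs[:, ::-1][:, :k]              # top-k eigenvectors
    return X @ basis                          # extracted features

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))              # 100 patterns, 5 measurements
features = klt(data, 2)
print(features.shape)  # (100, 2)
```

The first feature axis carries the largest variance, the second the next largest, which is what makes the truncation useful for feature extraction.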

  4. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

    Surveillance and Automatic Target Recognition (ATR) applications are increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g. telescopes, precise optics, cameras, image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers an order-of-magnitude performance advantage over a RAM (Random Access Memory)-based search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here. Other SOPC solutions/design issues are also covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
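
The performance advantage comes from a CAM comparing a key against all stored words at once, while a RAM-based search must scan addresses one by one. A toy software analogue (hypothetical patterns and labels; a Python dict stands in for the parallel match):

```python
# CAM-like associative lookup vs. sequential RAM-style scan (toy model).
stored = {0b10110101: "tank", 0b01101100: "truck", 0b11100010: "jeep"}

def cam_lookup(key):
    # One associative match: a hardware CAM compares all entries in parallel.
    return stored.get(key)

memory = list(stored.items())  # RAM model: entries reachable only by address

def ram_lookup(key):
    for word, label in memory:  # O(n) sequential compares
        if word == key:
            return label
    return None

print(cam_lookup(0b01101100), ram_lookup(0b01101100))  # truck truck
```

Both return the same label; the difference is that the CAM-style match costs one cycle regardless of table size, while the scan cost grows with the number of stored patterns.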

  5. Proceedings of the Second Annual Symposium on Mathematical Pattern Recognition and Image Analysis Program

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr. (Principal Investigator)

    1984-01-01

    Several papers addressing image analysis and pattern recognition techniques for satellite imagery are presented. Texture classification, image rectification and registration, spatial parameter estimation, and surface fitting are discussed.

  6. The recognition of graphical patterns invariant to geometrical transformation of the models

    NASA Astrophysics Data System (ADS)

    Ileană, Ioan; Rotar, Corina; Muntean, Maria; Ceuca, Emilian

    2010-11-01

    When a pattern recognition system is used for image recognition (in robot vision, handwriting recognition, etc.), the system must have the capacity to identify an object regardless of its size or position in the image. The problem of invariant recognition can be approached in several fundamental ways. One may apply the similarity criterion used in associative recall. The original pattern may be replaced by a mathematical transform that assures some invariance (e.g. the magnitude of the two-dimensional Fourier transform is translation invariant; the magnitude of the Mellin transform is scale invariant). In a different approach the original pattern is represented through a set of features, each of them being coded regardless of the position, orientation or scale of the pattern. Generally speaking, it is easy to obtain invariance with respect to one transformation group, but it is difficult to obtain simultaneous invariance to rotation, translation and scale. In this paper we analyze some methods to achieve invariant recognition of images, particularly digit images. A great number of experiments were carried out and the conclusions are summarized in the paper.
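
The Fourier-magnitude invariance mentioned above is easy to check numerically: circularly shifting an image changes only the phase of its 2-D DFT, never the magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((16, 16))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))  # translate (circularly)

mag = np.abs(np.fft.fft2(img))
mag_shifted = np.abs(np.fft.fft2(shifted))
print(np.allclose(mag, mag_shifted))  # True
```

This is why recognition on |FFT| features is position independent; the same trick applied in log-polar coordinates (the Mellin route) yields scale invariance.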

  7. Digital and optical shape representation and pattern recognition; Proceedings of the Meeting, Orlando, FL, Apr. 4-6, 1988

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Editor)

    1988-01-01

    The present conference discusses topics in pattern-recognition correlator architectures, digital stereo systems, geometric image transformations and their applications, topics in pattern recognition, filter algorithms, object detection and classification, shape representation techniques, and model-based object recognition methods. Attention is given to edge-enhancement preprocessing using liquid crystal TVs, massively-parallel optical data base management, three-dimensional sensing with polar exponential sensor arrays, the optical processing of imaging spectrometer data, hybrid associative memories and metric data models, the representation of shape primitives in neural networks, and the Monte Carlo estimation of moment invariants for pattern recognition.

  8. Image-based automatic recognition of larvae

    NASA Astrophysics Data System (ADS)

    Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai

    2010-08-01

    To date, adult insects (imagoes) have been the main objects of research in quarantine pest recognition. However, pests in their larval stage are latent, and larvae spread abroad easily with the circulation of agricultural and forest products. This paper presents larvae as new research objects, recognized by means of machine vision, image processing and pattern recognition. More visual information is retained and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine, perspective and brightness invariance, the scale invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larvae images is successfully achieved with satisfactory results.

  9. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739
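
Binarization, one of the image processing tasks listed above, can be illustrated with Otsu's classic method, which picks the gray level maximizing the between-class variance of the two resulting pixel populations (a numpy sketch; the synthetic two-mode image is made up):

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level that best splits img into dark/bright classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))       # class-0 cumulative mean
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))        # maximize between-class variance

# Synthetic bimodal image: 500 dark pixels (40), 500 bright pixels (200).
img = np.concatenate([np.full(500, 40), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
print(40 <= t < 200)  # True: the threshold separates the two modes
```

For this cleanly bimodal input any threshold between the modes is optimal; real bioimages are far messier, which is the paper's point about their difficulty.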

  10. Proceedings of the NASA Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1983-01-01

    The application of mathematical and statistical analyses techniques to imagery obtained by remote sensors is described by Principal Investigators. Scene-to-map registration, geometric rectification, and image matching are among the pattern recognition aspects discussed.

  11. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  12. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.

    PubMed

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns.

  13. Convolution Comparison Pattern: An Efficient Local Image Descriptor for Fingerprint Liveness Detection

    PubMed Central

    Gottschlich, Carsten

    2016-01-01

    We present a new type of local image descriptor which yields binary patterns from small image patches. For the application to fingerprint liveness detection, we achieve rotation invariant image patches by taking the fingerprint segmentation and orientation field into account. We compute the discrete cosine transform (DCT) for these rotation invariant patches and attain binary patterns by comparing pairs of two DCT coefficients. These patterns are summarized into one or more histograms per image. Each histogram comprises the relative frequencies of pattern occurrences. Multiple histograms are concatenated and the resulting feature vector is used for image classification. We name this novel type of descriptor convolution comparison pattern (CCP). Experimental results show the usefulness of the proposed CCP descriptor for fingerprint liveness detection. CCP outperforms other local image descriptors such as LBP, LPQ and WLD on the LivDet 2013 benchmark. The CCP descriptor is a general type of local image descriptor which we expect to prove useful in areas beyond fingerprint liveness detection such as biological and medical image processing, texture recognition, face recognition and iris recognition, liveness detection for face and iris images, and machine vision for surface inspection and material classification. PMID:26844544
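
The coefficient-comparison idea behind CCP can be sketched roughly as follows (the patch and coefficient pairs here are arbitrary illustrations; the published descriptor fixes the pairs and works on fingerprint-aligned, rotation-invariant patches):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2 / n)
    C[0] *= np.sqrt(0.5)
    return C

def ccp_code(patch, pairs):
    """Binary pattern from sign comparisons of 2-D DCT coefficient pairs."""
    C = dct_matrix(patch.shape[0])
    coeff = C @ patch @ C.T                        # 2-D DCT of the patch
    bits = [coeff[a] > coeff[b] for a, b in pairs]  # one bit per comparison
    return sum(int(b) << i for i, b in enumerate(bits))

rng = np.random.default_rng(1)
patch = rng.normal(size=(8, 8))
pairs = [((0, 1), (1, 0)), ((0, 2), (2, 0)), ((1, 1), (2, 2))]
code = ccp_code(patch, pairs)
print(0 <= code < 8)  # True: 3 comparisons give a code in [0, 8)
```

Per the abstract, such codes are then tallied into per-image histograms of relative pattern frequencies, and the concatenated histograms form the classification feature vector.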

  14. Degraded character recognition based on gradient pattern

    NASA Astrophysics Data System (ADS)

    Babu, D. R. Ramesh; Ravishankar, M.; Kumar, Manish; Wadera, Kevin; Raj, Aakash

    2010-02-01

    Degraded character recognition is a challenging problem in the field of Optical Character Recognition (OCR). The performance of an optical character recognition system depends upon the print quality of the input documents. Many OCRs have been designed that correctly identify finely printed documents, but very little reported work addresses the recognition of degraded documents. The efficiency of an OCR system decreases if the input image is degraded. In this paper, a novel approach based on gradient patterns for recognizing degraded printed characters is proposed. The approach makes use of the gradient pattern of an individual character for recognition. Experiments were conducted on character images that were either digitally written or degraded characters extracted from historical documents, and the results are found to be satisfactory.

  15. Image processing and recognition for biological images.

    PubMed

    Uchida, Seiichi

    2013-05-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it will be possible to grasp their main tasks and typical tools to handle the tasks. Image processing is a large research area to improve the visibility of an input image and acquire some valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique to classify an input image into one of the predefined classes and also has a large research area. This paper overviews its two main modules, that is, feature extraction module and classification module. Throughout the paper, it will be emphasized that bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noises, deformations, etc. This paper is expected to be one tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target.

  16. Basics of identification measurement technology

    NASA Astrophysics Data System (ADS)

    Klikushin, Yu N.; Kobenko, V. Yu; Stepanov, P. P.

    2018-01-01

    None of the available pattern recognition algorithms gives a 100% guarantee, so research in this direction remains active and relevant. It is proposed to develop existing pattern recognition technologies through the application of identification measurements. The purpose of the study is to assess the possibility of recognizing images using identification measurement technologies. In solving problems of pattern recognition, neural networks and hidden Markov models are mainly used. A fundamentally new approach to the solution of pattern recognition problems, based on the technology of identification signal measurements (IIS), is proposed. The essence of IIS technology is the quantitative evaluation of the shape of images using special tools and algorithms.

  17. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    PubMed Central

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can be exploited to automatically extract features from images using the hierarchical structure inspired by mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, there are some shortcomings of traditional CNN models in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and therefore can overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three famous image classification benchmarks, that is, MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method for the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher in comparison with the other four methods in most cases. PMID:28316614

  18. A fast, robust pattern recognition system for low light level image registration and its application to retinal imaging

    NASA Astrophysics Data System (ADS)

    Wade, Alex Robert; Fitzke, Frederick W.

    1998-08-01

    We describe an image processing system which we have developed to align autofluorescence and high-magnification images taken with a laser scanning ophthalmoscope. The low signal to noise ratio of these images makes pattern recognition a non-trivial task. However, once n images are aligned and averaged, the noise levels drop by a factor of √n and the image quality is improved. We include examples of autofluorescence images and images of the cone photoreceptor mosaic obtained using this system.
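
The √n noise reduction from frame averaging can be checked numerically: averaging n frames of uncorrelated zero-mean noise shrinks its standard deviation by about √n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# n noisy "frames" with zero signal: pure unit-variance Gaussian noise.
frames = rng.normal(0.0, 1.0, size=(n, 10_000))

noise_single = frames[0].std()           # noise of one frame (~1.0)
noise_mean = frames.mean(axis=0).std()   # noise of the n-frame average (~0.1)
print(round(noise_single / noise_mean))  # ~10, i.e. sqrt(100)
```

This is exactly why the aligned-and-averaged retinal images become usable despite the low per-frame signal-to-noise ratio.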

  19. Spatially Invariant Vector Quantization: A pattern matching algorithm for multiple classes of image subject matter including pathology.

    PubMed

    Hipp, Jason D; Cheng, Jerome Y; Toner, Mehmet; Tompkins, Ronald G; Balis, Ulysses J

    2011-02-26

    Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: 1) the availability of digital whole slide imagery data sets and 2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings.
With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively incorporate their use into clinical workflow, as a turnkey solution. We anticipate that SIVQ, and other related class-independent pattern recognition algorithms, will become part of the overall armamentarium of digital image analysis approaches that are immediately available to practicing pathologists, without the need for the immediate availability of an image analysis expert.
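
The rotational-symmetry argument for ring vectors can be sketched simply: rotating the image about the ring's center only circularly shifts the sampled vector, so taking the minimum distance over all circular shifts gives a rotation-invariant match (a toy numpy sketch, not the SIVQ implementation):

```python
import numpy as np

def ring_vector(img, cy, cx, r, n=16):
    """Sample n pixels on a circle of radius r centered at (cy, cx)."""
    t = 2 * np.pi * np.arange(n) / n
    ys = np.round(cy + r * np.sin(t)).astype(int)
    xs = np.round(cx + r * np.cos(t)).astype(int)
    return img[ys, xs].astype(float)

def ring_distance(a, b):
    """Best match over all circular shifts of b: rotation invariant,
    because an image rotation only rotates the ring samples."""
    return min(np.abs(a - np.roll(b, s)).mean() for s in range(len(b)))

rng = np.random.default_rng(2)
img = rng.random((32, 32))
a = ring_vector(img, 16, 16, 8)
print(ring_distance(a, np.roll(a, 5)))  # 0.0: a rotated ring still matches
```

A square patch offers no such shortcut, which is the intuition behind the abstract's claim about the matching advantage of continuous symmetry.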

  20. Terrain type recognition using ERTS-1 MSS images

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N.

    1973-01-01

    For the automatic recognition of earth resources from ERTS-1 digital tapes, both multispectral and spatial pattern recognition techniques are important. Recognition of terrain types is based on spatial signatures that become evident by processing small portions of an image through selected algorithms. An investigation of spatial signatures that are applicable to ERTS-1 MSS images is described. Artifacts in the spatial signatures seem to be related to the multispectral scanner. A method for suppressing such artifacts is presented. Finally, results of terrain type recognition for one ERTS-1 image are presented.

  1. Multi-texture local ternary pattern for face recognition

    NASA Astrophysics Data System (ADS)

    Essa, Almabrok; Asari, Vijayan

    2017-05-01

    In the imagery and pattern analysis domain a variety of descriptors have been proposed and employed for different computer vision applications like face detection and recognition. Many of them are affected by different conditions during the image acquisition process, such as variations in illumination and presence of noise, because they rely entirely on the image intensity values to encode the image information. To overcome these problems, a novel technique named Multi-Texture Local Ternary Pattern (MTLTP) is proposed in this paper. MTLTP combines edges and corners based on the local ternary pattern strategy to extract the local texture features of the input image. It then returns a spatial histogram feature vector as the descriptor for each image, which is used to recognize a person. Experimental results using a k-nearest neighbors (k-NN) classifier on two publicly available datasets justify our algorithm for efficient face recognition in the presence of extreme variations of illumination/lighting environments and slight variations of pose.
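
The local ternary pattern strategy that MTLTP builds on can be sketched for a single 8-neighborhood: each neighbor is coded +1, 0, or -1 relative to the center within a tolerance t, and the ternary code splits into two binary codes (the pixel values and threshold below are illustrative):

```python
import numpy as np

def ltp_codes(center, neighbors, t=5):
    """Local ternary pattern of one pixel: neighbors map to +1/0/-1
    within tolerance t, then split into 'upper' and 'lower' binary codes."""
    d = neighbors.astype(int) - int(center)
    tern = np.where(d > t, 1, np.where(d < -t, -1, 0))
    upper = sum(int(tern[i] == 1) << i for i in range(len(tern)))
    lower = sum(int(tern[i] == -1) << i for i in range(len(tern)))
    return upper, lower

nb = np.array([60, 54, 48, 55, 70, 44, 57, 50])  # 8 neighbors of the pixel
u, l = ltp_codes(55, nb, t=5)
print(u, l)  # 16 36
```

The tolerance band around the center value is what makes LTP-style codes less sensitive to noise than plain local binary patterns, which threshold at the center value exactly.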

  2. Pattern recognition for passive polarimetric data using nonparametric classifiers

    NASA Astrophysics Data System (ADS)

    Thilak, Vimal; Saini, Jatinder; Voelz, David G.; Creusere, Charles D.

    2005-08-01

    Passive polarization based imaging is a useful tool in computer vision and pattern recognition. A passive polarization imaging system forms a polarimetric image from the reflection of ambient light that contains useful information for computer vision tasks such as object detection (classification) and recognition. Applications of polarization based pattern recognition include material classification and automatic shape recognition. In this paper, we present two target detection algorithms for images captured by a passive polarimetric imaging system. The proposed detection algorithms are based on Bayesian decision theory. In these approaches, an object can belong to any one of a given number of classes, and classification involves making decisions that minimize the average probability of making incorrect decisions. This minimum is achieved by assigning an object to the class that maximizes the a posteriori probability. Computing a posteriori probabilities requires estimates of class conditional probability density functions (likelihoods) and prior probabilities. A probabilistic neural network (PNN), which is a nonparametric method that can compute Bayes optimal boundaries, and a k-nearest neighbor (KNN) classifier are used for density estimation and classification. The proposed algorithms are applied to polarimetric image data gathered in the laboratory with a liquid crystal-based system. The experimental results validate the effectiveness of the above algorithms for target detection from polarimetric data.
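
The PNN decision rule reduces to a Parzen (Gaussian-kernel) likelihood estimate per class, followed by a maximum a posteriori pick; with equal priors that is simply the class with the largest kernel score. A minimal sketch (the features and labels are made up, not the paper's polarimetric data):

```python
import numpy as np

def pnn_classify(x, train, labels, sigma=1.0):
    """PNN decision: score each class by the mean Gaussian kernel between
    x and that class's training samples (a Parzen density estimate), then
    pick the class with the largest score (Bayes rule, equal priors)."""
    labels = np.asarray(labels)
    scores = {}
    for c in np.unique(labels):
        pts = train[labels == c]
        d2 = ((pts - x) ** 2).sum(axis=1)          # squared distances
        scores[c] = np.exp(-d2 / (2 * sigma**2)).mean()
    return max(scores, key=scores.get)

train = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
labels = ["background", "background", "target", "target"]
print(pnn_classify(np.array([2.8, 3.2]), train, labels))  # target
```

The kernel width sigma plays the role of the Parzen smoothing parameter; as the training set grows the decision boundary approaches the Bayes-optimal one, which is the property the abstract cites.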

  3. Face recognition system and method using face pattern words and face pattern bytes

    DOEpatents

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  4. Iris recognition based on key image feature extraction.

    PubMed

    Ren, X; Tian, Q; Zhang, J; Wu, S; Zeng, Y

    2008-01-01

    In iris recognition, feature extraction can be influenced by factors such as illumination and contrast, and thus the features extracted may be unreliable, which can cause a high rate of false results in iris pattern recognition. In order to obtain stable features, an algorithm was proposed in this paper to extract key features of a pattern from multiple images. The proposed algorithm built an iris feature template by extracting key features and performed iris identity enrolment. Simulation results showed that the selected key features have high recognition accuracy on the CASIA Iris Set, where both contrast and illumination variance exist.

  5. Fundamental remote sensing science research program. Part 1: Status report of the mathematical pattern recognition and image analysis project

    NASA Technical Reports Server (NTRS)

    Heydorn, R. D.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for purposes of making selected inferences about the Earth.

  6. Incoherent optical generalized Hough transform: pattern recognition and feature extraction applications

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel; Ferrari, José A.

    2017-05-01

    Pattern recognition and feature extraction are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements for this mapping make the optical implementation an attractive alternative to digital-only methods. We explore an optical setup where the transformation is obtained, and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a pupil mask implemented on a high-contrast spatial light modulator for orientation/shape variation of the template. Real-time operation can also be achieved. In addition, by thresholding the GHT and optically inverse transforming, the previously detected features of interest can be extracted.
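
The accumulator mapping that the optical setup parallelizes looks like this in software (a minimal digital GHT sketch with pre-quantized gradient orientations; the template and points are made up):

```python
import numpy as np

def ght_accumulate(points, orientations, r_table, shape):
    """Generalized Hough transform voting: each edge point votes for the
    reference-point positions that its gradient orientation indexes in
    the R-table; a true shape instance piles its votes on one cell."""
    acc = np.zeros(shape, dtype=int)
    for (y, x), o in zip(points, orientations):
        for dy, dx in r_table.get(o, []):
            yy, xx = y + dy, x + dx
            if 0 <= yy < shape[0] and 0 <= xx < shape[1]:
                acc[yy, xx] += 1
    return acc

# Template with reference point (10, 10): each orientation bin stores the
# displacement from a boundary point back to the reference point.
r_table = {0: [(2, 3)], 1: [(-2, 1)], 2: [(0, -3)]}
points = [(8, 7), (12, 9), (10, 13)]        # the same shape, observed
acc = ght_accumulate(points, [0, 1, 2], r_table, (20, 20))
print(acc.max(), acc[10, 10])  # 3 3: all votes land on the reference point
```

Every edge point triggers a spray of votes, so the work grows with (edge points) × (R-table entries); that multiplicative cost is precisely what makes an optical, fully parallel implementation attractive.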

  7. Comparison of eye imaging pattern recognition using neural network

    NASA Astrophysics Data System (ADS)

    Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.

    2015-05-01

    The beauty of an eye recognition system is that it can automatically identify and verify a human, whether from digital images or from a video source. The eye has various characteristics, such as iris color, pupil size, and eye shape. This study presents the analysis, design, and implementation of a system for eye-image recognition. All eye images captured from the webcam in RGB format must pass through several processing techniques before they can serve as input to the pattern recognition process. The results show that the final weight and bias values obtained after training on 6 eye images for one subject are memorized by the neural network and serve as the reference weight and bias values for the testing stage. The targets are classified into 5 types for 5 subjects. The eye images can then be matched to a subject based on the targets set during the training process: when the values of a new eye image and an eye image in the database are almost equal, the images are considered a match.

  8. Proceedings of the NASA/MPRIA Workshop: Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1983-01-01

    Outlines of talks presented at the workshop conducted at Texas A & M University on February 3 and 4, 1983 are presented. Emphasis was given to the application of mathematics to image processing and pattern recognition.

  9. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping.

    PubMed

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-07-27

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work.
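The bag-of-words step named above can be sketched as follows. The tiny codebook and 2-D "descriptors" are invented stand-ins: in practice the descriptors would be SIFT or LBP vectors and the codebook would come from k-means over training descriptors:

```python
# Hedged bag-of-words (BoW) sketch: assign each local descriptor to its
# nearest "visual word" and summarize the image as a normalized histogram.

def nearest_word(desc, codebook):
    """Index of the codeword closest (squared Euclidean) to the descriptor."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(desc, codebook[i]))

def bow_histogram(descriptors, codebook):
    """Normalized visual-word histogram for one image."""
    counts = [0] * len(codebook)
    for d in descriptors:
        counts[nearest_word(d, codebook)] += 1
    n = len(descriptors)
    return [c / n for c in counts]

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]      # 3 toy visual words
descriptors = [(0.1, 0.1), (0.9, 0.1), (0.2, 0.0), (0.1, 0.8)]
print(bow_histogram(descriptors, codebook))  # [0.5, 0.25, 0.25]
```

The resulting fixed-length histogram is what a classifier consumes, regardless of how many local features each food photo produced.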

  10. Fourier transform magnitudes are unique pattern recognition templates.

    PubMed

    Gardenier, P H; McCallum, B C; Bates, R H

    1986-01-01

    Fourier transform magnitudes are commonly used in the generation of templates in pattern recognition applications. We report on recent advances in Fourier phase retrieval which are relevant to pattern recognition. We emphasise in particular that the intrinsic form of a finite, positive image is, in general, uniquely related to the magnitude of its Fourier transform. We state conditions under which the Fourier phase can be reconstructed from samples of the Fourier magnitude, and describe a method of achieving this. Computational examples of restoration of Fourier phase (and hence, by Fourier transformation, the intrinsic form of the image) from samples of the Fourier magnitude are also presented.

  11. Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.

    PubMed

    Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc

    2014-06-01

    To assess the feasibility of lung cancer diagnosis using the fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM is a new medical imaging technique whose interest for diagnosis has yet to be established. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis through these images. Towards this aim, we have built a pattern recognition scheme which involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features like local binary patterns (LBP) and their variants denoted as local quinary patterns (LQP). We show that scattering features yielded better recognition performance than classical features like LBP and their LQP variants for the FCFM image classification problem. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for such a diagnosis problem. It also performs well when used in conjunction with other features for other classical medical imaging classification problems.

  12. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. The characteristics used for object recognition in an image can be color, shape, pattern, texture, and spatial information that represent objects in the digital image. A method recently developed for image feature extraction characterizes objects by simple-curve analysis and by searching the object's chain-code features. This study develops an algorithm for analyzing and recognizing curve types as the basis for object recognition in images, proposing the addition of complex-curve characteristics with a maximum of four branches for use in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.

  13. Optical and digital pattern recognition; Proceedings of the Meeting, Los Angeles, CA, Jan. 13-15, 1987

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)

    1987-01-01

    The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.

  14. Recognition and classification of oscillatory patterns of electric brain activity using artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Pchelintseva, Svetlana V.; Runnova, Anastasia E.; Musatov, Vyacheslav Yu.; Hramov, Alexander E.

    2017-03-01

    In the paper we study the problem of recognizing the type of an observed object, depending on the generated pattern and the registered EEG data. EEG recorded at the time a Necker cube is displayed characterizes the corresponding state of brain activity. As the stimulus we use the bistable Necker cube image: the subject selects the type of cube and interprets it either as a left-oriented or as a right-oriented cube. To solve the recognition problem, we use artificial neural networks; to create a classifier we consider a multilayer perceptron. We examine the structure of the artificial neural network and determine the cube recognition accuracy.

  15. Pattern recognition neural-net by spatial mapping of biology visual field

    NASA Astrophysics Data System (ADS)

    Lin, Xin; Mori, Masahiko

    2000-05-01

    The method of spatial mapping in the biological visual field is applied to artificial neural networks for pattern recognition. By a coordinate transform called the complex-logarithm mapping, followed by a Fourier transform, the input images are transformed into scale-, rotation-, and shift-invariant patterns and then fed into a multilayer neural network for learning and recognition. Results from a computer simulation and an optical experimental system are described.
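The invariance behind the complex-logarithm mapping can be checked numerically: writing an image point as z = x + iy, the mapping w = ln z sends scaling by s and rotation by theta to a pure translation (ln s, theta) in the mapped plane, which a subsequent Fourier-magnitude step (not shown) then removes. A hedged point-wise sketch, assuming nothing from the paper beyond the mapping itself:

```python
# Complex-log mapping: scaling/rotation in the image plane becomes a
# translation in (log-radius, angle) coordinates.

import cmath

def complex_log_map(x, y):
    """Map Cartesian (x, y) to (ln r, theta)."""
    w = cmath.log(complex(x, y))
    return w.real, w.imag

x, y = 3.0, 4.0
u0, v0 = complex_log_map(x, y)

# Scale the point by s = 2 and rotate by 90 degrees: (3, 4) -> (-8, 6).
s, theta = 2.0, cmath.pi / 2
z2 = complex(x, y) * s * cmath.exp(1j * theta)
u1, v1 = complex_log_map(z2.real, z2.imag)

print(round(u1 - u0, 6), round(v1 - v0, 6))  # 0.693147 1.570796 = (ln 2, pi/2)
```

The shift in the mapped coordinates depends only on the scale factor and rotation angle, not on the point itself, which is why the subsequent Fourier magnitude is scale- and rotation-invariant.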

  16. Computer Vision for Artificially Intelligent Robotic Systems

    NASA Astrophysics Data System (ADS)

    Ma, Chialo; Ma, Yung-Lung

    1987-04-01

    In this paper an Acoustic Imaging Recognition System (AIRS) is introduced which is installed on an intelligent robotic system and can recognize different types of hand tools by dynamic pattern recognition. The dynamic pattern recognition is approached by a look-up-table method, which saves a great deal of calculation time and is practicable. The AIRS consists of four parts: a position control unit, a pulse-echo signal processing unit, a pattern recognition unit, and a main control unit. The position control of AIRS can rotate through an angle of ±5 degrees horizontally and vertically; the purpose of the rotation is to find the area of maximum reflection intensity. From the distance, angles, and intensity of the target, the characteristics of the target can be determined, with all decisions processed by the main control unit. In the pulse-echo signal processing unit, the correlation method is utilized to overcome the limitation of short ultrasonic bursts, because a correlation system can transmit large time-bandwidth signals and obtain improved resolution and increased intensity through pulse compression in the correlation receiver. The output of the correlator is sampled and converted into digital data by mu-law coding, and these data, together with the delay time T and the angle information θH, θV, are sent to the main control unit for further analysis. The recognition process uses a dynamic look-up-table method: first, several recognition pattern tables are set up; then the new pattern scanned by the transducer array is divided into several stages and compared with the sampled tables. The comparison is implemented by dynamic programming and a Markovian process. All the hardware control signals, such as the optimum delay time for the correlator receiver and the horizontal and vertical rotation angles of the transducer plate, are controlled by the main control unit, which also handles the pattern recognition process. The distance from the target to the transducer plate is limited by the power and beam angle of the transducer elements; in this AIRS model, a narrow-beam transducer is used with an input voltage of 50 V peak-to-peak. A robot equipped with AIRS can not only measure the distance to the target but also recognize a three-dimensional image of the target from the image lab of the robot's memory. Index terms: acoustic system, supersonic transducer, dynamic programming, look-up table, image processing, pattern recognition, quad tree.

  17. Improved Performance Characteristics For Indium Antimonide Photovoltaic Detector Arrays Using A FET-Switched Multiplexing Technique

    NASA Astrophysics Data System (ADS)

    Ma, Yung-Lung; Ma, Chialo

    1987-03-01


  18. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects.
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
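The slice-linking rule described above can be sketched as follows. This is an illustrative sketch, not the authors' code; the threshold radius and toy detections are invented:

```python
# Object linking: a feature in slice k is linked to any feature in slice k+1
# within a threshold radius, building a directed graph whose chains are
# candidate 3D objects (e.g. pipe segments).

import math

def link_slices(slices, radius):
    """slices: list of lists of (x, y) detections, one list per depth slice.
    Returns directed edges ((k, i) -> (k+1, j)) between nearby features."""
    edges = []
    for k in range(len(slices) - 1):
        for i, (x0, y0) in enumerate(slices[k]):
            for j, (x1, y1) in enumerate(slices[k + 1]):
                if math.hypot(x1 - x0, y1 - y0) <= radius:
                    edges.append(((k, i), (k + 1, j)))
    return edges

# A pipe drifting slowly to the right, plus one spurious detection.
slices = [[(0.0, 0.0)], [(0.4, 0.1)], [(0.9, 0.1), (5.0, 5.0)]]
print(link_slices(slices, radius=1.0))
# [((0, 0), (1, 0)), ((1, 0), (2, 0))] -- the outlier is never linked
```

Chains in the resulting directed graph trace coherent objects across slices, while isolated detections (noise) remain unlinked and can be discarded.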

  19. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational restraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
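The "integral image" structure mentioned above can be sketched in a few lines. This is a pure-Python illustration of the general technique (FIIAT itself is a C library): after one pass over the image, the sum of any rectangular region costs four table lookups.

```python
# Integral image (summed-area table): O(1) rectangle sums after O(N) setup.

def integral_image(img):
    """Summed-area table with an extra zero row/column for easy indexing."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over rows top..bottom-1 and columns left..right-1, four lookups."""
    return (ii[bottom][right] - ii[top][right]
            - ii[bottom][left] + ii[top][left])

img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 3, 3))  # 45: whole image
print(rect_sum(ii, 1, 1, 3, 3))  # 28: bottom-right 2x2 block
```

Because every rectangle costs the same four lookups regardless of size, texture descriptors built from many box sums become cheap enough for the real-time, integer-only operation the abstract emphasizes.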

  20. Infrared and visible fusion face recognition based on NSCT domain

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near-infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near-infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. First, NSCT is used to process the infrared and visible face images respectively, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near-infrared face recognition is tested on the HITSZ Lab2 visible and near-infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
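The basic 8-neighbour LBP operator used in several of the abstracts above can be sketched as follows (the LGBP variant adds a Gabor filter bank first; that stage is omitted here, and the tiny test image is invented):

```python
# Local Binary Pattern: recode each pixel as an 8-bit number by thresholding
# its 8 neighbours against the centre value.

def lbp_code(img, y, x):
    """8-bit LBP code for the pixel at (y, x), clockwise from top-left."""
    c = img[y][x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

img = [[9, 9, 9],
       [1, 5, 1],
       [1, 1, 1]]
print(lbp_code(img, 1, 1))  # 7: only the three top neighbours are >= 5
```

A histogram of these codes over an image region is the texture descriptor; because the codes depend only on local intensity ordering, they are robust to monotonic illumination changes, which is why LBP features suit the lighting problems these papers address.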

  1. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping

    PubMed Central

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-01-01

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994

  2. The Spatial Vision Tree: A Generic Pattern Recognition Engine- Scientific Foundations, Design Principles, and Preliminary Tree Design

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2010-01-01

    New foundational ideas are used to define a novel approach to generic visual pattern recognition. These ideas proceed from the starting point of the intrinsic equivalence of noise reduction and pattern recognition when noise reduction is taken to its theoretical limit of explicit matched filtering. This led us to think of the logical extension of sparse coding using basis function transforms for both de-noising and pattern recognition to the full pattern specificity of a lexicon of matched filter pattern templates. A key hypothesis is that such a lexicon can be constructed and is, in fact, a generic visual alphabet of spatial vision. Hence it provides a tractable solution for the design of a generic pattern recognition engine. Here we present the key scientific ideas, the basic design principles which emerge from these ideas, and a preliminary design of the Spatial Vision Tree (SVT). The latter is based upon a cryptographic approach whereby we measure a large aggregate estimate of the frequency of occurrence (FOO) for each pattern. These distributions are employed together with Hamming distance criteria to design a two-tier tree. Then using information theory, these same FOO distributions are used to define a precise method for pattern representation. Finally the experimental performance of the preliminary SVT on computer generated test images and complex natural images is assessed.

  3. DYNAMIC PATTERN RECOGNITION BY MEANS OF THRESHOLD NETS,

    DTIC Science & Technology

    A method is expounded for the recognition of visual patterns. A circuit diagram of a device is described which is based on a multilayer threshold ...structure synthesized in accordance with the proposed method. Coded signals received each time an image is displayed are transmitted to the threshold ...circuit which distinguishes the signs, and from there to the layers of threshold resolving elements. The image at each layer is made to correspond

  4. Automatic Target Recognition Based on Cross-Plot

    PubMed Central

    Wong, Kelvin Kian Loong; Abbott, Derek

    2011-01-01

    Automatic target recognition that relies on rapid feature extraction of a real-time target from photo-realistic imaging will enable efficient identification of target patterns. To achieve this objective, cross-plots of binary patterns are explored as potential signatures for the observed target, capturing the crucial spatial features at high speed using minimal computational resources. Target recognition was implemented based on the proposed pattern recognition concept and tested rigorously for its precision and recall performance. We conclude that cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively to signatures of patterns having their identity in a target repository. PMID:21980508

  5. Mars Rover imaging systems and directional filtering

    NASA Technical Reports Server (NTRS)

    Wang, Paul P.

    1989-01-01

    Computer literature searches were carried out at Duke University and NASA Langley Research Center. The purpose was to build up knowledge of the technical problems of pattern recognition and image understanding which must be solved for the Mars Rover and Sample Return Mission. An intensive study of a large collection of relevant literature resulted in a compilation of all important documents in one place. Furthermore, the documents are classified into: Mars Rover; computer vision (theory); imaging systems; pattern recognition methodologies; and other smart techniques (AI, neural networks, fuzzy logic, etc.).

  6. Beyond sensory images: Object-based representation in the human ventral pathway

    PubMed Central

    Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.

    2004-01-01

    We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396

  7. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    NASA Technical Reports Server (NTRS)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  8. Pattern recognition applied to infrared images for early alerts in fog

    NASA Astrophysics Data System (ADS)

    Boucher, Vincent; Marchetti, Mario; Dumoulin, Jean; Cord, Aurélien

    2014-09-01

    Fog conditions cause severe car accidents in Western countries because of the poor visibility they induce. Fog formation and intensity are still very difficult for weather services to predict. Infrared cameras can detect and identify objects in fog when visibility is too low for detection by eye. Over the past years, the implementation of cost-effective infrared cameras on some vehicles has enabled such detection. On the other hand, pattern recognition algorithms based on Canny filters and the Hough transform are a common tool applied to images. Based on these facts, a joint research program between IFSTTAR and Cerema was developed to study the benefit of infrared images obtained in a fog tunnel during natural fog dissipation. Pattern recognition algorithms were applied, specifically to road signs, whose shape is usually associated with a specific meaning (circular for a speed limit, triangular for an alert, …). It was shown that road signs were detected early enough in the infrared images, relative to images in the visible spectrum, to trigger useful alerts for Advanced Driver Assistance Systems.

  9. Patterns recognition of electric brain activity using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Musatov, V. Yu.; Pchelintseva, S. V.; Runnova, A. E.; Hramov, A. E.

    2017-04-01

    We present an approach for recognizing various cognitive processes in brain activity during the perception of ambiguous images. On the basis of the developed theoretical background and the experimental data, we propose a new classification of oscillating patterns in the human EEG using an artificial neural network approach. After learning, the artificial neural network reliably identified cube recognition processes, for example a left-oriented or right-oriented Necker cube with different edge intensities. We construct an artificial neural network based on a perceptron architecture and demonstrate its effectiveness in pattern recognition of the experimental EEG data.

  10. Optical Pattern Recognition for Missile Guidance.

    DTIC Science & Technology

    1982-11-15

    directed to novel pattern recognition algorithms (that allow pattern recognition and object classification in the face of various geometrical and ... for both image pattern recognition and as a preprocessing operation in devices). The read light intensity (0.33 mW ...) ... electrodes on its large faces. This SLM is known as the Prom (Pockels real-time optical ...). The Priz light modulator and the motivation for its development ... In Sec

  11. The Pandora multi-algorithm approach to automated pattern recognition in LAr TPC detectors

    NASA Astrophysics Data System (ADS)

    Marshall, J. S.; Blake, A. S. T.; Thomson, M. A.; Escudero, L.; de Vries, J.; Weston, J.; MicroBooNE Collaboration

    2017-09-01

    The development and operation of Liquid Argon Time Projection Chambers (LAr TPCs) for neutrino physics has created a need for new approaches to pattern recognition, in order to fully exploit the superb imaging capabilities offered by this technology. The Pandora Software Development Kit provides functionality to aid the process of designing, implementing and running pattern recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition: individual algorithms each address a specific task in a particular topology; a series of many tens of algorithms then carefully builds up a picture of the event. The input to the Pandora pattern recognition is a list of 2D Hits. The output from the chain of over 70 algorithms is a hierarchy of reconstructed 3D Particles, each with an identified particle type, vertex and direction.

  12. Implementation of sobel method to detect the seed rubber plant leaves

    NASA Astrophysics Data System (ADS)

    Suyanto; Munte, J.

    2018-03-01

    This research was conducted to develop a system that can identify and recognize the type of rubber tree based on the leaf patterns of the plant. The research steps are image data acquisition, image processing, edge detection, and identification by template matching. Edge detection uses the Sobel operator. For pattern recognition, an input image is detected and compared with other images stored in a database of templates. Experiments were carried out in one phase, identification of the leaf edge, using 14 images of superior rubber plant leaves and 5 test images for each type (clone) of the plant. The experimental results yield a recognition rate of 91.79%.
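    The Sobel step can be written down generically; this numpy sketch applies the standard 3x3 Sobel kernels to a synthetic step edge (not the authors' implementation or leaf data) and shows that the gradient magnitude responds only where the image changes.

```python
import numpy as np

# Synthetic 32x32 image with a vertical step edge at column 16,
# standing in for a leaf/background boundary.
img = np.zeros((32, 32))
img[:, 16:] = 1.0

Kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
Ky = Kx.T

def conv2(im, k):
    # 'valid' 2-D correlation, enough for a demo.
    H, W = im.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(im[i:i + 3, j:j + 3] * k)
    return out

gx, gy = conv2(img, Kx), conv2(img, Ky)
mag = np.hypot(gx, gy)

# Output column c corresponds to the 3-pixel window at input columns c..c+2,
# so only the windows straddling the step respond.
edge_cols = np.unique(np.nonzero(mag > 0)[1])
print(edge_cols)  # [14 15]
```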

  13. Target recognition of log-polar ladar range images using moment invariants

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong

    2017-01-01

    The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on recognition results, several comparative experiments based on simulated and real range images are carried out. Several important conclusions are drawn: (i) if combined moments are computed directly from log-polar range images, the translation, rotation, and scaling invariance of the combined moments is lost; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes of the field of view; (iii) as the object moves from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we suggest that in real applications it is better to divide the field of view into a recognition area and a searching area.
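    The invariance at stake in conclusion (i) is easy to see on the moments themselves. Below is a generic sketch of the first Hu moment in Cartesian coordinates, checked on invented toy silhouettes (not the paper's range images): it is stable under translation and scale, a property that resampling into log-polar coordinates before computing moments would destroy.

```python
import numpy as np

def phi1(img):
    # First Hu moment: phi1 = eta20 + eta02, from normalised central moments.
    ys, xs = np.nonzero(img)
    m00 = len(xs)
    cx, cy = xs.mean(), ys.mean()
    mu20 = np.sum((xs - cx) ** 2)
    mu02 = np.sum((ys - cy) ** 2)
    # For order p+q = 2, eta_pq = mu_pq / m00**2.
    return (mu20 + mu02) / m00 ** 2

base = np.zeros((64, 64)); base[10:20, 10:30] = 1        # 10x20 block
shifted = np.zeros((64, 64)); shifted[30:40, 25:45] = 1  # same block, translated
scaled = np.zeros((64, 64)); scaled[10:30, 10:50] = 1    # same shape, 2x scale

print(phi1(base), phi1(shifted), phi1(scaled))
# translation leaves phi1 unchanged; scaling changes it only by
# discretisation error
```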

  14. A Novel Anti-Spoofing Solution for Iris Recognition Toward Cosmetic Contact Lens Attack Using Spectral ICA Analysis.

    PubMed

    Hsieh, Sheng-Hsun; Li, Yung-Hui; Wang, Wei; Tien, Chung-Hao

    2018-03-06

    In this study, we employed a dual-band spectral imaging system to capture an iridal image from a cosmetic-contact-lens-wearing subject. By using independent component analysis to separate individual spectral primitives, we successfully distinguished the natural iris texture from the cosmetic contact lens (CCL) pattern, and restored the genuine iris patterns from the CCL-polluted image. Based on a database containing 200 test image pairs from 20 CCL-wearing subjects as the proof of concept, the recognition accuracy (False Rejection Rate: FRR) was improved from FRR = 10.52% to FRR = 0.57% with the proposed ICA anti-spoofing scheme.
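    The separation step can be imitated with a small FastICA-style loop. Everything below is a self-contained toy: the sources, mixing matrix, and contrast function are our own assumptions, not the paper's dual-band data or algorithm details; it only shows how ICA unmixes two independent components from two linear mixtures.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
# Two independent non-Gaussian sources, standing in for "iris texture"
# and "printed CCL pattern" spectral primitives.
S = np.vstack([np.sign(rng.standard_normal(n)) * rng.uniform(0.5, 1.5, n),
               rng.uniform(-1.0, 1.0, n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # hypothetical mixing matrix
X = A @ S

# Whitening.
Xc = X - X.mean(axis=1, keepdims=True)
E, D, _ = np.linalg.svd(np.cov(Xc))
Z = (E / np.sqrt(D)).T @ Xc

# FastICA with a tanh contrast, deflation over the two components.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wz = np.tanh(w @ Z)
        w_new = (Z * wz).mean(axis=1) - (1 - wz ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)  # stay orthogonal to earlier rows
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-10
        w = w_new
        if done:
            break
    W[i] = w

S_hat = W @ Z
# Up to order and sign, each recovered row correlates strongly with a source.
C = np.abs(np.corrcoef(np.vstack([S_hat, S])))[:2, 2:]
```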

  15. Spatial Uncertainty Modeling of Fuzzy Information in Images for Pattern Classification

    PubMed Central

    Pham, Tuan D.

    2014-01-01

    The modeling of the spatial distribution of image properties is important for many pattern recognition problems in science and engineering. Mathematical methods are needed to quantify the variability of this spatial distribution based on which a decision of classification can be made in an optimal sense. However, image properties are often subject to uncertainty due to both incomplete and imprecise information. This paper presents an integrated approach for estimating the spatial uncertainty of vagueness in images using the theory of geostatistics and the calculus of probability measures of fuzzy events. Such a model for the quantification of spatial uncertainty is utilized as a new image feature extraction method, based on which classifiers can be trained to perform the task of pattern recognition. Applications of the proposed algorithm to the classification of various types of image data suggest the usefulness of the proposed uncertainty modeling technique for texture feature extraction. PMID:25157744

  16. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training a feature extractor that can enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
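    The handcrafted half of such a hybrid feature can be sketched in a few lines. Below is a basic single-scale 8-neighbour LBP histogram (not the authors' multi-level MLBP, and the "deep" vector is a random stand-in for CNN features), concatenated the way the paper combines the two before the SVM.

```python
import numpy as np

def lbp_histogram(img):
    # 8-neighbour LBP code per interior pixel, then a normalised
    # 256-bin histogram.
    c = img[1:-1, 1:-1]
    neigh = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
             img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, nb in enumerate(neigh):
        code |= (nb >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32))        # stand-in face patch
h = lbp_histogram(img)                      # handcrafted part (256-dim)
deep = rng.standard_normal(128)             # stand-in for CNN features
hybrid = np.concatenate([h, deep])          # 256 + 128 = 384-dim hybrid
print(hybrid.shape)
```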

  17. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. Deep learning methods developed in the computer vision research community have proven suitable for automatically training a feature extractor that can enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from images captured by a visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.

  18. Pattern recognition and expert image analysis systems in biomedical image processing (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Oosterlinck, A.; Suetens, P.; Wu, Q.; Baird, M.; F. M., C.

    1987-09-01

    This paper gives an overview of pattern recognition (P.R.) techniques used in biomedical image processing and of problems related to the different P.R. solutions. The use of knowledge-based systems to overcome P.R. difficulties is also described. This is illustrated by a common example of a biomedical image processing application.

  19. Identity Recognition Algorithm Using Improved Gabor Feature Selection of Gait Energy Image

    NASA Astrophysics Data System (ADS)

    Chao, LIANG; Ling-yao, JIA; Dong-cheng, SHI

    2017-01-01

    This paper describes an effective gait recognition approach based on Gabor features of the gait energy image. In this paper, kernel Fisher analysis combined with a kernel matrix is proposed to select dominant features. The nearest neighbor classifier based on whitened cosine distance is used to discriminate different gait patterns. The proposed approach is tested on the CASIA and USF gait databases. The results show that our approach outperforms other state-of-the-art gait recognition approaches in terms of recognition accuracy and robustness.

  20. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    PubMed Central

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113
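    The circular-edge idea behind the iris localization step can be illustrated with a crude integro-differential sweep. This is a simplified stand-in for the paper's CED, run on a synthetic dark disc rather than a real NIR iris image: it picks the radius at which the mean intensity along a circle changes most.

```python
import numpy as np

# Synthetic 101x101 image: dark "iris" disc (radius 30) on a bright "sclera".
H = W = 101
cy = cx = 50
yy, xx = np.mgrid[0:H, 0:W]
r = np.hypot(yy - cy, xx - cx)
img = np.where(r < 30, 40.0, 200.0)

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)

def ring_mean(rad):
    # Mean intensity sampled along a circle of radius `rad`.
    ys = np.round(cy + rad * np.sin(theta)).astype(int)
    xs = np.round(cx + rad * np.cos(theta)).astype(int)
    return img[ys, xs].mean()

radii = np.arange(5, 45)
means = np.array([ring_mean(rad) for rad in radii])
grad = np.abs(np.diff(means))           # integro-differential-style cue
r_hat = radii[np.argmax(grad)]
print(r_hat)  # close to the true boundary radius of 30
```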

  1. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    PubMed

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.

  2. Neural network-based system for pattern recognition through a fiber optic bundle

    NASA Astrophysics Data System (ADS)

    Gamo-Aranda, Javier; Rodriguez-Horche, Paloma; Merchan-Palacios, Miguel; Rosales-Herrera, Pablo; Rodriguez, M.

    2001-04-01

    A neural network based system to identify images transmitted through a Coherent Fiber-optic Bundle (CFB) is presented. Patterns are generated in a computer, displayed on a Spatial Light Modulator, imaged onto the input face of the CFB, and recovered optically by a CCD sensor array for further processing. Input and output optical subsystems were designed and used to that end. The recognition step of the transmitted patterns is made by a powerful, widely-used, neural network simulator running on the control PC. A complete PC-based interface was developed to control the different tasks involved in the system. An optical analysis of the system capabilities was carried out prior to performing the recognition step. Several neural network topologies were tested, and the corresponding numerical results are also presented and discussed.

  3. Fundamental remote science research program. Part 2: Status report of the mathematical pattern recognition and image analysis project

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.

    1984-01-01

    The Mathematical Pattern Recognition and Image Analysis (MPRIA) Project is concerned with basic research problems related to the study of the Earth from remotely sensed measurements of its surface characteristics. The program goal is to better understand how to analyze the digital image that represents the spatial, spectral, and temporal arrangement of these measurements for purposes of making selected inferences about the Earth. This report summarizes the progress that has been made toward this program goal by each of the principal investigators in the MPRIA Program.

  4. Geometry Of Discrete Sets With Applications To Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Sinha, Divyendu

    1990-03-01

    In this paper we present a new framework for discrete black and white images that employs only integer arithmetic. This framework is shown to retain the essential characteristics of the framework for Euclidean images. We propose two norms and, based on them, define the permissible geometric operations on images. The basic invariants of our geometry are line images, the structure of an image, and the corresponding local property of strong attachment of pixels. The permissible operations also preserve 3x3 neighborhoods, area, and perpendicularity. The structure, patterns, and inter-pattern gaps in a discrete image are shown to be conserved by the magnification and contraction process. Our notions of approximate congruence, similarity, and symmetry are similar, in character, to the corresponding notions for Euclidean images [1]. We mention two discrete pattern recognition algorithms that work purely with integers and fit into our framework. Their performance has been shown to be on par with the performance of traditional geometric schemes. Moreover, all the undesired effects of finite-length registers in fixed-point arithmetic that plague traditional algorithms are non-existent in this family of algorithms.

  5. Hypothesis Support Mechanism for Mid-Level Visual Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Amador, Jose J (Inventor)

    2007-01-01

    A method of mid-level pattern recognition provides for a pose invariant Hough Transform by parametrizing pairs of points in a pattern with respect to at least two reference points, thereby providing a parameter table that is scale- or rotation-invariant. A corresponding inverse transform may be applied to test hypothesized matches in an image and a distance transform utilized to quantify the level of match.

  6. Scene Context Dependency of Pattern Constancy of Time Series Imagery

    NASA Technical Reports Server (NTRS)

    Woodell, Glenn A.; Jobson, Daniel J.; Rahman, Zia-ur

    2008-01-01

    A fundamental element of future generic pattern recognition technology is the ability to extract similar patterns for the same scene despite wide-ranging extraneous variables, including lighting, turbidity, sensor exposure variations, and signal noise. In the process of demonstrating pattern constancy of this kind for retinex/visual servo (RVS) image enhancement processing, we found that the pattern constancy performance depended somewhat on scene content. Most notably, the scene topography and, in particular, the scale and extent of the topography in an image, affects the pattern constancy the most. This paper explores these effects in more depth and presents experimental data from several time series tests. These results further quantify the impact of topography on pattern constancy. Despite this residual inconstancy, the results of overall pattern constancy testing support the idea that RVS image processing can be a universal front-end for generic visual pattern recognition. While the effects on pattern constancy were significant, the RVS processing still achieves a high degree of pattern constancy over a wide spectrum of scene content diversity and wide-ranging extraneous variations in lighting, turbidity, and sensor exposure.

  7. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    NASA Astrophysics Data System (ADS)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system built on the so-called logarithmical image visualization technique, which is inspired by the human visual system. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for facial recognition. The Yale database, the Yale-B database, and the ATT database are used for computer simulation accuracy and efficiency testing. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
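    The core of a logarithmical visualization step is a log mapping of intensities. The one-liner below is a generic log transform (not the authors' exact HVS-based operator): it compresses dynamic range so that dark facial regions keep usable contrast before feature extraction.

```python
import numpy as np

# Tiny intensity sample covering dark to bright pixels.
img = np.array([[1, 10], [100, 255]], dtype=float)

# s = c * log(1 + r), scaled so 255 maps to 255.
log_img = 255 * np.log1p(img) / np.log1p(255)

# Dark values are lifted (1 -> ~32, 10 -> ~110) while 255 stays 255,
# flattening illumination differences across the face.
print(np.round(log_img).astype(int))
```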

  8. A standardization model based on image recognition for performance evaluation of an oral scanner.

    PubMed

    Seo, Sang-Wan; Lee, Wan-Sun; Byun, Jae-Young; Lee, Kyu-Bok

    2017-12-01

    Accurate information is essential in dentistry. The image information of missing teeth is used in optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was examined based on cases of image recognition errors in linear discriminant analysis (LDA), and a model combining the variables with reference to ISO 12836:2015 was designed. The basic model was fabricated by applying 4 factors to the tooth profile (chamfer, groove, curve, and square) and to the bottom surface. Photo-type and video-type scanners were used to analyze 3D images after image capture. The scans were performed several times according to the prescribed sequence, distinguishing models that formed a 3D shape from those that did not, and the results confirmed the best-performing design. In the case of the initial basic model, a 3D shape could not be obtained by scanning even when several shots were taken. Subsequently, the recognition rate of the image improved with every variable factor, with differences depending on the tooth profile and the pattern of the floor surface. Based on the recognition error of the LDA, the recognition rate decreases when the model has a similar pattern. Therefore, to obtain accurate 3D data, the difference between classes needs to be provided when developing a standardized model.

  9. Finger vein verification system based on sparse representation.

    PubMed

    Xin, Yang; Liu, Zhi; Zhang, Haixia; Zhang, Hong

    2012-09-01

    Finger vein verification is a promising biometric pattern for personal identification in terms of security and convenience. The recognition performance of this technology heavily relies on the quality of finger vein images and on the recognition algorithm. To achieve efficient recognition performance, a special finger vein imaging device is developed, and a finger vein recognition method based on sparse representation is proposed. The motivation for the proposed method is that finger vein images exhibit a sparse property. In the proposed system, the regions of interest (ROIs) in the finger vein images are segmented and enhanced. Sparse representation and sparsity preserving projection on ROIs are performed to obtain the features. Finally, the features are measured for recognition. An equal error rate of 0.017% was achieved based on the finger vein image database, which contains images that were captured by using the near-IR imaging device that was developed in this study. The experimental results demonstrate that the proposed method is faster and more robust than previous methods.

  10. Automated phenotype pattern recognition of zebrafish for high-throughput screening.

    PubMed

    Schutera, Mark; Dickmeis, Thomas; Mione, Marina; Peravali, Ravindra; Marcato, Daniel; Reischl, Markus; Mikut, Ralf; Pylatiuk, Christian

    2016-07-03

    Over the last years, the zebrafish (Danio rerio) has become a key model organism in genetic and chemical screenings. A growing number of experiments and an expanding interest in zebrafish research make it increasingly essential to automate the distribution of embryos and larvae into standard microtiter plates or other sample holders for screening, often according to phenotypical features. Until now, such sorting processes have been carried out by manual handling of the larvae and manual feature detection. Here, a prototype platform for image acquisition together with classification software is presented. Zebrafish embryos and larvae and their features, such as pigmentation, are detected automatically from the image. Zebrafish of 4 different phenotypes can be classified through pattern recognition at 72 h post fertilization (hpf), allowing the software to classify an embryo into 2 distinct phenotypic classes: wild-type versus variant. The zebrafish phenotypes are classified with an accuracy of 79-99% without any user interaction. A description of the prototype platform and of the algorithms for image processing and pattern recognition is presented.

  11. Pattern recognition and feature extraction with an optical Hough transform

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel

    2016-09-01

    Pattern recognition and localization, along with feature extraction, are image processing applications of great interest in defect inspection and robot vision, among others. In comparison to purely digital methods, the attractiveness of optical processors for pattern recognition lies in their highly parallel operation and real-time processing capability. This work presents an optical implementation of the generalized Hough transform (GHT), a well-established technique for the recognition of geometrical features in binary images. Detection of a geometric feature under the GHT is accomplished by mapping the original image to an accumulator space; the large computational requirements for this mapping make the optical implementation an attractive alternative to digital-only methods. Starting from the integral representation of the GHT, it is possible to devise an optical setup where the transformation is obtained and the size and orientation parameters can be controlled, allowing for dynamic scale- and orientation-variant pattern recognition. A compact system for the above purposes results from the use of an electrically tunable lens for scale control and a rotating pupil mask for orientation variation, implemented on a high-contrast spatial light modulator (SLM). Real-time operation (as limited by the frame rate of the device used to capture the GHT) can also be achieved, allowing for the processing of video sequences. Besides, by thresholding the GHT (with the aid of another SLM) and inverse transforming (which is optically achieved in the incoherent system under an appropriate focusing setting), the previously detected features of interest can be extracted.
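    The accumulator mapping at the heart of the GHT can be sketched digitally. Below is a toy, orientation-free variant of the R-table scheme (our own stand-in for the optical accumulator, with an invented "L" shape and clutter): template edge points vote through stored displacements to a reference point, and the accumulator peak localizes the shape.

```python
import numpy as np

# Template: an "L" of edge points, with the reference point at (0, 0).
template = [(0, 0), (0, 4), (0, 8), (4, 0), (8, 0)]
ref = (0, 0)
displacements = [(ref[0] - y, ref[1] - x) for (y, x) in template]

# Scene: the same shape translated by (+7, +11), plus two clutter points.
offset = (7, 11)
scene = ([(y + offset[0], x + offset[1]) for (y, x) in template]
         + [(3, 2), (15, 18)])

# Each scene edge point votes for every candidate reference position.
acc = np.zeros((32, 32), dtype=int)
for (qy, qx) in scene:
    for (dy, dx) in displacements:
        vy, vx = qy + dy, qx + dx
        if 0 <= vy < 32 and 0 <= vx < 32:
            acc[vy, vx] += 1

peak = np.unravel_index(np.argmax(acc), acc.shape)
print(peak, acc[peak])  # (7, 11) with 5 votes: the translated reference
```

A full GHT would index the displacement table by local gradient orientation, which is what keeps the vote count manageable for rich shapes.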

  12. A Novel Anti-Spoofing Solution for Iris Recognition Toward Cosmetic Contact Lens Attack Using Spectral ICA Analysis

    PubMed Central

    Hsieh, Sheng-Hsun; Wang, Wei; Tien, Chung-Hao

    2018-01-01

    In this study, we maneuvered a dual-band spectral imaging system to capture an iridal image from a cosmetic-contact-lens-wearing subject. By using the independent component analysis to separate individual spectral primitives, we successfully distinguished the natural iris texture from the cosmetic contact lens (CCL) pattern, and restored the genuine iris patterns from the CCL-polluted image. Based on a database containing 200 test image pairs from 20 CCL-wearing subjects as the proof of concept, the recognition accuracy (False Rejection Rate: FRR) was improved from FRR = 10.52% to FRR = 0.57% with the proposed ICA anti-spoofing scheme. PMID:29509692

  13. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    NASA Astrophysics Data System (ADS)

    Acciarri, R.; Adams, C.; An, R.; Anthony, J.; Asaadi, J.; Auger, M.; Bagby, L.; Balasubramanian, S.; Baller, B.; Barnes, C.; Barr, G.; Bass, M.; Bay, F.; Bishai, M.; Blake, A.; Bolton, T.; Camilleri, L.; Caratelli, D.; Carls, B.; Castillo Fernandez, R.; Cavanna, F.; Chen, H.; Church, E.; Cianci, D.; Cohen, E.; Collin, G. H.; Conrad, J. M.; Convery, M.; Crespo-Anadón, J. I.; Del Tutto, M.; Devitt, D.; Dytman, S.; Eberly, B.; Ereditato, A.; Escudero Sanchez, L.; Esquivel, J.; Fadeeva, A. A.; Fleming, B. T.; Foreman, W.; Furmanski, A. P.; Garcia-Gamez, D.; Garvey, G. T.; Genty, V.; Goeldi, D.; Gollapinni, S.; Graf, N.; Gramellini, E.; Greenlee, H.; Grosso, R.; Guenette, R.; Hackenburg, A.; Hamilton, P.; Hen, O.; Hewes, J.; Hill, C.; Ho, J.; Horton-Smith, G.; Hourlier, A.; Huang, E.-C.; James, C.; Jan de Vries, J.; Jen, C.-M.; Jiang, L.; Johnson, R. A.; Joshi, J.; Jostlein, H.; Kaleko, D.; Karagiorgi, G.; Ketchum, W.; Kirby, B.; Kirby, M.; Kobilarcik, T.; Kreslo, I.; Laube, A.; Li, Y.; Lister, A.; Littlejohn, B. R.; Lockwitz, S.; Lorca, D.; Louis, W. C.; Luethi, M.; Lundberg, B.; Luo, X.; Marchionni, A.; Mariani, C.; Marshall, J.; Martinez Caicedo, D. A.; Meddage, V.; Miceli, T.; Mills, G. B.; Moon, J.; Mooney, M.; Moore, C. D.; Mousseau, J.; Murrells, R.; Naples, D.; Nienaber, P.; Nowak, J.; Palamara, O.; Paolone, V.; Papavassiliou, V.; Pate, S. F.; Pavlovic, Z.; Piasetzky, E.; Porzio, D.; Pulliam, G.; Qian, X.; Raaf, J. L.; Rafique, A.; Rochester, L.; Rudolf von Rohr, C.; Russell, B.; Schmitz, D. W.; Schukraft, A.; Seligman, W.; Shaevitz, M. H.; Sinclair, J.; Smith, A.; Snider, E. L.; Soderberg, M.; Söldner-Rembold, S.; Soleti, S. R.; Spentzouris, P.; Spitz, J.; St. John, J.; Strauss, T.; Szelc, A. M.; Tagg, N.; Terao, K.; Thomson, M.; Toups, M.; Tsai, Y.-T.; Tufanli, S.; Usher, T.; Van De Pontseele, W.; Van de Water, R. G.; Viren, B.; Weber, M.; Wickremasinghe, D. A.; Wolbers, S.; Wongjirad, T.; Woodruff, K.; Yang, T.; Yates, L.; Zeller, G. P.; Zennamo, J.; Zhang, C.

    2018-01-01

    The development and operation of liquid-argon time-projection chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  14. Automatic recognition of ship types from infrared images using superstructure moment invariants

    NASA Astrophysics Data System (ADS)

    Li, Heng; Wang, Xinyu

    2007-11-01

    Automatic object recognition is an active area of interest for military and commercial applications. In this paper, a system addressing autonomous recognition of ship types in infrared images is proposed. Firstly, an approach to segmentation based on detecting salient features of the target, with subsequent shadow removal, is proposed as the basis of the subsequent object recognition. Considering that the differences between the shapes of various ships lie mainly in their superstructures, we then use superstructure moment functions invariant to translation, rotation, and scale differences in input patterns, and develop a robust algorithm for extracting the ship superstructure. Subsequently, a back-propagation neural network is used as the classifier in the recognition stage, and projection images of simulated three-dimensional ship models are used as the training sets. Our recognition model was implemented and experimentally validated using both simulated three-dimensional ship model images and real images derived from video of an AN/AAS-44V Forward Looking Infrared (FLIR) sensor.

  15. Fuzzy difference-of-Gaussian-based iris recognition method for noisy iris images

    NASA Astrophysics Data System (ADS)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Moon, Kiyoung

    2010-06-01

    Iris recognition is used for information security with a high confidence level because it shows outstanding recognition accuracy by using human iris patterns with high degrees of freedom. However, iris recognition accuracy can be reduced by noisy iris images with optical and motion blurring. We propose a new iris recognition method based on the fuzzy difference-of-Gaussian (DOG) for noisy iris images. This study is novel in three ways compared to previous works: (1) The proposed method extracts iris feature values using the DOG method, which is robust to local variations of illumination and shows fine texture information, including various frequency components. (2) When determining iris binary codes, image noises that cause the quantization error of the feature values are reduced with the fuzzy membership function. (3) The optimal parameters of the DOG filter and the fuzzy membership function are determined in terms of iris recognition accuracy. Experimental results showed that the performance of the proposed method was better than that of previous methods for noisy iris images.
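
    The fuzzy membership weighting and the tuned filter parameters are specific to the paper, but the underlying difference-of-Gaussian step can be sketched directly. The function name and sigma values below are illustrative assumptions, not the authors' optimized settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussian(image, sigma_fine=1.0, sigma_coarse=2.0):
    """Band-pass DoG response: difference of two Gaussian blurs.
    Subtracting the coarse blur removes slow illumination gradients,
    which is why DoG features are robust to local lighting variation."""
    return gaussian_filter(image, sigma_fine) - gaussian_filter(image, sigma_coarse)
```

    Because the filter is linear, adding a constant illumination offset to the input leaves the DoG response unchanged.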

  16. Optical character recognition based on nonredundant correlation measurements.

    PubMed

    Braunecker, B; Hauck, R; Lohmann, A W

    1979-08-15

    The essence of character recognition is a comparison between the unknown character and a set of reference patterns. Usually, these reference patterns are all possible characters themselves, the whole alphabet in the case of letter characters. Obviously, N analog measurements are highly redundant, since only K = log₂N binary decisions are enough to identify one out of N characters. Therefore, we devised K reference patterns accordingly. These patterns, called principal components, are found by digital image processing, but used in an optical analog computer. We will explain the concept of principal components, and we will describe experiments with several optical character recognition systems based on this concept.
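
    A hedged digital sketch of the idea, standing in for the optical analog computer: K = ⌈log₂N⌉ principal components of the alphabet serve as the reference patterns, and an unknown character is identified from its K projection measurements instead of N full correlations. The function names are ours.

```python
import numpy as np

def principal_reference_patterns(alphabet):
    """K = ceil(log2 N) principal components of an alphabet of N
    flattened character images; correlating an unknown character with
    these K patterns yields K measurements instead of N."""
    X = alphabet - alphabet.mean(axis=0)           # centre the characters
    K = int(np.ceil(np.log2(len(alphabet))))
    # rows of Vt are orthonormal principal directions (reference patterns)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:K]

def identify(char, alphabet, patterns):
    """Nearest neighbour in the K-dimensional measurement space."""
    mean = alphabet.mean(axis=0)
    codes = (alphabet - mean) @ patterns.T         # N x K measurement table
    probe = (char - mean) @ patterns.T
    return int(np.argmin(np.linalg.norm(codes - probe, axis=1)))
```

    For an 8-character alphabet this reduces the measurement count from 8 correlations to 3.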

  17. Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan

    2018-01-01

    Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become important for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio, and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (local binary pattern) and SWLD (simplified Weber local descriptor) features is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The approach is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
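
    The SWLD encoding and the fusion strategy are specific to the paper, but the basic 8-neighbour LBP operator it builds on is standard and can be sketched in a few lines of numpy (the function name is ours).

```python
import numpy as np

def lbp_image(gray):
    """8-neighbour local binary pattern: each pixel's code is the 8-bit
    string of comparisons 'neighbour >= centre', so the code depends only
    on local intensity ordering, not on absolute brightness."""
    c = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy: gray.shape[0] - 1 + dy, 1 + dx: gray.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

    The monotone-ordering property is what makes LBP attractive under the illumination variation mentioned in the abstract: any positive affine rescaling of the image yields identical codes.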

  18. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition.

    PubMed

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field, and human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed based on hybrid texture-edge local pattern coding for feature extraction and the integration of information from RGB and depth videos. The paper mainly focuses on background subtraction for the RGB and depth video sequences, extraction and integration of history images of the behavior outlines, feature extraction, and classification. The new method achieves rapid and efficient recognition of behavior videos. A large number of experiments show that the proposed method is faster and has a higher recognition rate, and that it is robust to different environmental colors, lighting, and other factors. Meanwhile, the hybrid texture-edge uniform local binary pattern feature can be used in most 3D behavior recognition tasks.

  19. The software peculiarities of pattern recognition in track detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starkov, N.

    Different kinds of nuclear track recognition algorithms are presented. Several complicated examples of their use in physical experiments are considered, and some methods for processing complicated images are described.

  20. Fuzzy tree automata and syntactic pattern recognition.

    PubMed

    Lee, E T

    1982-04-01

    An approach of representing patterns by trees and processing these trees by fuzzy tree automata is described. Fuzzy tree automata are defined and investigated. The results include that the class of fuzzy root-to-frontier recognizable Σ-trees is closed under intersection, union, and complementation; thus, the class of fuzzy root-to-frontier recognizable Σ-trees forms a Boolean algebra. Fuzzy tree automata are applied to processing fuzzy tree representations of patterns based on syntactic pattern recognition. The grade of acceptance is defined and investigated. Quantitative measures of "approximate isosceles triangle," "approximate elongated isosceles triangle," "approximate rectangle," and "approximate cross" are defined and used in the illustrative examples of this approach. By using these quantitative measures, a house, a house with a high roof, and a church are also presented as illustrative examples. In addition, three fuzzy tree automata are constructed which have the capability of processing the fuzzy tree representations of "fuzzy houses," "houses with high roofs," and "fuzzy churches," respectively. The results may have useful applications in pattern recognition, image processing, artificial intelligence, pattern database design and processing, image science, and pictorial information systems.

  1. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Reports from the face recognition community typically concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery, and mixed images. Finally, the CCR values are compared, and the largest CCR corresponds to the matched face. The time cost of each face match is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression; on the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. The JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
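
    The paper's exact CCR formula and JPEG mixing step are not given in the abstract, so the sketch below substitutes a generic compression-based similarity in the same spirit (close to the normalized compression distance), using zlib on raw byte streams: if probe and gallery are alike, compressing their concatenation costs little more than compressing either alone. The function names are ours.

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed length of a byte stream at maximum zlib effort."""
    return len(zlib.compress(data, 9))

def compression_similarity(probe: bytes, gallery: bytes) -> float:
    """NCD-style stand-in for the paper's composite compression ratio:
    shared structure makes the 'mixed' stream compress well, so the
    score rises toward 1 for matching inputs."""
    mixed = csize(probe + gallery)
    return (csize(probe) + csize(gallery) - mixed) / max(csize(probe), csize(gallery))
```

    Matching would then pick the gallery entry with the largest score, mirroring the "largest CCR corresponds to the matched face" rule.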

  2. The use of global image characteristics for neural network pattern recognitions

    NASA Astrophysics Data System (ADS)

    Kulyas, Maksim O.; Kulyas, Oleg L.; Loshkarev, Aleksey S.

    2017-04-01

    A recognition system is considered in which the information is conveyed by images of symbols captured by a television camera. The coefficients of a two-dimensional Fourier transform, generated in a special way, serve as the object descriptors. For the classification task, a single-layer neural network trained on reference images is used. Fast learning of the neural network, with single-neuron calculation of the coefficients, is applied.
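
    The abstract's "special way" of generating the Fourier coefficients is not specified; the following is only a minimal illustration of the general idea of a global 2-D Fourier descriptor, keeping the low-frequency magnitudes so the descriptor ignores where the symbol sits in the frame (function name and the cutoff `k` are our assumptions).

```python
import numpy as np

def fourier_descriptor(image, k=8):
    """Global descriptor from the low-frequency magnitudes of the 2-D FFT.
    Magnitudes discard phase, so the descriptor is unchanged by circular
    translation of the symbol within the frame."""
    mag = np.abs(np.fft.fft2(image))
    return mag[:k, :k].flatten()
```

    The descriptor vector could then feed a single-layer classifier as in the paper.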

  3. Infrared face recognition based on LBP histogram and KW feature selection

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua

    2014-07-01

    The conventional feature represented by the local binary pattern (LBP) histogram still has room for performance improvement. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on the LBP histogram representation. To extract robust local features from infrared face images, LBP is chosen to obtain the composition of micro-patterns in sub-blocks. Based on statistical test theory, the Kruskal-Wallis (KW) feature selection method is proposed to select the LBP patterns that are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms the traditional methods based on the LBP histogram, the discrete cosine transform (DCT), or principal component analysis (PCA).
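
    The KW selection step can be sketched generically: rank each feature column by the Kruskal-Wallis H statistic across classes and keep the most discriminative ones. This is a plain scipy sketch of the statistical test the paper applies to LBP histogram bins; the function name and interface are ours.

```python
import numpy as np
from scipy.stats import kruskal

def kw_select(features, labels, n_keep):
    """Rank each feature (column) by the Kruskal-Wallis H statistic
    across classes and return the indices of the n_keep features whose
    distributions differ most between classes."""
    classes = np.unique(labels)
    scores = np.array([
        kruskal(*(features[labels == c, j] for c in classes)).statistic
        for j in range(features.shape[1])
    ])
    return np.argsort(scores)[::-1][:n_keep]
```

    Being rank-based, the test makes no normality assumption about the histogram bin values, which is the motivation for using it over, say, ANOVA.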

  4. Ultrasonography of ovarian masses using a pattern recognition approach

    PubMed Central

    Jung, Sung Il

    2015-01-01

    As a primary imaging modality, ultrasonography (US) can provide diagnostic information for evaluating ovarian masses. Using a pattern recognition approach with gray-scale transvaginal US, ovarian masses can be diagnosed with high specificity and sensitivity. Doppler US may allow ovarian masses to be classified as benign or malignant with even greater confidence. In order to differentiate benign from malignant ovarian masses, it is necessary to categorize them as unilocular cyst, unilocular solid cyst, multilocular cyst, multilocular solid cyst, or solid tumor, and then to detect the typical US features that indicate malignancy based on the pattern recognition approach. PMID:25797108

  5. Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification

    NASA Astrophysics Data System (ADS)

    Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma

    2018-04-01

    Designing powerful discriminative texture features that are robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and the analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.

  6. Attentional biases and memory for emotional stimuli in men and male rhesus monkeys.

    PubMed

    Lacreuse, Agnès; Schatz, Kelly; Strazzullo, Sarah; King, Hanna M; Ready, Rebecca

    2013-11-01

    We examined attentional biases for social and non-social emotional stimuli in young adult men and compared the results to those of male rhesus monkeys (Macaca mulatta) previously tested in a similar dot-probe task (King et al. in Psychoneuroendocrinology 37(3):396-409, 2012). Recognition memory for these stimuli was also analyzed in each species, using a recognition memory task in humans and a delayed non-matching-to-sample task in monkeys. We found that both humans and monkeys displayed a similar pattern of attentional biases toward threatening facial expressions of conspecifics. The bias was significant in monkeys and of marginal significance in humans. In addition, humans, but not monkeys, exhibited an attentional bias away from negative non-social images. Attentional biases for social and non-social threat differed significantly, with both species showing a pattern of vigilance toward negative social images and avoidance of negative non-social images. Positive stimuli did not elicit significant attentional biases for either species. In humans, emotional content facilitated the recognition of non-social images, but no effect of emotion was found for the recognition of social images. Recognition accuracy was not affected by emotion in monkeys, but response times were faster for negative relative to positive images. Altogether, these results suggest shared mechanisms of social attention in humans and monkeys, with both species showing a pattern of selective attention toward threatening faces of conspecifics. These data are consistent with the view that selective vigilance to social threat is the result of evolutionary constraints. Yet, selective attention to threat was weaker in humans than in monkeys, suggesting that regulatory mechanisms enable non-anxious humans to reduce sensitivity to social threat in this paradigm, likely through enhanced prefrontal control and reduced amygdala activation. 
In addition, the findings emphasize important differences in attentional biases to social versus non-social threat in both species. Differences in the impact of emotional stimuli on recognition memory between monkeys and humans will require further study, as methodological differences in the recognition tasks may have affected the results.

  7. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    PubMed Central

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  8. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    PubMed

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  9. A novel iris patterns matching algorithm of weighted polar frequency correlation

    NASA Astrophysics Data System (ADS)

    Zhao, Weijie; Jiang, Linhua

    2014-11-01

    Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method, Weighted Polar Frequency Correlation (WPFC), to match and evaluate two iris images; in fact, it can also be used to evaluate the similarity of any two images. The WPFC method is a novel matching and evaluation method for iris image matching that is completely different from the conventional methods. For instance, John Daugman's classical method of iris recognition uses 2D Gabor wavelets to extract features of the iris image into a compact bit stream, and then matches two bit streams with the Hamming distance. Our new method is based on correlation in the polar coordinate system in the frequency domain with regulated weights. The method is motivated by the observation that the iris pattern containing far more information for recognition is the fine structure at high frequencies, rather than the gross shape of the iris image. Therefore, we transform iris images into the frequency domain and assign different weights to different frequencies. We then calculate the correlation of the two iris images in the frequency domain, evaluate the match by summing the discrete correlation values with regulated weights, and compare the value with a preset threshold to tell whether the two iris images were captured from the same person or not. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable, and provides a new prospect for iris recognition systems.
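
    A reduced sketch of the weighted frequency correlation idea, omitting the polar resampling of the full WPFC method: correlate the magnitude spectra of the two images with weights that grow with radial frequency, so that the fine high-frequency texture dominates the score. The function name and the specific weight law are our assumptions.

```python
import numpy as np

def weighted_frequency_correlation(a, b):
    """Normalised correlation of frequency-magnitude spectra, weighted
    toward high radial frequencies where fine iris texture lives.
    Returns 1.0 for identical images, less for dissimilar ones."""
    fa, fb = np.abs(np.fft.fft2(a)), np.abs(np.fft.fft2(b))
    fy = np.fft.fftfreq(a.shape[0])[:, None]
    fx = np.fft.fftfreq(a.shape[1])[None, :]
    w = np.hypot(fy, fx)              # weight grows with radial frequency
    wa, wb = (w * fa).ravel(), (w * fb).ravel()
    return float(wa @ wb / (np.linalg.norm(wa) * np.linalg.norm(wb)))
```

    Thresholding this score corresponds to the accept/reject decision described in the abstract.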

  10. Optical Processing of Speckle Images with Bacteriorhodopsin for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Tucker, Deanne (Technical Monitor)

    1994-01-01

    Logarithmic processing of images with multiplicative noise characteristics can be utilized to transform the image into one with an additive noise distribution. This simplifies subsequent image processing steps for applications such as image restoration or correlation for pattern recognition. One particularly common form of multiplicative noise is speckle, for which the logarithmic operation not only produces additive noise, but also makes it of constant variance (signal-independent). We examine the optical transmission properties of some bacteriorhodopsin films here and find them well suited to implement such a pointwise logarithmic transformation optically in a parallel fashion. We present experimental results of the optical conversion of speckle images into transformed images with additive, signal-independent noise statistics using the real-time photochromic properties of bacteriorhodopsin. We provide an example of improved correlation performance in terms of correlation peak signal-to-noise for such a transformed speckle image.
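
    The additive-noise property the film exploits is just the algebra of the logarithm; a one-line numpy sketch makes it concrete (the synthetic gamma-distributed speckle below is an illustrative model, not the paper's data).

```python
import numpy as np

def to_additive(observed):
    """Pointwise log maps a multiplicative speckle model
    observed = signal * noise to an additive one
    log(observed) = log(signal) + log(noise), so the noise term no
    longer scales with the signal; the paper implements this pointwise
    transform optically with bacteriorhodopsin films."""
    return np.log(observed)
```

    After the transform, standard linear restoration or correlation techniques designed for additive, signal-independent noise apply directly.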

  11. Automated Processing of 2-D Gel Electrophoretograms of Genomic DNA for Hunting Pathogenic DNA Molecular Changes.

    PubMed

    Takahashi; Nakazawa; Watanabe; Konagaya

    1999-01-01

    We have developed the automated processing algorithms for 2-dimensional (2-D) electrophoretograms of genomic DNA based on RLGS (Restriction Landmark Genomic Scanning) method, which scans the restriction enzyme recognition sites as the landmark and maps them onto a 2-D electrophoresis gel. Our powerful processing algorithms realize the automated spot recognition from RLGS electrophoretograms and the automated comparison of a huge number of such images. In the final stage of the automated processing, a master spot pattern, on which all the spots in the RLGS images are mapped at once, can be obtained. The spot pattern variations which seemed to be specific to the pathogenic DNA molecular changes can be easily detected by simply looking over the master spot pattern. When we applied our algorithms to the analysis of 33 RLGS images derived from human colon tissues, we successfully detected several colon tumor specific spot pattern changes.

  12. Application of syntactic methods of pattern recognition for data mining and knowledge discovery in medicine

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Tadeusiewicz, Ryszard

    2000-04-01

    This paper presents and discusses possible applications of selected algorithms belonging to the group of syntactic methods of pattern recognition, used to analyze and extract features of shapes and to diagnose morphological lesions seen on selected medical images. This method is particularly useful for specialist morphological analysis of the shapes of selected organs of the abdominal cavity, conducted to diagnose disease symptoms occurring in the main pancreatic ducts, upper segments of the ureters, and the renal pelvis. Analysis of the correct morphology of these organs is possible with the application of the sequential and tree methods belonging to the group of syntactic methods of pattern recognition. The objective of this analysis is to support early diagnosis of disease lesions, mainly characteristic of carcinoma and pancreatitis, based on examinations of ERCP images, and to diagnose morphological lesions in the ureters and renal pelvis based on an analysis of urograms. In the analysis of ERCP images the main objective is to recognize morphological lesions in the pancreatic ducts characteristic of carcinoma and chronic pancreatitis, while in the case of kidney radiogram analysis the aim is to diagnose local irregularities of the ureter lumen and to examine the morphology of the renal pelvis and renal calyxes. Diagnosis of the above-mentioned lesions has been conducted with the use of syntactic methods of pattern recognition, in particular languages describing shape features and context-free sequential attributed grammars. These methods allow the aforementioned lesions to be recognized and described very efficiently on images obtained as a result of initial image processing of width diagrams of the examined structures. Additionally, to support the analysis of the correct structure of the renal pelvis, a method using a tree grammar for syntactic pattern recognition to define its correct morphological shapes has been presented.

  13. Advanced optical correlation and digital methods for pattern matching—50th anniversary of Vander Lugt matched filter

    NASA Astrophysics Data System (ADS)

    Millán, María S.

    2012-10-01

    On the verge of the 50th anniversary of Vander Lugt’s formulation for pattern matching based on matched filtering and optical correlation, we acknowledge the very intense research activity developed in the field of correlation-based pattern recognition during this period of time. The paper reviews some domains that appeared as emerging fields in the last years of the 20th century and have been developed later on in the 21st century. Such is the case of three-dimensional (3D) object recognition, biometric pattern matching, optical security and hybrid optical-digital processors. 3D object recognition is a challenging case of multidimensional image recognition because of its implications in the recognition of real-world objects independent of their perspective. Biometric recognition is essentially pattern recognition for which the personal identification is based on the authentication of a specific physiological characteristic possessed by the subject (e.g. fingerprint, face, iris, retina, and multifactor combinations). Biometric recognition often appears combined with encryption-decryption processes to secure information. The optical implementations of correlation-based pattern recognition processes still rely on the 4f-correlator, the joint transform correlator, or some of their variants. But the many applications developed in the field have been pushing the systems for a continuous improvement of their architectures and algorithms, thus leading towards merged optical-digital solutions.

  14. A robust star identification algorithm with star shortlisting

    NASA Astrophysics Data System (ADS)

    Mehta, Deval Samirbhai; Chen, Shoushun; Low, Kay Soon

    2018-05-01

    A star tracker provides the most accurate attitude solution in terms of arc seconds compared to the other existing attitude sensors. When no prior attitude information is available, it operates in "Lost-In-Space (LIS)" mode. Star pattern recognition, also known as star identification algorithm, forms the most crucial part of a star tracker in the LIS mode. Recognition reliability and speed are the two most important parameters of a star pattern recognition technique. In this paper, a novel star identification algorithm with star ID shortlisting is proposed. Firstly, the star IDs are shortlisted based on worst-case patch mismatch, and later stars are identified in the image by an initial match confirmed with a running sequential angular match technique. The proposed idea is tested on 16,200 simulated star images having magnitude uncertainty, noise stars, positional deviation, and varying size of the field of view. The proposed idea is also benchmarked with the state-of-the-art star pattern recognition techniques. Finally, the real-time performance of the proposed technique is tested on the 3104 real star images captured by a star tracker SST-20S currently mounted on a satellite. The proposed technique can achieve an identification accuracy of 98% and takes only 8.2 ms for identification on real images. Simulation and real-time results depict that the proposed technique is highly robust and achieves a high speed of identification suitable for actual space applications.
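
    The "running sequential angular match" confirmation step rests on the fact that pairwise angles between star directions are invariant under any rotation of the camera. A toy numpy sketch of that check (function names and the tolerance are our assumptions, not the SST-20S implementation):

```python
import numpy as np

def angular_distance(v1, v2):
    """Angle (radians) between two unit direction vectors toward stars."""
    return np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))

def sequential_angular_match(observed, catalog, tol=1e-3):
    """Confirm a candidate identification by sequentially checking that
    every observed pairwise star angle agrees with the catalog value;
    any single mismatch beyond tol rejects the hypothesis."""
    for i in range(len(observed)):
        for j in range(i + 1, len(observed)):
            if abs(angular_distance(observed[i], observed[j])
                   - angular_distance(catalog[i], catalog[j])) > tol:
                return False
    return True
```

    Because the test uses only relative angles, it succeeds for any spacecraft attitude, which is what "Lost-In-Space" operation requires.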

  15. Artificial Immune System for Recognizing Patterns

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2005-01-01

    A method of recognizing or classifying patterns is based on an artificial immune system (AIS), which includes an algorithm and a computational model of nonlinear dynamics inspired by the behavior of a biological immune system. The method has been proposed as the theoretical basis of the computational portion of a star-tracking system aboard a spacecraft. In that system, a newly acquired star image would be treated as an antigen that would be matched by an appropriate antibody (an entry in a star catalog). The method would enable rapid convergence, would afford robustness in the face of noise in the star sensors, would enable recognition of star images acquired in any sensor or spacecraft orientation, and would not make an excessive demand on the computational resources of a typical spacecraft. Going beyond the star-tracking application, the AIS-based pattern-recognition method is potentially applicable to pattern-recognition and -classification processes for diverse purposes -- for example, reconnaissance, detecting intruders, and mining data.

  16. Real-time polarization imaging algorithm for camera-based polarization navigation sensors.

    PubMed

    Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli

    2017-04-10

    Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point-source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes benefit from their high spatial resolution but incur a heavy computation load. The pattern recognition step in most polarization imaging algorithms involves several nonlinear calculations that impose a significant computation burden. In this paper, the polarization imaging and pattern recognition algorithms are optimized by reducing them to several linear calculations, exploiting the orthogonality of the Stokes parameters together with the features of the solar meridian and the patterns of the polarized skylight, without affecting precision. The algorithm contains a pattern recognition stage based on a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity. The test showed that the running time decreased from several thousand milliseconds to several tens of milliseconds. Through simulations and experiments, it was found that the algorithm can measure orientation without reducing precision. It can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
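
    The linearity the paper exploits is visible in the textbook Stokes computation: from four polarizer-channel intensities, the linear Stokes components are plain subtractions, and only the final angle extraction is nonlinear. A hedged per-pixel sketch (function name ours; this is the standard formula, not the paper's full pipeline):

```python
import numpy as np

def angle_of_polarization(i0, i45, i90, i135):
    """Linear Stokes components from four polarizer channels, then the
    angle of polarization. All per-pixel work before the single arctan
    reduces to additions and subtractions."""
    s1 = i0 - i90                     # linear Stokes component S1
    s2 = i45 - i135                   # linear Stokes component S2
    return 0.5 * np.arctan2(s2, s1)   # angle of polarization (radians)
```

    Feeding synthetic Malus-law intensities for a known polarization angle recovers that angle exactly, which is the property the skylight-pattern recognition builds on.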

  17. Multimodality imaging and state-of-art GPU technology in discriminating benign from malignant breast lesions on real time decision support system

    NASA Astrophysics Data System (ADS)

    Kostopoulos, S.; Sidiropoulos, K.; Glotsos, D.; Dimitropoulos, N.; Kalatzis, I.; Asvestas, P.; Cavouras, D.

    2014-03-01

    The aim of this study was to design a pattern recognition system for assisting the diagnosis of breast lesions, using image information from the Ultrasound (US) and Digital Mammography (DM) imaging modalities. State-of-the-art computer technology was employed based on commercial Graphics Processing Unit (GPU) cards and parallel programming. An experienced radiologist outlined breast lesions on both US and DM images from 59 patients using a custom-designed computer software application. Textural features were extracted from each lesion and were used to design the pattern recognition system. Several classifiers were tested for the highest performance in discriminating benign from malignant lesions. Classifiers were also combined into ensemble schemes for further improvement of the system's classification accuracy. Following optimization, the final system was designed around the Probabilistic Neural Network (PNN) classifier on the GPU card (GeForce 580GTX) using the CUDA programming framework and the C++ programming language. The use of such state-of-the-art technology renders the system capable of redesigning itself on site once additional verified US and DM data are collected. A mixture of US and DM features optimized performance, with over 90% accuracy in correctly classifying the lesions.
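
    A PNN is a simple kernel-density classifier: one Gaussian kernel per training pattern, with the probe assigned to the class of highest mean kernel response. Each kernel evaluation is independent, which is why the method parallelizes so naturally on a GPU. A minimal numpy sketch (function name and smoothing parameter are our assumptions):

```python
import numpy as np

def pnn_classify(train_X, train_y, probe, sigma=0.5):
    """Probabilistic Neural Network: class score is the mean Gaussian
    kernel response of the probe against that class's training patterns;
    the probe takes the class with the highest estimated density."""
    d2 = ((train_X - probe) ** 2).sum(axis=1)      # squared distances
    k = np.exp(-d2 / (2 * sigma ** 2))             # one kernel per pattern
    classes = np.unique(train_y)
    scores = [k[train_y == c].mean() for c in classes]
    return classes[int(np.argmax(scores))]
```

    Because adding new verified cases just adds kernels, the classifier can "redesign itself on site" without a retraining phase, as the abstract notes.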

  18. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  19. An Evaluation and Comparison of Several Measures of Image Quality for Television Displays

    DTIC Science & Technology

    1979-01-01

    vehicles, buildings, or faces, or they may be artificial such as tri-bar patterns, rectangles, or sine waves. The typical objective image quality assessment...Snyder (1974b) was able to obtain very good correlations with reaction time and correct responses for a face recognition task. Display quality was varied...recognition versus log JNDA for the target recognition study of Chapter 4, 5) graph of angle subtended by target at recognition versus log JNDA for the

  20. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    NASA Astrophysics Data System (ADS)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper, a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by pattern projection into the tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. This way, the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measurements of the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require high memory and computational resources. However, recent achievements in the technology of multi-core microprocessors and graphics cards allow real-time operation of the multidimensional methods, as is shown and analyzed in this paper based on real examples of object detection in digital images.
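    The subspace-projection idea can be sketched in a few lines: unfold the training tensor along its pixel modes, take leading singular vectors as factor matrices, and score a test image by its projection residual. This is a generic HOSVD sketch on synthetic low-rank data, not the paper's Extended Structural Tensor pipeline:

    ```python
    import numpy as np

    def mode_unfold(T, mode):
        """Matricize a 3-way tensor along the given mode."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd_subspace(train_imgs, rank):
        """HOSVD-style factors: leading left singular vectors of the two
        pixel-mode unfoldings of the (h, w, n_samples) training tensor."""
        T = np.stack(train_imgs, axis=2)
        U1 = np.linalg.svd(mode_unfold(T, 0), full_matrices=False)[0][:, :rank]
        U2 = np.linalg.svd(mode_unfold(T, 1), full_matrices=False)[0][:, :rank]
        return U1, U2

    def subspace_distance(x, U1, U2):
        """Residual after projecting a test image into the tensor subspace;
        a small residual indicates a match with the training class."""
        core = U1.T @ x @ U2
        return np.linalg.norm(x - U1 @ core @ U2.T)

    rng = np.random.default_rng(0)
    base = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # low-rank prototype
    train = [base + 0.01 * rng.standard_normal((8, 8)) for _ in range(5)]
    U1, U2 = hosvd_subspace(train, rank=3)
    d_same = subspace_distance(base, U1, U2)
    d_other = subspace_distance(rng.standard_normal((8, 8)), U1, U2)
    ```

    A prototype-like image projects almost entirely into the learned subspace, while an unrelated image leaves a large residual, which is the distance measurement the recognition step relies on.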

  1. Conditional random fields for pattern recognition applied to structured data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Skurikhin, Alexei

    Pattern recognition uses measurements from an input domain, X, to predict labels from an output domain, Y. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
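    The output-domain structure described here can be made concrete with a tiny chain CRF over three hypothetical pixel patches, where P(Y|X) is computed exactly by enumeration (feasible only at this scale; the numbers below are illustrative, not from the paper):

    ```python
    import numpy as np
    from itertools import product

    # Labels: 0 = "natural", 1 = "manmade".  Unary scores come from each
    # patch's own features; the pairwise term rewards agreement between
    # adjacent patches (the structure in the output domain).
    unary = np.array([[0.0, 2.0],   # patch 0: strong "manmade" evidence
                      [0.1, 0.0],   # patch 1: weak "natural" evidence
                      [0.0, 2.0]])  # patch 2: strong "manmade" evidence
    w_pair = 1.5

    def marginal_manmade(unary, w_pair):
        """Exact P(y_1 = manmade | X) for a 3-patch chain CRF."""
        n = unary.shape[0]
        weights = {}
        for y in product([0, 1], repeat=n):
            s = sum(unary[i, y[i]] for i in range(n))
            s += sum(w_pair for i in range(n - 1) if y[i] == y[i + 1])
            weights[y] = np.exp(s)
        Z = sum(weights.values())          # partition function
        return sum(v for y, v in weights.items() if y[1] == 1) / Z

    p_with_context = marginal_manmade(unary, w_pair)
    p_unary_only = np.exp(unary[1, 1]) / np.exp(unary[1]).sum()
    ```

    On its own features the middle patch leans "natural" (probability of "manmade" just under 0.5), but conditioning on its "manmade" neighbors flips the decision, which is exactly the gain from modeling P(Y|X) over structured outputs.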

  2. Conditional random fields for pattern recognition applied to structured data

    DOE PAGES

    Burr, Tom; Skurikhin, Alexei

    2015-07-14

    Pattern recognition uses measurements from an input domain, X, to predict labels from an output domain, Y. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.

  3. Automatic anatomy recognition on CT images with pathology

    NASA Astrophysics Data System (ADS)

    Huang, Lidong; Udupa, Jayaram K.; Tong, Yubing; Odhner, Dewey; Torigian, Drew A.

    2016-03-01

    Body-wide anatomy recognition on CT images with pathology becomes crucial for quantifying body-wide disease burden. This, however, is a challenging problem because various diseases result in diverse abnormalities of objects, such as altered shape and intensity patterns. We previously developed an automatic anatomy recognition (AAR) system [1] whose applicability was demonstrated on near normal diagnostic CT images in different body regions on 35 organs. The aim of this paper is to investigate strategies for adapting the previous AAR system to diagnostic CT images of patients with various pathologies as a first step toward automated body-wide disease quantification. The AAR approach consists of three main steps - model building, object recognition, and object delineation. In this paper, within the broader AAR framework, we describe a new strategy for object recognition to handle abnormal images. In the model building stage, an optimal threshold interval is learned from near-normal training images for each object. This threshold is then optimally tuned to the pathological manifestation of the object in the test image. Recognition is performed following a hierarchical representation of the objects. Experimental results for the abdominal body region based on 50 near-normal images used for model building and 20 abnormal images used for object recognition show that object localization accuracy within 2 voxels for liver and spleen and 3 voxels for kidney can be achieved with the new strategy.

  4. Enhanced iris recognition method based on multi-unit iris images

    NASA Astrophysics Data System (ADS)

    Shin, Kwang Yong; Kim, Yeong Gon; Park, Kang Ryoung

    2013-04-01

    For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When obtaining an iris image, the iris image is frequently rotated because of the user's head roll toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods use shifting of the iris feature codes to perform the matching. However, this increases the computational complexity and level of false acceptance error. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns in the left and right eyes of the same person are different, and they are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain information on the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by the score fusion of the left and right irises. 
Experimental results on the iris open database of low-resolution images showed that the averaged equal error rate of iris recognition using the proposed method was 4.3006%, which is lower than that of other methods.
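    The contrast between conventional exhaustive bit-shifting and the adaptive single shift can be sketched on synthetic binary iris codes (random bits standing in for real iris features; the geometry that yields the measured roll angle is omitted):

    ```python
    import numpy as np

    def hamming_distance(a, b):
        """Fraction of disagreeing bits between two iris codes."""
        return float(np.mean(a != b))

    def match_all_shifts(probe, enrolled, max_shift):
        """Conventional matching: try every circular bit shift and keep
        the minimum Hamming distance (slower, raises false accepts)."""
        return min(hamming_distance(np.roll(probe, s), enrolled)
                   for s in range(-max_shift, max_shift + 1))

    def match_adaptive(probe, enrolled, shift):
        """Adaptive bit-shifting: one shift derived from the head-roll
        angle measured between the two pupil centers (sketched here as
        a given shift, not the authors' exact geometry)."""
        return hamming_distance(np.roll(probe, shift), enrolled)

    rng = np.random.default_rng(1)
    enrolled = rng.integers(0, 2, 256)
    probe = np.roll(enrolled, 3)         # same iris, head rolled by 3 positions
    hd_unshifted = hamming_distance(probe, enrolled)
    hd_search = match_all_shifts(probe, enrolled, max_shift=8)
    hd_single = match_adaptive(probe, enrolled, shift=-3)
    ```

    Both strategies recover the zero-distance match that the unshifted comparison misses; the adaptive version does it with a single comparison instead of seventeen, which is the processing-time saving claimed above.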

  5. Human activities recognition by head movement using partial recurrent neural network

    NASA Astrophysics Data System (ADS)

    Tan, Henry C. C.; Jia, Kui; De Silva, Liyanage C.

    2003-06-01

    Traditionally, human activities recognition has been achieved mainly by the statistical pattern recognition methods or the Hidden Markov Model (HMM). In this paper, we propose a novel use of the connectionist approach for the recognition of ten simple human activities: walking, sitting down, getting up, squatting down and standing up, in both lateral and frontal views, in an office environment. By means of tracking the head movement of the subjects over consecutive frames from a database of different color image sequences, and incorporating the Elman model of the partial recurrent neural network (RNN) that learns the sequential patterns of relative change of the head location in the images, the proposed system is able to robustly classify all the ten activities performed by unseen subjects from both sexes, of different race and physique, with a recognition rate as high as 92.5%. This demonstrates the potential of employing partial RNN to recognize complex activities in the increasingly popular human-activities-based applications.

  6. Gross feature recognition of Anatomical Images based on Atlas grid (GAIA): Incorporating the local discrepancy between an atlas and a target image to capture the features of anatomic brain MRI.

    PubMed

    Qin, Yuan-Yuan; Hsu, Johnny T; Yoshida, Shoko; Faria, Andreia V; Oishi, Kumiko; Unschuld, Paul G; Redgrave, Graham W; Ying, Sarah H; Ross, Christopher A; van Zijl, Peter C M; Hillis, Argye E; Albert, Marilyn S; Lyketsos, Constantine G; Miller, Michael I; Mori, Susumu; Oishi, Kenichi

    2013-01-01

    We aimed to develop a new method to convert T1-weighted brain MRIs to feature vectors, which could be used for content-based image retrieval (CBIR). To overcome the wide range of anatomical variability in clinical cases and the inconsistency of imaging protocols, we introduced the Gross feature recognition of Anatomical Images based on Atlas grid (GAIA), in which the local intensity alteration, caused by pathological (e.g., ischemia) or physiological (development and aging) intensity changes, as well as by atlas-image misregistration, is used to capture the anatomical features of target images. As a proof-of-concept, the GAIA was applied for pattern recognition of the neuroanatomical features of multiple stages of Alzheimer's disease, Huntington's disease, spinocerebellar ataxia type 6, and four subtypes of primary progressive aphasia. For each of these diseases, feature vectors based on a training dataset were applied to a test dataset to evaluate the accuracy of pattern recognition. The feature vectors extracted from the training dataset agreed well with the known pathological hallmarks of the selected neurodegenerative diseases. Overall, discriminant scores of the test images accurately categorized these test images to the correct disease categories. Images without typical disease-related anatomical features were misclassified. The proposed method is a promising method for image feature extraction based on disease-related anatomical features, which should enable users to submit a patient image and search past clinical cases with similar anatomical phenotypes.

  7. Recognition of complex human behaviours using 3D imaging for intelligent surveillance applications

    NASA Astrophysics Data System (ADS)

    Yao, Bo; Lepley, Jason J.; Peall, Robert; Butler, Michael; Hagras, Hani

    2016-10-01

    We introduce a system that exploits 3-D imaging technology as an enabler for the robust recognition of the human form. We combine this with pose and feature recognition capabilities from which we can recognise high-level human behaviours. We propose a hierarchical methodology for the recognition of complex human behaviours, based on the identification of a set of atomic behaviours, individual and sequential poses (e.g. standing, sitting, walking, drinking and eating) that provides a framework from which we adopt time-based machine learning techniques to recognise complex behaviour patterns.

  8. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acciarri, R.; Adams, C.; An, R.

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  9. The Pandora multi-algorithm approach to automated pattern recognition of cosmic-ray muon and neutrino events in the MicroBooNE detector

    DOE PAGES

    Acciarri, R.; Adams, C.; An, R.; ...

    2018-01-29

    The development and operation of Liquid-Argon Time-Projection Chambers for neutrino physics has created a need for new approaches to pattern recognition in order to fully exploit the imaging capabilities offered by this technology. Whereas the human brain can excel at identifying features in the recorded events, it is a significant challenge to develop an automated, algorithmic solution. The Pandora Software Development Kit provides functionality to aid the design and implementation of pattern-recognition algorithms. It promotes the use of a multi-algorithm approach to pattern recognition, in which individual algorithms each address a specific task in a particular topology. Many tens of algorithms then carefully build up a picture of the event and, together, provide a robust automated pattern-recognition solution. This paper describes details of the chain of over one hundred Pandora algorithms and tools used to reconstruct cosmic-ray muon and neutrino events in the MicroBooNE detector. Metrics that assess the current pattern-recognition performance are presented for simulated MicroBooNE events, using a selection of final-state event topologies.

  10. Large-memory real-time multichannel multiplexed pattern recognition

    NASA Technical Reports Server (NTRS)

    Gregory, D. A.; Liu, H. K.

    1984-01-01

    The principle and experimental design of a real-time multichannel multiplexed optical pattern recognition system via use of a 25-focus dichromated gelatin holographic lens (hololens) are described. Each of the 25 foci of the hololens may have a storage and matched filtering capability approaching that of a single-lens correlator. If the space-bandwidth product of an input image is limited, as is true in most practical cases, the 25-focus hololens system has 25 times the capability of a single lens. Experimental results have shown that the interfilter noise is not serious. The system has already demonstrated the storage and recognition of over 70 matched filters - which is a larger capacity than any optical pattern recognition system reported to date.

  11. Finger crease pattern recognition using Legendre moments and principal component analysis

    NASA Astrophysics Data System (ADS)

    Luo, Rongfang; Lin, Tusheng

    2007-03-01

    The finger joint lines, defined as finger creases, and their distribution can identify a person. In this paper, we propose a new finger crease pattern recognition method based on Legendre moments and principal component analysis (PCA). After obtaining the region of interest (ROI) for each finger image in the pre-processing stage, Legendre moments under the Radon transform are applied to construct a moment feature matrix from the ROI, which greatly decreases the dimensionality of the ROI and can represent principal components of the finger creases quite well. Then, an approach to finger crease pattern recognition is designed based on the Karhunen-Loeve (K-L) transform. The method applies PCA to the moment feature matrix rather than the original image matrix to achieve the feature vector. The proposed method has been tested on a database of 824 images from 103 individuals using the nearest neighbor classifier. An accuracy of up to 98.584% was obtained when using 4 samples per class for training. The experimental results demonstrate that our proposed approach is feasible and effective in biometrics.
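    The Legendre-moment feature extraction can be sketched directly with NumPy's Legendre evaluator; the Radon transform and the PCA/K-L stage are omitted, and the random images below are stand-ins for real finger-crease ROIs:

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_moments(img, order):
        """Legendre moments up to `order` of an image mapped onto
        [-1, 1]^2, returned as an (order+1) x (order+1) matrix."""
        h, w = img.shape
        x = np.linspace(-1, 1, w)
        y = np.linspace(-1, 1, h)
        # P_p evaluated on the grid: coefficient vector [0,...,0,1] picks P_p
        Px = np.stack([legendre.legval(x, [0] * p + [1]) for p in range(order + 1)])
        Py = np.stack([legendre.legval(y, [0] * q + [1]) for q in range(order + 1)])
        norm = np.outer(2 * np.arange(order + 1) + 1,
                        2 * np.arange(order + 1) + 1) / 4.0
        return norm * (Py @ img @ Px.T) * (2.0 / h) * (2.0 / w)

    rng = np.random.default_rng(2)
    crease_a = rng.random((16, 16))      # stand-in for one finger's ROI
    crease_b = rng.random((16, 16))      # stand-in for another finger's ROI
    fa = legendre_moments(crease_a, 3).ravel()
    fb = legendre_moments(crease_b, 3).ravel()
    fa_noisy = legendre_moments(crease_a + 0.01 * rng.random((16, 16)), 3).ravel()
    ```

    A 16-dimensional moment vector replaces the 256-pixel ROI, and a slightly perturbed image stays closest to its own moment vector, which is what makes nearest-neighbor matching on the reduced features viable.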

  12. Pattern-Recognition Processor Using Holographic Photopolymer

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Cammack, Kevin

    2006-01-01

    The proposed joint-transform optical correlator (JTOC) would be capable of operating as a real-time pattern-recognition processor. The key correlation-filter reading/writing medium of this JTOC would be an updateable holographic photopolymer. The high-resolution, high-speed characteristics of this photopolymer would enable pattern-recognition processing to occur at a speed three orders of magnitude greater than that of state-of-the-art digital pattern-recognition processors. There are many potential applications in biometric personal identification (e.g., using images of fingerprints and faces) and nondestructive industrial inspection. In order to appreciate the advantages of the proposed JTOC, it is necessary to understand the principle of operation of a conventional JTOC. In a conventional JTOC (shown in the upper part of the figure), a collimated laser beam passes through two side-by-side spatial light modulators (SLMs). One SLM displays a real-time input image to be recognized. The other SLM displays a reference image from a digital memory. A Fourier-transform lens is placed at its focal distance from the SLM plane, and a charge-coupled device (CCD) image detector is placed at the back focal plane of the lens for use as a square-law recorder. Processing takes place in two stages. In the first stage, the CCD records the interference pattern between the Fourier transforms of the input and reference images, and the pattern is then digitized and saved in a buffer memory. In the second stage, the reference SLM is turned off and the interference pattern is fed back to the input SLM. The interference pattern thus becomes Fourier-transformed, yielding at the CCD an image representing the joint-transform correlation between the input and reference images. This image contains a sharp correlation peak when the input and reference images are matched. 
The drawbacks of a conventional JTOC are the following: The CCD has low spatial resolution and is not an ideal square-law detector for the purpose of holographic recording of interference fringes. A typical state-of-the-art CCD has a pixel-pitch limited resolution of about 100 lines/mm. In contrast, the holographic photopolymer to be used in the proposed JTOC offers a resolution > 2,000 lines/mm. In addition to being disadvantageous in itself, the low resolution of the CCD causes overlap of a DC term and the desired correlation term in the output image. This overlap severely limits the correlation signal-to-noise ratio. The two-stage nature of the process limits the achievable throughput rate. A further limit is imposed by the low frame rate (typical video rates) of low- and medium-cost commercial CCDs.
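    The two-stage process described above can be simulated digitally: a square-law recording of the joint power spectrum followed by a second Fourier transform yields the correlation plane, with the cross-correlation term displaced from the DC term by the separation between the two displayed images. A minimal NumPy sketch with random test images (illustrative only, ignoring optical scaling factors):

    ```python
    import numpy as np

    def jtc_correlation_plane(scene, reference):
        """Digital JTOC simulation: stage 1 records the joint power
        spectrum (square-law / CCD); stage 2 Fourier-transforms it to
        produce the correlation plane."""
        h, w = scene.shape
        joint = np.zeros((h, 4 * w))
        joint[:, :w] = scene             # input SLM
        joint[:, 3 * w:] = reference     # reference SLM, displaced
        jps = np.abs(np.fft.fft2(joint)) ** 2   # stage 1: recorded intensity
        return np.abs(np.fft.fft2(jps))         # stage 2: correlation plane

    rng = np.random.default_rng(3)
    ref = rng.random((16, 16))
    plane_match = jtc_correlation_plane(ref, ref)
    plane_miss = jtc_correlation_plane(rng.random((16, 16)), ref)
    # With this layout the cross-correlation term lands 16 columns from DC.
    ```

    Comparing the cross-term location in the two planes shows the matched input producing the stronger correlation peak; the DC term sitting near the origin, and its potential overlap with the cross term, is exactly the resolution problem the text attributes to the CCD.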

  13. CNNs flag recognition preprocessing scheme based on gray scale stretching and local binary pattern

    NASA Astrophysics Data System (ADS)

    Gong, Qian; Qu, Zhiyi; Hao, Kun

    2017-07-01

    Flag recognition is a rather special problem in image recognition because of a flag's non-rigid features and its location, scale, and rotation characteristics. The location change can be handled well by the deep learning algorithm Convolutional Neural Networks (CNNs), but the scale and rotation changes are quite a challenge for CNNs. Since the local binary pattern (LBP) has good rotation and gray-scale invariance, it is combined with gray-scale stretching as a CNNs preprocessing stage, which not only significantly improves the efficiency of flag recognition but also allows the recognition effect to be evaluated through ROC, accuracy, MSE, and a quality factor.
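    The two preprocessing operations are both a few lines of NumPy; the sketch below (a generic basic LBP, not necessarily the paper's exact variant) also demonstrates the gray-scale invariance that motivates the combination, since a monotone change of gray levels leaves the LBP codes untouched:

    ```python
    import numpy as np

    def gray_stretch(img):
        """Linear gray-scale stretching to [0, 255] (assumes a
        non-constant image)."""
        lo, hi = img.min(), img.max()
        return (img - lo) * 255.0 / (hi - lo)

    def lbp(img):
        """Basic 8-neighbour local binary pattern for interior pixels."""
        c = img[1:-1, 1:-1]
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        out = np.zeros(c.shape, dtype=np.uint8)
        for bit, (dy, dx) in enumerate(shifts):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            out |= (nb >= c).astype(np.uint8) << np.uint8(bit)
        return out

    flag = np.arange(25, dtype=float).reshape(5, 5)      # toy gradient "flag"
    codes = lbp(gray_stretch(flag))
    codes_shifted_gray = lbp(gray_stretch(flag * 0.5 + 10.0))  # monotone change
    ```

    Because LBP only compares each pixel with its neighbours, any monotone gray transform (including the stretching itself) yields identical codes, so the CNN sees a representation stabilized against illumination change.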

  14. Sequential Learning and Recognition of Comprehensive Behavioral Patterns Based on Flow of People

    NASA Astrophysics Data System (ADS)

    Gibo, Tatsuya; Aoki, Shigeki; Miyamoto, Takao; Iwata, Motoi; Shiozaki, Akira

    Recently, surveillance cameras have been set up everywhere, for example, in streets and public places, in order to detect irregular situations. In the existing surveillance systems, as only a handful of surveillance agents watch a large number of images acquired from surveillance cameras, there is a possibility that they may miss important scenes such as accidents or abnormal incidents. Therefore, we propose a method for sequential learning and the recognition of comprehensive behavioral patterns in crowded places. First, we comprehensively extract a flow of people from input images by using optical flow. Second, we extract behavioral patterns on the basis of change-point detection of the flow of people. Finally, in order to recognize an observed behavioral pattern, we draw a comparison between the behavioral pattern and previous behavioral patterns in the database. We verify the effectiveness of our approach by placing a surveillance camera on a campus.

  15. Image Description with Local Patterns: An Application to Face Recognition

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Ahrary, Alireza; Kamata, Sei-Ichiro

    In this paper, we propose a novel approach for presenting the local features of a digital image using 1D Local Patterns by Multi-Scans (1DLPMS). We also consider the extensions and simplifications of the proposed approach for facial image analysis. The proposed approach consists of three steps. At the first step, the gray values of pixels in the image are represented as a vector giving the local neighborhood intensity distributions of the pixels. Then, multi-scans are applied to capture different spatial information on the image with the advantage of less computation than other traditional ways, such as Local Binary Patterns (LBP). The second step is encoding the local features based on different encoding rules using 1D local patterns. This transformation is expected to be less sensitive to illumination variations besides preserving the appearance of images embedded in the original gray scale. At the final step, Grouped 1D Local Patterns by Multi-Scans (G1DLPMS) is applied to make the proposed approach computationally simpler and easy to extend. Next, we further formulate a boosted algorithm to extract the most discriminant local features. The evaluation results demonstrate that the proposed approach outperforms the conventional approaches in terms of accuracy in applications of face recognition, gender estimation and facial expression.

  16. Detection and recognition of analytes based on their crystallization patterns

    DOEpatents

    Morozov, Victor [Manassas, VA; Bailey, Charles L [Cross Junction, VA; Vsevolodov, Nikolai N [Kensington, MD; Elliott, Adam [Manassas, VA

    2008-05-06

    The invention contemplates a method for recognition of proteins and other biological molecules by imaging the morphology, size, and distribution of crystalline and amorphous dry residues in droplets (further referred to as the "crystallization pattern") containing a predetermined amount of certain crystal-forming organic compounds (reporters) to which the protein to be analyzed is added. It has been shown that changes in the crystallization patterns of a number of amino acids can be used as a "signature" of the added protein. It was also found that both the character of changes in the crystallization pattern and the fact of such changes can be used as recognition elements in the analysis of protein molecules.

  17. An improved architecture for video rate image transformations

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.; Juday, Richard D.

    1989-01-01

    Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA Programmable Remapper can perform a wide variety of geometric transforms at full video rate. An architecture is proposed that extends its abilities and alleviates many of the first version's shortcomings. The need for the improvements is discussed in the context of the initial Programmable Remapper and the benefits and limitations it has delivered. The implementation and capabilities of the proposed architecture are discussed.

  18. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    MATHPAC image-analysis library is collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  19. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.

  20. Implementation theory of distortion-invariant pattern recognition for optical and digital signal processing systems

    NASA Astrophysics Data System (ADS)

    Lhamon, Michael Earl

    A pattern recognition system which uses complex correlation filter banks requires proportionally more computational effort than single real-valued filters. This introduces an increased computation burden, but also a higher level of parallelism that common computing platforms fail to exploit. As a result, we consider algorithm mapping to both optical and digital processors. For digital implementation, we develop computationally efficient pattern recognition algorithms, referred to as vector inner product operators, that require less computational effort than traditional fast Fourier methods. These algorithms do not need correlation and they map readily onto parallel digital architectures, which implies new architectures for optical processors. These filters exploit circulant-symmetric matrix structures of the training set data representing a variety of distortions. By using the same mathematical basis as with the vector inner product operations, we are able to extend the capabilities of more traditional correlation filtering to what we refer to as "Super Images". These "Super Images" are used to morphologically transform a complicated input scene into a predetermined dot pattern. The orientation of the dot pattern is related to the rotational distortion of the object of interest. The optical implementation of "Super Images" yields the feature reduction necessary for using other techniques, such as artificial neural networks. We propose a parallel digital signal processor architecture based on specific pattern recognition algorithms but general enough to be applicable to other similar problems. Such an architecture is classified as a data flow architecture. Instead of mapping an algorithm to an architecture, we propose mapping the DSP architecture to a class of pattern recognition algorithms. Today's optical processing systems have difficulties implementing full complex filter structures. 
    Typically, optical systems (like 4f correlators) are limited to phase-only implementation, with lower detection performance than full complex electronic systems. Our study includes pseudo-random pixel encoding techniques for approximating full complex filtering. Optical filter-bank implementation is possible and has the advantage of time averaging the entire filter bank at real-time rates. Time-averaged optical filtering is computationally comparable to billions of digital operations per second. For this reason, we believe future trends in high-speed pattern recognition will involve hybrid architectures of both optical and DSP elements.

  1. Speaker-independent phoneme recognition with a binaural auditory image model

    NASA Astrophysics Data System (ADS)

    Francis, Keith Ivan

    1997-09-01

    This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence-based binaural phoneme recognition. This method overcomes the smearing of fine speech detail typical of correlation-based methods. Nevertheless, coincidence is able to measure the similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.

  2. Image databases: Problems and perspectives

    NASA Technical Reports Server (NTRS)

    Gudivada, V. Naidu

    1989-01-01

    With the increasing number of computer graphics, image processing, and pattern recognition applications, economical storage, efficient representation and manipulation, and powerful and flexible query languages for retrieval of image data are of paramount importance. These and related issues pertinent to image databases are examined.

  3. Effect of cataract surgery and pupil dilation on iris pattern recognition for personal authentication.

    PubMed

    Dhir, L; Habib, N E; Monro, D M; Rakshit, S

    2010-06-01

    The purpose of this study was to investigate the effect of cataract surgery and pupil dilation on iris pattern recognition for personal authentication. Prospective non-comparative cohort study. Images of 15 subjects were captured before (enrolment) and 5, 10, and 15 min after instillation of mydriatics prior to routine cataract surgery; further images were captured 2 weeks after surgery. Enrolled and test images (after pupillary dilation and after cataract surgery) were segmented to extract the iris, which was then unwrapped onto a rectangular format for normalization, and a novel method using the Discrete Cosine Transform was applied to encode the image into binary bits. The numerical difference between two iris codes (the Hamming distance, HD) was calculated. The HD between identification and enrolment codes was used as a score and was compared with a confidence threshold for the specific equipment, giving a match or non-match result. The Correct Recognition Rate (CRR) and Equal Error Rate (EER) were calculated to analyse overall system performance. After cataract surgery, perfect identification and verification were achieved, with zero false acceptance rate, zero false rejection rate, and zero EER. After pupillary dilation, non-elastic deformation occurs, and a CRR of 86.67% and an EER of 9.33% were obtained. Conventional circle-based localization methods are inadequate, and matching reliability decreases considerably with increasing pupillary dilation. Cataract surgery has no effect on iris pattern recognition, whereas pupil dilation may be used to defeat an iris-based authentication system.
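    The score-to-decision step described above can be sketched in a few lines. This is an illustrative toy, not the study's DCT encoder: the short bit codes and the 0.33 threshold are assumed values for demonstration only.

```python
# Sketch: Hamming distance between binary iris codes, and the match /
# non-match decision against an equipment-specific confidence threshold.
# Codes and threshold are illustrative assumptions, not from the study.

def hamming_distance(code_a, code_b):
    """Fraction of bits that differ between two equal-length bit lists."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must have equal length")
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def is_match(enrolled, probe, threshold=0.33):
    """Match/non-match decision: HD below threshold means a match."""
    return hamming_distance(enrolled, probe) < threshold

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 0]   # one bit flipped by dilation/noise
print(hamming_distance(enrolled, probe))  # → 0.125
print(is_match(enrolled, probe))          # → True
```

In a real system the codes are thousands of bits long, so a single flipped bit barely moves the score; the dilation result above corresponds to many bits flipping at once.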

  4. Finger vein recognition based on personalized weight maps.

    PubMed

    Yang, Gongping; Xiao, Rongyang; Yin, Yilong; Yang, Lu

    2013-09-10

    Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods have been thoroughly studied in order to cope with the difficulties of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of the feature codes derived from different images of various individuals as equally important and assign the same weight value to them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs), in which different bits have different weight values according to their stability across a number of training samples from an individual. First, we present the concept of the PWM, and then propose the finger vein recognition framework, which mainly consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition.
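    The core idea above can be sketched as follows. The weighting rule (per-bit agreement with the majority across an individual's training codes) and all names are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch of a personalized weight map (PWM): each bit's weight reflects its
# stability across one individual's binary training codes, and matching uses
# a weighted Hamming distance so stable bits count more. Assumed scheme.

def personalized_weight_map(training_codes):
    """Per-bit majority template and stability weights for one individual."""
    n = len(training_codes)
    template, weights = [], []
    for bits in zip(*training_codes):       # iterate bit positions
        ones = sum(bits)
        majority = 1 if ones * 2 >= n else 0
        agree = ones if majority else n - ones
        template.append(majority)
        weights.append(agree / n)           # 1.0 = perfectly stable bit
    return template, weights

def weighted_distance(template, weights, probe):
    """Weighted Hamming distance: mismatches on stable bits cost more."""
    mismatch = sum(w for t, w, p in zip(template, weights, probe) if t != p)
    return mismatch / sum(weights)

codes = [[1, 0, 1, 1], [1, 0, 0, 1], [1, 0, 1, 1]]
template, weights = personalized_weight_map(codes)
print(template)  # → [1, 0, 1, 1]
```

Bit 2 disagrees in one of three training samples, so its weight drops to 2/3 while the fully stable bits keep weight 1.0.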

  5. Finger Vein Recognition Based on Personalized Weight Maps

    PubMed Central

    Yang, Gongping; Xiao, Rongyang; Yin, Yilong; Yang, Lu

    2013-01-01

    Finger vein recognition is a promising biometric recognition technology, which verifies identities via the vein patterns in the fingers. Binary pattern based methods have been thoroughly studied in order to cope with the difficulties of extracting the blood vessel network. However, current binary pattern based finger vein matching methods treat every bit of the feature codes derived from different images of various individuals as equally important and assign the same weight value to them. In this paper, we propose a finger vein recognition method based on personalized weight maps (PWMs), in which different bits have different weight values according to their stability across a number of training samples from an individual. First, we present the concept of the PWM, and then propose the finger vein recognition framework, which mainly consists of preprocessing, feature extraction, and matching. Finally, we design extensive experiments to evaluate the effectiveness of our proposal. Experimental results show that PWM achieves not only better performance but also high robustness and reliability. In addition, PWM can be used as a general framework for binary pattern based recognition. PMID:24025556

  6. Recognition of pigment network pattern in dermoscopy images based on fuzzy classification of pixels.

    PubMed

    Garcia-Arroyo, Jose Luis; Garcia-Zapirain, Begonya

    2018-01-01

    One of the most relevant dermoscopic patterns is the pigment network. An innovative pattern recognition method is presented for its detection in dermoscopy images. It consists of two steps. In the first, by means of a supervised machine learning process and after extracting different colour and texture features, a fuzzy classification of pixels into the three categories present in the pattern's definition ("net", "hole" and "other") is carried out. This enables the three corresponding fuzzy sets to be created and, as a result, the three probability images that map them out are generated. In the second step, the pigment network pattern is characterised through a parameterisation process derived from the system specification and the subsequent extraction of different features calculated from combinations of image masks extracted from the probability images, corresponding to the alpha-cuts obtained from the fuzzy sets. The method was tested on a database of 875 images (by far the largest used in the state of the art to detect pigment network) extracted from a public Atlas of Dermoscopy, obtaining an AUC of 0.912 and 88% accuracy, with 90.71% sensitivity and 83.44% specificity. The main contribution of this method is the design of the algorithm itself, which is highly innovative and could also be used to deal with other pattern recognition problems of a similar nature. Other contributions are: 1. Good performance in discriminating between the pattern and disturbing artefacts (which means that no prior preprocessing is required in this method) and between the pattern and other dermoscopic patterns; 2.
It puts forward a new methodological approach for work of this kind, introducing the system specification as a required step prior to algorithm design and development, with this specification serving as the basis for the required parameterisation of the algorithm (in the form of configurable parameters with their value ranges and fixed threshold values) and for the subsequent experiments. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Intraspecific Variation in Learning: Worker Wasps Are Less Able to Learn and Remember Individual Conspecific Faces than Queen Wasps.

    PubMed

    Tibbetts, Elizabeth A; Injaian, Allison; Sheehan, Michael J; Desjardins, Nicole

    2018-05-01

    Research on individual recognition often focuses on species-typical recognition abilities rather than assessing intraspecific variation in recognition. As individual recognition is cognitively costly, the capacity for recognition may vary within species. We test how individual face recognition differs between nest-founding queens (foundresses) and workers in Polistes fuscatus paper wasps. Individual recognition mediates dominance interactions among foundresses. Three previously published experiments have shown that foundresses (1) benefit by advertising their identity with distinctive facial patterns that facilitate recognition, (2) have robust memories of individuals, and (3) rapidly learn to distinguish between face images. Like foundresses, workers have variable facial patterns and are capable of individual recognition. However, worker dominance interactions are muted. Therefore, individual recognition may be less important for workers than for foundresses. We find that (1) workers with unique faces receive amounts of aggression similar to those of workers with common faces, indicating that wasps do not benefit from advertising their individual identity with a unique appearance; (2) workers lack robust memories for individuals, as they cannot remember unique conspecifics after a 6-day separation; and (3) workers learn to distinguish between facial images more slowly than foundresses during training. The recognition differences between foundresses and workers are notable because Polistes lack discrete castes; foundresses and workers are morphologically similar, and workers can take over as queens. Overall, social benefits and receiver capacity for individual recognition are surprisingly plastic.

  8. Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition.

    PubMed

    Ding, Changxing; Choi, Jonghyun; Tao, Dacheng; Davis, Larry S

    2016-03-01

    To perform unconstrained face recognition robust to variations in illumination, pose, and expression, this paper presents a new scheme to extract "Multi-Directional Multi-Level Dual-Cross Patterns" (MDML-DCPs) from face images. Specifically, the MDML-DCPs scheme exploits the first derivative of the Gaussian operator to reduce the impact of differences in illumination and then computes the DCP feature at both the holistic and component levels. DCP is a novel face image descriptor inspired by the unique textural structure of human faces. It is computationally efficient and only doubles the cost of computing local binary patterns, yet is extremely robust to pose and expression variations. MDML-DCPs comprehensively yet efficiently encodes the invariant characteristics of a face image from multiple levels into patterns that are highly discriminative of inter-personal differences but robust to intra-personal variations. Experimental results on the FERET, CAS-PEAL-R1, FRGC 2.0, and LFW databases indicate that DCP outperforms state-of-the-art local descriptors (e.g., LBP, LTP, LPQ, POEM, tLBP, and LGXP) for both face identification and face verification tasks. More impressively, the best performance is achieved on the challenging LFW and FRGC 2.0 databases by deploying MDML-DCPs in a simple recognition scheme.
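    For reference, here is a minimal sketch of the baseline local binary pattern (LBP) code that DCP's cost is compared against: each pixel is encoded by thresholding its eight neighbours against the centre value. DCP itself samples along dual-cross directions at two radii; only plain LBP is shown, and the bit ordering is an illustrative choice.

```python
# Sketch of the basic 3x3 LBP operator: threshold the 8 neighbours of an
# interior pixel against the centre and pack the results into one byte.
# Bit order (clockwise from top-left) is a conventional, assumed choice.

def lbp_code(img, y, x):
    """8-bit LBP code for an interior pixel of a 2-D intensity grid."""
    centre = img[y][x]
    neighbours = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                  img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                  img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:          # neighbour at least as bright as centre
            code |= 1 << bit
    return code

img = [[5, 9, 1],
       [3, 4, 7],
       [6, 2, 8]]
print(lbp_code(img, 1, 1))  # → 91
```

A face descriptor is then built by histogramming these codes over image regions; DCP replaces the 3x3 sampling with two cross-shaped samplings, hence roughly double the cost.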

  9. Line-based logo recognition through a web-camera

    NASA Astrophysics Data System (ADS)

    Chen, Xiaolu; Wang, Yangsheng; Feng, Xuetao

    2007-11-01

    Logo recognition has seen considerable development in the document retrieval and shape analysis domains. As human-computer interaction becomes more and more popular, logo recognition through a web camera is a promising technology from an application standpoint. For practical use, however, logo recognition in a real scene is much more difficult than in a clean scene. To cope with this need, we make some improvements to the conventional method. First, moment information is used to calculate the test image's orientation angle, which is then used to normalize the test image. Second, the main structure of the test image, represented by line patterns, is extracted, and a modified Hausdorff distance is employed to match the image against each of the existing templates. The proposed method, which is invariant to scale and rotation, gives good results and can work in real time. The main contribution of this paper is that improvements are introduced into the existing recognition framework that make it perform much better than the original. In addition, we have built a highly successful logo recognition system using our improved method.
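    The matching step above can be sketched with the modified Hausdorff distance of Dubuisson and Jain, which takes the maximum of the two directed mean nearest-neighbour distances between point sets. The tiny point sets below stand in for sampled line structures and are illustrative only.

```python
# Sketch of the modified Hausdorff distance (MHD) between two 2-D point
# sets, here standing in for sampled line patterns of a logo template and
# a test image. Point coordinates are illustrative assumptions.
import math

def _directed(points_a, points_b):
    """Mean distance from each point in A to its nearest point in B."""
    total = 0.0
    for ax, ay in points_a:
        total += min(math.hypot(ax - bx, ay - by) for bx, by in points_b)
    return total / len(points_a)

def modified_hausdorff(points_a, points_b):
    """MHD (Dubuisson-Jain): max of the two directed mean distances."""
    return max(_directed(points_a, points_b), _directed(points_b, points_a))

template = [(0, 0), (0, 1), (1, 0), (1, 1)]
test_img = [(0, 0), (0, 1), (1, 0), (1, 2)]   # one displaced point
print(modified_hausdorff(template, test_img))  # → 0.25
```

Because the mean (rather than the maximum) nearest-neighbour distance is used in each direction, a single outlier point perturbs the score far less than with the classical Hausdorff distance.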

  10. Improving the recognition of fingerprint biometric system using enhanced image fusion

    NASA Astrophysics Data System (ADS)

    Alsharif, Salim; El-Saba, Aed; Stripathi, Reshma

    2010-04-01

    Fingerprint recognition systems have been widely used by financial institutions, law enforcement, border control, and visa issuing, to mention just a few. Biometric identifiers can be counterfeited, but they are considered more reliable and secure than traditional ID cards or personal passwords. Fingerprint pattern fusion improves the performance of a fingerprint recognition system in terms of accuracy and security. This paper presents digital enhancement and fusion approaches that improve the biometric performance of the fingerprint recognition system. It is a two-step approach. In the first step, raw fingerprint images are enhanced using high-frequency-emphasis filtering (HFEF). The second step is a simple linear fusion between the raw images and the HFEF ones. It is shown that the proposed approach improves the verification and identification performance of the fingerprint biometric recognition system, where any improvement is justified using the correlation performance metrics of the matching algorithm.
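    A minimal sketch of the two steps above, under stated assumptions: the paper's HFEF operates in the frequency domain, while here high-frequency emphasis is approximated spatially with a Laplacian sharpening pass; the kernel, gain, and fusion weight are illustrative choices.

```python
# Sketch: (1) emphasize high frequencies (ridge edges) via a Laplacian
# sharpening pass, an assumed spatial stand-in for frequency-domain HFEF;
# (2) linearly fuse the raw and enhanced images pixel by pixel.

def high_freq_emphasis(img, k=1.0):
    """Sharpen: output = input - k * Laplacian (zero outside the image)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lap = -4.0 * img[y][x]
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    lap += img[ny][nx]
            out[y][x] = img[y][x] - k * lap
    return out

def linear_fusion(raw, enhanced, alpha=0.5):
    """Pixel-wise linear blend of the raw and enhanced images."""
    return [[alpha * r + (1 - alpha) * e for r, e in zip(rr, ee)]
            for rr, ee in zip(raw, enhanced)]

raw = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]    # one bright ridge pixel
fused = linear_fusion(raw, high_freq_emphasis(raw))
print(fused[1][1])  # → 3.0
```

The isolated bright pixel is boosted from 1 to 5 by the emphasis step, then averaged back to 3 by the fusion, illustrating how fusion tempers the enhancement.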

  11. Mechanisms and neural basis of object and pattern recognition: a study with chess experts.

    PubMed

    Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-11-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.

  12. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a very unique pattern which can be used for biometric recognition. Texture analysis methods can be used to identify the texture in an image; one such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used here are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five wavelets mentioned was carried out, followed by a comparative analysis from which conclusions were drawn. Several steps are involved. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using the normalized Euclidean distance. The comparative analysis is based on the recognition rate percentage, with two samples per subject stored in the database as reference images. After finding the recognition rate, further tests were conducted using energy compaction for all five wavelet types. As a result, the highest recognition rate is achieved using Haar; for coefficient cutting with C(i) < 0.1, the Haar wavelet has the highest percentage, so the retention rate, i.e., the number of significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
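    The energy feature and distance measure above can be sketched as follows, assuming for brevity a single-level 1-D Haar transform (the study applies 2-D transforms to iris images); the sample values are illustrative.

```python
# Sketch: energy of single-level Haar detail (high-pass) coefficients as a
# texture feature, plus the normalized Euclidean distance used to compare
# feature vectors. 1-D and single-level for brevity; assumed simplification.
import math

def haar_detail_energy(signal):
    """Energy of the single-level Haar detail coefficients of a 1-D signal."""
    details = [(signal[i] - signal[i + 1]) / math.sqrt(2)
               for i in range(0, len(signal) - 1, 2)]
    return sum(d * d for d in details)

def normalized_euclidean(f1, f2):
    """Euclidean distance between feature vectors, scaled by their length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)) / len(f1))

row = [4.0, 2.0, 6.0, 6.0]
print(round(haar_detail_energy(row), 6))  # → 2.0
```

Only the first pair (4, 2) contributes detail energy; the flat pair (6, 6) contributes none, which is why smooth regions compact into few significant coefficients.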

  13. Automated designation of tie-points for image-to-image coregistration.

    Treesearch

    R.E. Kennedy; W.B. Cohen

    2003-01-01

    Image-to-image registration requires identification of common points in both images (image tie-points: ITPs). Here we describe software implementing an automated, area-based technique for identifying ITPs. The ITP software was designed to follow two strategies: (1) capitalize on human knowledge and pattern recognition strengths, and (2) favour robustness in many...

  14. Hotspot detection using image pattern recognition based on higher-order local auto-correlation

    NASA Astrophysics Data System (ADS)

    Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki

    2011-04-01

    Below the 40 nm design node, systematic variation due to lithography must be taken into consideration during the early stages of design. So far, litho-aware design using lithography simulation models has been widely applied to assure that designs are printed on silicon without any error. However, the lithography simulation approach is very time consuming, and under time-to-market pressure, repetitive redesign by this approach may result in missing the market window. This paper proposes a fast hotspot detection support method using flexible and intelligent image pattern recognition based on higher-order local autocorrelation. Our method learns the geometrical properties of defect-free design data as normal patterns and automatically detects design patterns with hotspots in the test data as abnormal patterns. The higher-order local autocorrelation method can extract features from the graphic image of a design pattern, and the computational cost of the extraction is constant regardless of the number of design pattern polygons. This approach can reduce turnaround time (TAT) dramatically on a single CPU compared with the conventional simulation-based approach, and distributed processing has proven to deliver linear scalability with each additional CPU.
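    A higher-order local autocorrelation (HLAC) feature, as named above, is the sum over image positions of the product of pixel values at a fixed set of local displacements. The sketch below shows only three of the standard 3x3 masks (the full set has 25 for order ≤ 2); the tiny binary image is illustrative.

```python
# Sketch of HLAC feature extraction: for each displacement mask, sum over
# all interior positions the product of pixels at those displacements.
# Only 3 of the 25 standard order<=2 masks are shown (assumed subset).

def hlac_feature(img, displacements):
    """Sum over positions r of the product of img at r + each displacement."""
    h, w = len(img), len(img[0])
    total = 0
    for y in range(1, h - 1):            # keep the 3x3 window inside
        for x in range(1, w - 1):
            prod = 1
            for dy, dx in displacements:
                prod *= img[y + dy][x + dx]
            total += prod
    return total

masks = [
    [(0, 0)],                   # order 0: pixel sum
    [(0, 0), (0, 1)],           # order 1: horizontal pair
    [(0, 0), (-1, 0), (1, 0)],  # order 2: vertical triple
]
img = [[1, 1, 0],
       [0, 1, 1],
       [1, 0, 1]]
features = [hlac_feature(img, m) for m in masks]
print(features)  # → [1, 1, 0]
```

Because each mask requires one pass over the pixel grid, extraction cost depends only on image size, matching the abstract's point that it is independent of the polygon count of the design pattern.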

  15. Biometric identification

    NASA Astrophysics Data System (ADS)

    Syryamkim, V. I.; Kuznetsov, D. N.; Kuznetsova, A. S.

    2018-05-01

    Image recognition is an information process implemented by an information converter (an intelligent information channel, or recognition system) with an input and an output. The input of the system is fed information about the characteristics of the objects being presented; the output reports which classes (generalized images) the recognized objects are assigned to. When creating and operating an automated pattern recognition system, a number of problems must be solved; different authors formulate these tasks, and even the set of tasks itself, differently, since it depends to a certain extent on the specific mathematical model on which a given recognition system is based. These tasks include formalizing the domain, forming a training sample, training the recognition system, and reducing the dimensionality of the feature space.

  16. Background feature descriptor for offline handwritten numeral recognition

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Wang, Hao; Tian, Tian; Jie, Feiran; Lei, Bo

    2011-11-01

    This paper puts forward an offline handwritten numeral recognition method based on a background structural descriptor (a sixteen-value numerical background expression). By encoding the background pixels in the image according to a certain rule, 16 different eigenvalues are generated, which reflect the background condition of every digit and hence the structural features of the digits. Through a pattern-language description of images using these features, automatic segmentation of overlapping digits and numeral recognition can be realized. This method is characterized by great resistance to deformation, high recognition speed, and easy realization. Finally, the experimental results and conclusions are presented. The results of recognizing datasets from various practical application fields show that this method achieves a good recognition effect.

  17. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring, and tracking, especially at nighttime. The system can be applied at airports, customs, or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern words (FPWs) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all comparison FPWs (no further transforms). A high identification rate (97.44% with Top-1 match) has been achieved on our preliminary face dataset (39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.

  18. Cross-modal face recognition using multi-matcher face scores

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face to the visible gallery faces requires cross-modal matching approaches. A few such studies have been implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature, and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed from the three cross-matched face scores of the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross-validation. The proposed approach was validated on a multispectral stereo face dataset of 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces to the fused visible faces using three face scores and the BLR classifier.

  19. Local intensity area descriptor for facial recognition in ideal and noise conditions

    NASA Astrophysics Data System (ADS)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
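    The two dissimilarity measures named above can be sketched in a few lines, together with the nearest-neighbour decision rule; the three-bin histograms are illustrative toys standing in for concatenated LIAD histograms.

```python
# Sketch: histogram intersection (as a dissimilarity) and the chi-square
# statistic, used by a nearest-neighbour classifier over normalized
# histograms. Histogram values below are illustrative assumptions.

def intersection_dissimilarity(h1, h2):
    """1 minus the sum of bin-wise minima (histograms sum to 1)."""
    return 1.0 - sum(min(a, b) for a, b in zip(h1, h2))

def chi_square(h1, h2, eps=1e-10):
    """Chi-square statistic between two histograms (eps avoids 0/0)."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def nearest_neighbour(probe, gallery, dist):
    """Index of the gallery histogram closest to the probe."""
    return min(range(len(gallery)), key=lambda i: dist(probe, gallery[i]))

probe   = [0.5, 0.3, 0.2]
gallery = [[0.1, 0.1, 0.8], [0.5, 0.4, 0.1]]
print(nearest_neighbour(probe, gallery, chi_square))  # → 1
```

Chi-square down-weights differences in well-populated bins less than in sparse ones, which is one reason it often behaves differently from plain intersection on texture histograms.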

  20. Wearable Device-Based Gait Recognition Using Angle Embedded Gait Dynamic Images and a Convolutional Neural Network.

    PubMed

    Zhao, Yongjia; Zhou, Suiping

    2017-02-28

    The widespread installation of inertial sensors in smartphones and other wearable devices provides a valuable opportunity to identify people by analyzing their gait patterns, for either cooperative or non-cooperative circumstances. However, it is still a challenging task to reliably extract discriminative features for gait recognition with noisy and complex data sequences collected from casually worn wearable devices like smartphones. To cope with this problem, we propose a novel image-based gait recognition approach using the Convolutional Neural Network (CNN) without the need to manually extract discriminative features. The CNN's input image, which is encoded straightforwardly from the inertial sensor data sequences, is called Angle Embedded Gait Dynamic Image (AE-GDI). AE-GDI is a new two-dimensional representation of gait dynamics, which is invariant to rotation and translation. The performance of the proposed approach in gait authentication and gait labeling is evaluated using two datasets: (1) the McGill University dataset, which is collected under realistic conditions; and (2) the Osaka University dataset with the largest number of subjects. Experimental results show that the proposed approach achieves competitive recognition accuracy over existing approaches and provides an effective parametric solution for identification among a large number of subjects by gait patterns.

  1. Wearable Device-Based Gait Recognition Using Angle Embedded Gait Dynamic Images and a Convolutional Neural Network

    PubMed Central

    Zhao, Yongjia; Zhou, Suiping

    2017-01-01

    The widespread installation of inertial sensors in smartphones and other wearable devices provides a valuable opportunity to identify people by analyzing their gait patterns, for either cooperative or non-cooperative circumstances. However, it is still a challenging task to reliably extract discriminative features for gait recognition with noisy and complex data sequences collected from casually worn wearable devices like smartphones. To cope with this problem, we propose a novel image-based gait recognition approach using the Convolutional Neural Network (CNN) without the need to manually extract discriminative features. The CNN’s input image, which is encoded straightforwardly from the inertial sensor data sequences, is called Angle Embedded Gait Dynamic Image (AE-GDI). AE-GDI is a new two-dimensional representation of gait dynamics, which is invariant to rotation and translation. The performance of the proposed approach in gait authentication and gait labeling is evaluated using two datasets: (1) the McGill University dataset, which is collected under realistic conditions; and (2) the Osaka University dataset with the largest number of subjects. Experimental results show that the proposed approach achieves competitive recognition accuracy over existing approaches and provides an effective parametric solution for identification among a large number of subjects by gait patterns. PMID:28264503

  2. Chinese character recognition based on Gabor feature extraction and CNN

    NASA Astrophysics Data System (ADS)

    Xiong, Yudian; Lu, Tongwei; Jiang, Yongyuan

    2018-03-01

    As an important application in the field of text-line recognition and office automation, Chinese character recognition has become an important subject of pattern recognition. However, due to the large number of Chinese characters and the complexity of their structure, Chinese character recognition presents great difficulty. To solve this problem, this paper proposes a method for printed Chinese character recognition based on Gabor feature extraction and a Convolutional Neural Network (CNN). The main steps are preprocessing, feature extraction, and training/classification. First, the gray-scale Chinese character image is binarized and normalized to reduce the redundancy of the image data. Second, each image is convolved with Gabor filters at different orientations, and feature maps for the eight orientations of Chinese characters are extracted. Third, the Gabor feature maps and the original image are convolved with learned kernels, and the results of the convolution are the input to the pooling layer. Finally, the feature vector is used for classification and recognition. In addition, the generalization capacity of the network is improved by the Dropout technique. The experimental results show that this method can effectively extract the characteristics of Chinese characters and recognize them.
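    The second step above, building a bank of Gabor filters at eight orientations, can be sketched as follows. Kernel size, sigma, and wavelength are illustrative assumptions, not the paper's settings.

```python
# Sketch: generate the real part of a Gabor kernel at orientation theta,
# then build an eight-orientation bank (0, pi/8, ..., 7*pi/8) as in the
# abstract. Size/sigma/wavelength values are assumed for illustration.
import math

def gabor_kernel(theta, size=5, sigma=2.0, wavelength=4.0):
    """Real part of a Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotate coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            gauss = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(gauss * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

bank = [gabor_kernel(k * math.pi / 8) for k in range(8)]
print(len(bank), len(bank[0]), len(bank[0][0]))  # → 8 5 5
```

Convolving the character image with each kernel in the bank yields the eight orientation feature maps that, together with the original image, feed the CNN's learned convolution layers.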

  3. Development of Collaborative Research Initiatives to Advance the Aerospace Sciences-via the Communications, Electronics, Information Systems Focus Group

    NASA Technical Reports Server (NTRS)

    Knasel, T. Michael

    1996-01-01

    The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms including evolutionary learning algorithms, neural networks, and adaptive clustering techniques to develop the E-MOR.PH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is multi-phase evolutionary leaming system that evolves cooperative sets of features detectors and combines their response using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images that are formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the leaming process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report. 
The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems for this task. This dual use of ATR technology clearly demonstrates the flexibility and power of our approach.

  4. Neural network for intelligent query of an FBI forensic database

    NASA Astrophysics Data System (ADS)

    Uvanni, Lee A.; Rainey, Timothy G.; Balasubramanian, Uma; Brettle, Dean W.; Weingard, Fred; Sibert, Robert W.; Birnbaum, Eric

    1997-02-01

Examiner is an automated fired cartridge case identification system built on a dual-use neural network pattern recognition technology, the statistical-multiple object detection and location system (S-MODALS), developed by Booz Allen & Hamilton, Inc. in conjunction with Rome Laboratory. S-MODALS was originally designed for automatic target recognition (ATR) of tactical and strategic military targets using fused data from multiple sensors: electro-optical (EO), infrared (IR), and synthetic aperture radar (SAR). Since S-MODALS is a learning system readily adaptable to problem domains other than automatic target recognition, it was applied to the pattern matching of microscopic marks in firearms evidence. The physics, phenomenology, discrimination and search strategies, robustness requirements, and error- and confidence-level propagation that apply to the pattern matching of military targets were found to apply to the ballistic domain as well. The Examiner system uses S-MODALS to rank a set of queried cartridge case images from most to least similar with respect to an investigative fired cartridge case image. The paper presents three independent test and evaluation studies of the Examiner system utilizing the S-MODALS technology for the Federal Bureau of Investigation.

  5. A new pattern associative memory model for image recognition based on Hebb rules and dot product

    NASA Astrophysics Data System (ADS)

    Gao, Mingyue; Deng, Limiao; Wang, Yanjiang

    2018-04-01

A great number of associative memory models have been proposed in recent years to realize information storage and retrieval inspired by the human brain, yet there is still much room for improvement. In this paper, we extend a binary pattern associative memory model to real-world image recognition. The learning process is based on the fundamental Hebb rules, and retrieval is implemented by a normalized dot-product operation. The proposed model not only supports rapid storage and retrieval of visual information but also allows incremental learning without destroying previously learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back-Propagation Neural Network (BPNN) in recognition accuracy and time efficiency.
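
    The storage-and-retrieval scheme described above can be sketched as follows. This is a minimal illustration of one-shot Hebbian storage with normalized dot-product (cosine) retrieval, not the authors' full model; the class and method names are illustrative.

```python
import numpy as np

class HebbianAssociativeMemory:
    """One-shot Hebbian storage with normalized dot-product (cosine) retrieval."""

    def __init__(self):
        self.patterns = []   # stored unit-norm pattern vectors
        self.labels = []

    def store(self, pattern, label):
        # Hebbian one-shot learning: the pattern itself becomes the weight vector.
        # New patterns never modify old ones, so learning is incremental.
        v = np.asarray(pattern, dtype=float).ravel()
        self.patterns.append(v / (np.linalg.norm(v) + 1e-12))
        self.labels.append(label)

    def retrieve(self, query):
        # Normalized dot product against every stored pattern; best match wins.
        q = np.asarray(query, dtype=float).ravel()
        q = q / (np.linalg.norm(q) + 1e-12)
        sims = np.array([p @ q for p in self.patterns])
        best = int(np.argmax(sims))
        return self.labels[best], float(sims[best])
```

    Because each stored pattern is kept separately, adding a new class never overwrites earlier memories, which is the incremental-learning property the abstract emphasizes.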

  6. Apparatus for detecting and recognizing analytes based on their crystallization patterns

    DOEpatents

    Morozov, Victor; Bailey, Charles L.; Vsevolodov, Nikolai N.; Elliott, Adam

    2010-12-14

The invention contemplates apparatuses for recognizing proteins and other biological molecules by imaging the morphology, size, and distribution of crystalline and amorphous dry residues in droplets (referred to as "crystallization patterns") containing a predetermined amount of certain crystal-forming organic compounds (reporters) to which the protein to be analyzed is added. Changes in the crystallization patterns of a number of amino acids can be used as a "signature" of the added protein. The changes in the crystallization patterns, as well as the character of such changes, can also serve as recognition elements in the analysis of protein molecules.

  7. Introduction to computer image processing

    NASA Technical Reports Server (NTRS)

    Moik, J. G.

    1973-01-01

Theoretical background and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented, along with methods for object extraction and the problem of pictorial pattern recognition and classification.

  8. Associative Pattern Recognition In Analog VLSI Circuits

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul

    1995-01-01

A winner-take-all circuit selects the best-matching stored pattern. Prototype cascadable very-large-scale integrated (VLSI) circuit chips were built and tested to demonstrate the concept of electronic associative pattern recognition. Based on low-power, sub-threshold analog complementary metal-oxide-semiconductor (CMOS) VLSI circuitry, each chip can store 128 sets (vectors) of 16 analog values (vector components); the vectors represent known patterns as diverse as spectra, histograms, graphs, or pixel brightnesses in images. The chips exploit the parallel nature of the vector quantization architecture to implement highly parallel processing in relatively simple computational cells. Through collective action, the cells classify an input pattern in a fraction of a microsecond while consuming only a few microwatts.
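
    In software, the winner-take-all vector-quantization step that the chip implements in analog circuitry reduces to a nearest-stored-vector search; a minimal sketch (function name and distance metric are illustrative assumptions):

```python
import numpy as np

def winner_take_all(stored, query):
    """Return the index of the stored vector closest to the query.

    stored: (n_vectors, n_components) array of known patterns
            (the chip holds 128 vectors of 16 components each)
    query:  (n_components,) input pattern
    """
    stored = np.asarray(stored, dtype=float)
    query = np.asarray(query, dtype=float)
    d = np.linalg.norm(stored - query, axis=1)  # distance to every stored vector
    return int(np.argmin(d))                    # index of the "winning" cell
```

    On the chip, all 128 cells compute their distances simultaneously and the winner-take-all circuit suppresses every cell but the closest one, which is what makes the sub-microsecond classification possible.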

  9. Remote Video Monitor of Vehicles in Cooperative Information Platform

    NASA Astrophysics Data System (ADS)

    Qin, Guofeng; Wang, Xiaoguo; Wang, Li; Li, Yang; Li, Qiyan

Detection of vehicles plays an important role in modern intelligent traffic management, and pattern recognition is a central problem in computer vision. An auto-recognition system in a cooperative information platform is studied. In the cooperative platform, 3G wireless networks, including GPS, GPRS (CDMA), Internet (Intranet), remote video monitoring, and M-DMB networks, are integrated. Remote video is captured at the terminals and sent to the cooperative platform, where it is analyzed by the auto-recognition system. The images are preprocessed and segmented, followed by feature extraction, template matching, and pattern recognition. The system identifies different vehicle models and produces vehicular traffic statistics. Finally, the implementation of the system is introduced.

  10. Reading recognition of pointer meter based on pattern recognition and dynamic three-points on a line

    NASA Astrophysics Data System (ADS)

    Zhang, Yongqiang; Ding, Mingli; Fu, Wuyifang; Li, Yongqiang

    2017-03-01

Pointer meters are widely used in industrial production because they are directly readable, but they must be calibrated regularly to ensure reading precision. Currently, manual calibration is most frequently used to verify pointer meters; the required professional skill and subjective judgment can lead to large measurement errors, poor reliability, and low efficiency. In recent decades, with the development of computer technology, machine vision and digital image processing have been applied to recognizing dial instrument readings. Existing recognition methods assume that all dial instruments share the same parameters, which is not the case in practice. In this work, recognition of the pointer meter reading is treated as a pattern recognition problem. We obtain the features of a small area around the detected point, treat those features as a pattern, divide the certified images based on a gradient pyramid algorithm, train a classifier with the support vector machine (SVM), and complete the pattern matching of the divided images. We then obtain the pointer meter reading precisely using the dynamic three-points-on-a-line (DTPML) principle, which eliminates the error caused by small differences between panels. Experimental results show that the proposed method is superior to state-of-the-art approaches.

  11. Neural network classification technique and machine vision for bread crumb grain evaluation

    NASA Astrophysics Data System (ADS)

    Zayas, Inna Y.; Chung, O. K.; Caley, M.

    1995-10-01

Bread crumb grain was studied to develop a model for pattern recognition of bread baked at the Hard Winter Wheat Quality Laboratory (HWWQL), Grain Marketing and Production Research Center (GMPRC). Images of bread slices were acquired with a scanner in a 512 × 512 format. Subimages in the central part of the slices were evaluated with features such as the mean, determinant, eigenvalues, slice shape, and other crumb features. The derived features were used to describe slices and loaves. Neural network programs from the MATLAB package were used for data analysis. The learning vector quantization method and multivariate discriminant analysis were applied to bread slices from wheat of different sources. Training and test sets of different bread crumb texture classes were obtained. The ranking of subimages correlated well with visual judgment. The performance of different models on slice recognition rate was studied to choose the best model. Recognition of classes created according to human judgment using image features was low, whereas recognition of classes created arbitrarily according to porosity patterns, using several feature patterns, was approximately 90%. The correlation coefficient between slice shape features and loaf volume was approximately 0.7.

  12. Artificial neural network detects human uncertainty

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander E.; Frolov, Nikita S.; Maksimenko, Vladimir A.; Makarov, Vladimir V.; Koronovskii, Alexey A.; Garcia-Prieto, Juan; Antón-Toro, Luis Fernando; Maestú, Fernando; Pisarchik, Alexander N.

    2018-03-01

Artificial neural networks (ANNs) are known to be a powerful tool for data analysis. They are used in social science, robotics, and neurophysiology for solving tasks of classification, forecasting, pattern recognition, etc. In neuroscience, ANNs allow the recognition of specific forms of brain activity from multichannel EEG or MEG data, which makes them an efficient computational core for brain-machine systems. However, despite significant achievements of artificial intelligence in recognizing and classifying well-reproducible patterns of neural activity, the use of ANNs for less reproducible patterns still requires additional attention, especially in ambiguous situations. Accordingly, in this research we demonstrate the efficiency of ANNs for classifying human MEG trials corresponding to the perception of bistable visual stimuli with different degrees of ambiguity. We show that, along with classifying brain states associated with multistable image interpretations, in the case of significant ambiguity the ANN can detect an uncertain state in which the observer doubts the image interpretation. With the obtained results, we describe the possible application of ANNs for detecting bistable brain activity associated with difficulties in the decision-making process.

  13. Robust recognition of degraded machine-printed characters using complementary similarity measure and error-correction learning

    NASA Astrophysics Data System (ADS)

    Hagita, Norihiro; Sawaki, Minako

    1995-03-01

Most conventional methods in character recognition extract geometrical features such as stroke direction and connectivity of strokes, and compare them with reference patterns in a stored dictionary. Unfortunately, geometrical features are easily degraded by blurs, stains, and the graphical background designs used in Japanese newspaper headlines. This noise must be removed before recognition commences, but no preprocessing method is completely accurate. This paper proposes a method for recognizing degraded characters and characters printed on graphical background designs. The method is based on the binary image feature method, which uses binary images themselves as features. A new similarity measure, called the complementary similarity measure, is used as the discriminant function: it compares both the similarity and the dissimilarity of a binary pattern with the reference dictionary patterns. Experiments were conducted using the standard character database ETL-2, which consists of machine-printed Kanji, Hiragana, Katakana, alphanumeric, and special characters. The results show that this method is much more robust against noise than the conventional geometrical feature method. It also achieves high recognition rates: over 92% for characters with textured foregrounds, over 98% for characters with textured backgrounds, over 98% for outline fonts, and over 99% for reverse-contrast characters.
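
    The idea of scoring both agreement and disagreement between binary patterns can be sketched from the four overlap counts. The normalization below (square root of ones-in-reference times zeros-in-reference) follows one published form of the complementary similarity measure but may differ in detail from the authors' exact definition:

```python
import numpy as np

def complementary_similarity(f, t):
    """Compare a binary input pattern f with a binary reference pattern t.

    a and d count agreements (both 1, both 0); b and c count disagreements.
    The score grows with agreement and shrinks with disagreement.
    """
    f = np.asarray(f).astype(bool)
    t = np.asarray(t).astype(bool)
    a = np.sum(f & t)        # both foreground
    b = np.sum(f & ~t)       # foreground only in f
    c = np.sum(~f & t)       # foreground only in t
    d = np.sum(~f & ~t)      # both background
    denom = np.sqrt(float((a + c) * (b + d)))
    return float(a * d - b * c) / denom if denom > 0 else 0.0
```

    Identical patterns score highly positive, complemented patterns highly negative, which is what makes the measure robust when foreground strokes sit on textured backgrounds.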

  14. A self-organized learning strategy for object recognition by an embedded line of attraction

    NASA Astrophysics Data System (ADS)

    Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.

    2012-04-01

    For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. 
Based on this observation we developed a self-organizing line attractor, which is capable of generating new lines in the feature space to learn unrecognized patterns. Experiments performed on the UMIST pose database and the CMU face-expression-variant database for face recognition show that the proposed nonlinear line attractor successfully identifies individuals and provides a better recognition rate than state-of-the-art face recognition techniques. Experiments on the FRGC version 2 database have also yielded excellent recognition rates on images captured in complex lighting environments. Experiments performed on the Japanese female facial expression database and the Essex Grimace database using the self-organizing line attractor have also shown successful expression-invariant face recognition. These results show that the proposed model can create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.

  15. Polarimetric Imaging System for Automatic Target Detection and Recognition

    DTIC Science & Technology

    2000-03-01

    The technique shown in Figure 4(b) can also be used to integrate polarizer arrays with other types of imaging sensors, such as LWIR cameras and uncooled … The vertical stripe pattern in this φ image is caused by nonuniformities in the particular polarizer array used.

  16. Tumor recognition in wireless capsule endoscopy images using textural features and SVM-based feature selection.

    PubMed

    Li, Baopu; Meng, Max Q-H

    2012-05-01

Tumors of the digestive tract are a common disease, and wireless capsule endoscopy (WCE) is a relatively new technology for examining the digestive tract, especially the small intestine. This paper addresses the problem of automatic tumor recognition in WCE images. A candidate color texture feature that integrates the uniform local binary pattern and the wavelet transform is proposed to characterize WCE images. The proposed features are invariant to illumination change and describe the multiresolution characteristics of WCE images. Two feature selection approaches based on the support vector machine, sequential forward floating selection and recursive feature elimination, are further employed to refine the proposed features and improve detection accuracy. Extensive experiments validate that the proposed computer-aided diagnosis system achieves a promising tumor recognition accuracy of 92.4% on our collected WCE data.
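
    The uniform LBP building block mentioned above can be sketched as a generic LBP(8,1) histogram; this is not the authors' exact descriptor (their feature additionally integrates color channels and wavelet subbands):

```python
import numpy as np

def uniform_lbp_histogram(img):
    """59-bin uniform LBP(8,1) histogram of a 2-D grayscale image."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # the 8 neighbors of each interior pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = [(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= center).astype(int)
            for dy, dx in shifts]
    codes = sum(b << i for i, b in enumerate(bits))   # LBP code per pixel, 0..255

    def transitions(code):
        # number of 0/1 changes in the circular 8-bit string
        s = [(code >> i) & 1 for i in range(8)]
        return sum(s[i] != s[(i + 1) % 8] for i in range(8))

    # "Uniform" patterns have at most 2 circular transitions (58 of 256 codes);
    # all non-uniform codes share a single extra bin, giving 59 bins total.
    uniform = [p for p in range(256) if transitions(p) <= 2]
    lut = np.full(256, len(uniform))
    for idx, p in enumerate(uniform):
        lut[p] = idx
    hist = np.bincount(lut[codes].ravel(), minlength=59).astype(float)
    return hist / hist.sum()
```

    The 59-bin histogram would then be one component of the feature vector passed to the SVM-based feature selection stage.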

  17. Personal authentication through dorsal hand vein patterns

    NASA Astrophysics Data System (ADS)

    Hsu, Chih-Bin; Hao, Shu-Sheng; Lee, Jen-Chun

    2011-08-01

Biometric identification is an emerging technology that can solve security problems in our networked society. A reliable and robust personal verification approach using dorsal hand vein patterns is proposed in this paper. The approach has low computational and memory requirements and high recognition accuracy. In our work, a near-infrared charge-coupled device (CCD) camera is adopted as the input device for capturing dorsal hand vein images, offering low-cost, noncontact imaging. In the proposed approach, two finger peaks are automatically selected as datum points to define the region of interest (ROI) in the dorsal hand vein images. A modified two-directional two-dimensional principal component analysis, which performs an alternate two-dimensional PCA (2DPCA) in the column direction of images in the 2DPCA subspace, is proposed to exploit the correlation of vein features inside the ROI between images. The major advantage of the proposed method is that it requires fewer coefficients for efficient dorsal hand vein image representation and recognition. Experimental results on our large dorsal hand vein database show that the presented scheme achieves promising performance (false reject rate: 0.97%; false acceptance rate: 0.05%) and is feasible for dorsal hand vein recognition.
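
    A minimal sketch of the plain 2DPCA building block follows; the paper's modified method additionally repeats the analysis in the column direction within the 2DPCA subspace, which is omitted here, and the function names are illustrative:

```python
import numpy as np

def fit_2dpca(images, k):
    """Plain 2DPCA: learn k projection vectors from (h, w) training images."""
    A = np.stack([np.asarray(im, dtype=float) for im in images])  # (M, h, w)
    mean = A.mean(axis=0)
    B = A - mean
    # Image scatter matrix G = (1/M) * sum_i (A_i - mean)^T (A_i - mean), (w, w)
    G = np.einsum('nij,nik->jk', B, B) / len(A)
    eigvals, eigvecs = np.linalg.eigh(G)   # eigenvalues in ascending order
    X = eigvecs[:, ::-1][:, :k]            # top-k eigenvectors as columns
    return mean, X

def project_2dpca(im, mean, X):
    """Feature matrix Y = (A - mean) X, shape (h, k)."""
    return (np.asarray(im, dtype=float) - mean) @ X
```

    Because projection works directly on the image matrix rather than a flattened vector, far fewer coefficients are needed than in classical PCA, which is the efficiency advantage the abstract claims.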

  18. Image registration under translation and rotation in two-dimensional planes using Fourier slice theorem.

    PubMed

    Pohit, M; Sharma, J

    2015-05-10

Image recognition in the presence of both rotation and translation is a longstanding problem in correlation pattern recognition. The log-polar transform offers a solution, but at the cost of losing the vital phase information in the image. The main objective of this paper is to develop an algorithm based on the Fourier slice theorem for measuring the simultaneous rotation and translation of an object in a 2D plane. The algorithm is applicable for any arbitrary object shift and full 180° rotation.
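
    The paper's Fourier-slice-theorem algorithm is not reproduced here, but the closely related phase-correlation technique illustrates how Fourier-domain phase recovers a pure translation; this sketch assumes circular (wrap-around) shifts:

```python
import numpy as np

def phase_correlation_shift(ref, shifted):
    """Estimate (dy, dx) such that shifted ~= np.roll(ref, (dy, dx), axis=(0, 1))."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(shifted)
    R = np.conj(F1) * F2
    R /= np.abs(R) + 1e-12                  # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(R))          # sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:                         # fold wrap-around into negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

    The translation lives entirely in the phase of the spectrum, which is exactly the information the log-polar approach discards and the paper sets out to preserve.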

  19. Ball-scale based hierarchical multi-object recognition in 3D medical images

    NASA Astrophysics Data System (ADS)

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) a semi-automatic way of constructing a multi-object shape model assembly; (b) a novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images; and (c) a hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought of as a hierarchical framework in which quick placement of the model assembly constitutes coarse recognition and delineation itself is the finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image, such that recognition results in a placement of the model close to the actual pose without any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  20. Dual Use of Image Based Tracking Techniques: Laser Eye Surgery and Low Vision Prosthesis

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.; Barton, R. Shane

    1994-01-01

    With a concentration on Fourier optics pattern recognition, we have developed several methods of tracking objects in dynamic imagery to automate certain space applications such as orbital rendezvous and spacecraft capture, or planetary landing. We are developing two of these techniques for Earth applications in real-time medical image processing. The first is warping of a video image, developed to evoke shift invariance to scale and rotation in correlation pattern recognition. The technology is being applied to compensation for certain field defects in low vision humans. The second is using the optical joint Fourier transform to track the translation of unmodeled scenes. Developed as an image fixation tool to assist in calculating shape from motion, it is being applied to tracking motions of the eyeball quickly enough to keep a laser photocoagulation spot fixed on the retina, thus avoiding collateral damage.

  2. 3D palmprint data fast acquisition and recognition

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxu; Huang, Shujun; Gao, Nan; Zhang, Zonghua

    2014-11-01

This paper presents a fast three-dimensional (3D) palmprint capture system and develops an efficient 3D palmprint feature extraction and recognition method. To rapidly acquire the accurate 3D shape and texture of a palmprint, a DLP projector triggers a CCD camera for synchronization. By generating and projecting green fringe pattern images onto the measured palm surface, 3D palmprint data are calculated from the fringe pattern images. A periodic feature vector can be derived from the calculated 3D palmprint data, so undistorted 3D biometrics are obtained. Using the obtained 3D palmprint data, feature matching tests have been carried out with Gabor filters, competition rules, and the mean curvature. Experimental results show that the proposed acquisition method can rapidly capture the 3D shape of a palmprint, and initial recognition experiments show that the proposed method is efficient when using 3D palmprint data.

  3. Evidence for Holistic Representations of Ignored Images and Analytic Representations of Attended Images

    ERIC Educational Resources Information Center

    Thoma, Volker; Hummel, John E.; Davidoff, Jules

    2004-01-01

    According to the hybrid theory of object recognition (J. E. Hummel, 2001), ignored object images are represented holistically, and attended images are represented both holistically and analytically. This account correctly predicts patterns of visual priming as a function of translation, scale (B. J. Stankiewicz & J. E. Hummel, 2002), and…

  4. Image processing and pattern recognition with CVIPtools MATLAB toolbox: automatic creation of masks for veterinary thermographic images

    NASA Astrophysics Data System (ADS)

    Mishra, Deependra K.; Umbaugh, Scott E.; Lama, Norsang; Dahal, Rohini; Marino, Dominic J.; Sackman, Joseph

    2016-09-01

CVIPtools is a software package for the exploration of computer vision and image processing, developed in the Computer Vision and Image Processing Laboratory at Southern Illinois University Edwardsville. CVIPtools is available in three variants - (a) the CVIPtools graphical user interface, (b) the CVIPtools C library, and (c) the CVIPtools MATLAB toolbox - which makes it accessible to a variety of users. It offers students, faculty, researchers, and other users a free and easy way to explore computer vision and image processing techniques. Many functions have been implemented and are updated on a regular basis, and the library has reached a level of sophistication suitable for both educational and research purposes. In this paper, a detailed list of the functions available in the CVIPtools MATLAB toolbox is presented, along with how these functions can be used in image analysis and computer vision applications. The CVIPtools MATLAB toolbox allows the user to gain the practical experience needed to better understand underlying theoretical problems in image processing and pattern recognition. As an example application, an algorithm for the automatic creation of masks for veterinary thermographic images is presented.

  5. Pattern Recognition of the Multiple Sclerosis Syndrome

    PubMed Central

    Stewart, Renee; Healey, Kathleen M.

    2017-01-01

During recent decades, the autoimmune disease neuromyelitis optica spectrum disorder (NMOSD), once broadly classified under the umbrella of multiple sclerosis (MS), has been extended to include autoimmune inflammatory conditions of the central nervous system (CNS) that are now diagnosable with serum serological tests. These antibody-mediated inflammatory diseases of the CNS share a clinical presentation with MS. A number of practical learning points emerge in this review, which is geared toward the pattern recognition of optic neuritis, transverse myelitis, brainstem/cerebellar and hemispheric tumefactive demyelinating lesion (TDL)-associated MS, aquaporin-4-antibody and myelin oligodendrocyte glycoprotein (MOG)-antibody NMOSD, overlap syndrome, and some yet-to-be-defined/classified demyelinating diseases, all nonspecifically labeled under the MS syndrome. The goal of this review is to increase clinicians' awareness of the clinical nuances of the autoimmune conditions MS and NMOSD, and to highlight highly suggestive patterns of clinical, paraclinical, or imaging presentation in order to improve differentiation. With overlap in the clinical manifestations of MS and NMOSD, magnetic resonance imaging (MRI) of the brain, orbits, and spinal cord, serology, and, most importantly, a high index of suspicion based on pattern recognition will help lead to the final diagnosis. PMID:29064441

  6. Employing wavelet-based texture features in ammunition classification

    NASA Astrophysics Data System (ADS)

    Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.

    2017-05-01

Pattern recognition, a branch of machine learning, involves the classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. To do this, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.
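
    As a hedged illustration of wavelet-based texture energy features (not the authors' exact feature set, which operates on a gray-level image built from the gunshot signal), a single-level 2D Haar transform with subband energies might look like:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2D Haar wavelet transform (image sides must be even)."""
    img = np.asarray(img, dtype=float)
    # Rows: average / difference of adjacent pixel pairs
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Columns: the same filtering applied to the row-filtered results
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return LL, LH, HL, HH

def wavelet_energy_features(img):
    """Mean energy of each detail subband -- a simple texture descriptor."""
    _, LH, HL, HH = haar_dwt2(img)
    return np.array([np.mean(LH**2), np.mean(HL**2), np.mean(HH**2)])
```

    Detail-subband energies capture how much fine structure the "image" contains at each orientation, which is the kind of texture statistic a classifier can discriminate on.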

  7. Techniques for generation of control and guidance signals derived from optical fields, part 2

    NASA Technical Reports Server (NTRS)

    Hemami, H.; Mcghee, R. B.; Gardner, S. R.

    1971-01-01

    The development is reported of a high resolution technique for the detection and identification of landmarks from spacecraft optical fields. By making use of nonlinear regression analysis, a method is presented whereby a sequence of synthetic images produced by a digital computer can be automatically adjusted to provide a least squares approximation to a real image. The convergence of the method is demonstrated by means of a computer simulation for both elliptical and rectangular patterns. Statistical simulation studies with elliptical and rectangular patterns show that the computational techniques developed are able to at least match human pattern recognition capabilities, even in the presence of large amounts of noise. Unlike most pattern recognition techniques, this ability is unaffected by arbitrary pattern rotation, translation, and scale change. Further development of the basic approach may eventually allow a spacecraft or robot vehicle to be provided with an ability to very accurately determine its spatial relationship to arbitrary known objects within its optical field of view.

  8. Log-Gabor Weber descriptor for face recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Sang, Nong; Gao, Changxin

    2015-09-01

The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or phase information of the Log-Gabor transform is considered; the complementary effect of combining magnitude and phase information simultaneously for image-feature extraction has not been systematically explored in existing work. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) to fully utilize the information from the magnitude and phase features of the multiscale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response; and (2) the encoded Log-Gabor magnitude and phase information are fused at the feature level using a kernel canonical correlation analysis strategy, since feature-level fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.

  9. Low-resolution expression recognition based on central oblique average CS-LBP with adaptive threshold

    NASA Astrophysics Data System (ADS)

    Han, Sheng; Xi, Shi-qiong; Geng, Wei-dong

    2017-11-01

    In order to solve the problem of low recognition rate of traditional feature extraction operators under low-resolution images, a novel algorithm of expression recognition is proposed, named central oblique average center-symmetric local binary pattern (CS-LBP) with adaptive threshold (ATCS-LBP). Firstly, the features of face images can be extracted by the proposed operator after pretreatment. Secondly, the obtained feature image is divided into blocks. Thirdly, the histogram of each block is computed independently and all histograms can be connected serially to create a final feature vector. Finally, expression classification is achieved by using support vector machine (SVM) classifier. Experimental results on Japanese female facial expression (JAFFE) database show that the proposed algorithm can achieve a recognition rate of 81.9% when the resolution is as low as 16×16, which is much better than that of the traditional feature extraction operators.
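
    As a rough illustration of the pipeline described above (plain CS-LBP coding plus block-histogram concatenation; the proposed central oblique average and adaptive threshold are not reproduced here), a sketch:

```python
import numpy as np

def cs_lbp(img, threshold=0.01):
    """Center-symmetric LBP: compare the 4 center-symmetric pairs of
    8-neighbours of each pixel, giving a 4-bit code (16 bins).
    `img` is a float grey-level image."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]  # pair i is (i, i+4)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(4):
        dy1, dx1 = offs[i]
        dy2, dx2 = offs[i + 4]
        a = img[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = img[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes += ((a - b) > threshold).astype(np.uint8) * (1 << i)
    return codes

def block_histograms(codes, blocks=4):
    """Divide the code image into blocks x blocks cells and serially
    concatenate their normalised 16-bin histograms."""
    feats = []
    for rows in np.array_split(codes, blocks, axis=0):
        for cell in np.array_split(rows, blocks, axis=1):
            hist, _ = np.histogram(cell, bins=16, range=(0, 16))
            feats.append(hist / max(cell.size, 1))
    return np.concatenate(feats)
```

    The concatenated vector would then be fed to the SVM classifier.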

  10. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    DTIC Science & Technology

    2008-01-01

    Parallel architectures and algorithms for image understanding. Boston: Academic Press, 1991. [99] A. Bruhn, T. Jakob, M. Fischer, T. Kohlberger, J... Symposium on Pattern Recognition, vol. 2449, pp. 290-297, 2002. [100] A. Bruhn, T. Jakob, M. Fischer, T. Kohlberger, J. Weickert, U. Bruning, and C

  11. Efficient local representations for three-dimensional palmprint recognition

    NASA Astrophysics Data System (ADS)

    Yang, Bing; Wang, Xiaohua; Yao, Jinliang; Yang, Xin; Zhu, Wenhua

    2013-10-01

    Palmprints have been broadly used for personal authentication because they are highly accurate and incur low cost. Most previous works have focused on two-dimensional (2-D) palmprint recognition in the past decade. Unfortunately, 2-D palmprint recognition systems lose the shape information when capturing palmprint images. Moreover, such 2-D palmprint images can be easily forged or affected by noise. Hence, three-dimensional (3-D) palmprint recognition has been regarded as a promising way to further improve the performance of palmprint recognition systems. We have developed a simple, but efficient method for 3-D palmprint recognition by using local features. We first utilize shape index representation to describe the geometry of local regions in 3-D palmprint data. Then, we extract local binary pattern and Gabor wavelet features from the shape index image. The two types of complementary features are finally fused at a score level for further improvements. The experimental results on the Hong Kong Polytechnic 3-D palmprint database, which contains 8000 samples from 400 palms, illustrate the effectiveness of the proposed method.
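
    The shape index representation used above has a standard closed form due to Koenderink; a minimal sketch, assuming the principal curvatures k1 >= k2 have already been estimated from the 3-D palmprint data:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures.
    arctan2 also handles the umbilic case k1 == k2: spherical caps
    map to +1, cups to -1, planar points to 0, saddles to 0."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
```

    Quantising this index per pixel yields the shape index image from which the LBP and Gabor features are then extracted.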

  12. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    PubMed

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-03-19

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image with respect to the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity and noisy expressions in reality, which is a critical problem but seldom addressed in existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
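
    The sparse-coding step can be illustrated with a generic sparse-representation classifier; a greedy orthogonal matching pursuit stands in for the l1 solver, and the dictionary A (training images as normalised columns) is an assumption for illustration:

```python
import numpy as np

def omp(A, y, n_nonzero=5):
    """Greedy orthogonal matching pursuit: a sparse code of y over
    the column-normalised dictionary A."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

def src_classify(A, labels, y):
    """Sparse-representation classification: keep each class's
    coefficients in turn and pick the class whose partial
    reconstruction leaves the smallest residual."""
    x = omp(A, y)
    labels = np.asarray(labels)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)
        err = np.linalg.norm(y - A @ xc)
        if err < best_err:
            best, best_err = int(c), err
    return best
```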

  13. Wavelet-Based Signal and Image Processing for Target Recognition

    NASA Astrophysics Data System (ADS)

    Sherlock, Barry G.

    2002-11-01

    The PI visited NSWC Dahlgren, VA, for six weeks in May-June 2002 and collaborated with scientists in the G33 TEAMS facility, and with Marilyn Rudzinsky of T44 Technology and Photonic Systems Branch. During this visit the PI also presented six educational seminars to NSWC scientists on various aspects of signal processing. Several items from the grant proposal were completed, including (1) wavelet-based algorithms for interpolation of 1-d signals and 2-d images; (2) Discrete Wavelet Transform domain based algorithms for filtering of image data; (3) wavelet-based smoothing of image sequence data originally obtained for the CRITTIR (Clutter Rejection Involving Temporal Techniques in the Infra-Red) project. The PI visited the University of Stellenbosch, South Africa to collaborate with colleagues Prof. B.M. Herbst and Prof. J. du Preez on the use of wavelet image processing in conjunction with pattern recognition techniques. The University of Stellenbosch has offered the PI partial funding to support a sabbatical visit in Fall 2003, the primary purpose of which is to enable the PI to develop and enhance his expertise in Pattern Recognition. During the first year, the grant supported publication of 3 refereed papers, presentation of 9 seminars, and an intensive two-day course on wavelet theory. The grant supported the work of two students who functioned as research assistants.

  14. Application of morphological associative memories and Fourier descriptors for classification of noisy subsurface signatures

    NASA Astrophysics Data System (ADS)

    Ortiz, Jorge L.; Parsiani, Hamed; Tolstoy, Leonid

    2004-02-01

    This paper presents a method for recognition of noisy subsurface images using morphological associative memories (MAM). MAM are a type of associative memory built on neural networks based on the algebraic system known as a semi-ring. The operations performed in this algebraic system are highly nonlinear, providing additional strength compared to other transformations, and they give MAM robust performance with noisy inputs. Two representations of morphological associative memories, called the M and W matrices, are used. The M associative memory provides robust association when input patterns are corrupted by dilative random noise, while the W associative matrix performs robust recognition of patterns corrupted by erosive random noise. The robust performance of MAM is combined with Fourier descriptors for the recognition of underground objects in Ground Penetrating Radar (GPR) images. Multiple 2-D GPR images of a site were made available by the NASA-SSC center. The buried objects in these images appear as hyperbolas, which result from radar backscatter off the artifacts or objects. The Fourier descriptors of the prototype hyperbola-like shapes and of non-hyperbola shapes in the subsurface images are used to make these shapes scale-, shift-, and rotation-invariant. Typical hyperbola-like and non-hyperbola shapes are used to compute the morphological associative memories. The trained MAMs are then used to process other noisy images to detect the presence of underground objects. The outputs from the MAM on noisy patterns may equal the training prototypes, providing positive identification of the artifacts. The results are images with recognized hyperbolas, which indicate the presence of buried artifacts. A MATLAB model has been developed and results are presented.
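
    The scale-, shift-, and rotation-invariant Fourier descriptors mentioned above follow a standard normalization; a minimal sketch for a closed contour:

```python
import numpy as np

def fourier_descriptors(contour, n_desc=8):
    """Scale-, shift- and rotation-invariant Fourier descriptors of
    a closed contour given as an (N, 2) array of (x, y) points."""
    z = contour[:, 0] + 1j * contour[:, 1]
    mags = np.abs(np.fft.fft(z))
    # drop the DC term (shift invariance), keep magnitudes only
    # (rotation / start-point invariance), divide by |F[1]| (scale)
    return mags[1:n_desc + 1] / mags[1]
```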

  15. Toward More Accurate Iris Recognition Using Cross-Spectral Matching.

    PubMed

    Nalla, Pattabhi Ramaiah; Kumar, Ajay

    2017-01-01

    Iris recognition systems are increasingly deployed for large-scale applications such as national ID programs, which continue to acquire millions of iris images to establish identity among billions of people. However, with the availability of a variety of iris sensors deployed for iris imaging under different illumination/environment conditions, significant performance degradation is expected when matching iris images acquired under two different domains (either sensor-specific or wavelength-specific). This paper develops a domain adaptation framework to address this problem and introduces a new algorithm using a Markov random field model to significantly improve cross-domain iris recognition. The proposed domain adaptation framework, based on naive Bayes nearest neighbor classification, uses a real-valued feature representation which is capable of learning domain knowledge. Our approach of estimating corresponding visible iris patterns from the synthesis of iris patches in near-infrared iris images achieves superior results for cross-spectral iris recognition. In addition, a new class of bi-spectral iris recognition system that can simultaneously acquire visible and near-infrared images with pixel-to-pixel correspondences is proposed and evaluated. This paper presents experimental results on three publicly available databases (the PolyU cross-spectral iris image database, IIITD CLI, and the UND database) and achieves superior results for cross-sensor and cross-spectral iris matching.

  16. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
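
    The "learned weighted sum of firing rates" linking hypothesis can be sketched generically as a ridge-regression linear readout; the synthetic rates below are illustrative, not the authors' data or pipeline:

```python
import numpy as np

def fit_readout(rates, targets, lam=1.0):
    """Fit a linear readout (one weight per neuron plus a bias) to
    trial-by-trial mean firing rates via ridge regression."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ targets)

def decode(rates, w):
    """Task decision variable: a simple weighted sum of rates plus bias."""
    return np.hstack([rates, np.ones((rates.shape[0], 1))]) @ w
```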

  17. Proceedings of the 1987 IEEE international conference on systems, man, and cybernetics. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-01-01

    This book contains the proceedings of the IEEE international conference on systems, man, and cybernetics. Topics include the following: robotics; knowledge base simulation; software systems; image and pattern recognition; neural networks; and image processing.

  18. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  19. Sign Language Recognition System using Neural Network for Digital Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Vargas, Lorena P.; Barba, Leiner; Torres, C. O.; Mattos, L.

    2011-01-01

    This work presents an image pattern recognition system that uses a neural network to identify sign language for deaf people. The system stores several images showing the specific symbols of this language, which are employed to train a multilayer neural network using a back-propagation algorithm. Initially, the images are processed to adapt them and to improve the discrimination performance of the network; this processing includes filtering, size reduction, and noise elimination algorithms, as well as edge detection. The system is evaluated using signs whose representation does not include movement.
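
    A minimal back-propagation training loop of the kind described can be sketched as follows; a toy XOR problem stands in for the binarised sign images, and the architecture and learning rate are illustrative choices:

```python
import numpy as np

rng = np.random.RandomState(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny stand-in for preprocessed sign images: an XOR-style toy problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.randn(2, 8) * 0.5, np.zeros(8)   # input -> hidden
W2, b2 = rng.randn(8, 1) * 0.5, np.zeros(1)   # hidden -> output

lr = 0.5
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)          # forward: hidden layer
    Y = sigmoid(H @ W2 + b2)          # forward: output layer
    dY = Y - T                        # cross-entropy output delta
    dH = (dY @ W2.T) * H * (1 - H)    # back-propagated hidden delta
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
```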

  20. SAR target recognition using behaviour library of different shapes in different incidence angles and polarisations

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas

    2018-05-01

    Target recognition is one of the most important issues in the interpretation of synthetic aperture radar (SAR) images. Modelling, analysis, and recognition of the effects of influential parameters in SAR can provide a better understanding of SAR imaging systems and therefore facilitate the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories of radar, radar platform, channel, imaging region, and processing section, each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the finally formed images. In this paper, for the first time, a behaviour library that includes the effects of polarisation, incidence angle, and shape of targets, as radar and imaging-region sub-parameters, on SAR images is extracted. This library shows that the pattern created by each of the cylindrical, conical, and cubic shapes is unique, and due to their unique properties these types of shapes can be recognised in SAR images. This capability is applied to data acquired with the Canadian RADARSAT-1 satellite.

  1. Method and apparatus for optical encoding with compressible imaging

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    2006-01-01

    The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.

  2. Gesture recognition by instantaneous surface EMG images.

    PubMed

    Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun

    2016-11-15

    Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features, because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns inside the instantaneous values of high-density sEMG enable gesture recognition to be performed merely with sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images, using a deep convolutional network as the classification scheme. Without any windowed features, the resultant recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and 99.0% using simple majority voting over 40 frames with a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses.
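
    The majority-voting fusion over successive frames is straightforward; a minimal sketch:

```python
from collections import Counter

def majority_vote(frame_labels):
    """Fuse per-frame gesture predictions by simple majority voting,
    as done over 40 successive sEMG image frames."""
    return Counter(frame_labels).most_common(1)[0][0]
```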

  3. Correlation pattern recognition: optimal parameters for quality standards control of chocolate marshmallow candy

    NASA Astrophysics Data System (ADS)

    Flores, Jorge L.; García-Torales, G.; Ponce Ávila, Cristina

    2006-08-01

    This paper describes an in situ image recognition system designed to inspect the quality standards of chocolate pops during their production. The essence of the recognition system is the localization of events (i.e., defects) in the input images that affect the quality standards of the pops. To this end, processing modules based on correlation filtering and image segmentation are employed with the objective of measuring the quality standards. We therefore designed the correlation filter and defined a set of features from the correlation plane. The desired values for these parameters are obtained by exploiting information about the objects to be rejected, in order to find the optimal discrimination capability of the system. Based on this set of features, each pop can be correctly classified. The efficacy of the system has been tested thoroughly under laboratory conditions using at least 50 images containing 3 different types of possible defects.
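
    The correlation-matching step can be illustrated with plain normalised cross-correlation, a generic stand-in for the paper's filter design:

```python
import numpy as np

def ncc_peak(image, template):
    """Exhaustive normalised cross-correlation: return the location
    and score of the best template match in the image."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * tnorm
            if denom == 0:
                continue  # skip flat windows
            score = float((w * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

    Thresholding the peak score (and features of the surrounding correlation plane) is what supports the accept/reject decision.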

  4. On the role of spatial phase and phase correlation in vision, illusion, and cognition

    PubMed Central

    Gladilin, Evgeny; Eils, Roland

    2015-01-01

    Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, not much is yet known about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we are concerned with analysis of the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted the development of phase-based mechanisms of neural image processing in the course of evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as to the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of “cognition by phase correlation.” PMID:25954190

  5. On the role of spatial phase and phase correlation in vision, illusion, and cognition.

    PubMed

    Gladilin, Evgeny; Eils, Roland

    2015-01-01

    Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, not much is yet known about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we are concerned with analysis of the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted the development of phase-based mechanisms of neural image processing in the course of evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as to the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of "cognition by phase correlation."
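
    Fourier phase correlation itself has a compact standard form; a minimal sketch that recovers the translation between two images from the normalised cross-power spectrum:

```python
import numpy as np

def phase_correlation(a, b):
    """Recover the integer shift d such that b == np.roll(a, d) from
    the phase of the normalised cross-power spectrum."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.array(np.unravel_index(np.argmax(np.abs(r)), r.shape),
                    dtype=float)
    # shifts beyond half the image size wrap around to negative values
    size = np.array(a.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]
    return peak
```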

  6. Short-Term Global Horizontal Irradiance Forecasting Based on Sky Imaging and Pattern Recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Feng, Cong; Cui, Mingjian

    Accurate short-term forecasting is crucial for solar integration in the power grid. In this paper, a classification forecasting framework based on pattern recognition is developed for 1-hour-ahead global horizontal irradiance (GHI) forecasting. Three sets of models in the forecasting framework are trained by the data partitioned from the preprocessing analysis. The first two sets of models forecast GHI for the first four daylight hours of each day. Then the GHI values in the remaining hours are forecasted by an optimal machine learning model determined based on a weather pattern classification model in the third model set. The weather pattern is determined by a support vector machine (SVM) classifier. The developed framework is validated by the GHI and sky imaging data from the National Renewable Energy Laboratory (NREL). Results show that the developed short-term forecasting framework outperforms the persistence benchmark by 16% in terms of the normalized mean absolute error and 25% in terms of the normalized root mean square error.
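
    The persistence benchmark and the normalized error metrics used for validation are standard; a sketch, assuming a fixed normalisation constant (studies differ on normalising by capacity, mean, or clear-sky GHI):

```python
import numpy as np

def persistence_forecast(ghi):
    """1-hour-ahead persistence baseline: forecast each hour's GHI
    as the previous hour's measurement."""
    return ghi[:-1]

def nmae(pred, actual, norm):
    """Normalized mean absolute error."""
    return np.mean(np.abs(pred - actual)) / norm

def nrmse(pred, actual, norm):
    """Normalized root mean square error."""
    return np.sqrt(np.mean((pred - actual) ** 2)) / norm
```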

  7. Classifier dependent feature preprocessing methods

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M., II; Peterson, Gilbert L.

    2008-04-01

    In mobile applications, computational complexity is an issue that limits sophisticated algorithms from being implemented on these devices. This paper provides an initial solution for applying pattern recognition systems on mobile devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to properly apply feature preprocessing tools prior to training classification models, in an attempt to reduce computational complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile environment are feature ranking, feature extraction, data preparation, and outlier removal. Most desktop systems today are capable of running a majority of the available classification algorithms without concern for processing time, while the same is not true on mobile platforms. As an application of pattern recognition for mobile devices, the recognition system targets the problem of steganalysis: determining whether an image contains hidden information. The measure of performance shows that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The methods in this paper are tested on a workstation and on a Nokia 6620 (Symbian operating system) camera phone, with similar results.

  8. Iris Matching Based on Personalized Weight Map.

    PubMed

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu

    2011-09-01

    Iris recognition typically involves three steps, namely, iris image preprocessing, feature extraction, and feature matching. The first two steps of iris recognition have been well studied, but the last step is less addressed. Each human iris has its unique visual pattern, and local image features also vary from region to region, which leads to significant differences in robustness and distinctiveness among the feature codes derived from different iris regions. However, most state-of-the-art iris recognition methods use a uniform matching strategy, where features extracted from different regions of the same person, or from the same region for different individuals, are considered to be equally important. This paper proposes a personalized iris matching strategy using a class-specific weight map learned from the training images of the same iris class. The weight map can be updated online during the iris recognition procedure, with successfully recognized iris images regarded as new training data. The weight map reflects the robustness of an encoding algorithm on different iris regions by assigning an appropriate weight to each feature code for iris matching. Such a weight map, trained from sufficient iris templates, is convergent and robust against various types of noise. Extensive and comprehensive experiments demonstrate that the proposed personalized iris matching strategy achieves much better iris recognition performance than uniform strategies, especially for poor-quality iris images.
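
    The per-code weighting idea can be sketched as a weighted Hamming distance over binary iris codes; in the paper the weight map is learned per class, while here it is simply given:

```python
import numpy as np

def weighted_hamming(code_a, code_b, weights, mask=None):
    """Weighted Hamming distance between binary iris codes: each
    disagreeing bit contributes its weight; an optional occlusion
    mask zeroes out unusable bits (eyelids, eyelashes)."""
    disagree = (code_a != code_b).astype(float)
    if mask is not None:
        weights = weights * mask
    return float(np.sum(weights * disagree) / np.sum(weights))
```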

  9. Finger vein recognition based on finger crease location

    NASA Astrophysics Data System (ADS)

    Lu, Zhiying; Ding, Shumeng; Yin, Jing

    2016-07-01

    Finger vein recognition technology has significant advantages over other methods in terms of accuracy, uniqueness, and stability, and it has wide promising applications in the field of biometric recognition. We propose using finger creases to locate and extract an object region. Then we use linear fitting to overcome the problem of finger rotation in the plane. The method of modular adaptive histogram equalization (MAHE) is presented to enhance image contrast and reduce computational cost. To extract the finger vein features, we use a fusion method, which can obtain clear and distinguishable vein patterns under different conditions. We used the Hausdorff average distance algorithm to examine the recognition performance of the system. The experimental results demonstrate that MAHE can better balance the recognition accuracy and the expenditure of time compared with three other methods. Our resulting equal error rate throughout the total procedure was 3.268% in a database of 153 finger vein images.
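
    The Hausdorff average distance used for matching is commonly computed as the average (modified) Hausdorff distance between point sets; a minimal sketch:

```python
import numpy as np

def avg_hausdorff(A, B):
    """Average (modified) Hausdorff distance between two 2-D point
    sets, e.g. pixels of extracted vein patterns: the larger of the
    two mean nearest-neighbour distances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```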

  10. Real-time image restoration for iris recognition systems.

    PubMed

    Kang, Byung Jun; Park, Kang Ryoung

    2007-12-01

    In the field of biometrics, it has been reported that iris recognition techniques have shown high levels of accuracy because unique patterns of the human iris, which has very many degrees of freedom, are used. However, because conventional iris cameras have small depth-of-field (DOF) areas, input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores; therefore, parameter estimation is more accurate than it has been in previous research; 3) because the PSF parameter can be obtained by using a predetermined equation, iris image restoration can be done in real-time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight value of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors when using the proposed restoration method were greatly reduced as compared to those results achieved without restoration or those achieved using previous iris-restoration methods.
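
    The constrained least squares (CLS) restoration filter has a standard frequency-domain form; a minimal sketch with a Laplacian smoothness constraint (the paper's PSF-parameter and noise-weight estimation are not reproduced here):

```python
import numpy as np

def cls_restore(blurred, psf, gamma=0.01):
    """Constrained least squares deconvolution: apply the filter
    H* / (|H|^2 + gamma |P|^2) in the frequency domain, where H is
    the blur transfer function and P a Laplacian smoothness term."""
    shape = blurred.shape
    H = np.fft.fft2(psf, shape)
    lap = np.zeros(shape)
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    P = np.fft.fft2(lap)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```

    Larger gamma suppresses noise amplification at the cost of sharpness, which is why the paper adapts the regularization weight to the amount of blurring.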

  11. Artificial intelligence tools for pattern recognition

    NASA Astrophysics Data System (ADS)

    Acevedo, Elena; Acevedo, Antonio; Felipe, Federico; Avilés, Pedro

    2017-06-01

    In this work, we present a system for pattern recognition that combines the power of genetic algorithms for solving problems with the efficiency of morphological associative memories. We use a set of 48 tire prints divided into 8 brands of tires. The images have dimensions of 200 x 200 pixels. We applied the Hough transform to obtain lines as the main features; the number of lines obtained is 449. The genetic algorithm reduces the number of features to ten suitable lines, which yield 100% recognition. Morphological associative memories were used as the evaluation function. The selection algorithms were tournament and roulette-wheel selection. For reproduction, we applied one-point, two-point, and uniform crossover.
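
    The tournament selection and one-point crossover operators mentioned above are standard; a minimal sketch:

```python
import random

def tournament_select(population, fitness, k=3, rng=None):
    """Tournament selection: sample k individuals at random and
    return the fittest of the sample."""
    rng = rng or random.Random()
    idx = rng.sample(range(len(population)), k)
    return population[max(idx, key=lambda i: fitness[i])]

def one_point_crossover(a, b, rng=None):
    """One-point crossover of two equal-length feature bit strings."""
    rng = rng or random.Random()
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]
```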

  12. Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.

    1985-01-01

    Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.

  13. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.

  14. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    NASA Astrophysics Data System (ADS)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image under different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined from the local variance estimate of the respective region, which represents its significance. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weighted sets of all the modules (sub-regions) of the face image. 
Experiments conducted on various popular face databases show promising performance of the proposed algorithm under varying lighting, expression, and partial occlusion conditions. Four databases were used for testing the performance of the proposed system: the Yale Face database, the Extended Yale Face database B, the Japanese Female Facial Expression database, and the CMU AMP Facial Expression database. The experimental results on all four databases show the effectiveness of the proposed system. Also, the computation cost is lower because of the simplified calculation steps. Research is progressing to investigate the effectiveness of the proposed face recognition method under pose-varying conditions as well. It is envisaged that a multilane approach of frameworks trained at different pose bins, together with an appropriate voting strategy, would lead to a good recognition rate in such situations.

  15. Implementation of age and gender recognition system for intelligent digital signage

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Heon; Sohn, Myoung-Kyu; Kim, Hyunduk

    2015-12-01

    Intelligent digital signage systems transmit customized advertising and information by analyzing users and customers, unlike existing systems that present advertising in broadcast form without regard to the type of customer. Development of intelligent digital signage systems is currently being pushed forward vigorously. In this study, we designed a system capable of analyzing the gender and age of customers from images obtained from a camera, although there are many different methods for analyzing customers. We conducted age and gender recognition experiments using a public database. The age/gender recognition experiments were performed through a histogram matching method, extracting local binary pattern (LBP) features after the facial area in the input image was normalized. The results showed that the gender recognition rate was as high as approximately 97% on average. Age recognition was conducted based on categorization into 5 age classes. Age recognition rates for women and men were about 67% and 68%, respectively, when conducted separately for each gender.
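
    The LBP-plus-histogram-matching pipeline described above can be sketched at the pixel level. Histogram intersection is used here as one common matching score; the exact matching rule, neighborhood radius, and normalization steps of the paper may differ.

```python
def lbp_image(img):
    """8-neighbour local binary pattern codes for the interior pixels
    of a grayscale image given as a list of lists."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                # Set the bit when the neighbour is at least as bright.
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

def lbp_histogram(img):
    """Normalized 256-bin histogram of LBP codes."""
    codes = lbp_image(img)
    hist = [0.0] * 256
    for c in codes:
        hist[c] += 1.0 / len(codes)
    return hist

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# A tiny stand-in for a normalized facial region.
face = [[10, 20, 30, 40], [20, 25, 30, 35], [30, 30, 40, 45], [35, 40, 45, 50]]
sim = histogram_intersection(lbp_histogram(face), lbp_histogram(face))
```

    Classification then amounts to comparing a probe histogram against per-class reference histograms and picking the best-matching class.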

  16. Feature extraction using gray-level co-occurrence matrix of wavelet coefficients and texture matching for batik motif recognition

    NASA Astrophysics Data System (ADS)

    Suciati, Nanik; Herumurti, Darlis; Wijaya, Arya Yudhi

    2017-02-01

    Batik is one of Indonesia's traditional cloths. A motif or pattern drawn on a piece of batik fabric has a specific name and philosophy. Although batik cloths are widely used in everyday life, only a few people understand their motifs and philosophy. This research is intended to develop a batik motif recognition system which can identify the motif of a batik image automatically. First, a batik image is decomposed into sub-images using the wavelet transform. Six texture descriptors, i.e., max probability, correlation, contrast, uniformity, homogeneity, and entropy, are extracted from the gray-level co-occurrence matrix of each sub-image. The texture features are then matched to the template features using the Canberra distance. The experiment is performed on a batik dataset consisting of 1088 batik images grouped into seven motifs. The best recognition rate, 92.1%, is achieved using feature extraction with 5-level wavelet decomposition and a 4-directional gray-level co-occurrence matrix.
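
    The GLCM descriptors and the Canberra matching step can be sketched as follows. This is a minimal sketch for a single offset on a tiny quantized image (the paper applies it to wavelet sub-images over four directions); the example image and gray-level count are illustrative assumptions.

```python
import math

def glcm(img, dy, dx, levels):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                m[img[y][x]][img[yy][xx]] += 1
                pairs += 1
    return [[v / pairs for v in row] for row in m]

def glcm_features(m):
    """Max probability, contrast, homogeneity, and entropy of a GLCM."""
    max_p = max(max(row) for row in m)
    contrast = sum(p * (i - j) ** 2 for i, row in enumerate(m) for j, p in enumerate(row))
    homogeneity = sum(p / (1 + abs(i - j)) for i, row in enumerate(m) for j, p in enumerate(row))
    entropy = -sum(p * math.log(p) for row in m for p in row if p > 0)
    return [max_p, contrast, homogeneity, entropy]

def canberra(u, v):
    """Canberra distance between two feature vectors."""
    return sum(abs(a - b) / (abs(a) + abs(b)) for a, b in zip(u, v) if abs(a) + abs(b) > 0)

img = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 2, 2, 2], [2, 2, 3, 3]]
feats = glcm_features(glcm(img, 0, 1, levels=4))  # horizontal offset
dist = canberra(feats, feats)
```

    Recognition compares the probe feature vector against each motif template and returns the motif with the smallest Canberra distance.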

  17. Mobile Diagnostics Based on Motion? A Close Look at Motility Patterns in the Schistosome Life Cycle

    PubMed Central

    Linder, Ewert; Varjo, Sami; Thors, Cecilia

    2016-01-01

    Imaging at high resolution and subsequent image analysis with modified mobile phones have the potential to solve problems related to microscopy-based diagnostics of parasitic infections in many endemic regions. Diagnostics using the computing power of “smartphones” is not restricted by limited expertise or by the limitations set by the visual perception of a microscopist. Thus, diagnostics currently almost exclusively dependent on recognition of morphological features of pathogenic organisms could be based on additional properties, such as motility characteristics recognizable by computer vision. Of special interest are infectious larval stages and “micro swimmers” of e.g., the schistosome life cycle, which infect the intermediate and definitive hosts, respectively. The ciliated miracidium emerges from the excreted egg upon its contact with water. This means that for diagnostics, recognition of a swimming miracidium is equivalent to recognition of an egg. The motility pattern of miracidia could be defined by computer vision and used as a diagnostic criterion. To develop motility pattern-based diagnostics of schistosomiasis using simple imaging devices, we analyzed Paramecium as a model for the schistosome miracidium. As a model for invasive nematodes, such as strongyloids and filaria, we examined a different type of motility in the apathogenic nematode Turbatrix, the “vinegar eel.” The results of motion time and frequency analysis suggest that target motility may be expressed as specific spectrograms serving as “diagnostic fingerprints.” PMID:27322330

  18. Universal in vivo Textural Model for Human Skin based on Optical Coherence Tomograms.

    PubMed

    Adabi, Saba; Hosseinzadeh, Matin; Noei, Shahryar; Conforto, Silvia; Daveluy, Steven; Clayton, Anne; Mehregan, Darius; Nasiriavanaki, Mohammadreza

    2017-12-20

    Currently, diagnosis of skin diseases is based primarily on the visual pattern recognition skills and expertise of the physician observing the lesion. Even though dermatologists are trained to recognize patterns of morphology, it is still a subjective visual assessment. Tools for automated pattern recognition can provide objective information to support clinical decision-making. Noninvasive skin imaging techniques provide complementary information to the clinician. In recent years, optical coherence tomography (OCT) has become a powerful skin imaging technique. According to specific functional needs, skin architecture varies across different parts of the body, as do the textural characteristics in OCT images. There is, therefore, a critical need to systematically analyze OCT images from different body sites, to identify their significant qualitative and quantitative differences. Sixty-three optical and textural features extracted from OCT images of healthy and diseased skin are analyzed and, in conjunction with decision-theoretic approaches, used to create computational models of the diseases. We demonstrate that these models provide objective information to the clinician to assist in the diagnosis of abnormalities of cutaneous microstructure, and hence, aid in the determination of treatment. Specifically, we demonstrate the performance of this methodology on differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) from healthy tissue.

  19. Accurate, fast, and secure biometric fingerprint recognition system utilizing sensor fusion of fingerprint patterns

    NASA Astrophysics Data System (ADS)

    El-Saba, Aed; Alsharif, Salim; Jagapathi, Rajendarreddy

    2011-04-01

    Fingerprint recognition is one of the first techniques used for automatically identifying people, and today it is still one of the most popular and effective biometric techniques. With this increase in fingerprint biometric use, issues related to accuracy, security, and processing time are major challenges facing fingerprint recognition systems. Previous work has shown that polarization enhancement/encoding of fingerprint patterns increases the accuracy and security of fingerprint systems without burdening the processing time. This is mainly because polarization enhancement/encoding is inherently a hardware process and does not add a detrimental time delay to the overall process. Unpolarized images, however, possess a high visual contrast and, when fused (without digital enhancement) properly with polarized ones, are shown to increase the recognition accuracy and security of the biometric system without any significant processing time delay.

  20. The Potential of Using Brain Images for Authentication

    PubMed Central

    Zhou, Zongtan; Shen, Hui; Hu, Dewen

    2014-01-01

    Biometric recognition (also known as biometrics) refers to the automated recognition of individuals based on their biological or behavioral traits. Examples of biometric traits include the fingerprint, palmprint, iris, and face. The brain is the most important and complex organ in the human body. Can it be used as a biometric trait? In this study, we analyze the uniqueness of the brain and try to use the brain for identity authentication. The proposed brain-based verification system operates in two stages: gray matter extraction and gray matter matching. A modified brain segmentation algorithm is implemented for extracting gray matter from an input brain image. Then, an alignment-based matching algorithm is developed for brain matching. Experimental results on two data sets show that the proposed brain recognition system meets the high accuracy requirement of identity authentication. Though acquisition of brain images is currently still time consuming and expensive, brain images are highly unique and have potential for authentication from a pattern recognition point of view. PMID:25126604

  1. The potential of using brain images for authentication.

    PubMed

    Chen, Fanglin; Zhou, Zongtan; Shen, Hui; Hu, Dewen

    2014-01-01

    Biometric recognition (also known as biometrics) refers to the automated recognition of individuals based on their biological or behavioral traits. Examples of biometric traits include the fingerprint, palmprint, iris, and face. The brain is the most important and complex organ in the human body. Can it be used as a biometric trait? In this study, we analyze the uniqueness of the brain and try to use the brain for identity authentication. The proposed brain-based verification system operates in two stages: gray matter extraction and gray matter matching. A modified brain segmentation algorithm is implemented for extracting gray matter from an input brain image. Then, an alignment-based matching algorithm is developed for brain matching. Experimental results on two data sets show that the proposed brain recognition system meets the high accuracy requirement of identity authentication. Though acquisition of brain images is currently still time consuming and expensive, brain images are highly unique and have potential for authentication from a pattern recognition point of view.

  2. Automated Coronal Loop Identification Using Digital Image Processing Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Jong K.; Gary, G. Allen; Newman, Timothy S.

    2003-01-01

    The results of a master's thesis project on computer algorithms for automatic identification of optically thin, 3-dimensional solar coronal loop centers from extreme ultraviolet and X-ray 2-dimensional images will be presented. These center splines are proxies of the associated magnetic field lines. The project addresses pattern recognition problems in which there are no unique shapes or edges and in which photon and detector noise heavily influence the images. The study explores extraction techniques using: (1) linear feature recognition of local patterns (related to the inertia-tensor concept), (2) parameter space via the Hough transform, and (3) topological adaptive contours (snakes) that constrain curvature and continuity, as possible candidates for digital loop detection schemes. We have developed synthesized images of the coronal loops to test the various loop identification algorithms. Since the topology of these solar features is dominated by the magnetic field structure, a first-order magnetic field approximation using multiple dipoles provides a priori information in the identification process. Results from both synthesized and solar images will be presented.

  3. Topological image texture analysis for quality assessment

    NASA Astrophysics Data System (ADS)

    Asaad, Aras T.; Rashid, Rasber Dh.; Jassim, Sabah A.

    2017-05-01

    Image quality is a major factor influencing pattern recognition accuracy and helps detect image tampering for forensics. We are concerned with investigating topological image texture analysis techniques to assess different types of degradation. We use the local binary pattern (LBP) as a texture feature descriptor. For any image, we construct simplicial complexes for selected groups of uniform LBP bins and calculate persistent homology invariants (e.g., the number of connected components). We investigated the image quality discriminating characteristics of these simplicial complexes by computing these models for a large dataset of face images affected by the presence of shadows as a result of variation in illumination conditions. Our tests demonstrate that, for specific uniform LBP patterns, the number of connected components not only distinguishes between different levels of shadow effects but also helps detect the affected regions.
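
    The number of connected components, the 0-dimensional invariant named above, can be computed with a simple union-find over the pixels selected by an LBP criterion. A minimal sketch, assuming 4-connectivity and hypothetical pixel sets (the paper's filtration and choice of uniform LBP bins are not reproduced):

```python
def connected_components(pixels):
    """Count 4-connected components of a set of (row, col) pixels,
    i.e. the 0-dimensional Betti number of the induced complex."""
    parent = {p: p for p in pixels}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    def union(p, q):
        parent[find(p)] = find(q)

    for (y, x) in pixels:
        # Only the right and down neighbours are needed to cover all edges.
        for q in ((y + 1, x), (y, x + 1)):
            if q in parent:
                union((y, x), q)
    return len({find(p) for p in pixels})

# Two separate blobs of pixels selected by some uniform-LBP criterion.
blob_a = {(0, 0), (0, 1), (1, 0)}
blob_b = {(5, 5), (5, 6)}
n = connected_components(blob_a | blob_b)
```

    Tracking how this count changes as the selection threshold varies gives the persistence information used to discriminate shadow levels.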

  4. Watershed identification of polygonal patterns in noisy SAR images.

    PubMed

    Moreels, Pierre; Smrekar, Suzanne E

    2003-01-01

    This paper describes a new approach to pattern recognition in synthetic aperture radar (SAR) images. A visual analysis of the images provided by NASA's Magellan mission to Venus has revealed a number of zones showing polygonal-shaped faults on the surface of the planet. The goal of the paper is to provide a method to automate the identification of such zones. The high level of noise in SAR images and its multiplicative nature make automated image analysis difficult and conventional edge detectors, like those based on gradient images, inefficient. We present a scheme based on an improved watershed algorithm and a two-scale analysis. The method extracts potential edges in the SAR image, analyzes the patterns obtained, and decides whether or not the image contains a "polygon area". This scheme can also be applied to other SAR or visual images, for instance in observation of Mars and Jupiter's satellite Europa.

  5. Generalization between canonical and non-canonical views in object recognition

    PubMed Central

    Ghose, Tandra; Liu, Zili

    2013-01-01

    Viewpoint generalization in object recognition is the process that allows recognition of a given 3D object from many different viewpoints despite variations in its 2D projections. We used the canonical view effects as a foundation to empirically test the validity of a major theory in object recognition, the view-approximation model (Poggio & Edelman, 1990). This model predicts that generalization should be better when an object is first seen from a non-canonical view and then a canonical view than when seen in the reversed order. We also manipulated object similarity to study the degree to which this view generalization was constrained by shape details and task instructions (object vs. image recognition). Old-new recognition performance for basic and subordinate level objects was measured in separate blocks. We found that for object recognition, view generalization between canonical and non-canonical views was comparable for basic level objects. For subordinate level objects, recognition performance was more accurate from non-canonical to canonical views than the other way around. When the task was changed from object recognition to image recognition, the pattern of the results reversed. Interestingly, participants responded “old” to “new” images of “old” objects with a substantially higher rate than to “new” objects, despite instructions to the contrary, thereby indicating involuntary view generalization. Our empirical findings are incompatible with the prediction of the view-approximation theory, and argue against the hypothesis that views are stored independently. PMID:23283692

  6. Main inherited neurodegenerative cerebellar ataxias, how to recognize them using magnetic resonance imaging?

    PubMed

    Heidelberg, Damien; Ronsin, Solene; Bonneville, Fabrice; Hannoun, Salem; Tilikete, Caroline; Cotton, François

    2018-06-16

    Ataxia is a neurodegenerative disease resulting from brainstem, cerebellar, and/or spinocerebellar tract impairments. Symptom onset could vary widely from childhood to late-adulthood. Autosomal cerebellar ataxias are considered as one of the most complex groups in neurogenetics. In addition to their genetic heterogeneity, there is an important phenotypic variability in the expression of cerebellar impairment, complicating the genetic mutation research. A pattern recognition approach using brain magnetic resonance imaging measures of atrophy, hyperintensities and iron-induced hypointensity of the dentate nuclei could be therefore helpful in guiding genetic research. This review will discuss a pattern recognition approach that, associated with the age at disease onset, and clinical manifestations, may help neuroradiologists differentiate the most frequent profiles of ataxia. Copyright © 2018. Published by Elsevier Masson SAS.

  7. Computer vision for microscopy diagnosis of malaria.

    PubMed

    Tek, F Boray; Dempster, Andrew G; Kale, Izzet

    2009-07-13

    This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.

  8. Investigating the anticipatory nature of pattern perception in sport.

    PubMed

    Gorman, Adam D; Abernethy, Bruce; Farrow, Damian

    2011-07-01

    The aim of the present study was to examine the anticipatory nature of pattern perception in sport by using static and moving basketball patterns across three different display types. Participants of differing skill levels were included in order to determine whether the effects would be moderated by the knowledge and experience of the observer in the same manner reported previously for simple images. The results from a pattern recognition task showed that both expert and recreational participants were more likely to anticipate the next likely state of a pattern when it was presented as a moving video, but only the experts appeared to have the depth of understanding required to elicit the same anticipatory encoding for patterns presented as schematic images. The results extend those reported in previous research and provide further evidence of an anticipatory encoding in pattern perception for images containing complex, interrelated patterns.

  9. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
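
    The "simple learned weighted sum" linking hypothesis amounts to a linear readout of population firing rates. A toy sketch with synthetic data follows; the neuron and trial counts are arbitrary, and the actual study fits recorded IT responses against measured human performance rather than a constructed target.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_neurons = 200, 50

# Synthetic "IT population" firing rates and a behavioral score that is,
# by construction, a noisy weighted sum of those rates.
rates = rng.poisson(lam=5.0, size=(n_trials, n_neurons)).astype(float)
true_w = rng.normal(size=n_neurons)
behavior = rates @ true_w + 0.1 * rng.normal(size=n_trials)

# Learn one weight per neuron by ordinary least squares.
w_hat, *_ = np.linalg.lstsq(rates, behavior, rcond=None)
predicted = rates @ w_hat
r = np.corrcoef(predicted, behavior)[0, 1]
```

    In the paper, the quality of such a readout is judged by how closely the predicted pattern across the 64 tests matches the human behavioral pattern.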

  10. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable, and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on the obtained 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on a fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto a finger surface. The fringe patterns are deformed by the finger surface and captured by a CCD camera from another viewpoint. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
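
    Fringe projection profilometry recovers depth from the phase of the fringes deformed by the surface. A minimal single-pixel sketch of N-step phase-shifting analysis, assuming fringes of the form I_k = A + B*cos(phi + 2*pi*k/N); the paper's optimum three-fringe-number phase unwrapping and system calibration are not reproduced here.

```python
import math

def recover_phase(intensities):
    """Recover the wrapped phase at one pixel from N phase-shifted
    fringe intensities I_k = A + B*cos(phi + 2*pi*k/N)."""
    n = len(intensities)
    num = -sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    den = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(num, den)

# Simulate one pixel deformed by the finger surface: true phase 1.2 rad,
# captured under four fringe patterns shifted by 90 degrees.
A, B, phi = 100.0, 50.0, 1.2
captured = [A + B * math.cos(phi + 2 * math.pi * k / 4) for k in range(4)]
phase = recover_phase(captured)
```

    Repeating this at every pixel yields a wrapped phase map, which unwrapping and calibration then convert to 3D shape data.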

  11. Face-selective regions show invariance to linear, but not to non-linear, changes in facial images.

    PubMed

    Baseler, Heidi A; Young, Andrew W; Jenkins, Rob; Mike Burton, A; Andrews, Timothy J

    2016-12-01

    Familiar face recognition is remarkably invariant across huge image differences, yet little is understood concerning how image-invariant recognition is achieved. To investigate the neural correlates of invariance, we localized the core face-responsive regions and then compared the pattern of fMR-adaptation to different stimulus transformations in each region to behavioural data demonstrating the impact of the same transformations on familiar face recognition. In Experiment 1, we compared linear transformations of size and aspect ratio to a non-linear transformation affecting only part of the face. We found that adaptation to facial identity in face-selective regions showed invariance to linear changes, but there was no invariance to non-linear changes. In Experiment 2, we measured the sensitivity to non-linear changes that fell within the normal range of variation across face images. We found no adaptation to facial identity for any of the non-linear changes in the image, including to faces that varied in different levels of caricature. These results show a compelling difference in the sensitivity to linear compared to non-linear image changes in face-selective regions of the human brain that is only partially consistent with their effect on behavioural judgements of identity. We conclude that while regions such as the FFA may well be involved in the recognition of face identity, they are more likely to contribute to some form of normalisation that underpins subsequent recognition than to form the neural substrate of recognition per se. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Gesture recognition by instantaneous surface EMG images

    PubMed Central

    Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun

    2016-01-01

    Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features, because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns inside the instantaneous values of high-density sEMG enable gesture recognition to be performed merely with sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images, using a classification scheme built on a deep convolutional network. Without any windowed features, the resultant recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and reached 99.0% using simple majority voting over 40 frames with a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses. PMID:27845347
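
    The majority-voting fusion over 40 frames mentioned above is straightforward: each frame is classified independently and the most frequent label wins. A minimal sketch with hypothetical per-frame labels standing in for the CNN outputs:

```python
from collections import Counter

def majority_vote(frame_predictions):
    """Fuse per-frame gesture labels by simple majority voting."""
    return Counter(frame_predictions).most_common(1)[0][0]

# Per-frame classifier output for 40 frames: mostly gesture 3,
# with a few misclassified frames.
frames = [3] * 33 + [1] * 4 + [7] * 3
label = majority_vote(frames)
```

    This is why 40-frame voting reaches 99.0% even though a single frame yields only 89.3%: occasional per-frame errors are outvoted.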

  13. Static hand gesture recognition from a video

    NASA Astrophysics Data System (ADS)

    Rokade, Rajeshree S.; Doye, Dharmpal

    2011-10-01

    A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning: "simultaneously combining hand shapes, orientation and movement of the hands". Sign languages commonly develop in deaf communities, which can include interpreters, friends, and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for the recognition of static hand gestures from a video, based on the Kohonen neural network. We propose an algorithm to separate out key frames containing correct gestures from a video sequence. We segment hand images from complex, non-uniform backgrounds. Features are extracted by applying the Kohonen network to the key frames, and recognition is then performed.

  14. FaceIt: face recognition from static and live video for law enforcement

    NASA Astrophysics Data System (ADS)

    Atick, Joseph J.; Griffin, Paul M.; Redlich, A. N.

    1997-01-01

    Recent advances in image and pattern recognition technology, especially face recognition, are leading to the development of a new generation of information systems of great value to the law enforcement community. With these systems it is now possible to pool and manage vast amounts of biometric intelligence, such as face and fingerprint records, and conduct computerized searches on them. We review one of the enabling technologies underlying these systems, the FaceIt face recognition engine, and discuss three applications that illustrate its benefits as a problem-solving technology and an efficient and cost-effective investigative tool.

  15. Blurred image recognition by Legendre moment invariants

    PubMed Central

    Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2010-01-01

    Processing blurred images is a key problem in many image applications. Existing methods for obtaining blur invariants, which are invariant with respect to centrally symmetric blur, are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for blurred images are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different levels of image noise. A comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than methods based on geometric or complex moments. PMID:19933003
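A compact discrete approximation of the orthogonal Legendre moment L_mn on which such descriptors are built can be sketched as follows. This is our own illustrative implementation of the moment itself; the paper's blur-invariant construction on top of these moments is not reproduced:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moment(img, m, n):
    """Discrete approximation of the Legendre moment L_mn of a 2-D
    grayscale array, with pixel coordinates rescaled to [-1, 1]."""
    h, w = img.shape
    x = np.linspace(-1.0, 1.0, w)
    y = np.linspace(-1.0, 1.0, h)
    Pm = legval(x, [0] * m + [1])          # P_m(x) at each column
    Pn = legval(y, [0] * n + [1])          # P_n(y) at each row
    dx, dy = 2.0 / (w - 1), 2.0 / (h - 1)
    norm = (2 * m + 1) * (2 * n + 1) / 4.0
    # double sum  norm * ∬ P_m(x) P_n(y) f(x, y) dx dy
    return norm * dy * dx * (Pn @ img @ Pm)

# sanity check: a uniform image has L_00 ≈ 1 and vanishing odd moments
img = np.ones((101, 101))
print(legendre_moment(img, 0, 0), legendre_moment(img, 1, 0))
```

Orthogonality of the Legendre basis is what lets the image be reconstructed from (and characterized by) a small set of these moments, unlike non-orthogonal geometric moments.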

  16. How color enhances visual memory for natural scenes.

    PubMed

    Spence, Ian; Wong, Patrick; Rusan, Maria; Rastegar, Naghmeh

    2006-01-01

    We offer a framework for understanding how color operates to improve visual memory for images of the natural environment, and we present an extensive data set that quantifies the contribution of color in the encoding and recognition phases. Using a continuous recognition task with colored and monochrome gray-scale images of natural scenes at short exposure durations, we found that color enhances recognition memory by conferring an advantage during encoding and by strengthening the encoding-specificity effect. Furthermore, because the pattern of performance was similar at all exposure durations, and because form and color are processed in different areas of cortex, the results imply that color must be bound as an integral part of the representation at the earliest stages of processing.

  17. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology

    PubMed Central

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767
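The select-45-features-then-SVM stage of the pipeline above can be sketched with scikit-learn. ReliefF is not available in scikit-learn, so the univariate `SelectKBest(f_classif)` filter stands in for it here, and the data are synthetic stand-ins for the 129 texture, color and shape features over four disease classes:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 129))        # 129 lesion features (synthetic)
y = rng.integers(0, 4, size=200)       # 4 alfalfa leaf disease classes
X[:, :5] += y[:, None] * 1.5           # make a few features informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=45),  # stand-in for ReliefF
                      SVC(kernel='rbf'))
model.fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 2))
```

Keeping the selector inside the pipeline ensures feature scores are computed only on the training fold, avoiding leakage into the reported test accuracy.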

  18. Identification of Alfalfa Leaf Diseases Using Image Recognition Technology.

    PubMed

    Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang

    2016-01-01

    Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. 
For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease.

  19. Medical image classification using spatial adjacent histogram based on adaptive local binary patterns.

    PubMed

    Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling

    2016-05-01

    Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image based on the local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships causes poor performance when capturing discriminative features for complex samples, such as medical images obtained by microscope. To address this problem, in this paper we propose a novel method to improve local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy to encode the micro-structures for image representation. An extensive set of evaluations is performed on four medical datasets, showing that the proposed method significantly improves on standard LBP and compares favorably with several other prevailing approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
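The fixed-radius LBP baseline that the adaptive-radius variant generalizes can be sketched in a few lines. This is the conventional radius-1, 8-neighbour encoding, not the paper's adaptive method; function and variable names are ours:

```python
import numpy as np

def lbp_3x3(img):
    """Standard 8-neighbour local binary pattern (radius 1): each pixel
    gets an 8-bit code, one bit per neighbour >= the center value."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # center pixels (border dropped)
    # neighbour offsets in clockwise order starting at top-left
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 8, 3]])
print(lbp_3x3(patch))  # → [[42]]
```

A histogram of these codes over an image (or over spatial cells, as in the paper's spatial adjacent histogram) then serves as the texture feature vector.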

  20. Dermoscopic patterns of Melanoma Metastases: inter-observer consistency and accuracy for metastases recognition

    PubMed Central

    Costa, J.; Ortiz-Ibañez, K.; Salerni, G.; Borges, V.; Carrera, C.; Puig, S.; Malvehy, J.

    2013-01-01

    Background: Cutaneous metastases of malignant melanoma (CMMM) can be confused with other skin lesions, and dermoscopy could be helpful in the differential diagnosis. Objective: To describe distinctive dermoscopic patterns that are reproducible and accurate in the identification of CMMM. Methods: A retrospective study of 146 dermoscopic images of CMMM from 42 patients attending a Melanoma Unit between 2002 and 2009 was performed. First, two investigators established six dermoscopic patterns for CMMM. The correlation of 73 dermoscopic images with their distinctive patterns was assessed by four independent dermatologists to evaluate the reproducibility of pattern identification. Finally, 163 dermoscopic images, including CMMM and non-metastatic lesions, were evaluated by the same four dermatologists to calculate the accuracy of the patterns in the recognition of CMMM. Results: Five CMMM dermoscopic patterns had good inter-observer agreement (blue nevus-like, nevus-like, angioma-like, vascular and unspecific). When CMMM were classified according to these patterns, correlation between the investigators and the four dermatologists ranged from κ = 0.56 to 0.7. A set of 71 CMMM, 16 angiomas, 22 blue nevi, 15 malignant melanomas, 11 seborrheic keratoses, 15 melanocytic nevi with globular pattern and 13 pink lesions with vascular pattern was evaluated according to the previously described CMMM dermoscopic patterns, yielding an overall sensitivity of 68% (range 54.9–76%) and a specificity of 81% (range 68.6–93.5%) for the diagnosis of CMMM. Conclusion: Five dermoscopic patterns of CMMM with good inter-observer agreement achieved high sensitivity and specificity in the diagnosis of metastasis, with accuracy varying according to the experience of the observer. PMID:23495915

  1. TU-C-17A-03: An Integrated Contour Evaluation Software Tool Using Supervised Pattern Recognition for Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H; Tan, J; Kavanaugh, J

    Purpose: Radiotherapy (RT) contours delineated either manually or semiautomatically require verification before clinical usage. Manual evaluation is very time consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using an object-oriented programming language, C#, and application programming interfaces, e.g. the visualization toolkit (VTK). The C# language served as the tool design basis. The Accord.Net scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while VTK was used to build and render 3-D mesh models from critical RT structures in real time with 360° visualization. Principal component analysis (PCA) was used for system self-updating of geometry variations of normal structures, based on physician-approved RT contours as a training dataset. The in-house supervised PCA-based contour recognition method was used for automatically evaluating contour normality/abnormality. The function for reporting the contour evaluation results was implemented using C# and the Windows Form Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several abilities were demonstrated: automatic assessment of RT contours, file loading/saving of various modality medical images and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported the 360° rendering of RT structures in a multi-slice view, which allows physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates the supervised learning framework with image processing and graphical visualization modules for RT contour verification. 
This tool has great potential for facilitating treatment planning with the assistance of an automatic contour evaluation module, avoiding unnecessary manual verification for physicians/dosimetrists. In addition, its nature as a compact and stand-alone tool allows for future extensibility to include additional functions for physicians' clinical needs.

  2. Visual Recognition Software for Binary Classification and Its Application to Spruce Pollen Identification

    PubMed Central

    Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.

    2016-01-01

    Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017

  3. Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals.

    PubMed

    Zhuang, Ning; Zeng, Ying; Yang, Kai; Zhang, Chi; Tong, Li; Yan, Bin

    2018-03-12

    Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods.

  4. Investigating Patterns for Self-Induced Emotion Recognition from EEG Signals

    PubMed Central

    Zeng, Ying; Yang, Kai; Tong, Li; Yan, Bin

    2018-01-01

    Most current approaches to emotion recognition are based on neural signals elicited by affective materials such as images, sounds and videos. However, the application of neural patterns in the recognition of self-induced emotions remains uninvestigated. In this study we inferred the patterns and neural signatures of self-induced emotions from electroencephalogram (EEG) signals. The EEG signals of 30 participants were recorded while they watched 18 Chinese movie clips which were intended to elicit six discrete emotions, including joy, neutrality, sadness, disgust, anger and fear. After watching each movie clip the participants were asked to self-induce emotions by recalling a specific scene from each movie. We analyzed the important features, electrode distribution and average neural patterns of different self-induced emotions. Results demonstrated that features related to high-frequency rhythm of EEG signals from electrodes distributed in the bilateral temporal, prefrontal and occipital lobes have outstanding performance in the discrimination of emotions. Moreover, the six discrete categories of self-induced emotion exhibit specific neural patterns and brain topography distributions. We achieved an average accuracy of 87.36% in the discrimination of positive from negative self-induced emotions and 54.52% in the classification of emotions into six discrete categories. Our research will help promote the development of comprehensive endogenous emotion recognition methods. PMID:29534515

  5. Pattern recognition for cache management in distributed medical imaging environments.

    PubMed

    Viana-Ferreira, Carlos; Ribeiro, Luís; Matos, Sérgio; Costa, Carlos

    2016-02-01

    Traditionally, medical imaging repositories have been supported by indoor infrastructures with huge operational costs. This paradigm is changing thanks to cloud outsourcing, which not only brings technological advantages but also facilitates inter-institutional workflows. However, communication latency is one of the main problems in this kind of approach, since we are dealing with tremendous volumes of data. To minimize the impact of this issue, caching and prefetching are commonly used. The effectiveness of these mechanisms is highly dependent on their capability of accurately selecting the objects that will be needed soon. This paper describes a pattern recognition system based on artificial neural networks with incremental learning to evaluate, from a set of usage patterns, which one fits the user's behavior at a given time. The accuracy of the pattern recognition model under distinct training conditions was also evaluated. The solution was tested with a real-world dataset and a synthesized dataset, showing that incremental learning is advantageous. Even with very immature initial models, trained with just 1 week of data samples, the overall accuracy was very similar to the value obtained when using 75% of the long-term data for training the models. Preliminary results demonstrate an effective reduction in communication latency when using the proposed solution to feed a prefetching mechanism. The proposed approach is very interesting for cache replacement and prefetching policies due to the good results obtained from the first deployment moments.

  6. Chemometrics.

    ERIC Educational Resources Information Center

    Delaney, Michael F.

    1984-01-01

    This literature review on chemometrics (covering December 1981 to December 1983) is organized under these headings: personal supermicrocomputers; education and books; statistics; modeling and parameter estimation; resolution; calibration; signal processing; image analysis; factor analysis; pattern recognition; optimization; artificial…

  7. Pollen Image Recognition Based on DGDB-LBP Descriptor

    NASA Astrophysics Data System (ADS)

    Han, L. P.; Xie, Y. H.

    2018-01-01

    In this paper, we propose DGDB-LBP, a local binary pattern descriptor based on pixel blocks in the dominant gradient direction. Differing from the traditional LBP and its variants, DGDB-LBP encodes by comparing the main gradient magnitude of each block rather than a single pixel value or the average of pixel blocks; in doing so, it reduces the influence of noise on pollen images and eliminates redundant and non-informative features. In order to fully describe the texture features of pollen images and analyze them at multiple scales, we propose a new sampling strategy, which uses three types of operators to extract radial, angular and multiple texture features at different scales. Considering that pollen images exhibit some degree of rotation under the microscope, we propose an adaptive encoding direction, which is determined by the texture distribution of the local region. Experimental results on the Pollenmonitor dataset show that the average correct recognition rate of our method is superior to that of other pollen recognition methods from recent years.

  8. Biochip microsystem for bioinformatics recognition and analysis

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng (Inventor); Fang, Wai-Chi (Inventor)

    2011-01-01

    A system with applications in pattern recognition, or classification, of DNA assay samples. Because DNA reference and sample material in wells of an assay may be caused to fluoresce depending upon dye added to the material, the resulting light may be imaged onto an embodiment comprising an array of photodetectors and an adaptive neural network, with applications to DNA analysis. Other embodiments are described and claimed.

  9. On the recognition of complex structures: Computer software using artificial intelligence applied to pattern recognition

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.

    1974-01-01

    An approach to simultaneous interpretation of objects in complex structures so as to maximize a combined utility function is presented. Results of the application of a computer software system to assign meaning to regions in a segmented image based on the principles described in this paper and on a special interactive sequential classification learning system, which is referenced, are demonstrated.

  10. Similarity analysis between quantum images

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-06-01

    Similarity analysis between quantum images is essential in quantum image processing, providing a foundation for other fields such as quantum image matching and quantum pattern recognition. In this paper, a quantum scheme based on a novel quantum image representation and the quantum amplitude amplification algorithm is proposed. At the end of the paper, three examples and simulation experiments show that the measurement result must be 0 when two images are the same, and the measurement result has a high probability of being 1 when two images are different.
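The measurement statistic underlying such schemes can be illustrated classically. The sketch below computes the plain swap-test outcome probability for amplitude-encoded images, where two identical images give outcome 1 with probability 0; it does not reproduce the paper's representation or its amplitude-amplification boost, and all names are ours:

```python
import numpy as np

def p_measure_one(img_a, img_b):
    """Swap-test outcome statistic for amplitude-encoded images:
    P(measure 1) = (1 - |<a|b>|^2) / 2, which is 0 for identical images
    and grows as the images become more dissimilar."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    a = a / np.linalg.norm(a)                 # amplitude normalization
    b = b / np.linalg.norm(b)
    return (1.0 - float(np.dot(a, b)) ** 2) / 2.0

same = np.ones((2, 2))
diff = np.array([[1.0, 0.0], [0.0, 1.0]])
print(p_measure_one(same, same))   # identical images → 0.0
print(p_measure_one(same, diff))
```

Amplitude amplification is what turns this bounded probability gap into a near-certain "1" outcome for dissimilar images in the quantum scheme.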

  11. Comparison of Object Recognition Behavior in Human and Monkey

    PubMed Central

    Rajalingham, Rishi; Schmidt, Kailyn

    2015-01-01

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks in a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, and data from 33 separate Mechanical Turk subjects were used to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. 
In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324

  12. Structural Pattern Recognition Techniques for Data Retrieval in Massive Fusion Databases

    NASA Astrophysics Data System (ADS)

    Vega, J.; Murari, A.; Rattá, G. A.; Castro, P.; Pereira, A.; Portas, A.

    2008-03-01

    Diagnostics of present day reactor class fusion experiments, like the Joint European Torus (JET), generate thousands of signals (time series and video images) in each discharge. There is a direct correspondence between the physical phenomena taking place in the plasma and the set of structural shapes (patterns) that they form in the signals: bumps, unexpected amplitude changes, abrupt peaks, periodic components, high intensity zones or specific edge contours. A major difficulty related to data analysis is the identification, in a rapid and automated way, of a set of discharges with comparable behavior, i.e. discharges with "similar" patterns. Pattern recognition techniques are efficient tools to search for similar structural forms within the database in a fast and intelligent way. To this end, classification systems must be developed to be used as indexing methods to directly fetch the most similar patterns.
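A toy illustration of the retrieval idea, ranking stored signals by structural similarity to a query, is sketched below using simple normalized correlation on synthetic waveforms. The actual JET indexation relies on dedicated pattern classifiers; the shot names and signals here are invented:

```python
import numpy as np

def similarity(a, b):
    """Normalized correlation between two equal-length signals in [-1, 1]."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.dot(a, b) / len(a))

t = np.linspace(0.0, 1.0, 500)
query = np.sin(2 * np.pi * 5 * t)                       # pattern to look up
database = {
    'shot_A': np.sin(2 * np.pi * 5 * t)
              + 0.1 * np.random.default_rng(0).normal(size=500),
    'shot_B': np.sin(2 * np.pi * 9 * t),                # different structure
}
ranked = sorted(database, key=lambda k: similarity(query, database[k]),
                reverse=True)
print(ranked[0])  # → shot_A
```

In a real archive, precomputed classifier-based indexes replace the linear scan so the most similar discharges can be fetched directly.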

  13. Folk Dance Pattern Recognition Over Depth Images Acquired via Kinect Sensor

    NASA Astrophysics Data System (ADS)

    Protopapadakis, E.; Grammatikopoulou, A.; Doulamis, A.; Grammalidis, N.

    2017-02-01

    The possibility of accurate recognition of folk dance patterns is investigated in this paper. System inputs are raw skeleton data, provided by a low-cost sensor. In particular, data were obtained by monitoring three professional dancers using a Kinect II sensor. A set of six traditional Greek dances (without their variations) constitutes the investigated data. A two-step process was adopted. First, the most descriptive skeleton data were selected using a combination of density-based and sparse modelling algorithms. Then, the representative data served as the training set for a variety of classifiers.

  14. Functional Magnetic Resonance Imaging of Cognitive Processing in Young Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Jacola, Lisa M.; Byars, Anna W.; Chalfonte-Evans, Melinda; Schmithorst, Vincent J.; Hickey, Fran; Patterson, Bonnie; Hotze, Stephanie; Vannest, Jennifer; Chiu, Chung-Yiu; Holland, Scott K.; Schapiro, Mark B.

    2011-01-01

    The authors used functional magnetic resonance imaging (fMRI) to investigate neural activation during a semantic-classification/object-recognition task in 13 persons with Down syndrome and 12 typically developing control participants (age range = 12-26 years). A comparison between groups suggested atypical patterns of brain activation for the…

  15. Age-related increases in false recognition: the role of perceptual and conceptual similarity.

    PubMed

    Pidgeon, Laura M; Morcom, Alexa M

    2014-01-01

    Older adults (OAs) are more likely to falsely recognize novel events than young adults, and recent behavioral and neuroimaging evidence points to a reduced ability to distinguish overlapping information due to decline in hippocampal pattern separation. However, other data suggest a critical role for semantic similarity. Koutstaal et al. [(2003) false recognition of abstract vs. common objects in older and younger adults: testing the semantic categorization account, J. Exp. Psychol. Learn. 29, 499-510] reported that OAs were only vulnerable to false recognition of items with pre-existing semantic representations. We replicated Koutstaal et al.'s (2003) second experiment and examined the influence of independently rated perceptual and conceptual similarity between stimuli and lures. At study, young and OAs judged the pleasantness of pictures of abstract (unfamiliar) and concrete (familiar) items, followed by a surprise recognition test including studied items, similar lures, and novel unrelated items. Experiment 1 used dichotomous "old/new" responses at test, while in Experiment 2 participants were also asked to judge lures as "similar," to increase explicit demands on pattern separation. In both experiments, OAs showed a greater increase in false recognition for concrete than abstract items relative to the young, replicating Koutstaal et al.'s (2003) findings. However, unlike in the earlier study, there was also an age-related increase in false recognition of abstract lures when multiple similar images had been studied. In line with pattern separation accounts of false recognition, OAs were more likely to misclassify concrete lures with high and moderate, but not low degrees of rated similarity to studied items. Results are consistent with the view that OAs are particularly susceptible to semantic interference in recognition memory, and with the possibility that this reflects age-related decline in pattern separation.

  16. Age-related increases in false recognition: the role of perceptual and conceptual similarity

    PubMed Central

    Pidgeon, Laura M.; Morcom, Alexa M.

    2014-01-01

    Older adults (OAs) are more likely to falsely recognize novel events than young adults, and recent behavioral and neuroimaging evidence points to a reduced ability to distinguish overlapping information due to decline in hippocampal pattern separation. However, other data suggest a critical role for semantic similarity. Koutstaal et al. [(2003) false recognition of abstract vs. common objects in older and younger adults: testing the semantic categorization account, J. Exp. Psychol. Learn. 29, 499–510] reported that OAs were only vulnerable to false recognition of items with pre-existing semantic representations. We replicated Koutstaal et al.’s (2003) second experiment and examined the influence of independently rated perceptual and conceptual similarity between stimuli and lures. At study, young and OAs judged the pleasantness of pictures of abstract (unfamiliar) and concrete (familiar) items, followed by a surprise recognition test including studied items, similar lures, and novel unrelated items. Experiment 1 used dichotomous “old/new” responses at test, while in Experiment 2 participants were also asked to judge lures as “similar,” to increase explicit demands on pattern separation. In both experiments, OAs showed a greater increase in false recognition for concrete than abstract items relative to the young, replicating Koutstaal et al.’s (2003) findings. However, unlike in the earlier study, there was also an age-related increase in false recognition of abstract lures when multiple similar images had been studied. In line with pattern separation accounts of false recognition, OAs were more likely to misclassify concrete lures with high and moderate, but not low degrees of rated similarity to studied items. Results are consistent with the view that OAs are particularly susceptible to semantic interference in recognition memory, and with the possibility that this reflects age-related decline in pattern separation. PMID:25368576

  17. [Brain Organization of the Preparation for Visual Recognition in Preadolescent Children].

    PubMed

    Farber, D A; Kurganskii, A V; Petrenko, N E

    2015-01-01

The brain organization of the process of preparation for the perception of incomplete images fragmented to different extents was studied. The functional connections of the ventrolateral and dorsolateral cortical zones with other zones in 10-11-year-old and 11-12-year-old children were examined at three successive stages of the preparation for the perception of incomplete images. These data were compared with those obtained for adults. In order to reveal the effect of preparatory processes on image recognition, we also analyzed the regional event-related potentials. In adults, the functional interaction between the dorsolateral and ventrolateral prefrontal cortex and other cortical zones of the right hemisphere was found to be enhanced at the stage of waiting for a not-yet-recognizable image, while in the left hemisphere the links became stronger shortly before the successful recognition of a stimulus. In children, the stage-related changes in functional interactions are similar in both hemispheres, with a peak of interaction activity at the stage preceding successful recognition. It was found that in 11-12-year-old children the ventrolateral cortex is involved in both the preparatory stage and the recognition process to a smaller extent than in adults and 10-11-year-old children. At the same time, the group of 11-12-year-old children had a more mature pattern of dorsolateral cortex involvement, which provided more effective recognition of incomplete images in this group as compared with 10-11-year-old children. It is suggested that the features of the brain organization of visual recognition and the preceding preparatory processes in 11-12-year-old children are caused by multidirectional effects of sex hormones on the functioning of different zones of the prefrontal cortex at early stages of sexual maturation.

  18. Higher-order neural network software for distortion invariant object recognition

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly

    1991-01-01

The state-of-the-art in pattern recognition for such applications as automatic target recognition and industrial robotic vision relies on digital image processing. We present a higher-order neural network model and software which performs the complete feature extraction-pattern classification paradigm required for automatic pattern recognition. Using a third-order neural network, we demonstrate complete, 100 percent accurate invariance to distortions of scale, position, and in-plane rotation. In a higher-order neural network, feature extraction is built into the network and does not have to be learned; only the relatively simple classification step must be learned. This is key to achieving very rapid training. The training set is much smaller than with standard neural network software because the higher-order network only has to be shown one view of each object to be learned, not every possible view. The software and graphical user interface run on any Sun workstation. Results of the use of the neural software in autonomous robotic vision systems are presented. Such a system could have extensive application in robotic manufacturing.
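The geometric intuition behind this invariance can be sketched in a few lines: a third-order network weights triples of pixels, and the interior angles of the triangle formed by each triple are unchanged by translation, scale, and in-plane rotation. The following is only a minimal pure-Python illustration of that invariant, not the authors' software:

```python
import math
from itertools import combinations

def triangle_angles(p1, p2, p3):
    """Interior angles of triangle p1-p2-p3, sorted ascending.
    Angles are invariant to translation, scale, and in-plane rotation."""
    def ang(a, b, c):
        # angle at vertex a
        v1 = (b[0]-a[0], b[1]-a[1]); v2 = (c[0]-a[0], c[1]-a[1])
        dot = v1[0]*v2[0] + v1[1]*v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot/n)))
    return sorted([ang(p1, p2, p3), ang(p2, p1, p3), ang(p3, p1, p2)])

def hon_features(points, bins=8):
    """Normalized histogram of smallest triangle angles over all pixel triples."""
    hist = [0]*bins
    for tri in combinations(points, 3):
        a = triangle_angles(*tri)[0]          # smallest angle, in [0, pi/3]
        hist[min(bins-1, int(a / (math.pi/3) * bins))] += 1
    total = sum(hist) or 1
    return [h/total for h in hist]

# the same shape, rotated 90 degrees, scaled by 2, and shifted,
# yields identical features
shape = [(0, 0), (4, 0), (0, 3), (4, 3), (2, 5)]
moved = [(2*y + 10, -2*x + 7) for (x, y) in shape]
assert all(abs(a - b) < 1e-9
           for a, b in zip(hon_features(shape), hon_features(moved)))
```
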

  19. Artificial Neural Networks for Processing Graphs with Application to Image Understanding: A Survey

    NASA Astrophysics Data System (ADS)

    Bianchini, Monica; Scarselli, Franco

In graphical pattern recognition, each pattern is represented as an arrangement of elements that encodes both the properties of each element and the relations among them. Hence, patterns are modelled as labelled graphs where, in general, labels can be attached to both nodes and edges. Artificial neural networks able to process graphs are a powerful tool for addressing a great variety of real-world problems, where the information is naturally organized in entities and relationships among entities; in fact, they have been widely used in computer vision, e.g., in logo recognition, in similarity retrieval, and for object detection. In this chapter, we propose a survey of neural network models able to process structured information, with a particular focus on those architectures tailored to address image understanding applications. Starting from the original recursive model (RNNs), we subsequently present different ways to represent images - by trees, forests of trees, multiresolution trees, directed acyclic graphs with labelled edges, and general graphs - and, correspondingly, neural network architectures appropriate to process such structures.

  20. Extended depth of field system for long distance iris acquisition

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Lin; Hsieh, Sheng-Hsun; Hung, Kuo-En; Yang, Shi-Wen; Li, Yung-Hui; Tien, Chung-Hao

    2012-10-01

Using biometric signatures for identity recognition has been practiced for centuries. Recently, iris recognition systems have attracted much attention due to their high accuracy and high stability. The texture of the iris provides a signature that is unique to each subject. Currently, most commercial iris recognition systems acquire images at distances of less than 50 cm, a serious constraint that needs to be broken if the technology is to be used for airport access or entrances that require a high turnover rate. In order to capture iris patterns from a distance, in this study we developed a telephoto imaging system combined with image processing techniques. By placing a cubic phase mask in front of the camera, the point spread function was kept constant over a wide range of defocus. With an adequate decoding filter, the blurred image was restored, and a working distance between the subject and the camera of over 3 m was achieved with a 500 mm focal length and an F/6.3 aperture. The simulation and experimental results validated the proposed scheme: the depth of focus of the iris camera was extended threefold over traditional optics while maintaining sufficient recognition accuracy.

  1. Apparatus and method for identification and recognition of an item with ultrasonic patterns from item subsurface micro-features

    DOEpatents

    Perkins, Richard W.; Fuller, James L.; Doctor, Steven R.; Good, Morris S.; Heasler, Patrick G.; Skorpik, James R.; Hansen, Norman H.

    1995-01-01

    The present invention is a means and method for identification and recognition of an item by ultrasonic imaging of material microfeatures and/or macrofeatures within the bulk volume of a material. The invention is based upon ultrasonic interrogation and imaging of material microfeatures within the body of material by accepting only reflected ultrasonic energy from a preselected plane or volume within the material. An initial interrogation produces an identification reference. Subsequent new scans are statistically compared to the identification reference for making a match/non-match decision.

  2. Achromatical Optical Correlator

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Liu, Hua-Kuang

    1989-01-01

Signal-to-noise ratio exceeds that of monochromatic correlator. Achromatic optical correlator uses multiple-pinhole diffraction of dispersed white light to form superposed multiple correlations of input and reference images in output plane. Set of matched spatial filters made by multiple-exposure holographic process, each exposure using suitably-scaled input image and suitable angle of reference beam. Recording-aperture mask translated to appropriate horizontal position for each exposure. Noncoherent illumination suitable for applications involving recognition of color and determination of scale. When fully developed, achromatic correlators will be useful for recognition of patterns; for example, in industrial inspection and in searches for selected features in aerial photographs.
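The correlator's matched-filter operation has a direct digital analogue: slide a reference pattern over the input and look for the correlation peak. The optical system computes this in parallel with light; the toy sketch below is only an illustration of the principle:

```python
def correlate2d(image, template):
    """Full cross-correlation of a small 2-D template over an image,
    returning a map from top-left offset to correlation score."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    out = {}
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            out[(i, j)] = sum(image[i+di][j+dj] * template[di][dj]
                              for di in range(h) for dj in range(w))
    return out

def best_match(image, template):
    """Location of the correlation peak."""
    scores = correlate2d(image, template)
    return max(scores, key=scores.get)

# embed the reference pattern at offset (3, 1) of a blank image
img = [[0]*6 for _ in range(6)]
pat = [[1, 2], [3, 4]]
for di in range(2):
    for dj in range(2):
        img[3+di][1+dj] = pat[di][dj]
assert best_match(img, pat) == (3, 1)
```
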

  3. Apparatus and method for identification and recognition of an item with ultrasonic patterns from item subsurface micro-features

    DOEpatents

    Perkins, R.W.; Fuller, J.L.; Doctor, S.R.; Good, M.S.; Heasler, P.G.; Skorpik, J.R.; Hansen, N.H.

    1995-09-26

    The present invention is a means and method for identification and recognition of an item by ultrasonic imaging of material microfeatures and/or macrofeatures within the bulk volume of a material. The invention is based upon ultrasonic interrogation and imaging of material microfeatures within the body of material by accepting only reflected ultrasonic energy from a preselected plane or volume within the material. An initial interrogation produces an identification reference. Subsequent new scans are statistically compared to the identification reference for making a match/non-match decision. 15 figs.

  4. Robust kernel representation with statistical local features for face recognition.

    PubMed

    Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David

    2013-06-01

    Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.
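For reference, the local binary pattern the abstract names as an effective statistical local feature can be computed in a few lines: each pixel is encoded by which of its eight neighbours are at least as bright, and a histogram of the codes forms the local descriptor. This is a minimal sketch only; the paper's SLF pipeline adds multipartition max pooling and a kernel representation on top of such features:

```python
def lbp_code(img, y, x):
    """8-neighbour local binary pattern code of pixel (y, x)."""
    c = img[y][x]
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    return sum((1 << k) for k, v in enumerate(nbrs) if v >= c)

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    hist = [0]*256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

# a uniform patch: every neighbour equals the centre, so every code is 255
flat = [[5]*4 for _ in range(4)]
assert lbp_histogram(flat)[255] == 4   # the 4 interior pixels of a 4x4 patch
```
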

  5. Handwritten-word spotting using biologically inspired features.

    PubMed

    van der Zant, Tijn; Schomaker, Lambert; Haak, Koen

    2008-11-01

For quick access to new handwritten collections, current handwriting recognition methods are too cumbersome. They cannot deal with the lack of labeled data and would require extensive laboratory training for each individual script, style, language and collection. We propose a biologically inspired whole-word recognition method which is used to incrementally elicit word labels in a live, web-based annotation system named Monk. Since human labor should be minimized given the massive amount of image data, it becomes important to rely on robust perceptual mechanisms in the machine. Recent computational models of the neurophysiology of vision are applied to isolated word classification. A primate cortex-like mechanism allows the classification of text-images that have a low frequency of occurrence. Typically, these images are the most difficult to retrieve; they often contain named entities and are regarded as the most important to people. Standard pattern-recognition technology usually cannot deal with these text-images if there are not enough labeled instances. The results of this retrieval system are compared to normalized word-image matching and appear to be very promising.

  6. Ordinal measures for iris recognition.

    PubMed

    Sun, Zhenan; Tan, Tieniu

    2009-12-01

    Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.
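The core of an ordinal measure is a single qualitative comparison: which of two image regions is brighter on average. The two-lobe sketch below illustrates why the code is largely invariant to illumination change; the paper's multilobe differential filters generalize this with flexible lobe locations, scales, orientations, and distances:

```python
def region_mean(img, y0, y1, x0, x1):
    """Average intensity of the rectangle [y0, y1) x [x0, x1)."""
    vals = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(vals) / len(vals)

def ordinal_bit(img, lobe_a, lobe_b):
    """1 if lobe A is brighter on average than lobe B, else 0 --
    the qualitative 'which is brighter' relation, not its magnitude."""
    return 1 if region_mean(img, *lobe_a) > region_mean(img, *lobe_b) else 0

img = [[10, 12, 3, 1],
       [11, 13, 2, 4]]
bit = ordinal_bit(img, (0, 2, 0, 2), (0, 2, 2, 4))   # left lobe vs right lobe
assert bit == 1

# a global illumination shift leaves the ordinal code unchanged
bright = [[v + 50 for v in row] for row in img]
assert ordinal_bit(bright, (0, 2, 0, 2), (0, 2, 2, 4)) == bit
```
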

  7. Uyghur face recognition method combining 2DDCT with POEM

    NASA Astrophysics Data System (ADS)

    Yi, Lihamu; Ya, Ermaimaiti

    2017-11-01

In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination variation and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into 8×8 block matrices, and the images after block processing were converted into the frequency domain using the 2DDCT; secondly, the images were compressed to exclude the non-sensitive medium-frequency and non-high-frequency parts, reducing the feature dimensions needed for the images and further reducing the amount of computation; thirdly, the corresponding POEM histograms of the images were obtained by calculating the POEM feature quantities; fourthly, the POEM histograms were concatenated as the texture histogram of the central feature point to obtain the texture features of the Uyghur face feature points; finally, classification of the training samples was carried out using a deep learning algorithm. The simulation experiment results showed that the proposed algorithm further improved the recognition rate on the self-built Uyghur face database, greatly improved the computing speed, and had strong robustness.
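The 2DDCT step described above is the standard two-dimensional DCT-II applied per 8×8 block; energy compacts into the low-frequency corner, which is why higher-frequency coefficients can be discarded to reduce dimensionality. A naive reference implementation of the transform (illustrative only, not the paper's code):

```python
import math

def dct2(block):
    """Naive orthonormal 2-D DCT-II of an N x N block."""
    N = len(block)
    def alpha(k):
        return math.sqrt(1.0/N) if k == 0 else math.sqrt(2.0/N)
    out = []
    for u in range(N):
        row = []
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2*x + 1) * u * math.pi / (2*N))
                    * math.cos((2*y + 1) * v * math.pi / (2*N))
                    for x in range(N) for y in range(N))
            row.append(alpha(u) * alpha(v) * s)
        out.append(row)
    return out

block = [[4]*8 for _ in range(8)]          # constant 8x8 block
coeffs = dct2(block)
assert abs(coeffs[0][0] - 32.0) < 1e-9     # DC term = N * mean = 8 * 4
assert all(abs(coeffs[u][v]) < 1e-9        # all AC energy is zero
           for u in range(8) for v in range(8) if (u, v) != (0, 0))
```
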

  8. Leukocyte Recognition Using EM-Algorithm

    NASA Astrophysics Data System (ADS)

    Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.

This document describes a method for classifying images of blood cells. Three different classes of cells are used: band neutrophils, eosinophils, and lymphocytes. The image pattern is projected down to a lower-dimensional subspace using PCA; the probability density function for each class is modeled with a Gaussian mixture fitted using the EM algorithm. A new cell image is classified using the maximum a posteriori decision rule.
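The classification step can be illustrated with a deliberately simplified one-dimensional version: fit a class-conditional Gaussian per class (a single-component stand-in for the paper's EM-fitted mixtures) to the PCA projections, then apply the maximum a posteriori rule. The projection values below are made up for illustration:

```python
import math

def fit_gaussian(xs):
    """Maximum-likelihood mean and variance for one class."""
    m = sum(xs) / len(xs)
    v = sum((x - m)**2 for x in xs) / len(xs)
    return m, v

def log_gauss(x, m, v):
    """Log density of N(m, v) at x."""
    return -0.5 * math.log(2 * math.pi * v) - (x - m)**2 / (2 * v)

def map_classify(x, models, priors):
    """Maximum a posteriori decision rule over per-class Gaussians."""
    return max(models, key=lambda c: log_gauss(x, *models[c]) + math.log(priors[c]))

# toy 1-D "PCA projections" of two cell classes (hypothetical values)
lympho = [1.0, 1.2, 0.9, 1.1]
eosino = [3.0, 3.2, 2.9, 3.1]
models = {"lymphocyte": fit_gaussian(lympho), "eosinophil": fit_gaussian(eosino)}
priors = {"lymphocyte": 0.5, "eosinophil": 0.5}
assert map_classify(1.05, models, priors) == "lymphocyte"
assert map_classify(3.05, models, priors) == "eosinophil"
```
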

  9. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    PubMed Central

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results OA were less accurate than YA at identifying fear (p<.05, r=.44) and more accurate at identifying disgust (p<.05, r=.39). OA fixated less than YA on the top half of the face for disgust, fearful, happy, neutral, and sad faces (p’s<.05, r’s≥.38), whereas there was no group difference for landscapes. For OA, executive function was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  10. Optical pattern recognition algorithms on neural-logic equivalent models and demonstration of their prospects and possible implementations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.

    2001-03-01

Historical information regarding the origin and foundations of the algebra-logical apparatus of 'equivalental algebra' - a unification of neural network (NN) theory, linear algebra, and generalized neurobiology, extended to the matrix case - for describing neural network paradigms and algorithms is considered. A survey of 'equivalental models' of neural networks and associative memory is given, and new, modified matrix-tensor neuro-logic equivalental models (MTNLEMs) with double adaptive-equivalental weighting (DAEW) are offered for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that MTNLEMs with DAEW are the most general: they can describe processes in NNs both within the frames of known paradigms and within a new 'equivalental' paradigm of the non-interaction type, and the computing process in NNs under the offered MTNLEMs reduces to two-step and multi-step algorithms, step-by-step matrix-tensor procedures (for SNIR), and procedures for computing space-dependent equivalental functions from two images (for SIR).

  11. Enhancing the discrimination accuracy between metastases, gliomas and meningiomas on brain MRI by volumetric textural features and ensemble pattern recognition methods.

    PubMed

    Georgiadis, Pantelis; Cavouras, Dionisis; Kalatzis, Ioannis; Glotsos, Dimitris; Athanasiadis, Emmanouil; Kostopoulos, Spiros; Sifaki, Koralia; Malamas, Menelaos; Nikiforidis, George; Solomou, Ekaterini

    2009-01-01

    Three-dimensional (3D) texture analysis of volumetric brain magnetic resonance (MR) images has been identified as an important indicator for discriminating among different brain pathologies. The purpose of this study was to evaluate the efficiency of 3D textural features using a pattern recognition system in the task of discriminating benign, malignant and metastatic brain tissues on T1 postcontrast MR imaging (MRI) series. The dataset consisted of 67 brain MRI series obtained from patients with verified and untreated intracranial tumors. The pattern recognition system was designed as an ensemble classification scheme employing a support vector machine classifier, specially modified in order to integrate the least squares features transformation logic in its kernel function. The latter, in conjunction with using 3D textural features, enabled boosting up the performance of the system in discriminating metastatic, malignant and benign brain tumors with 77.14%, 89.19% and 93.33% accuracy, respectively. The method was evaluated using an external cross-validation process; thus, results might be considered indicative of the generalization performance of the system to "unseen" cases. The proposed system might be used as an assisting tool for brain tumor characterization on volumetric MRI series.

  12. Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1981-04-01

ETL-0258, "Knowledge-Based Image Analysis," by George C. Stockman, Barbara A. Lambird, David Lavine, and Laveen N. Kanal. Keywords: extraction, verification, region classification, pattern recognition, image analysis.

  13. Infrared Spectroscopic Imaging for Prostate Pathology Practice

    DTIC Science & Technology

    2011-04-01

    features – geometric properties of epithelial cells/nuclei and lumens – that are quantified based on H&E stained images as well as FT-IR images of...the samples. By restricting the features used to geometric measures, we sought to mimic the pattern recognition process employed by human experts, and...relatively dark and can be modeled as small elliptical areas in the stained images. This geometrical model is often confounded as multiple nuclei can be

  14. Short memory fuzzy fusion image recognition schema employing spatial and Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Raptis, Sotiris N.; Tzafestas, Spyros G.

    2001-03-01

Single images quite often do not bear enough information for precise interpretation, for a variety of reasons. Fusion and adequate integration of multiple images have recently become the state of the art in the pattern recognition field. In the paper presented here, an enhanced multiple-observation schema is discussed, investigating improvements to the baseline fuzzy-probabilistic image fusion methodology. The first innovation consists in considering only a limited but seemingly more effective part of the uncertainty information obtained by a certain time, restricting older uncertainty dependencies and alleviating the computational burden, which is now needed only for a short sequence of samples stored in memory. The second innovation consists in grouping observations into feature-blind object hypotheses. The experimental setting includes a sequence of independent views obtained by a camera being moved around the investigated object.

  15. Syntactic methods of shape feature description and its application in analysis of medical images

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Tadeusiewicz, Ryszard

    2000-02-01

The paper presents specialist algorithms for the morphologic analysis of the shapes of selected organs of the abdominal cavity, proposed in order to diagnose disease symptoms occurring in the main pancreatic ducts and the upper segments of the ureters. Analysis of the correct morphology of these structures has been conducted with the use of syntactic methods of pattern recognition. Its main objective is computer-aided support for the early diagnosis of neoplastic lesions and pancreatitis, based on images taken in the course of examination with the endoscopic retrograde cholangiopancreatography (ERCP) method, and the diagnosis of morphological lesions in the ureter based on kidney radiogram analysis. In the analysis of ERCP images, the main objective is to recognize morphological lesions in the pancreatic ducts characteristic of carcinoma and chronic pancreatitis. In the case of kidney radiogram analysis, the aim is to diagnose local irregularity of the ureter lumen. Diagnosis of the above-mentioned lesions has been conducted with the use of syntactic methods of pattern recognition, in particular languages of shape-feature description and context-free attributed grammars. These methods allow efficient recognition and description of the aforementioned lesions in images obtained as a result of initial image processing into diagrams of the widths of the examined structures.
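The syntactic idea can be sketched with a deliberately simplified stand-in: encode the width diagram of a duct as a string of symbols (widening, narrowing, stable) and match a pattern describing a local narrowing. The paper itself uses context-free attributed grammars, which are strictly more expressive than the regular expression used here:

```python
import re

def width_symbols(widths, tol=0.0):
    """Encode a width diagram as a string over {u, d, s}:
    u = widening, d = narrowing, s = stable."""
    out = []
    for a, b in zip(widths, widths[1:]):
        out.append("u" if b > a + tol else "d" if b < a - tol else "s")
    return "".join(out)

# a 'local narrowing' is narrowing followed by recovery: d+ s* u+
LOCAL_NARROWING = re.compile(r"d+s*u+")

ureter = [5, 5, 4, 3, 3, 4, 5, 5]          # lumen widths along the duct
sym = width_symbols(ureter)
assert sym == "sddsuus"
assert LOCAL_NARROWING.search(sym) is not None

healthy = [5, 5, 5, 5, 5]                  # no narrowing anywhere
assert LOCAL_NARROWING.search(width_symbols(healthy)) is None
```
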

  16. Recognizing Age-Separated Face Images: Humans and Machines

    PubMed Central

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200

  17. Recognizing age-separated face images: humans and machines.

    PubMed

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components--facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario.

  18. Start-ups Bring AI to Pathology.

    PubMed

    2018-04-01

    New startups are developing pattern-recognition algorithms that could one day help pathologists more accurately spot tumors on digitized tissue images, thereby aiding in diagnosis, treatment, drug discovery, and more. ©2018 American Association for Cancer Research.

  19. Verification Image of The Veins on The Back Palm with Modified Local Line Binary Pattern (MLLBP) and Histogram

    NASA Astrophysics Data System (ADS)

    Prijono, Agus; Darmawan Hangkawidjaja, Aan; Ratnadewi; Saleh Ahmar, Ansari

    2018-01-01

Verification methods in use today, such as fingerprints, signatures, and personal identification numbers (PINs) in banking systems, identity cards, and attendance systems, are easily copied and forged. This makes such systems insecure and vulnerable to access by unauthorized persons. In this research, a verification system using images of the blood vessels on the back of the palm is implemented; this form of recognition is more difficult to imitate because the vessels are located inside the human body, so it is safer to use. The blood-vessel pattern on the back of the human hand is unique; even twins have different blood-vessel images. Moreover, the blood-vessel image does not depend on a person's age, so it can be used for the long term, except in cases of accident or disease. Because the vein pattern is unique, it can be used to recognize a person. In this paper, we use a modified method, Modified Local Line Binary Pattern (MLLBP), to recognize a person based on the blood-vessel image. Matching of the extracted blood-vessel image features is performed using Hamming distance. One verification test case computes the percentage of correct acceptances of the same person; a rejection error occurs if a person is not matched by the system with his or her own data. For 10 persons, with 15 probe images compared against 5 enrolled vein images per person, 80.67% were verified successfully. Another test case verifies images from different persons, i.e., forgeries; verification is correct if the system rejects the forged image. Forgeries from ten different persons were correctly rejected with 94% accuracy.
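The matching step reduces to comparing binary feature codes by Hamming distance against a small enrolled set. A minimal sketch with hypothetical 8-bit codes and an illustrative threshold (real MLLBP codes are much longer):

```python
def hamming_distance(a, b):
    """Number of differing bits between two equal-length binary codes."""
    return sum(x != y for x, y in zip(a, b))

def verify(probe, enrolled, threshold):
    """Accept if the probe code is close enough to any enrolled code."""
    return min(hamming_distance(probe, e) for e in enrolled) <= threshold

enrolled = ["10110100", "10110101"]   # codes from one person's vein images
genuine  = "10110110"                 # same person, 1 bit off the first code
forgery  = "01001011"                 # a different person's code
assert verify(genuine, enrolled, threshold=2)
assert not verify(forgery, enrolled, threshold=2)
```
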

  20. Improving Face Verification in Photo Albums by Combining Facial Recognition and Metadata With Cross-Matching

    DTIC Science & Technology

    2017-12-01

...satisfactory performance. We do not use statistical models, and we do not create patterns that require supervised learning. Our methodology is intended for use in personal digital image...

  1. Hand biometric recognition based on fused hand geometry and vascular patterns.

    PubMed

    Park, GiTae; Kim, Soowon

    2013-02-28

A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, a direction-based vascular-pattern extraction method was used. Thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our multimodal approach fuses the hand-geometry (side view and back of the hand) and vascular-pattern recognition results at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%.
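Score-level fusion of the kind described can be sketched as min-max normalization of each matcher's raw score followed by a weighted sum. The weights, score ranges, and decision threshold below are hypothetical illustrations, not the paper's values:

```python
def minmax_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse(geometry_score, vein_score, w_geometry=0.5):
    """Weighted-sum score-level fusion of two normalized matcher scores."""
    return w_geometry * geometry_score + (1 - w_geometry) * vein_score

g = minmax_normalize(72, lo=0, hi=100)    # hand-geometry match score
v = minmax_normalize(0.9, lo=0, hi=1)     # vascular-pattern match score
fused = fuse(g, v, w_geometry=0.4)
assert abs(fused - (0.4*0.72 + 0.6*0.9)) < 1e-12
accept = fused >= 0.7                      # hypothetical decision threshold
assert accept
```
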

  2. Hand Biometric Recognition Based on Fused Hand Geometry and Vascular Patterns

    PubMed Central

    Park, GiTae; Kim, Soowon

    2013-01-01

A hand biometric authentication method based on measurements of the user's hand geometry and vascular pattern is proposed. To acquire the hand geometry, the thickness of the side view of the hand, the K-curvature with a hand-shaped chain code, the lengths and angles of the finger valleys, and the lengths and profiles of the fingers were used; for the vascular pattern, a direction-based vascular-pattern extraction method was used. Thus, a new multimodal biometric approach is proposed. The proposed multimodal biometric system uses only one image to extract the feature points, so it can be configured for low-cost devices. Our multimodal approach fuses the hand-geometry (side view and back of the hand) and vascular-pattern recognition results at the score level. The results of our study showed that the equal error rate of the proposed system was 0.06%. PMID:23449119

  3. Survey of Technologies for the Airport Border of the Future

    DTIC Science & Technology

    2014-04-01

    Indexing terms recovered from the report include: hand geometry, handwriting recognition, ID cards, image classification, image enhancement, image fusion, image matching, image processing, image segmentation, iris recognition, tongue print, footstep recognition, odour recognition, retinal recognition, emotion recognition, periocular recognition, ear recognition, palmprint recognition, DNA matching, and vein matching.

  4. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors

    PubMed Central

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-01-01

    In order to enhance robustness and accelerate star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the spatial geometry of the observed stars is used to form a one-dimensional vector pattern for each observed star. This pattern remains unchanged when the stellar image rotates, so the star identification problem reduces to a comparison of two feature vectors, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern allow the recognition result to be obtained quickly. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification; moreover, its recognition accuracy and robustness are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms. PMID:26198233

  5. An Autonomous Star Identification Algorithm Based on One-Dimensional Vector Pattern for Star Sensors.

    PubMed

    Luo, Liyan; Xu, Luping; Zhang, Hua

    2015-07-07

    In order to enhance robustness and accelerate star identification, an autonomous star identification algorithm for star sensors is proposed based on the one-dimensional vector pattern (one_DVP). In the proposed algorithm, the spatial geometry of the observed stars is used to form a one-dimensional vector pattern for each observed star. This pattern remains unchanged when the stellar image rotates, so the star identification problem reduces to a comparison of two feature vectors, which makes it possible to identify the observed stars robustly. The characteristics of the feature vector and the proposed search strategy for the matching pattern allow the recognition result to be obtained quickly. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification; moreover, its recognition accuracy and robustness are better than those of the pyramid algorithm, the modified grid algorithm, and the LPT algorithm. The theoretical analysis and experimental results show that the proposed algorithm outperforms the other three star identification algorithms.
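
    The rotation-invariance argument can be made concrete with a small stand-in: sorted star-to-neighbor distances form one simple rotation-invariant 1-D vector. The paper's one_DVP construction is more elaborate; this sketch only illustrates the matching idea, and the catalog names are hypothetical:

```python
import math

def vector_pattern(center, neighbors):
    """Rotation-invariant 1-D pattern: sorted distances from the reference
    star to its neighbor stars (a simplified stand-in for one_DVP)."""
    return sorted(math.dist(center, p) for p in neighbors)

def match(pattern, catalog, tol=1e-3):
    """Identify the catalog star whose stored pattern best matches `pattern`,
    or return None when no stored pattern is within tolerance."""
    def dist(a, b):
        return max(abs(x - y) for x, y in zip(a, b))
    best = min(catalog, key=lambda star_id: dist(pattern, catalog[star_id]))
    return best if dist(pattern, catalog[best]) < tol else None
```

    Because inter-star distances are unchanged by image rotation, the same observed star yields the same pattern at any roll angle, which is exactly what reduces identification to a comparison of feature vectors.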

  6. Symposium on Machine Processing of Remotely Sensed Data, Purdue University, West Lafayette, Ind., June 29-July 1, 1976, Proceedings

    NASA Technical Reports Server (NTRS)

    1976-01-01

    Papers are presented on the applicability of Landsat data to water management and control needs, IBIS, a geographic information system based on digital image processing and image raster datatype, and the Image Data Access Method (IDAM) for the Earth Resources Interactive Processing System. Attention is also given to the Prototype Classification and Mensuration System (PROCAMS) applied to agricultural data, the use of Landsat for water quality monitoring in North Carolina, and the analysis of geophysical remote sensing data using multivariate pattern recognition. The Illinois crop-acreage estimation experiment, the Pacific Northwest Resources Inventory Demonstration, and the effects of spatial misregistration on multispectral recognition are also considered. Individual items are announced in this issue.

  7. Pattern recognition tool based on complex network-based approach

    NASA Astrophysics Data System (ADS)

    Casanova, Dalcimar; Backes, André Ricardo; Martinez Bruno, Odemir

    2013-02-01

    This work generalizes the authors' earlier method, 'A complex network-based approach for boundary shape analysis'. Instead of modelling a contour as a graph and characterizing it with complex-network rules, the technique is generalized into a mathematical tool for characterizing signals, curves, and sets of points. To evaluate the pattern description power of the proposal, an experiment on plant identification based on leaf-vein images was conducted. Leaf venation is a taxonomic characteristic used for plant identification, but vein structures are complex and difficult to represent as signals or curves, and hence difficult to analyze with a classical pattern recognition approach. Here, we model the veins as a set of points represented as graphs, and use the degree and joint-degree measurements over a dynamic evolution as features. The results demonstrate that the technique has good discriminative power and can be used for plant identification, as well as for other complex pattern recognition tasks.

  8. Breast Cancer Diagnostics Based on Spatial Genome Organization

    DTIC Science & Technology

    2012-07-01

    using an already established imaging tool, called NMFA-FLO (Nuclei Manual and FISH Automatic). In order to achieve accurate segmentation of nuclei...in tissue, we used an artificial neural network (ANN)-based supervised pattern recognition approach to screen out well-segmented nuclei, after image ... segmentation used to process images for automated nuclear segmentation. Part a) has been adapted from [15] and b) from [16]. Figure 4. Comparison of

  9. Robust image region descriptor using local derivative ordinal binary pattern

    NASA Astrophysics Data System (ADS)

    Shang, Jun; Chen, Chuanbo; Pei, Xiaobing; Liang, Hu; Tang, He; Sarem, Mudar

    2015-05-01

    Binary image descriptors have received a lot of attention in recent years, since they provide numerous advantages, such as a low memory footprint and an efficient matching strategy. However, they rely on intermediate representations and are generally less discriminative than floating-point descriptors. We propose an image region descriptor, the local derivative ordinal binary pattern, for object recognition and image categorization. In order to preserve more local contrast and edge information, we adaptively quantize the intensity differences between the central pixels and their neighbors within the detected local affine covariant regions. These differences are then sorted, mapped into binary codes, and histogrammed, weighted by the sum of the absolute differences. Furthermore, the gray level of the central pixel is quantized to further improve the discriminative ability. Finally, we combine them into a joint histogram that represents the features of the image. Our descriptor preserves more local brightness and edge information than traditional binary descriptors, and is robust to rotation, illumination variations, and other geometric transformations. We conduct extensive experiments on the standard ETHZ and Kentucky datasets for object recognition and on PASCAL for image classification. The experimental results show that our descriptor outperforms existing state-of-the-art methods.
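
    The sort-then-binarize step can be sketched as follows. This is one plausible reading of the abstract's construction, not the authors' exact quantization or mapping:

```python
def ordinal_code(patch):
    """Illustrative ordinal binary code for a 3x3 patch (an interpretation of
    the abstract, not the paper's exact mapping): rank the eight
    center-neighbor differences, set bit i when difference i falls in the top
    half of the ranking, and weight by the sum of absolute differences."""
    c = patch[1][1]
    diffs = [patch[r][col] - c
             for r in range(3) for col in range(3) if (r, col) != (1, 1)]
    order = sorted(range(8), key=lambda i: diffs[i])   # ordinal ranking
    code = 0
    for rank, i in enumerate(order):
        if rank >= 4:                                  # top-half ranks -> bit set
            code |= 1 << i
    weight = sum(abs(d) for d in diffs)                # histogram weight
    return code, weight
```

    Adding a constant brightness to every pixel leaves both the code and the weight unchanged, which is the kind of local-contrast robustness such descriptors aim for.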

  10. Pattern Recognition Of Blood Vessel Networks In Ocular Fundus Images

    NASA Astrophysics Data System (ADS)

    Akita, K.; Kuga, H.

    1982-11-01

    We propose a computer method for recognizing blood vessel networks in color ocular fundus images, which are used in the mass diagnosis of adult diseases such as hypertension and diabetes. A line detection algorithm is applied to extract the blood vessels, and their skeleton patterns are constructed to analyze and describe their structures. The recognition of line segments of arteries and/or veins in the vessel networks consists of three stages. First, a few segments which satisfy a certain constraint are picked up and discriminated as arteries or veins; this is the initial labeling. Then, the remaining unknown segments are labeled by utilizing physical-level knowledge; we propose two schemes for this stage: a deterministic labeling and a probabilistic relaxation labeling. Finally, the label of each line segment is checked so as to minimize the total number of labeling contradictions. Some experimental results are also presented.

  11. Pattern recognition of concrete surface cracks and defects using integrated image processing algorithms

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Hortinela, Carlos C.; Garcia, Ramon G.; Baylon, Sunnycille; Ignacio, Alexander Joshua; Rivera, Marco Antonio; Sebastian, Jaimie

    2017-06-01

    Pattern recognition of concrete surface crack defects is very important in determining the stability of structures such as buildings, roads, and bridges. Surface cracking is one of the subjects of inspection, diagnosis, and maintenance, as well as life prediction, for the safety of structures. Traditionally, defects and cracks on concrete surfaces are determined manually by inspection; moreover, any internal defects in the concrete would require destructive testing for detection. The researchers created an automated surface crack detection system for concrete using image processing techniques including the Hough transform, LoG weighting, dilation, grayscale conversion, Canny edge detection, and the Haar wavelet transform. An automatic surface crack detection robot is designed to capture the concrete surface by a sectoring method. Surface crack classification was done with a Haar-trained cascade object detector that uses both positive and negative samples, which showed that it is possible to effectively identify surface crack defects.

  12. Holography and optical information processing; Proceedings of the Soviet-Chinese Joint Seminar, Bishkek, Kyrgyzstan, Sept. 21-26, 1991

    NASA Astrophysics Data System (ADS)

    Mikaelian, Andrei L.

    Attention is given to data storage, devices, architectures, and implementations of optical memory and neural networks; holographic optical elements and computer-generated holograms; holographic display and materials; systems, pattern recognition, interferometry, and applications in optical information processing; and special measurements and devices. Topics discussed include optical immersion as a new way to increase information recording density, systems for data reading from optical disks on the basis of diffractive lenses, a new real-time optical associative memory system, an optical pattern recognition system based on a WTA model of neural networks, phase diffraction grating for the integral transforms of coherent light fields, holographic recording with operated sensitivity and stability in chalcogenide glass layers, a compact optical logic processor, a hybrid optical system for computing invariant moments of images, optical fiber holographic inteferometry, and image transmission through random media in single pass via optical phase conjugation.

  13. Satellite classification and segmentation using non-additive entropy

    NASA Astrophysics Data System (ADS)

    Assirati, Lucas; Souto Martinez, Alexandre; Martinez Bruno, Odemir

    2014-03-01

    Here we compare the Boltzmann-Gibbs-Shannon (standard) entropy with the Tsallis entropy for the pattern recognition and segmentation of colored satellite images obtained via Google Earth. By segmentation we mean partitioning an image to locate regions of interest. Here, we discriminate and define image partition classes according to a training basis consisting of three pattern classes: aquatic, urban, and vegetation regions. Our numerical experiments demonstrate that the Tsallis entropy, used as a feature vector composed of distinct entropic indexes q, outperforms the standard entropy. There are several applications of the proposed methodology, since satellite images can be used to monitor migration from rural to urban regions, agricultural activities, oil spreading on the ocean, etc.
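
    The entropies being compared are simple to state: for a normalized histogram p, the Tsallis entropy is S_q = (1 - Σ p_i^q)/(q - 1), which recovers the Boltzmann-Gibbs-Shannon entropy as q → 1, and a feature vector is formed by evaluating S_q at several indexes q. The specific q values below are illustrative, not the paper's:

```python
import math

def shannon(p):
    """Boltzmann-Gibbs-Shannon entropy of a normalized histogram p (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1); tends to Shannon as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return shannon(p)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def feature_vector(p, qs=(0.5, 1.0, 2.0)):
    """Entropic feature vector over distinct indexes q, as the abstract describes."""
    return [tsallis(p, q) for q in qs]
```

    For a uniform 4-bin histogram, S_2 = 0.75 and S_0.5 = 2, while the Shannon entropy is ln 4 ≈ 1.386; distinct q values weight rare and common bins differently, which is why a vector of them is more discriminative than a single entropy.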

  14. Basic research planning in mathematical pattern recognition and image analysis

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization, computer architectures and parallel processing, and the applicability of "expert systems" to interactive analysis.

  15. Trends in Correlation-Based Pattern Recognition and Tracking in Forward-Looking Infrared Imagery

    PubMed Central

    Alam, Mohammad S.; Bhuiyan, Sharif M. A.

    2014-01-01

    In this paper, we review the recent trends and advancements on correlation-based pattern recognition and tracking in forward-looking infrared (FLIR) imagery. In particular, we discuss matched filter-based correlation techniques for target detection and tracking which are widely used for various real time applications. We analyze and present test results involving recently reported matched filters such as the maximum average correlation height (MACH) filter and its variants, and distance classifier correlation filter (DCCF) and its variants. Test results are presented for both single/multiple target detection and tracking using various real-life FLIR image sequences. PMID:25061840

  16. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis is proposed. The system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is introduced. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip were designed, fabricated, and characterized.

  17. Pattern recognition of satellite cloud imagery for improved weather prediction

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.

    1986-01-01

    The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study which demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.

  18. SVD compression for magnetic resonance fingerprinting in the time domain.

    PubMed

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

    Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions, simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method of obtaining the quantitative images is desirable. We propose to compress the dictionary using the singular value decomposition, which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of 3.4 to 4.8 without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
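
    The compression-then-matching idea can be sketched with numpy on a synthetic low-rank dictionary. The sizes, the kept rank, and the random dictionary are illustrative assumptions; a real MRF dictionary comes from Bloch simulation, but is likewise approximately low rank, which is what makes the truncation accurate:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, rank = 200, 40, 20              # time points, dictionary entries, kept rank

# Synthetic low-rank dictionary with unit-norm columns (stand-in for
# Bloch-simulated signal evolutions, one per column).
D = rng.standard_normal((T, rank)) @ rng.standard_normal((rank, N))
D /= np.linalg.norm(D, axis=0)

U, s, Vt = np.linalg.svd(D, full_matrices=False)
Ur = U[:, :rank]                      # time-domain compression basis
Dr = Ur.T @ D                         # compressed dictionary: rank x N, not T x N

observed = D[:, 17] + 0.01 * rng.standard_normal(T)   # noisy copy of entry 17
scores = np.abs(Dr.T @ (Ur.T @ observed))             # inner products, small space
best = int(np.argmax(scores))                         # pattern-recognition step
```

    Matching now costs rank × N multiplications per signal instead of T × N, which is the source of the reported speed-up; because the dictionary is low rank, the compressed inner products agree with the full ones.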

  19. Compact holographic optical neural network system for real-time pattern recognition

    NASA Astrophysics Data System (ADS)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market, but the number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-, shift-, and scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  20. SVD Compression for Magnetic Resonance Fingerprinting in the Time Domain

    PubMed Central

    McGivney, Debra F.; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A.

    2016-01-01

    Magnetic resonance fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions, simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method of obtaining the quantitative images is desirable. We propose to compress the dictionary using the singular value decomposition (SVD), which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of 3.4 to 4.8 without sacrificing the high signal-to-noise ratio of the original scheme presented previously. PMID:25029380

  1. MToS: A Tree of Shapes for Multivariate Images.

    PubMed

    Carlinet, Edwin; Géraud, Thierry

    2015-12-01

    The topographic map of a gray-level image, also called the tree of shapes, provides a high-level hierarchical representation of the image contents. This representation, invariant to contrast changes and to contrast inversion, has proved very useful for many image processing and pattern recognition tasks. Its definition relies on a total ordering of pixel values, so the representation does not exist for color images or, more generally, multivariate images. Common workarounds, such as marginal processing or imposing a total order on the data, are not satisfactory and raise many problems. This paper presents a method to build a tree-based representation of multivariate images that marginally features the same properties as the gray-level tree of shapes. Briefly put, we do not impose an arbitrary ordering on values; we rely only on the inclusion relationship between shapes in the image definition domain. The interest of having a contrast-invariant and self-dual representation of multivariate images is illustrated through several applications (filtering, segmentation, and object recognition) on different types of data: color natural images, document images, satellite hyperspectral imaging, multimodal medical imaging, and videos.

  2. Traffic sign recognition based on a context-aware scale-invariant feature transform approach

    NASA Astrophysics Data System (ADS)

    Yuan, Xue; Hao, Xiaoli; Chen, Houjin; Wei, Xueye

    2013-10-01

    A new context-aware scale-invariant feature transform (CASIFT) approach is proposed, which is designed for the use in traffic sign recognition (TSR) systems. The following issues remain in previous works in which SIFT is used for matching or recognition: (1) SIFT is unable to provide color information; (2) SIFT only focuses on local features while ignoring the distribution of global shapes; (3) the template with the maximum number of matching points selected as the final result is instable, especially for images with simple patterns; and (4) SIFT is liable to result in errors when different images share the same local features. In order to resolve these problems, a new CASIFT approach is proposed. The contributions of the work are as follows: (1) color angular patterns are used to provide the color distinguishing information; (2) a CASIFT which effectively combines local and global information is proposed; and (3) a method for computing the similarity between two images is proposed, which focuses on the distribution of the matching points, rather than using the traditional SIFT approach of selecting the template with maximum number of matching points as the final result. The proposed approach is particularly effective in dealing with traffic signs which have rich colors and varied global shape distribution. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying even for images containing traffic signs that have been rotated, damaged, altered in color, have undergone affine transformations, or images which were photographed under different weather or illumination conditions.

  3. Fusion of Multiple Sensing Modalities for Machine Vision

    DTIC Science & Technology

    1994-05-31

    Modeling of Non-Homogeneous 3-D Objects for Thermal and Visual Image Synthesis," Pattern Recognition, in press. [11] Nair, Dinesh, and J. K. Aggarwal...20th AIPR Workshop: Computer Vision--Meeting the Challenges, McLean, Virginia, October 1991. Nair, Dinesh, and J. K. Aggarwal, "An Object Recognition...Computer Engineering August 1992 Sunil Gupta Ph.D. Student Mohan Kumar M.S. Student Sandeep Kumar M.S. Student Xavier Lebegue Ph.D., Computer

  4. Movement cues aid face recognition in developmental prosopagnosia.

    PubMed

    Bennetts, Rachel J; Butcher, Natalie; Lander, Karen; Udale, Robert; Bate, Sarah

    2015-11-01

    Seeing a face in motion can improve face recognition in the general population, and studies of face matching indicate that people with face recognition difficulties (developmental prosopagnosia; DP) may be able to use movement cues as a supplementary strategy to help them process faces. However, the use of facial movement cues in DP has not been examined in the context of familiar face recognition. This study examined whether people with DP were better at recognizing famous faces presented in motion, compared to static. Nine participants with DP and 14 age-matched controls completed a famous face recognition task. Each face was presented twice across 2 blocks: once in motion and once as a still image. Discriminability (A) was calculated for each block. Participants with DP showed a significant movement advantage overall. This was driven by a movement advantage in the first block, but not in the second block. Participants with DP were significantly worse than controls at identifying faces from static images, but there was no difference between those with DP and controls for moving images. Seeing a familiar face in motion can improve face recognition in people with DP, at least in some circumstances. The mechanisms behind this effect are unclear, but these results suggest that some people with DP are able to learn and recognize patterns of facial motion, and that movement can act as a useful cue when face recognition is impaired.
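
    The discriminability measure A reported above is a nonparametric signal-detection statistic computed from hit and false-alarm rates. One common estimator, A', is shown here as an illustrative stand-in, since the record does not give the exact formula the authors used:

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric discriminability A' (Pollack & Norman variant).
    Illustrative stand-in for the 'A' statistic mentioned in the record."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Symmetric form for the below-chance case (h < f).
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

    A' is 0.5 at chance and approaches 1.0 for perfect discrimination, so a movement advantage appears as a higher A' in the moving-face block than in the static block.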

  5. Modeling of biologically motivated self-learning equivalent-convolutional recurrent-multilayer neural structures (BLM_SL_EC_RMNS) for image fragments clustering and recognition

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2018-03-01

    Biologically motivated self-learning equivalence-convolutional recurrent multilayer neural structures (BLM_SL_EC_RMNS) for the clustering and recognition of image fragments are discussed. We consider these neural structures and their spatial-invariant equivalence models (SIEMs), which are based on proposed equivalent two-dimensional image-similarity functions and the corresponding matrix-matrix (or tensor) procedures that use continuous-logic operations and nonlinear processing as basic operations. These SIEMs can describe the signal processing during all training and recognition stages and are suitable for unipolar-coded multilevel signals. The clustering efficiency in such models, and their implementation, depend on the discriminant properties of the neural elements of the hidden layers; therefore, the main model and architecture parameters and characteristics depend on the applied types of nonlinear processing and on the function used for image comparison or for adaptive-equivalent weighting of input patterns. We show that these SL_EC_RMNSs have several advantages, such as self-learning and self-identification of features and similarity signs of fragments, and the ability to cluster and recognize image fragments efficiently even under strong mutual correlation. The proposed clustering method, which combines learning and recognition and takes the structural features of fragments into account, is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted clustered matrix patterns. Its model is constructed on the basis of recursive continuous-logic and nonlinear processing algorithms, the k-means method, and the winner-takes-all (WTA) rule. The experimental results confirmed that fragments with large numbers of elements can be clustered, and for the first time the possibility of generalizing these models to the space-invariant case is shown.
    Experiments on reference images and fragments of different dimensions, carried out in the Mathcad software environment, showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and is promising. Understanding the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes among neurons, and the principles of neural auto-encoding, decoding, and recognition using self-learned cluster patterns is thus very important; the method uses algorithms and principles of nonlinear processing of two-dimensional spatial image-comparison functions. The experimental results show that such models can be successfully used for auto- and hetero-associative recognition, and can help explain some mechanisms known as the "reinforcement-inhibition concept". We also demonstrate real model experiments confirming that nonlinear processing by the equivalent function allows the neuron-winners to be determined and the weight matrix to be tuned. Finally, we show how the obtained results can be used to propose a new, more efficient hardware architecture for SL_EC_RMNS based on matrix-tensor multipliers, and we estimate the parameters and performance of such architectures.

  6. Pattern recognition and image processing for environmental monitoring

    NASA Astrophysics Data System (ADS)

    Siddiqui, Khalid J.; Eastwood, DeLyle

    1999-12-01

    Pattern recognition (PR) and signal/image processing methods are among the most powerful tools currently available for noninvasively examining spectroscopic and other chemical data for environmental monitoring. Using spectral data, these systems have found a variety of applications in analytical techniques for chemometrics, such as gas chromatography and fluorescence spectroscopy. An advantage of PR approaches is that they make no a priori assumption regarding the structure of the patterns; however, most of these systems rely on human judgment for parameter selection and classification. A PR problem can be viewed as a composite of four subproblems: pattern acquisition, feature extraction, feature selection, and pattern classification. One of the basic issues in PR approaches is to determine and measure the features useful for successful classification. Selecting the features that carry the most discriminatory information is important because the cost of pattern classification is directly related to the number of features used in the decision rules. The state of spectral techniques as applied to environmental monitoring is reviewed, and a spectral pattern classification system combining the above components with automatic decision-theoretic classification is developed. It is shown how such a system can be used for the analysis, warehousing, and interpretation of large data sets. In a preliminary test, the classifier was used to classify synchronous UV-vis fluorescence spectra of relatively similar petroleum oils with reasonable success.
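
    The four-subproblem decomposition can be made concrete with a toy decision-theoretic pipeline. The separation score and the nearest-mean classifier below are generic textbook choices standing in for the system's actual methods, and the data are mocked as already-extracted feature vectors:

```python
def select_features(X, y, k):
    """Feature selection: rank features by between-class mean separation
    and keep the top k (a simple illustrative criterion)."""
    classes = sorted(set(y))
    def separation(j):
        means = [sum(x[j] for x, c in zip(X, y) if c == cls) /
                 sum(1 for c in y if c == cls) for cls in classes]
        return max(means) - min(means)
    return sorted(range(len(X[0])), key=separation, reverse=True)[:k]

def nearest_mean(X, y, feats):
    """Pattern classification: return a classifier assigning the class
    whose centroid (over the selected features) is closest."""
    classes = sorted(set(y))
    cents = {cls: [sum(x[j] for x, c in zip(X, y) if c == cls) /
                   sum(1 for c in y if c == cls) for j in feats]
             for cls in classes}
    def h(x):
        return min(classes, key=lambda cls:
                   sum((x[j] - m) ** 2 for j, m in zip(feats, cents[cls])))
    return h
```

    Here feature selection keeps only the dimensions whose class means are farthest apart, directly reflecting the point that classification cost scales with the number of features retained by the decision rule.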

  7. Salient man-made structure detection in infrared images

    NASA Astrophysics Data System (ADS)

    Li, Dong-jie; Zhou, Fu-gen; Jin, Ting

    2013-09-01

    Target detection, segmentation and recognition are currently hot research topics in the field of image processing and pattern recognition, and salient area or object detection is one of the core technologies of precision-guided weapons. Building on the classical feature integration theory and Itti's visual attention system, we detect salient objects in a series of input infrared images. In order to find the salient object in an image accurately, we present a new method that solves the edge blur problem by calculating and applying an edge mask. We also greatly improve the computing speed by improving the center-surround differences method: unlike the traditional algorithm, we calculate the center-surround differences through rows and columns separately. Experimental results show that our method is effective in detecting salient objects accurately and rapidly.
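The row/column idea mentioned in the abstract can be sketched with a separable box filter: averaging first along rows and then along columns gives the 2-D surround mean in two 1-D passes, at O(r) instead of O(r^2) cost per pixel. This is an assumed illustration, not the authors' code:

```python
# Separable center-surround sketch: surround mean via two 1-D passes.

def box_1d(row, r):
    """1-D mean filter of radius r with clamped borders."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def surround_mean(img, r):
    rows = [box_1d(row, r) for row in img]            # pass 1: along rows
    cols = [box_1d(list(c), r) for c in zip(*rows)]   # pass 2: along columns
    return [list(row) for row in zip(*cols)]          # transpose back

def center_surround(img, r):
    s = surround_mean(img, r)
    return [[abs(img[y][x] - s[y][x]) for x in range(len(img[0]))]
            for y in range(len(img))]

img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]   # a single bright "target" pixel
sal = center_surround(img, 1)
print(sal[1][1])                          # 8.0: the hot pixel is most salient
```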

  8. Finger-Vein Image Enhancement Using a Fuzzy-Based Fusion Method with Gabor and Retinex Filtering

    PubMed Central

    Shin, Kwang Yong; Park, Young Ho; Nguyen, Dat Tien; Park, Kang Ryoung

    2014-01-01

    Because of the advantages of finger-vein recognition systems such as live detection and usage as bio-cryptography systems, they can be used to authenticate individual people. However, images of finger-vein patterns are typically unclear because of light scattering by the skin, optical blurring, and motion blurring, which can degrade the performance of finger-vein recognition systems. In response to these issues, a new enhancement method for finger-vein images is proposed. Our method is novel compared with previous approaches in four respects. First, the local and global features of the vein lines of an input image are amplified using Gabor filters in four directions and Retinex filtering, respectively. Second, the means and standard deviations in the local windows of the images produced after Gabor and Retinex filtering are used as inputs for the fuzzy rule and fuzzy membership function, respectively. Third, the optimal weights required to combine the two Gabor and Retinex filtered images are determined using a defuzzification method. Fourth, the use of a fuzzy-based method means that image enhancement does not require additional training data to determine the optimal weights. Experimental results using two finger-vein databases showed that the proposed method enhanced the accuracy of finger-vein recognition compared with previous methods. PMID:24549251
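The fusion step described above can be sketched roughly as a fuzzy-weighted combination of the two filtered pixel values. The membership functions, the rule, and every constant below are invented for illustration and are not the paper's:

```python
# Illustrative fuzzy fusion: weight the Gabor- and Retinex-filtered pixels
# by a weight obtained through centroid defuzzification.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_weight(local_std):
    """Assumed rule of thumb: high local contrast -> trust Gabor more."""
    low  = tri(local_std, -0.5, 0.0, 0.5)   # 'low contrast' set  -> w = 0.3
    high = tri(local_std,  0.0, 0.5, 1.5)   # 'high contrast' set -> w = 0.8
    # Centroid defuzzification over the two singleton outputs
    return (low * 0.3 + high * 0.8) / (low + high)

def fuse(gabor_px, retinex_px, local_std):
    w = fuzzy_weight(local_std)
    return w * gabor_px + (1.0 - w) * retinex_px

print(fuse(gabor_px=0.9, retinex_px=0.5, local_std=0.25))  # 0.72
```

As in the paper's framing, no training data is needed: the weights come entirely from the membership functions and the defuzzification rule.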

  9. Branch length similarity entropy-based descriptors for shape representation

    NASA Astrophysics Data System (ADS)

    Kwon, Ohsung; Lee, Sang-Hee

    2017-11-01

    In previous studies, we showed that the branch length similarity (BLS) entropy profile could be successfully used for shape recognition, for example of battle tanks, facial expressions, and butterflies. In the present study, we propose new descriptors for the recognition, roundness, symmetry, and surface roughness, which are more accurate and faster to compute than the previous descriptors. Roundness represents how closely a shape resembles a circle, symmetry characterizes how similar a shape is to its mirrored (flipped) version, and surface roughness quantifies the degree of vertical deviations of a shape boundary. To evaluate the performance of the descriptors, we used a database of leaf images with 12 species. Each species consisted of 10-20 leaf images, and the total number of images was 160. The evaluation showed that the new descriptors successfully discriminated the leaf species. We believe that the descriptors can be a useful tool in the field of pattern recognition.
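One common way to realize a roundness descriptor (the paper's exact definition may differ) is the isoperimetric ratio 4*pi*A/P^2, which equals 1 for a circle and is smaller for every other shape:

```python
import math

# Roundness as the isoperimetric ratio of a polygonal boundary.

def polygon_area(pts):
    """Shoelace formula."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1]
                   - pts[(i + 1) % n][0] * pts[i][1] for i in range(n))) / 2

def perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def roundness(pts):
    p = perimeter(pts)
    return 4 * math.pi * polygon_area(pts) / (p * p)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
circle = [(math.cos(2 * math.pi * t / 100), math.sin(2 * math.pi * t / 100))
          for t in range(100)]
print(roundness(square))   # pi/4, about 0.785
print(roundness(circle))   # close to 1
```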

  10. Target Recognition Using Neural Networks for Model Deformation Measurements

    NASA Technical Reports Server (NTRS)

    Ross, Richard W.; Hibler, David L.

    1999-01-01

    Optical measurements provide a non-invasive method for measuring deformation of wind tunnel models. Model deformation systems use targets mounted or painted on the surface of the model to identify known positions, and photogrammetric methods are used to calculate 3-D positions of the targets on the model from digital 2-D images. Under ideal conditions, the reflective targets are placed against a dark background and provide high-contrast images, aiding in target recognition. However, glints of light reflecting from the model surface, or reduced contrast caused by light source or model smoothness constraints, can compromise accurate target determination using current algorithmic methods. This paper describes a technique using a neural network and image processing technologies which increases the reliability of target recognition systems. Unlike algorithmic methods, the neural network can be trained to identify the characteristic patterns that distinguish targets from other objects of similar size and appearance and can adapt to changes in lighting and environmental conditions.

  11. Local gradient Gabor pattern (LGGP) with applications in face recognition, cross-spectral matching, and soft biometrics

    NASA Astrophysics Data System (ADS)

    Chen, Cunjian; Ross, Arun

    2013-05-01

    Researchers in face recognition have been using Gabor filters for image representation due to their robustness to complex variations in expression and illumination. Numerous methods have been proposed to model the output of filter responses by employing either local or global descriptors. In this work, we propose a novel but simple approach for encoding gradient information on Gabor-transformed images to represent the face, which can be used for identity, gender and ethnicity assessment. Extensive experiments on the standard face benchmark FERET (Visible versus Visible), as well as the heterogeneous face dataset HFB (Near-infrared versus Visible), suggest that the matching performance of the proposed descriptor is comparable to state-of-the-art descriptor-based approaches in face recognition applications. Furthermore, the same feature set is used in the framework of a Collaborative Representation Classification (CRC) scheme for deducing soft biometric traits such as gender and ethnicity from face images in the AR, Morph and CAS-PEAL databases.

  12. Dissociating neural markers of stimulus memorability and subjective recognition during episodic retrieval.

    PubMed

    Bainbridge, Wilma A; Rissman, Jesse

    2018-06-06

    While much of memory research takes an observer-centric focus looking at participant performance, recent work has pinpointed important item-centric effects on memory, or how intrinsically memorable a given stimulus is. However, little is known about the neural correlates of memorability during memory retrieval, or how such correlates relate to subjective memory behavior. Here, stimuli and blood-oxygen-level dependent data from a prior functional magnetic resonance imaging (fMRI) study were reanalyzed using a memorability-based framework. In that study, sixteen participants studied 200 novel face images and were scanned while making recognition memory judgments on those faces, interspersed with 200 unstudied faces. In the current investigation, memorability scores for those stimuli were obtained through an online crowd-sourced (N = 740) continuous recognition test that measured each image's corrected recognition rate. Representational similarity analyses were conducted across the brain to identify regions wherein neural pattern similarity tracked item-specific effects (stimulus memorability) versus observer-specific effects (individual memory performance). We find two non-overlapping sets of regions, with memorability-related information predominantly represented within ventral and medial temporal regions and memory retrieval outcome-related information within fronto-parietal regions. These memorability-based effects persist regardless of image history, implying that coding of stimulus memorability may be a continuous and automatic perceptual process.

  13. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    PubMed

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  14. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor); Guan, Chun (Inventor)

    2010-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object, such as an anatomical feature. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns at the anatomical feature; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  15. System and technique for retrieving depth information about a surface by projecting a composite image of modulated light patterns

    NASA Technical Reports Server (NTRS)

    Guan, Chun (Inventor); Hassebrook, Laurence G. (Inventor); Lau, Daniel L. (Inventor)

    2008-01-01

    A technique, associated system and program code, for retrieving depth information about at least one surface of an object. Core features include: projecting a composite image comprising a plurality of modulated structured light patterns at the object; capturing an image reflected from the surface; and recovering pattern information from the reflected image for each of the modulated structured light patterns. Pattern information is preferably recovered for each modulated structured light pattern used to create the composite by performing a demodulation of the reflected image. Reconstruction of the surface can be accomplished by using depth information from the recovered patterns to produce a depth map/mapping thereof. Each signal waveform used for the modulation of a respective structured light pattern is distinct from each of the other signal waveforms used for the modulation of other structured light patterns of a composite image; these signal waveforms may be selected from suitable types in any combination of distinct signal waveforms, provided the waveforms used are uncorrelated with respect to each other. The depth map/mapping can be utilized in a host of applications, for example: displaying a 3-D view of the object; a virtual reality user-interaction interface with a computerized device; face--or other animal feature or inanimate object--recognition and comparison techniques for security or identification purposes; and 3-D video teleconferencing/telecollaboration.

  16. Optical computing and neural networks; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 16, 17, 1992

    NASA Technical Reports Server (NTRS)

    Hsu, Ken-Yuh (Editor); Liu, Hua-Kuang (Editor)

    1992-01-01

    The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume.)

  17. Mutual information-based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in the proposition of a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and is performed using the Mutual Information (MI) technique. For facial feature extraction, we apply the Local Binary Pattern (LBP) operator on the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, whilst reducing the feature vector dimension.

  18. Optical computing and neural networks; Proceedings of the Meeting, National Chiao Tung Univ., Hsinchu, Taiwan, Dec. 16, 17, 1992

    NASA Astrophysics Data System (ADS)

    Hsu, Ken-Yuh; Liu, Hua-Kuang

    The present conference discusses optical neural networks, photorefractive nonlinear optics, optical pattern recognition, digital and analog processors, and holography and its applications. Attention is given to bifurcating optical information processing, neural structures in digital halftoning, an exemplar-based optical neural net classifier for color pattern recognition, volume storage in photorefractive disks, and microlaser-based compact optical neuroprocessors. Also treated are a feature-enhanced optical interpattern-associative neural network model and its optical implementation, an optical pattern binary dual-rail logic gate module, a theoretical analysis for holographic associative memories, joint transform correlators, image addition and subtraction via the Talbot effect, and optical wavelet-matched filters. (No individual items are abstracted in this volume.)

  19. Imaging in gynaecology: How good are we in identifying endometriomas?

    PubMed Central

    Van Holsbeke, C.; Van Calster, B.; Guerriero, S.; Savelli, L.; Leone, F.; Fischerova, D; Czekierdowski, A.; Fruscio, R.; Veldman, J.; Van de Putte, G.; Testa, A.C.; Bourne, T.; Valentin, L.; Timmerman, D.

    2009-01-01

    Aim: To evaluate the performance of subjective evaluation of ultrasound findings (pattern recognition) in discriminating endometriomas from other types of adnexal masses, and to compare the demographic and ultrasound characteristics of the true positive cases with the cases that were presumed to be an endometrioma but proved to have a different histology (false positives) and with the endometriomas missed by pattern recognition (false negatives). Methods: All patients in the International Ovarian Tumor Analysis (IOTA) studies were included for analysis. In the IOTA studies, patients with an adnexal mass who were preoperatively examined by expert sonologists following the same standardized ultrasound protocol were prospectively included in 21 international centres. Sensitivity and specificity for discriminating endometriomas from other types of adnexal masses using pattern recognition were calculated. Ultrasound and some demographic variables of the masses presumed to be endometriomas (true positives and false positives) were analysed and compared with the variables of the endometriomas missed by pattern recognition (false negatives) as well as the true negatives. Results: IOTA phases 1, 1b and 2 included 3511 patients, of which 2560 masses were benign (73%) and 951 malignant (27%). The dataset included 713 endometriomas. Sensitivity and specificity for pattern recognition were 81% (577/713) and 97% (2723/2798). The true positives were more often unilocular with ground glass echogenicity than the masses in any other category. Among the 75 false positive cases, 66 were benign but 9 were malignant (5 borderline tumours, 1 rare primary invasive tumour and 3 endometrioid adenocarcinomas). The presumed diagnosis suggested by the sonologist in case of a missed endometrioma was mostly a functional cyst or cystadenoma.
Conclusion: Expert sonologists can quite accurately discriminate endometriomas from other types of adnexal masses, but in this dataset 1% of the masses that were classified as endometrioma by pattern recognition proved to be malignancies. PMID:25478066
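The headline rates follow directly from the counts reported in the abstract (577 of 713 endometriomas detected; 2723 of 2798 other masses correctly excluded):

```python
# Reproduce the reported sensitivity and specificity from the raw counts.
sensitivity = 577 / 713     # true positives / all endometriomas
specificity = 2723 / 2798   # true negatives / all non-endometriomas
print(round(sensitivity * 100))  # 81
print(round(specificity * 100))  # 97
```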

  20. Binary Programming Models of Spatial Pattern Recognition: Applications in Remote Sensing Image Analysis

    DTIC Science & Technology

    1991-12-01

    [Extraction residue; only table-of-contents entries and a sentence fragment of this DTIC abstract survive: 2.6.1 Multi-Shape Detection; 2.6.2 Line Segment Extraction and Re-Combination; 2.6.3 Planimetric Feature Extraction; 2.6.4 Line Segment Extraction From Statistical Texture Analysis; 2.6.5 Edge Following as Graph ...; "...image after image, could benefit due to the fact that major spatial characteristics of subregions could be extracted, and minor spatial changes could be ..."]

  1. Human Expertise Helps Computer Classify Images

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1991-01-01

    Two-domain method of computational classification of images requires less computation than other methods for computational recognition, matching, or classification of images or patterns. Does not require explicit computational matching of features, and incorporates human expertise without requiring translation of mental processes of classification into language comprehensible to computer. Conceived to "train" computer to analyze photomicrographs of microscope-slide specimens of leucocytes from human peripheral blood to distinguish between specimens from healthy and specimens from traumatized patients.

  2. Application of affinity propagation algorithm based on manifold distance for transformer PD pattern recognition

    NASA Astrophysics Data System (ADS)

    Wei, B. G.; Huo, K. X.; Yao, Z. F.; Lou, J.; Li, X. Y.

    2018-03-01

    Recognizing partial discharge (PD) patterns is one of the difficult problems encountered in research on transformer condition maintenance technology. According to the main physical characteristics of PD, three models of oil-paper insulation defects were set up in the laboratory to study the PD of transformers, and phase-resolved partial discharge (PRPD) patterns were constructed. Using the least squares method, grey-scale images of the PRPD patterns were constructed, and the features of each grey-scale image were 28 box dimensions and 28 information dimensions. An affinity propagation algorithm based on manifold distance (AP-MD) for transformer PD pattern recognition was established, and the box dimension and information dimension data were clustered with AP-MD. The study shows that the clustering result of AP-MD is better than the results of affinity propagation (AP), k-means and the fuzzy c-means algorithm (FCM). By choosing different k values for the k-nearest-neighbor step, we find that the clustering accuracy of AP-MD falls when the k value is too large or too small, and that the optimal k value depends on the sample size.
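A box-counting (box dimension) feature of the kind used above can be sketched as follows. The grid sizes, the test set, and the least-squares fit are illustrative assumptions, not the paper's exact feature code:

```python
import math

# Box-counting dimension: count occupied boxes N(s) at several box sizes s
# and fit log N(s) against log(1/s); the slope estimates the dimension.

def box_dimension(points, sizes):
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(sizes)                      # least-squares slope
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A straight line of points should have dimension close to 1
line = [(i * 0.01, i * 0.01) for i in range(10000)]
print(box_dimension(line, sizes=[1, 2, 4, 8, 16]))
```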

  3. Technical issues for the eye image database creation at distance

    NASA Astrophysics Data System (ADS)

    Oropesa Morales, Lester Arturo; Maldonado Cano, Luis Alejandro; Soto Aldaco, Andrea; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Rodríguez Vázquez, Manuel Antonio; Pérez Rosas, Osvaldo Gerardo; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; Ramírez Acosta, Alejandro Álvaro

    2016-09-01

    Biometrics refers to identifying people through their physical characteristics or behavior, such as fingerprints, face, DNA, hand geometry, and retina and iris patterns. Typically, the iris pattern is acquired at short distance to recognize a person; however, in the past few years it has become a challenge to identify a person by the iris pattern at a certain distance in non-cooperative environments. This challenge comprises: 1) high-quality iris images, 2) light variation, 3) blur reduction, 4) specular reflection reduction, 5) the distance from the acquisition system to the user, and 6) standardizing the iris size and the pixel density of the iris texture. Solving these issues will add robustness and enhance iris recognition rates. For this reason, we describe the technical issues that must be considered during iris acquisition. Some of these considerations are the camera sensor, the lens, and the mathematical analysis of depth of field (DOF) and field of view (FOV) for iris recognition. Finally, based on these issues, we present experiments that show the results of captures obtained with our camera at a distance and captures obtained with cameras at very short distance.
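The DOF analysis mentioned above typically rests on the standard thin-lens approximations; a back-of-envelope sketch, with illustrative numbers rather than the paper's parameters:

```python
# Standard thin-lens DOF approximation via the hyperfocal distance.
# f: focal length, N: f-number, c: circle of confusion, s: subject distance
# (all lengths in the same unit, mm here).

def depth_of_field(f, N, c, s):
    H = f * f / (N * c) + f                       # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)          # near limit of sharpness
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far, far - near

# e.g. a 200 mm lens at f/8, c = 0.01 mm, iris at 3 m
near, far, dof = depth_of_field(f=200.0, N=8.0, c=0.01, s=3000.0)
print(round(dof, 1))   # only a few centimetres of usable depth at 3 m
```

The shrinking DOF at long focal lengths is exactly why iris-at-a-distance capture is hard: the subject's eye must stay inside a narrow in-focus slab.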

  4. Learning Rotation-Invariant Local Binary Descriptor.

    PubMed

    Duan, Yueqi; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2017-08-01

    In this paper, we propose a rotation-invariant local binary descriptor (RI-LBD) learning method for visual recognition. Compared with hand-crafted local binary descriptors, such as local binary pattern and its variants, which require strong prior knowledge, local binary feature learning methods are more efficient and data-adaptive. Unlike existing learning-based local binary descriptors, such as compact binary face descriptor and simultaneous local binary feature learning and encoding, which are susceptible to rotations, our RI-LBD first categorizes each local patch into a rotational binary pattern (RBP), and then jointly learns the orientation for each pattern and the projection matrix to obtain RI-LBDs. As all the rotation variants of a patch belong to the same RBP, they are rotated into the same orientation and projected into the same binary descriptor. Then, we construct a codebook by a clustering method on the learned binary codes, and obtain a histogram feature for each image as the final representation. In order to exploit higher order statistical information, we extend our RI-LBD to the triple rotation-invariant co-occurrence local binary descriptor (TRICo-LBD) learning method, which learns a triple co-occurrence binary code for each local patch. Extensive experimental results on four different visual recognition tasks, including image patch matching, texture classification, face recognition, and scene classification, show that our RI-LBD and TRICo-LBD outperform most existing local descriptors.

  5. Rotation-invariant image and video description with local binary pattern features.

    PubMed

    Zhao, Guoying; Ahonen, Timo; Matas, Jiří; Pietikäinen, Matti

    2012-04-01

    In this paper, we propose a novel approach to compute rotation-invariant features from histograms of local noninvariant patterns. We apply this approach to both static and dynamic local binary pattern (LBP) descriptors. For static-texture description, we present LBP histogram Fourier (LBP-HF) features, and for dynamic-texture recognition, we present two rotation-invariant descriptors computed from the LBPs from three orthogonal planes (LBP-TOP) features in the spatiotemporal domain. LBP-HF is a novel rotation-invariant image descriptor computed from discrete Fourier transforms of LBP histograms. The approach can also be generalized to embed any uniform features into this framework, and combining supplementary information, e.g., the sign and magnitude components of the LBP, can improve the descriptive ability. Moreover, two rotation-invariant variants of the LBP-TOP descriptor are proposed; LBP-TOP is an effective descriptor for dynamic-texture recognition, as shown by its recent success in different application problems, but it is not rotation invariant. In the experiments, it is shown that the LBP-HF and its extensions outperform noninvariant and earlier versions of the rotation-invariant LBP in rotation-invariant texture classification. In experiments on two dynamic-texture databases with rotations or view variations, the proposed video features can effectively deal with rotation variations of dynamic textures (DTs). They are also robust with respect to changes in viewpoint, outperforming recent methods proposed for view-invariant recognition of DTs.
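The property at the heart of LBP-HF, that an image rotation cyclically shifts the histogram of uniform LBP codes while DFT magnitudes are invariant to cyclic shifts, can be checked in a few lines. The histogram below is a toy stand-in, not real LBP codes:

```python
import cmath

# DFT magnitudes of a histogram are unchanged by cyclic shifts of its bins.

def dft_magnitudes(h):
    n = len(h)
    return [abs(sum(h[k] * cmath.exp(-2j * cmath.pi * u * k / n)
                    for k in range(n)))
            for u in range(n)]

h = [5, 9, 2, 7, 1, 3, 8, 4]   # toy LBP histogram over 8 orientations
h_rot = h[3:] + h[:3]          # histogram after a 3-bin cyclic shift

f1 = dft_magnitudes(h)
f2 = dft_magnitudes(h_rot)
print(all(abs(a - b) < 1e-9 for a, b in zip(f1, f2)))  # True
```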

  6. Connectivity strategies for higher-order neural networks applied to pattern recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1990-01-01

    Different strategies for non-fully connected HONNs (higher-order neural networks) are discussed, showing that by using such strategies an input field of 128 x 128 pixels can be attained while still achieving in-plane rotation and translation-invariant recognition. These techniques allow HONNs to be used with the larger input scenes required for practical pattern-recognition applications. The number of interconnections that must be stored has been reduced by a factor of approximately 200,000 in a T/C case and about 2000 in a Space Shuttle/F-18 case by using regional connectivity. Third-order networks have been simulated using several connection strategies. The method found to work best is regional connectivity. The main advantages of this strategy are the following: (1) it considers features of various scales within the image and thus gets a better sample of what the image looks like; (2) it is invariant to shape-preserving geometric transformations, such as translation and rotation; (3) the connections are predetermined so that no extra computations are necessary during run time; and (4) it does not require any extra storage for recording which connections were formed.

  7. Consistent melanophore spot patterns allow long-term individual recognition of Atlantic salmon Salmo salar.

    PubMed

    Stien, L H; Nilsson, J; Bui, S; Fosseidengen, J E; Kristiansen, T S; Øverli, Ø; Folkedal, O

    2017-12-01

    The present study shows that permanent melanophore spot patterns in Atlantic salmon Salmo salar make it possible to use images of the operculum to keep track of individual fish over extended periods of their life history. Post-smolt S. salar (n = 246) were initially photographed at an average mass of 98 g and again 10 months later after rearing in a sea cage, at an average mass of 3088 g. Spots that were present initially remained and were the most overt (largest) 10 months later, while new and less overt spots had developed. Visual recognition of spot size and position showed that fish with at least four initial spots were relatively easy to identify, while identifying fish with less than four spots could be challenging. An automatic image analysis method was developed and shows potential for fast match processing of large numbers of fish. The current findings promote visual recognition of opercular spots as a welfare-friendly alternative to tagging in experiments involving salmonid fishes. © The Authors. Journal of Fish Biology published by John Wiley & Sons Ltd on behalf of The Fisheries Society of the British Isles.

  8. Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing

    NASA Astrophysics Data System (ADS)

    Tian, Q.; Fainman, Y.; Lee, Sing H.

    1989-02-01

    The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, the Hotelling trace criterion (HTC), the Fukunaga-Koontz (F-K) transform, the linear discriminant function (LDF) and the generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form; however, the calculations of the discriminant vectors differ between algorithms. Summaries of how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially in the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also theoretically compare the classification effectiveness of the discriminant functions from F-S, HTC and F-K with those of LDF and GMF, and the linear-mapping-based algorithms with the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, with each image consisting of 64 x 64 pixels.
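The pseudo-inverse shortcut suggested above can be sketched with toy numbers: for a tall, full-column-rank data matrix X (pixels x images), the pseudo-inverse is (X^T X)^{-1} X^T, so only the small image correlation matrix ever needs inverting:

```python
# Pseudo-inverse of a tall matrix via the small n x n image correlation
# matrix (toy example, n = 2 images, 4 "pixels").

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 0.0]]   # 4 pixels, 2 images
XtX = matmul(transpose(X), X)                          # only 2x2 to invert
X_pinv = matmul(inv2(XtX), transpose(X))               # (X^T X)^{-1} X^T

I = matmul(X_pinv, X)                                  # should be identity
print([[round(v, 9) for v in row] for row in I])       # [[1.0, 0.0], [0.0, 1.0]]
```

In the 64 x 64 experiments described, X has 4096 rows, but the matrix actually inverted stays at the number of training images.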

  9. Electronic system with memristive synapses for pattern recognition

    PubMed Central

    Park, Sangsu; Chu, Myonglae; Kim, Jongin; Noh, Jinwoo; Jeon, Moongu; Hun Lee, Byoung; Hwang, Hyunsang; Lee, Boreom; Lee, Byung-geun

    2015-01-01

    Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a/, /i/, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction. PMID:25941950

  10. Intelligent Scene Analysis and Recognition

    DTIC Science & Technology

    2010-03-30

    Database, 1998, pp. 42–51. [9] I. Biederman, Aspects and extension of a theory of human image understanding, Z. Pylyshyn, Ed. Ablex Publishing Corporation...geometry in the visual system," Biological Cybernetics, vol. 55, no. 6, pp. 367–375, 1987. [30] W. T. Freeman and E. H. Adelson, "The design and use of...Computer Vision and Pattern Recognition, 2009, pp. 1980–1987. [47] M. Leordeanu and M. Hebert, "A spectral technique for correspondence problems using

  11. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    PubMed

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
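    To make the HMM-based analysis concrete, here is a minimal sketch (ours, not the authors' implementation) of scoring a fixation sequence over coarse ROIs against a "holistic" and an "analytic" model with the scaled forward algorithm; the ROI coding, the two-state structure and all probabilities are invented for illustration:

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for discrete emissions.
    pi: (S,) initial probs, A: (S, S) transitions, B: (S, O) emissions."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()                      # scale to avoid underflow
    logp = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        logp += np.log(c)
        alpha = alpha / c
    return logp

# ROIs coded as 0 = face centre, 1 = left eye, 2 = right eye (toy parameters)
pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2], [0.3, 0.7]])
B_holistic = np.array([[0.8, 0.10, 0.10], [0.6, 0.20, 0.20]])  # centre-heavy
B_analytic = np.array([[0.3, 0.35, 0.35], [0.2, 0.40, 0.40]])  # eye-heavy

fixations = [0, 1, 2, 1, 0, 2, 1]        # an eye-heavy scan path
ll_h = hmm_loglik(fixations, pi, A, B_holistic)
ll_a = hmm_loglik(fixations, pi, A, B_analytic)
assert ll_a > ll_h                       # classified as the analytic pattern
```

    Comparing likelihoods under the two group models is one simple way to label an individual's pattern as holistic or analytic.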

  12. Image processing applications: From particle physics to society

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low-consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and a full-custom associative memory chip. The PU has been developed for real-time tracking in particle physics experiments, but delivers flexible features for potential application in a wide range of fields. It has been proposed for accelerated pattern matching in Magnetic Resonance Fingerprinting (biomedical applications), in real-time detection of space debris trails in astronomical images (space applications) and in brain emulation for image processing (cognitive image processing). We illustrate the potential of the PU for these new applications.

  13. Data management in pattern recognition and image processing systems

    NASA Technical Reports Server (NTRS)

    Zobrist, A. L.; Bryant, N. A.

    1976-01-01

    Data management considerations are important to any system which handles large volumes of data or where the manipulation of data is technically sophisticated. A particular problem is the introduction of image-formatted files into the mainstream of data processing applications. This report describes a comprehensive system for the manipulation of image, tabular, and graphical data sets which involves conversions between the various data types. A key characteristic is the use of image processing technology to accomplish data management tasks. Because of this, the term 'image-based information system' has been adopted.

  14. Computer-assisted visual interactive recognition and its prospects of implementation over the Internet

    NASA Astrophysics Data System (ADS)

    Zou, Jie; Gattani, Abhishek

    2005-01-01

    When completely automated systems don't yield acceptable accuracy, many practical pattern recognition systems involve the human either at the beginning (pre-processing) or towards the end (handling rejects). We believe that it may be more useful to involve the human throughout the recognition process rather than just at the beginning or end. We describe a methodology of interactive visual recognition for human-centered low-throughput applications, Computer Assisted Visual InterActive Recognition (CAVIAR), and discuss the prospects of implementing CAVIAR over the Internet. The novelty of CAVIAR is image-based interaction through a domain-specific parameterized geometrical model, which reduces the semantic gap between humans and computers. The user may interact with the computer anytime that she considers its response unsatisfactory. The interaction improves the accuracy of the classification features by improving the fit of the computer-proposed model. The computer makes subsequent use of the parameters of the improved model to refine not only its own statistical model-fitting process, but also its internal classifier. The CAVIAR methodology was applied to implement a flower recognition system. The principal conclusions from the evaluation of the system include: 1) the average recognition time of the CAVIAR system is significantly shorter than that of the unaided human; 2) its accuracy is significantly higher than that of the unaided machine; 3) it can be initialized with as few as one training sample per class and still achieve high accuracy; and 4) it demonstrates a self-learning ability. We have also implemented a Mobile CAVIAR system, where a pocket PC, as a client, connects to a server through wireless communication. The motivation behind a mobile platform for CAVIAR is to apply the methodology in a human-centered pervasive environment, where the user can seamlessly interact with the system for classifying field-data. 
Deploying CAVIAR to a networked mobile platform poses the challenge of classifying field images and programming under constraints of display size, network bandwidth, processor speed, and memory size. Editing of the computer-proposed model is performed on the handheld, while statistical model fitting and classification take place on the server. The possibility that the user can easily take several photos of the object poses an interesting information fusion problem. The advantage of the Internet is that the patterns identified by different users can be pooled together to benefit all peer users. When users identify patterns with CAVIAR in a networked setting, they also collect training samples and provide opportunities for machine learning from their intervention. CAVIAR implemented over the Internet provides a perfect test bed for, and extends, the concept of the Open Mind Initiative proposed by David Stork. Our experimental evaluation focuses on human time, machine and human accuracy, and machine learning. We devoted much effort to evaluating the use of our image-based user interface and to developing principles for the evaluation of interactive pattern recognition systems. The Internet architecture and Mobile CAVIAR methodology have many applications. We are exploring the directions of teledermatology, face recognition, and education.

  15. Window-based method for approximating the Hausdorff in three-dimensional range imagery

    DOEpatents

    Koch, Mark W [Albuquerque, NM

    2009-06-02

    One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
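    As background for the patent's window-based approximation, the exact Hausdorff distance between two point sets can be written down directly; this naive O(|A||B|) computation is precisely what becomes impractical for large 3D range images (toy point sets of ours, not the patent's method):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 3D point sets, naive O(|A||B|)."""
    # pairwise Euclidean distances between every point in A and every point in B
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    h_ab = D.min(axis=1).max()   # directed h(A -> B): worst best-match from A
    h_ba = D.min(axis=0).max()   # directed h(B -> A): worst best-match from B
    return max(h_ab, h_ba)

# a template point cloud and a probe translated by 0.25 along z;
# for well-separated points the Hausdorff equals the translation length
template = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], dtype=float)
probe = template + np.array([0.0, 0.0, 0.25])
assert abs(hausdorff(template, probe) - 0.25) < 1e-12
```

    The window-based method of the patent approximates these min-max searches locally instead of over all point pairs.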

  16. Digital Images and Human Vision

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    Processing of digital images destined for visual consumption raises many interesting questions regarding human visual sensitivity. This talk will survey some of these questions, including some that have been answered and some that have not. There will be an emphasis upon visual masking, and a distinction will be drawn between masking due to contrast gain control processes, and due to processes such as hypothesis testing, pattern recognition, and visual search.

  17. Remote sensing of agricultural crops and soils

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator)

    1982-01-01

    Research results and accomplishments of sixteen tasks in the following areas are described: (1) corn and soybean scene radiation research; (2) soil moisture research; (3) sampling and aggregation research; (4) pattern recognition and image registration research; and (5) computer and data base services.

  18. QWT: Retrospective and New Applications

    NASA Astrophysics Data System (ADS)

    Xu, Yi; Yang, Xiaokang; Song, Li; Traversoni, Leonardo; Lu, Wei

    Quaternion wavelet transform (QWT) has attracted much attention in recent years as a new image analysis tool. In most cases, it is an extension of the real wavelet transform and complex wavelet transform (CWT) using the quaternion algebra and the 2D Hilbert transform of filter theory, where an analytic signal representation is desirable to retrieve a phase-magnitude description of intrinsically 2D geometric structures in a grayscale image. In the context of color image processing, however, it is adapted to analyze the image pattern and color information as a whole unit by mapping sequential color pixels to a quaternion-valued vector signal. This paper provides a retrospective of QWT and investigates its potential use in the domains of image registration, image fusion, and color image recognition. We indicate that it is important for QWT to incorporate a mechanism for adaptive-scale representation of geometric features, which is further clarified through two application instances: uncalibrated stereo matching and optical flow estimation. Moreover, a quaternionic phase congruency model is defined based on the analytic signal representation so as to operate as an invariant feature detector for image registration. To achieve better localization of edges and textures in the image fusion task, we incorporate a directional filter bank (DFB) into the quaternion wavelet decomposition scheme to greatly enhance the direction selectivity and anisotropy of QWT. Finally, the strong potential of QWT in color image recognition is materialized in a chromatic face recognition system by establishing invariant color features. Extensive experimental results are presented to highlight the exciting properties of QWT.
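    A small sketch of the quaternion algebra that underlies QWT's treatment of a color pixel as a whole unit (our illustration; the pixel values and the (w, x, y, z) component order are assumptions, not from the paper):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
assert np.allclose(qmul(i, j), [0, 0, 0, 1])    # ij = k
assert np.allclose(qmul(j, i), [0, 0, 0, -1])   # ji = -k: non-commutative

# an RGB pixel handled as a whole unit: the pure quaternion Ri + Gj + Bk
pixel = np.array([0.0, 0.8, 0.4, 0.2])
# the square of a pure quaternion is the real scalar -(R^2 + G^2 + B^2)
assert np.allclose(qmul(pixel, pixel), [-0.84, 0, 0, 0])
```

    Encoding the three channels in one algebraic object is what lets quaternion filters process color jointly rather than channel by channel.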

  19. QR-on-a-chip: a computer-recognizable micro-pattern engraved microfluidic device for high-throughput image acquisition.

    PubMed

    Yun, Kyungwon; Lee, Hyunjae; Bang, Hyunwoo; Jeon, Noo Li

    2016-02-21

    This study proposes a novel way to achieve high-throughput image acquisition based on a computer-recognizable micro-pattern implemented on a microfluidic device. We integrated the QR code, a two-dimensional barcode system, onto the microfluidic device to simplify imaging of multiple ROIs (regions of interest). A standard QR code pattern was modified to arrays of cylindrical structures of polydimethylsiloxane (PDMS). Utilizing the recognition of the micro-pattern, the proposed system enables: (1) device identification, which allows referencing additional information about the device, such as device imaging sequences or the ROIs, and (2) composing a coordinate system for an arbitrarily located microfluidic device with respect to the stage. Based on these functionalities, the proposed method performs one-step high-throughput imaging for data acquisition in microfluidic devices without further manual exploration and locating of the desired ROIs. In our experience, the proposed method significantly reduced acquisition preparation time. We expect the method to substantially improve data acquisition and analysis for prototype devices.

  20. Optical Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Yu, Francis T. S.; Jutamulia, Suganda

    2008-10-01

    Contributors; Preface; 1. Pattern recognition with optics Francis T. S. Yu and Don A. Gregory; 2. Hybrid neural networks for nonlinear pattern recognition Taiwei Lu; 3. Wavelets, optics, and pattern recognition Yao Li and Yunlong Sheng; 4. Applications of the fractional Fourier transform to optical pattern recognition David Mendlovic, Zeev Zalevsky and Haldun M. Ozaktas; 5. Optical implementation of mathematical morphology Tien-Hsin Chao; 6. Nonlinear optical correlators with improved discrimination capability for object location and recognition Leonid P. Yaroslavsky; 7. Distortion-invariant quadratic filters Gregory Gheen; 8. Composite filter synthesis as applied to pattern recognition Shizhuo Yin and Guowen Lu; 9. Iterative procedures in electro-optical pattern recognition Joseph Shamir; 10. Optoelectronic hybrid system for three-dimensional object pattern recognition Guoguang Mu, Mingzhe Lu and Ying Sun; 11. Applications of photorefractive devices in optical pattern recognition Xiangyang Yang; 12. Optical pattern recognition with microlasers Eung-Gi Paek; 13. Optical properties and applications of bacteriorhodopsin Q. Wang Song and Yu-He Zhang; 14. Liquid-crystal spatial light modulators Aris Tanone and Suganda Jutamulia; 15. Representations of fully complex functions on real-time spatial light modulators Robert W. Cohn and Laurence G. Hassebrook; Index.

  1. Morphological self-organizing feature map neural network with applications to automatic target recognition

    NASA Astrophysics Data System (ADS)

    Zhang, Shijun; Jing, Zhongliang; Li, Jianxun

    2005-01-01

    The rotation invariant feature of the target is obtained using the multi-direction feature extraction property of the steerable filter. Combining the morphological operation top-hat transform with the self-organizing feature map neural network, the adaptive topological region is selected. Using the erosion operation, the topological region shrinkage is achieved. The steerable filter based morphological self-organizing feature map neural network is applied to automatic target recognition of binary standard patterns and real-world infrared sequence images. Compared with the Hamming network and morphological shared-weight networks, the proposed method achieves a higher correct recognition rate, robust adaptability, quick training, and better generalization.
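    The top-hat transform mentioned above can be illustrated with scipy's grey-scale morphology (a toy scene of ours, not the paper's infrared data): the white top-hat subtracts the grey opening from the image, suppressing structures larger than the structuring element while keeping small bright targets.

```python
import numpy as np
from scipy import ndimage

# synthetic image: slowly varying illumination gradient + one small bright target
y, x = np.mgrid[0:64, 0:64]
img = 0.5 * x / 63.0                 # horizontal background ramp
img[30:34, 30:34] += 1.0             # 4 x 4 bright target

# white top-hat = img - grey opening with a 9 x 9 flat structuring element:
# the smooth ramp survives the opening and is subtracted away, while the
# 4 x 4 target is smaller than the element and is therefore preserved
tophat = ndimage.white_tophat(img, size=9)

assert tophat[31, 31] > 0.9          # target retained
assert tophat[10, 50] < 0.1          # gradient background suppressed
```

    In the paper this kind of background suppression feeds the self-organizing feature map rather than a fixed threshold.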

  2. Pen-chant: Acoustic emissions of handwriting and drawing

    NASA Astrophysics Data System (ADS)

    Seniuk, Andrew G.

    The sounds generated by a writing instrument ('pen-chant') provide a rich and underutilized source of information for pattern recognition. We examine the feasibility of recognition of handwritten cursive text, exclusively through an analysis of acoustic emissions. We design and implement a family of recognizers using a template matching approach, with templates and similarity measures derived variously from: smoothed amplitude signal with fixed resolution, discrete sequence of magnitudes obtained from peaks in the smoothed amplitude signal, and ordered tree obtained from a scale space signal representation. Test results are presented for recognition of isolated lowercase cursive characters and for whole words. We also present qualitative results for recognizing gestures such as circling, scratch-out, check-marks, and hatching. Our first set of results, using samples provided by the author, yield recognition rates of over 70% (alphabet) and 90% (26 words), with a confidence of +/-8%, based solely on acoustic emissions. Our second set of results uses data gathered from nine writers. These results demonstrate that acoustic emissions are a rich source of information, usable---on their own or in conjunction with image-based features---to solve pattern recognition problems. In future work, this approach can be applied to writer identification, handwriting and gesture-based computer input technology, emotion recognition, and temporal analysis of sketches.
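    A minimal sketch of the template-matching idea on smoothed amplitude envelopes (our toy signals, window length, and similarity score; the author's recognizers also use peak sequences and scale-space trees):

```python
import numpy as np

def envelope(signal, win=32):
    """Smoothed amplitude envelope: moving average of |signal|."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def ncc(a, b):
    """Normalized cross-correlation score between two equal-length envelopes."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify(sample, templates):
    """Return the label of the best-matching stored template."""
    return max(templates, key=lambda label: ncc(sample, templates[label]))

# toy 'pen sounds': bursts at different times stand in for stroke energy
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
word_a = np.sin(2 * np.pi * 40 * t) * (t < 0.3)    # early stroke energy
word_b = np.sin(2 * np.pi * 40 * t) * (t > 0.7)    # late stroke energy
templates = {"a": envelope(word_a), "b": envelope(word_b)}

noisy = word_a + 0.2 * rng.standard_normal(t.size)
assert classify(envelope(noisy), templates) == "a"
```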

  3. Pattern-Recognition System for Approaching a Known Target

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Cheng, Yang

    2008-01-01

    A closed-loop pattern-recognition system is designed to provide guidance for maneuvering a small exploratory robotic vehicle (rover) on Mars to return to a landed spacecraft to deliver soil and rock samples that the spacecraft would subsequently bring back to Earth. The system could be adapted to terrestrial use in guiding mobile robots to approach known structures that humans could not approach safely, for such purposes as reconnaissance in military or law-enforcement applications, terrestrial scientific exploration, and removal of explosive or other hazardous items. The system has been demonstrated in experiments in which the Field Integrated Design and Operations (FIDO) rover (a prototype Mars rover equipped with a video camera for guidance) is made to return to a mockup of a Mars-lander spacecraft. The FIDO rover camera autonomously acquires an image of the lander from a distance of 125 m in an outdoor environment. Then, under guidance by an algorithm that performs fusion of multiple line and texture features in digitized images acquired by the camera, the rover traverses the intervening terrain, using features derived from images of the lander truss structure. Then, by use of precise pattern matching to determine the position and orientation of the rover relative to the lander, the rover aligns itself with the bottom of ramps extending from the lander, in preparation for climbing the ramps to deliver samples to the lander. The most innovative aspect of the system is a set of pattern-recognition algorithms that govern a three-phase visual-guidance sequence for approaching the lander. During the first phase, a multifeature fusion algorithm integrates the outputs of a horizontal-line-detection algorithm and a wavelet-transform-based visual-area-of-interest algorithm for detecting the lander from a significant distance.
The horizontal-line-detection algorithm is used to determine candidate lander locations based on detection of a horizontal deck that is part of the lander.
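    The horizontal-line-detection step can be sketched as a row-wise accumulation of the vertical image gradient (our simplification, not the mission code; a long horizontal edge such as the lander deck concentrates gradient energy in a single image row):

```python
import numpy as np

def horizontal_line_rows(img, k=1):
    """Return the k row indices with the strongest horizontal-edge response.
    A long horizontal edge produces a row where the vertical gradient is
    large across many columns."""
    gy = np.abs(np.diff(img.astype(float), axis=0))  # vertical gradient
    strength = gy.sum(axis=1)                        # accumulate per row
    return np.argsort(strength)[::-1][:k]

# synthetic scene: dark ground with a bright horizontal 'deck' at rows 40-44
img = np.zeros((80, 120))
img[40:45, 20:100] = 1.0

rows = horizontal_line_rows(img, k=2)
# the strongest responses sit at the top and bottom edges of the deck
assert set(rows) == {39, 44}
```

    Candidate rows found this way would then be cross-checked against the other features in the fusion step.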

  4. Image Analysis and Modeling

    DTIC Science & Technology

    1975-05-01

    place "subgraphs" with more complicated subgraphs. There have been many results reported which extend string-grammar theorems to web grammar...W. Bacus and E. E. Gose, "Leukocyte Pattern Recognition," IEEE SMC-2. No. 2, pp. 513-536, September 1972. [4] J. K. Hawkins

  5. A Monitoring System for Laying Hens That Uses a Detection Sensor Based on Infrared Technology and Image Pattern Recognition.

    PubMed

    Zaninelli, Mauro; Redaelli, Veronica; Luzi, Fabio; Bontempo, Valentino; Dell'Orto, Vittorio; Savoini, Giovanni

    2017-05-24

    In Italy, organic egg production farms use free-range housing systems with a big outdoor area and a flock of no more than 500 hens. With additional devices and/or farming procedures, the whole flock could be forced to stay in the outdoor area for a limited time of the day. As a consequence, ozone treatments of housing areas could be performed, without risks for hens or workers due to its toxicity, in order to reduce the levels of atmospheric ammonia and bacterial load. However, an automatic monitoring system, and a sensor able to detect the presence of animals, would be necessary. For this purpose, a first sensor was developed, but some limits related to the time necessary to detect a hen were observed. In this study, significant improvements for this sensor are proposed. They were achieved by an image pattern recognition technique applied to thermographic images acquired from the housing system. An experimental group of seven laying hens was selected for the tests, carried out for three weeks. The first week was used to set up the sensor. Different templates to use for the pattern recognition were studied, and different floor temperature shifts were investigated. At the end of these evaluations, a template of elliptical shape, with sizes of 135 × 63 pixels, was chosen. Furthermore, a temperature shift of one degree was selected to calculate, for each image, a color background threshold to apply in the following field tests. The obtained results showed an improvement of the sensor detection accuracy, which reached values of sensitivity and specificity of 95.1% and 98.7%. In addition, the time necessary to detect a hen, or classify a case, was reduced to two seconds. This result could allow the sensor to control a bigger area of the housing system. Thus, the resulting monitoring system could allow the sanitary treatments to be performed without risks for either animals or humans.
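    A hen-presence test in the spirit of the sensor described above might threshold a thermographic frame at the floor temperature plus the one-degree shift and check for a sufficiently large warm blob (a sketch under assumed temperatures and blob sizes, not the published algorithm, which matches an elliptical template):

```python
import numpy as np
from scipy import ndimage

def detect_hen(thermal, floor_temp, shift=1.0, min_area=50):
    """Flag a hen when a warm connected blob of more than min_area pixels
    exceeds the floor temperature by more than `shift` degrees."""
    mask = thermal > floor_temp + shift
    labels, n = ndimage.label(mask)          # connected warm regions
    if n == 0:
        return False
    areas = np.bincount(labels.ravel())[1:]  # pixel count per region
    return bool(areas.max() >= min_area)

# synthetic frame: 18 C floor with one 12 x 10 pixel warm body at ~38 C
frame = np.full((120, 160), 18.0)
frame[40:52, 60:70] = 38.0

assert detect_hen(frame, floor_temp=18.0)
assert not detect_hen(np.full((120, 160), 18.0), floor_temp=18.0)
```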

  6. A Monitoring System for Laying Hens That Uses a Detection Sensor Based on Infrared Technology and Image Pattern Recognition

    PubMed Central

    Zaninelli, Mauro; Redaelli, Veronica; Luzi, Fabio; Bontempo, Valentino; Dell’Orto, Vittorio; Savoini, Giovanni

    2017-01-01

    In Italy, organic egg production farms use free-range housing systems with a big outdoor area and a flock of no more than 500 hens. With additional devices and/or farming procedures, the whole flock could be forced to stay in the outdoor area for a limited time of the day. As a consequence, ozone treatments of housing areas could be performed, without risks for hens or workers due to its toxicity, in order to reduce the levels of atmospheric ammonia and bacterial load. However, an automatic monitoring system, and a sensor able to detect the presence of animals, would be necessary. For this purpose, a first sensor was developed, but some limits related to the time necessary to detect a hen were observed. In this study, significant improvements for this sensor are proposed. They were achieved by an image pattern recognition technique applied to thermographic images acquired from the housing system. An experimental group of seven laying hens was selected for the tests, carried out for three weeks. The first week was used to set up the sensor. Different templates to use for the pattern recognition were studied, and different floor temperature shifts were investigated. At the end of these evaluations, a template of elliptical shape, with sizes of 135 × 63 pixels, was chosen. Furthermore, a temperature shift of one degree was selected to calculate, for each image, a color background threshold to apply in the following field tests. The obtained results showed an improvement of the sensor detection accuracy, which reached values of sensitivity and specificity of 95.1% and 98.7%. In addition, the time necessary to detect a hen, or classify a case, was reduced to two seconds. This result could allow the sensor to control a bigger area of the housing system. Thus, the resulting monitoring system could allow the sanitary treatments to be performed without risks for either animals or humans. PMID:28538654

  7. On the Application of Pattern Recognition and AI Technique to the Cytoscreening of Vaginal Smears by Computer

    NASA Astrophysics Data System (ADS)

    Bow, Sing T.; Wang, Xia-Fang

    1989-05-01

    In this paper the concepts of pattern recognition, image processing and artificial intelligence are applied to the development of an intelligent cytoscreening system to differentiate abnormal cytological objects from normal ones in vaginal smears. To achieve this goal, the work listed below is involved: 1. Enhancement of the microscopic images of the smears; 2. Elevation of the qualitative differentiation under the microscope by cytologists to a quantitative differentiation plateau for the epithelial cells, ciliated cells, vacuolated cells, foreign-body giant cells, plasma cells, lymph cells, white blood cells, red blood cells, etc. This knowledge is input into our intelligent cytoscreening system to improve machine differentiation; 3. Selection of a set of effective features to characterize the cytological objects and map them onto various regions of the multiclustered feature space by computer algorithms; and 4. Systematic summarization of the knowledge that a gynecologist has and the way he/she follows when dealing with a case.

  8. Targeted and untargeted-metabolite profiling to track the compositional integrity of ginger during processing using digitally-enhanced HPTLC pattern recognition analysis.

    PubMed

    Ibrahim, Reham S; Fathy, Hoda

    2018-03-30

    Tracking the impact of commonly applied post-harvesting and industrial processing practices on the compositional integrity of ginger rhizome was implemented in this work. Untargeted metabolite profiling was performed using a digitally-enhanced HPTLC method in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple and fast HPTLC image analysis method for simultaneous quantification of the officially recognized markers 6-, 8-, 10-gingerol and 6-shogaol in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying and storage employed during processing have a great influence on the ginger chemo-profile; the different forms of processed ginger should not be used interchangeably. Moreover, it is deemed necessary to consider the holistic metabolic profile for comprehensive evaluation of ginger during processing. Copyright © 2018. Published by Elsevier B.V.
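    The untargeted pattern-recognition step, PCA on densitometric fingerprints, can be sketched with a plain SVD-based PCA (toy data of ours; the marker band index and the intensity shift are invented, not taken from the ginger dataset):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project samples onto the leading principal components (SVD-based PCA)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# toy densitometric fingerprints: 6 'fresh' and 6 'dried' samples, 50 bands;
# processing depletes one hypothetical marker band (index 10)
rng = np.random.default_rng(2)
fresh = rng.normal(1.0, 0.05, (6, 50))
dried = rng.normal(1.0, 0.05, (6, 50))
dried[:, 10] -= 0.8

X = np.vstack([fresh, dried])
pc1 = pca_scores(X)[:, 0]

# PC1 cleanly separates the two processing groups along the marker-band axis
assert (pc1[:6].max() < pc1[6:].min()) or (pc1[6:].max() < pc1[:6].min())
```

    In the study, such score plots are what reveal that differently processed ginger forms cluster apart.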

  9. Spatial-frequency cutoff requirements for pattern recognition in central and peripheral vision

    PubMed Central

    Kwon, MiYoung; Legge, Gordon E.

    2011-01-01

    It is well known that object recognition requires spatial frequencies exceeding some critical cutoff value. People with central scotomas who rely on peripheral vision have substantial difficulty with reading and face recognition. Deficiencies of pattern recognition in peripheral vision might result in higher cutoff requirements and may contribute to the functional problems of people with central-field loss. Here we asked about differences in spatial-cutoff requirements in central and peripheral vision for letter and face recognition. The stimuli were the 26 letters of the English alphabet and 26 celebrity faces. Each image was blurred using a low-pass filter in the spatial frequency domain. Critical cutoffs (defined as the minimum low-pass filter cutoff yielding 80% accuracy) were obtained by measuring recognition accuracy as a function of cutoff (in cycles per object). Our data showed that critical cutoffs increased from central to peripheral vision by 20% for letter recognition and by 50% for face recognition. We asked whether these differences could be accounted for by central/peripheral differences in the contrast sensitivity function (CSF). We addressed this question by implementing an ideal-observer model which incorporates empirical CSF measurements and tested the model on letter and face recognition. The success of the model indicates that central/peripheral differences in the cutoff requirements for letter and face recognition can be accounted for by the information content of the stimulus limited by the shape of the human CSF, combined with a source of internal noise and followed by an optimal decision rule. PMID:21854800
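    The low-pass filtering described above, with the cutoff expressed in cycles per image, can be sketched as an ideal radial filter in the Fourier domain (a minimal version of ours; the study's exact filter parameters may differ):

```python
import numpy as np

def lowpass(img, cutoff):
    """Low-pass filter an image, keeping radial spatial frequencies up to
    `cutoff` cycles per image (ideal filter)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.arange(h) - h // 2                  # cycles per image, vertical
    fx = np.arange(w) - w // 2                  # cycles per image, horizontal
    radius = np.hypot(fy[:, None], fx[None, :])
    F[radius > cutoff] = 0.0                    # zero out high frequencies
    return np.fft.ifft2(np.fft.ifftshift(F)).real

# a pure 4-cycles-per-image grating survives an 8-cycle cutoff ...
x = np.arange(64)
grating = np.cos(2 * np.pi * 4 * x / 64)[None, :].repeat(64, axis=0)
assert np.allclose(lowpass(grating, 8), grating, atol=1e-10)
# ... but is removed entirely by a 2-cycle cutoff
assert np.allclose(lowpass(grating, 2), 0.0, atol=1e-10)
```

    Sweeping the cutoff and measuring recognition accuracy at each value is how a critical cutoff is estimated.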

  10. Robust Optical Recognition of Cursive Pashto Script Using Scale, Rotation and Location Invariant Approach

    PubMed Central

    Ahmad, Riaz; Naz, Saeeda; Afzal, Muhammad Zeshan; Amin, Sayed Hassan; Breuel, Thomas

    2015-01-01

    The presence of a large number of unique shapes called ligatures in cursive languages, along with variations due to scaling, orientation and location, provides one of the most challenging pattern recognition problems. Recognition of the large number of ligatures is often a complicated task in oriental languages such as Pashto, Urdu, Persian and Arabic. Research on cursive script recognition often ignores the fact that scaling, orientation, location and font variations are common in printed cursive text. Therefore, these variations are not included in image databases and in experimental evaluations. This research uncovers challenges faced by Arabic cursive script recognition in a holistic framework by considering Pashto as a test case, because the Pashto language has a larger alphabet set than Arabic, Persian and Urdu. A database containing 8000 images of 1000 unique ligatures having scaling, orientation and location variations is introduced. In this article, a feature space based on the scale invariant feature transform (SIFT) along with a segmentation framework has been proposed for overcoming the above-mentioned challenges. The experimental results show a significantly improved performance of the proposed scheme over traditional feature extraction techniques such as principal component analysis (PCA). PMID:26368566

  11. Optical computing.

    NASA Technical Reports Server (NTRS)

    Stroke, G. W.

    1972-01-01

    Applications of the optical computer include an approach for increasing the sharpness of images obtained from the most powerful electron microscopes and fingerprint/credit card identification. The information-handling capability of the various optical computing processes is very great. Modern synthetic-aperture radars scan upward of 100,000 resolvable elements per second. Fields which have assumed major importance on the basis of optical computing principles are optical image deblurring, coherent side-looking synthetic-aperture radar, and correlative pattern recognition. Some examples of the most dramatic image deblurring results are shown.

  12. Advanced brain aging: relationship with epidemiologic and genetic risk factors, and overlap with Alzheimer disease atrophy patterns.

    PubMed

    Habes, M; Janowitz, D; Erus, G; Toledo, J B; Resnick, S M; Doshi, J; Van der Auwera, S; Wittfeld, K; Hegenscheid, K; Hosten, N; Biffar, R; Homuth, G; Völzke, H; Grabe, H J; Hoffmann, W; Davatzikos, C

    2016-04-05

    We systematically compared structural imaging patterns of advanced brain aging (ABA) in the general population, herein defined as significant deviation from typical brain aging (BA), to those found in Alzheimer disease (AD). The hypothesis that ABA would show different patterns of structural change compared with those found in AD was tested via advanced pattern analysis methods. In particular, magnetic resonance images of 2705 participants from the Study of Health in Pomerania (aged 20-90 years) were analyzed using an index that captures aging atrophy patterns (Spatial Pattern of Atrophy for Recognition of BA (SPARE-BA)), and an index previously shown to capture atrophy patterns found in clinical AD (Spatial Patterns of Abnormality for Recognition of Early Alzheimer's Disease (SPARE-AD)). We studied the association between these indices and risk factors, including an AD polygenic risk score. Finally, we compared the ABA-associated atrophy with typical AD-like patterns. We observed that SPARE-BA had significant association with: smoking (P<0.05), anti-hypertensive (P<0.05), anti-diabetic drug use (men P<0.05, women P=0.06) and waist circumference for the male cohort (P<0.05), after adjusting for age. Subjects with ABA had spatially extensive gray matter loss in the frontal, parietal and temporal lobes (false-discovery-rate-corrected q<0.001). ABA patterns of atrophy were partially overlapping with, but notably deviating from, those typically found in AD. Subjects with ABA had higher SPARE-AD values, largely due to the partial spatial overlap of associated patterns in temporal regions. The AD polygenic risk score was significantly associated with SPARE-AD but not with SPARE-BA. Our findings suggest that ABA is likely characterized by pathophysiologic mechanisms that are distinct from, or only partially overlapping with, those of AD.

  13. Line fitting based feature extraction for object recognition

    NASA Astrophysics Data System (ADS)

    Li, Bing

    2014-06-01

Image feature extraction plays a significant role in image-based pattern recognition applications. In this paper, we propose a new approach to generating hierarchical features. The approach applies line fitting to adaptively divide regions based upon the amount of information they contain, and creates line fitting features for each resulting region. It overcomes the feature-wasting drawback of wavelet-based approaches and demonstrates high performance in real applications. For gray-scale images, we propose a diffusion equation approach that maps information-rich pixels (pixels near edges and ridge pixels) to high values, and pixels in homogeneous regions to small values near zero, forming energy map images. After the energy map images are generated, we apply a line fitting approach that divides regions recursively and creates features for each region simultaneously. This feature extraction approach is similar to wavelet-based hierarchical feature extraction, in which high-layer features represent global characteristics and low-layer features represent local characteristics. However, the new approach uses line fitting to adaptively focus on information-rich regions, avoiding the feature waste of the wavelet approach in homogeneous regions. Finally, experiments on handwritten word recognition show that the new method provides higher performance than the regular handwritten word recognition approach.
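The energy-map step described above can be sketched in a few lines. This is a minimal illustration, assuming a plain finite-difference gradient magnitude in place of the paper's diffusion-equation formulation (an assumption made here for brevity): pixels near edges map to high values, homogeneous regions to values near zero.

```python
# Sketch of an "energy map": high values near edges, near-zero values in
# homogeneous regions. The paper derives its map from a diffusion equation;
# here we approximate the idea with a gradient magnitude (our assumption).
import math

def energy_map(img):
    """img: 2-D list of grayscale values -> 2-D list of edge energies."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences, clamped at the image borders.
            gx = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
            gy = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
            out[y][x] = math.hypot(gx, gy)
    return out

# A flat region next to a step edge: energy concentrates at the boundary.
img = [[0, 0, 0, 9, 9, 9] for _ in range(4)]
emap = energy_map(img)
```

Line fitting would then recurse on regions whose summed energy exceeds a threshold, which is the adaptive-subdivision step the abstract describes.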

  14. Automated thematic mapping and change detection of ERTS-A images. [farmlands, cities, and mountain identification in Utah, Washington, Arizona, and California

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. A diffraction pattern analysis of MSS images led to the development of spatial signatures for farm land, urban areas and mountains. Four spatial features are employed to describe the spatial characteristics of image cells in the digital data. Three spectral features are combined with the spatial features to form a seven dimensional vector describing each cell. Then, the classification of the feature vectors is accomplished by using the maximum likelihood criterion. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month, but vary substantially between seasons. Three ERTS-1 images from the Phoenix, Arizona area were processed, and recognition rates between 85% and 100% were obtained for the terrain classes of desert, farms, mountains, and urban areas. To eliminate the need for training data, a new clustering algorithm has been developed. Seven ERTS-1 images from four test sites have been processed through the clustering algorithm, and high recognition rates have been achieved for all terrain classes.
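The maximum likelihood criterion applied to the seven-dimensional feature vectors can be sketched as follows. This is an illustrative stand-in: the toy 2-D vectors and the diagonal-covariance Gaussian model are assumptions made here, not the study's actual class statistics.

```python
# Sketch of maximum-likelihood terrain classification: estimate per-class
# Gaussian statistics from training cells, then assign a new cell to the
# class with the highest log-likelihood. Diagonal covariances are assumed.
import math

def fit_class(samples):
    """Estimate per-dimension mean and variance for one terrain class."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    var = [max(sum((s[i] - mean[i]) ** 2 for s in samples) / n, 1e-6)
           for i in range(d)]
    return mean, var

def log_likelihood(x, mean, var):
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, mean, var))

def classify(x, classes):
    """classes: dict name -> (mean, var). Returns the ML class label."""
    return max(classes, key=lambda c: log_likelihood(x, *classes[c]))

# Toy 2-D vectors standing in for the 7-D spatial+spectral features.
classes = {
    "farm":   fit_class([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]),
    "desert": fit_class([[5.0, 5.2], [4.8, 5.0], [5.1, 4.9]]),
}
label = classify([4.9, 5.1], classes)
```

The abstract's observation that accuracy depends on the statistics of the feature vectors corresponds to how well this Gaussian assumption fits each class.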

  15. Automated thematic mapping and change detection of ERTS-A images. [digital interpretation of Arizona imagery

    NASA Technical Reports Server (NTRS)

    Gramenopoulos, N. (Principal Investigator)

    1973-01-01

    The author has identified the following significant results. For the recognition of terrain types, spatial signatures are developed from the diffraction patterns of small areas of ERTS-1 images. This knowledge is exploited for the measurements of a small number of meaningful spatial features from the digital Fourier transforms of ERTS-1 image cells containing 32 x 32 picture elements. Using these spatial features and a heuristic algorithm, the terrain types in the vicinity of Phoenix, Arizona were recognized by the computer with a high accuracy. Then, the spatial features were combined with spectral features and using the maximum likelihood criterion the recognition accuracy of terrain types increased substantially. It was determined that the recognition accuracy with the maximum likelihood criterion depends on the statistics of the feature vectors. Nonlinear transformations of the feature vectors are required so that the terrain class statistics become approximately Gaussian. It was also determined that for a given geographic area the statistics of the classes remain invariable for a period of a month but vary substantially between seasons.
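The idea of extracting a small number of spatial features from the Fourier transform of an image cell can be sketched as below. The record uses 32 x 32 cells; this toy version uses a tiny cell and splits spectral energy into low- and high-frequency bands (the specific band split is our assumption, not the study's feature definition).

```python
# Sketch: spatial features from the 2-D DFT of an image cell. Smooth
# terrain concentrates energy at low frequencies; fine texture at high.
import cmath

def dft2(cell):
    """Naive 2-D DFT of a square cell; returns the magnitude matrix."""
    n = len(cell)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(cell[y][x] * cmath.exp(-2j * cmath.pi * (u * y + v * x) / n)
                    for y in range(n) for x in range(n))
            out[u][v] = abs(s)
    return out

def spatial_features(cell):
    """Return (low-frequency energy, high-frequency energy), DC excluded."""
    mag = dft2(cell)
    n = len(cell)
    low = high = 0.0
    for u in range(n):
        for v in range(n):
            if (u, v) == (0, 0):
                continue                          # skip the DC (mean) term
            r = min(u, n - u) + min(v, n - v)     # wrap-around frequency radius
            if r <= n // 4:
                low += mag[u][v] ** 2
            else:
                high += mag[u][v] ** 2
    return low, high

# A smooth ramp is mostly low-frequency; a checkerboard mostly high.
ramp = [[x for x in range(4)] for _ in range(4)]
checker = [[(x + y) % 2 for x in range(4)] for y in range(4)]
low_r, high_r = spatial_features(ramp)
low_c, high_c = spatial_features(checker)
```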

  16. Deep neural network features for horses identity recognition using multiview horses' face pattern

    NASA Astrophysics Data System (ADS)

    Jarraya, Islem; Ouarda, Wael; Alimi, Adel M.

    2017-03-01

To monitor the state of horses in the barn, breeders need a monitoring system with a surveillance camera that can identify and distinguish between horses. We proposed in [5] a method for identifying horses at a distance using the frontal facial biometric modality. Due to changes of view, face recognition becomes more difficult. In this paper, the number of images in our THoDBRL'2015 database (Tunisian Horses DataBase of Regim Lab) is augmented by adding images of other views; thus, we use front, right-profile and left-profile face views. Moreover, we suggest an approach for multiview face recognition. First, we propose to use the Gabor filter for face characterization. Next, because of the augmented number of images and the large number of Gabor features, we propose to apply a Deep Neural Network with an auto-encoder to obtain the most pertinent features and to reduce the size of the feature vector. Finally, we evaluate the proposed approach on our THoDBRL'2015 database, using a linear SVM for classification.
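The Gabor-characterization step can be sketched as follows: build a small bank of Gabor kernels at several orientations and take the mean filter-response magnitude per kernel as a feature. The kernel parameters below are illustrative assumptions, not the values used in the paper.

```python
# Sketch of Gabor-based characterization: oriented kernels respond most
# strongly to image structure aligned with their orientation.
import math

def gabor_kernel(theta, size=7, sigma=2.0, wavelength=4.0):
    """Real part of a 2-D Gabor kernel oriented at angle theta (radians)."""
    half = size // 2
    k = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        k.append(row)
    return k

def filter_response(img, kernel):
    """Mean absolute response of a valid convolution (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    total, count = 0.0, 0
    for y in range(len(img) - kh + 1):
        for x in range(len(img[0]) - kw + 1):
            acc = sum(img[y + i][x + j] * kernel[i][j]
                      for i in range(kh) for j in range(kw))
            total += abs(acc)
            count += 1
    return total / max(count, 1)

# Vertical stripes respond most to the 0-radian kernel in the bank.
stripes = [[(x // 2) % 2 * 9 for x in range(12)] for _ in range(12)]
bank = [gabor_kernel(t) for t in (0.0, math.pi / 4, math.pi / 2)]
features = [filter_response(stripes, k) for k in bank]
```

In the paper this feature vector is then compressed by an auto-encoder before the linear SVM; here we stop at the raw responses.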

  17. Segmenting Images for a Better Diagnosis

    NASA Technical Reports Server (NTRS)

    2004-01-01

NASA's Hierarchical Segmentation (HSEG) software has been adapted by Bartron Medical Imaging, LLC, for use in segmentation, feature extraction, pattern recognition, and classification of medical images. Bartron acquired licenses from NASA Goddard Space Flight Center for application of the HSEG concept to medical imaging, from the California Institute of Technology/Jet Propulsion Laboratory to incorporate pattern-matching software, and from Kennedy Space Center for data-mining and edge-detection programs. The Med-Seg[TM] unit developed by Bartron provides improved diagnoses for a wide range of medical images, including computed tomography scans, positron emission tomography scans, magnetic resonance imaging, ultrasound, digitized X-ray, digitized mammography, dental X-ray, soft tissue analysis, and moving object analysis. It can also be used in the analysis of soft-tissue slides. Bartron's future plans include the application of HSEG technology to drug development. NASA is advancing its HSEG software to learn more about the Earth's magnetosphere.

  18. Towards a computer-aided diagnosis system for vocal cord diseases.

    PubMed

    Verikas, A; Gelzinis, A; Bacauskiene, M; Uloza, V

    2006-01-01

    The objective of this work is to investigate a possibility of creating a computer-aided decision support system for an automated analysis of vocal cord images aiming to categorize diseases of vocal cords. The problem is treated as a pattern recognition task. To obtain a concise and informative representation of a vocal cord image, colour, texture, and geometrical features are used. The representation is further analyzed by a pattern classifier categorizing the image into healthy, diffuse, and nodular classes. The approach developed was tested on 785 vocal cord images collected at the Department of Otolaryngology, Kaunas University of Medicine, Lithuania. A correct classification rate of over 87% was obtained when categorizing a set of unseen images into the aforementioned three classes. Bearing in mind the high similarity of the decision classes, the results obtained are rather encouraging and the developed tools could be very helpful for assuring objective analysis of the images of laryngeal diseases.

  19. Representation, Modeling and Recognition of Outdoor Scenes

    DTIC Science & Technology

    1994-04-01

B. C. Vemuri and R. Malladi. Deformable models: Canonical parameters for surface representation and multiple view integration. In Conference on...or a high disparity gradient. If both L-R and R-L disparity images are made available, then mirror images of this pattern may be sought in the two...et al., 1991, Terzopoulos and Vasilescu, 1991, Vemuri and Malladi, 1991], parameterized surfaces [Stokely and Wu, 1992, Lowe, 1991], local surfaces

  20. Profiling and sorting Mangifera Indica morphology for quality attributes and grade standards using integrated image processing algorithms

    NASA Astrophysics Data System (ADS)

    Balbin, Jessie R.; Fausto, Janette C.; Janabajab, John Michael M.; Malicdem, Daryl James L.; Marcelo, Reginald N.; Santos, Jan Jeffrey Z.

    2017-06-01

Mango production is highly vital in the Philippines. It is essential to the food industry, as mangoes are used in markets and restaurants daily. The quality of mangoes can affect the income of a mango farmer; harvesting at the wrong time results in the loss of quality mangoes and income. Scientific farming, supported by new instruments, is much needed nowadays because wastage of mangoes increases annually due to poor quality. This research paper focuses on profiling and sorting of Mangifera Indica using image processing techniques and pattern recognition. The image of a mango is captured on a weekly basis from its early stage. In this study, the researchers monitor the growth and color transition of a mango for profiling purposes. Actual dimensions of the mango are determined through image conversion and determination of pixel and RGB values in MATLAB. A program is developed to determine the range of the maximum size of a standard ripe mango. Hue, lightness, saturation (HSL) correction is used in the filtering process to ensure the exactness of the RGB values of a mango subject. By a pattern recognition technique, the program can determine whether a mango is standard and ready to be exported.
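The grading decision described above can be sketched as a rule-based check of mean RGB values and measured size against reference ranges. The thresholds below are illustrative assumptions; the paper derives its actual ranges from weekly profiling in MATLAB.

```python
# Sketch of the sorting rule: a mango is export-ready if its mean skin
# color falls inside assumed "ripe" RGB ranges and its size is within the
# assumed standard limit. All numeric thresholds here are hypothetical.
RIPE_RGB = {"r": (180, 255), "g": (120, 200), "b": (0, 90)}   # assumed ranges
MAX_LENGTH_CM = 14.0                                          # assumed size cap

def mean_rgb(pixels):
    """pixels: list of (r, g, b) tuples -> per-channel means."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def is_export_ready(pixels, length_cm):
    r, g, b = mean_rgb(pixels)
    in_range = (RIPE_RGB["r"][0] <= r <= RIPE_RGB["r"][1]
                and RIPE_RGB["g"][0] <= g <= RIPE_RGB["g"][1]
                and RIPE_RGB["b"][0] <= b <= RIPE_RGB["b"][1])
    return in_range and length_cm <= MAX_LENGTH_CM

ripe_patch = [(210, 160, 40)] * 100    # yellow-orange skin sample
green_patch = [(80, 150, 60)] * 100    # unripe skin sample
```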

  1. Optimization of spectral bands for hyperspectral remote sensing of forest vegetation

    NASA Astrophysics Data System (ADS)

    Dmitriev, Egor V.; Kozoderov, Vladimir V.

    2013-10-01

Optimization principles for selecting the most informative spectral channels in hyperspectral remote sensing data processing serve to enhance the efficiency of the high-performance computers employed. The problem of pattern recognition of remotely sensed land surface objects, with an accent on forests, is outlined from the point of view of spectral channel optimization on the processed hyperspectral images. The relevant computational procedures are tested using images obtained by a Russian-built hyperspectral camera installed on a gyro-stabilized platform for airborne flight campaigns. A Bayesian classifier is used for the pattern recognition of forests of different tree species and ages. The probabilistically optimal algorithm, constructed on the basis of the maximum likelihood principle, is described; it minimizes the probability of misclassification given by this classifier. The classification error is the main quantity used to estimate the accuracy of the applied algorithm via the well-known holdout cross-validation method. Details of the related techniques are presented. Results are shown for selecting the spectral channels of the camera while processing the images, taking into account radiometric distortions that diminish the classification accuracy. The spectral channels are selected for the subclasses extracted by the proposed validation techniques, and confusion matrices are constructed that characterize the age composition of the classified pine species as well as the broad age-class recognition for the pine and birch species with fully illuminated crown parts.

  2. Frontal sinus recognition for human identification

    NASA Astrophysics Data System (ADS)

    Falguera, Juan Rogelio; Falguera, Fernanda Pereira Sartori; Marana, Aparecido Nilceu

    2008-03-01

Many methods based on biometrics such as fingerprint, face, iris, and retina have been proposed for person identification. However, for deceased individuals, such biometric measurements are not available. In such cases, parts of the human skeleton can be used for identification, such as dental records, thorax, vertebrae, shoulder, and frontal sinus. It has been established in prior investigations that the radiographic pattern of the frontal sinus is highly variable and unique for every individual. This has stimulated the proposition of measurements of the frontal sinus pattern, obtained from X-ray films, for skeletal identification. This paper presents a frontal sinus recognition method for human identification based on the Image Foresting Transform and shape context. Experimental results (EER = 5.82%) have shown the effectiveness of the proposed method.

  3. Influence of encoding instructions and response bias on cross-cultural differences in specific recognition.

    PubMed

    Paige, Laura E; Amado, Selen; Gutchess, Angela H

    2017-10-01

    Prior cross-cultural research has reported cultural variations in memory. One study revealed that Americans remembered images with more perceptual detail than East Asians (Millar et al. in Cult Brain 1(2-4):138-157, 2013). However, in a later study, this expected pattern was not replicated, possibly due to differences in encoding instructions (Paige et al. in Cortex 91:250-261, 2017). The present study sought to examine when cultural variation in memory-related decisions occur and the role of instructions. American and East Asian participants viewed images of objects while making a Purchase decision or an Approach decision and later completed a surprise recognition test. Results revealed Americans had higher hit rates for specific memory, regardless of instruction type, and a less stringent response criterion relative to East Asians. Additionally, a pattern emerged where the Approach decision enhanced hit rates for specific memory relative to the Purchase decision only when administered first; this pattern did not differ across cultures. Results suggest encoding instructions do not magnify cross-cultural differences in memory. Ultimately, cross-cultural differences in response bias, rather than memory sensitivity per se, may account for findings of cultural differences in memory specificity.

  4. Research on gesture recognition of augmented reality maintenance guiding system based on improved SVM

    NASA Astrophysics Data System (ADS)

    Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi

    2014-09-01

Interaction is one of the key techniques of an augmented reality (AR) maintenance guiding system. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the whole process of gesture recognition can be divided into three stages: gesture segmentation, gesture characteristic feature modeling, and gesture recognition. In the segmentation stage, to resolve the misrecognition of skin-like regions, a segmentation algorithm combining a background model and skin color is adopted to exclude such regions. In the feature modeling stage, plenty of characteristic features are analyzed and acquired, such as structure characteristics, Hu invariant moments, and Fourier descriptors. In the recognition stage, a classifier based on the Support Vector Machine (SVM) is introduced into the augmented reality maintenance guiding process. The SVM is a learning method based on statistical learning theory, possessing a solid theoretical foundation and excellent learning ability; it has been applied to many problems in machine learning and offers special advantages in dealing with small samples and non-linear, high-dimensional pattern recognition. The gesture recognition of the augmented reality maintenance guiding system is realized by the SVM after granulation of all the characteristic features. The experimental results of the simulation of number gesture recognition and its application in an augmented reality maintenance guiding system show that the real-time performance and robustness of gesture recognition of the AR maintenance guiding system can be greatly enhanced by the improved SVM.
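The SVM recognition stage can be sketched with a minimal linear SVM trained by a simple sub-gradient (Pegasos-style) update on the hinge loss. This stands in for the paper's improved SVM over the granulated gesture features; the toy feature values and hyper-parameters are illustrative assumptions.

```python
# Sketch of a linear SVM for the recognition stage: hinge-loss
# sub-gradient training (Pegasos-style) on 2-D toy "gesture features".
import random

def train_linear_svm(xs, ys, lam=0.01, epochs=200, seed=0):
    """xs: feature vectors, ys: labels in {-1, +1}. Returns (w, b)."""
    rng = random.Random(seed)
    w, b = [0.0] * len(xs[0]), 0.0
    t = 0
    for _ in range(epochs):
        idx = list(range(len(xs)))
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            margin = ys[i] * (sum(wj * xj for wj, xj in zip(w, xs[i])) + b)
            # Sub-gradient step on the regularized hinge loss.
            w = [wj * (1 - eta * lam) for wj in w]
            if margin < 1:
                w = [wj + eta * ys[i] * xj for wj, xj in zip(w, xs[i])]
                b += eta * ys[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy 2-D features (e.g. two Hu-moment-like values) for two gesture classes.
xs = [[0.2, 0.1], [0.3, 0.2], [0.1, 0.3], [0.9, 0.8], [0.8, 0.9], [1.0, 0.7]]
ys = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(xs, ys)
```

A practical system would use a kernel SVM over the full feature set; the linear toy version only illustrates the decision-function idea.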

  5. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on the local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found showing high FER rates. Evaluation of the adaptive texture features shows competitive and higher performance than the nonadaptive features and other state-of-the-art approaches, respectively.
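The center-symmetric LBP (CS-LBP) feature mentioned above can be sketched as follows: each pixel compares the four center-symmetric pairs of its 8-neighbourhood, giving a 4-bit code (16 histogram bins). Radius 1 and a zero threshold are illustrative defaults; the paper chooses the neighbourhood size adaptively from granulometric information.

```python
# Sketch of CS-LBP: compare opposite neighbours across the center pixel
# and histogram the resulting 4-bit codes over the image.

def cs_lbp_code(img, y, x, threshold=0.0):
    """4-bit CS-LBP code at (y, x); neighbours taken at radius 1."""
    # The four center-symmetric neighbour pairs around (y, x).
    pairs = [((y - 1, x - 1), (y + 1, x + 1)),
             ((y - 1, x),     (y + 1, x)),
             ((y - 1, x + 1), (y + 1, x - 1)),
             ((y, x + 1),     (y, x - 1))]
    code = 0
    for bit, ((y1, x1), (y2, x2)) in enumerate(pairs):
        if img[y1][x1] - img[y2][x2] > threshold:
            code |= 1 << bit
    return code

def cs_lbp_histogram(img):
    """16-bin histogram of CS-LBP codes over all interior pixels."""
    hist = [0] * 16
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[cs_lbp_code(img, y, x)] += 1
    return hist

flat = [[5] * 5 for _ in range(5)]        # uniform patch -> all-zero codes
edge = [[0, 0, 9, 9, 9] for _ in range(5)]  # vertical step edge
hf = cs_lbp_histogram(flat)
he = cs_lbp_histogram(edge)
```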

  6. Artificial neural network classification using a minimal training set - Comparison to conventional supervised classification

    NASA Technical Reports Server (NTRS)

    Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin

    1990-01-01

Recent research has shown artificial neural networks (ANNs) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of ANN classification with conventional supervised classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.

  7. Differentiation Between Organic and Non-Organic Apples Using Diffraction Grating and Image Processing-A Cost-Effective Approach.

    PubMed

    Jiang, Nanfeng; Song, Weiran; Wang, Hui; Guo, Gongde; Liu, Yuanyuan

    2018-05-23

As the expectation for higher quality of life increases, consumers have higher demands for quality food. Food authentication is the technical means of ensuring food is what it says it is. A popular approach to food authentication is based on spectroscopy, which has been widely used for identifying and quantifying the chemical components of an object. This approach is non-destructive and effective but expensive. This paper presents a computer vision-based sensor system for food authentication, i.e., differentiating organic from non-organic apples. The sensor system consists of low-cost hardware and pattern recognition software. We use a flashlight to illuminate apples and capture their images through a diffraction grating. These diffraction images are then converted into a data matrix for classification by pattern recognition algorithms, including k-nearest neighbors (k-NN), support vector machine (SVM) and three partial least squares discriminant analysis (PLS-DA)-based methods. We carry out experiments on a reasonable collection of apple samples and employ appropriate pre-processing, achieving a classification accuracy of up to 94%. Our studies conclude that this sensor system has the potential to provide a viable solution to empower consumers in food authentication.
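The k-NN step can be sketched as below: classify an apple's diffraction-derived feature vector by majority vote among its k nearest training samples under Euclidean distance. The 2-D feature vectors are toy stand-ins for the real data matrix.

```python
# Sketch of k-NN classification over diffraction-image feature vectors.
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Hypothetical 2-D features standing in for the diffraction data matrix.
train = [
    ([0.9, 0.2], "organic"),
    ([1.0, 0.3], "organic"),
    ([0.8, 0.25], "organic"),
    ([0.2, 0.9], "non-organic"),
    ([0.3, 1.0], "non-organic"),
    ([0.25, 0.8], "non-organic"),
]
```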

  8. Rapid detection of malignant bio-species using digital holographic pattern recognition and nano-photonics

    NASA Astrophysics Data System (ADS)

    Sarkisov, Sergey S.; Kukhtareva, Tatiana; Kukhtarev, Nickolai V.; Curley, Michael J.; Edwards, Vernessa; Creer, Marylyn

    2013-03-01

There is a great need for rapid detection of bio-hazardous species, particularly in applications to food safety and biodefense. It has recently been demonstrated that colonies of various bio-species can be rapidly detected using culture-specific and reproducible patterns generated by scattered non-coherent light. However, the method relies heavily on a digital pattern recognition algorithm, which is rather complex, requires substantial computational power, and is prone to ambiguities due to shift, scale, or orientation mismatch between the analyzed pattern and the reference from the library. The method could be improved if, in addition to the intensity of the scattered optical wave, its phase were also simultaneously recorded and used for digital holographic pattern recognition. In this feasibility study the research team recorded digital Gabor-type (in-line) holograms of colonies of micro-organisms, such as Salmonella, with a laser diode as a low-coherence light source and a lensless high-resolution (2.0x2.0 micron pixel pitch) digital image sensor. The colonies were grown in conventional Petri dishes using standard methods. The digitally recorded holograms were used for computational reconstruction of the amplitude and phase information of the optical wave diffracted by the colonies. In addition, pattern recognition of colony fragments using the cross-correlation between digital holograms was implemented. Colonies of the mold fungi Alternaria sp., Rhizopus sp., and Aspergillus sp. also generated nano-colloidal silver during their growth in specially prepared matrices. The silver-specific plasmonic optical extinction peak at 410 nm was also used for rapid detection and growth monitoring of the fungi colonies.
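The cross-correlation matching of colony fragments can be sketched as below: slide a fragment template over an image and keep the offset with the highest normalized cross-correlation score. This brute-force, pure-Python version is illustrative only; the study correlates digital holograms rather than plain intensity patches.

```python
# Sketch of fragment matching by normalized cross-correlation (NCC).
import math

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    n = len(template) * len(template[0])
    mp = sum(map(sum, patch)) / n
    mt = sum(map(sum, template)) / n
    num = sp = st = 0.0
    for pr, tr in zip(patch, template):
        for p, t in zip(pr, tr):
            num += (p - mp) * (t - mt)
            sp += (p - mp) ** 2
            st += (t - mt) ** 2
    denom = math.sqrt(sp * st)
    return num / denom if denom else 0.0

def best_match(image, template):
    """Slide the template over the image; return (score, (y, x))."""
    th, tw = len(template), len(template[0])
    best = (-2.0, (0, 0))
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            best = max(best, (ncc(patch, template), (y, x)))
    return best

image = [[0, 0, 0, 0, 0],
         [0, 9, 8, 0, 0],
         [0, 7, 9, 0, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 8], [7, 9]]
score, loc = best_match(image, template)
```

NCC is invariant to brightness offset and gain, but, as the abstract notes for intensity-only matching, not to shift beyond the search grid, scale, or rotation.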

  9. Use of Biometrics within Sub-Saharan Refugee Communities

    DTIC Science & Technology

    2013-12-01

Biometrics typically comprises fingerprint patterns, iris pattern recognition, and facial recognition as a means of establishing an individual’s identity. Biometrics creates and...authentication because it identifies an individual based on mathematical analysis of the random pattern visible within the iris. Facial recognition is

  10. Creating a meaningful visual perception in blind volunteers by optic nerve stimulation

    NASA Astrophysics Data System (ADS)

    Brelén, M. E.; Duret, F.; Gérard, B.; Delbeke, J.; Veraart, C.

    2005-03-01

    A blind volunteer, suffering from retinitis pigmentosa, has been chronically implanted with an optic nerve visual prosthesis. Vision rehabilitation with this volunteer has concentrated on the development of a stimulation strategy according to which video camera images are converted into stimulation pulses. The aim is to convey as much information as possible about the visual scene within the limits of the device's capabilities. Pattern recognition tasks were used to assess the effectiveness of the stimulation strategy. The results demonstrate how even a relatively basic algorithm can efficiently convey useful information regarding the visual scene. By increasing the number of phosphenes used in the algorithm, better performance is observed but a longer training period is required. After a learning period, the volunteer achieved a pattern recognition score of 85% at 54 s on average per pattern. After nine evaluation sessions, when using a stimulation strategy exploiting all available phosphenes, no saturation effect has yet been observed.

  11. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex.

    PubMed Central

    Malach, R; Reppas, J B; Benson, R R; Kwong, K K; Jiang, H; Kennedy, W A; Ledden, P J; Brady, T J; Rosen, B R; Tootell, R B

    1995-01-01

    The stages of integration leading from local feature analysis to object recognition were explored in human visual cortex by using the technique of functional magnetic resonance imaging. Here we report evidence for object-related activation. Such activation was located at the lateral-posterior aspect of the occipital lobe, just abutting the posterior aspect of the motion-sensitive area MT/V5, in a region termed the lateral occipital complex (LO). LO showed preferential activation to images of objects, compared to a wide range of texture patterns. This activation was not caused by a global difference in the Fourier spatial frequency content of objects versus texture images, since object images produced enhanced LO activation compared to textures matched in power spectra but randomized in phase. The preferential activation to objects also could not be explained by different patterns of eye movements: similar levels of activation were observed when subjects fixated on the objects and when they scanned the objects with their eyes. Additional manipulations such as spatial frequency filtering and a 4-fold change in visual size did not affect LO activation. These results suggest that the enhanced responses to objects were not a manifestation of low-level visual processing. A striking demonstration that activity in LO is uniquely correlated to object detectability was produced by the "Lincoln" illusion, in which blurring of objects digitized into large blocks paradoxically increases their recognizability. Such blurring led to significant enhancement of LO activation. Despite the preferential activation to objects, LO did not seem to be involved in the final, "semantic," stages of the recognition process. Thus, objects varying widely in their recognizability (e.g., famous faces, common objects, and unfamiliar three-dimensional abstract sculptures) activated it to a similar degree. 
These results are thus evidence for an intermediate link in the chain of processing stages leading to object recognition in human visual cortex. PMID:7667258

  12. Rotation-invariant neural pattern recognition system with application to coin recognition.

    PubMed

    Fukumi, M; Omatu, S; Takeda, F; Kosaka, T

    1992-01-01

In pattern recognition, it is often necessary to deal with the problem of classifying a transformed pattern. A neural pattern recognition system that is insensitive to rotations of the input pattern by various degrees is proposed. The system consists of a fixed invariance network with many slabs and a trainable multilayered network. The system was applied to a rotation-invariant coin recognition problem to distinguish between a 500 yen coin and a 500 won coin. The results show that the approach works well for recognition of patterns under variable rotation.

  13. Facial Recognition in a Group-Living Cichlid Fish.

    PubMed

    Kohda, Masanori; Jordan, Lyndon Alexander; Hotta, Takashi; Kosaka, Naoya; Karino, Kenji; Tanaka, Hirokazu; Taniyama, Masami; Takeyama, Tomohiro

    2015-01-01

The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, are based on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals are established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal fish were exposed to digital images showing four different combinations of familiar and unfamiliar face and body colorations. Focal fish attended to digital models with unfamiliar faces longer, and from a greater distance, than to models with familiar faces. These results strongly suggest that fish can distinguish individuals accurately using facial colour patterns. Our observations also suggest that fish are able to rapidly (≤ 0.5 sec) discriminate between familiar and unfamiliar individuals, a speed of recognition comparable to primates including humans.

  14. Research of Daily Conversation Transmitting System Based on Mouth Part Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Watanabe, Mutsumi; Nishi, Natsuko

The authors are developing a vision-based intention transfer technique that recognizes the user's face expressions and movements, to support free and convenient communication for aged or disabled persons who find it difficult to talk, to discriminate small character prints, and to operate keyboards by hand. In this paper we report a prototype system in which layered daily conversations are successively selected by recognizing transitions in the shape of the user's mouth, using image sequences from a camera placed in front of the user. Four mouth part patterns are used in the system. A method that automatically recognizes these patterns by analyzing the intensity histogram data around the mouth region is newly developed. Confirmation of a selection along the way is executed by detecting open and shut movements of the mouth through temporal changes in the intensity histogram data. The method has been implemented on a desktop PC in VC++. Experimental results of mouth shape pattern recognition by twenty-five persons have shown the effectiveness of the method.

  15. Computer-based classification of bacteria species by analysis of their colonies Fresnel diffraction patterns

    NASA Astrophysics Data System (ADS)

    Suchwalko, Agnieszka; Buzalewicz, Igor; Podbielska, Halina

    2012-01-01

In the presented paper an optical system with converging spherical wave illumination for the classification of bacteria species is proposed. It allows for compression of the observation space, observation of Fresnel patterns, diffraction pattern scaling, and a low level of optical aberrations, properties not possessed by other optical configurations. The experimental results obtained have shown that colonies of specific bacteria species generate unique diffraction signatures. Analysis of the Fresnel diffraction patterns of bacteria colonies can be a fast and reliable method for classification and recognition of bacteria species. To determine the unique features of the diffraction patterns of bacteria colonies, an image processing analysis is proposed. Classification can be performed by analyzing the spatial structure of the diffraction patterns, which can be characterized by a set of concentric rings whose characteristics depend on the bacteria species. In the paper, the influence of the basic features and the ring partitioning number on bacteria classification is analyzed. It is demonstrated that Fresnel patterns can be used for the classification of the following species: Salmonella enteritidis, Staphylococcus aureus, Proteus mirabilis and Citrobacter freundii. Image processing is performed with the free ImageJ software, for which a special macro with human interaction was written. LDA classification, the CV method, and ANOVA and PCA visualizations, preceded by image data extraction, were conducted using the free software R.

  16. Palm-Vein Classification Based on Principal Orientation Features

    PubMed Central

    Zhou, Yujia; Liu, Yaqin; Feng, Qianjin; Yang, Feng; Huang, Jing; Nie, Yixiao

    2014-01-01

    Personal recognition using palm-vein patterns has emerged as a promising alternative for human recognition because of its uniqueness, stability, live body identification, flexibility, and difficulty to cheat. With the expanding application of palm-vein pattern recognition, the corresponding growth of the database has resulted in a long response time. To shorten the response time of identification, this paper proposes a simple and useful classification for palm-vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix and then compute the principal direction of a palm-vein image based on the orientation matrix. The database can be classified into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed in the bin. To improve recognition efficiency while maintaining better recognition accuracy, two neighborhood bins of the corresponding bin are continuously searched to identify the input palm-vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range of one test sample in PolyU, CASIA and our database by the proposed method for palm-vein identification can be reduced to 14.29%, 14.50%, and 14.28%, with retrieval accuracy of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process by the traditional method is 18.56 s, while that by the proposed approach is 3.16 s. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database. PMID:25383715
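    The bin selection and neighborhood search can be sketched as follows, assuming only that the principal direction is an angle in [0, π) quantized into six equal bins:

```python
import numpy as np

N_BINS = 6  # the paper classifies the database into six bins

def principal_bin(theta):
    """Map a principal direction in [0, pi) to one of six equal bins."""
    return int(theta / (np.pi / N_BINS)) % N_BINS

def candidate_bins(theta):
    """The matching bin plus its two neighbours, as searched at
    identification time to trade a small accuracy cost for speed."""
    b = principal_bin(theta)
    return [(b - 1) % N_BINS, b, (b + 1) % N_BINS]

# A test sample whose principal direction is 50 degrees falls in bin 1;
# one-by-one matching is then restricted to bins 0, 1 and 2.
bins = candidate_bins(np.deg2rad(50.0))
```

    Restricting matching to three of six bins is what shrinks the search range to roughly 14% of the database in the reported experiments.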

  17. Recognition of distinctive patterns of gallium-67 distribution in sarcoidosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sulavik, S.B.; Spencer, R.P.; Weed, D.A.

    1990-12-01

    Assessment of gallium-67 ({sup 67}Ga) uptake in the salivary and lacrimal glands and intrathoracic lymph nodes was made in 605 consecutive patients including 65 with sarcoidosis. A distinctive intrathoracic lymph node {sup 67}Ga uptake pattern, resembling the Greek letter lambda, was observed only in sarcoidosis (72%). Symmetrical lacrimal gland and parotid gland {sup 67}Ga uptake (panda appearance) was noted in 79% of sarcoidosis patients. A simultaneous lambda and panda pattern (62%) or a panda appearance with radiographic bilateral, symmetrical, hilar lymphadenopathy (6%) was present only in sarcoidosis patients. The presence of either of these patterns was particularly prevalent in roentgen Stages I (80%) or II (74%). We conclude that simultaneous (a) lambda and panda images, or (b) a panda image with bilateral symmetrical hilar lymphadenopathy on chest X-ray represent distinctive patterns which are highly specific for sarcoidosis, and may obviate the need for invasive diagnostic procedures.

  18. Recognition of Similar Shaped Handwritten Marathi Characters Using Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Jane, Archana P.; Pund, Mukesh A.

    2012-03-01

    The growing need for handwritten Marathi character recognition in Indian offices such as passport and railways has made it a vital area of research. Similarly shaped characters are more prone to misclassification. In this paper a novel method is provided to recognize handwritten Marathi characters based on feature extraction and an adaptive smoothing technique. Feature selection methods avoid unnecessary patterns in an image, whereas the adaptive smoothing technique forms smooth shapes of characters. The combination of both these approaches leads to better results. Previous studies show that no single technique achieves 100% accuracy in handwritten character recognition. This approach of combining adaptive smoothing and feature extraction gives better results (approximately 75-100%) and expected outcomes.

  19. Optimal spatiotemporal representation of multichannel EEG for recognition of brain states associated with distinct visual stimulus

    NASA Astrophysics Data System (ADS)

    Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2018-04-01

    In the paper we propose an approach based on artificial neural networks for recognition of different human brain states associated with distinct visual stimulus. Based on the developed numerical technique and the analysis of obtained experimental multichannel EEG data, we optimize the spatiotemporal representation of multichannel EEG to provide close to 97% accuracy in recognition of the EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG with similar features for every interpretation. Since these features are inherent to all subjects, a single artificial network can classify with high quality the associated brain states of other subjects.

  20. Evaluating Great Lakes bald eagle nesting habitat with Bayesian inference

    Treesearch

    Teryl G. Grubb; William W. Bowerman; Allen J. Bath; John P. Giesy; D. V. Chip Weseloh

    2003-01-01

    Bayesian inference facilitated structured interpretation of a nonreplicated, experience-based survey of potential nesting habitat for bald eagles (Haliaeetus leucocephalus) along the five Great Lakes shorelines. We developed a pattern recognition (PATREC) model of our aerial search image with six habitat attributes: (a) tree cover, (b) proximity and...

  1. Remote sensing. [land use mapping

    NASA Technical Reports Server (NTRS)

    Jinich, A.

    1979-01-01

    Various imaging techniques are outlined for use in mapping, land use, and land management in Mexico. Among the techniques discussed are pattern recognition and photographic processing. The utilization of information from remote sensing devices on satellites is studied. Multispectral band scanners are examined and software, hardware, and other program requirements are surveyed.

  2. Terrain feature recognition for synthetic aperture radar (SAR) imagery employing spatial attributes of targets

    NASA Astrophysics Data System (ADS)

    Iisaka, Joji; Sakurai-Amano, Takako

    1994-08-01

    This paper describes an integrated approach to terrain feature detection and several methods to estimate spatial information from SAR (synthetic aperture radar) imagery. Spatial information of image features as well as spatial association are key elements in terrain feature detection. After applying a small feature preserving despeckling operation, spatial information such as edginess, texture (smoothness), region-likeliness and line-likeness of objects, target sizes, and target shapes were estimated. Then a trapezoid shape fuzzy membership function was assigned to each spatial feature attribute. Fuzzy classification logic was employed to detect terrain features. Terrain features such as urban areas, mountain ridges, lakes and other water bodies as well as vegetated areas were successfully identified from a sub-image of a JERS-1 SAR image. In the course of shape analysis, a quantitative method was developed to classify spatial patterns by expanding a spatial pattern through the use of a series of pattern primitives.

  3. The current and ideal state of anatomic pathology patient safety.

    PubMed

    Raab, Stephen Spencer

    2014-01-01

    An anatomic pathology diagnostic error may be secondary to a number of active and latent technical and/or cognitive components, which may occur anywhere along the total testing process in clinical and/or laboratory domains. For the pathologist interpretive steps of diagnosis, we examine Kahneman's framework of slow and fast thinking to explain different causes of error in precision (agreement) and in accuracy (truth). The pathologist cognitive diagnostic process involves image pattern recognition and a slow thinking error may be caused by the application of different rationally-constructed mental maps of image criteria/patterns by different pathologists. This type of error is partly related to a system failure in standardizing the application of these maps. A fast thinking error involves the flawed leap from image pattern to incorrect diagnosis. In the ideal state, anatomic pathology systems would target these cognitive error causes as well as the technical latent factors that lead to error.

  4. Fast image processing with a microcomputer applied to speckle photography

    NASA Astrophysics Data System (ADS)

    Erbeck, R.

    1985-11-01

    An automated image recognition system is described for speckle photography investigations in fluid dynamics. The system is employed for characterizing the pattern of interference fringes obtained using speckle interferometry. A rotating ground glass serves as a screen on which laser light passing through a specklegraph plate, the flow and a compensation plate (CP) is shone to produce a compensated Young's pattern. The image produced on the ground glass is photographed by a video camera whose signal is digitized and processed through a microcomputer using a 6502 CPU chip. The normalized correlation function of the intensity is calculated in two directions of the recorded pattern to obtain the wavelength and the light deflection angle. The system has a capability of one picture every two seconds. Sample data are provided for a free jet of CO2 issuing into air in both laminar and turbulent form.
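    The normalized correlation analysis of the fringe pattern can be sketched for one direction; the fringe profile below is synthetic, and extracting the period from the first autocorrelation peak is a simplification of the two-direction analysis:

```python
import numpy as np

def fringe_period(signal):
    """Estimate the fringe period from the normalised autocorrelation of
    a 1-D intensity profile: the lag of the first local maximum after the
    central peak."""
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode='full')[len(s) - 1:]
    ac /= ac[0]
    for lag in range(1, len(ac) - 1):
        if ac[lag] >= ac[lag - 1] and ac[lag] > ac[lag + 1]:
            return lag
    return None

# Synthetic Young's fringe profile with a period of 16 pixels.
x = np.arange(256)
profile = 1 + np.cos(2 * np.pi * x / 16)
period = fringe_period(profile)
```

    In the real system the recovered period and orientation of the Young's pattern yield the wavelength and light deflection angle.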

  5. Physiology-based face recognition in the thermal infrared spectrum.

    PubMed

    Buddharaju, Pradeep; Pavlidis, Ioannis T; Tsiamyrtzis, Panagiotis; Bazakos, Mike

    2007-04-01

    The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic to each individual. The branching points of the skeletonized vascular network are referred to as Thermal Minutia Points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect and store in the database five different pose images for each subject (center, midleft profile, left profile, midright profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame.
The good experimental results show that the proposed methodology has merit, especially with respect to the problem of low permanence over time. More importantly, the results demonstrate the feasibility of the physiological framework in face recognition and open the way for further methodological and experimental research in the area.
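    The extraction of branching points (TMPs) from a skeletonized vascular network can be sketched with a simple neighbor-count heuristic; counting a skeleton pixel with three or more skeleton neighbours as a branch point is a common choice, not necessarily the authors' exact rule:

```python
import numpy as np

def thermal_minutia_points(skeleton):
    """Branching points of a one-pixel-wide binary vascular skeleton.

    A skeleton pixel with three or more 8-connected skeleton neighbours
    is taken as a branching point (TMP).
    """
    pad = np.pad(skeleton.astype(np.uint8), 1)
    points = []
    for y in range(1, pad.shape[0] - 1):
        for x in range(1, pad.shape[1] - 1):
            if pad[y, x]:
                neighbours = pad[y - 1:y + 2, x - 1:x + 2].sum() - 1
                if neighbours >= 3:
                    points.append((y - 1, x - 1))
    return points

# A small Y-shaped skeleton: two diagonal arms and one vertical arm
# meeting at (3, 3), which is the only branching point.
sk = np.zeros((7, 7), dtype=bool)
for p in [(1, 1), (2, 2), (1, 5), (2, 4), (3, 3), (4, 3), (5, 3)]:
    sk[p] = True
tmps = thermal_minutia_points(sk)
```

    The local and global spatial arrangement of such points is what the matching stage compares against the stored pose images.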

  6. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant of a wide angle camera. The telephoto camera can capture a high accuracy image for an object of interest in the view field of the wide angle camera. The image from the telephoto camera provides enough information for recognition when the traffic sign appears at too low a resolution in the wide angle camera image. In the proposed system, the traffic sign detection and classification are processed separately for the different images from the wide angle camera and telephoto camera. In addition, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a type of color transformation which is invariant to lighting changes. This color transformation is conducted to highlight the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
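    As a sketch of a lighting-invariant color transformation, normalized RGB (chromaticity) is one standard choice; the paper's specific transformation may differ, but the invariance principle is the same:

```python
import numpy as np

def chromaticity(rgb):
    """Normalised-RGB (chromaticity) transform.

    Dividing each channel by the pixel's total brightness removes a
    common scale factor, so the result is invariant to uniform lighting
    changes. A standard stand-in for the paper's colour transformation.
    """
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    return rgb / total

# The same red sign patch under dim and bright illumination maps to the
# same chromaticity vector, so a detector trained on it sees one pattern.
dim = np.array([[[60, 15, 15]]])
bright = np.array([[[200, 50, 50]]])
```
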

  7. Visual Communications and Image Processing

    NASA Astrophysics Data System (ADS)

    Hsing, T. Russell

    1987-07-01

    This special issue of Optical Engineering is concerned with visual communications and image processing. The increase in communication of visual information over the past several decades has resulted in many new image processing and visual communication systems being put into service. The growth of this field has been rapid in both commercial and military applications. The objective of this special issue is to combine recent advances in visual communications and image processing with ideas generated by industry, universities, and users through both invited and contributed papers. The 15 papers of this issue are organized into four different categories: image compression and transmission, image enhancement, image analysis and pattern recognition, and image processing in medical applications.

  8. Cataract influence on iris recognition performance

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Czajka, Adam; Maciejewicz, Piotr

    2014-11-01

    This paper presents an experimental study revealing the weaker performance of automatic iris recognition methods for cataract-affected eyes when compared to healthy eyes. There is little research on the topic, mostly incorporating scarce databases that are often deficient in images representing more than one illness. We built our own database, acquiring 1288 eye images of 37 patients of the Medical University of Warsaw. Those images represent several common ocular diseases, such as cataract, along with less ordinary conditions, such as iris pattern alterations derived from illness or eye trauma. Images were captured in near-infrared light (used in biometrics) and for selected cases also in visible light (used in ophthalmological diagnosis). Since cataract is the disorder most populated by samples in the database, in this paper we focus solely on this illness. To assess the extent of the performance deterioration we use three iris recognition methodologies (commercial and academic solutions) to calculate genuine match scores for healthy eyes and those influenced by cataract. Results show a significant degradation in iris recognition reliability, manifested by a worsening of the genuine scores in all three matchers used in this study (12% genuine score increase for an academic matcher, up to 175% genuine score increase for an example commercial matcher). This increase in genuine scores affected the final false non-match rate in two matchers. To our best knowledge this is the only study of its kind that employs more than one iris matcher and analyzes iris image segmentation as a potential source of decreased reliability.

  9. Biomorphic networks: approach to invariant feature extraction and segmentation for ATR

    NASA Astrophysics Data System (ADS)

    Baek, Andrew; Farhat, Nabil H.

    1998-10-01

    Invariant features in two dimensional binary images are extracted in a single layer network of locally coupled spiking (pulsating) model neurons with prescribed synapto-dendritic response. The feature vector for an image is represented as invariant structure in the aggregate histogram of interspike intervals obtained by computing time intervals between successive spikes produced from each neuron over a given period of time and combining such intervals from all neurons in the network into a histogram. Simulation results show that the feature vectors are more pattern-specific and invariant under translation, rotation, and change in scale or intensity than achieved in earlier work. We also describe an application of such networks to segmentation of line (edge-enhanced or silhouette) images. The biomorphic spiking network's capabilities in segmentation and invariant feature extraction may prove to be, when they are combined, valuable in Automated Target Recognition (ATR) and other automated object recognition systems.
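    The aggregate interspike-interval histogram feature can be sketched directly; the bin count, range, and periodic spike trains below are illustrative, not the network model of the paper:

```python
import numpy as np

def isi_histogram(spike_trains, n_bins=20, max_interval=100.0):
    """Aggregate interspike-interval histogram over a network of neurons.

    Intervals between successive spikes are pooled from every neuron into
    one histogram, which serves as the invariant feature vector.
    """
    intervals = np.concatenate(
        [np.diff(np.sort(times)) for times in spike_trains if len(times) > 1])
    hist, _ = np.histogram(intervals, bins=n_bins, range=(0, max_interval))
    return hist / hist.sum()   # normalise so the feature is scale-free

# Two model neurons firing periodically at different rates: the histogram
# shows one peak per firing rate, regardless of where the pattern sits
# in the image.
trains = [np.arange(0, 500, 10.0), np.arange(0, 500, 25.0)]
feature = isi_histogram(trains)
```

    Because the histogram pools intervals from all neurons without regard to position, translating or rotating the input pattern leaves the feature largely unchanged.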

  10. Self-aligning and compressed autosophy video databases

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains `self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of `learning' and a new `information theory' which permits the growing of self-assembling data network in a computer memory similar to the growing of `data crystals' or `data trees' without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni dimensional information storage will result in enormous data compression because each pattern fragment is only stored once. Pattern recognition in the text or image files is greatly simplified by the peculiar omni dimensional storage method. Video databases will absorb input images from a TV camera and associate them with textual information. The `black box' operations are totally self-aligning where the input data will determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  11. Improving Pattern Recognition and Neural Network Algorithms with Applications to Solar Panel Energy Optimization

    NASA Astrophysics Data System (ADS)

    Zamora Ramos, Ernesto

    Artificial Intelligence is a big part of automation and, with today's technological advances, artificial intelligence has taken great strides towards positioning itself as the technology of the future to control, enhance and perfect automation. Computer vision includes pattern recognition, classification, and machine learning; it is at the core of decision making and is a vast and fruitful branch of artificial intelligence. In this work, we expose novel algorithms and techniques built upon existing technologies to improve pattern recognition and neural network training, initially motivated by a multidisciplinary effort to build a robot that helps maintain and optimize solar panel energy production. Our contributions detail an improved non-linear pre-processing technique to enhance poorly illuminated images based on modifications to the standard histogram equalization for an image. While the original motivation was to improve nocturnal navigation, the results have applications in surveillance, search and rescue, medical image enhancement, and many others. We created a vision system for precise camera distance positioning, motivated by the need to correctly locate the robot for capture of solar panel images for classification. The classification algorithm marks solar panels as clean or dirty for later processing. Our algorithm extends past image classification and, based on historical and experimental data, it identifies the optimal moment in which to perform maintenance on marked solar panels so as to minimize the energy and profit loss. In order to improve upon the classification algorithm, we delved into feedforward neural networks because of their recent advancements, proven universal approximation and classification capabilities, and excellent recognition rates.
    We explore state-of-the-art neural network training techniques offering pointers and insights, culminating in the implementation of a complete library with support for modern deep learning architectures, multilayer perceptrons and convolutional neural networks. Our research with neural networks encountered a great deal of difficulty regarding hyperparameter estimation for good training convergence rate and accuracy. Most hyperparameters, including architecture, learning rate, regularization, trainable parameter (or weight) initialization, and so on, are chosen via a trial and error process with some educated guesses. However, we developed the first quantitative method to compare weight initialization strategies, a critical hyperparameter choice during training, to estimate which among a group of candidate strategies would make the network converge to the highest classification accuracy faster with high probability. Our method provides a quick, objective measure to compare initialization strategies and select the best possible among them beforehand, without having to complete multiple training sessions for each candidate strategy to compare final results.

  12. Embedded Palmprint Recognition System Using OMAP 3530

    PubMed Central

    Shen, Linlin; Wu, Shipei; Zheng, Songhao; Ji, Zhen

    2012-01-01

    We have proposed in this paper an embedded palmprint recognition system using the dual-core OMAP 3530 platform. An improved algorithm based on palm code was proposed first. In this method, a Gabor wavelet is first convolved with the palmprint image to produce a response image, where local binary patterns are then applied to code the relation among the magnitude of wavelet response at the central pixel with that of its neighbors. The method is fully tested using the public PolyU palmprint database. While palm code achieves only about 89% accuracy, over 96% accuracy is achieved by the proposed G-LBP approach. The proposed algorithm was then deployed to the DSP processor of OMAP 3530 and works together with the ARM processor for feature extraction. When complicated algorithms run on the DSP processor, the ARM processor can focus on image capture, user interface and peripheral control. Integrated with an image sensing module and central processing board, the designed device can achieve accurate and real time performance. PMID:22438721
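    The LBP coding step over a Gabor-magnitude image can be sketched as follows; the Gabor convolution that produces the magnitude image is assumed done beforehand, and the synthetic ramp image below is only for demonstration:

```python
import numpy as np

def lbp_code(mag):
    """Local binary pattern over a Gabor-magnitude image.

    Each interior pixel is coded by comparing the wavelet-response
    magnitude of its eight neighbours with the centre value, giving an
    8-bit code; this is the coding step of the G-LBP scheme.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = mag.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = mag[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = mag[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= centre).astype(np.uint8) << bit
    return code

# On a magnitude image rising to the right, left-side neighbours are
# always smaller and right-side ones larger, so every interior pixel
# receives the same 8-bit code.
mag = np.tile(np.arange(10.0), (10, 1))
codes = lbp_code(mag)
```

    Histograms of these codes over image blocks would then form the palmprint feature vector.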

  13. Embedded palmprint recognition system using OMAP 3530.

    PubMed

    Shen, Linlin; Wu, Shipei; Zheng, Songhao; Ji, Zhen

    2012-01-01

    We have proposed in this paper an embedded palmprint recognition system using the dual-core OMAP 3530 platform. An improved algorithm based on palm code was proposed first. In this method, a Gabor wavelet is first convolved with the palmprint image to produce a response image, where local binary patterns are then applied to code the relation among the magnitude of wavelet response at the central pixel with that of its neighbors. The method is fully tested using the public PolyU palmprint database. While palm code achieves only about 89% accuracy, over 96% accuracy is achieved by the proposed G-LBP approach. The proposed algorithm was then deployed to the DSP processor of OMAP 3530 and works together with the ARM processor for feature extraction. When complicated algorithms run on the DSP processor, the ARM processor can focus on image capture, user interface and peripheral control. Integrated with an image sensing module and central processing board, the designed device can achieve accurate and real time performance.

  14. Plastic antibody for the recognition of chemical warfare agent sulphur mustard.

    PubMed

    Boopathi, M; Suryanarayana, M V S; Nigam, Anil Kumar; Pandey, Pratibha; Ganesan, K; Singh, Beer; Sekhar, K

    2006-06-15

    Molecularly imprinted polymers (MIPs), known as plastic antibodies (PAs), represent a new class of materials possessing high selectivity and affinity for the target molecule. Since their discovery, PAs have attracted considerable interest from bio- and chemical laboratories to pharmaceutical institutes. PAs are becoming an important class of synthetic materials mimicking molecular recognition by natural receptors. In addition, they have been utilized as catalysts, sorbents for solid-phase extraction, stationary phases for liquid chromatography, and mimics of enzymes. In this paper, for the first time, we report the preparation and characterization of a PA for the recognition of the blistering chemical warfare agent sulphur mustard (SM). The SM-imprinted PA exhibited a larger surface area when compared to the control non-imprinted polymer (NIP). In addition, the SEM image showed an ordered nano-pattern for the PA of SM that is entirely different from the image of the NIP. The imprinting also enhanced the SM rebinding ability of the PA when compared to the NIP, with an imprinting efficiency (alpha) of 1.3.

  15. Influence of quality of images recorded in far infrared on pattern recognition based on neural networks and Eigenfaces algorithm

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Kobel, Joanna; Podbielska, Halina

    2003-11-01

    This paper discusses the possibility of exploiting thermovision registration and artificial neural networks for facial recognition systems. A biometric system that is able to identify people from thermograms is presented. To identify a person we used the Eigenfaces algorithm. For face detection in the picture, a backpropagation neural network was designed. For this purpose, thermograms of 10 people under various external conditions were studied. The Eigenfaces algorithm calculated an average face, and then a set of characteristic features for each studied person was produced. The neural network has to detect the face in the image before it can actually be identified; we used five hidden layers for that purpose. It was shown that recognition errors depend on the feature extraction: for low quality pictures the error was as high as 30%, whereas for pictures with good feature extraction proper identification rates higher than 90% were obtained.
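    The Eigenfaces computation (average face plus a characteristic feature vector per image) can be sketched with an SVD; the tiny random images below stand in for real thermograms:

```python
import numpy as np

def eigenfaces(images, k=2):
    """Average face and top-k eigenfaces from a stack of images.

    PCA via SVD of the mean-centred image matrix: the leading right
    singular vectors span the face space used for identification.
    """
    X = np.stack([im.ravel() for im in images]).astype(np.float64)
    mean_face = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean_face, full_matrices=False)
    return mean_face, Vt[:k]          # each row of Vt[:k] is one eigenface

def project(image, mean_face, faces):
    """Characteristic feature vector: coordinates in the eigenface basis."""
    return faces @ (image.ravel() - mean_face)

rng = np.random.default_rng(0)
imgs = [rng.random((8, 8)) for _ in range(5)]
mean_face, faces = eigenfaces(imgs)
w = project(imgs[0], mean_face, faces)
```

    Identification then reduces to comparing such weight vectors, e.g. by nearest neighbour in face space.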

  16. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.

  17. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features. Then, we apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features to a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.
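    A minimal sketch of DCT-based dominant frequency features and a combined L2/Lq metric follows; keeping the top-left DCT block and using q=1 with equal weights are simplifying assumptions, not the paper's exact hybrid DCT/DWT statistics:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II, built from the 1-D transform matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def dominant_frequency_features(image, m=4):
    """Low-frequency DCT coefficients as a compact face descriptor."""
    return dct2(image)[:m, :m].ravel()

def combined_distance(a, b, q=1, alpha=0.5):
    """Weighted combination of the L2 and Lq metrics for classification."""
    return alpha * np.linalg.norm(a - b, 2) + (1 - alpha) * np.linalg.norm(a - b, q)

rng = np.random.default_rng(1)
face_a = rng.random((16, 16))
face_b = rng.random((16, 16))
fa, fb = (dominant_frequency_features(f) for f in (face_a, face_b))
d_same = combined_distance(fa, fa)
d_diff = combined_distance(fa, fb)
```

    Classification assigns a probe to the class whose stored feature vector minimizes this combined distance.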

  18. Using artificial intelligence to improve identification of nanofluid gas-liquid two-phase flow pattern in mini-channel

    NASA Astrophysics Data System (ADS)

    Xiao, Jian; Luo, Xiaoping; Feng, Zhenfei; Zhang, Jinxin

    2018-01-01

    This work combines fuzzy logic and a support vector machine (SVM) with principal component analysis (PCA) to create an artificial-intelligence system that identifies nanofluid gas-liquid two-phase flow states in a vertical mini-channel. Flow-pattern recognition normally requires detailed operational knowledge of the process; computer simulations and image processing can be used to automate the description of flow patterns in nanofluid gas-liquid two-phase flow. This work uses fuzzy logic and an SVM with PCA to improve the accuracy with which the flow pattern of a nanofluid gas-liquid two-phase flow is identified. To acquire images of nanofluid gas-liquid two-phase flow patterns of flow boiling, a high-speed digital camera was used to record four different types of flow-pattern images, namely annular flow, bubbly flow, churn flow, and slug flow. The textural features extracted by processing the images of nanofluid gas-liquid two-phase flow patterns are used as inputs to various identification schemes, namely fuzzy logic, SVM, and SVM with PCA, to identify the type of flow pattern. The results indicate that the SVM with reduced characteristics from PCA provides the best identification accuracy and requires less calculation time than the other two schemes. The data reported herein should be very useful for the design and operation of industrial applications.
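    The PCA reduction stage of such a pipeline can be sketched as below; the synthetic texture features are illustrative, and a nearest-centroid rule stands in for the SVM classifier of the paper:

```python
import numpy as np

def pca_fit(X, k):
    """Principal components of a feature matrix (rows = samples)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, comps):
    """Project features onto the top principal components."""
    return (X - mean) @ comps.T

# Synthetic texture features for two flow regimes ("bubbly" vs "annular").
rng = np.random.default_rng(2)
bubbly = rng.normal(0.0, 0.1, size=(20, 6))
annular = rng.normal(1.0, 0.1, size=(20, 6))
X = np.vstack([bubbly, annular])
mean, comps = pca_fit(X, k=2)
Z = pca_transform(X, mean, comps)
centroids = np.array([Z[:20].mean(axis=0), Z[20:].mean(axis=0)])

def classify(z):
    """Nearest-centroid rule in PCA space (stand-in for the SVM)."""
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

pred = classify(pca_transform(annular[0], mean, comps))
```

    Reducing the feature dimension before classification is what cuts the calculation time reported for the SVM-with-PCA scheme.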

  19. Optical processing for landmark identification

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Luu, T. K.

    1981-01-01

    A study of optical pattern recognition techniques, available components, and airborne optical systems for use in landmark identification was conducted. A database of imagery exhibiting multisensor, seasonal, snow- and fog-cover, exposure, and other differences was assembled. These images were successfully processed in a scaling optical correlator using weighted matched spatial filter synthesis. Distinctive data classes were defined, and a description of the data (with considerable input information and content information) emerged from this study. It has considerable merit with regard to the preprocessing needed and the image difference categories advanced. An optical pattern recognition system for airborne applications was developed, assembled, and demonstrated. It employed a laser diode light source and holographic optical elements in a new lensless matched spatial filter architecture with greatly reduced size and weight, as well as relaxed component-positioning tolerances.

  20. Absolute Position Encoders With Vertical Image Binning

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2005-01-01

    Improved optoelectronic pattern-recognition encoders that measure rotary and linear one-dimensional positions at conversion rates (numbers of readings per unit time) exceeding 20 kHz have been invented. Heretofore, optoelectronic pattern-recognition absolute-position encoders have been limited to conversion rates <15 Hz -- too low for emerging industrial applications in which conversion rates ranging from 1 kHz to as much as 100 kHz are required. The high conversion rates of the improved encoders are made possible, in part, by the use of vertically compressible, or binnable (as described below), scale patterns in combination with modified readout sequences of the image sensors [charge-coupled devices (CCDs)] used to read the scale patterns. The modified readout sequences and the processing of the images thus read out are amenable to implementation by use of modern, high-speed, ultra-compact microprocessors and digital signal processors or field-programmable gate arrays. This combination of improvements makes it possible to greatly increase conversion rates through substantial reductions in all three components of conversion time: exposure time, image-readout time, and image-processing time.
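
    The vertical binning idea is simple to illustrate in software: because the scale pattern is constant along each column, summing an image down its columns preserves the code while shrinking the data to a single row. A minimal numpy sketch (the bar pattern here is hypothetical, not the actual encoder scale):

```python
import numpy as np

# A toy 2-D "scale pattern": vertical bars that are constant along each
# column, so no information is lost by summing (binning) down the columns.
pattern_row = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=float)
image = np.tile(pattern_row, (64, 1))          # 64 rows, 8 columns

# Vertical binning: collapse the image to a single row by summing the rows.
binned = image.sum(axis=0)

# The 1-D profile preserves the code carried by the columns, but only one
# row's worth of pixels has to be read out and processed.
recovered = (binned > 32).astype(int)
```

    This is why the binnable patterns cut all three components of conversion time: there is less to expose, less to read out, and less to process.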

  1. Spoofing detection on facial images recognition using LBP and GLCM combination

    NASA Astrophysics Data System (ADS)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    The challenge for a facial-image-based security system is how to detect facial image falsification, such as facial image spoofing. Spoofing occurs when someone tries to pose as a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method based on analyzing image texture. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using the LBP or GLCM feature alone.
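
    A minimal sketch of combining LBP and GLCM texture descriptors into one feature vector, using a numpy-only toy implementation (8-neighbour LBP and a single horizontal GLCM offset are assumptions; the paper's exact parameters are not reproduced here):

```python
import numpy as np

def lbp_histogram(img):
    """Minimal 8-neighbour local binary pattern, histogrammed over the image."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

def glcm_features(img, levels=8):
    """Grey-level co-occurrence matrix for a horizontal (0, 1) offset,
    summarised by two classic statistics: contrast and energy."""
    q = (img.astype(float) * levels / (img.max() + 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()
    idx = np.arange(levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return np.array([contrast, energy])

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)

# The combined descriptor: LBP histogram concatenated with GLCM statistics.
feature = np.concatenate([lbp_histogram(img), glcm_features(img)])
```

    The combined vector would then feed a classifier that separates live faces from spoofed ones.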

  2. An artificial intelligence based improved classification of two-phase flow patterns with feature extracted from acquired images.

    PubMed

    Shanthi, C; Pappa, N

    2017-05-01

    Flow pattern recognition is necessary to select design equations for finding operating details of the process and to perform computational simulations. Visual image processing can be used to automate the interpretation of patterns in two-phase flow. In this paper, an attempt has been made to improve the classification accuracy of the flow pattern of gas/liquid two-phase flow using fuzzy logic and a Support Vector Machine (SVM) with Principal Component Analysis (PCA). Videos of six different types of flow patterns, namely annular flow, bubble flow, churn flow, plug flow, slug flow, and stratified flow, were recorded over a period and converted to 2D images for processing. The textural and shape features extracted using image processing are applied as inputs to various classification schemes, namely fuzzy logic, SVM, and SVM with PCA, in order to identify the type of flow pattern. The results obtained are compared, and it is observed that SVM with features reduced using PCA gives better classification accuracy and is computationally less intensive than the other two schemes. These results address industrial application needs, including oil and gas and other gas-liquid two-phase flows. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Toward faster and more accurate star sensors using recursive centroiding and star identification

    NASA Astrophysics Data System (ADS)

    Samaan, Malak Anees

    The objective of this research is to study novel techniques for spacecraft attitude determination using star tracker sensors. This dissertation addresses various issues in developing improved star tracker software, presents new approaches for better performance of star trackers, and considers applications to realize high-precision attitude estimates. Star sensors are often included in a spacecraft attitude-system instrument suite where high-accuracy pointing capability is required. Novel methods for image processing, ground calibration of camera parameters, autonomous star pattern recognition, and recursive star identification are researched and implemented to achieve a high-accuracy, high-frame-rate star tracker that can be used for many space missions. This dissertation presents the methods and algorithms implemented for the one field-of-view (FOV) StarNav I sensor that was tested aboard the STS-107 mission in spring 2003 and the two-FOV StarNav II sensor for the EO-3 spacecraft scheduled for launch in 2007. The results of this research enable advances in spacecraft attitude determination based upon real-time star sensing and pattern recognition. Building upon recent developments in image processing, pattern recognition algorithms, focal plane detectors, electro-optics, and microprocessors, the star tracker concept utilized in this research has the following key objectives for spacecraft of the future: lower cost, lower mass and smaller volume, increased robustness to environment-induced aging and instrument response variations, and increased adaptability and autonomy via recursive self-calibration and health monitoring on orbit. Many of these attributes are consequences of improved algorithms that are derived in this dissertation.

  4. Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.

    PubMed

    Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich

    2017-08-01

    Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms, Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated it in relation to database size and the number of matching images in the database. We found that performance differed greatly between algorithms and image databases, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.

  5. Radiomics-based features for pattern recognition of lung cancer histopathology and metastases.

    PubMed

    Ferreira Junior, José Raniery; Koenigkam-Santos, Marcel; Cipriano, Federico Enrique Garcia; Fabro, Alexandre Todorovic; Azevedo-Marques, Paulo Mazzoncini de

    2018-06-01

    Lung cancer is the leading cause of cancer-related deaths in the world, and its poor prognosis varies markedly according to tumor staging. Computed tomography (CT) is the imaging modality of choice for lung cancer evaluation, being used for diagnosis and clinical staging. Besides tumor stage, other features, like histopathological subtype, can also add prognostic information. In this work, radiomics-based CT features were used to predict lung cancer histopathology and metastases using machine learning models. Local image datasets of confirmed primary malignant pulmonary tumors were retrospectively evaluated for testing and validation. CT images acquired with the same protocol were semiautomatically segmented. Tumors were characterized by clinical features and computer attributes of intensity, histogram, texture, shape, and volume. Three machine learning classifiers used up to 100 selected features to perform the analysis. Radiomics-based features yielded areas under the receiver operating characteristic curve of 0.89, 0.97, and 0.92 at testing and 0.75, 0.71, and 0.81 at validation for lymph nodal metastasis, distant metastasis, and histopathology pattern recognition, respectively. The radiomics characterization approach presented great potential to be used in a computational model to aid lung cancer histopathological subtype diagnosis as a "virtual biopsy" and for metastatic prediction for therapy decision support without the necessity of a whole-body imaging scan. Copyright © 2018 Elsevier B.V. All rights reserved.
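
    Areas under the receiver operating characteristic curve like those quoted above can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity, with no curve plotting. A small numpy sketch on synthetic scores (not the study's data):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    Assumes continuous scores (no ties)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(7)
# Stand-in radiomic scores: metastatic cases (label 1) tend to score higher.
labels = np.array([0] * 100 + [1] * 100)
scores = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
estimated_auc = auc(scores, labels)
```

    The identity says the AUC equals the probability that a randomly chosen positive case outranks a randomly chosen negative one.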

  6. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation of colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance, and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because the appearance of inflamed colonic mucosa in UC varies considerably. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
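
    The colour-histogram half of such a retrieval method can be sketched as follows; histogram intersection is used here as an assumed similarity measure (the paper does not specify one in this abstract), and the HLAC geometric features are omitted for brevity:

```python
import numpy as np

def colour_histogram(img, bins=8):
    """Per-channel intensity histogram, concatenated and normalised."""
    hists = [np.histogram(img[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(img.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def retrieve(query, database):
    """Rank database images by histogram intersection with the query
    (larger intersection = more similar)."""
    qh = colour_histogram(query)
    scores = [np.minimum(qh, colour_histogram(img)).sum() for img in database]
    return np.argsort(scores)[::-1]

rng = np.random.default_rng(2)
# Synthetic stand-ins: "reddish" and "bluish" 16x16 RGB images.
reddish = [np.clip(rng.normal([200, 60, 60], 20, size=(16, 16, 3)), 0, 255)
           for _ in range(3)]
bluish = [np.clip(rng.normal([60, 60, 200], 20, size=(16, 16, 3)), 0, 255)
          for _ in range(3)]
database = reddish[1:] + bluish          # 2 reddish + 3 bluish images
ranking = retrieve(reddish[0], database)  # query with the remaining reddish image
```

    In the paper's setting, the retrieved images' associated medical records would then be surfaced to support the diagnosis.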

  7. Parallel computer vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhr, L.

    1987-01-01

    This book is written by research scientists involved in the development of massively parallel, but hierarchically structured, algorithms, architectures, and programs for image processing, pattern recognition, and computer vision. The book gives an integrated picture of the programs and algorithms that are being developed, and also of the multi-computer hardware architectures for which these systems are designed.

  8. Proceedings of the 1984 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-01-01

    This conference contains papers on artificial intelligence, pattern recognition, and man-machine systems. Topics considered include concurrent minimization, a robot programming system, system modeling and simulation, camera calibration, thermal power plants, image processing, fault diagnosis, knowledge-based systems, power systems, hydroelectric power plants, expert systems, and electrical transients.

  9. Remote sensing and image interpretation

    NASA Technical Reports Server (NTRS)

    Lillesand, T. M.; Kiefer, R. W. (Principal Investigator)

    1979-01-01

    A textbook prepared primarily for use in introductory courses in remote sensing is presented. Topics covered include concepts and foundations of remote sensing; elements of photographic systems; introduction to airphoto interpretation; airphoto interpretation for terrain evaluation; photogrammetry; radiometric characteristics of aerial photographs; aerial thermography; multispectral scanning and spectral pattern recognition; microwave sensing; and remote sensing from space.

  10. A handheld computer-aided diagnosis system and simulated analysis

    NASA Astrophysics Data System (ADS)

    Su, Mingjian; Zhang, Xuejun; Liu, Brent; Su, Kening; Louie, Ryan

    2016-03-01

    This paper describes a Computer Aided Diagnosis (CAD) system based on a cellphone and a distributed cluster. One of the bottlenecks in building a CAD system for clinical practice is the storage and processing of massive pathology samples freely among different devices, and conventional pattern-matching algorithms on large-scale image sets are very time-consuming. Distributed computation on a cluster has demonstrated the ability to relieve this bottleneck. We develop a system enabling the user to compare a mass image to a dataset with a feature table by sending datasets to the Generic Data Handler Module in Hadoop, where pattern recognition is undertaken for the detection of skin diseases. Single and combined retrieval algorithms in a data pipeline based on the MapReduce framework are used in our system in order to make an optimal choice between recognition accuracy and system cost. The profile of the lesion area is drawn manually by doctors on the screen, and this pattern is then uploaded to the server. In our evaluation experiment, a diagnosis hit rate of 75% was obtained by testing 100 patients with skin illness. Our system has the potential to help build a novel medical image dataset by collecting large amounts of gold-standard annotations during medical diagnosis. Once the project is online, participants are free to join, and an abundant sample dataset will eventually be gathered for learning. These results demonstrate that our technology is very promising and expected to be used in clinical practice.

  11. Sparse network-based models for patient classification using fMRI

    PubMed Central

    Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina

    2015-01-01

    Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two fMRI datasets (event-related and block-related) acquired while participants performed a gender-discrimination task and an emotional task, respectively, during the viewing of emotionally valent faces. PMID:25463459
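
    A minimal sketch of the L1-norm regularized linear SVM component, using scikit-learn's LinearSVC on synthetic "connectivity" features (an assumption; the framework's actual implementation and data are not reproduced here). The L1 penalty drives most coefficients to exactly zero, which is what yields sparse, interpretable patterns:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_samples, n_features = 80, 100          # stand-in "connectivity" features
X = rng.normal(size=(n_samples, n_features))

# Only the first three features carry group information (patients vs controls).
w_true = np.zeros(n_features)
w_true[:3] = [2.0, -2.0, 1.5]
y = (X @ w_true + 0.1 * rng.normal(size=n_samples) > 0).astype(int)

# L1 penalty -> sparse weight vector; C controls the sparsity/fit trade-off.
clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)
clf.fit(X, y)
n_selected = int(np.sum(np.abs(clf.coef_) > 1e-6))
```

    Inspecting which connectivity features survive the L1 penalty is the sparse-pattern interpretation step the framework is built around.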

  12. Investigation into diagnostic agreement using automated computer-assisted histopathology pattern recognition image analysis.

    PubMed

    Webster, Joshua D; Michalowski, Aleksandra M; Dwyer, Jennifer E; Corps, Kara N; Wei, Bih-Rong; Juopperi, Tarja; Hoover, Shelley B; Simpson, R Mark

    2012-01-01

    The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden <3%). Regression-based 95% limits of agreement indicated substantial agreement for method interchangeability. Repeated measures revealed concordance correlation of >0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.

  13. Correlation Time of Ocean Ambient Noise Intensity in San Diego Bay and Target Recognition in Acoustic Daylight Images

    NASA Astrophysics Data System (ADS)

    Wadsworth, Adam J.

    A method for passively detecting and imaging underwater targets using ambient noise as the sole source of illumination (named acoustic daylight) was successfully implemented in the form of the Acoustic Daylight Ocean Noise Imaging System (ADONIS). In a series of imaging experiments conducted in San Diego Bay, where the dominant source of high-frequency ambient noise is snapping shrimp, a large quantity of ambient noise intensity data was collected with the ADONIS (Epifanio, 1997). In a subset of the experimental data sets, fluctuations of time-averaged ambient noise intensity exhibited a diurnal pattern consistent with the increase in frequency of shrimp snapping near dawn and dusk. The same subset of experimental data is revisited here, and the correlation time is estimated and analysed for sequences of ambient noise data several minutes in length, with the aim of detecting possible periodicities or other trends in the fluctuation of the shrimp-dominated ambient noise field. Using videos formed from sequences of acoustic daylight images along with other experimental information, candidate segments of static-configuration ADONIS raw ambient noise data were isolated. For each segment, the normalized intensity auto-correlation closely resembled the delta function, the auto-correlation of white noise. No intensity fluctuation patterns at timescales smaller than a few minutes were discernible, suggesting that the shrimp do not communicate, synchronise, or exhibit any periodicities in their snapping. Also presented here is an ADONIS-specific target recognition algorithm based on principal component analysis, along with basic experimental results using a database of acoustic daylight images.
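
    The delta-function-like auto-correlation described above is easy to reproduce for synthetic white noise. A numpy sketch of the normalised intensity auto-correlation (the data here are simulated, not the ADONIS recordings):

```python
import numpy as np

def normalised_autocorrelation(x, max_lag):
    """Normalised auto-correlation of a zero-mean version of the sequence x."""
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[:len(x) - k] * x[k:]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(4)
# White-noise-like intensity samples (exponentially distributed, as intensity
# of a Gaussian field would be).
intensity = rng.exponential(scale=1.0, size=20_000)
acf = normalised_autocorrelation(intensity, max_lag=50)
```

    For uncorrelated samples, the result is 1 at lag zero and statistically indistinguishable from zero at every other lag, which is the "delta function" signature reported in the abstract.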

  14. Initial observations using a novel "cine" magnetic resonance imaging technique to detect changes in abdominal motion caused by encapsulating peritoneal sclerosis.

    PubMed

    Wright, Benjamin; Summers, Angela; Fenner, John; Gillott, Richard; Hutchinson, Charles E; Spencer, Paul A; Wilkie, Martin; Hurst, Helen; Herrick, Sarah; Brenchley, Paul; Augustine, Titus; Bardhan, Karna D

    2011-01-01

    Encapsulating peritoneal sclerosis (EPS) is an uncommon complication of peritoneal dialysis (PD), with high mortality and morbidity. The peritoneum thickens, dysfunctions, and forms a cocoon that progressively "strangulates" the small intestine, causing malnutrition, ischemia, and infarction. There is as yet no reliable noninvasive means of diagnosis, but recent developments in image analysis of cine magnetic resonance imaging for the recognition of adhesions offers a way forward. We used this protocol before surgery in 3 patients with suspected EPS. Image analysis revealed patterns of abdominal movement that were markedly different from the patterns in healthy volunteers. The volunteers showed marked movement throughout the abdomen; in contrast, movement in EPS patients was restricted to just below the diaphragm. This clear difference provides early "proof of principle" of the approach that we have developed.

  15. Application of optical correlation techniques to particle imaging velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Edwards, Robert V.

    1988-01-01

    Pulsed laser sheet velocimetry yields nonintrusive measurements of velocity vectors across an extended 2-dimensional region of the flow field. The application of optical correlation techniques to the analysis of multiple-exposure laser light sheet photographs can reduce and/or simplify the data reduction time and hardware. Here, Matched Spatial Filters (MSF) are used in a pattern recognition system. Usually, MSFs are used to identify assembly-line parts. In this application, the MSFs are used to identify the iso-velocity vector contours in the flow. The patterns to be recognized are the recorded particle images in a pulsed laser light sheet photograph. Measurement of the direction of the particle image displacements between exposures yields the velocity vector. The particle image exposure sequence is designed such that the velocity vector direction is determined unambiguously. A global analysis technique is used, in contrast to the more common particle tracking algorithms and Young's fringe analysis technique.
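
    The matched-spatial-filter correlation has a direct digital analogue: multiply the scene's Fourier spectrum by the conjugate spectrum of the reference pattern and transform back, which produces a correlation peak at the pattern's location. A numpy sketch under that assumption (not the optical MSF system itself):

```python
import numpy as np

def matched_filter_correlate(scene, reference):
    """Correlation via the Fourier plane: multiply the scene spectrum by the
    conjugate reference spectrum -- the digital analogue of a matched
    spatial filter."""
    S = np.fft.fft2(scene)
    R = np.fft.fft2(reference, s=scene.shape)   # zero-pad reference to scene size
    return np.real(np.fft.ifft2(S * np.conj(R)))

rng = np.random.default_rng(8)
scene = rng.normal(size=(32, 32))
# Cut a "particle pattern" out of the scene so a strong correlation peak exists.
reference = scene[5:13, 9:17].copy()
corr = matched_filter_correlate(scene, reference)
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

    The peak lands at the offset where the reference pattern sits in the scene, which is exactly the information the optical correlator reads off its output plane.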

  16. Self-amplified optical pattern recognition system

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor)

    1994-01-01

    A self-amplifying optical pattern recognizer includes a geometric system configuration similar to that of a Vander Lugt holographic matched filter configuration, with a photorefractive crystal specifically oriented with respect to the input beams. An extraordinarily polarized, spherically converging object image beam is formed by laser illumination of an input object image and applied through a photorefractive crystal, such as a barium titanate (BaTiO3) crystal. A volume or thin-film dif… ORIGIN OF THE INVENTION: The invention described herein was made in the performance of work under a NASA contract, and is subject to the provisions of Public Law 96-517 (35 USC 202), in which the Contractor has elected to retain title.

  17. Recognition of Time Stamps on Full-Disk Hα Images Using Machine Learning Methods

    NASA Astrophysics Data System (ADS)

    Xu, Y.; Huang, N.; Jing, J.; Liu, C.; Wang, H.; Fu, G.

    2016-12-01

    Observation and understanding of the physics of the 11-year solar activity cycle and 22-year magnetic cycle are among the most important research topics in solar physics. The solar cycle is responsible for magnetic field and particle fluctuations in the near-Earth environment that have been found increasingly important in affecting human life in the modern era. A systematic study of large-scale solar activities, as made possible by our rich data archive, will further help us to understand the global-scale magnetic fields that are closely related to solar cycles. The long-time-span data archive includes both full-disk and high-resolution Hα images. Prior to the widespread use of CCD cameras in the 1990s, 35-mm films were the major media for storing images. The research group at NJIT recently finished the digitization of film data obtained by the National Solar Observatory (NSO) and Big Bear Solar Observatory (BBSO) covering the period of 1953 to 2000. The total volume of data exceeds 60 TB. To make this huge database scientifically valuable, some processing and calibration are required. One of the most important steps is to read the time stamps on all of the 14 million images, which is almost impossible to do manually. We implemented three different methods to recognize the time stamps automatically: Optical Character Recognition (OCR), Classification Trees, and TensorFlow. The latter two are machine learning approaches that are currently very popular in the pattern recognition area. We will present some sample images and the results of clock recognition from all three methods.

  18. Pattern recognition with parallel associative memory

    NASA Technical Reports Server (NTRS)

    Toth, Charles K.; Schenk, Toni

    1990-01-01

    An examination is conducted of the feasibility of searching for targets in aerial photographs by means of a parallel associative memory (PAM) that is based on the nearest-neighbor algorithm; the Hamming distance is used as a measure of closeness in order to discriminate patterns. Attention has been given to targets typically used for ground-control points. The method developed sorts out approximate target positions where precise localizations are needed in the course of the data-acquisition process. The majority of control points in different images were correctly identified.
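
    The PAM's nearest-neighbour rule with Hamming distance reduces, in software, to counting mismatched bits against every stored pattern and taking the minimum. A minimal numpy sketch with hypothetical binary templates:

```python
import numpy as np

def hamming_nearest(pattern, memory):
    """Index of the stored pattern with the smallest Hamming distance,
    plus that distance."""
    distances = np.sum(memory != pattern, axis=1)
    return int(np.argmin(distances)), int(distances.min())

# Three stored binary target templates (e.g. binarised control-point patches).
memory = np.array([[0, 1, 1, 0, 1, 0, 1, 0],
                   [1, 1, 0, 0, 1, 1, 0, 0],
                   [0, 0, 0, 1, 1, 1, 0, 1]])

# A noisy observation of template 1 with one flipped bit.
observed = np.array([1, 1, 0, 1, 1, 1, 0, 0])
best, dist = hamming_nearest(observed, memory)
```

    The associative memory evaluates all stored patterns in parallel; the serial loop implied by `np.sum(..., axis=1)` here is the sequential analogue of that hardware comparison.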

  19. Automatic Recognition of Road Signs

    NASA Astrophysics Data System (ADS)

    Inoue, Yasuo; Kohashi, Yuuichirou; Ishikawa, Naoto; Nakajima, Masato

    2002-11-01

    The increase in traffic accidents is becoming a serious social problem with the recent rapid growth in traffic. In many cases, the driver's carelessness is the primary factor in traffic accidents, and driver assistance systems are in demand for supporting driver safety. In this research, we propose a new method for automatic detection and recognition of road signs by image processing. The purpose of this research is to prevent accidents caused by driver carelessness and to call a driver's attention when the driver violates a traffic regulation. In this research, a highly accurate and efficient sign detection method is realized by removing unnecessary information other than road signs from an image and detecting a road sign using shape features. First, color information that is not used in road signs is removed from the image. Next, edges other than circular and triangular ones are removed to select the sign shape. In the recognition process, a normalized cross-correlation operation is carried out on the two-dimensional differentiation pattern of a sign, realizing an accurate and efficient method for detecting the road sign. Moreover, real-time operation in software was realized by holding down the calculation cost while maintaining highly precise sign detection and recognition. Specifically, it is possible to process at 0.1 s/frame using a general-purpose PC (CPU: Pentium4 1.7GHz). As a result of in-vehicle experimentation, our system processed in real time, and we confirmed that detection and recognition of signs were performed correctly.
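
    The normalized cross-correlation step can be sketched directly in numpy: slide the sign template over the image and score each window by its zero-mean normalized correlation. The image and template below are synthetic, not actual road-sign data:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalised cross-correlation between two same-sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    return float(np.sum(p * t) /
                 (np.sqrt(np.sum(p * p) * np.sum(t * t)) + 1e-12))

def match(image, template):
    """Slide the template over the image; return the best-scoring offset."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

rng = np.random.default_rng(5)
image = rng.normal(size=(24, 24))
template = image[10:16, 4:10].copy()    # the "sign" to be located
pos, score = match(image, template)
```

    NCC is intensity-invariant (the mean is subtracted and the result is normalised), which is why it tolerates lighting changes between the stored sign pattern and the camera frame.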

  20. Local binary pattern texture-based classification of solid masses in ultrasound breast images

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Sehgal, Chandra M.; Udupa, Jayaram K.

    2012-03-01

    Breast cancer is one of the leading causes of cancer mortality among women. Ultrasound examination can be used to assess breast masses, complementarily to mammography. Ultrasound images reveal tissue information in its echoic patterns. Therefore, pattern recognition techniques can facilitate classification of lesions and thereby reduce the number of unnecessary biopsies. Our hypothesis was that image texture features on the boundary of a lesion and its vicinity can be used to classify masses. We have used intensity-independent and rotation-invariant texture features, known as Local Binary Patterns (LBP). The classifier selected was K-nearest neighbors. Our breast ultrasound image database consisted of 100 patient images (50 benign and 50 malignant cases). The determination of whether the mass was benign or malignant was done through biopsy and pathology assessment. The training set consisted of sixty images, randomly chosen from the database of 100 patients. The testing set consisted of forty images to be classified. The results with a multi-fold cross validation of 100 iterations produced a robust evaluation. The highest performance was observed for feature LBP with 24 symmetrically distributed neighbors over a circle of radius 3 (LBP24,3) with an accuracy rate of 81.0%. We also investigated an approach with a score of malignancy assigned to the images in the test set. This approach provided an ROC curve with Az of 0.803. The analysis of texture features over the boundary of solid masses showed promise for malignancy classification in ultrasound breast images.
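
    The K-nearest-neighbors classification step can be sketched as follows, with synthetic stand-ins for the LBP feature vectors of benign and malignant masses (the actual LBP extraction and clinical data are not reproduced):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=5):
    """Classify x by majority vote among its k nearest training features."""
    d = np.linalg.norm(X_train - x, axis=1)
    votes = y_train[np.argsort(d)[:k]]
    values, counts = np.unique(votes, return_counts=True)
    return int(values[np.argmax(counts)])

rng = np.random.default_rng(6)
# Stand-in LBP-histogram-like features: benign (label 0) and malignant (label 1).
benign = rng.normal(0.0, 0.5, size=(50, 10))
malignant = rng.normal(1.5, 0.5, size=(50, 10))
X_train = np.vstack([benign, malignant])
y_train = np.array([0] * 50 + [1] * 50)

query = rng.normal(1.5, 0.5, size=10)    # an unseen malignant-like feature vector
label = knn_predict(X_train, y_train, query, k=5)
```

    In the study, the vote fraction among the k neighbours can also serve as the malignancy score used to trace the ROC curve.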

  1. New technique for real-time distortion-invariant multiobject recognition and classification

    NASA Astrophysics Data System (ADS)

    Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan

    2001-04-01

    A real-time hybrid distortion-invariant optical pattern recognition (OPR) system was established to perform 3D multiobject distortion-invariant automatic pattern recognition. Wavelet transform techniques were used for digital preprocessing of the input scene, to suppress the noisy background and enhance the recognized object. A three-layer backpropagation artificial neural network was used in correlation signal post-processing to perform multiobject distortion-invariant recognition and classification. The C-80 and NOA real-time processing abilities and multithread programming technology were used to perform high-speed parallel multitask processing and speed up the post-processing of regions of interest (ROIs). The reference filter library was constructed for distorted versions of the 3D object model images based on distortion-parameter tolerance measurements of rotation, azimuth, and scale. Real-time optical correlation recognition testing of this OPR system demonstrates that by using the preprocessing, the post-processing, the nonlinear algorithm of optimum filtering, the reference-filter-library construction technique, and multithread programming, a high recognition probability and recognition rate were obtained for the real-time multiobject distortion-invariant OPR system. The recognition reliability and rate were improved greatly. These techniques are very useful for automatic target recognition.

  2. Coded aperture solution for improving the performance of traffic enforcement cameras

    NASA Astrophysics Data System (ADS)

    Masoudifar, Mina; Pourreza, Hamid Reza

    2016-10-01

    A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.

  3. V2S: Voice to Sign Language Translation System for Malaysian Deaf People

    NASA Astrophysics Data System (ADS)

    Mean Foong, Oi; Low, Tang Jung; La, Wai Wan

    The process of learning and understanding sign language may be cumbersome to some; this paper therefore proposes a solution to this problem by providing a voice (English language) to sign language translation system using speech and image processing techniques. Speech processing, which includes speech recognition, is the study of recognizing the words being spoken regardless of who the speaker is. This project uses template-based recognition as the main approach, in which the V2S system first needs to be trained with speech patterns based on some generic spectral parameter set. These spectral parameter sets are then stored as templates in a database. The system performs the recognition process by matching the parameter set of the input speech against the stored templates and finally displays the sign language in video format. Empirical results show that the system has an 80.3% recognition rate.
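
    The template-matching stage described above compares the spectral parameters of the input speech against stored templates. The abstract does not specify the matching algorithm; dynamic time warping (DTW) is a common choice for such template-based recognisers, sketched here on toy one-dimensional feature sequences (words and values are illustrative):

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences.

    Each element stands for a frame-level spectral parameter (a single
    float here for simplicity); DTW aligns sequences of different
    speaking rates before comparing them.
    """
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

templates = {"hello": [1.0, 2.0, 3.0, 2.0], "bye": [3.0, 1.0, 0.5]}
spoken = [1.0, 2.1, 2.9, 3.0, 2.0]   # a slower rendition of "hello"
best = min(templates, key=lambda w: dtw_distance(spoken, templates[w]))
print(best)  # the closest template wins
```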

  4. Analysis of the hand vein pattern for people recognition

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Cristóbal, G.; Marcos, J. Victor; Padilla-Vivanco, A.; Hurtado Pérez, R.

    2015-09-01

    The shape of the hand vascular pattern contains useful and unique features that can be used for identifying and authenticating people, with applications in access control, medicine, and financial services. In this work, an optical system for image acquisition of the hand vascular pattern is implemented. It consists of a CCD camera with sensitivity in the IR and a light source with emission at 880 nm. The IR radiation interacts with the deoxyhemoglobin, hemoglobin, and water present in the blood of the veins, making it possible to see the vein pattern beneath the skin. Segmentation of the region of interest (ROI) is achieved using geometrical moments to locate the centroid of an image. For enhancement of the vein pattern, we use histogram equalization and Contrast Limited Adaptive Histogram Equalization (CLAHE). In order to remove unnecessary information such as body hair and skin folds, a low-pass filter is applied. A method based on geometric moments is used to obtain invariant descriptors of the input images. The classification task is performed using Artificial Neural Network (ANN) and K-Nearest Neighbors (K-nn) algorithms. Experimental results on our database show a correct-classification rate higher than 86.36% with the ANN for 912 images of 38 people with 12 versions each.
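
    The centroid used above to anchor the ROI comes directly from the raw geometric moments m00, m10, and m01. A minimal sketch (the toy image is illustrative):

```python
def centroid(img):
    """Centroid (xc, yc) of a grey image from raw geometric moments.

    m00 is the total intensity and m10/m01 the first-order moments;
    the centroid (m10/m00, m01/m00) anchors the region of interest
    before descriptor extraction.
    """
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    return m10 / m00, m01 / m00

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(centroid(img))  # (1.0, 1.0): all intensity mass at the centre pixel
```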

  5. Different foveal schisis patterns in each retinal layer in eyes with hereditary juvenile retinoschisis evaluated by en-face optical coherence tomography.

    PubMed

    Yoshida-Uemura, Tomoyo; Katagiri, Satoshi; Yokoi, Tadashi; Nishina, Sachiko; Azuma, Noriyuki

    2017-04-01

    To analyze the structures of schisis in eyes with hereditary juvenile retinoschisis using en-face optical coherence tomography (OCT) imaging. In this retrospective observational study, we reviewed the medical records of patients with hereditary juvenile retinoschisis who underwent comprehensive ophthalmic examinations including swept-source OCT. OCT images were obtained from 16 eyes of nine boys (mean age ± standard deviation, 10.6 ± 4.0 years). The horizontal OCT images at the fovea showed inner nuclear layer (INL) schisis in one eye (6.3 %), ganglion cell layer (GCL) and INL schisis in 12 eyes (75.0 %), INL and outer plexiform layer (OPL) schisis in two eyes (12.5 %), and GCL, INL, and OPL schisis in one eye (6.3 %). En-face OCT images showed characteristic schisis patterns in each retinal layer, which were represented by multiple hyporeflective holes in the parafoveal region in the GCL, a spoke-like pattern in the foveal region, a reticular pattern in the parafoveal region in the INL, and multiple hyporeflective polygonal cavities with partitions in the OPL. Our results using en-face OCT imaging clarified different patterns of schisis formation among the GCL, INL, and OPL, which leads to further recognition of the structure in hereditary juvenile retinoschisis.

  6. Neural networks: Alternatives to conventional techniques for automatic docking

    NASA Technical Reports Server (NTRS)

    Vinz, Bradley L.

    1994-01-01

    Automatic docking of orbiting spacecraft is a crucial operation involving the identification of vehicle orientation as well as complex approach dynamics. The chaser spacecraft must be able to recognize the target spacecraft within a scene and achieve accurate closing maneuvers. In a video-based system, a target scene must be captured and transformed into a pattern of pixels. Successful recognition lies in the interpretation of this pattern. Due to their powerful pattern recognition capabilities, artificial neural networks offer a potential role in interpretation and automatic docking processes. Neural networks can reduce the computational time required by existing image processing and control software. In addition, neural networks are capable of recognizing and adapting to changes in their dynamic environment, enabling enhanced performance, redundancy, and fault tolerance. Most neural networks are robust to failure, capable of continued operation with a slight degradation in performance after minor failures. This paper discusses the particular automatic docking tasks neural networks can perform as viable alternatives to conventional techniques.

  7. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.
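
    The series-expansion idea above can be made concrete with a small worked example: a 2-D rotation is the matrix exponential of a generator, so summing matrix powers of the scaled generator recovers the rotation. This is a sketch of the general principle, not the paper's specific algorithm; the truncation depth is illustrative.

```python
import math

def rotation_series(theta, terms=8):
    """Approximate a 2-D rotation by a truncated matrix power series.

    exp(theta * G) with generator G = [[0, -1], [1, 0]] equals the rotation
    matrix; summing powers of theta*G divided by n! illustrates how small
    linear distortions can be modelled by a polynomial series expansion.
    """
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    g = [[0.0, -theta], [theta, 0.0]]
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity: the zeroth-order term
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = matmul(power, g)        # (theta*G)^n
        fact *= n                       # n!
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

r = rotation_series(math.pi / 6)
print(r[0][0], math.cos(math.pi / 6))  # the series closely approximates cos(30 deg)
```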

  8. Error-Correcting Parsing for Syntactic Pattern Recognition

    DTIC Science & Technology

    1977-08-01

    Siromoney, G., Siromoney, R., and Krithivasan, K., "Abstract Families of Matrices and Picture Languages," Computer Graphics and Image Processing, 1971. [The remainder of the scanned abstract is unrecoverable OCR noise.]

  9. Working group organizational meeting

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Scene radiation and atmospheric effects, mathematical pattern recognition and image analysis, information evaluation and utilization, and electromagnetic measurements and signal handling are considered. Research issues in sensors and signals were discussed, including synthetic aperture radar (SAR) reflectometry, SAR processing speed, registration (including overlay of SAR and optical imagery), whole-system radiance calibration, and the lack of requirements for both sensors and systems.

  10. An overview of computer vision

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An overview of computer vision is provided. Image understanding and scene analysis are emphasized, and pertinent aspects of pattern recognition are treated. The basic approach to computer vision systems, the techniques utilized, applications, the current existing systems and state-of-the-art issues and research requirements, who is doing it and who is funding it, and future trends and expectations are reviewed.

  11. A Study of Hand Back Skin Texture Patterns for Personal Identification and Gender Classification

    PubMed Central

    Xie, Jin; Zhang, Lei; You, Jane; Zhang, David; Qu, Xiaofeng

    2012-01-01

    Human hand back skin texture (HBST) is often consistent for a person and distinctive from person to person. In this paper, we study the HBST pattern recognition problem with applications to personal identification and gender classification. A specially designed system is developed to capture HBST images, and an HBST image database was established, which consists of 1,920 images from 80 persons (160 hands). An efficient texton-learning-based method is then presented to classify the HBST patterns. First, textons are learned in the space of filter bank responses from a set of training images using the l1-minimization based sparse representation (SR) technique. Then, under the SR framework, we represent the feature vector at each pixel over the learned dictionary to construct a representation coefficient histogram. Finally, the coefficient histogram is used as the skin texture feature for classification. Experiments on personal identification and gender classification are performed using the established HBST database. The results show that HBST can be used to assist human identification and gender classification. PMID:23012512
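
    The texton-coding stage above can be sketched in simplified form. Instead of the paper's l1-minimization sparse coding, this toy version assigns each filter response to its nearest texton and builds the normalised histogram used as the texture feature; the dictionary and responses are illustrative.

```python
def texton_histogram(responses, textons):
    """Histogram of nearest-texton assignments over filter-bank responses.

    A simplified stand-in for sparse-representation coding: each per-pixel
    response vector votes for its closest texton, and the normalised
    histogram serves as the texture feature for classification.
    """
    hist = [0] * len(textons)
    for r in responses:
        dists = [sum((a - b) ** 2 for a, b in zip(r, t)) for t in textons]
        hist[dists.index(min(dists))] += 1
    total = sum(hist)
    return [h / total for h in hist]

textons = [(0.0, 0.0), (1.0, 1.0)]           # toy learned dictionary
responses = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9), (0.0, 0.2)]
print(texton_histogram(responses, textons))  # [0.5, 0.5]
```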

  12. Image acquisition system for traffic monitoring applications

    NASA Astrophysics Data System (ADS)

    Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben

    1995-03-01

    An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling at up to 160 km/h. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM), and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network; the remote site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g., cars from trucks), and the vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle head-on in both day and night conditions. The image clarity obtained is ideally suited to automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics, and image processing methods that permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed.
The results of field tests presented demonstrate that the vision based Safe-T-Cam system, currently installed on open highways, is capable of producing automatic classification of vehicle class and recording of vehicle numberplates with a success rate around 90 percent in a period of 24 hours.
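
    The detection-and-trigger stage described above can be illustrated with a toy frame-differencing detector. This is a naive stand-in for the VCDM's real-time processing, not the system's actual algorithm; all names and thresholds are illustrative.

```python
def motion_trigger(prev_frame, frame, diff_thresh, count_thresh):
    """Naive frame-differencing detector: trigger when enough pixels change.

    Pixels whose grey level changes by more than diff_thresh between
    consecutive frames are counted; the high-resolution capture would be
    triggered when the count exceeds count_thresh.
    """
    changed = sum(
        1
        for pr, fr in zip(prev_frame, frame)
        for p, f in zip(pr, fr)
        if abs(f - p) > diff_thresh
    )
    return changed >= count_thresh

background = [[10] * 4 for _ in range(4)]
with_car = [row[:] for row in background]
with_car[1][1] = with_car[1][2] = 200      # a bright object enters the lane
print(motion_trigger(background, with_car, diff_thresh=50, count_thresh=2))
```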

  13. Target recognition for ladar range image using slice image

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Wang, Liang

    2015-12-01

    A shape descriptor and a complete shape-based recognition system using slice images as the geometric feature descriptor for ladar range images are introduced. A slice image is a two-dimensional image generated by a three-dimensional Hough transform and a corresponding mathematical transformation. The system consists of two processes: model library construction and recognition. In the model library construction process, a series of range images is obtained by sampling the model object at preset attitude angles; all the range images are then converted into slice images. The number of slice images is reduced by clustering analysis and finding a representative, which reduces the size of the model library. In the recognition process, the slice image of the scene is compared with the slice images in the model library, and recognition is based on this comparison. Simulated ladar range images are used to analyze the recognition and misjudgment rates, and the slice-image representation method is compared with a moment-invariants representation method. The experimental results show that, both in noise-free conditions and with ladar noise, the system has a high recognition rate and a low misjudgment rate. The comparison experiment demonstrates that the slice image has better representation ability than moment invariants.

  14. A multistream model of visual word recognition.

    PubMed

    Allen, Philip A; Smith, Albert F; Lien, Mei-Ching; Kaut, Kevin P; Canfield, Angie

    2009-02-01

    Four experiments are reported that test a multistream model of visual word recognition, which associates letter-level and word-level processing channels with three known visual processing streams isolated in macaque monkeys: the magno-dominated (MD) stream, the interblob-dominated (ID) stream, and the blob-dominated (BD) stream (Van Essen & Anderson, 1995). We show that mixing the color of adjacent letters of words does not result in facilitation of response times or error rates when the spatial-frequency pattern of a whole word is familiar. However, facilitation does occur when the spatial-frequency pattern of a whole word is not familiar. This pattern of results is not due to different luminance levels across the different-colored stimuli and the background because isoluminant displays were used. Also, the mixed-case, mixed-hue facilitation occurred when different display distances were used (Experiments 2 and 3), so this suggests that image normalization can adjust independently of object size differences. Finally, we show that this effect persists in both spaced and unspaced conditions (Experiment 4)--suggesting that inappropriate letter grouping by hue cannot account for these results. These data support a model of visual word recognition in which lower spatial frequencies are processed first in the more rapid MD stream. The slower ID and BD streams may process some lower spatial frequency information in addition to processing higher spatial frequency information, but these channels tend to lose the processing race to recognition unless the letter string is unfamiliar to the MD stream--as with mixed-case presentation.

  15. Panoramic autofluorescence: highlighting retinal pathology.

    PubMed

    Slotnick, Samantha; Sherman, Jerome

    2012-05-01

    Recent technological advances in fundus autofluorescence (FAF) are providing new opportunities for insight into retinal physiology and pathophysiology. FAF provides distinctly different imaging information than standard photography or color separation. A review of the basis for this imaging technology is included to help the clinician understand how to interpret FAF images. Cases are presented to illustrate image interpretation. Optos, which manufactures equipment for simultaneous panoramic imaging, has recently outfitted several units with AF capabilities. Six cases are presented in which panoramic autofluorescent (PAF) images highlight retinal pathology, using Optos' Ultra-Widefield technology. Supportive imaging technologies, such as Optomap® images and spectral domain optical coherence tomography (SD-OCT), are used to assist in the clinical interpretation of retinal pathology detected on PAF. Hypofluorescent regions on FAF are identified to occur along with a disruption in the photoreceptors and/or retinal pigment epithelium, as borne out on SD-OCT. Hyperfluorescent regions on FAF occur at the advancing zones of retinal degeneration, indicating impending damage. PAF enables such inferences to be made in retinal areas which lie beyond the reach of SD-OCT imaging. PAF also enhances clinical pattern recognition over a large area and in comparison with the fellow eye. Symmetric retinal degenerations often occur with genetic conditions, such as retinitis pigmentosa, and may impel the clinician to recommend genetic testing. Autofluorescent ophthalmoscopy is a non-invasive procedure that can detect changes in metabolic activity at the retinal pigment epithelium before clinical ophthalmoscopy. Already, AF is being used as an adjunct technology to fluorescein angiography in cases of age-related macular degeneration. Both hyper- and hypoautofluorescent changes are indicative of pathology. 
Peripheral retinal abnormalities may precede central retinal impacts, potentially providing early signs for intervention before impacting visual acuity. The panoramic image enhances clinical pattern recognition over a large area and in comparison between eyes. Optos' Ultra-Widefield technology is capable of capturing high-resolution images of the peripheral retina without requiring dilation.

  16. Correcting geometric and photometric distortion of document images on a smartphone

    NASA Astrophysics Data System (ADS)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
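
    The perspective correction described above amounts to applying a homography that maps the distorted page corners back to an upright rectangle, which is the same effect as sending the horizontal and vertical vanishing points to infinity. A minimal direct-linear-transform sketch, assuming four known corner correspondences (the coordinates are illustrative, not from the paper):

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: the 3x3 homography mapping src to dst.

    Each correspondence contributes two rows to the DLT system; the
    homography is the null vector of that system, recovered here as the
    singular vector for the smallest singular value.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    a = np.array(rows, dtype=float)
    _, _, vt = np.linalg.svd(a)
    return vt[-1].reshape(3, 3)

corners = [(10, 12), (205, 8), (220, 300), (5, 290)]   # skewed page corners
target = [(0, 0), (200, 0), (200, 300), (0, 300)]      # upright page
h = homography(corners, target)
p = h @ np.array([10, 12, 1.0])
print(p[:2] / p[2])  # the first corner is mapped onto (0, 0)
```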

  17. Exploring Deep Learning and Sparse Matrix Format Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Y.; Liao, C.; Shen, X.

    We proposed to explore the use of Deep Neural Networks (DNNs) to address these longstanding barriers. The recent rapid progress of DNN technology has had a large impact in many fields, significantly improving prediction accuracy over traditional machine learning techniques in image classification, speech recognition, machine translation, and so on. To some degree, these tasks resemble the decision making in many HPC tasks, including the aforementioned format selection for SpMV and linear solver selection. For instance, sparse matrix format selection is akin to image classification, such as telling whether an image contains a dog or a cat: in both problems, the right decisions are primarily determined by the spatial patterns of the elements in an input. For image classification the patterns are of pixels; for sparse matrix format selection they are of non-zero elements. DNNs can be naturally applied if we regard a sparse matrix as an image and the format selection or solver selection as a classification problem.
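
    The "sparse matrix as an image" idea above can be sketched by rendering the non-zero pattern into a fixed-size density grid, the kind of spatial summary a CNN-style format-selection model could take as input. This is a toy illustration, not the report's pipeline; the grid size is arbitrary.

```python
import numpy as np

def density_image(rows, cols, shape, out=8):
    """Render a sparse matrix's non-zero pattern as a fixed-size 'image'.

    Each cell of an out x out grid counts the non-zeros falling in the
    corresponding block of the matrix, so matrices of any size map to the
    same input resolution for a classifier.
    """
    img = np.zeros((out, out))
    for r, c in zip(rows, cols):
        img[r * out // shape[0], c * out // shape[1]] += 1
    return img

# toy diagonal matrix: non-zeros hug the main diagonal
n = 64
rows = list(range(n))
cols = list(range(n))
img = density_image(rows, cols, (n, n))
print(img.diagonal())  # all non-zero mass lies on the image diagonal
```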

  18. Measurement of three-dimensional velocity profiles using forward scattering particle image velocimetry (FSPIV) and neural net pattern recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovryn, B.; Wright, T.; Khaydarov, J.D.

    1995-12-31

    The authors employ Forward Scattering Particle Image Velocimetry (FSPIV) to measure all three components of the velocity of a buoyant polystyrene particle in oil. Unlike conventional particle image velocimetry (PIV) techniques, FSPIV employs coherent or partially coherent back illumination and collects the forward-scattered wavefront; additionally, the field of view is microscopic. Using FSPIV, it is possible to easily identify the particle's centroid and to simultaneously obtain the fluid velocity in different planes perpendicular to the viewing direction without changing the collection or imaging optics. The authors have trained a neural network to identify the scattering pattern as a function of displacement along the optical axis (axial defocus) and determine the transverse velocity by tracking the centroid as a function of time. They present preliminary results from Mie theory calculations that include the effect of the imaging system. To their knowledge, this is the first work of this kind; preliminary results are encouraging.

  19. Imaging of denervation in the head and neck.

    PubMed

    Borges, Alexandra

    2010-05-01

    Denervation changes may be the first sign of a cranial nerve injury. Recognition of denervation patterns can be used to determine the site and extent of a lesion and to tailor imaging studies according to the most likely location of an insult along the course of the affected cranial nerve(s). In addition, the extent of denervation can be used to predict functional recovery after treatment. On imaging, signs of denervation can be misleading, as they often mimic recurrent neoplasm or inflammatory conditions. Imaging can both depict denervation-related changes and establish their cause. This article briefly reviews the anatomy of the extracranial course of the motor cranial nerves, with particular emphasis on the muscles supplied by each nerve, the imaging features of the various stages of denervation, the different patterns of denervation that may be helpful in the topographic diagnosis of nerve lesions, and the most common causes of cranial nerve injuries leading to denervation. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  20. Skeletonization of gray-scale images by gray weighted distance transform

    NASA Astrophysics Data System (ADS)

    Qian, Kai; Cao, Siqi; Bhattacharya, Prabir

    1997-07-01

    In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image, consisting of a set of one-pixel-width lines that highlight its significant features. There has been interest in applying thinning directly to gray-scale images, motivated by the desire to process images whose meaningful information is distributed over different levels of gray intensity. In this paper, a new algorithm is presented that can skeletonize both black-and-white and gray-scale pictures. The algorithm is based on the gray-weighted distance transform, can process gray-scale pictures whose intensity is not uniformly distributed, and preserves the topology of the original picture. The process includes a preliminary investigation of the 'hollows' in the gray-scale image; depending on their statistically significant depth, these hollows may or may not be treated as topological constraints for the skeleton structure. The algorithm can also be executed on a parallel machine, as all operations are local. Some examples are discussed to illustrate the algorithm.
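
    The gray-weighted distance transform the algorithm is built on can be sketched as a shortest-path computation on the pixel grid, where stepping onto a pixel costs its grey level, so paths prefer dark "valleys". A minimal Dijkstra-based sketch on a 4-connected grid (a general illustration of the transform, not the paper's specific implementation):

```python
import heapq

def gray_weighted_distance(img, seeds):
    """Gray-weighted distance transform on a 4-connected pixel grid.

    The cost of stepping onto a pixel is its grey level; distances grow
    outward from the seed pixels along the cheapest paths.
    """
    h, w = len(img), len(img[0])
    dist = [[float("inf")] * w for _ in range(h)]
    heap = []
    for r, c in seeds:
        dist[r][c] = 0
        heapq.heappush(heap, (0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + img[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

img = [[1, 9, 1],
       [1, 9, 1],
       [1, 1, 1]]
d = gray_weighted_distance(img, [(0, 0)])
print(d[0][2])  # cheaper to go around the bright ridge (cost 6) than across it
```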

  1. Multivariate analysis of fMRI time series: classification and regression of brain responses using machine learning.

    PubMed

    Formisano, Elia; De Martino, Federico; Valente, Giancarlo

    2008-09-01

    Machine learning and pattern recognition techniques are being increasingly employed in functional magnetic resonance imaging (fMRI) data analysis. By taking into account the full spatial pattern of brain activity measured simultaneously at many locations, these methods allow detecting subtle, non-strictly localized effects that may remain invisible to the conventional analysis with univariate statistical methods. In typical fMRI applications, pattern recognition algorithms "learn" a functional relationship between brain response patterns and a perceptual, cognitive or behavioral state of a subject expressed in terms of a label, which may assume discrete (classification) or continuous (regression) values. This learned functional relationship is then used to predict the unseen labels from a new data set ("brain reading"). In this article, we describe the mathematical foundations of machine learning applications in fMRI. We focus on two methods, support vector machines and relevance vector machines, which are respectively suited for the classification and regression of fMRI patterns. Furthermore, by means of several examples and applications, we illustrate and discuss the methodological challenges of using machine learning algorithms in the context of fMRI data analysis.
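
    The "brain reading" setup above, learning a mapping from multi-voxel response patterns to condition labels, can be illustrated with a toy decoder. This uses a least-squares linear classifier on synthetic data as a minimal stand-in for the support vector and relevance vector machines discussed in the article; all sizes and signal strengths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-voxel patterns: 40 trials x 20 voxels, two conditions
# differing by a weak distributed spatial signal.
n_trials, n_voxels = 40, 20
labels = np.repeat([1.0, -1.0], n_trials // 2)
signal = rng.normal(0, 1, n_voxels)            # the condition-specific pattern
data = rng.normal(0, 1, (n_trials, n_voxels)) + np.outer(labels, signal)

# Least-squares linear decoder (a stand-in for SVM/RVM, not the
# article's actual method): fit weights, then predict by sign.
w, *_ = np.linalg.lstsq(data, labels, rcond=None)
predicted = np.sign(data @ w)
accuracy = (predicted == labels).mean()
print(accuracy)  # well above the 0.5 chance level on the training data
```

    In practice the decoder would be evaluated on held-out trials (cross-validation) rather than on the training data, as the article's examples do.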

  2. Investigation into the visual perceptive ability of anaesthetists during ultrasound-guided interscalene and femoral blocks conducted on soft embalmed cadavers: a randomised single-blind study.

    PubMed

    Mustafa, A; Seeley, J; Munirama, S; Columb, M; McKendrick, M; Schwab, A; Corner, G; Eisma, R; Mcleod, G

    2018-04-01

    Errors may occur during regional anaesthesia whilst searching for nerves, needle tips, and test doses. Poor visual search impacts on decision making, clinical intervention, and patient safety. We conducted a randomised single-blind study in a single university hospital. Twenty trainees and two consultants examined the paired B-mode and fused B-mode and elastography video recordings of 24 interscalene and 24 femoral blocks conducted on two soft embalmed cadavers. Perineural injection was randomised equally to 0.25, 0.5, and 1.0 ml volumes. Tissue displacement perceived on both imaging modalities was defined as 'target' or 'distractor'. Our primary objective was to test the anaesthetists' perception of the number and proportion of targets and distractors on B-mode and fused elastography videos collected during femoral and interscalene nerve block on soft embalmed cadavers. Our secondary objectives were to determine the differences between novices and experts and between test-dose volumes, and to measure the area and brightness of spread and strain patterns. All anaesthetists recognised perineural spread using 0.25 ml volumes. Distractor patterns were recognised in 133 (12%) of B-mode patterns and in 403 (38%) of fused B-mode and elastography patterns; P<0.001. With elastography, novice recognition improved from 12 to 37% (P<0.001), and consultant recognition increased from 24 to 53% (P<0.001). Distractor recognition improved from 8 to 31% using 0.25 ml volumes (P<0.001) and from 15 to 45% using 1 ml volumes (P<0.001). Visual search improved with fused elastography, larger test-dose volumes, and greater experience. A need exists to investigate image search strategies. Copyright © 2018 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.

  3. The effect of mild acute stress during memory consolidation on emotional recognition memory.

    PubMed

    Corbett, Brittany; Weinberg, Lisa; Duarte, Audrey

    2017-11-01

    Stress during consolidation improves recognition memory performance. Generally, this memory benefit is greater for emotionally arousing stimuli than for neutral stimuli. The strength of the stressor also plays a role in memory performance, with memory performance improving up to a moderate level of stress and worsening thereafter. As our daily stressors are generally minimal in strength, we chose to induce mild acute stress to determine its effect on memory performance. In the current study, we investigated whether mild acute stress during consolidation improves memory performance for emotionally arousing images. To investigate this, we had participants encode highly arousing negative, minimally arousing negative, and neutral images. We administered the Montreal Imaging Stress Task (MIST) to half of the participants and a control task to the other half directly after encoding (i.e. during consolidation), and tested recognition 48 h later. We found no difference in memory performance between the stress and control groups. We found a graded pattern in confidence, with responders in the stress group having the least confidence in their hits and controls having the most. Across groups, we found that highly arousing negative images were better remembered than minimally arousing negative or neutral images. Although stress did not affect memory accuracy, responders, as defined by cortisol reactivity, were less confident in their decisions. Our results suggest that the daily stressors humans experience, regardless of their emotional content, do not have adverse effects on memory. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Image dependency in the recognition of newly learnt faces.

    PubMed

    Longmore, Christopher A; Santos, Isabel M; Silva, Carlos F; Hall, Abi; Faloyin, Dipo; Little, Emily

    2017-05-01

    Research investigating the effect of lighting and viewpoint changes on unfamiliar and newly learnt faces has revealed that such recognition is highly image dependent and that changes in either of these leads to poor recognition accuracy. Three experiments are reported to extend these findings by examining the effect of apparent age on the recognition of newly learnt faces. Experiment 1 investigated the ability to generalize to novel ages of a face after learning a single image. It was found that recognition was best for the learnt image with performance falling the greater the dissimilarity between the study and test images. Experiments 2 and 3 examined whether learning two images aids subsequent recognition of a novel image. The results indicated that interpolation between two studied images (Experiment 2) provided some additional benefit over learning a single view, but that this did not extend to extrapolation (Experiment 3). The results from all studies suggest that recognition was driven primarily by pictorial codes and that the recognition of faces learnt from a limited number of sources operates on stored images of faces as opposed to more abstract, structural, representations.

  5. Vehicle license plate recognition in dense fog based on improved atmospheric scattering model

    NASA Astrophysics Data System (ADS)

    Tang, Chunming; Lin, Jun; Chen, Chunkai; Dong, Yancheng

    2018-04-01

    An effective method based on an improved atmospheric scattering model is proposed in this paper to handle the problem of vehicle license plate location and recognition in dense fog. Dense fog detection is performed first by top-hat transformation and vertical edge detection, and the moving vehicle image is separated from the traffic video image. After the vehicle image is decomposed into two layers, a structure layer and a texture layer, the glow layer is separated from the structure layer to obtain the background layer. By performing mean-pooling and bicubic interpolation, the atmospheric light map of the background layer can be predicted, while the transmission of the background layer is estimated from the grayed glow layer, whose gray values are adjusted by linear mapping. Then, according to the improved atmospheric scattering model, the final restored image is obtained by fusing the restored background layer and the optimized texture layer. License plate location is then performed by a series of morphological operations, connected-domain analysis, and various validations. Character extraction is achieved by projection. Finally, an offline-trained pattern classifier of hybrid discriminative restricted Boltzmann machines (HDRBM) is applied to recognize the characters. Experimental results on thorough data sets demonstrate that the proposed method achieves high recognition accuracy and works robustly in dense-fog traffic environments around the clock.
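
    The restoration step in this pipeline inverts the standard atmospheric scattering model I = J*t + A*(1 - t), where J is the scene radiance, A the atmospheric light, and t the transmission. A minimal numpy sketch of that inversion, assuming the airlight and transmission estimates are already given (an illustration of the model, not the authors' full layered pipeline):

```python
import numpy as np

def restore(hazy, airlight, transmission, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy: observed intensities in [0, 1]
    airlight: estimated atmospheric light A (scalar or per-pixel map)
    transmission: estimated transmission map t
    """
    t = np.maximum(transmission, t_min)  # clamp to avoid amplifying noise
    restored = (hazy - airlight) / t + airlight
    return np.clip(restored, 0.0, 1.0)

# Synthetic check: haze a known scene value, then invert the model.
scene = np.array([0.5])
A, t = 0.75, 0.5
hazy = scene * t + A * (1 - t)   # 0.5*0.5 + 0.75*0.5 = 0.625
print(restore(hazy, A, t))       # → [0.5]
```

    Clamping the transmission (t_min) is a common safeguard against blowing up noise where the fog is thickest.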

  6. Analysis of digitized cervical images to detect cervical neoplasia

    NASA Astrophysics Data System (ADS)

    Ferris, Daron G.

    2004-05-01

    Cervical cancer is the second most common malignancy in women worldwide. If the disease is diagnosed in the premalignant stage, cure is virtually assured. Although the Papanicolaou (Pap) smear has significantly reduced the incidence of cervical cancer where implemented, the test is only moderately sensitive, highly subjective, and skilled-labor intensive. Newer optical screening tests (cervicography, direct visual inspection, and speculoscopy), including fluorescent and reflective spectroscopy, are fraught with certain weaknesses. Yet the integration of optical probes for the detection and discrimination of cervical neoplasia with automated image analysis methods may provide an effective screening tool for early detection of cervical cancer, particularly in resource-poor nations. Investigative studies are needed to validate the potential for automated classification and recognition algorithms. By applying image analysis techniques for registration, segmentation, pattern recognition, and classification, cervical neoplasia may be reliably discriminated from normal epithelium. The National Cancer Institute (NCI), in cooperation with the National Library of Medicine (NLM), has embarked on a program to begin this and other similar investigative studies.

  7. Efficiency and Flexibility of Fingerprint Scheme Using Partial Encryption and Discrete Wavelet Transform to Verify User in Cloud Computing.

    PubMed

    Yassin, Ali A

    2014-01-01

    The security of digital images is considered increasingly essential, and the fingerprint plays a major role in image-based verification. Fingerprint recognition is a biometric verification scheme that applies pattern recognition techniques to individual fingerprint images. In the cloud environment, an adversary can intercept information, so data must be secured against eavesdroppers. Unfortunately, encryption and decryption functions are slow and often computationally demanding. Fingerprint techniques require extra hardware and software and can be defeated by artificial gummy fingers (spoof attacks). Additionally, when a large number of users are verified at the same time, the mechanism becomes slow. In this paper, we combine partial encryption of the user's fingerprint with the discrete wavelet transform to obtain a new fingerprint verification scheme. The proposed scheme overcomes these problems: it is low cost, reduces the computational requirements for huge volumes of fingerprint images, and resists well-known attacks. In addition, experimental results illustrate that the proposed scheme performs well in verifying users' fingerprints.
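
    The core idea, encrypting only part of the wavelet representation, can be sketched with a one-level Haar transform and a keyed coefficient permutation standing in for the cipher. This is an illustration of wavelet-domain partial encryption, not the paper's exact scheme:

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform (even image dimensions)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation (low-frequency) band
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

def encrypt_partial(img, key):
    """Scramble only the approximation band with a keyed permutation."""
    ll, lh, hl, hh = haar2d(img)
    perm = np.random.default_rng(key).permutation(ll.size)
    ll_enc = ll.ravel()[perm].reshape(ll.shape)
    return ihaar2d(ll_enc, lh, hl, hh)

def decrypt_partial(img, key):
    """Undo the keyed permutation on the approximation band."""
    ll, lh, hl, hh = haar2d(img)
    inv = np.argsort(np.random.default_rng(key).permutation(ll.size))
    ll_dec = ll.ravel()[inv].reshape(ll.shape)
    return ihaar2d(ll_dec, lh, hl, hh)

rng = np.random.default_rng(0)
img = rng.random((8, 8))
enc = encrypt_partial(img, key=42)
print(np.allclose(decrypt_partial(enc, key=42), img))  # → True
```

    Operating on only one band keeps the workload far below encrypting the whole image, which is the computational saving the abstract refers to; a real deployment would use a proper cipher rather than a permutation.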

  8. Efficiency and Flexibility of Fingerprint Scheme Using Partial Encryption and Discrete Wavelet Transform to Verify User in Cloud Computing

    PubMed Central

    Yassin, Ali A.

    2014-01-01

    The security of digital images is considered increasingly essential, and the fingerprint plays a major role in image-based verification. Fingerprint recognition is a biometric verification scheme that applies pattern recognition techniques to individual fingerprint images. In the cloud environment, an adversary can intercept information, so data must be secured against eavesdroppers. Unfortunately, encryption and decryption functions are slow and often computationally demanding. Fingerprint techniques require extra hardware and software and can be defeated by artificial gummy fingers (spoof attacks). Additionally, when a large number of users are verified at the same time, the mechanism becomes slow. In this paper, we combine partial encryption of the user's fingerprint with the discrete wavelet transform to obtain a new fingerprint verification scheme. The proposed scheme overcomes these problems: it is low cost, reduces the computational requirements for huge volumes of fingerprint images, and resists well-known attacks. In addition, experimental results illustrate that the proposed scheme performs well in verifying users' fingerprints. PMID:27355051

  9. Luggage and shipped goods.

    PubMed

    Vogel, H; Haller, D

    2007-08-01

    Controls of luggage and shipped goods are frequently carried out; the possibilities of X-ray technology are demonstrated here. There are different imaging techniques: the main concepts are transmission imaging, backscatter imaging, computed tomography, dual-energy imaging, and combinations of these methods. The images come from manufacturers and personal collections. The search mainly concerns weapons, explosives, and drugs, and furthermore animals and stolen goods. Special problems are posed by the control of letters and the detection of Improvised Explosive Devices (IEDs). One has to expect that controls will increase and that imaging with X-rays will play its part. Pattern recognition software will be used for analysis, driven by economy and by the demand for higher efficiency; man and computer will produce more security than man alone.

  10. A study of image quality for radar image processing. [synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    King, R. W.; Kaupp, V. H.; Waite, W. P.; Macdonald, H. C.

    1982-01-01

    Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
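
    Several of these metrics are straightforward to compute. A sketch of three of them, the dynamic range, the signal-to-noise ratio, and the normalized mean-square error, assuming images are given as floating-point arrays (the perceptual and radar-specific metrics are omitted):

```python
import numpy as np

def dynamic_range(img):
    """Metric (1): dynamic range of displayed intensities."""
    return float(img.max() - img.min())

def snr_db(signal, noisy):
    """Metric (2): signal-to-noise ratio in decibels."""
    noise = noisy - signal
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

def nmse(reference, degraded):
    """Metric (5): normalized mean-square error (geometric fidelity)."""
    return float(np.sum((reference - degraded) ** 2) / np.sum(reference ** 2))

ref = np.array([0.0, 0.5, 1.0])
print(nmse(ref, ref), dynamic_range(ref))  # → 0.0 1.0
```

    An undegraded image gives an NMSE of zero; applying the same functions to progressively degraded simulated radar images is the kind of comparison the abstract describes.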

  11. Method for secure electronic voting system: face recognition based approach

    NASA Astrophysics Data System (ADS)

    Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran

    2017-06-01

    In this paper, we propose a framework for a low-cost, secure electronic voting system based on face recognition. Essentially, the Local Binary Pattern (LBP) is used to characterize face features in texture format, followed by a chi-square distance measure for image classification. Two parallel systems, based on a smartphone application and a web application, are developed for the face learning and verification modules. The proposed system has two-tier security, using a person ID followed by face verification, with a class-specific threshold controlling the security level of the face verification step. Our system is evaluated on three standard databases and one real home-based database and achieves satisfactory recognition accuracies. Consequently, our proposed system provides a secure, hassle-free voting system that is less intrusive compared with other biometrics.
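
    The feature and matching steps described here, LBP coding followed by a chi-square comparison, can be sketched as follows. The basic 3x3 LBP with 256 bins is used; the neighbor ordering and normalization are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 3x3 Local Binary Pattern histogram (256 bins, normalized)."""
    c = gray[1:-1, 1:-1]                      # center pixels
    # 8 neighbours, clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        n = gray[1 + dy:gray.shape[0] - 1 + dy,
                 1 + dx:gray.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

flat = np.full((6, 6), 0.5)
h1 = lbp_histogram(flat)
print(chi_square(h1, h1))  # → 0.0
```

    In a verification setting, the chi-square distance between a probe histogram and an enrolled histogram would be compared against the class-specific threshold mentioned in the abstract.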

  12. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    PubMed

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis, a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.
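
    The barcode idea concerns where image structure lives: horizontal bands produce variation along the vertical axis. A toy numpy illustration of that intuition, collapsing an image into per-axis luminance profiles (a hypothetical example, not the study's stimuli or analysis):

```python
import numpy as np

def axis_profiles(gray):
    """Collapse an image into 1-D luminance profiles.

    vertical:   one value per row (structure along the vertical axis,
                i.e. the horizontal-band "barcode")
    horizontal: one value per column
    """
    return gray.mean(axis=1), gray.mean(axis=0)

# Horizontal stripes carry their structure in the vertical profile only.
stripes = np.tile(np.array([[0.0], [1.0]]), (4, 8))  # 8x8, alternating rows
v, h = axis_profiles(stripes)
print(np.std(v) > np.std(h))  # → True
```

    Distorting the rows of such an image scrambles the vertical profile while leaving the horizontal one untouched, which is the asymmetry the experiments exploit.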

  13. Hierarchical content-based image retrieval by dynamic indexing and guided search

    NASA Astrophysics Data System (ADS)

    You, Jane; Cheung, King H.; Liu, James; Guo, Linong

    2003-12-01

    This paper presents a new approach to content-based image retrieval by using dynamic indexing and guided search in a hierarchical structure, and extending data mining and data warehousing techniques. The proposed algorithms include: a wavelet-based scheme for multiple image feature extraction, the extension of a conventional data warehouse and an image database to an image data warehouse for dynamic image indexing, an image data schema for hierarchical image representation and dynamic image indexing, a statistically based feature selection scheme to achieve flexible similarity measures, and a feature component code to facilitate query processing and guide the search for the best matching. A series of case studies are reported, which include a wavelet-based image color hierarchy, classification of satellite images, tropical cyclone pattern recognition, and personal identification using multi-level palmprint and face features.

  14. Advanced methods in NDE using machine learning approaches

    NASA Astrophysics Data System (ADS)

    Wunderlich, Christian; Tschöpe, Constanze; Duckhorn, Frank

    2018-04-01

    Machine learning (ML) methods and algorithms have recently been applied with great success in quality control and predictive maintenance. Their goal, to build new and/or leverage existing algorithms that learn from training data and give accurate predictions or find patterns, particularly in new and unseen but similar data, fits perfectly to Non-Destructive Evaluation (NDE). The advantages of ML in NDE are obvious in tasks such as pattern recognition in acoustic signals or automated processing of images from X-ray, ultrasonic, or optical methods. Fraunhofer IKTS uses machine learning algorithms in acoustic signal analysis, an approach that has been applied to a wide variety of quality assessment tasks. The principal approach is based on acoustic signal processing with primary and secondary analysis steps, followed by a cognitive system that creates model data. Already in the secondary analysis step, unsupervised learning algorithms such as principal component analysis are used to simplify data structures. In the cognitive part of the software, further unsupervised and supervised learning algorithms are trained; afterwards, sensor signals from unknown samples can be recognized and classified automatically by the trained algorithms. Recently the IKTS team was able to transfer the software for signal processing and pattern recognition to a small printed circuit board (PCB): algorithms are still trained on an ordinary PC, but the trained algorithms run on a digital signal processor and an FPGA chip. The identical approach will be used for pattern recognition in image analysis of OCT pictures. Some key requirements have to be fulfilled, however: a sufficiently large set of training data, a high signal-to-noise ratio, and an optimized and exact fixation of components. The automated testing can then be done by the machine. By integrating the test data of many components along the value chain, further optimization, including lifetime and durability prediction based on big data, becomes possible, even if components are used in different versions or configurations. This is the promise behind German Industry 4.0.
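
    The unsupervised simplification step mentioned above, principal component analysis, can be sketched with numpy's SVD. The synthetic rank-1 "sensor signals" below are an assumption for illustration only:

```python
import numpy as np

def pca_reduce(signals, n_components):
    """Project signals (rows = samples) onto their top principal components."""
    mean = signals.mean(axis=0)
    centered = signals - mean
    # SVD of the centered data; rows of vt are the principal directions
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T
    return scores, vt[:n_components], mean

def pca_reconstruct(scores, components, mean):
    """Map reduced scores back to the original signal space."""
    return scores @ components + mean

# Rank-1 synthetic signals: one component reconstructs them exactly.
rng = np.random.default_rng(1)
base = rng.random(16)
signals = np.outer(rng.random(10), base)
scores, comps, mean = pca_reduce(signals, 1)
print(np.allclose(pca_reconstruct(scores, comps, mean), signals))  # → True
```

    Keeping only the leading components is what makes the downstream classifiers small enough to run on a DSP or FPGA, as the abstract describes.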

  15. Scene text recognition in mobile applications by character descriptor and structure configuration.

    PubMed

    Yi, Chucai; Tian, Yingli

    2014-07-01

    Text characters and strings in natural scenes can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and varied background interference. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene images. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model the character structure of each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.

  16. Melanoma recognition framework based on expert definition of ABCD for dermoscopic images.

    PubMed

    Abbas, Qaisar; Emre Celebi, M; Garcia, Irene Fondón; Ahmad, Waqar

    2013-02-01

    Melanoma recognition based on the clinical ABCD rule is widely used for the diagnosis of pigmented skin lesions in dermoscopy images. However, current computer-aided diagnostic (CAD) systems for classification between malignant and nevus lesions using the ABCD criteria are imperfect due to the use of ineffective computerized techniques. In this study, a novel melanoma recognition system (MRS) is presented, focusing on extracting features from the lesions using the ABCD criteria. The complete MRS consists of the following six major steps: transformation to the CIEL*a*b* color space, preprocessing to enhance the tumor region, black-frame and hair artifact removal, tumor-area segmentation, quantification and normalization of features using the ABCD criteria, and finally feature selection and classification. The MRS for melanoma-nevus lesions is tested on a total of 120 dermoscopic images. To test the performance of the MRS diagnostic classifier, the area under the receiver operating characteristic curve (AUC) is utilized. The proposed classifier achieved a sensitivity of 88.2%, specificity of 91.3%, and AUC of 0.880. The experimental results show that the proposed MRS can accurately distinguish between malignant and benign lesions. The MRS technique is fully automatic and can easily be integrated into an existing CAD system. To increase the classification accuracy of the MRS, the CASH pattern recognition technique, visual inspection by dermatologists, contextual information from patients, and histopathological tests could be incorporated to investigate their impact on the system. © 2012 John Wiley & Sons A/S.
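
    As an illustration of the kind of feature quantification the ABCD rule calls for, the A (asymmetry) criterion can be approximated on a segmented lesion mask by mirroring it and measuring the non-overlap. This is a simplified stand-in that assumes the mask is cropped to its bounding box; it is not the system's actual feature set:

```python
import numpy as np

def asymmetry_score(mask):
    """Fraction of lesion area that does not overlap its left-right mirror
    image (0 = perfectly symmetric about the vertical axis)."""
    mirrored = mask[:, ::-1]
    non_overlap = np.logical_xor(mask, mirrored).sum()
    return non_overlap / (2.0 * mask.sum())

sym = np.array([[0, 1, 1, 0],
                [1, 1, 1, 1],
                [0, 1, 1, 0]])
print(asymmetry_score(sym))  # → 0.0
```

    A real system would align the mirror axis with the lesion's principal axes and combine this score with the border, color, and diameter criteria.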

  17. Surface defect detection in tiling Industries using digital image processing methods: analysis and evaluation.

    PubMed

    Karimi, Mohammad H; Asemani, Davud

    2014-05-01

    Ceramic and tile industries should indispensably include a grading stage to quantify the quality of products. In practice, human inspection is often used for grading purposes, so an automatic grading system is essential to enhance the quality control and marketing of the products. Since there are generally six different types of defects, originating from various stages of tile manufacturing lines, with distinct textures and morphologies, many image processing techniques have been proposed for defect detection. In this paper, a survey is made of the pattern recognition and image processing algorithms that have been used to detect surface defects. Each method appears to be limited to detecting some subgroup of defects. The detection techniques may be divided into three main groups: statistical pattern recognition, feature vector extraction, and texture/image classification. Methods such as the wavelet transform, filtering, morphology, and the contourlet transform are more effective for pre-processing tasks. Others, including statistical methods, neural networks, and model-based algorithms, can be applied to extract the surface defects. Statistical methods are often appropriate for the identification of large defects such as spots, whereas techniques such as wavelet processing provide an acceptable response for the detection of small defects such as pinholes. A thorough survey is made in this paper of the existing algorithms in each subgroup. Also, the evaluation parameters are discussed, including supervised and unsupervised parameters. Using various performance parameters, different defect detection algorithms are compared and evaluated. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Design of a Borescope for Extravehicular Non-Destructive Applications

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic

    2003-01-01

    Anomalies such as corrosion, structural damage, misalignment, cracking, stress fractures, pitting, or wear can be detected and monitored with the aid of a borescope. A borescope requires a source of light for proper operation. Today's lighting technology market consists of incandescent lamps, fluorescent lamps, and other types of electric arc and electric discharge vapor lamps. Recent advances in LED technology have made LEDs viable for a number of applications, including vehicle stoplights, traffic lights, machine-vision inspection, illumination, and street signs. LEDs promise significant reductions in power consumption compared to other sources of light. This project focused on comparing images taken by the Olympus IPLEX using two different light sources. One of the sources is the 50-W internal metal halide lamp and the other is a 1-W LED placed at the tip of the insertion tube. Images acquired using these two light sources were quantitatively compared using their histograms, intensity profiles along a line segment, and edge detection. Also, images were qualitatively compared using image registration and transformation [1]. The gray-level histogram, edge detection, image profile, and image registration do not offer conclusive results. The LED light source, however, produces good images for visual inspection by an operator. Analysis using pattern recognition techniques such as Eigenfaces and Gaussian pyramids, as used in face recognition, may be more useful.
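
    The quantitative comparisons mentioned, gray-level histograms and edge detection, can be sketched as follows; the Sobel operator is used here as a representative edge detector, not necessarily the one used in the project:

```python
import numpy as np

def gray_histogram(img, bins=256):
    """Normalized gray-level histogram of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

img = np.zeros((5, 5), dtype=np.uint8)
img[:, 3:] = 255                        # sharp vertical edge
print(sobel_magnitude(img).max() > 0)   # → True
```

    Comparing the histograms and edge maps of the metal-halide and LED images gives the kind of quantitative side-by-side evaluation the abstract describes.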

  19. Automatic face recognition in HDR imaging

    NASA Astrophysics Data System (ADS)

    Pereira, Manuela; Moreno, Juan-Carlos; Proença, Hugo; Pinheiro, António M. G.

    2014-05-01

    The gaining popularity of the new High Dynamic Range (HDR) imaging systems is raising new privacy issues caused by the methods used for visualization. HDR images require tone mapping methods for appropriate visualization on conventional and inexpensive LDR displays. These visualization methods might produce completely different renderings, raising several privacy intrusion issues. In fact, some visualization methods allow perceptual recognition of the individuals, while others do not reveal any identity at all. Although perceptual recognition might be possible, a natural question that arises is how computer-based recognition will perform on tone-mapped images. In this paper, a study is presented in which automatic face recognition based on sparse representation is tested on images produced by common tone mapping operators applied to HDR images. Its ability to recognize face identity is described. Furthermore, typical LDR images are used for the face recognition training.

  20. Dilated contour extraction and component labeling algorithm for object vector representation

    NASA Astrophysics Data System (ADS)

    Skourikhine, Alexei N.

    2005-08-01

    Object boundary extraction from binary images is important for many applications, e.g., image vectorization, automatic interpretation of images containing segmentation results, printed and handwritten documents and drawings, maps, and AutoCAD drawings. Efficient and reliable contour extraction is also important for pattern recognition due to its impact on shape-based object characterization and recognition. The presented contour tracing and component labeling algorithm produces dilated (sub-pixel) contours associated with corresponding regions. The algorithm has the following features: (1) it always produces non-intersecting, non-degenerate contours, including the case of one-pixel wide objects; (2) it associates the outer and inner (i.e., around hole) contours with the corresponding regions during the process of contour tracing in a single pass over the image; (3) it maintains desired connectivity of object regions as specified by 8-neighbor or 4-neighbor connectivity of adjacent pixels; (4) it avoids degenerate regions in both background and foreground; (5) it allows an easy augmentation that will provide information about the containment relations among regions; (6) it has a time complexity that is dominantly linear in the number of contour points. This early component labeling (contour-region association) enables subsequent efficient object-based processing of the image information.

  1. Deep Filter Banks for Texture Recognition, Description, and Segmentation.

    PubMed

    Cimpoi, Mircea; Maji, Subhransu; Kokkinos, Iasonas; Vedaldi, Andrea

    Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture representations, including bag-of-visual-words and the Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefit in transferring features from one domain to another.

  2. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of cameras, a visible light camera and a thermal camera. The use of two different kinds of body images helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction, in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.
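
    The final recognition step, measuring the distance between the input and enrolled feature vectors, can be sketched as follows. The cosine distance and the threshold value are illustrative assumptions, not necessarily the paper's choices:

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity between two feature vectors."""
    return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(probe, gallery, threshold=0.3):
    """Return the index of the closest enrolled feature vector, or -1 if
    no distance falls below the acceptance threshold."""
    dists = [cosine_distance(probe, g) for g in gallery]
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else -1

gallery = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
print(match(np.array([0.9, 0.1, 0.0]), gallery))  # → 0
```

    With features from both the visible light and thermal branches, the two distances could be fused (e.g. averaged) before thresholding.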

  3. A portable array biosensor for food safety

    NASA Astrophysics Data System (ADS)

    Golden, Joel P.; Ngundi, Miriam M.; Shriver-Lake, Lisa C.; Taitt, Chris R.; Ligler, Frances S.

    2004-11-01

    An array biosensor developed for simultaneous analysis of multiple samples has been utilized to develop assays for toxins and pathogens in a variety of foods. The biochemical component of the multi-analyte biosensor consists of a patterned array of biological recognition elements immobilized on the surface of a planar waveguide. A fluorescence assay is performed on the patterned surface, yielding an array of fluorescent spots, the locations of which are used to identify what analyte is present. Signal transduction is accomplished by means of a diode laser for fluorescence excitation, optical filters and a CCD camera for image capture. A laptop computer controls the miniaturized fluidics system and image capture. Results for four mycotoxin competition assays in buffer and food samples are presented.

  4. Hyperspectral image analysis using artificial color

    NASA Astrophysics Data System (ADS)

    Fu, Jian; Caulfield, H. John; Wu, Dongsheng; Tadesse, Wubishet

    2010-03-01

    By definition, HSC (HyperSpectral Camera) images are much richer in spectral data than, say, a COTS (Commercial-Off-The-Shelf) color camera. But data are not information. If we do the task right, useful information can be derived from the data in HSC images. Nature faced essentially the identical problem. The incident light is so complex spectrally that measuring it with high resolution would provide far more data than animals can handle in real time. Nature's solution was to do irreversible POCS (Projections Onto Convex Sets) to achieve huge reductions in data with minimal reduction in information. Thus we can arrange for our manmade systems to do what nature did: project the HSC image onto two or more broad, overlapping curves. The task we have undertaken in the last few years is to develop this idea, which we call Artificial Color. What we report here is the use of the measured HSC image data projected onto two or three convex, overlapping, broad curves in analogy with the sensitivity curves of human cone cells. Testing two quite different HSC images in that manner produced the desired result: good discrimination or segmentation that can be done very simply and hence is likely to be doable in real time with specialized computers. Using POCS on the HSC data to reduce the processing complexity produced excellent discrimination in those two cases. For technical reasons discussed here, the figures of merit for the kind of pattern recognition we use are incommensurate with the figures of merit of conventional pattern recognition. We used some force fitting to make a comparison nevertheless, because it shows what is also obvious qualitatively. In our tasks our method works better.
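
    The projection idea can be sketched directly: each pixel's spectrum is reduced to a few broad responses by projecting onto overlapping curves. The Gaussian curve shapes and parameters below are illustrative assumptions, loosely analogous to cone sensitivities, not the authors' actual curves:

```python
import numpy as np

def broad_curves(n_bands, centers=(0.25, 0.5, 0.75), width=0.2):
    """Three overlapping Gaussian sensitivity curves over a normalized
    wavelength axis (illustrative shapes only)."""
    x = np.linspace(0.0, 1.0, n_bands)
    return np.stack([np.exp(-0.5 * ((x - c) / width) ** 2) for c in centers])

def project(hsc_cube, curves):
    """Project an (H, W, bands) hyperspectral cube onto the curves,
    reducing each pixel spectrum to a few broad responses."""
    return np.tensordot(hsc_cube, curves, axes=([2], [1]))

cube = np.random.default_rng(2).random((4, 4, 64))
print(project(cube, broad_curves(64)).shape)  # → (4, 4, 3)
```

    The projection is irreversible, as the abstract emphasizes: 64 spectral samples per pixel are collapsed to 3 numbers, trading data volume for tractable real-time processing.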

  5. Pattern Recognition Using Artificial Neural Network: A Review

    NASA Astrophysics Data System (ADS)

    Kim, Tai-Hoon

    Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, artificial neural network techniques have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system using ANN and to identify research topics and applications which are at the forefront of this exciting and challenging field.

  6. A Review on State-of-the-Art Face Recognition Approaches

    NASA Astrophysics Data System (ADS)

    Mahmood, Zahid; Muhammad, Nazeer; Bibi, Nargis; Ali, Tauseef

    Automatic Face Recognition (FR) presents a challenging task in the field of pattern recognition, and despite the huge amount of research in the past several decades, it still remains an open research problem. This is primarily due to variability in facial images, such as non-uniform illumination, low resolution, occlusion, and/or variation in pose. Due to its non-intrusive nature, FR is an attractive biometric modality and has gained a lot of attention in the biometric research community. Driven by the enormous number of potential application domains, many algorithms have been proposed for FR. This paper presents an overview of state-of-the-art FR algorithms, focusing on their performance on publicly available databases. We highlight the conditions of the image databases with regard to the recognition rate of each approach. This is useful as a quick research overview and for practitioners to choose an algorithm for their specific FR application. To provide a comprehensive survey, the paper divides the FR algorithms into three categories: (1) intensity-based, (2) video-based, and (3) 3D-based FR algorithms. In each category, the most commonly used algorithms and their performance on standard face databases are reported, and a brief critical discussion is carried out.

  7. Analysis of Variance in Statistical Image Processing

    NASA Astrophysics Data System (ADS)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
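    The ANOVA machinery described here rests on the F statistic: the ratio of between-group to within-group variance. As a rough illustration (not code from the book), a one-way F statistic over candidate pixel populations, where a large value flags a line or edge, might look like:

```python
import numpy as np

def f_statistic(groups):
    """One-way ANOVA F statistic over k groups of samples.
    In the feature-detection setting, each group is the pixel population
    of a candidate region; a large F suggests the regions differ."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    # Between-group and within-group sums of squares.
    ssb = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ssw = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

    Comparing the statistic against an F-distribution quantile then gives a significance test for the presence of the feature.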

  8. Feature-extracted joint transform correlation.

    PubMed

    Alam, M S

    1995-12-10

    A new technique for real-time optical character recognition that uses a joint transform correlator is proposed. This technique employs feature-extracted patterns for the reference image to detect a wide range of characters in one step. The proposed technique significantly enhances the processing speed when compared with the presently available joint transform correlator architectures and shows feasibility for multichannel joint transform correlation.

  9. Flight Mechanics/Estimation Theory Symposium. [with application to autonomous navigation and attitude/orbit determination

    NASA Technical Reports Server (NTRS)

    Fuchs, A. J. (Editor)

    1979-01-01

    Onboard and real time image processing to enhance geometric correction of the data is discussed with application to autonomous navigation and attitude and orbit determination. Specific topics covered include: (1) LANDSAT landmark data; (2) star sensing and pattern recognition; (3) filtering algorithms for Global Positioning System; and (4) determining orbital elements for geostationary satellites.

  10. Recognition Alters the Spatial Pattern of fMRI Activation in Early Retinotopic Cortex

    PubMed Central

    Vul, E.; Kanwisher, N.

    2010-01-01

    Early retinotopic cortex has traditionally been viewed as containing a veridical representation of the low-level properties of the image, not imbued by high-level interpretation and meaning. Yet several recent results indicate that neural representations in early retinotopic cortex reflect not just the sensory properties of the image, but also the perceived size and brightness of image regions. Here we used functional magnetic resonance imaging pattern analyses to ask whether the representation of an object in early retinotopic cortex changes when the object is recognized compared with when the same stimulus is presented but not recognized. Our data confirmed this hypothesis: the pattern of response in early retinotopic visual cortex to a two-tone “Mooney” image of an object was more similar to the response to the full grayscale photo version of the same image when observers knew what the two-tone image represented than when they did not. Further, in a second experiment, high-level interpretations actually overrode bottom-up stimulus information, such that the pattern of response in early retinotopic cortex to an identified two-tone image was more similar to the response to the photographic version of that stimulus than it was to the response to the identical two-tone image when it was not identified. Our findings are consistent with prior results indicating that perceived size and brightness affect representations in early retinotopic visual cortex and, further, show that even higher-level information—knowledge of object identity—also affects the representation of an object in early retinotopic cortex. PMID:20071627

  11. The Shepherd's Crook Sign: A New Neuroimaging Pareidolia in Joubert Syndrome.

    PubMed

    Manley, Andrew T; Maertens, Paul M

    2015-01-01

    By pareidolically recognizing specific patterns indicative of particular diseases, neuroimagers reinforce their mnemonic strategies and improve their neuroimaging diagnostic skills. Joubert Syndrome (JS) is an autosomal recessive disorder characterized clinically by mental retardation, episodes of abnormal deep and rapid breathing, abnormal eye movements, and ataxia. Many neuroimaging signs characteristic of JS have been reported. In a retrospective case study, two consanguineous neonates diagnosed with JS were evaluated with brain magnetic resonance imaging (MRI), computed tomography (CT), and neurosonography. Both cranial ultrasound and MRI of the brain showed the characteristic molar tooth sign. There was a shepherd's crook in the sagittal views of the posterior fossa, where the shaft of the crook is made by the brainstem and the pons. The arc of the crook is made by the abnormal superior cerebellar peduncle and cerebellar hemisphere. By ultrasound, the shepherd's crook sign was seen through the posterior fontanelle only. CT imaging also showed the shepherd's crook sign. Neuroimaging diagnosis of JS, which already involves the pareidolical recognition of specific patterns indicative of the disease, can be improved by recognition of the shepherd's crook sign on MRI, CT, and cranial ultrasound. Copyright © 2014 by the American Society of Neuroimaging.

  12. Practical protocols for fast histopathology by Fourier transform infrared spectroscopic imaging

    NASA Astrophysics Data System (ADS)

    Keith, Frances N.; Reddy, Rohith K.; Bhargava, Rohit

    2008-02-01

    Fourier transform infrared (FT-IR) spectroscopic imaging is an emerging technique that combines the molecular selectivity of spectroscopy with the spatial specificity of optical microscopy. We demonstrate a new concept in obtaining high fidelity data using commercial array detectors coupled to a microscope and Michelson interferometer. Next, we apply the developed technique to rapidly provide automated histopathologic information for breast cancer. Traditionally, disease diagnoses are based on optical examinations of stained tissue and involve a skilled recognition of morphological patterns of specific cell types (histopathology). Consequently, histopathologic determinations are a time consuming, subjective process with innate intra- and inter-operator variability. Utilizing endogenous molecular contrast inherent in vibrational spectra, specially designed tissue microarrays and pattern recognition of specific biochemical features, we report an integrated algorithm for automated classifications. The developed protocol is objective, statistically significant and, being compatible with current tissue processing procedures, holds potential for routine clinical diagnoses. We first demonstrate that the classification of tissue type (histology) can be accomplished in a manner that is robust and rigorous. Since data quality and classifier performance are linked, we quantify the relationship through our analysis model. Last, we demonstrate the application of the minimum noise fraction (MNF) transform to improve tissue segmentation.

  13. Automated classification of articular cartilage surfaces based on surface texture.

    PubMed

    Stachowiak, G P; Stachowiak, G W; Podsiadlo, P

    2006-11-01

    In this study the automated classification system previously developed by the authors was used to classify articular cartilage surfaces with different degrees of wear. This automated system classifies surfaces based on their texture. Plug samples of sheep cartilage (pins) were run on stainless steel discs under various conditions using a pin-on-disc tribometer. Testing conditions were specifically designed to produce different severities of cartilage damage due to wear. Environmental scanning electron microscope (ESEM) images of cartilage surfaces, which formed a database for pattern recognition analysis, were acquired. The ESEM images of cartilage were divided into five groups (classes), each class representing different wear conditions or wear severity. Each class was first examined and assessed visually. Next, the automated classification system (pattern recognition) was applied to all classes. The results of the automated surface texture classification were compared to those based on visual assessment of surface morphology. It was shown that the texture-based automated classification system was an efficient and accurate method of distinguishing between various cartilage surfaces generated under different wear conditions. It appears that the texture-based classification method has potential to become a useful tool in medical diagnostics.

  14. Auditory Pattern Recognition and Brief Tone Discrimination of Children with Reading Disorders

    ERIC Educational Resources Information Center

    Walker, Marianna M.; Givens, Gregg D.; Cranford, Jerry L.; Holbert, Don; Walker, Letitia

    2006-01-01

    Auditory pattern recognition skills in children with reading disorders were investigated using perceptual tests involving discrimination of frequency and duration tonal patterns. A behavioral test battery involving recognition of the pattern of presentation of tone triads was used in which individual components differed in either frequency or…

  15. Automatic detection and recognition of traffic signs in stereo images based on features and probabilistic neural networks

    NASA Astrophysics Data System (ADS)

    Sheng, Yehua; Zhang, Ka; Ye, Chun; Liang, Cheng; Li, Jian

    2008-04-01

    Considering the problem of automatic traffic sign detection and recognition in stereo images captured under motion conditions, a new algorithm for traffic sign detection and recognition based on features and probabilistic neural networks (PNN) is proposed in this paper. Firstly, global statistical color features of the left image are computed based on statistics theory. Then, for red, yellow, and blue traffic signs, the left image is segmented into three binary images by a self-adaptive color segmentation method. Secondly, gray-value projection and shape analysis are used to confirm traffic sign regions in the left image, and stereo image matching is used to locate the corresponding traffic signs in the right image. Thirdly, self-adaptive image segmentation is used to extract the binary inner core shapes of the detected traffic signs, and one-dimensional feature vectors of the inner core shapes are computed by central projection transformation. Fourthly, these vectors are input to the trained probabilistic neural networks for traffic sign recognition. Lastly, recognition results in the left image are compared with recognition results in the right image; if the results in the stereo pair are identical, they are confirmed as final recognition results. The new algorithm is applied to 220 real images of natural scenes taken by the vehicle-borne mobile photogrammetry system in Nanjing at different times. Experimental results show a detection and recognition rate of over 92%. The algorithm is thus not only simple, but also reliable and fast for real traffic sign detection and recognition. Furthermore, it can obtain geometrical information of traffic signs while recognizing their types.

  16. Fast Legendre moment computation for template matching

    NASA Astrophysics Data System (ADS)

    Li, Bing C.

    2017-05-01

    Normalized cross correlation (NCC) based template matching is insensitive to intensity changes and has many applications in image processing, object detection, video tracking and pattern recognition. However, normalized cross correlation is computationally expensive since it involves both correlation computation and normalization. In this paper, we propose a Legendre moment approach for fast normalized cross correlation implementation and show that the computational cost of the proposed approach is independent of the template mask size, making it significantly faster than traditional mask-size-dependent approaches, especially for large template masks. Legendre polynomials have been widely used in solving the Laplace equation in electrodynamics in spherical coordinate systems, and in solving the Schrödinger equation in quantum mechanics. In this paper, we extend Legendre polynomials from physics to the computer vision and pattern recognition fields, and demonstrate that they can help reduce the computational cost of NCC-based template matching significantly.
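    For context, the brute-force NCC baseline (the mask-size-dependent computation this paper accelerates) can be sketched as follows. This is an illustrative implementation, not the authors' Legendre-moment method:

```python
import numpy as np

def ncc(image, template):
    """Brute-force normalized cross correlation: per output position,
    mean-subtract the window, normalize, and correlate with the
    mean-subtracted template. Cost grows with the template mask size."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    H, W = image.shape
    out = np.full((H - th + 1, W - tw + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum()) * tnorm
            if denom > 0:
                out[i, j] = (wc * t).sum() / denom
    return out
```

    The response peaks at 1.0 exactly where the window matches the template, which is why NCC is robust to global intensity changes.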

  17. Optimized face recognition algorithm using radial basis function neural networks and its practical applications.

    PubMed

    Yoo, Sung-Hoon; Oh, Sung-Kwun; Pedrycz, Witold

    2015-09-01

    In this study, we propose a hybrid method of face recognition by using face region information extracted from the detected face region. In the preprocessing part, we develop a hybrid approach based on the Active Shape Model (ASM) and the Principal Component Analysis (PCA) algorithm. At this step, we use a CCD (Charge Coupled Device) camera to acquire a facial image by using AdaBoost and then Histogram Equalization (HE) is employed to improve the quality of the image. ASM extracts the face contour and image shape to produce a personal profile. Then we use a PCA method to reduce dimensionality of face images. In the recognition part, we consider the improved Radial Basis Function Neural Networks (RBF NNs) to identify a unique pattern associated with each person. The proposed RBF NN architecture consists of three functional modules realizing the condition phase, the conclusion phase, and the inference phase completed with the help of fuzzy rules coming in the standard 'if-then' format. In the formation of the condition part of the fuzzy rules, the input space is partitioned with the use of Fuzzy C-Means (FCM) clustering. In the conclusion part of the fuzzy rules, the connections (weights) of the RBF NNs are represented by four kinds of polynomials such as constant, linear, quadratic, and reduced quadratic. The values of the coefficients are determined by running a gradient descent method. The output of the RBF NNs model is obtained by running a fuzzy inference method. The essential design parameters of the network (including learning rate, momentum coefficient and fuzzification coefficient used by the FCM) are optimized by means of Differential Evolution (DE). The proposed P-RBF NNs (Polynomial based RBF NNs) are applied to facial recognition and its performance is quantified from the viewpoint of the output performance and recognition rate. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Understanding eye movements in face recognition using hidden Markov models.

    PubMed

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than the locations of the fixations alone. © 2014 ARVO.
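    The core HMM computation behind such scan-path modeling is the forward algorithm, which scores a fixation-ROI sequence under given parameters. The sketch below is a generic scaled-forward implementation with toy parameters assumed for illustration, not the authors' clustering pipeline:

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence (e.g. fixation ROI indices) under an HMM with initial
    probabilities pi, transition matrix A, and emission matrix B."""
    alpha = pi * B[:, obs[0]]
    ll = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()          # scaling factor to avoid underflow
        ll += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    return ll + np.log(alpha.sum())
```

    Clustering participants then amounts to grouping the fitted (pi, A, B) models, e.g. by the likelihood each model assigns to the other participants' sequences.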

  19. Machine parts recognition using a trinary associative memory

    NASA Technical Reports Server (NTRS)

    Awwal, Abdul Ahad S.; Karim, Mohammad A.; Liu, Hua-Kuang

    1989-01-01

    The convergence mechanism of vectors in Hopfield's neural network in relation to recognition of partially known patterns is studied in terms of both inner products and Hamming distance. It has been shown that Hamming distance should not always be used in determining the convergence of vectors. Instead, inner product weighting coefficients play a more dominant role in certain data representations for determining the convergence mechanism. A trinary neuron representation for associative memory is found to be more effective for associative recall. Applications of the trinary associative memory to reconstruct machine part images that are partially missing are demonstrated by means of computer simulation as examples of the usefulness of this approach.

  20. Machine Parts Recognition Using A Trinary Associative Memory

    NASA Astrophysics Data System (ADS)

    Awwal, Abdul A. S.; Karim, Mohammad A.; Liu, Hua-Kuang

    1989-05-01

    The convergence mechanism of vectors in Hopfield's neural network in relation to recognition of partially known patterns is studied in terms of both inner products and Hamming distance. It has been shown that Hamming distance should not always be used in determining the convergence of vectors. Instead, inner product weighting coefficients play a more dominant role in certain data representations for determining the convergence mechanism. A trinary neuron representation for associative memory is found to be more effective for associative recall. Applications of the trinary associative memory to reconstruct machine part images that are partially missing are demonstrated by means of computer simulation as examples of the usefulness of this approach.
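    The Hopfield-style associative recall discussed in these two records can be sketched in a few lines. Note this toy uses plain bipolar (±1) neurons with Hebbian outer-product weights, not the papers' trinary (-1, 0, +1) representation:

```python
import numpy as np

def hopfield_recall(patterns, probe, steps=5):
    """Minimal Hopfield associative memory: Hebbian outer-product weights
    over stored ±1 patterns, then synchronous sign updates from a
    corrupted probe until it (ideally) converges to a stored pattern."""
    X = np.array(patterns)               # rows are stored ±1 patterns
    W = X.T @ X / X.shape[1]
    np.fill_diagonal(W, 0)               # no self-connections
    s = np.array(probe, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s
```

    The update is driven by inner products of the state with the stored patterns, which is exactly the quantity the papers argue matters more than Hamming distance for predicting convergence.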

  1. A survey of visual preprocessing and shape representation techniques

    NASA Technical Reports Server (NTRS)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  2. Optical implementation of a feature-based neural network with application to automatic target recognition

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1993-01-01

    An optical neural network based on the neocognitron paradigm is introduced. A novel aspect of the architecture design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by feeding back the output of the feature correlator iteratively to the input spatial light modulator and by updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intraclass fault tolerance and interclass discrimination is achieved. A detailed system description is provided. An experimental demonstration of a two-layer neural network for space-object discrimination is also presented.

  3. Automatic target recognition using a feature-based optical neural network

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

    An optical neural network based upon the Neocognitron paradigm (K. Fukushima et al. 1983) is introduced. A novel aspect of the architectural design is shift-invariant multichannel Fourier optical correlation within each processing layer. Multilayer processing is achieved by iteratively feeding back the output of the feature correlator to the input spatial light modulator and updating the Fourier filters. By training the neural net with characteristic features extracted from the target images, successful pattern recognition with intra-class fault tolerance and inter-class discrimination is achieved. A detailed system description is provided. Experimental demonstration of a two-layer neural network for space objects discrimination is also presented.

  4. Smart sensor for terminal homing

    NASA Astrophysics Data System (ADS)

    Panda, D.; Aggarwal, R.; Hummel, R.

    1980-01-01

    The practical scene matching problem is considered to present certain complications that extend beyond classical image processing capabilities. Certain aspects of the scene matching problem which must be addressed by a smart sensor for terminal homing are discussed. First, a philosophy for treating the matching problem for the terminal homing scenario is outlined. Then certain aspects of the feature extraction process and symbolic pattern matching are considered. It is thought that in the future, general ideas from artificial intelligence will become more useful for the terminal homing requirements of fast scene recognition and pattern matching.

  5. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    PubMed

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
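    For reference, exhaustive single-threshold Otsu segmentation looks like the following. The paper's contribution is precisely to avoid this exhaustive search, which explodes combinatorially for multiple thresholds, by optimizing the same objective with a flower pollination metaheuristic:

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive single-threshold Otsu: pick the threshold that maximizes
    the between-class variance of the two resulting pixel classes."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

    With k thresholds the exhaustive search over all threshold combinations becomes O(256^k), which is why metaheuristics are attractive for the multilevel case.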

  6. Predicting consumer behavior: using novel mind-reading approaches.

    PubMed

    Calvert, Gemma A; Brammer, Michael J

    2012-01-01

    Advances in machine learning as applied to functional magnetic resonance imaging (fMRI) data offer the possibility of pretesting and classifying marketing communications using unbiased pattern recognition algorithms. By using these algorithms to analyze brain responses to brands, products, or existing marketing communications that either failed or succeeded in the marketplace and identifying the patterns of brain activity that characterize success or failure, future planned campaigns or new products can now be pretested to determine how well the resulting brain responses match the desired (successful) pattern of brain activity without the need for verbal feedback. This major advance in signal processing is poised to revolutionize the application of these brain-imaging techniques in the marketing sector by offering greater accuracy of prediction in terms of consumer acceptance of new brands, products, and campaigns at a speed that makes them accessible as routine pretesting tools that will clearly demonstrate return on investment.

  7. Person-independent facial expression analysis by fusing multiscale cell features

    NASA Astrophysics Data System (ADS)

    Zhou, Lubing; Wang, Han

    2013-03-01

    Automatic facial expression recognition is an interesting and challenging task. To achieve satisfactory accuracy, deriving a robust facial representation is especially important. A novel appearance-based feature, multiscale cell local intensity increasing patterns (MC-LIIP), is presented to represent facial images and conduct person-independent facial expression analysis. The LIIP uses a decimal number to encode the texture or intensity distribution around each pixel via pixel-to-pixel intensity comparison. To boost noise resistance, MC-LIIP carries out the comparison on the average values of scalable cells instead of individual pixels. The facial descriptor fuses region-based histograms of MC-LIIP features from various scales, so as to encode not only the textural microstructures but also the macrostructures of facial images. Finally, a support vector machine classifier is applied for expression recognition. Experimental results on the CK+ and Karolinska directed emotional faces databases show the superiority of the proposed method.
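    The pixel-to-pixel comparison encoding is closely related to the classic local binary pattern (LBP). As an illustrative stand-in for LIIP, whose exact encoding is not specified in this abstract, an 8-neighbour LBP can be written as:

```python
import numpy as np

def lbp8(img):
    """Classic 8-neighbour local binary pattern: threshold each neighbour
    against the centre pixel and pack the resulting bits into one decimal
    code per pixel (same spirit as the LIIP encoding described above)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes
```

    Region-wise histograms of such codes, concatenated across scales, yield the kind of descriptor the paper feeds to its SVM classifier.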

  8. Hough transform for human action recognition

    NASA Astrophysics Data System (ADS)

    Siemon, Mia S. N.

    2016-09-01

    Nowadays, the demand for computer analysis, especially regarding team sports, continues to grow drastically. More and more decisions are made by electronic devices to make life 'easier' in certain contexts. Application areas already exist in sports in which critical situations are handled by means of digital software. This paper aims to evaluate and introduce the necessary foundation for developing a concept similar to that of 'hawk-eye', a decision-making program that evaluates the impact of a ball with respect to a target line, and to apply it to the sport of volleyball. The pattern recognition process is in this case performed by means of the mathematical model of the Hough transform, which is capable of identifying relevant lines and circles in the image in order to use them later in the evaluation of the image and the decision-making process.
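    A minimal line-detecting Hough accumulator can be sketched as follows (a generic textbook version, not the paper's volleyball system): every edge pixel votes for all lines through it in the (rho, theta) parameterization rho = x·cos(theta) + y·sin(theta), and peaks in the accumulator correspond to lines in the image.

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Hough transform for lines over a binary edge image. Returns the
    (rho, theta) vote accumulator and the rho offset (so row index
    rho + diag corresponds to signed distance rho)."""
    H, W = edges.shape
    diag = int(np.ceil(np.hypot(H, W)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # One vote per theta; rho is rounded to the nearest integer bin.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag
```

    In the hawk-eye-style setting, the strongest accumulator peaks would give the target line, against which the detected ball position is judged.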

  9. Fuzzy automata and pattern matching

    NASA Technical Reports Server (NTRS)

    Setzer, C. B.; Warsi, N. A.

    1986-01-01

    A wide-ranging search for articles and books concerned with fuzzy automata and syntactic pattern recognition is presented. A number of survey articles on image processing and feature detection were included. Hough's algorithm is presented to illustrate the way in which knowledge about an image can be used to interpret the details of the image. It was found that in hand-generated pictures, the algorithm worked well on following the straight lines, but had great difficulty turning corners. An algorithm was developed which produces a minimal finite automaton recognizing a given finite set of strings. One difficulty of the construction is that, in some cases, this minimal automaton is not unique for a given set of strings and a given maximum length. This algorithm compares favorably with other inference algorithms. More importantly, the algorithm produces an automaton with a rigorously described relationship to the original set of strings that does not depend on the algorithm itself.

  10. Minimum Bayes risk image correlation

    NASA Technical Reports Server (NTRS)

    Minter, T. C., Jr.

    1980-01-01

    In this paper, the problem of designing a matched filter for image correlation will be treated as a statistical pattern recognition problem. It is shown that, by minimizing a suitable criterion, a matched filter can be estimated which approximates the optimum Bayes discriminant function in a least-squares sense. It is well known that the use of the Bayes discriminant function in target classification minimizes the Bayes risk, which in turn directly minimizes the probability of a false fix. A fast Fourier implementation of the minimum Bayes risk correlation procedure is described.
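    The "fast Fourier implementation" mentioned at the end relies on the convolution theorem: spatial correlation becomes pointwise multiplication with a conjugate in the frequency domain. A generic sketch of FFT-based image correlation (not the Bayes-risk filter design itself):

```python
import numpy as np

def fft_correlate(image, template):
    """Circular cross-correlation of an image with a (smaller) template
    via the FFT: corr = ifft( fft(image) * conj(fft(template)) ).
    corr[i, j] peaks where the template best aligns at offset (i, j)."""
    H, W = image.shape
    F = np.fft.fft2(image)
    T = np.fft.fft2(template, s=(H, W))   # zero-pad template to image size
    return np.real(np.fft.ifft2(F * np.conj(T)))
```

    A matched filter estimated from training imagery would simply replace the raw template in the frequency-domain product.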

  11. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use the images of human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the art method, called convolutional neural network (CNN) among various available methods, for image features extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783

  12. Social Cognition in Williams Syndrome: Face Tuning

    PubMed Central

    Pavlova, Marina A.; Heiz, Julie; Sokolov, Alexander N.; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and to varying degrees resembling a face (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognition of the Face-n-Food images as a face: they did not report seeing a face on the images, which typically developing controls effortlessly recognized as a face, and gave overall fewer face responses. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of a general pattern of social cognition in Williams syndrome and brain mechanisms underpinning face processing. PMID:27531986

  13. Social Cognition in Williams Syndrome: Face Tuning.

    PubMed

    Pavlova, Marina A; Heiz, Julie; Sokolov, Alexander N; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and face fascination. Here individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and to varying degrees resembling a face (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already implicates face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to most resembling a face. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognition of the Face-n-Food images as a face: they did not report seeing a face on the images, which typically developing controls effortlessly recognized as a face, and gave overall fewer face responses. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of a general pattern of social cognition in Williams syndrome and brain mechanisms underpinning face processing.

  14. Automatic identification of species with neural networks.

    PubMed

    Hernández-Serna, Andrés; Jiménez-Segura, Luz Fernanda

    2014-01-01

    A new automatic identification system using photographic images has been designed to recognize fish, plant, and butterfly species from Europe and South America. The automatic classification system integrates multiple image processing tools to extract the geometry, morphology, and texture of the images. Artificial neural networks (ANNs) were used as the pattern recognition method. We tested a data set that included 740 species and 11,198 individuals. Our results show that the system performed with high accuracy, reaching 91.65% true-positive identifications for fish, 92.87% for plants, and 93.25% for butterflies. Our results highlight how neural networks can complement traditional approaches to species identification.
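
    The pipeline described above (extracted feature vectors fed to an ANN classifier) can be sketched with a minimal one-hidden-layer network in NumPy. The four-dimensional "geometry/morphology/texture" features and the three species classes below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the extracted geometry/morphology/texture features:
# three species as Gaussian clusters in a 4-feature space (hypothetical data).
centers = np.array([[0.0, 0.0, 1.0, 1.0],
                    [2.0, 2.0, 0.0, 1.0],
                    [4.0, 0.0, 2.0, 0.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 4)) for c in centers])
y = np.repeat(np.arange(3), 50)
onehot = np.eye(3)[y]

# One-hidden-layer network trained by gradient descent on softmax cross-entropy.
W1 = 0.1 * rng.standard_normal((4, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 3)); b2 = np.zeros(3)

def forward(X):
    h = np.tanh(X @ W1 + b1)            # hidden-layer activations
    z = h @ W2 + b2
    p = np.exp(z - z.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)   # softmax class probabilities

lr = 0.5
for _ in range(300):
    h, p = forward(X)
    g = (p - onehot) / len(X)           # gradient of softmax cross-entropy
    gh = (g @ W2.T) * (1 - h ** 2)      # backprop through tanh
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(axis=0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

accuracy = float((forward(X)[1].argmax(axis=1) == y).mean())
```

    On well-separated clusters such as these, the training accuracy approaches 100%; the paper's reported rates come from far richer real-world features and classes.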

  15. Fast linear feature detection using multiple directional non-maximum suppression.

    PubMed

    Sun, C; Vallotton, P

    2009-05-01

    The capacity to detect linear features is central to image analysis, computer vision and pattern recognition and has practical applications in areas such as neurite outgrowth detection, retinal vessel extraction, skin hair removal, plant root analysis and road detection. Linear feature detection often represents the starting point for image segmentation and image interpretation. In this paper, we present a new algorithm for linear feature detection using multiple directional non-maximum suppression with symmetry checking and gap linking. Given its low computational complexity, the algorithm is very fast. We show in several examples that it performs very well in terms of both sensitivity and continuity of detected linear features.
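
    As a rough illustration of the core idea (not the authors' exact algorithm, which adds symmetry checking and gap linking), directional non-maximum suppression keeps a pixel only if its response is a strict local maximum along at least one direction. A minimal NumPy sketch over four directions:

```python
import numpy as np

def directional_nms(img, threshold=0.0):
    """Keep pixels that are strict local maxima along at least one of four
    directions (a simplified sketch of multi-directional non-maximum
    suppression)."""
    p = np.pad(img, 1, mode="constant", constant_values=-np.inf)
    c = p[1:-1, 1:-1]
    H, W = img.shape
    offsets = [((0, 1), (2, 1)),   # above/below  -> horizontal ridge
               ((1, 0), (1, 2)),   # left/right   -> vertical ridge
               ((0, 0), (2, 2)),   # diagonal neighbours
               ((0, 2), (2, 0))]   # anti-diagonal neighbours
    keep = np.zeros_like(img, dtype=bool)
    for (r1, c1), (r2, c2) in offsets:
        n1 = p[r1:r1 + H, c1:c1 + W]
        n2 = p[r2:r2 + H, c2:c2 + W]
        keep |= (c > n1) & (c > n2)
    return keep & (img > threshold)

# A horizontal ridge of height 1 on a zero background.
img = np.zeros((5, 7))
img[2, :] = 1.0
mask = directional_nms(img, threshold=0.5)
```

    The ridge row survives intact because each of its pixels dominates its vertical neighbours, while the background is suppressed.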

  16. Robust Feature Matching in Terrestrial Image Sequences

    NASA Astrophysics Data System (ADS)

    Abbas, A.; Ghuffar, S.

    2018-04-01

    Over the last decade, feature detection, description, and matching techniques have been widely exploited in various photogrammetric and computer vision applications, including 3D reconstruction of scenes, image stitching for panorama creation, image classification, and object recognition. However, terrestrial imagery of urban scenes presents various issues, including duplicate and identical structures (e.g. repeated windows and doors) that cause problems in the feature matching phase and ultimately lead to failure of results, especially in camera pose and scene structure estimation. In this paper, we address the issue of ambiguous feature matching in urban environments due to repeating patterns.

  17. [Research progress of multi-model medical image fusion and recognition].

    PubMed

    Zhou, Tao; Lu, Huiling; Chen, Zhiqiang; Ma, Jingxian

    2013-10-01

    Medical image fusion and recognition has a wide range of applications, such as focal location, cancer staging and treatment effect assessment. This paper analyzes and summarizes multi-model medical image fusion and recognition. Firstly, the problem of multi-model medical image fusion and recognition is introduced, along with its advantages and key steps. Secondly, three fusion strategies are reviewed from the algorithmic point of view, and four fusion recognition structures are discussed. Thirdly, difficulties, challenges and possible future research directions are discussed.

  18. Principal curve detection in complicated graph images

    NASA Astrophysics Data System (ADS)

    Liu, Yuncai; Huang, Thomas S.

    2001-09-01

    Finding principal curves in an image is an important low-level processing step in computer vision and pattern recognition. Principal curves are those curves in an image that represent boundaries or contours of objects of interest. In general, a principal curve should be smooth, satisfy certain length constraints, and allow either smooth or sharp turning. In this paper, we present a method that can efficiently detect principal curves in complicated map images. For a given feature image, obtained from edge detection of an intensity image or a thinning operation on a pictorial map image, the feature image is first converted to a graph representation. In the graph image domain, the operation of principal curve detection is performed to identify useful image features. Shortest-path and directional-deviation schemes are used in our algorithm of principal curve detection, which has proven very efficient on real graph images.
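
    The shortest-path scheme mentioned above can be illustrated with a plain Dijkstra search over an 8-connected graph of feature pixels. This is a generic sketch, not the paper's full algorithm (which also penalizes directional deviation):

```python
import heapq

def shortest_path(pixels, start, goal):
    """Dijkstra over an 8-connected graph of feature pixels.
    `pixels` is a set of (row, col) tuples; diagonal steps cost sqrt(2)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nxt = (r + dr, c + dc)
                if nxt not in pixels:
                    continue
                nd = d + (2 ** 0.5 if dr and dc else 1.0)
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    prev[nxt] = node
                    heapq.heappush(heap, (nd, nxt))
    # Walk the predecessor chain back from the goal.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# An L-shaped set of feature pixels: top row plus right column.
pixels = {(0, c) for c in range(5)} | {(r, 4) for r in range(4)}
path = shortest_path(pixels, (0, 0), (3, 4))
```

    The search prefers the diagonal shortcut at the corner, tracing the curve in 7 pixels rather than following the full L.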

  19. Computer interpretation of thallium SPECT studies based on neural network analysis

    NASA Astrophysics Data System (ADS)

    Wang, David C.; Karvelis, K. C.

    1991-06-01

    A class of artificial intelligence (AI) programs known as neural networks is well suited to pattern recognition. A neural network is trained rather than programmed to recognize patterns. This differs from "expert system" AI programs in that it does not follow an extensive set of rules determined by the programmer, but rather bases its decision on a gestalt interpretation of the image. The "bullseye" images from cardiac stress thallium tests performed on 50 male patients, as well as several simulated images, were used to train the network. The network was able to accurately classify all patients in the training set. The network was then tested against 50 unknown patients and was able to correctly categorize 77% of the areas of ischemia and 92% of the areas of infarction. While not yet matching the ability of a trained physician, the neural network shows great promise in this area and has potential application in other areas of medical imaging.

  20. Bladder Cancer Treatment Response Assessment in CT using Radiomics with Deep-Learning.

    PubMed

    Cha, Kenny H; Hadjiiski, Lubomir; Chan, Heang-Ping; Weizer, Alon Z; Alva, Ajjai; Cohan, Richard H; Caoili, Elaine M; Paramagul, Chintana; Samala, Ravi K

    2017-08-18

    Cross-sectional X-ray imaging has become the standard for staging most solid organ malignancies. However, for some malignancies such as urinary bladder cancer, the ability to accurately assess local extent of the disease and understand response to systemic chemotherapy is limited with current imaging approaches. In this study, we explored the feasibility that radiomics-based predictive models using pre- and post-treatment computed tomography (CT) images might be able to distinguish between bladder cancers with and without complete chemotherapy responses. We assessed three unique radiomics-based predictive models, each of which employed different fundamental design principles ranging from a pattern recognition method via deep-learning convolution neural network (DL-CNN), to a more deterministic radiomics feature-based approach and then a bridging method between the two, utilizing a system which extracts radiomics features from the image patterns. Our study indicates that the computerized assessment using radiomics information from the pre- and post-treatment CT of bladder cancer patients has the potential to assist in assessment of treatment response.

  1. Pseudo-orthogonalization of memory patterns for associative memory.

    PubMed

    Oku, Makito; Makino, Takaki; Aihara, Kazuyuki

    2013-11-01

    A new method for improving the storage capacity of associative memory models on a neural network is proposed. The storage capacity of the network increases in proportion to the network size in the case of random patterns, but, in general, the capacity suffers from correlation among memory patterns. Numerous solutions to this problem have been proposed so far, but their high computational cost limits their scalability. In this paper, we propose a novel and simple solution that is locally computable without any iteration. Our method involves XNOR masking of the original memory patterns with random patterns, and the masked patterns and masks are concatenated. The resulting decorrelated patterns allow higher storage capacity at the cost of the pattern length. Furthermore, the increase in the pattern length can be reduced through blockwise masking, which results in a small amount of capacity loss. Movie replay and image recognition are presented as examples to demonstrate the scalability of the proposed method.
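
    In +/-1 coding, the XNOR of two bits is simply their product, so the masking step described above can be sketched in a few lines of NumPy. The pattern length and flip rate below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def xnor_mask(patterns):
    """Decorrelate +/-1 memory patterns by XNOR-masking each with an
    independent random pattern and concatenating the masked pattern with
    its mask (a sketch of the construction; XNOR in +/-1 coding is a
    component-wise product). Doubles the pattern length."""
    masks = rng.choice([-1, 1], size=patterns.shape)
    return np.hstack([patterns * masks, masks])

def mean_abs_overlap(P):
    """Mean absolute normalized overlap between distinct stored patterns."""
    C = P @ P.T / P.shape[1]
    off = C[~np.eye(len(P), dtype=bool)]
    return float(np.abs(off).mean())

# Strongly correlated patterns: copies of one base pattern with ~10% flips.
base = rng.choice([-1, 1], size=200)
patterns = np.array([np.where(rng.random(200) < 0.1, -base, base)
                     for _ in range(10)])

before = mean_abs_overlap(patterns)             # large: patterns correlated
after = mean_abs_overlap(xnor_mask(patterns))   # near zero after masking
```

    The masked patterns' pairwise overlaps collapse toward the random-pattern regime, which is what restores the storage capacity, at the cost of the doubled pattern length (the paper's blockwise variant reduces that cost).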

  2. Chinese Herbal Medicine Image Recognition and Retrieval by Convolutional Neural Network

    PubMed Central

    Sun, Xin; Qian, Huinan

    2016-01-01

    Chinese herbal medicine image recognition and retrieval have great potential for practical applications. Several previous studies have focused on recognition with hand-crafted image features, but they share two limitations. Firstly, most of these hand-crafted features are low-level image representations, which are easily affected by noise and background. Secondly, the medicine images in previous studies are very clean, with no backgrounds, which makes those methods difficult to use in practical applications. Therefore, designing high-level image representations for recognition and retrieval of real-world medicine images remains a great challenge. Inspired by the recent progress of deep learning in computer vision, we realize that deep learning methods may provide robust medicine image representations. In this paper, we propose to use a Convolutional Neural Network (CNN) for Chinese herbal medicine image recognition and retrieval. For the recognition problem, we use the softmax loss to optimize the recognition network; then, for the retrieval problem, we fine-tune the recognition network by adding a triplet loss to search for the most similar medicine images. To evaluate our method, we construct a public database of herbal medicine images with cluttered backgrounds, containing in total 5523 images in 95 popular Chinese medicine categories. Experimental results show that our method achieves an average recognition precision of 71% and an average retrieval precision of 53% over all 95 medicine categories, which is quite promising given that the real-world images contain multiple occluded herb pieces and cluttered backgrounds. Moreover, our proposed method achieves state-of-the-art performance, improving on previous studies by a large margin. PMID:27258404
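
    The triplet loss used for the retrieval fine-tuning stage has a standard form, max(0, d(a,p) - d(a,n) + margin). A minimal NumPy sketch on plain vectors (in the paper these would be CNN embeddings):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the embedding of a same-category image
    (positive) toward the anchor and push a different-category image
    (negative) at least `margin` further away, in squared Euclidean
    distance."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor embedding
p = np.array([0.9, 0.1])   # close to the anchor -> triplet already satisfied
n = np.array([-1.0, 0.0])  # far from the anchor
loss = triplet_loss(a, p, n)
```

    When the positive is closer than the negative by more than the margin the loss is zero; swapping the roles of `p` and `n` produces a positive loss, which is the gradient signal that reshapes the embedding space during fine-tuning.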

  3. Architecture of the parallel hierarchical network for fast image recognition

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule

    2016-09-01

    Multistage integration of visual information in the brain allows humans to respond quickly to the most significant stimuli while maintaining the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure at different hierarchical levels in the image. At each processing stage a single output result is computed, allowing a quick response from the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The forecasting method works as follows: in the results synchronization block, network-processed data arrive at a database, from which a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.

  4. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  5. Binary zone-plate array for a parallel joint transform correlator applied to face recognition.

    PubMed

    Kodate, K; Hashimoto, A; Thapliya, R

    1999-05-10

    Taking advantage of small aberrations, high efficiency, and compactness, we developed a new, to our knowledge, design procedure for a binary zone-plate array (BZPA) and applied it to a parallel joint transform correlator for the recognition of the human face. Pairs of reference and unknown images of faces are displayed on a liquid-crystal spatial light modulator (SLM), Fourier transformed by the BZPA, intensity recorded on an optically addressable SLM, and inversely Fourier transformed to obtain correlation signals. Consideration of the bandwidth allows the relations among the channel number, the numerical aperture of the zone plates, and the pattern size to be determined. Experimentally a five-channel parallel correlator was implemented and tested successfully with a 100-person database. The design and the fabrication of a 20-channel BZPA for phonetic character recognition are also included.

  6. Famous faces and voices: Differential profiles in early right and left semantic dementia and in Alzheimer's disease.

    PubMed

    Luzzi, Simona; Baldinelli, Sara; Ranaldi, Valentina; Fabi, Katia; Cafazzo, Viviana; Fringuelli, Fabio; Silvestrini, Mauro; Provinciali, Leandro; Reverberi, Carlo; Gainotti, Guido

    2017-01-08

    Famous face and voice recognition is reported to be impaired both in semantic dementia (SD) and in Alzheimer's Disease (AD), although more severely in the former. In AD a coexistence of perceptual impairment in face and voice processing has also been reported and this could contribute to the altered performance in complex semantic tasks. On the other hand, in SD both face and voice recognition disorders could be related to the prevalence of atrophy in the right temporal lobe (RTL). The aim of the present study was twofold: (1) to investigate famous face and voice recognition in SD and AD to verify whether the two diseases show a differential pattern of impairment, resulting from disruption of different cognitive mechanisms; (2) to check whether face and voice recognition disorders prevail in patients with atrophy mainly affecting the RTL. To avoid the potential influence of primary perceptual problems in face and voice recognition, a pool of patients suffering from early SD and AD were administered a detailed set of tests exploring face and voice perception. Thirteen SD patients (8 with prevalence of right and 5 with prevalence of left temporal atrophy) and 25 AD patients, who did not show visual or auditory perceptual impairment, were finally selected and were administered an experimental battery exploring famous face and voice recognition and naming. Twelve SD patients underwent cerebral PET imaging and were classified as right or left SD according to the onset modality and to the prevalent decrease in FDG uptake in the right or left temporal lobe, respectively. Correlation of PET imaging and famous face and voice recognition was performed. Results showed a differential performance profile in the two diseases, because AD patients were significantly impaired in the naming tests, but showed preserved recognition, whereas SD patients were profoundly impaired both in naming and in recognition of famous faces and voices. 
Furthermore, face and voice recognition disorders prevailed in SD patients with RTL atrophy, who also showed a conceptual impairment on the Pyramids and Palm Trees test that was greater in the pictorial than in the verbal modality. Finally, in the 12 SD patients in whom PET was available, a strong correlation between FDG uptake and face-to-name and voice-to-name matching data was found in the right but not in the left temporal lobe. The data support the hypothesis of a different cognitive basis for impairment of face and voice recognition in the two dementias and suggest that the pattern of impairment in SD may be due to a loss of semantic representations, while a defect of semantic control, with impaired naming and preserved recognition, might be hypothesized in AD. Furthermore, the correlation between face and voice recognition disorders and RTL damage is consistent with the hypothesis that in the RTL person-specific knowledge may be mainly based upon non-verbal representations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Pattern activation/recognition theory of mind

    PubMed Central

    du Castel, Bertrand

    2015-01-01

    In his 2012 book How to Create a Mind, Ray Kurzweil defines a “Pattern Recognition Theory of Mind” that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call “Pattern Activation/Recognition Theory of Mind.” While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation. PMID:26236228

  8. Pattern activation/recognition theory of mind.

    PubMed

    du Castel, Bertrand

    2015-01-01

    In his 2012 book How to Create a Mind, Ray Kurzweil defines a "Pattern Recognition Theory of Mind" that states that the brain uses millions of pattern recognizers, plus modules to check, organize, and augment them. In this article, I further the theory to go beyond pattern recognition and include also pattern activation, thus encompassing both sensory and motor functions. In addition, I treat checking, organizing, and augmentation as patterns of patterns instead of separate modules, therefore handling them the same as patterns in general. Henceforth I put forward a unified theory I call "Pattern Activation/Recognition Theory of Mind." While the original theory was based on hierarchical hidden Markov models, this evolution is based on their precursor: stochastic grammars. I demonstrate that a class of self-describing stochastic grammars allows for unifying pattern activation, recognition, organization, consistency checking, metaphor, and learning, into a single theory that expresses patterns throughout. I have implemented the model as a probabilistic programming language specialized in activation/recognition grammatical and neural operations. I use this prototype to compute and present diagrams for each stochastic grammar and corresponding neural circuit. I then discuss the theory as it relates to artificial network developments, common coding, neural reuse, and unity of mind, concluding by proposing potential paths to validation.

  9. Spectrum of MRI brain lesion patterns in neuromyelitis optica spectrum disorder: a pictorial review.

    PubMed

    Wang, Kevin Yuqi; Chetta, Justin; Bains, Pavit; Balzer, Anthony; Lincoln, John; Uribe, Tomas; Lincoln, Christie M

    2018-06-01

    Neuromyelitis optica is a neurotropic autoimmune inflammatory disease of the central nervous system traditionally thought to exclusively involve the optic nerves and spinal cord. With the discovery of the disease-specific aquaporin-4 antibody and the increasing recognition of clinical and characteristic imaging patterns of brain involvement in what is now termed neuromyelitis optica spectrum disorder (NMOSD), MRI now plays a greater role in diagnosis of NMOSD based on the 2015 consensus criteria and in distinguishing it from other inflammatory disorders, particularly multiple sclerosis (MS). Several brain lesion patterns are highly suggestive of NMOSD, whereas others may serve as red flags. Specifically, long corticospinal lesions, hemispheric cerebral white matter lesions and periependymal lesions in the diencephalon, dorsal brainstem and white matter adjacent to lateral ventricles are typical of NMOSD. In contrast, juxtacortical, cortical, or lesions perpendicularly oriented to the surface of the lateral ventricle suggests MS as the diagnosis. Ultimately, a strong recognition of the spectrum of MRI brain findings in NMOSD is essential for accurate diagnosis, and particularly in differentiating from MS. This pictorial review highlights the spectrum of characteristic brain lesion patterns that may be seen in NMOSD and further delineates findings that may help distinguish it from MS.

  10. Image remapping strategies applied as prostheses for the visually impaired

    NASA Technical Reports Server (NTRS)

    Johnson, Curtis D.

    1993-01-01

    Maculopathy and retinitis pigmentosa (RP) are two vision defects which leave the afflicted person with an impaired ability to read and recognize visual patterns. For some time there has been interest and work on the use of image remapping techniques to provide a visual aid for individuals with these impairments. The basic concept is to remap an image according to some mathematical transformation such that the image is warped around a maculopathic defect (scotoma) or within the RP foveal region of retinal sensitivity. NASA/JSC has been pursuing this research using angle invariant transformations, with testing of the resulting remapping using subjects and facilities of the University of Houston, College of Optometry. Testing is facilitated by use of a hardware device, the Programmable Remapper, to provide the remapping of video images. This report presents the results of studies of alternative remapping transformations with the objective of improving subject reading rates and pattern recognition. In particular, a form of conformal transformation was developed which provides for a smooth warping of an image around a scotoma. In such a case it is shown that distortion of characters and lines of characters is minimized, which should lead to enhanced character recognition. In addition, studies were made of alternative transformations which, although not conformal, provide for similar low character distortion remapping. A second, non-conformal transformation was studied for remapping of images to aid RP impairments. In this case a transformation was investigated which allows remapping of a vision field into a circular area representing the foveal retina region. The size and spatial representation of the image are selectable. It is shown that parametric adjustments allow for a wide variation of how a visual field is presented to the sensitive retina. This study also presents some preliminary considerations of how a prosthetic device could be implemented in a practical sense, vis-à-vis size, weight, and portability.

  11. It Takes Two–Skilled Recognition of Objects Engages Lateral Areas in Both Hemispheres

    PubMed Central

    Bilalić, Merim; Kiesel, Andrea; Pohl, Carsten; Erb, Michael; Grodd, Wolfgang

    2011-01-01

    Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and lateral areas along the dorsal, action related stream, as well as left infero-temporal areas along the ventral, object related stream are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. Experts' advantage was domain-specific as there were no differences between groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts' extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) related exclusively the areas along the dorsal stream to chess specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts homologous areas on the right hemisphere were also engaged in chess specific object recognition. Based on these results, we discuss whether skilled object recognition does not only involve a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes which engage additional brain areas. PMID:21283683

  12. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from his or her facial features; in some sense it resembles factor analysis, i.e. extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly its poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial images jointly in the spatial and frequency domains. The experimental results show that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
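
    The combination can be sketched as a one-level Haar approximation (the LL subband) followed by PCA via the SVD; the random 32x32 "faces" below are placeholders for a real training set:

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_ll(img):
    """One level of Haar wavelet decomposition, keeping only the LL
    (approximation) subband: the average of each 2x2 block. This quarters
    the pixel count before PCA, which is the main computational saving."""
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

# Hypothetical training images: 20 random 32x32 stand-ins for face images.
faces = rng.standard_normal((20, 32, 32))

# Wavelet step: 32x32 -> 16x16 LL subband, flattened to 256-d vectors.
X = np.array([haar_ll(f).ravel() for f in faces])

# PCA via SVD on the mean-centred data; keep the top 8 components
# ("eigenfaces" in face-recognition terminology).
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
eigenfaces = Vt[:8]                     # shape (8, 256), orthonormal rows
features = (X - mean) @ eigenfaces.T    # 8-d feature vector per face
```

    A probe face would be reduced the same way (Haar LL, centre, project) and classified by nearest neighbour in the 8-d feature space.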

  13. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor for resolving this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. To improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in all four improved algorithms. To improve the rotation invariance of the slice image, three new feature descriptors (the feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied in the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively in terms of the three invariances, recognition rate, and execution time. The final experimental results show that the improvements in these four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the recognition systems, and that the four improved recognition systems perform differently under different conditions.

  14. Visual based laser speckle pattern recognition method for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Park, Kyeongtaek; Torbol, Marco

    2017-04-01

    This study performed system identification of a target structure by analyzing the laser speckle pattern recorded by a camera. The laser speckle pattern is generated by the diffuse reflection of a laser beam on a rough surface of the target structure. The camera, equipped with a red filter, records the scattered speckle particles of the laser light in real time, and the raw speckle image of pixel data is fed to the graphics processing unit (GPU) in the system. The algorithm for laser speckle contrast analysis (LASCA) computes the laser speckle contrast images and the laser speckle flow images. The k-means clustering algorithm is used to classify the pixels in each frame, and the clusters' centroids, which function as virtual sensors, track the displacement between different frames in the time domain. The fast Fourier transform (FFT) and frequency domain decomposition (FDD) compute the modal properties of the structure: natural frequencies and damping ratios. This study takes advantage of the large-scale computational capability of the GPU. The algorithm is written in the Compute Unified Device Architecture (CUDA C), which allows the processing of speckle images in real time.
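
    The final step, turning a virtual sensor's displacement history into a natural-frequency estimate via the FFT, can be sketched on a synthetic signal. The 100 Hz frame rate, the damping, and the 2.5 Hz frequency below are assumed values, and the decaying sinusoid stands in for a tracked k-means centroid:

```python
import numpy as np

fs = 100.0                      # assumed camera frame rate in Hz
t = np.arange(0, 10, 1 / fs)
f_true = 2.5                    # synthetic natural frequency in Hz

# Synthetic virtual-sensor signal: centroid displacement modelled as a
# lightly damped sinusoid plus measurement noise.
rng = np.random.default_rng(3)
x = (np.exp(-0.05 * t) * np.sin(2 * np.pi * f_true * t)
     + 0.1 * rng.standard_normal(t.size))

# The peak of the one-sided FFT magnitude estimates the natural frequency.
spec = np.abs(np.fft.rfft(x - x.mean()))
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
f_est = float(freqs[spec.argmax()])
```

    With 10 s of data the frequency resolution is 0.1 Hz, so the peak lands on the true frequency; FDD extends this single-channel picking to multiple virtual sensors at once.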

  15. Implementation of cost-effective diffuse light source mechanism to reduce specular reflection and halo effects for resistor-image processing

    NASA Astrophysics Data System (ADS)

    Chen, Yung-Sheng; Wang, Jeng-Yau

    2015-09-01

    The light source plays a significant role in acquiring a qualified image of objects for image processing and pattern recognition. For objects with specular surfaces, the reflection and halo phenomena appearing in the acquired image increase the difficulty of information processing. Such a situation may be improved with the help of a suitable diffuse light source. Consider reading resistors via computer vision: due to the resistor's specular reflective surface, the image exhibits severely non-uniform luminous intensity, yielding a higher recognition error rate without a well-controlled light source. This paper presents a measurement system comprising a digital microscope embedded in a replaceable diffuse cover, a ring-type LED embedded in a small pad carrying the resistor under evaluation, and Arduino microcontrollers connected to a PC. Several replaceable, cost-effective diffuse covers, made from paper bowls, cups, and boxes lined inside with white paper, are presented for reducing specular reflection and halo effects and are compared with a commercial diffuse dome. The ring-type LED can be flexibly configured for full or partial lighting depending on the application. For each self-made diffuse cover, a set of resistors with 4 or 5 color bands is captured via the digital microscope for experiments. The signal-to-noise ratio of the segmented resistor-image is used for performance evaluation. The detected principal axis of the resistor body is used for the partial LED configuration to further improve the lighting condition. Experimental results confirm that the proposed mechanism can not only evaluate cost-effective diffuse light sources but also be extended into an automatic resistor-reading recognition system.

  16. Iris recognition via plenoptic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J.; Boehnen, Chris Bensing; Bolme, David S.

    Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.
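    The focus-stitching idea described above can be sketched in miniature. The snippet below is a simplified, hypothetical fusion step (not the patented method): for every pixel it keeps the value from whichever reconstructed focal slice is locally sharpest, using the discrete Laplacian magnitude as the sharpness proxy.

    ```python
    def laplacian(img, x, y):
        """4-neighbour discrete Laplacian magnitude; border pixels return 0."""
        h, w = len(img), len(img[0])
        if not (0 < x < h - 1 and 0 < y < w - 1):
            return 0.0
        return abs(4 * img[x][y]
                   - img[x - 1][y] - img[x + 1][y]
                   - img[x][y - 1] - img[x][y + 1])

    def fuse_focus_stack(stack):
        """All-in-focus fusion of a list of same-sized 2-D images.

        For each pixel, take the slice whose local Laplacian response
        (a common sharpness measure) is largest; ties keep the first slice.
        """
        h, w = len(stack[0]), len(stack[0][0])
        out = [[0.0] * w for _ in range(h)]
        for x in range(h):
            for y in range(w):
                best = max(stack, key=lambda im: laplacian(im, x, y))
                out[x][y] = best[x][y]
        return out
    ```

    In an off-angle gaze scenario, different parts of the iris lie at different depths, so each part is sharpest in a different slice; per-pixel selection recovers a fully focused composite.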

  17. Modeling and possible implementation of self-learning equivalence-convolutional neural structures for auto-encoding-decoding and clusterization of images

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Lazarev, Alexander A.; Nikitovich, Diana V.

    2017-08-01

    Self-learning equivalent-convolutional neural structures (SLECNS) for auto-encoding-decoding and image clustering are discussed. SLECNS architectures and their spatially invariant equivalent models (SI EMs), built from matrix-matrix procedures with basic operations of continuous logic and non-linear processing, are proposed. These SI EMs have several advantages, such as the ability to recognize image fragments efficiently even in the presence of strong cross-correlation. The proposed method for clustering fragments by their structural features is suitable not only for binary but also for color images, and combines self-learning with the formation of weighted cluster matrix-patterns. Its model is constructed from recursive processing algorithms related to the k-means method. The experimental results confirm that large images and 2-D binary fragments with large numbers of elements can be clustered. For the first time, the possibility of generalizing these models to the space-invariant case is shown. An experiment was carried out on a 256x256 reference image with fragments of 7x7 and 21x21 pixels. Experiments in the Mathcad software environment showed that the proposed method is universal, converges in a small number of iterations, maps easily onto the matrix structure, and is promising. It is therefore important to understand the mechanisms of self-learning equivalence-convolutional clustering, the accompanying competitive processes among neurons, and the principles of neural auto-encoding-decoding and recognition with self-learned cluster patterns; these rely on algorithms and principles of non-linear processing of two-dimensional spatial image-comparison functions.
    These SI EMs simply describe signal processing during all training and recognition stages and are suitable for unipolar-coded multilevel signals. We show that an implementation of SLECNS based on known equivalentors or traditional correlators is possible if they are built on the proposed equivalental two-dimensional image-similarity functions. The clustering efficiency of such models and their implementation depend on the discriminant properties of the neural elements in the hidden layers. The main model and architecture parameters and characteristics therefore depend on the types of non-linear processing and the functions used for image comparison or for adaptive-equivalental weighting of input patterns. Model experiments in Mathcad are demonstrated, confirming that non-linear processing with equivalent functions makes it possible to determine the winner neurons and adjust the weight matrix. Experimental results show that such models can be successfully used for auto- and hetero-associative recognition, and they can also help explain mechanisms known as "focus" and the "competing gain-inhibition concept". The SLECNS architecture and hardware implementations of its basic nodes, based on multi-channel convolvers and correlators with time integration, are proposed, and the parameters and performance of such architectures are estimated.
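    A heavily simplified stand-in for the equivalental similarity functions and the winner-take-all step can be sketched as follows. The `equivalence` measure below (mean of `1 - |a - b|` over pixels in [0, 1]) is only an illustrative continuous-logic-style similarity, not the authors' exact function:

    ```python
    def equivalence(a, b):
        """Normalized 'equivalence' similarity of two flattened image
        fragments with pixel values in [0, 1]: mean of 1 - |a_i - b_i|.
        Identical fragments score 1.0; complementary ones score 0.0."""
        assert len(a) == len(b)
        return sum(1.0 - abs(x - y) for x, y in zip(a, b)) / len(a)

    def cluster_winner(fragment, weight_patterns):
        """Winner-take-all step: index of the weight matrix-pattern
        most 'equivalent' to the input fragment."""
        scores = [equivalence(fragment, w) for w in weight_patterns]
        return max(range(len(scores)), key=scores.__getitem__)
    ```

    During self-learning, the winning weight pattern would then be nudged toward the input fragment, analogously to a k-means centroid update.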

  18. Application of passive imaging polarimetry in the discrimination and detection of different color targets of identical shapes using color-blind imaging sensors

    NASA Astrophysics Data System (ADS)

    El-Saba, A. M.; Alam, M. S.; Surpanani, A.

    2006-05-01

    Important aspects of automatic pattern recognition systems are their ability to efficiently discriminate and detect the intended targets with low false alarm rates. In this paper we extend the applications of passive imaging polarimetry to effectively discriminate and detect different color targets of identical shapes using a color-blind imaging sensor. For this case study we demonstrate that traditional color-blind, polarization-insensitive imaging sensors, which rely only on the spatial distribution of targets, suffer from high false detection rates, especially in scenarios where multiple targets of identical shape are present. In contrast, we show that color-blind polarization-sensitive imaging sensors can successfully and efficiently discriminate and detect true targets based on their color alone. We highlight the main advantages of the proposed polarization-encoded imaging sensor.
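    The polarization signature that a color-blind, polarization-sensitive sensor exploits is commonly summarized by the degree of linear polarization (DoLP), computed per pixel from intensity measurements behind a linear polarizer at four angles. The snippet below uses the standard Stokes-parameter formulas, not the paper's specific sensor model:

    ```python
    import math

    def degree_of_linear_polarization(i0, i45, i90, i135):
        """Per-pixel DoLP from intensities measured through a linear
        polarizer at 0, 45, 90, and 135 degrees.

        S0 = (I0 + I45 + I90 + I135) / 2   (total intensity)
        S1 = I0 - I90
        S2 = I45 - I135
        DoLP = sqrt(S1^2 + S2^2) / S0, in [0, 1].
        """
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        return math.sqrt(s1 ** 2 + s2 ** 2) / s0 if s0 else 0.0
    ```

    Two targets of identical shape but different surface material or coating can then differ in DoLP even when their grayscale intensities match, which is what allows discrimination without color information.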

  19. Investigation of Time Series Representations and Similarity Measures for Structural Damage Pattern Recognition

    PubMed Central

    Swartz, R. Andrew

    2013-01-01

    This paper investigates the time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both time series representation methods and similarity measures have significant impact on the pattern recognition success rate. PMID:24191136
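    One of the elastic similarity measures typically compared in such studies is dynamic time warping (DTW). The sketch below is a textbook DTW implementation on 1-D feature series, offered only as an illustration of the class of similarity measures evaluated, not as the paper's code:

    ```python
    def dtw_distance(a, b):
        """Dynamic time warping distance between two 1-D series.

        Classic O(n*m) dynamic program: d[i][j] is the minimal
        accumulated |a_i - b_j| cost over all monotone alignments of
        the first i and j samples.
        """
        n, m = len(a), len(b)
        inf = float("inf")
        d = [[inf] * (m + 1) for _ in range(n + 1)]
        d[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                d[i][j] = cost + min(d[i - 1][j],      # insertion
                                     d[i][j - 1],      # deletion
                                     d[i - 1][j - 1])  # match
        return d[n][m]
    ```

    Because DTW tolerates local time shifts, two response histories of the same damage pattern recorded at slightly different rates can still score as similar, which is exactly the property whose impact on recognition success rate is measured.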

  20. Automated Identification of River Hydromorphological Features Using UAV High Resolution Aerial Imagery.

    PubMed

    Casado, Monica Rivas; Gonzalez, Rocio Ballesteros; Kriechbaumer, Thomas; Veal, Amanda

    2015-11-04

    European legislation is driving the development of methods for river ecosystem protection in light of concerns over water quality and ecology. Key to their success is the accurate and rapid characterisation of physical features (i.e., hydromorphology) along the river. Image pattern recognition techniques have been successfully used for this purpose. The reliability of the methodology depends on both the quality of the aerial imagery and the pattern recognition technique used. Recent studies have proved the potential of Unmanned Aerial Vehicles (UAVs) to increase the quality of the imagery by capturing high resolution photography. Similarly, Artificial Neural Networks (ANN) have been shown to be a high precision tool for automated recognition of environmental patterns. This paper presents a UAV-based framework for the identification of hydromorphological features from high resolution RGB aerial imagery using a novel classification technique based on ANNs. The framework is developed for a 1.4 km reach of the river Dee in Wales, United Kingdom. For this purpose, a Falcon 8 octocopter was used to gather 2.5 cm resolution imagery. The results show that the accuracy of the framework is above 81%, performing particularly well at recognising vegetation. These results support the use of UAVs for environmental policy implementation and demonstrate the potential of ANNs and RGB imagery for high precision river monitoring and river management.
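    At its smallest scale, ANN classification of RGB imagery amounts to learning a mapping from per-pixel (or per-patch) color features to a class label. The toy sketch below trains a single logistic neuron to separate green-dominant "vegetation" pixels from other cover; it is a deliberately minimal stand-in for the paper's ANN, and the training pixels are invented for illustration:

    ```python
    import math

    def train_pixel_neuron(samples, labels, epochs=500, lr=0.5):
        """Train one logistic neuron on (r, g, b) features in [0, 1]
        by stochastic gradient descent on the log-loss."""
        w, b = [0.0, 0.0, 0.0], 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                z = sum(wi * xi for wi, xi in zip(w, x)) + b
                p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
                g = p - y                         # log-loss gradient
                w = [wi - lr * g * xi for wi, xi in zip(w, x)]
                b -= lr * g
        return w, b

    def predict(model, x):
        """1 = vegetation, 0 = other cover."""
        w, b = model
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    # Invented normalized-RGB training pixels: green-dominant vegetation
    # vs. grey/blue water and gravel.
    samples = [(0.2, 0.7, 0.1), (0.25, 0.6, 0.15),
               (0.3, 0.3, 0.4), (0.5, 0.45, 0.4)]
    labels = [1, 1, 0, 0]
    model = train_pixel_neuron(samples, labels)
    ```

    A real hydromorphological classifier uses a full multi-layer network, patch-level texture features, and many labeled classes, but the train-then-predict structure is the same.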
