Sample records for eye state recognition

  1. Iris recognition in the presence of ocular disease

    PubMed Central

    Aslam, Tariq Mehmood; Tan, Shi Zhuan; Dhillon, Baljean

    2009-01-01

    Iris recognition systems are among the most accurate of all biometric technologies with immense potential for use in worldwide security applications. This study examined the effect of eye pathology on iris recognition and in particular whether eye disease could cause iris recognition systems to fail. The experiment involved a prospective cohort of 54 patients with anterior segment eye disease who were seen at the acute referral unit of the Princess Alexandra Eye Pavilion in Edinburgh. Iris camera images were obtained from patients before treatment was commenced and again at follow-up appointments after treatment had been given. The principal outcome measure was that of mathematical difference in the iris recognition templates obtained from patients' eyes before and after treatment of the eye disease. Results showed that the performance of iris recognition was remarkably resilient to most ophthalmic disease states, including corneal oedema, iridotomies (laser puncture of iris) and conjunctivitis. Problems were, however, encountered in some patients with acute inflammation of the iris (iritis/anterior uveitis). The effects of a subject developing anterior uveitis may cause current recognition systems to fail. Those developing and deploying iris recognition should be aware of the potential problems that this could cause to this key biometric technology. PMID:19324690

  2. Iris recognition in the presence of ocular disease.

    PubMed

    Aslam, Tariq Mehmood; Tan, Shi Zhuan; Dhillon, Baljean

    2009-05-06

    Iris recognition systems are among the most accurate of all biometric technologies with immense potential for use in worldwide security applications. This study examined the effect of eye pathology on iris recognition and in particular whether eye disease could cause iris recognition systems to fail. The experiment involved a prospective cohort of 54 patients with anterior segment eye disease who were seen at the acute referral unit of the Princess Alexandra Eye Pavilion in Edinburgh. Iris camera images were obtained from patients before treatment was commenced and again at follow-up appointments after treatment had been given. The principal outcome measure was that of mathematical difference in the iris recognition templates obtained from patients' eyes before and after treatment of the eye disease. Results showed that the performance of iris recognition was remarkably resilient to most ophthalmic disease states, including corneal oedema, iridotomies (laser puncture of iris) and conjunctivitis. Problems were, however, encountered in some patients with acute inflammation of the iris (iritis/anterior uveitis). The effects of a subject developing anterior uveitis may cause current recognition systems to fail. Those developing and deploying iris recognition should be aware of the potential problems that this could cause to this key biometric technology.

  3. Driver fatigue detection based on eye state.

    PubMed

    Lin, Lizong; Huang, Chao; Ni, Xiaopeng; Wang, Jiawen; Zhang, Hao; Li, Xiao; Qian, Zhiqin

    2015-01-01

Nowadays, more and more traffic accidents occur because of driver fatigue. In order to reduce and prevent such accidents, this study developed a detection method based on machine vision using the PERCLOS (percentage of eye closure time) parameter, which determines whether a driver's eyes are in a fatigue state according to the PERCLOS value. The overall workflow included face detection and tracking, detection and location of the human eye, human eye tracking, eye state recognition, and driver fatigue testing. The key aspects of the detection system were the detection and location of human eyes and driver fatigue testing. The simplified method of measuring the driver's PERCLOS value was to calculate the ratio of frames with the eyes closed to the total number of frames for a given period. If the eyes were closed in more than the set threshold of the total number of frames, the system would alert the driver. Many experiments showed that, in addition to its simple detection algorithm, rapid computing speed, and high detection and recognition accuracy, the system meets the real-time requirements of a driver fatigue detection system.
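The PERCLOS computation described above reduces to a ratio over a sliding window of frames. A minimal sketch, assuming frame-level eye states (1 = closed, 0 = open) arrive from an upstream eye-state classifier; the window length and alert threshold here are illustrative placeholders, not the study's values:

```python
def perclos(eye_states, window=30):
    """Closed-eye ratio over each sliding window of frames."""
    return [sum(eye_states[i:i + window]) / window
            for i in range(len(eye_states) - window + 1)]

def fatigue_alert(eye_states, window=30, threshold=0.4):
    """Alert if any window's closed-eye ratio exceeds the threshold."""
    return any(p > threshold for p in perclos(eye_states, window))
```

In a real system the window would be expressed in seconds and converted via the camera frame rate, and the threshold tuned against labelled drowsy-driving data.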

  4. Can gaze avoidance explain why individuals with Asperger's syndrome can't recognise emotions from facial expressions?

    PubMed

    Sawyer, Alyssa C P; Williamson, Paul; Young, Robyn L

    2012-04-01

    Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition deficit. This explanation was investigated using a newly developed emotion and mental state recognition task. Individuals with Asperger's Syndrome were less accurate at recognising emotions and mental states, but did not show evidence of gaze avoidance compared to individuals without Asperger's Syndrome. This suggests that the way individuals with Asperger's Syndrome look at faces cannot account for the difficulty they have recognising expressions.

  5. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    PubMed

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
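The holistic-versus-analytic classification above can be illustrated with the scaled forward algorithm: score a scanpath (a sequence of fixated regions of interest) under two candidate HMMs and label it by the more likely model. The two toy models below are illustrative stand-ins, not the fitted models from the study, which learned per-participant HMMs from fixation data:

```python
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Scaled forward algorithm: log P(obs) under a discrete-emission HMM."""
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]   # predict, then weight by emission
        log_p += np.log(alpha.sum())
        alpha = alpha / alpha.sum()            # rescale to avoid underflow
    return log_p

# Toy ROIs: 0 = face centre, 1 = left eye, 2 = right eye.
# State 0 emits mostly centre fixations; state 1 emits mostly eye fixations.
ANALYTIC = dict(start=np.array([0.5, 0.5]),
                trans=np.array([[0.3, 0.7], [0.2, 0.8]]),
                emit=np.array([[0.8, 0.1, 0.1], [0.2, 0.4, 0.4]]))
HOLISTIC = dict(start=np.array([0.5, 0.5]),
                trans=np.array([[0.8, 0.2], [0.7, 0.3]]),
                emit=np.array([[0.9, 0.05, 0.05], [0.6, 0.2, 0.2]]))

def classify_scanpath(obs):
    """Label a scanpath by the model under which it is more likely."""
    a = log_likelihood(obs, **ANALYTIC)
    h = log_likelihood(obs, **HOLISTIC)
    return "analytic" if a > h else "holistic"
```

An eye-heavy scanpath such as `[1, 2, 1, 2, 1]` scores higher under the analytic model, while a centre-heavy one such as `[0, 0, 0, 0, 0]` scores higher under the holistic model.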

  6. United States Homeland Security and National Biometric Identification

    DTIC Science & Technology

    2002-04-09

    security number. Biometrics is the use of unique individual traits such as fingerprints, iris eye patterns, voice recognition, and facial recognition to...technology to control access onto their military bases using a Defense Manpower Management Command developed software application. FACIAL Facial recognition systems...installed facial recognition systems in conjunction with a series of 200 cameras to fight street crime and identify terrorists. The cameras, which are

  7. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    NASA Astrophysics Data System (ADS)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

Facial expressions have an important role in interpersonal communication and the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies that use partial face features and yielding results comparable to studies using whole-face information, only about 2.5% lower than the best whole-face system while using only about one third of the facial region.
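The sequential forward selection step above can be sketched as a greedy wrapper: starting from the empty set, repeatedly add the single feature that most improves a scoring function, and stop when no candidate helps. The feature names and toy score below are hypothetical; the actual system scored candidate subsets by SVM classification accuracy:

```python
def sfs(features, score, max_features=None):
    """Greedy sequential forward selection: add features while the score improves."""
    selected, best = [], score([])
    while max_features is None or len(selected) < max_features:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        # Evaluate each remaining feature added to the current subset.
        top_score, top_f = max((score(selected + [f]), f) for f in candidates)
        if top_score <= best:
            break  # no candidate improves the score
        selected.append(top_f)
        best = top_score
    return selected
```

Because the score is an arbitrary callable, the same skeleton works whether the criterion is cross-validated classifier accuracy or a cheap filter statistic.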

  8. Do Adults with High Functioning Autism or Asperger Syndrome Differ in Empathy and Emotion Recognition?

    PubMed

    Montgomery, Charlotte B; Allison, Carrie; Lai, Meng-Chuan; Cassidy, Sarah; Langdon, Peter E; Baron-Cohen, Simon

    2016-06-01

    The present study examined whether adults with high functioning autism (HFA) showed greater difficulties in (1) their self-reported ability to empathise with others and/or (2) their ability to read mental states in others' eyes than adults with Asperger syndrome (AS). The Empathy Quotient (EQ) and 'Reading the Mind in the Eyes' Test (Eyes Test) were compared in 43 adults with AS and 43 adults with HFA. No significant difference was observed on EQ score between groups, while adults with AS performed significantly better on the Eyes Test than those with HFA. This suggests that adults with HFA may need more support, particularly in mentalizing and complex emotion recognition, and raises questions about the existence of subgroups within autism spectrum conditions.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Barstow, Del R; Karakaya, Mahmut

Iris recognition has been proven to be an accurate and reliable biometric. However, the recognition of non-ideal iris images, such as off-angle images, is still an unsolved problem. We propose a new biometric-targeted eye model and a method to reconstruct the off-axis eye to its frontal view, allowing for recognition using existing methods and algorithms. This allows existing enterprise-level algorithms and approaches to remain largely unmodified by using our work as a pre-processor to improve performance. In addition, we describe the 'Limbus effect' and its importance for an accurate segmentation of off-axis irides. Our method uses an anatomically accurate human eye model and ray-tracing techniques to compute a transformation function, which reconstructs the iris to its frontal, non-refracted state. Then, the same eye model is used to render a frontal view of the reconstructed iris. The proposed method is fully described, and results from synthetic data are shown to establish an upper limit on performance improvement and to establish the importance of the proposed approach over traditional linear elliptical unwrapping methods. Our results with synthetic data demonstrate the ability to perform accurate iris recognition with an image taken as much as 70 degrees off-axis.
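For contrast with the ray-traced reconstruction, the "traditional linear elliptical unwrapping" baseline mentioned above is essentially Daugman-style rubber-sheet normalization: sample the annulus between the pupil and limbus boundaries onto a fixed rectangular grid. A minimal circular sketch, assuming the boundary centre and radii have already been segmented:

```python
import numpy as np

def unwrap_iris(image, cx, cy, r_pupil, r_limbus, n_r=16, n_theta=64):
    """Sample the iris annulus onto an (n_r, n_theta) polar grid
    using nearest-neighbour lookup (no interpolation, for brevity)."""
    radii = np.linspace(r_pupil, r_limbus, n_r)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_r, n_theta), dtype=image.dtype)
    for i, r in enumerate(radii):
        xs = np.clip(np.rint(cx + r * np.cos(thetas)).astype(int), 0, image.shape[1] - 1)
        ys = np.clip(np.rint(cy + r * np.sin(thetas)).astype(int), 0, image.shape[0] - 1)
        out[i] = image[ys, xs]
    return out
```

This linear mapping ignores corneal refraction and off-axis perspective, which is precisely the gap the anatomically accurate eye model is meant to close.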

  10. Context and Spoken Word Recognition in a Novel Lexicon

    ERIC Educational Resources Information Center

    Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments,…

  11. Emotion Recognition in Children with Autism Spectrum Disorders: Relations to Eye Gaze and Autonomic State

    ERIC Educational Resources Information Center

    Bal, Elgiz; Harden, Emily; Lamb, Damon; Van Hecke, Amy Vaughan; Denver, John W.; Porges, Stephen W.

    2010-01-01

    Respiratory Sinus Arrhythmia (RSA), heart rate, and accuracy and latency of emotion recognition were evaluated in children with autism spectrum disorders (ASD) and typically developing children while viewing videos of faces slowly transitioning from a neutral expression to one of six basic emotions (e.g., anger, disgust, fear, happiness, sadness,…

  12. Fast cat-eye effect target recognition based on saliency extraction

    NASA Astrophysics Data System (ADS)

    Li, Li; Ren, Jianlin; Wang, Xingbin

    2015-09-01

Background complexity is a main cause of false detections in cat-eye target recognition. Human vision has a selective attention property that helps it find salient targets in complex unknown scenes quickly and precisely. In this paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF), which combines traditional cat-eye target recognition with the selective characteristics of visual attention. Furthermore, parallel processing enables fast recognition. Experimental results show that the proposed method performs better in accuracy, robustness and speed compared to other methods.

  13. Do Adults with High Functioning Autism or Asperger Syndrome Differ in Empathy and Emotion Recognition?

    ERIC Educational Resources Information Center

    Montgomery, Charlotte B.; Allison, Carrie; Lai, Meng-Chuan; Cassidy, Sarah; Langdon, Peter E.; Baron-Cohen, Simon

    2016-01-01

    The present study examined whether adults with high functioning autism (HFA) showed greater difficulties in (1) their self-reported ability to empathise with others and/or (2) their ability to read mental states in others' eyes than adults with Asperger syndrome (AS). The Empathy Quotient (EQ) and "Reading the Mind in the Eyes" Test…

  14. Capacities for theory of mind, metacognition, and neurocognitive function are independently related to emotional recognition in schizophrenia.

    PubMed

    Lysaker, Paul H; Leonhardt, Bethany L; Brüne, Martin; Buck, Kelly D; James, Alison; Vohs, Jenifer; Francis, Michael; Hamm, Jay A; Salvatore, Giampaolo; Ringer, Jamie M; Dimaggio, Giancarlo

    2014-09-30

While many with schizophrenia spectrum disorders experience difficulties understanding the feelings of others, little is known about the psychological antecedents of these deficits. To explore these issues we examined whether deficits in mental state decoding, mental state reasoning and metacognitive capacity predict performance on an emotion recognition task. Participants were 115 adults with a schizophrenia spectrum disorder and 58 adults with substance use disorders but no history of a diagnosis of psychosis who completed the Eyes and Hinting Test. Metacognitive capacity was assessed using the Metacognitive Assessment Scale Abbreviated and emotion recognition was assessed using the Bell Lysaker Emotion Recognition Test. Results revealed that the schizophrenia patients performed more poorly than controls on tests of emotion recognition, mental state decoding, mental state reasoning and metacognition. Lesser capacities for mental state decoding, mental state reasoning and metacognition were all uniquely related to emotion recognition within the schizophrenia group even after controlling for neurocognition and symptoms in a stepwise multiple regression. Results suggest that deficits in emotion recognition in schizophrenia may partly result from a combination of impairments in the ability to judge the cognitive and affective states of others and difficulties forming complex representations of self and others. Published by Elsevier Ireland Ltd.

  15. Tracking the truth: the effect of face familiarity on eye fixations during deception.

    PubMed

    Millen, Ailsa E; Hope, Lorraine; Hillstrom, Anne P; Vrij, Aldert

    2017-05-01

    In forensic investigations, suspects sometimes conceal recognition of a familiar person to protect co-conspirators or hide knowledge of a victim. The current experiment sought to determine whether eye fixations could be used to identify memory of known persons when lying about recognition of faces. Participants' eye movements were monitored whilst they lied and told the truth about recognition of faces that varied in familiarity (newly learned, famous celebrities, personally known). Memory detection by eye movements during recognition of personally familiar and famous celebrity faces was negligibly affected by lying, thereby demonstrating that detection of memory during lies is influenced by the prior learning of the face. By contrast, eye movements did not reveal lies robustly for newly learned faces. These findings support the use of eye movements as markers of memory during concealed recognition but also suggest caution when familiarity is only a consequence of one brief exposure.

  16. Understanding eye movements in face recognition using hidden Markov models.

    PubMed

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants, and model each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than locations of the fixations alone. © 2014 ARVO.

  17. Comparison of eye imaging pattern recognition using neural network

    NASA Astrophysics Data System (ADS)

    Bukhari, W. M.; Syed A., M.; Nasir, M. N. M.; Sulaima, M. F.; Yahaya, M. S.

    2015-05-01

The appeal of an eye recognition system is that it can automatically identify or verify a person from digital images or a video source. The eye has various characteristics, such as the colour of the iris, the size of the pupil and the shape of the eye. This study presents the analysis, design and implementation of a system for eye image recognition. All eye images captured from the webcam in RGB format must pass through several preprocessing techniques before they can serve as input to the pattern recognition process. The results show that the final weight and bias values, obtained after training on 6 eye images for one subject, are memorized by the neural network system and serve as the reference weight and bias values for the testing phase. The targets classify 5 different types for 5 subjects. The system can recognize the subject from an eye image based on the target set during the training process; when the values of a new eye image and an eye image in the database are almost equal, the images are considered a match.

  18. Real-time eye tracking for the assessment of driver fatigue.

    PubMed

    Xu, Junli; Min, Jianliang; Hu, Jianfeng

    2018-04-01

Eye-tracking is an important approach to collecting evidence of driver fatigue. In this contribution, the authors present a non-intrusive system for evaluating driver fatigue by tracking eye movement behaviours. A real-time eye-tracker was used to monitor participants' eye state and collect eye-movement data, which offer insights for assessing fatigue during monotonous driving. Ten healthy subjects performed continuous simulated driving for 1-2 h with eye state monitoring on a driving simulator, and the fixation time and pupil area were recorded using an eye movement tracking device. To achieve a good cost-performance ratio and fast computation time, the fuzzy K-nearest neighbour algorithm was employed to evaluate and analyse the influence of different participants on the variations in drivers' fixation duration and pupil area. The findings of this study indicated significant differences in the domain-value distribution of the pupil area between normal and fatigued driving states. Results also suggest that recognition accuracy by jackknife validation reaches about 89% on average, demonstrating significant potential for real-time application and the capability of detecting driver fatigue.
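The fuzzy K-nearest-neighbour step can be sketched as inverse-distance-weighted class memberships over the k nearest training samples (Keller-style, with crisp training labels). The two-feature vectors (fixation duration, pupil area) and the "alert"/"fatigued" labels below are illustrative, not the study's data:

```python
import numpy as np

def fuzzy_knn(train_x, train_y, x, k=3, m=2):
    """Return (predicted label, class memberships) for query x.
    Memberships are inverse-distance-weighted votes of the k nearest samples;
    m is the fuzzifier from Keller's formulation (m=2 gives 1/d**2 weights)."""
    d = np.linalg.norm(train_x - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** (2.0 / (m - 1)) + 1e-12)  # guard against d == 0
    classes = sorted(set(train_y))
    memberships = {c: w[[train_y[i] == c for i in idx]].sum() / w.sum()
                   for c in classes}
    return max(memberships, key=memberships.get), memberships
```

Unlike a crisp k-NN vote, the memberships sum to one and grade how confidently a sample sits in each class, which is useful when fatigue develops gradually.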

  19. Real-time eye tracking for the assessment of driver fatigue

    PubMed Central

    Xu, Junli; Min, Jianliang

    2018-01-01

Eye-tracking is an important approach to collecting evidence of driver fatigue. In this contribution, the authors present a non-intrusive system for evaluating driver fatigue by tracking eye movement behaviours. A real-time eye-tracker was used to monitor participants' eye state and collect eye-movement data, which offer insights for assessing fatigue during monotonous driving. Ten healthy subjects performed continuous simulated driving for 1-2 h with eye state monitoring on a driving simulator, and the fixation time and pupil area were recorded using an eye movement tracking device. To achieve a good cost-performance ratio and fast computation time, the fuzzy K-nearest neighbour algorithm was employed to evaluate and analyse the influence of different participants on the variations in drivers' fixation duration and pupil area. The findings of this study indicated significant differences in the domain-value distribution of the pupil area between normal and fatigued driving states. Results also suggest that recognition accuracy by jackknife validation reaches about 89% on average, demonstrating significant potential for real-time application and the capability of detecting driver fatigue. PMID:29750113

  20. Can Changes in Eye Movement Scanning Alter the Age-Related Deficit in Recognition Memory?

    PubMed Central

    Chan, Jessica P. K.; Kamino, Daphne; Binns, Malcolm A.; Ryan, Jennifer D.

    2011-01-01

    Older adults typically exhibit poorer face recognition compared to younger adults. These recognition differences may be due to underlying age-related changes in eye movement scanning. We examined whether older adults’ recognition could be improved by yoking their eye movements to those of younger adults. Participants studied younger and older faces, under free viewing conditions (bases), through a gaze-contingent moving window (own), or a moving window which replayed the eye movements of a base participant (yoked). During the recognition test, participants freely viewed the faces with no viewing restrictions. Own-age recognition biases were observed for older adults in all viewing conditions, suggesting that this effect occurs independently of scanning. Participants in the bases condition had the highest recognition accuracy, and participants in the yoked condition were more accurate than participants in the own condition. Among yoked participants, recognition did not depend on age of the base participant. These results suggest that successful encoding for all participants requires the bottom-up contribution of peripheral information, regardless of the locus of control of the viewer. Although altering the pattern of eye movements did not increase recognition, the amount of sampling of the face during encoding predicted subsequent recognition accuracy for all participants. Increased sampling may confer some advantages for subsequent recognition, particularly for people who have declining memory abilities. PMID:21687460

  1. Effects of bilateral eye movements on the retrieval of item, associative, and contextual information.

    PubMed

    Parker, Andrew; Relph, Sarah; Dagnall, Neil

    2008-01-01

    Two experiments are reported that investigate the effects of saccadic bilateral eye movements on the retrieval of item, associative, and contextual information. Experiment 1 compared the effects of bilateral versus vertical versus no eye movements on tests of item recognition, followed by remember-know responses and associative recognition. Supporting previous research, bilateral eye movements enhanced item recognition by increasing the hit rate and decreasing the false alarm rate. Analysis of remember-know responses indicated that eye movement effects were accompanied by increases in remember responses. The test of associative recognition found that bilateral eye movements increased correct responses to intact pairs and decreased false alarms to rearranged pairs. Experiment 2 assessed the effects of eye movements on the recall of intrinsic (color) and extrinsic (spatial location) context. Bilateral eye movements increased correct recall for both types of context. The results are discussed within the framework of dual-process models of memory and the possible neural underpinnings of these effects are considered.

  2. Electrooculography-based continuous eye-writing recognition system for efficient assistive communication systems

    PubMed Central

    Shinozaki, Takahiro

    2018-01-01

    Human-computer interface systems whose input is based on eye movements can serve as a means of communication for patients with locked-in syndrome. Eye-writing is one such system; users can input characters by moving their eyes to follow the lines of the strokes corresponding to characters. Although this input method makes it easy for patients to get started because of their familiarity with handwriting, existing eye-writing systems suffer from slow input rates because they require a pause between input characters to simplify the automatic recognition process. In this paper, we propose a continuous eye-writing recognition system that achieves a rapid input rate because it accepts characters eye-written continuously, with no pauses. For recognition purposes, the proposed system first detects eye movements using electrooculography (EOG), and then a hidden Markov model (HMM) is applied to model the EOG signals and recognize the eye-written characters. Additionally, this paper investigates an EOG adaptation that uses a deep neural network (DNN)-based HMM. Experiments with six participants showed an average input speed of 27.9 character/min using Japanese Katakana as the input target characters. A Katakana character-recognition error rate of only 5.0% was achieved using 13.8 minutes of adaptation data. PMID:29425248

  3. Eye movements during object recognition in visual agnosia.

    PubMed

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Using eye movements as an index of implicit face recognition in autism spectrum disorder.

    PubMed

    Hedley, Darren; Young, Robyn; Brewer, Neil

    2012-10-01

Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.

  5. Effects of Bilateral Eye Movements on Gist Based False Recognition in the DRM Paradigm

    ERIC Educational Resources Information Center

    Parker, Andrew; Dagnall, Neil

    2007-01-01

    The effects of saccadic bilateral (horizontal) eye movements on gist based false recognition was investigated. Following exposure to lists of words related to a critical but non-studied word participants were asked to engage in 30s of bilateral vs. vertical vs. no eye movements. Subsequent testing of recognition memory revealed that those who…

  6. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    PubMed

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhancement of facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  7. Why don't men understand women? Altered neural networks for reading the language of male and female eyes.

    PubMed

    Schiffer, Boris; Pawliczek, Christina; Müller, Bernhard W; Gizewski, Elke R; Walter, Henrik

    2013-01-01

    Men are traditionally thought to have more problems in understanding women compared to understanding other men, though evidence supporting this assumption remains sparse. Recently, it has been shown, however, that men's problems in recognizing women's emotions could be linked to difficulties in extracting the relevant information from the eye region, which remains one of the richest sources of social information for the attribution of mental states to others. To determine possible differences in the neural correlates underlying emotion recognition from female as compared to male eyes, a modified version of the Reading the Mind in the Eyes Test in combination with functional magnetic resonance imaging (fMRI) was applied to a sample of 22 participants. We found that men actually had twice as many problems in recognizing emotions from female as compared to male eyes, and that these problems were particularly associated with a lack of activation in limbic regions of the brain (including the hippocampus and the rostral anterior cingulate cortex). Moreover, men revealed heightened activation of the right amygdala to male stimuli regardless of condition (sex vs. emotion recognition). Thus, our findings highlight the function of the amygdala in the affective component of theory of mind (ToM) and in empathy, and provide further evidence that men are substantially less able to infer mental states expressed by women, which may be accompanied by sex-specific differences in amygdala activity.

  8. Iris recognition via plenoptic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J.; Boehnen, Chris Bensing; Bolme, David S.

    Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.

  9. Transcutaneous vagus nerve stimulation (tVNS) enhances recognition of emotions in faces but not bodies.

    PubMed

    Sellaro, Roberta; de Gelder, Beatrice; Finisguerra, Alessandra; Colzato, Lorenza S

    2018-02-01

    The polyvagal theory suggests that the vagus nerve is the key phylogenetic substrate enabling optimal social interactions, a crucial aspect of which is emotion recognition. A previous study showed that the vagus nerve plays a causal role in mediating people's ability to recognize emotions based on images of the eye region. The aim of this study is to verify whether the previously reported causal link between vagal activity and emotion recognition can be generalized to situations in which emotions must be inferred from images of whole faces and bodies. To this end, we employed transcutaneous vagus nerve stimulation (tVNS), a novel non-invasive brain stimulation technique that causes the vagus nerve to fire by the application of a mild electrical stimulation to the auricular branch of the vagus nerve, located in the anterior protuberance of the outer ear. In two separate sessions, participants received active or sham tVNS before and while performing two emotion recognition tasks, aimed at indexing their ability to recognize emotions from facial and bodily expressions. Active tVNS, compared to sham stimulation, enhanced emotion recognition for whole faces but not for bodies. Our results confirm and further extend recent observations supporting a causal relationship between vagus nerve activity and the ability to infer others' emotional state, but restrict this association to situations in which the emotional state is conveyed by the whole face and/or by salient facial cues, such as eyes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    PubMed

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

    Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but no difference was found during the scripted task. Results suggest altered eye gaze to the mouth region but not the eye region as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  11. Factors influencing young chimpanzees' (Pan troglodytes) recognition of attention.

    PubMed

    Povinelli, D J; Eddy, T J

    1996-12-01

    By 2 1/2 years of age, human infants appear to understand how others are connected to the external world through the mental state of attention and also appear to understand the specific role that the eyes play in deploying this attention. Previous research with chimpanzees suggests that, although they track the gaze of others, they may simultaneously be unaware of the underlying state of attention behind gaze. In a series of 3 experiments, the investigators systematically explored how the presence of eyes, direct eye contact, and head orientation and movement affected young chimpanzees' choice of 2 experimenters from whom to request food. The results indicate that young chimpanzees may be selectively attached to other organisms making direct eye contact with them or engaged in postures or movements that indicate attention, even though they may not appreciate the underlying mentalistic significance of these behaviors.

  12. Enhanced iris recognition method based on multi-unit iris images

    NASA Astrophysics Data System (ADS)

    Shin, Kwang Yong; Kim, Yeong Gon; Park, Kang Ryoung

    2013-04-01

    For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When obtaining an iris image, the iris image is frequently rotated because of the user's head roll toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods use shifting of the iris feature codes to perform the matching. However, this increases the computational complexity and the false acceptance error rate. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns of the left and right eyes of the same person are different and are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by score fusion of the left and right irises.
Experimental results on the iris open database of low-resolution images showed that the averaged equal error rate of iris recognition using the proposed method was 4.3006%, which is lower than that of other methods.
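
    The rotation-compensation idea in this record can be sketched generically: a matcher circularly shifts one binary iris code against the other and keeps the minimum Hamming distance, and the adaptive variant narrows the shift window around the roll angle measured from the two pupil centers. The function names, code length, and tolerance below are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def hamming(a, b):
        """Fractional Hamming distance between two binary iris codes."""
        return np.mean(a != b)

    def match_with_shifts(code_a, code_b, shifts):
        """Best (minimum) distance over candidate circular bit-shifts."""
        return min(hamming(code_a, np.roll(code_b, s)) for s in shifts)

    def adaptive_shifts(roll_deg, code_len, tol_deg=2.0):
        """Turn a measured head-roll angle into a narrow shift window,
        instead of exhaustively trying every shift (the idea behind
        adaptive bit-shifting). The shift is applied opposite to the
        roll so that it undoes the rotation."""
        bits_per_deg = code_len / 360.0
        center = -int(round(roll_deg * bits_per_deg))
        half = int(np.ceil(tol_deg * bits_per_deg))
        return range(center - half, center + half + 1)
    ```

    Restricting the search to a few bits around the measured angle is what reduces both the matching time and the chance that an impostor code happens to align well at some unrelated shift.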

  13. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods and feature recognition methods have been proposed to improve the performance of eye-movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement offered by this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages in long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
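
    The two figures of merit named here are standard in biometrics: the equal error rate is the operating point where the false acceptance rate (FAR) equals the false rejection rate (FRR), and Rank-1 IR is the fraction of probes whose top-scoring gallery match is the correct identity. A minimal threshold-sweep sketch of the EER (not the authors' code):

    ```python
    import numpy as np

    def equal_error_rate(genuine, impostor):
        """Sweep score thresholds to find where FAR and FRR cross;
        return their mean at the crossing. Assumes higher scores mean
        a better match."""
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        best_gap, eer = np.inf, 1.0
        for t in thresholds:
            far = np.mean(impostor >= t)  # impostor scores accepted at t
            frr = np.mean(genuine < t)    # genuine scores rejected at t
            if abs(far - frr) < best_gap:
                best_gap, eer = abs(far - frr), (far + frr) / 2
        return eer
    ```

    Perfectly separated genuine and impostor score distributions give an EER of 0; overlapping distributions give the error rate at the crossing point, which is why a single EER number is convenient for comparing feature sets.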

  14. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods and feature recognition methods have been proposed to improve the performance of eye-movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers’ temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement offered by this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages in long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383

  15. Evaluation of iris recognition system for wavefront-guided laser in situ keratomileusis for myopic astigmatism.

    PubMed

    Ghosh, Sudipta; Couper, Terry A; Lamoureux, Ecosse; Jhanji, Vishal; Taylor, Hugh R; Vajpayee, Rasik B

    2008-02-01

    To evaluate the visual and refractive outcomes of wavefront-guided laser in situ keratomileusis (LASIK) using an iris recognition system for the correction of myopic astigmatism. Centre for Eye Research Australia, Melbourne Excimer Laser Research Group, and Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia. A comparative analysis of wavefront-guided LASIK was performed with an iris recognition system (iris recognition group) and without iris recognition (control group). The main parameters were uncorrected visual acuity (UCVA), best spectacle-corrected visual acuity, amount of residual cylinder, manifest spherical equivalent (SE), and the index of success using the Alpins method of astigmatism analysis 1 and 3 months postoperatively. A P value less than 0.05 was considered statistically significant. Preoperatively, the mean SE was -4.32 diopters (D) +/- 1.59 (SD) in the iris recognition group (100 eyes) and -4.55 +/- 1.87 D in the control group (98 eyes) (P = .84). At 3 months, the mean SE was -0.05 +/- 0.21 D and -0.20 +/- 0.40 D, respectively (P = .001), and an SE within +/-0.50 D of emmetropia was achieved in 92.0% and 85.7% of eyes, respectively (P = .07). At 3 months, the UCVA was 20/20 or better in 90.0% and 76.5% of eyes, respectively. A statistically significant difference in the amount of astigmatic correction was seen between the 2 groups (P = .00 and P = .01 at 1 and 3 months, respectively). The index of success was 98.0% in the iris recognition group and 81.6% in the control group (P = .03). Iris recognition software may achieve better visual and refractive outcomes in wavefront-guided LASIK for myopic astigmatism.

  16. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    NASA Astrophysics Data System (ADS)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two systems of eye and hand gesture recognition are used to control mobile devices. Based on real-time video captured by the device's camera, the first system recognizes the motion of the user's eyes and the second detects static hand gestures. To avoid confusion between natural and intentional movements, we developed a system that fuses the decisions coming from the eye and hand gesture recognition systems. The fusion phase was based on a decision tree approach. We conducted a study on 5 volunteers, and the results showed that our system is robust and competitive.

  17. [Clinical analysis of real-time iris recognition guided LASIK with femtosecond laser flap creation for myopic astigmatism].

    PubMed

    Jie, Li-ming; Wang, Qian; Zheng, Lin

    2013-08-01

    To assess the safety, efficacy, stability and changes in cylindrical degree and axis after real-time iris recognition guided LASIK with femtosecond laser flap creation for the correction of myopic astigmatism. Retrospective case series. This observational case study comprised 136 patients (249 eyes) with myopic astigmatism in a 6-month trial. Patients were divided into 3 groups according to the pre-operative cylindrical degree: Group 1, -0.75 to -1.25 D, 106 eyes; Group 2, -1.50 to -2.25 D, 89 eyes; and Group 3, -2.50 to -5.00 D, 54 eyes. They were also grouped by pre-operative astigmatism axis: Group A, with-the-rule astigmatism (WTRA), 156 eyes; Group B, against-the-rule astigmatism (ATRA), 64 eyes; Group C, oblique axis astigmatism, 29 eyes. After the femtosecond laser flap was created, real-time iris recognition guided excimer ablation was performed. The naked visual acuity, the best-corrected visual acuity, and the degree and axis of astigmatism were analyzed and compared at 1, 3 and 6 months postoperatively. Static iris recognition detected that eye cyclotorsional misalignment was 2.37° ± 2.16°; dynamic iris recognition detected that the intraoperative cyclotorsional misalignment range was 0-4.3°. Six months after operation, the naked visual acuity was 0.5 or better in 100% of cases. No eye lost ≥ 1 line of best spectacle-corrected visual acuity (BSCVA). Six months after operation, the naked vision of 227 eyes surpassed the BSCVA, and 87 eyes gained 1 line of BSCVA. The degree of astigmatism decreased from (-1.72 ± 0.77) D (pre-operation) to (-0.29 ± 0.25) D (post-operation). Six months after operation, WTRA decreased from 157 eyes (pre-operation) to 43 eyes (post-operation), ATRA decreased from 63 eyes (pre-operation) to 28 eyes (post-operation), oblique astigmatism increased from 29 eyes to 34 eyes, and 144 eyes became non-astigmatic.
Real-time iris recognition guided LASIK with femtosecond laser flap creation can compensate for eye cyclotorsion, decrease iatrogenic astigmatism, and provide more precise treatment of the degree and axis of astigmatism. It is an effective and safe procedure for the treatment of myopic astigmatism.

  18. Recognition method of construction conflict based on driver's eye movement.

    PubMed

    Xu, Yi; Li, Shiwu; Gao, Song; Tan, Derong; Guo, Dong; Wang, Yuqiong

    2018-04-01

    Drivers' eye movement data in simulated construction conflicts at different speeds were collected and analyzed to find the relationship between the drivers' eye movements and the construction conflict. On the basis of this relationship, the peak point of the wavelet-processed pupil diameter, the first point on the left side of the peak point, and the first blink point after the peak point are selected as key points for locating construction conflict periods. On the basis of these key points and the GSA, a construction conflict recognition method, the CCFRM, is proposed, and its construction conflict recognition speed and location accuracy are verified. The good performance of the CCFRM confirmed the feasibility of the proposed key points for construction conflict recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Correlations between psychometric schizotypy, scan path length, fixations on the eyes and face recognition.

    PubMed

    Hills, Peter J; Eaton, Elizabeth; Pake, J Michael

    2016-01-01

    Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not, while there was no correlation between distractibility and participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with proportion of time fixating on the eyes and positively correlated with not fixating on a feature. It was negatively correlated with scan path length and this variable correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits.

  20. Social and attention-to-detail subclusters of autistic traits differentially predict looking at eyes and face identity recognition ability.

    PubMed

    Davis, Joshua; McKone, Elinor; Zirnsak, Marc; Moore, Tirin; O'Kearney, Richard; Apthorp, Deborah; Palermo, Romina

    2017-02-01

    This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory. © 2016 The British Psychological Society.

  1. Context Effects and Spoken Word Recognition of Chinese: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Yip, Michael C. W.; Zhai, Mingjun

    2018-01-01

    This study examined the time-course of context effects on spoken word recognition during Chinese sentence processing. We recruited 60 native Mandarin listeners to participate in an eye-tracking experiment. In the experiment, listeners were told to listen carefully to a sentence, which ended with a Chinese homophone, and look at…

  2. Cat-eye effect target recognition with single-pixel detectors

    NASA Astrophysics Data System (ADS)

    Jian, Weijian; Li, Li; Zhang, Xiaoyue

    2015-12-01

    A prototype of cat-eye effect target recognition with single-pixel detectors is proposed. Within the framework of compressive sensing, it is possible to recognize cat-eye effect targets by projecting a series of known random patterns and measuring the backscattered light with three single-pixel detectors in different locations. The prototype requires only simpler, less expensive detectors and extends well beyond the visible spectrum. Simulations were performed to evaluate the feasibility of the proposed prototype, and we compared our results to those obtained from conventional cat-eye effect target recognition methods using an area array sensor. The experimental results show that this method is feasible and superior to the conventional method in dynamic and complicated backgrounds.
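
    The compressive-sensing measurement model described here can be illustrated with a toy simulation: each single-pixel reading is the inner product of the (unknown) scene with a known random pattern, and the scene is recovered from fewer measurements than pixels. The scene size, glint position, and least-squares recovery below are illustrative assumptions; a real system would use a sparsity-promoting solver (e.g. basis pursuit) and physical detectors.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 16x16 scene: one bright retro-reflected glint (the
    # "cat-eye" return); position chosen arbitrarily for illustration.
    scene = np.zeros((16, 16))
    scene[5, 9] = 1.0

    # Each single-pixel measurement is the inner product of the scene
    # with a known random projection pattern.
    m = 200                                  # fewer measurements than 256 pixels
    patterns = rng.standard_normal((m, scene.size))
    y = patterns @ scene.ravel()

    # Minimum-norm recovery via least squares; because the true scene is
    # sparse, the brightest recovered pixel still marks the glint.
    x_hat, *_ = np.linalg.lstsq(patterns, y, rcond=None)
    glint = np.unravel_index(np.argmax(x_hat), scene.shape)
    ```

    With three detectors at different locations, three such measurement vectors are obtained, and the consistency of the recovered glint across them is what separates a cat-eye return from background clutter.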

  3. Long-term visual outcomes in extremely low-birth-weight children (an American Ophthalmological Society thesis).

    PubMed

    Spencer, Rand

    2006-01-01

    The goal is to analyze the long-term visual outcome of extremely low-birth-weight children. This is a retrospective analysis of eyes of extremely low-birth-weight children on whom vision testing was performed. Visual outcomes were studied by analyzing acuity outcomes at >/=36 months of adjusted age, correlating early acuity testing with final visual outcome and evaluating adverse risk factors for vision. Data from 278 eyes are included. Mean birth weight was 731 g, and mean gestational age at birth was 26 weeks. 248 eyes had grating acuity outcomes measured at 73 +/- 36 months, and 183 eyes had recognition acuity testing at 76 +/- 39 months. 54% had below-normal grating acuities, and 66% had below-normal recognition acuities. 27% of grating outcomes and 17% of recognition outcomes were /=3 years of age. A slower-than-normal rate of early visual development was predictive of abnormal grating acuity (P < .0001) and abnormal recognition acuity (P < .0001) at >/=3 years of age. Eyes diagnosed with maximal retinopathy of prematurity in zone I had lower acuity outcomes (P = .0002) than did those with maximal retinopathy of prematurity in zone II/III. Eyes of children born at 28 weeks gestational age. Eyes of children with poorer general health after premature birth had a 5.3 times greater risk of abnormal recognition acuity. Long-term visual development in extremely low-birth-weight infants is problematic and associated with a high risk of subnormal acuity. Early acuity testing is useful in identifying children at greatest risk for long-term visual abnormalities. Gestational age at birth of

  4. Anomalous subjective experience and psychosis risk in young depressed patients.

    PubMed

    Szily, Erika; Kéri, Szabolcs

    2009-01-01

    Help-seeking young people often display depressive symptoms. In some patients, these symptoms may co-exist with clinically high-risk mental states for psychosis. The aim of this study was to determine differences in subjective experience and social perception in young depressed patients with and without psychosis risk. Participants were 68 young persons with major depressive disorder. Twenty-six patients also met the criteria of attenuated or brief limited intermittent psychotic symptoms according to the Comprehensive Assessment of At Risk Mental States (CAARMS) criteria. Subjective experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). Recognition of complex social emotions and mental states was assessed using the 'Reading the Mind in the Eyes' test. Perplexity, self-disorder, and diminished affectivity significantly predicted psychosis risk. Depressed patients without psychosis risk displayed impaired recognition performance for negative social emotions, whereas patients with psychosis risk were also impaired in the recognition of cognitive expressions. In the high-risk group, self-disorder was associated with impaired recognition of facial expressions. These results suggest that anomalous subjective experience and impaired recognition of complex emotions may differentiate between young depressed patients with and without psychosis risk. 2009 S. Karger AG, Basel.

  5. Reliability of automatic biometric iris recognition after phacoemulsification or drug-induced pupil dilation.

    PubMed

    Seyeddain, Orang; Kraker, Hannes; Redlberger, Andreas; Dexl, Alois K; Grabner, Günther; Emesz, Martin

    2014-01-01

    To investigate the reliability of a biometric iris recognition system for personal authentication after cataract surgery or iatrogenic pupil dilation. This was a prospective, nonrandomized, single-center cohort study evaluating the performance of an iris recognition system 2-24 hours after phacoemulsification and intraocular lens implantation (group 1) and before and after iatrogenic pupil dilation (group 2). Of the 173 eyes that could be enrolled before cataract surgery, 164 (94.8%) were easily recognized postoperatively, whereas in 9 (5.2%) this was not possible. However, these 9 eyes could be reenrolled and afterwards recognized successfully. In group 2, 22 (11.9%) of the 184 eyes enrolled in miosis could not be recognized in mydriasis and therefore needed reenrollment. No case of false-positive acceptance occurred in either group. The results of this trial indicate that standard cataract surgery is not a limiting factor for iris recognition in the large majority of cases, although some patients (5.2% in this study) might need reenrollment after cataract surgery. Iris recognition was primarily successful in nearly 9 out of 10 eyes with medically dilated pupils. It seems, therefore, that iris recognition is a valid biometric method in the majority of cases after cataract surgery or after pupil dilation.

  6. [Cyclorotation of the eye in wavefront-guided LASIK using a static eyetracker with iris recognition].

    PubMed

    Kohnen, T; Kühne, C; Cichocki, M; Strenger, A

    2007-01-01

    Centration of the ablation zone decisively influences the result of wavefront-guided LASIK. Cyclorotation of the eye occurs as the patient changes from the sitting position during aberrometry to the supine position during laser surgery and may induce lower- and higher-order aberrations. Twenty patients (40 eyes) underwent wavefront-guided LASIK (B&L 217z 100 excimer laser) with a static eyetracker driven by iris recognition (mean preoperative SE: -4.72+/-1.45 D; range: -1.63 to -7.00 D). The iris patterns of the patients' eyes were memorized during aberrometry and after flap creation. The mean measured cyclorotation was -1.5+/-4.2 degrees (range: -11.0 to 6.9 degrees). The mean absolute cyclorotation was 3.5+/-2.7 degrees (range: 0.1 to 11.0 degrees). In 65% of all eyes cyclorotation was >2 degrees. A static eyetracker driven by iris recognition demonstrated that cyclorotation of up to 11 degrees may occur in myopic and myopic astigmatic eyes when changing from a sitting to a supine position. Use of static eyetrackers with iris recognition may provide more precise positioning of the ablation profile, as they detect and compensate for cyclorotation.

  7. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

    PubMed Central

    Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence. PMID:22171810

  8. A New Font, Specifically Designed for Peripheral Vision, Improves Peripheral Letter and Word Recognition, but Not Eye-Mediated Reading Performance

    PubMed Central

    Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric

    2016-01-01

    Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally sighted subjects used only their peripheral vision to perform two reading-aloud tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements), and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). PMID:27074013

  9. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    PubMed

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure the influence of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  10. A New Font, Specifically Designed for Peripheral Vision, Improves Peripheral Letter and Word Recognition, but Not Eye-Mediated Reading Performance.

    PubMed

    Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric

    2016-01-01

    Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally sighted subjects used only their peripheral vision to perform two reading-aloud tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements), and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity).

  11. Surface ablation with iris recognition and dynamic rotational eye tracking-based tissue saving treatment with the Technolas 217z excimer laser.

    PubMed

    Prakash, Gaurav; Agarwal, Amar; Kumar, Dhivya Ashok; Jacob, Soosan; Agarwal, Athiya; Maity, Amrita

    2011-03-01

    To evaluate the visual and refractive outcomes and expected benefits of Tissue Saving Treatment algorithm-guided surface ablation with iris recognition and dynamic rotational eye tracking. This prospective, interventional case series comprised 122 eyes (70 patients). Pre- and postoperative assessment included uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), refraction, and higher order aberrations. All patients underwent Tissue Saving Treatment algorithm-guided surface ablation with iris recognition and dynamic rotational eye tracking using the Technolas 217z 100-Hz excimer platform (Technolas Perfect Vision GmbH). Follow-up was performed up to 6 months postoperatively. Theoretical benefit analysis was performed to evaluate the algorithm's outcomes compared to others. Preoperative spherocylindrical power was sphere -3.62 ± 1.60 diopters (D) (range: 0 to -6.75 D), cylinder -1.15 ± 1.00 D (range: 0 to -3.50 D), and spherical equivalent -4.19 ± 1.60 D (range: -7.75 to -2.00 D). At 6 months, 91% (111/122) of eyes were within ± 0.50 D of attempted correction. Postoperative UDVA was comparable to preoperative CDVA at 1 month (P=.47) and progressively improved at 6 months (P=.004). Two eyes lost one line of CDVA at 6 months. Theoretical benefit analysis revealed that of 101 eyes with astigmatism, 29 would have had cyclotorsion-induced astigmatism of ≥ 10% if iris recognition and dynamic rotational eye tracking were not used. Furthermore, the mean percentage decrease in maximum depth of ablation by using the Tissue Saving Treatment was 11.8 ± 2.9% over Aspheric, 17.8 ± 6.2% over Personalized, and 18.2 ± 2.8% over Planoscan algorithms. Tissue saving surface ablation with iris recognition and dynamic rotational eye tracking was safe and effective in this series of eyes. Copyright 2011, SLACK Incorporated.

  12. Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns

    PubMed Central

    Teng, Dongdong; Chen, Dihu; Tan, Hongzhou

    2015-01-01

    The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only a few of these developments deal with the challenges posed by the profile (non-frontal) face. In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of eye-center locations which are candidates for the actual position of the eye center. Among these locations, the estimated locations that minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation to the eye-center locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use the BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been mentioned in previous works. Through extensive experimentation, we show that the proposed method can achieve a significant improvement in accuracy and robustness over state-of-the-art techniques, with our method ranking second in terms of accuracy. According to our implementation on a PC with a 2.5 GHz Xeon CPU, the eye tracking process achieves a frame rate of 38 Hz. PMID:26426929
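
    The isophote curvature feature this record relies on has a standard closed form, k = -(Ly²·Lxx - 2·Lx·Ly·Lxy + Lx²·Lyy) / (Lx² + Ly²)^(3/2), computed from image derivatives. A minimal sketch of the curvature operator alone (the paper's shape-regression constraint and the center-voting step are omitted; the function name is ours):

```python
def isophote_curvature(img):
    """Isophote curvature of a 2D intensity image (list of lists), via
    central-difference derivatives. Border pixels are left at 0. The sign
    distinguishes dark-centered isophotes (negative here) from bright ones."""
    h, w = len(img), len(img[0])
    k = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            Lx = (img[i][j + 1] - img[i][j - 1]) / 2.0
            Ly = (img[i + 1][j] - img[i - 1][j]) / 2.0
            Lxx = img[i][j + 1] - 2 * img[i][j] + img[i][j - 1]
            Lyy = img[i + 1][j] - 2 * img[i][j] + img[i - 1][j]
            Lxy = (img[i + 1][j + 1] - img[i + 1][j - 1]
                   - img[i - 1][j + 1] + img[i - 1][j - 1]) / 4.0
            denom = (Lx * Lx + Ly * Ly) ** 1.5 + 1e-12  # avoid flat regions
            k[i][j] = -(Ly * Ly * Lxx - 2 * Lx * Ly * Lxy
                        + Lx * Lx * Lyy) / denom
    return k

# Radially symmetric test image: isophotes are circles around (8, 8), so the
# curvature magnitude 4 pixels from the centre should be 1/4.
img = [[(i - 8) ** 2 + (j - 8) ** 2 for j in range(17)] for i in range(17)]
k = isophote_curvature(img)
```

    In eye-center localization the reciprocal of this curvature gives a displacement toward the isophote's center, and those displaced positions are accumulated into the candidate locations the abstract describes.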

  13. Eyes on crowding: crowding is preserved when responding by eye and similarly affects identity and position accuracy.

    PubMed

    Yildirim, Funda; Meyer, Vincent; Cornelissen, Frans W

    2015-02-16

    Peripheral vision guides recognition and selection of targets for eye movements. Crowding—a decline in recognition performance that occurs when a potential target is surrounded by other, similar, objects—influences peripheral object recognition. A recent model study suggests that crowding may be due to increased uncertainty about both the identity and the location of peripheral target objects, but very few studies have assessed these properties in tandem. Eye tracking can integrally provide information on both the perceived identity and the position of a target and therefore could become an important approach in crowding studies. However, recent reports suggest that around the moment of saccade preparation crowding may be significantly modified. If these effects were to generalize to regular crowding tasks, it would complicate the interpretation of results obtained with eye tracking and the comparison to results obtained using manual responses. For this reason, we first assessed whether the manner by which participants responded—manually or by eye—affected their performance. We found that neither recognition performance nor response time was affected by the response type. Hence, we conclude that crowding magnitude was preserved when observers responded by eye. In our main experiment, observers made eye movements to the location of a tilted Gabor target while we varied flanker tilt to manipulate target-flanker similarity. The results indicate that this similarly affected the accuracy of peripheral recognition and saccadic target localization. Our results inform about the importance of both location and identity uncertainty in crowding. © 2015 ARVO.

  14. Feature Selection in Classification of Eye Movements Using Electrooculography for Activity Recognition

    PubMed Central

    Mala, S.; Latha, K.

    2014-01-01

    Activity recognition is needed in many applications, for example, surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, Differential Evolution (DE), an efficient evolutionary optimizer, is used to find informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interactions with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum redundancy maximum relevance features, and Differential Evolution-based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for reliable activity recognition. PMID:25574185
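
    The record does not give its DE configuration, so the following is only a schematic illustration of DE-driven feature selection: a continuous vector in [0, 1]^d is evolved with the classic DE/rand/1/bin scheme, thresholded at 0.5 into a feature mask, and scored by a simple nearest-centroid classifier plus a sparsity penalty. The classifier, penalty weight, synthetic data, and all parameter values are our assumptions, not the paper's.

```python
import random

def mask(vec):
    # Threshold a continuous DE vector into a boolean feature mask.
    return [v > 0.5 for v in vec]

def de_select(fitness, dim, pop=20, gens=40, F=0.6, CR=0.9, seed=1):
    """DE/rand/1/bin minimizing fitness(mask(x)) over [0, 1]^dim."""
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(dim)] for _ in range(pop)]
    cost = [fitness(mask(x)) for x in X]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # at least one mutated component
            trial = [
                min(1.0, max(0.0, X[a][kk] + F * (X[b][kk] - X[c][kk])))
                if (rng.random() < CR or kk == jrand) else X[i][kk]
                for kk in range(dim)
            ]
            tc = fitness(mask(trial))
            if tc <= cost[i]:  # greedy one-to-one selection
                X[i], cost[i] = trial, tc
    best = min(range(pop), key=cost.__getitem__)
    return mask(X[best]), cost[best]

def nearest_centroid_error(data, labels, m):
    """Nearest-centroid error rate using only features where m[k] is True."""
    if not any(m):
        return 1.0
    classes = sorted(set(labels))
    cents = {}
    for cls in classes:
        pts = [x for x, y in zip(data, labels) if y == cls]
        cents[cls] = [sum(p[k] for p in pts) / len(pts) for k in range(len(m))]
    errors = 0
    for x, y in zip(data, labels):
        pred = min(classes, key=lambda cl: sum(
            (x[k] - cents[cl][k]) ** 2 for k in range(len(m)) if m[k]))
        errors += pred != y
    return errors / len(data)

# Synthetic stand-in for EOG features: 2 informative dimensions, 4 noise.
rng = random.Random(0)
data, labels = [], []
for cls in (0, 1):
    for _ in range(20):
        row = [cls * 5.0 + rng.gauss(0, 0.3), cls * 5.0 + rng.gauss(0, 0.3)]
        row += [rng.uniform(0.0, 5.0) for _ in range(4)]
        data.append(row)
        labels.append(cls)

fit = lambda m: nearest_centroid_error(data, labels, m) + 0.01 * sum(m)
best_mask, best_cost = de_select(fit, dim=6)
```

    The per-feature penalty (0.01 here) is what pushes DE toward compact masks; on this separable toy data it should retain at least one of the two informative dimensions while pruning noise.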

  15. Feature selection in classification of eye movements using electrooculography for activity recognition.

    PubMed

    Mala, S; Latha, K

    2014-01-01

    Activity recognition is needed in many applications, for example, surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, Differential Evolution (DE), an efficient evolutionary optimizer, is used to find informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interactions with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum redundancy maximum relevance features, and Differential Evolution-based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for reliable activity recognition.

  16. Fixations to the eyes aids in facial encoding; covertly attending to the eyes does not.

    PubMed

    Laidlaw, Kaitlin E W; Kingstone, Alan

    2017-02-01

    When looking at images of faces, people will often focus their fixations on the eyes. It has previously been demonstrated that the eyes convey important information that may improve later facial recognition. Whether this advantage requires that the eyes be fixated, or merely attended to covertly (i.e., while looking elsewhere), is unclear from previous work. While attending to the eyes covertly without fixating them may be sufficient, the act of using overt attention to fixate the eyes may improve the processing of important details used for later recognition. In the present study, participants were shown a series of faces and, in Experiment 1, asked to attend to them normally while avoiding looking at either the eyes or, as a control, the mouth (overt attentional avoidance condition); or in Experiment 2 fixate the center of the face while covertly attending to either the eyes or the mouth (covert attention condition). After the first phase, participants were asked to perform an old/new face recognition task. We demonstrate that a) when fixations to the eyes are avoided during initial viewing then subsequent face discrimination suffers, and b) covert attention to the eyes alone is insufficient to improve face discrimination performance. Together, these findings demonstrate that fixating the eyes provides an encoding advantage that is not conferred by covert attention alone. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Eye contrast polarity is critical for face recognition by infants.

    PubMed

    Otsuka, Yumiko; Motoyoshi, Isamu; Hill, Harold C; Kobayashi, Megumi; Kanazawa, So; Yamaguchi, Masami K

    2013-07-01

    Just as faces share the same basic arrangement of features, with two eyes above a nose above a mouth, human eyes all share the same basic contrast polarity relations, with a sclera lighter than an iris and a pupil, and this is unique among primates. The current study examined whether this bright-dark relationship of sclera to iris plays a critical role in face recognition from early in development. Specifically, we tested face discrimination in 7- and 8-month-old infants while independently manipulating the contrast polarity of the eye region and of the rest of the face. This gave four face contrast polarity conditions: fully positive condition, fully negative condition, positive face with negated eyes ("negative eyes") condition, and negated face with positive eyes ("positive eyes") condition. In a familiarization and novelty preference procedure, we found that 7- and 8-month-olds could discriminate between faces only when the contrast polarity of the eyes was preserved (positive) and that this did not depend on the contrast polarity of the rest of the face. This demonstrates the critical role of eye contrast polarity for face recognition in 7- and 8-month-olds and is consistent with previous findings for adults. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Recognition of Emotion from Facial Expressions with Direct or Averted Eye Gaze and Varying Expression Intensities in Children with Autism Disorder and Typically Developing Children

    PubMed Central

    Tell, Dina; Davidson, Denise; Camras, Linda A.

    2014-01-01

    Eye gaze direction and expression intensity effects on emotion recognition in children with autism disorder and typically developing children were investigated. Children with autism disorder and typically developing children identified happy and angry expressions equally well. Children with autism disorder, however, were less accurate in identifying fear expressions across intensities and eye gaze directions. Children with autism disorder rated expressions with direct eyes, and 50% expressions, as more intense than typically developing children. A trend was also found for sad expressions, as children with autism disorder were less accurate in recognizing sadness at 100% intensity with direct eyes than typically developing children. Although the present research showed that children with autism disorder are sensitive to eye gaze direction, impairments in the recognition of fear, and possibly sadness, exist. Furthermore, children with autism disorder and typically developing children perceive the intensity of emotional expressions differently. PMID:24804098

  19. PTSD and Impaired Eye Expression Recognition: A Preliminary Study

    ERIC Educational Resources Information Center

    Schmidt, Jakob Zeuthen; Zachariae, Robert

    2009-01-01

    This preliminary study examined whether posttraumatic stress disorder (PTSD) was related to difficulties in identifying the mental states of others in a group of refugees. Sixteen Bosnian refugees, referred to treatment in an outpatient treatment center for survivors of torture and war-related trauma in Denmark (CETT), were compared to 16 non-PTSD…

  20. Fear recognition impairment in early-stage Alzheimer's disease: when focusing on the eyes region improves performance.

    PubMed

    Hot, Pascal; Klein-Koerkamp, Yanica; Borg, Céline; Richard-Mornas, Aurélie; Zsoldos, Isabella; Paignon, Adeline; Thomas Antérion, Catherine; Baciu, Monica

    2013-06-01

    A decline in the ability to identify fearful expression has been frequently reported in patients with Alzheimer's disease (AD). In patients with severe destruction of the bilateral amygdala, similar difficulties have been reduced by using an explicit visual exploration strategy focusing on gaze. The current study assessed the possibility of applying a similar strategy in AD patients to improve fear recognition. It also assessed the possibility of improving fear recognition when a visual exploration strategy induced AD patients to process the eyes region. Seventeen patients with mild AD and 34 healthy subjects (17 young adults and 17 older adults) performed a classical task of emotional identification of faces expressing happiness, anger, and fear in two conditions: The face appeared progressively from the eyes region to the periphery (eyes region condition) or it appeared as a whole (global condition). Specific impairment in identifying a fearful expression was shown in AD patients compared with older adult controls during the global condition. Fear expression recognition was significantly improved in AD patients during the eyes region condition, in which they performed similarly to older adult controls. Our results suggest that using a different strategy of face exploration, starting first with processing of the eyes region, may compensate for a fear recognition deficit in AD patients. Findings suggest that a part of this deficit could be related to visuo-perceptual impairments. Additionally, these findings suggest that the decline of fearful face recognition reported in both normal aging and in AD may result from impairment of non-amygdalar processing in both groups and impairment of amygdalar-dependent processing in AD. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Visual Scan Paths and Recognition of Facial Identity in Autism Spectrum Disorder and Typical Development

    PubMed Central

    Wilson, C. Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Background Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the ‘Dynamic Scanning Index’ – which was incremented each time the participant saccaded into and out of one of the core-feature interest areas – was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined. PMID:22666378

  2. Eye Movements During Everyday Behavior Predict Personality Traits.

    PubMed

    Hoppe, Sabrina; Loetscher, Tobias; Morey, Stephanie A; Bulling, Andreas

    2018-01-01

    Besides allowing us to perceive our surroundings, eye movements are also a window into our mind and a rich source of information on who we are, how we feel, and what we do. Here we show that eye movements during an everyday task predict aspects of our personality. We tracked eye movements of 42 participants while they ran an errand on a university campus and subsequently assessed their personality traits using well-established questionnaires. Using a state-of-the-art machine learning method and a rich set of features encoding different eye movement characteristics, we were able to reliably predict four of the Big Five personality traits (neuroticism, extraversion, agreeableness, conscientiousness) as well as perceptual curiosity only from eye movements. Further analysis revealed new relations between previously neglected eye movement characteristics and personality. Our findings demonstrate a considerable influence of personality on everyday eye movement control, thereby complementing earlier studies in laboratory settings. Improving automatic recognition and interpretation of human social signals is an important endeavor, enabling innovative design of human-computer systems capable of sensing spontaneous natural user behavior to facilitate efficient interaction and personalization.
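
    Feature sets like the one this record mentions start from raw gaze samples; a standard first step is dispersion-based fixation detection (I-DT), from which fixation counts, durations, and positions are derived. A minimal sketch (the thresholds and synthetic data are illustrative assumptions; the study's actual feature set is far richer):

```python
def dispersion(window):
    # Spatial spread of a window of (t, x, y) samples: x-range plus y-range.
    xs = [x for _, x, _ in window]
    ys = [y for _, _, y in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_disp=25.0, min_dur=0.1):
    """I-DT: group (t, x, y) gaze samples into fixations whose dispersion
    stays under max_disp for at least min_dur seconds. Returns a list of
    (start_time, end_time, centroid_x, centroid_y) tuples."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_dur:
            j += 1
        if j >= n:
            break
        if dispersion(samples[i:j + 1]) <= max_disp:
            while j + 1 < n and dispersion(samples[i:j + 2]) <= max_disp:
                j += 1
            window = samples[i:j + 1]
            cx = sum(x for _, x, _ in window) / len(window)
            cy = sum(y for _, _, y in window) / len(window)
            fixations.append((samples[i][0], samples[j][0], cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations

# Two synthetic 200 ms dwells sampled at 50 Hz: one at (100, 100) pixels,
# then a saccade to (300, 300).
gaze = [(k * 0.02, 100.0, 100.0) for k in range(10)] + \
       [(k * 0.02, 300.0, 300.0) for k in range(10, 20)]
fixes = idt_fixations(gaze)
```

    Summary statistics over such fixations (rate, mean duration, inter-fixation amplitudes) are the kind of eye movement characteristics typically fed to a classifier in studies of this type.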

  3. Eye Movements During Everyday Behavior Predict Personality Traits

    PubMed Central

    Hoppe, Sabrina; Loetscher, Tobias; Morey, Stephanie A.; Bulling, Andreas

    2018-01-01

    Besides allowing us to perceive our surroundings, eye movements are also a window into our mind and a rich source of information on who we are, how we feel, and what we do. Here we show that eye movements during an everyday task predict aspects of our personality. We tracked eye movements of 42 participants while they ran an errand on a university campus and subsequently assessed their personality traits using well-established questionnaires. Using a state-of-the-art machine learning method and a rich set of features encoding different eye movement characteristics, we were able to reliably predict four of the Big Five personality traits (neuroticism, extraversion, agreeableness, conscientiousness) as well as perceptual curiosity only from eye movements. Further analysis revealed new relations between previously neglected eye movement characteristics and personality. Our findings demonstrate a considerable influence of personality on everyday eye movement control, thereby complementing earlier studies in laboratory settings. Improving automatic recognition and interpretation of human social signals is an important endeavor, enabling innovative design of human–computer systems capable of sensing spontaneous natural user behavior to facilitate efficient interaction and personalization. PMID:29713270

  4. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    PubMed

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provide converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes) and in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  5. Real-time color/shape-based traffic signs acquisition and recognition system

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio

    2013-02-01

    A real-time system is proposed that acquires traffic signs from an automotive fish-eye CMOS camera and provides their automatic recognition on the vehicle network. Unlike state-of-the-art approaches, color detection is addressed in the HSI color space, which is robust to lighting changes. Hence the first stage of the processing system implements fish-eye correction and RGB-to-HSI transformation. After color-based detection, a noise deletion step is applied and then, for classification, a template-based correlation method is adopted to identify potential traffic signs of different shapes in the acquired images. Starting from the segmented image, matching against templates of the searched signs is carried out using a distance transform. These templates are organized hierarchically to reduce the number of operations, easing real-time processing for several types of traffic signs. Finally, for recognition of the specific traffic sign, a technique based on extraction of sign characteristics and thresholding is adopted. Implemented on a DSP platform, the system recognizes traffic signs in less than 150 ms, at a distance of about 15 meters, from 640x480-pixel acquired images.
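
    The RGB-to-HSI transformation named as the first processing stage has a standard closed form; a per-pixel sketch follows (the 8-bit normalization and degree convention are common choices, not necessarily the paper's):

```python
import math

def rgb_to_hsi(r8, g8, b8):
    """Convert 8-bit RGB to (hue in degrees, saturation 0-1, intensity 0-1).
    Hue is measured from red and set to 0 when undefined (r == g == b)."""
    r, g, b = r8 / 255.0, g8 / 255.0, b8 / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0.0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0.0:
        h = 0.0
    else:
        # Clamp guards against rounding slightly outside acos's domain.
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:
            h = 360.0 - h
    return h, s, i

# Saturated red and blue, the dominant colors of European prohibition and
# mandatory signs, land at hue 0 and hue 240 with full saturation.
red = rgb_to_hsi(255, 0, 0)
blue = rgb_to_hsi(0, 0, 255)
```

    Thresholding hue and saturation in this space, rather than raw RGB, is what gives the detection stage its robustness to lighting changes: illumination shifts move intensity while leaving hue largely unchanged.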

  6. LONG-TERM VISUAL OUTCOMES IN EXTREMELY LOW-BIRTH-WEIGHT CHILDREN (AN AMERICAN OPHTHALMOLOGICAL SOCIETY THESIS)

    PubMed Central

    Spencer, Rand

    2006-01-01

    Purpose The goal is to analyze the long-term visual outcome of extremely low-birth-weight children. Methods This is a retrospective analysis of eyes of extremely low-birth-weight children on whom vision testing was performed. Visual outcomes were studied by analyzing acuity outcomes at ≥36 months of adjusted age, correlating early acuity testing with final visual outcome and evaluating adverse risk factors for vision. Results Data from 278 eyes are included. Mean birth weight was 731g, and mean gestational age at birth was 26 weeks. 248 eyes had grating acuity outcomes measured at 73 ± 36 months, and 183 eyes had recognition acuity testing at 76 ± 39 months. 54% had below normal grating acuities, and 66% had below normal recognition acuities. 27% of grating outcomes and 17% of recognition outcomes were ≤20/200. Abnormal early grating acuity testing was predictive of abnormal grating (P < .0001) and recognition (P = .0001) acuity testing at ≥3 years of age. A slower-than-normal rate of early visual development was predictive of abnormal grating acuity (P < .0001) and abnormal recognition acuity (P < .0001) at ≥3 years of age. Eyes diagnosed with maximal retinopathy of prematurity in zone I had lower acuity outcomes (P = .0002) than did those with maximal retinopathy of prematurity in zone II/III. Eyes of children born at ≤28 weeks gestational age had 4.1 times greater risk for abnormal recognition acuity than did those of children born at >28 weeks gestational age. Eyes of children with poorer general health after premature birth had a 5.3 times greater risk of abnormal recognition acuity. Conclusions Long-term visual development in extremely low-birth-weight infants is problematic and associated with a high risk of subnormal acuity. Early acuity testing is useful in identifying children at greatest risk for long-term visual abnormalities. Gestational age at birth of ≤ 28 weeks was associated with a higher risk of an abnormal long-term outcome. PMID:17471358

  7. Can human eyes prevent perceptual narrowing for monkey faces in human infants?

    PubMed

    Damon, Fabrice; Bayet, Laurie; Quinn, Paul C; Hillairet de Boisferon, Anne; Méary, David; Dupierrix, Eve; Lee, Kang; Pascalis, Olivier

    2015-07-01

    Perceptual narrowing has been observed in human infants for monkey faces: 6-month-olds can discriminate between them, whereas older infants from 9 months of age display difficulty discriminating between them. The difficulty infants from 9 months have processing monkey faces has not been clearly identified. It could be due to the structural characteristics of monkey faces, particularly the key facial features that differ from human faces. The current study aimed to investigate whether the information conveyed by the eyes is of importance. We examined whether the presence of Caucasian human eyes in monkey faces allows recognition to be maintained in 6-month-olds and facilitates recognition in 9- and 12-month-olds. Our results revealed that the presence of human eyes in monkey faces maintains recognition for those faces at 6 months of age and partially facilitates recognition of those faces at 9 months of age, but not at 12 months of age. The findings are interpreted in the context of perceptual narrowing and suggest that the attenuation of processing of other-species faces is not reversed by the presence of human eyes. © 2015 Wiley Periodicals, Inc.

  8. The relationship between eye movements and subsequent recognition: Evidence from individual differences and amnesia.

    PubMed

    Olsen, Rosanna K; Sebanayagam, Vinoja; Lee, Yunjo; Moscovitch, Morris; Grady, Cheryl L; Rosenbaum, R Shayna; Ryan, Jennifer D

    2016-12-01

    There is consistent agreement regarding the positive relationship between cumulative eye movement sampling and subsequent recognition, but the role of the hippocampus in this sampling behavior is currently unknown. It is also unclear whether the eye movement repetition effect, i.e., fewer fixations to repeated, compared to novel, stimuli, depends on explicit recognition and/or an intact hippocampal system. We investigated the relationship between cumulative sampling, the eye movement repetition effect, subsequent memory, and the hippocampal system. Eye movements were monitored in a developmental amnesic case (H.C.), whose hippocampal system is compromised, and in a group of typically developing participants while they studied single faces across multiple blocks. The faces were studied from the same viewpoint or different viewpoints and were subsequently tested with the same or different viewpoint. Our previous work suggested that hippocampal representations support explicit recognition for information that changes viewpoint across repetitions (Olsen et al., 2015). Here, examination of eye movements during encoding indicated that greater cumulative sampling was associated with better memory among controls. Increased sampling, however, was not associated with better explicit memory in H.C., suggesting that increased sampling only improves memory when the hippocampal system is intact. The magnitude of the repetition effect was not correlated with cumulative sampling, nor was it related reliably to subsequent recognition. These findings indicate that eye movements collect information that can be used to strengthen memory representations that are later available for conscious remembering, whereas eye movement repetition effects reflect a processing change due to experience that does not necessarily reflect a memory representation that is available for conscious appraisal. Lastly, H.C. demonstrated a repetition effect for fixed viewpoint faces but not for variable viewpoint faces, which suggests that repetition effects are differentially supported by neocortical and hippocampal systems, depending upon the representational nature of the underlying memory trace. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Right hemisphere advantage for social recognition in the chick.

    PubMed

    Vallortigara, G

    1992-09-01

Recognition of familiar and unfamiliar conspecifics was studied in pair-reared chicks tested binocularly or with only one eye in use. Chicks were tested on day 3 in pairs composed of either cagemates or strangers. Social discrimination, as measured by the ratio "number of pecks at strangers/total number of pecks", was impaired in right-eyed chicks with respect to left-eyed and binocular chicks. Male chicks showed higher levels of social pecking than females, and chicks that used both eyes showed higher pecking than monocular chicks. There were no significant differences in the total number of pecks (i.e. pecks at companions plus pecks at strangers) between right- and left-eyed chicks: the impairment in social discrimination of right-eyed chicks seemed to be due partly to a reduction in pecking at strangers and partly to an increase in pecking at companions. It is suggested that neural structures fed by the left eye (mainly located in the right hemisphere) are better at processing and/or storing visual information that allows recognition of individual conspecifics. This may be part of a wider tendency to respond to small changes in any of a variety of intrinsic stimulus properties.

  10. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  11. Sad people are more accurate at expression identification with a smaller own-ethnicity bias than happy people.

    PubMed

    Hills, Peter J; Hill, Dominic M

    2017-07-12

Sad individuals perform more accurately at face identity recognition (Hills, Werno, & Lewis, 2011), possibly because they scan more of the face during encoding. During expression identification tasks, sad individuals do not fixate on the eyes as much as happier individuals (Wu, Pu, Allen, & Pauli, 2012). Fixating on features other than the eyes leads to a reduced own-ethnicity bias (Hills & Lewis, 2006). This background indicates that sad individuals would not view the eyes as much as happy individuals, and that this would result in improved expression recognition and a reduced own-ethnicity bias. This prediction was tested using an expression identification task with eye tracking. We demonstrate that sad-induced participants show enhanced expression recognition and a reduced own-ethnicity bias compared with happy-induced participants, due to scanning more facial features. We conclude that mood affects eye movements and face encoding by causing a wider sampling strategy and deeper encoding of facial features diagnostic for expression identification.

  12. Oxytocin Promotes Facial Emotion Recognition and Amygdala Reactivity in Adults with Asperger Syndrome

    PubMed Central

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-01-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS. PMID:24067301

  13. Oxytocin promotes facial emotion recognition and amygdala reactivity in adults with asperger syndrome.

    PubMed

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-02-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS.

  14. Computerised working memory based cognitive remediation therapy does not affect Reading the Mind in the Eyes test performance or neural activity during a Facial Emotion Recognition test in psychosis.

    PubMed

    Mothersill, David; Dillon, Rachael; Hargreaves, April; Castorina, Marco; Furey, Emilia; Fagan, Andrew J; Meaney, James F; Fitzmaurice, Brian; Hallahan, Brian; McDonald, Colm; Wykes, Til; Corvin, Aiden; Robertson, Ian H; Donohoe, Gary

    2018-05-27

Working memory based cognitive remediation therapy (CT) for psychosis has recently been associated with broad improvements in performance on untrained tasks measuring working memory, episodic memory and IQ, and with changes in associated brain regions. However, it is unclear if these improvements transfer to the domain of social cognition and to neural activity related to performance on social cognitive tasks. We examined performance on the Reading the Mind in the Eyes test (Eyes test) in a large sample of participants with psychosis who underwent working memory based CT (N = 43) compared to a control group of participants with psychosis (N = 35). In a subset of this sample, we used functional magnetic resonance imaging (fMRI) to examine changes in neural activity during a facial emotion recognition task in participants who underwent CT (N = 15) compared to a control group (N = 15). No significant effects of CT were observed on Eyes test performance or on neural activity during facial emotion recognition, either at p < 0.05 family-wise error corrected or at a p < 0.001 uncorrected threshold, within a priori social cognitive regions of interest. This study suggests that working memory based CT does not significantly impact this aspect of social cognition, measured behaviourally and neurally. It provides further evidence that deficits in the ability to decode mental state from facial expressions are dissociable from working memory deficits, and suggests that future CT programs should target social cognition in addition to working memory for the purposes of further enhancing social function. This article is protected by copyright. All rights reserved.

  15. The role of relational binding in item memory: evidence from face recognition in a case of developmental amnesia.

    PubMed

    Olsen, Rosanna K; Lee, Yunjo; Kube, Jana; Rosenbaum, R Shayna; Grady, Cheryl L; Moscovitch, Morris; Ryan, Jennifer D

    2015-04-01

Current theories state that the hippocampus is responsible for the formation of memory representations regarding relations, whereas extrahippocampal cortical regions support representations for single items. However, findings of impaired item memory in hippocampal amnesics suggest a more nuanced role for the hippocampus in item memory. The hippocampus may be necessary when the item elements need to be bound within and across episodes to form a lasting representation that can be used flexibly. The current investigation was designed to test this hypothesis in face recognition. H.C., an individual who developed with a compromised hippocampal system, and control participants incidentally studied individual faces that either varied in presentation viewpoint across study repetitions or remained in a fixed viewpoint across the study repetitions. Eye movements were recorded during encoding and participants then completed a surprise recognition memory test. H.C. demonstrated altered face viewing during encoding. Although the overall number of fixations made by H.C. was not significantly different from that of controls, the distribution of her viewing was primarily directed to the eye region. Critically, H.C. was significantly impaired in her ability to subsequently recognize faces studied from variable viewpoints, but demonstrated spared performance in recognizing faces she encoded from a fixed viewpoint, implicating a relationship between eye movement behavior and a hippocampal binding function. These findings suggest that a compromised hippocampal system disrupts the ability to bind item features within and across study repetitions, ultimately disrupting recognition when it requires access to flexible relational representations. Copyright © 2015 the authors.

  16. Human brain distinctiveness based on EEG spectral coherence connectivity.

    PubMed

    Rocca, D La; Campisi, P; Vegso, B; Cserti, P; Kozmann, G; Babiloni, F; Fallani, F De Vico

    2014-09-01

The use of EEG biometrics for the purpose of automatic people recognition has received increasing attention in recent years. Most current analyses rely on the extraction of features characterizing the activity of single brain regions, like power spectrum estimation, thus neglecting possible temporal dependencies between the generated EEG signals. However, important physiological information can be extracted from the way different brain regions are functionally coupled. In this study, we propose a novel approach that uses spectral coherence-based connectivity between different brain regions as a possibly viable biometric feature. The proposed approach is tested on a large dataset of subjects (N = 108) during eyes-closed (EC) and eyes-open (EO) resting state conditions. The obtained recognition performance shows that using brain connectivity leads to higher distinctiveness with respect to power-spectrum measurements in both experimental conditions. Notably, a 100% recognition accuracy is obtained in EC and EO when integrating functional connectivity between regions in the frontal lobe, while a lower 97.5% is obtained in EC (96.26% in EO) when fusing power spectrum information from parieto-occipital (centro-parietal in EO) regions. Taken together, these results suggest that functional connectivity patterns represent effective features for improving EEG-based biometric systems.
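
    As a rough illustration of the kind of feature described above, band-limited spectral coherence between channel pairs can be computed as follows. This is a minimal sketch with synthetic signals; the channel count, sampling rate, and alpha-band choice are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy.signal import coherence

# Sketch only: magnitude-squared coherence between every channel pair,
# averaged over a frequency band, as a connectivity feature vector.
rng = np.random.default_rng(0)
fs = 256                                    # sampling rate in Hz (assumed)
eeg = rng.standard_normal((4, fs * 10))     # 4 channels, 10 s of synthetic EEG

def coherence_features(eeg, fs, band=(8.0, 12.0)):
    """Mean coherence in a frequency band for every channel pair."""
    n_ch = eeg.shape[0]
    feats = []
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            f, cxy = coherence(eeg[i], eeg[j], fs=fs, nperseg=fs)
            mask = (f >= band[0]) & (f <= band[1])
            feats.append(cxy[mask].mean())
    return np.array(feats)

feats = coherence_features(eeg, fs)
print(feats.shape)   # one feature per channel pair: 4*3/2 = 6
```

    A real system would compute such vectors per subject and condition and feed them to a classifier; coherence values lie in [0, 1] by construction.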

  17. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  18. Always on My Mind? Recognition of Attractive Faces May Not Depend on Attention.

    PubMed

    Silva, André; Macedo, António F; Albuquerque, Pedro B; Arantes, Joana

    2016-01-01

    Little research has examined what happens to attention and memory as a whole when humans see someone attractive. Hence, we investigated whether attractive stimuli gather more attention and are better remembered than unattractive stimuli. Participants took part in an attention task - in which matrices containing attractive and unattractive male naturalistic photographs were presented to 54 females, and measures of eye-gaze location and fixation duration using an eye-tracker were taken - followed by a recognition task. Eye-gaze was higher for the attractive stimuli compared to unattractive stimuli. Also, attractive photographs produced more hits and false recognitions than unattractive photographs which may indicate that regardless of attention allocation, attractive photographs produce more correct but also more false recognitions. We present an evolutionary explanation for this, as attending to more attractive faces but not always remembering them accurately and differentially compared with unseen attractive faces, may help females secure mates with higher reproductive value.

  19. Face recognition increases during saccade preparation.

    PubMed

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to human perception system as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be similarly processed as simple objects immediately prior to saccadic movements. Starting ∼ 120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, the face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset was approaching. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  20. Coming to grips with a "new" state of consciousness: the study of rapid-eye-movement sleep in the 1960s.

    PubMed

    Morrison, Adrian R

    2013-01-01

    The recognition of rapid-eye-movement sleep (REM) and its association with dreaming in 1953 by Aserinsky and Kleitman opened a new world to explore in the brain. Discussions at two major symposia in the early 1960s reveal that a state with characteristics resembling both wakefulness and sleep was overturning accepted views of the regulation of the two states. Participants grappled with the idea that cortical activation could occur during sleep. They struggled with picking a name that would capture the essence of REM without focusing on just one aspect of the state. Questioning whether REM in cats could be homologous with that of humans suggested an anthropocentric focus on human dreaming as the essence of the state. The need for biochemical studies was evident given that deprivation of REM caused a rebound in the amount of subsequent REM, which indicated that simple synaptic activity could not support this phenomenon.

  1. Visual scanning behavior is related to recognition performance for own- and other-age faces

    PubMed Central

    Proietti, Valentina; Macchi Cassia, Viola; dell’Amore, Francesca; Conte, Stefania; Bricolo, Emanuela

    2015-01-01

It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend also to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task where they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated to discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition. PMID:26579056

  2. Human-Computer Interface Controlled by Horizontal Directional Eye Movements and Voluntary Blinks Using AC EOG Signals

    NASA Astrophysics Data System (ADS)

    Kajiwara, Yusuke; Murata, Hiroaki; Kimura, Haruhiko; Abe, Koji

As a communication support tool for people with amyotrophic lateral sclerosis (ALS), research on eye-gaze human-computer interfaces has been active. However, since voluntary and involuntary eye movements cannot be distinguished in these interfaces, their performance is still not sufficient for practical use. This paper presents a high-performance human-computer interface system which combines high-quality recognition of horizontal directional eye movements and voluntary blinks. The experimental results show that, compared with an existing system which recognizes horizontal and vertical directional eye movements in addition to voluntary blinks, the number of incorrect inputs is decreased by 35.1% and character input is speeded up by 17.4%.

  3. Eye movement analysis for activity recognition using electrooculography.

    PubMed

    Bulling, Andreas; Ward, Jamie A; Gellersen, Hans; Tröster, Gerhard

    2011-04-01

    In this work, we investigate eye movement analysis as a new sensing modality for activity recognition. Eye movement data were recorded using an electrooculography (EOG) system. We first describe and evaluate algorithms for detecting three eye movement characteristics from EOG signals-saccades, fixations, and blinks-and propose a method for assessing repetitive patterns of eye movements. We then devise 90 different features based on these characteristics and select a subset of them using minimum redundancy maximum relevance (mRMR) feature selection. We validate the method using an eight participant study in an office environment using an example set of five activity classes: copying a text, reading a printed paper, taking handwritten notes, watching a video, and browsing the Web. We also include periods with no specific activity (the NULL class). Using a support vector machine (SVM) classifier and person-independent (leave-one-person-out) training, we obtain an average precision of 76.1 percent and recall of 70.5 percent over all classes and participants. The work demonstrates the promise of eye-based activity recognition (EAR) and opens up discussion on the wider applicability of EAR to other activities that are difficult, or even impossible, to detect using common sensing modalities.
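
    The person-independent (leave-one-person-out) evaluation scheme described above can be sketched as follows. This is not the authors' pipeline: the features are synthetic stand-ins for the 90 EOG-derived features, and a minimal nearest-centroid classifier replaces the paper's SVM.

```python
import numpy as np

# Sketch of leave-one-person-out evaluation over 8 participants and
# 6 classes (5 activities + NULL), with synthetic feature windows.
rng = np.random.default_rng(1)
n_people, n_windows, n_feats, n_classes = 8, 30, 90, 6
X = rng.standard_normal((n_people, n_windows, n_feats))
# Balanced labels so every class appears in every training fold.
y = np.tile(np.arange(n_classes), (n_people, n_windows // n_classes))

def nearest_centroid_accuracy(Xtr, ytr, Xte, yte, n_classes):
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in range(n_classes)])
    dists = np.linalg.norm(Xte[:, None, :] - centroids[None], axis=2)
    return float((dists.argmin(axis=1) == yte).mean())

accs = []
for held_out in range(n_people):          # each fold leaves one person out
    train = [p for p in range(n_people) if p != held_out]
    Xtr = X[train].reshape(-1, n_feats)
    ytr = y[train].reshape(-1)
    accs.append(nearest_centroid_accuracy(Xtr, ytr, X[held_out], y[held_out],
                                          n_classes))

print(len(accs))   # one accuracy score per held-out participant
```

    The key property, shared with the paper's setup, is that no windows from the test participant ever appear in training, so the reported accuracy is person-independent.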

  4. Unaware person recognition from the body when face identification fails.

    PubMed

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  5. Eye Movements to Pictures Reveal Transient Semantic Activation during Spoken Word Recognition

    ERIC Educational Resources Information Center

    Yee, Eiling; Sedivy, Julie C.

    2006-01-01

    Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an…

  6. Geometry and Gesture-Based Features from Saccadic Eye-Movement as a Biometric in Radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Tracy; Tourassi, Georgia; Yoon, Hong-Jun

In this study, we present a novel application of sketch gesture recognition to eye movements for biometric identification and for estimating task expertise. The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views, as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus for this study. Sketch gesture recognition techniques were employed to extract geometric and gesture-based features from saccadic eye movements. Our results show that saccadic eye movement, characterized using sketch-based features, results in more accurate models for predicting individual identity and level of expertise than more traditional eye-tracking features.

  7. The asymmetric distribution of informative face information during gender recognition.

    PubMed

    Hu, Fengpei; Hu, Huan; Xu, Lian; Qin, Jungang

    2013-02-01

    Recognition of the gender of a face is important in social interactions. In the current study, the distribution of informative facial information was systematically examined during gender judgment using two methods, Bubbles and Focus windows techniques. Two experiments found that the most informative information was around the eyes, followed by the mouth and nose. Other parts of the face contributed to the gender recognition but were less important. The left side of the face was used more during gender recognition in two experiments. These results show mainly areas around the eyes are used for gender judgment and demonstrate perceptual asymmetry with a normal (non-chimeric) face.

  8. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct an off-angle iris image. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on results from real images, the proposed method shows effective gaze estimation accuracy for our biometric eye model, with an average error of approximately 3.5 degrees over a 50 degree range.
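
    The look-up-table idea can be illustrated with a deliberately simplified model (not the ANONYMIZED biometric eye model): a circular pupil viewed off-angle projects to an ellipse whose minor/major axis ratio shrinks roughly as the cosine of the gaze angle, so a precomputed table of expected ratios can be inverted by nearest-neighbour search.

```python
import numpy as np

# Hypothetical foreshortening model: expected axis ratio per candidate angle.
angles = np.arange(0.0, 51.0, 1.0)     # candidate gaze angles, degrees
table = np.cos(np.radians(angles))     # expected minor/major axis ratio

def estimate_gaze(minor_axis, major_axis):
    """Return the table angle whose expected ratio best matches the ellipse."""
    ratio = minor_axis / major_axis
    return float(angles[np.argmin(np.abs(table - ratio))])

# An ellipse measured with axis ratio cos(30 degrees) maps back to 30 degrees.
print(estimate_gaze(np.cos(np.radians(30.0)), 1.0))   # → 30.0
```

    The paper's table is generated from a full biometric eye model rather than a single cosine relation, but the lookup step, matching measured boundary features to the nearest precomputed entry, has this shape.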

  9. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

This paper proposes a cat-eye effect target recognition method based on compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, linear projections of the original image sequences are applied to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and shifts target identification from traditional processing of the original images to processing of the measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual-criteria method.
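
    The property that makes processing in the measurement domain possible, namely that linear measurements commute with frame differencing, so background subtraction can operate on compressed measurement vectors instead of full images, can be sketched as follows. The measurement matrix and image sizes are illustrative assumptions, not the paper's SPCS design.

```python
import numpy as np

# Sketch: each frame x_t is reduced to a few random linear projections
# y_t = Phi x_t, and frame differences are computed in measurement space.
rng = np.random.default_rng(2)
n_pixels, n_measurements = 64 * 64, 256
phi = rng.standard_normal((n_measurements, n_pixels)) / np.sqrt(n_measurements)

frames = rng.standard_normal((10, n_pixels))   # stand-in image sequence
measurements = frames @ phi.T                  # y_t = Phi x_t for each frame

# Differencing then measuring equals measuring then differencing, which is
# what lets background removal run directly on the compressed data.
diff_direct = (frames[1] - frames[0]) @ phi.T
diff_measured = measurements[1] - measurements[0]
print(np.allclose(diff_direct, diff_measured))   # → True
```

    Here each 4096-pixel frame is stored as 256 numbers, a 16x reduction, which is the data-storage saving the abstract refers to.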

  10. Eye movements during spoken word recognition in Russian children.

    PubMed

    Sekerina, Irina A; Brooks, Patricia J

    2007-09-01

This study explores incremental processing in spoken word recognition in Russian 5- and 6-year-olds and adults using free-viewing eye-tracking. Participants viewed scenes containing pictures of four familiar objects and clicked on a target embedded in a spoken instruction. In the cohort condition, two object names shared identical three-phoneme onsets. In the noncohort condition, all object names had unique onsets. Coarse-grain analyses of eye movements indicated that adults produced looks to the competitor on significantly more cohort trials than on noncohort trials, whereas children surprisingly failed to demonstrate cohort competition due to widespread exploratory eye movements across conditions. Fine-grain analyses, in contrast, showed a similar time course of eye movements across children and adults, but with cohort competition lingering more than 1 s longer in children. The dissociation between coarse-grain and fine-grain eye movements indicates a need to consider multiple behavioral measures in making developmental comparisons in language processing.

  11. Emotion recognition in body dysmorphic disorder: application of the Reading the Mind in the Eyes Task.

    PubMed

    Buhlmann, Ulrike; Winter, Anna; Kathmann, Norbert

    2013-03-01

    Body dysmorphic disorder (BDD) is characterized by perceived appearance-related defects, often tied to aspects of the face or head (e.g., acne). Deficits in decoding emotional expressions have been examined in several psychological disorders including BDD. Previous research indicates that BDD is associated with impaired facial emotion recognition, particularly in situations that involve the BDD sufferer him/herself. The purpose of this study was to further evaluate the ability to read other people's emotions among 31 individuals with BDD, and 31 mentally healthy controls. We applied the Reading the Mind in the Eyes task, in which participants are presented with a series of pairs of eyes, one at a time, and are asked to identify the emotion that describes the stimulus best. The groups did not differ with respect to decoding other people's emotions by looking into their eyes. Findings are discussed in light of previous research examining emotion recognition in BDD. Copyright © 2013. Published by Elsevier Ltd.

  12. Dynamic Features for Iris Recognition.

    PubMed

    da Costa, R M; Gonzaga, A

    2012-08-01

    The human eye is sensitive to visible light. Increasing illumination on the eye causes the pupil of the eye to contract, while decreasing illumination causes the pupil to dilate. Visible light causes specular reflections inside the iris ring. On the other hand, the human retina is less sensitive to near infra-red (NIR) radiation in the wavelength range from 800 nm to 1400 nm, but iris detail can still be imaged with NIR illumination. In order to measure the dynamic movement of the human pupil and iris while keeping the light-induced reflexes from affecting the quality of the digitalized image, this paper describes a device based on the consensual reflex. This biological phenomenon contracts and dilates the two pupils synchronously when illuminating one of the eyes by visible light. In this paper, we propose to capture images of the pupil of one eye using NIR illumination while illuminating the other eye using a visible-light pulse. This new approach extracts iris features called "dynamic features (DFs)." This innovative methodology proposes the extraction of information about the way the human eye reacts to light, and to use such information for biometric recognition purposes. The results demonstrate that these features are discriminating features, and, even using the Euclidean distance measure, an average accuracy of recognition of 99.1% was obtained. The proposed methodology has the potential to be "fraud-proof," because these DFs can only be extracted from living irises.
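
    Independently of how the dynamic features are extracted, the Euclidean-distance recognition step reported above amounts to nearest-template matching, which can be sketched as follows with synthetic feature vectors (the values are stand-ins, not actual DFs).

```python
import numpy as np

# Sketch only: enrolled "dynamic feature" templates compared to a probe by
# Euclidean distance; the closest enrolled identity wins.
rng = np.random.default_rng(3)
enrolled = {f"subject_{i}": rng.standard_normal(16) for i in range(5)}

def identify(probe, enrolled):
    return min(enrolled, key=lambda name: np.linalg.norm(probe - enrolled[name]))

# A probe near subject_2's template (small additive noise) matches subject_2.
probe = enrolled["subject_2"] + 0.01 * rng.standard_normal(16)
print(identify(probe, enrolled))   # → subject_2
```

    A deployed system would also apply a distance threshold to reject probes that match no enrolled identity closely enough; that detail is omitted here.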

  13. Always on My Mind? Recognition of Attractive Faces May Not Depend on Attention

    PubMed Central

    Silva, André; Macedo, António F.; Albuquerque, Pedro B.; Arantes, Joana

    2016-01-01

    Little research has examined what happens to attention and memory as a whole when humans see someone attractive. Hence, we investigated whether attractive stimuli gather more attention and are better remembered than unattractive stimuli. Participants took part in an attention task – in which matrices containing attractive and unattractive male naturalistic photographs were presented to 54 females, and measures of eye-gaze location and fixation duration were taken using an eye-tracker – followed by a recognition task. Eye-gaze duration was longer for attractive stimuli than for unattractive stimuli. Attractive photographs also produced more hits and more false recognitions than unattractive photographs, which may indicate that, regardless of attention allocation, attractive photographs produce more correct but also more false recognitions. We present an evolutionary explanation for this: attending to more attractive faces, but not always remembering them accurately or distinguishing them from unseen attractive faces, may help females secure mates with higher reproductive value. PMID:26858683

  14. Aging and Emotion Recognition: Not Just a Losing Matter

    PubMed Central

    Sze, Jocelyn A.; Goodkind, Madeleine S.; Gyurak, Anett; Levenson, Robert W.

    2013-01-01

    Past studies on emotion recognition and aging have found evidence of age-related decline when emotion recognition was assessed by having participants detect single emotions depicted in static images of full or partial (e.g., eye region) faces. These tests afford good experimental control but do not capture the dynamic nature of real-world emotion recognition, which is often characterized by continuous emotional judgments and dynamic multi-modal stimuli. Research suggests that older adults often perform better under conditions that better mimic real-world social contexts. We assessed emotion recognition in young, middle-aged, and older adults using two traditional methods (single emotion judgments of static images of faces and eyes) and an additional method in which participants made continuous emotion judgments of dynamic, multi-modal stimuli (videotaped interactions between young, middle-aged, and older couples). Results revealed an age by test interaction. Largely consistent with prior research, we found some evidence that older adults performed worse than young adults when judging single emotions from images of faces (for sad and disgust faces only) and eyes (for older eyes only), with middle-aged adults falling in between. In contrast, older adults did better than young adults on the test involving continuous emotion judgments of dyadic interactions, with middle-aged adults falling in between. In tests in which target stimuli differed in age, emotion recognition was not facilitated by an age match between participant and target. These findings are discussed in terms of theoretical and methodological implications for the study of aging and emotional processing. PMID:22823183

  15. Neural network application for thermal image recognition of low-resolution objects

    NASA Astrophysics Data System (ADS)

    Fang, Yi-Chin; Wu, Bo-Wen

    2007-02-01

    In the ever-changing situation on a battlefield, accurate recognition of a distant object is critical to a commander's decision-making and the general public's safety. Efficiently distinguishing between an enemy's armoured vehicles and ordinary civilian houses under all weather conditions has become an important research topic. This study presents a system for recognizing an armoured vehicle by distinguishing marks and contours. The characteristics of 12 different shapes and 12 characters are used to explore thermal image recognition under circumstances of long distance and low resolution. Although the recognition capability of human eyes is superior to that of artificial intelligence under normal conditions, it tends to deteriorate substantially in long-distance and low-resolution scenarios. This study presents an effective method for choosing features and processing images. The artificial neural network technique is applied to further improve the probability of accurate recognition well beyond the limit of the recognition capability of human eyes.

  16. [Electronic Device for Retinal and Iris Imaging].

    PubMed

    Drahanský, M; Kolář, R; Mňuk, T

    This paper describes the design and construction of a new device for automatic capture of eye retina and iris images. The device has two possible uses: biometric (recognition of persons based on their eye characteristics) or medical, as a supporting diagnostic device. Keywords: eye retina, eye iris, device, acquisition, image.

  17. Prediction of the thermal imaging minimum resolvable (circle) temperature difference with neural network application.

    PubMed

    Fang, Yi-Chin; Wu, Bo-Wen

    2008-12-01

    Thermal imaging is an important technology in both national defense and the private sector. An advantage of thermal imaging is that it can be deployed while fully engaged in duties, limited by neither weather nor the brightness of indoor or outdoor conditions. However, in an outdoor environment, many factors, including atmospheric decay, target shape, great distance, fog, temperature out of range and diffraction limits, can lead to poor image formation, which directly affects the accuracy of object recognition. The visual characteristics of the human eye give it a much better capacity for picture recognition under normal conditions than artificial intelligence has. However, conditions of interference significantly reduce this capacity; for instance, fatigue impairs human eyesight. Hence, psychological and physiological factors can affect the result when the human eye is used to measure MRTD (minimum resolvable temperature difference) and MRCTD (minimum resolvable circle temperature difference). This study explores thermal imaging recognition and presents a method for effectively choosing characteristic values and fully processing the images. Neural network technology is successfully applied to recognize thermal images and to predict MRTD and MRCTD (Appendix A), exceeding the recognition capability of the human eye, which is limited by fatigue.

  18. Contralateral comparison of wavefront-guided LASIK surgery with iris recognition versus without iris recognition using the MEL80 Excimer laser system.

    PubMed

    Wu, Fang; Yang, Yabo; Dougherty, Paul J

    2009-05-01

    To compare outcomes of wavefront-guided LASIK performed with versus without iris recognition software in fellow eyes of the same patient. A randomised, prospective study of 104 myopic eyes of 52 patients undergoing LASIK surgery with the MEL80 excimer laser system was performed. Iris recognition software was used in one eye of each patient (study group) and not in the other (control group). Higher-order aberrations (HOAs), contrast sensitivity, uncorrected vision (UCV), visual acuity (VA) and corneal topography were measured and recorded pre-operatively and at one month and three months post-operatively for each eye. The mean post-operative sphere and cylinder were similar between groups; however, the post-operative angles of error (AE) by refraction were significantly smaller in the study group than in the control group in both arithmetic and absolute means (p = 0.03, p = 0.01). The mean logMAR UCV was significantly better in the study group than in the control group at one month (p = 0.01). The mean logMAR VA was significantly better in the study group than in the control group at both one and three months (p = 0.01, p = 0.03). In addition, mean trefoil, total third-order aberration, total fourth-order aberration and total scotopic root-mean-square (RMS) HOAs were significantly lower in the study group than in the control group at three months (p = 0.01, p = 0.05, p = 0.04, p = 0.02). By three months, contrast sensitivity had recovered in both groups, but the study group performed better at 2.6, 4.2 and 6.6 cpd (cycles per degree) than the control group (p = 0.01, p < 0.01, p = 0.01). LASIK performed with iris recognition results in better VA, lower mean higher-order aberrations, lower refractive post-operative angles of error and better contrast sensitivity at three months post-operatively than LASIK performed without iris recognition.

  19. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition.

    PubMed

    Van Engen, Kristin J; McLaughlin, Drew J

    2018-05-04

    Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g. noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition. Copyright © 2018. Published by Elsevier B.V.
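    As a sketch of the kind of online measure this review discusses: a standard pupillometry analysis step is subtractive baseline correction, in which mean pre-stimulus pupil size is subtracted from the post-stimulus trace, and the corrected dilation is averaged as a rough index of cognitive load. The function name, window scheme, and sample values below are illustrative assumptions, not the authors' pipeline:

```python
from statistics import mean

def baseline_corrected_dilation(samples, baseline_window, response_window):
    """Subtractive baseline correction: the mean pupil size in a
    pre-stimulus window is subtracted from each sample, then the
    corrected trace is averaged over the post-stimulus response window.

    samples: pupil diameters sampled at a fixed rate
    baseline_window, response_window: (start, end) index pairs
    """
    b0, b1 = baseline_window
    r0, r1 = response_window
    baseline = mean(samples[b0:b1])
    corrected = [s - baseline for s in samples[r0:r1]]
    return mean(corrected)

# Illustrative trace: the pupil dilates after stimulus onset at index 5.
trace = [3.0, 3.0, 3.1, 3.0, 2.9, 3.2, 3.4, 3.5, 3.6, 3.5]
load_index = baseline_corrected_dilation(trace, (0, 5), (5, 10))
print(round(load_index, 2))  # -> 0.44
```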

  20. A Bayesian computational model for online character recognition and disability assessment during cursive eye writing.

    PubMed

    Diard, Julien; Rynik, Vincent; Lorenceau, Jean

    2013-01-01

    This research involves a novel apparatus, in which the user is presented with an illusion-inducing visual stimulus. The user perceives illusory movement that can be followed by the eye, so that smooth pursuit eye movements can be sustained in arbitrary directions. Thus, free-flow trajectories of any shape can be traced. In other words, coupled with an eye-tracking device, this apparatus enables "eye writing," which appears to be an original object of study. We adapt a previous model of reading and writing to this context. We describe a probabilistic model called the Bayesian Action-Perception for Eye On-Line model (BAP-EOL). It encodes probabilistic knowledge about isolated letter trajectories, their size, high-frequency components of the produced trajectory, and pupil diameter. We show how Bayesian inference, in this single model, can be used to solve several tasks, like letter recognition and novelty detection (i.e., recognizing when a presented character is not part of the learned database). We are interested in the potential use of the eye writing apparatus by motor-impaired patients: the final task we solve by Bayesian inference is disability assessment (i.e., measuring and tracking the evolution of motor characteristics of produced trajectories). Preliminary experimental results are presented, which illustrate the method, showing the feasibility of character recognition in the context of eye writing. We then show experimentally how a model of the unknown character can be used to detect trajectories that are likely to be new symbols, and how disability assessment can be performed by opportunistically observing characteristics of fine motor control, as letters are being traced. Experimental analyses also help identify specificities of eye writing, as compared to handwriting, and the resulting technical challenges.

  1. A Bayesian computational model for online character recognition and disability assessment during cursive eye writing

    PubMed Central

    Diard, Julien; Rynik, Vincent; Lorenceau, Jean

    2013-01-01

    This research involves a novel apparatus, in which the user is presented with an illusion-inducing visual stimulus. The user perceives illusory movement that can be followed by the eye, so that smooth pursuit eye movements can be sustained in arbitrary directions. Thus, free-flow trajectories of any shape can be traced. In other words, coupled with an eye-tracking device, this apparatus enables “eye writing,” which appears to be an original object of study. We adapt a previous model of reading and writing to this context. We describe a probabilistic model called the Bayesian Action-Perception for Eye On-Line model (BAP-EOL). It encodes probabilistic knowledge about isolated letter trajectories, their size, high-frequency components of the produced trajectory, and pupil diameter. We show how Bayesian inference, in this single model, can be used to solve several tasks, like letter recognition and novelty detection (i.e., recognizing when a presented character is not part of the learned database). We are interested in the potential use of the eye writing apparatus by motor-impaired patients: the final task we solve by Bayesian inference is disability assessment (i.e., measuring and tracking the evolution of motor characteristics of produced trajectories). Preliminary experimental results are presented, which illustrate the method, showing the feasibility of character recognition in the context of eye writing. We then show experimentally how a model of the unknown character can be used to detect trajectories that are likely to be new symbols, and how disability assessment can be performed by opportunistically observing characteristics of fine motor control, as letters are being traced. Experimental analyses also help identify specificities of eye writing, as compared to handwriting, and the resulting technical challenges. PMID:24273525
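    The letter recognition and novelty detection tasks described in this abstract can be sketched in miniature as a naive Bayes classifier with Gaussian feature likelihoods and a crude evidence threshold for flagging unlearned symbols. This is a simplified stand-in for the BAP-EOL model, not the authors' implementation; the letter set, the feature names ("length", "hf"), and all numeric values are hypothetical:

```python
import math

def gauss(x, mu, sigma):
    """Gaussian likelihood N(x; mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical per-letter feature models, (mean, sd), for two trajectory
# features: normalized trace length ("length") and high-frequency energy
# ("hf"). Values are invented for illustration.
models = {
    "a": {"length": (1.0, 0.1), "hf": (0.2, 0.05)},
    "l": {"length": (0.5, 0.1), "hf": (0.1, 0.05)},
}

def recognize(features, novelty_threshold=1e-4):
    """MAP letter under a uniform prior with conditionally independent
    Gaussian features (naive Bayes); returns None when the total
    evidence is so low the trace is likely a new, unlearned symbol."""
    likelihoods = {
        letter: math.prod(gauss(features[f], *m[f]) for f in features)
        for letter, m in models.items()
    }
    evidence = sum(likelihoods.values())
    if evidence < novelty_threshold:
        return None  # novelty detection: not in the learned database
    return max(likelihoods, key=likelihoods.get)

print(recognize({"length": 0.95, "hf": 0.22}))   # -> a
print(recognize({"length": 2.00, "hf": 0.90}))   # -> None (novel symbol)
```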

  2. What's good for the goose is not good for the gander: Age and gender differences in scanning emotion faces.

    PubMed

    Sullivan, Susan; Campbell, Anna; Hutton, Sam B; Ruffman, Ted

    2017-05-01

    Research indicates that older adults' (≥60 years) emotion recognition is worse than that of young adults, young and older men's emotion recognition is worse than that of young and older women (respectively), older adults' looking at mouths compared with eyes is greater than that of young adults. Nevertheless, previous research has not compared older men's and women's looking at emotion faces so the present study had two aims: (a) to examine whether the tendency to look at mouths is stronger amongst older men compared with older women and (b) to examine whether men's mouth looking correlates with better emotion recognition. We examined the emotion recognition abilities and spontaneous gaze patterns of young (n = 60) and older (n = 58) males and females as they labelled emotion faces. Older men spontaneously looked more to mouths than older women, and older men's looking at mouths correlated with their emotion recognition, whereas women's looking at eyes correlated with their emotion recognition. The findings are discussed in relation to a growing body of research suggesting both age and gender differences in response to emotional stimuli and the differential efficacy of mouth and eyes looking for men and women. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Recognizing Biological Motion and Emotions from Point-Light Displays in Autism Spectrum Disorders

    PubMed Central

    Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P.; Wenderoth, Nicole; Alaerts, Kaat

    2012-01-01

    Among the main characteristics of Autism Spectrum Disorder (ASD) are problems with social interaction and communication. Here, we explored ASD-related alterations in ‘reading’ the body language of other humans. Accuracy and reaction times were assessed from two observational tasks involving the recognition of ‘biological motion’ and ‘emotions’ from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically-developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological and emotion recognition tasks. Biological motion recognition abilities were related to a person’s ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye movements were assessed during the completion of tasks, and results indicated that ASD participants generally produced more saccades and shorter fixation durations compared to the control group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task performance. PMID:22970227

  4. Recognizing biological motion and emotions from point-light displays in autism spectrum disorders.

    PubMed

    Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P; Wenderoth, Nicole; Alaerts, Kaat

    2012-01-01

    Among the main characteristics of Autism Spectrum Disorder (ASD) are problems with social interaction and communication. Here, we explored ASD-related alterations in 'reading' the body language of other humans. Accuracy and reaction times were assessed from two observational tasks involving the recognition of 'biological motion' and 'emotions' from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically-developed participants were more accurate than ASD subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control tasks (involving the indication of color changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological and emotion recognition tasks. Biological motion recognition abilities were related to a person's ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point-light displays. Eye movements were assessed during the completion of tasks, and results indicated that ASD participants generally produced more saccades and shorter fixation durations compared to the control group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task performance.

  5. [Comparative clinical study of wavefront-guided laser in situ keratomileusis with versus without iris recognition for myopia or myopic astigmatism].

    PubMed

    Wang, Wei-qun; Zhang, Jin-song; Zhao, Xiao-jin

    2011-10-01

    To explore the postoperative visual acuity outcomes of wavefront-guided LASIK with iris recognition for myopia or myopic astigmatism, together with changes in higher-order aberrations and contrast sensitivity function (CSF). In a prospective case series, 158 eyes (85 patients) with myopia or myopic astigmatism were divided into two groups: one underwent wavefront-guided LASIK with iris recognition (iris recognition group); the other underwent wavefront-guided LASIK without iris recognition, aligned by a limbal marking point (non-iris recognition group). Postoperative visual acuity, residual refraction, the RMS of higher-order aberrations and CSF were compared between the two groups. There was no statistically significant difference between the groups in mean uncorrected visual acuity (t = 0.039, 0.058, 0.898; P = 0.844, 0.810, 0.343), best corrected visual acuity (t = 0.320, 0.440, 1.515; P = 0.572, 0.507, 0.218), or residual refraction [spherical equivalent (t = 0.027, 0.215, 0.238; P = 0.869, 0.643, 0.626), sphere (t = 0.145, 0.117, 0.038; P = 0.704, 0.732, 0.845) and cylinder (t = 1.676, 1.936, 0.334; P = 0.195, 0.164, 0.563)] at 10 days, 1 month and 3 months post-operatively. At 3 months, the safety index was 1.06 in the iris recognition group and 1.03 in the non-iris recognition group; the efficacy index was 1.01 and 1.00, respectively. At 3 months, 93.83% of eyes in the iris recognition group and 90.91% in the non-iris recognition group had a spherical equivalent within ± 0.50 D (χ(2) = 0.479, P = 0.489); 98.77% and 97.40%, respectively, were within ± 1.00 D (Fisher's exact test, P = 0.613). There was no significant difference between the two groups in safety, efficacy or predictability. At 1 month and 3 months post-operatively, the root-mean-square (RMS) of third-order aberrations was higher in the non-iris recognition group than in the iris recognition group (t = 3.414, -2.870; P = 0.027, 0.045), coma in particular; total higher-order aberrations (t = 0.386, 1.132; P = 0.719, 0.321), fourth-order aberrations (t = 0.808, 2.720; P = 0.464, 0.063) and fifth-order aberrations (t = 0.148, -1.717; P = 0.890, 0.161) showed no statistically significant differences. Three months after surgery, CSF had recovered at all spatial frequencies in both groups; under mesopic conditions the iris recognition group was better than the non-iris recognition group at the 3.0 c/d (t = 3.209, P = 0.002) and 6.0 c/d (t = 2.997, P = 0.004) spatial frequencies, and the glare contrast sensitivity function (GCSF) was better in the iris recognition group at 3.0 c/d (t = 3.423, P = 0.001) and 6.0 c/d (t = 6.986, P = 0.000) under mesopic conditions and at 1.5 c/d (t = 9.839, P = 0.000) and 3.0 c/d (t = 7.367, P = 0.000) under photopic conditions; there was no significant difference between the groups at the other spatial frequencies. Wavefront-guided LASIK with or without iris recognition both achieved good postoperative visual acuity, but compared with surgery without iris recognition, wavefront-guided LASIK with iris recognition more effectively reduced coma and enhanced postoperative contrast sensitivity.

  6. The Role of Eyes and Mouth in the Memory of a Face

    ERIC Educational Resources Information Center

    McKelvie, Stuart J.

    1976-01-01

    Investigates the relative importance that the eyes and mouth play in the representation in memory of a human face. Systematically applies two kinds of transformation--masking the eyes or the mouths on photographs of faces--and observes the effects on recognition. (Author/RK)

  7. Cataract influence on iris recognition performance

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Czajka, Adam; Maciejewicz, Piotr

    2014-11-01

    This paper presents an experimental study revealing weaker performance of automatic iris recognition methods for cataract-affected eyes compared to healthy eyes. There is little research on the topic, mostly incorporating scarce databases that are often deficient in images representing more than one illness. We built our own database, acquiring 1288 eye images of 37 patients of the Medical University of Warsaw. Those images represent several common ocular diseases, such as cataract, along with less ordinary conditions, such as iris pattern alterations derived from illness or eye trauma. Images were captured in near-infrared light (used in biometrics) and, for selected cases, also in visible light (used in ophthalmological diagnosis). Since cataract is the disorder most populated by samples in the database, in this paper we focus solely on this illness. To assess the extent of the performance deterioration we use three iris recognition methodologies (commercial and academic solutions) to calculate genuine match scores for healthy eyes and those influenced by cataract. Results show a significant degradation in iris recognition reliability, manifested by a worsening of genuine scores in all three matchers used in this study (a 12% genuine score increase for an academic matcher, and up to a 175% genuine score increase for an example commercial matcher). This increase in genuine scores affected the final false non-match rate in two matchers. To the best of our knowledge this is the only study of its kind that employs more than one iris matcher and analyzes iris image segmentation as a potential source of decreased reliability.
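    Genuine match scores of the kind compared in this study are commonly computed as a fractional Hamming distance between binary iris codes, with occlusion masks excluding invalid bits; a higher score between two images of the same eye indicates degraded matching. The sketch below uses toy 8-bit codes for illustration only; real iris codes run to thousands of bits, and the specific matchers used in the study are not described by this code:

```python
def fractional_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bit positions that are valid (unmasked) in both
    images. Lower scores indicate a better genuine match; disease-
    related degradation shows up as genuine scores drifting upward."""
    valid = disagreements = 0
    for a, b, ma, mb in zip(code_a, code_b, mask_a, mask_b):
        if ma and mb:
            valid += 1
            if a != b:
                disagreements += 1
    return disagreements / valid if valid else 1.0

healthy  = [1, 0, 1, 1, 0, 0, 1, 0]
cataract = [1, 0, 0, 1, 0, 1, 1, 0]   # two bits flipped by a degraded image
mask = [1] * 8                         # all bit positions valid
print(fractional_hamming(healthy, cataract, mask, mask))  # -> 0.25
```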

  8. Rehabilitation of face-processing skills in an adolescent with prosopagnosia: Evaluation of an online perceptual training programme.

    PubMed

    Bate, Sarah; Bennetts, Rachel; Mole, Joseph A; Ainge, James A; Gregory, Nicola J; Bobak, Anna K; Bussunt, Amanda

    2015-01-01

    In this paper we describe the case of EM, a female adolescent who acquired prosopagnosia following encephalitis at the age of eight. Initial neuropsychological and eye-movement investigations indicated that EM had profound difficulties in face perception as well as face recognition. EM underwent 14 weeks of perceptual training in an online programme that attempted to improve her ability to make fine-grained discriminations between faces. Following training, EM's face perception skills had improved, and the effect generalised to untrained faces. Eye-movement analyses also indicated that EM spent more time viewing the inner facial features post-training. Examination of EM's face recognition skills revealed an improvement in her recognition of personally-known faces when presented in a laboratory-based test, although the same gains were not noted in her everyday experiences with these faces. In addition, EM did not improve on a test assessing the recognition of newly encoded faces. One month after training, EM had maintained the improvement on the eye-tracking test, and to a lesser extent, her performance on the familiar faces test. This pattern of findings is interpreted as promising evidence that the programme can improve face perception skills, and with some adjustments, may at least partially improve face recognition skills.

  9. Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation

    PubMed Central

    Kunert, Richard; Scheepers, Christoph

    2014-01-01

    Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition. PMID:25346708

  10. An eye on reactor and computer control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.; Knee, B.

    1992-01-01

    At ORNL, computer software has been developed to make possible an improved eye-gaze measurement technology. Such an innovation could be the basis for advanced eye-gaze systems with applications in reactor control, software development, cognitive engineering, evaluation of displays, prediction of mental workloads, and military target recognition.

  11. Pattern of eyelid motion predictive of decision errors during drowsiness: oculomotor indices of altered states.

    PubMed

    Lobb, M L; Stern, J A

    1986-08-01

    Sequential patterns of eye and eyelid motion were identified in seven subjects performing a modified serial probe recognition task under drowsy conditions. Using simultaneous EOG and video recordings, eyelid motion was divided into components above, within, and below the pupil, and the durations in sequence were recorded. A serial probe recognition task was modified to allow decision errors to be distinguished from attention errors. Decision errors were found to be more frequent following a downward shift in gaze angle in which the eyelid closing sequence was reduced from a five-element to a three-element sequence. The velocity of the eyelid moving over the pupil during decision errors was slow in the closing and fast in the reopening phase, while on decision-correct trials it was fast in closing and slower in reopening. Due to the high variability of eyelid motion under drowsy conditions, these findings were only marginally significant. When a five-element blink occurred, the velocity of the lid-over-pupil motion component of these endogenous eye blinks was significantly faster on decision-correct than on decision-error trials. Furthermore, the highly variable, long-duration closings associated with the decision response produced slow eye movements in the horizontal plane (SEM), which were more frequent and significantly longer in duration on decision-error versus decision-correct responses.

  12. Novel salicylic acid-oriented thiourea-type receptors as colorimetric chemosensor: Synthesis, characterizations and selective naked-eye recognition properties

    NASA Astrophysics Data System (ADS)

    Li, Shaowei; Cao, Xiufang; Chen, Changshui; Ke, Shaoyong

    2012-10-01

    Based on the salicylic acid backbone, three highly sensitive and selective colorimetric chemosensors with an acylthiourea binding unit have been designed, synthesized and characterized. These chemosensors have been utilized for selective recognition of fluoride anions in dry DMSO solution by typical spectroscopic titration techniques. Furthermore, the obtained chemosensors AR1-3 have shown naked-eye sensitivity for detection of biologically important fluoride ion over other anions in solution.

  13. Unification of automatic target tracking and automatic target recognition

    NASA Astrophysics Data System (ADS)

    Schachter, Bruce J.

    2014-06-01

    The subject being addressed is how an automatic target tracker (ATT) and an automatic target recognizer (ATR) can be fused together so tightly and so well that their distinctiveness becomes lost in the merger. This has historically not been the case outside of biology and a few academic papers. The biological model of ATT∪ATR arises from dynamic patterns of activity distributed across many neural circuits and structures (including retina). The information that the brain receives from the eyes is "old news" at the time that it receives it. The eyes and brain forecast a tracked object's future position, rather than relying on received retinal position. Anticipation of the next moment - building up a consistent perception - is accomplished under difficult conditions: motion (eyes, head, body, scene background, target) and processing limitations (neural noise, delays, eye jitter, distractions). Not only does the human vision system surmount these problems, but it has innate mechanisms to exploit motion in support of target detection and classification. Biological vision doesn't normally operate on snapshots. Feature extraction, detection and recognition are spatiotemporal. When vision is viewed as a spatiotemporal process, target detection, recognition, tracking, event detection and activity recognition do not seem as distinct as they are in current ATT and ATR designs. They appear as similar mechanisms taking place at varying time scales. A framework is provided for unifying ATT and ATR.

  14. The Wireless Ubiquitous Surveillance Testbed

    DTIC Science & Technology

    2003-03-01

    Excerpts from the report: "c. Eye Patterns ... d. Facial Recognition ... Table F.4. Facial Recognition Products (After: Polemi, p. 25 and BiometriTech, 15 May 2002) ... it applies to homeland security. C. RESEARCH TASKS: The main goals of this thesis are to: set up the biometric sensors and facial recognition surveillance ..."

  15. Frontal view reconstruction for iris recognition

    DOEpatents

    Santos-Villalobos, Hector J; Bolme, David S; Boehnen, Chris Bensing

    2015-02-17

    Iris recognition can be accomplished for a wide variety of eye images by correcting input images with an off-angle gaze. A variety of techniques can be employed, including limbus modeling, corneal refraction modeling, aspherical eye modeling, ray tracing, optical flows, and genetic algorithms. Precomputed transforms can enhance performance for use in commercial applications. With application of these technologies, images with significantly unfavorable gaze angles can be successfully recognized.

  16. ASERA: A Spectrum Eye Recognition Assistant

    NASA Astrophysics Data System (ADS)

    Yuan, Hailong; Zhang, Haotong; Zhang, Yanxia; Lei, Yajuan; Dong, Yiqiao; Zhao, Yongheng

    2018-04-01

    ASERA, A Spectrum Eye Recognition Assistant, aids in quasar spectral recognition and redshift measurement and can also be used to recognize various types of spectra of stars, galaxies, and AGNs (Active Galactic Nuclei). This interactive software allows users to visualize observed spectra, superimpose template spectra from the Sloan Digital Sky Survey (SDSS), and interactively access related spectral line information. ASERA is an efficient and user-friendly semi-automated toolkit for the accurate classification of spectra observed by LAMOST (the Large Sky Area Multi-object Fiber Spectroscopic Telescope) and is available as a standalone Java application and as a Java applet. The software offers several functions, including wavelength and flux scale settings, zoom in and out, redshift estimation, and spectral line identification.
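The redshift estimation that such a tool assists with reduces to simple arithmetic once an observed spectral feature is identified with a rest-frame line: z = λ_obs/λ_rest − 1. A hedged sketch, the line list holds standard laboratory wavelengths, but the function and its name are illustrative and not part of ASERA:

```python
# Redshift from line identification: z = lambda_observed / lambda_rest - 1.
# Rest wavelengths (Angstroms) are standard laboratory values; the matching
# workflow here is a simplified stand-in for interactive template shifting.

REST_LINES = {                 # rest-frame wavelengths in Angstroms
    "Lyman-alpha": 1215.67,
    "C IV": 1549.06,
    "Mg II": 2798.75,
    "H-alpha": 6562.8,
}

def redshift(observed_angstrom, line):
    """Redshift implied by identifying an observed feature with a rest line."""
    return observed_angstrom / REST_LINES[line] - 1.0

# A quasar whose Mg II feature is observed at 5597.5 A sits at z = 1:
z = redshift(5597.5, "Mg II")
```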

  17. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    NASA Astrophysics Data System (ADS)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system for identifying persons whose faces are partly occluded, from video sequence images. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed feature extraction on the eye and nose images separately, and a Multi-Layer Perceptron classifier was then used. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes, 98.16% for the nose, and 97.25% for the whole face).

  18. Emotion recognition deficits associated with ventromedial prefrontal cortex lesions are improved by gaze manipulation.

    PubMed

    Wolf, Richard C; Pujara, Maia; Baskaya, Mustafa K; Koenigs, Michael

    2016-09-01

    Facial emotion recognition is a critical aspect of human communication. Since abnormalities in facial emotion recognition are associated with social and affective impairment in a variety of psychiatric and neurological conditions, identifying the neural substrates and psychological processes underlying facial emotion recognition will help advance basic and translational research on social-affective function. Ventromedial prefrontal cortex (vmPFC) has recently been implicated in deploying visual attention to the eyes of emotional faces, although there is mixed evidence regarding the importance of this brain region for recognition accuracy. In the present study of neurological patients with vmPFC damage, we used an emotion recognition task with morphed facial expressions of varying intensities to determine (1) whether vmPFC is essential for emotion recognition accuracy, and (2) whether instructed attention to the eyes of faces would be sufficient to improve any accuracy deficits. We found that vmPFC lesion patients are impaired, relative to neurologically healthy adults, at recognizing moderate intensity expressions of anger and that recognition accuracy can be improved by providing instructions of where to fixate. These results suggest that vmPFC may be important for the recognition of facial emotion through a role in guiding visual attention to emotionally salient regions of faces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Eye-movement strategies in developmental prosopagnosia and "super" face recognition.

    PubMed

    Bobak, Anna K; Parris, Benjamin A; Gregory, Nicola J; Bennetts, Rachel J; Bate, Sarah

    2017-02-01

    Developmental prosopagnosia (DP) is a cognitive condition characterized by a severe deficit in face recognition. Few investigations have examined whether impairments at the early stages of processing may underpin the condition, and it is also unknown whether DP is simply the "bottom end" of the typical face-processing spectrum. To address these issues, we monitored the eye-movements of DPs, typical perceivers, and "super recognizers" (SRs) while they viewed a set of static images displaying people engaged in naturalistic social scenarios. Three key findings emerged: (a) Individuals with more severe prosopagnosia spent less time examining the internal facial region, (b) as observed in acquired prosopagnosia, some DPs spent less time examining the eyes and more time examining the mouth than controls, and (c) SRs spent more time examining the nose-a measure that also correlated with face recognition ability in controls. These findings support previous suggestions that DP is a heterogeneous condition, but suggest that at least the most severe cases represent a group of individuals that qualitatively differ from the typical population. While SRs seem to merely be those at the "top end" of normal, this work identifies the nose as a critical region for successful face recognition.

  20. Action and emotion recognition from point light displays: an investigation of gender differences.

    PubMed

    Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P; Wenderoth, Nicole

    2011-01-01

    Folk psychology advocates the existence of gender differences in socio-cognitive functions such as 'reading' the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks, involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly higher for males compared to females on PLD recognition tasks involving (i) the general recognition of 'biological motion' versus 'non-biological' (or 'scrambled' motion); or (ii) the recognition of the 'emotional state' of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) and for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the 'Reading the Mind in the Eyes Test' (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. 
Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and their ability to discriminate biological from non-biological motion indicates that differences in emotion recognition may, at least to some degree, be related to more basic differences in processing biological motion per se.

  1. Predictive factor analysis for successful performance of iris recognition-assisted dynamic rotational eye tracking during laser in situ keratomileusis.

    PubMed

    Prakash, Gaurav; Ashok Kumar, Dhivya; Agarwal, Amar; Jacob, Soosan; Sarvanan, Yoga; Agarwal, Athiya

    2010-02-01

    To analyze the predictive factors associated with success of iris recognition and dynamic rotational eye tracking on a laser in situ keratomileusis (LASIK) platform with active assessment and correction of intraoperative cyclotorsion. Interventional case series. Two hundred seventy-five eyes of 142 consecutive candidates underwent LASIK with attempted iris recognition and dynamic rotational tracking on the Technolas 217z100 platform (Technolas Perfect Vision, St Louis, Missouri, USA) at a tertiary care ophthalmic hospital. The main outcome measures, age, gender, flap creation method (femtosecond, microkeratome, epi-LASIK), success of static rotational tracking, ablation algorithm, pulses, and depth, as well as preablation and intraablation rotational activity, were analyzed and evaluated using regression models. Preablation static iris recognition was successful in 247 eyes, with no difference among flap creation methods (P = .6). Age (partial correlation, -0.16; P = .014), number of pulses (partial correlation, 0.39; P = 1.6 x 10^-8), and gender (P = .02) were significant predictive factors for the amount of intraoperative cyclodeviation. Tracking difficulties leading to linking the ablation with a newly acquired intraoperative iris image were more frequent with femtosecond-assisted flaps (P = 2.8 x 10^-7) and with greater intraoperative cyclotorsion (P = .02). However, the number of cases with nonresolvable failure of intraoperative rotational tracking was similar across the 3 flap creation methods (P = .22). Intraoperative cyclotorsional activity depends on age, gender, and duration of ablation (pulses delivered). Femtosecond flaps do not seem to have a disadvantage over microkeratome flaps as far as iris recognition and success of intraoperative dynamic rotational tracking are concerned. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  2. Increased deficits in emotion recognition and regulation in children and adolescents with exogenous obesity.

    PubMed

    Percinel, Ipek; Ozbaran, Burcu; Kose, Sezen; Simsek, Damla Goksen; Darcan, Sukran

    2018-03-01

    In this study we aimed to evaluate the emotion recognition and emotion regulation skills of children with exogenous obesity between the ages of 11 and 18 years and to compare them with healthy controls. The Schedule for Affective Disorders and Schizophrenia for School-Aged Children was used for psychiatric evaluations. Emotion recognition skills were evaluated using the Faces Test and the Reading the Mind in the Eyes Test. The Difficulties in Emotion Regulation Scale was used to evaluate emotion regulation skills. Children with obesity had lower scores on the Faces Test and the Reading the Mind in the Eyes Test, and experienced greater difficulty with emotion regulation skills. An improved understanding of emotion recognition and emotion regulation in young people with obesity may improve their social adaptation and help in the treatment of their disorder. To the best of our knowledge, this is the first study to evaluate both emotion recognition and emotion regulation functions in obese children and adolescents between 11 and 18 years of age.

  3. Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2009-12-01

    Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.

  4. Uncovering Dangerous Cheats: How Do Avian Hosts Recognize Adult Brood Parasites?

    PubMed Central

    Trnka, Alfréd; Prokop, Pavol; Grim, Tomáš

    2012-01-01

    Background Co-evolutionary struggles between dangerous enemies (e.g., brood parasites) and their victims (hosts) lead to the emergence of sophisticated adaptations and counter-adaptations. Salient host tricks to reduce parasitism costs include, as front line defence, adult enemy discrimination. In contrast to the well studied egg stage, investigations addressing the specific cues for adult enemy recognition are rare. Previous studies have suggested barred underparts and yellow eyes may provide cues for the recognition of cuckoos Cuculus canorus by their hosts; however, no study to date has examined the role of the two cues simultaneously under a consistent experimental paradigm. Methodology/Principal Findings We modify and extend previous work using a novel experimental approach – custom-made dummies with various combinations of hypothesized recognition cues. The salient recognition cue turned out to be the yellow eye. Barred underparts, the only trait examined previously, had a statistically significant but small effect on host aggression highlighting the importance of effect size vs. statistical significance. Conclusion Relative importance of eye vs. underpart phenotypes may reflect ecological context of host-parasite interaction: yellow eyes are conspicuous from the typical direction of host arrival (from above), whereas barred underparts are poorly visible (being visually blocked by the upper part of the cuckoo's body). This visual constraint may reduce usefulness of barred underparts as a reliable recognition cue under a typical situation near host nests. We propose a novel hypothesis that recognition cues for enemy detection can vary in a context-dependent manner (e.g., depending on whether the enemy is approached from below or from above). 
Further, we suggest that a particular cue can trigger fear reactions (escape) in some hosts or populations, whereas the same cue can trigger aggression (attack) in others, depending on the presence or absence of dangerous enemies that are phenotypically similar to brood parasites, and on the costs and benefits associated with particular host responses. PMID:22624031

  5. 29 CFR 1919.27 - Unit proof tests-winches, derricks and gear accessory thereto.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., goosenecks, eye plates, eye bolts, or other attachments), shall be tested with a proof load which shall..., a qualified technical office of an accredited gear certification agency, with the recognition that...

  6. Through the eyes of the own-race bias: eye-tracking and pupillometry during face recognition.

    PubMed

    Wu, Esther Xiu Wen; Laeng, Bruno; Magnussen, Svein

    2012-01-01

    People are generally better at remembering faces of their own race than faces of a different race, an effect known as the own-race bias (ORB). We used eye-tracking and pupillometry to investigate whether Caucasian and Asian face stimuli elicited different patterns of looking in Caucasian participants in a face-memory task. Consistent with the ORB effect, we found better recognition performance and shorter response times for own-race faces than for other-race faces. In addition, at encoding, eye movements and pupillary responses to Asian faces (i.e., the other race) differed from those to Caucasian faces (i.e., the own race). Processing of own-race faces was characterized by more active scanning, with a larger number of shorter fixations and more frequent saccades. Moreover, pupillary diameters were larger when viewing other-race than own-race faces, suggesting greater cognitive effort when encoding other-race faces.

  7. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    PubMed

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

    Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. NATIONAL PREPAREDNESS: Technologies to Secure Federal Buildings

    DTIC Science & Technology

    2002-04-25

    Excerpts from the report: "Medium; some resistance based on sensitivity of eye. Facial recognition: facial features are captured and compared; dependent on lighting, positioning ... two primary types of facial recognition technology used to create templates: 1. Local feature analysis: dozens of images from regions of the face are ... an adjacent feature. Attachment I, Access Control Technologies: Biometrics, Facial Recognition, How the technology works."

  9. Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Sulpizio, Simone; McQueen, James M.

    2012-01-01

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…

  10. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    ERIC Educational Resources Information Center

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  11. Direct Gaze Modulates Face Recognition in Young Infants

    ERIC Educational Resources Information Center

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month-old infants were shown…

  12. Experience with compound words influences their processing: An eye movement investigation with English compound words.

    PubMed

    Juhasz, Barbara J

    2016-11-14

    Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.

  13. Iris recognition as a biometric method after cataract surgery

    PubMed Central

    Roizenblatt, Roberto; Schor, Paulo; Dante, Fabio; Roizenblatt, Jaime; Belfort, Rubens

    2004-01-01

    Background Biometric methods are security technologies that use human characteristics for personal identification. Iris recognition systems use iris textures as unique identifiers. This paper presents an analysis of the verification of iris identities after intra-ocular procedures, when individuals were enrolled before the surgery. Methods Fifty-five eyes from fifty-five patients had their irises enrolled before cataract surgery was performed. They had their irises verified three times before and three times after the procedure, and the Hamming (mathematical) distance of each identification trial was determined in a controlled, ideal biometric environment. The mathematical difference between the iris code before and after the surgery was also compared to a subjective evaluation of the iris anatomy alteration by an experienced surgeon. Results A correlation between visible subjective iris texture alteration and mathematical difference was verified. We found only six cases in which the eye was no longer recognizable, but these eyes were later re-enrolled. The main anatomical changes found in the new impostor eyes are described. Conclusions Cataract surgeries change iris textures in such a way that iris recognition systems, which perform mathematical comparisons of textural biometric features, are able to detect these changes and sometimes even discard a pre-enrolled iris, considering it an impostor. In our study, re-enrollment proved to be a feasible procedure. PMID:14748929

  14. Iris recognition as a biometric method after cataract surgery.

    PubMed

    Roizenblatt, Roberto; Schor, Paulo; Dante, Fabio; Roizenblatt, Jaime; Belfort, Rubens

    2004-01-28

    Biometric methods are security technologies that use human characteristics for personal identification. Iris recognition systems use iris textures as unique identifiers. This paper presents an analysis of the verification of iris identities after intra-ocular procedures, when individuals were enrolled before the surgery. Fifty-five eyes from fifty-five patients had their irises enrolled before cataract surgery was performed. They had their irises verified three times before and three times after the procedure, and the Hamming (mathematical) distance of each identification trial was determined in a controlled, ideal biometric environment. The mathematical difference between the iris code before and after the surgery was also compared to a subjective evaluation of the iris anatomy alteration by an experienced surgeon. A correlation between visible subjective iris texture alteration and mathematical difference was verified. We found only six cases in which the eye was no longer recognizable, but these eyes were later re-enrolled. The main anatomical changes found in the new impostor eyes are described. Cataract surgeries change iris textures in such a way that iris recognition systems, which perform mathematical comparisons of textural biometric features, are able to detect these changes and sometimes even discard a pre-enrolled iris, considering it an impostor. In our study, re-enrollment proved to be a feasible procedure.
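The "Hamming (mathematical) distance" this record refers to is, in Daugman-style iris systems, the fraction of disagreeing bits between two binary iris codes, counted only where both codes are valid. A minimal sketch assuming such codes and occlusion masks; the 0.33 decision threshold is a commonly cited illustrative value, not one taken from this study:

```python
# Fractional Hamming distance between binary iris codes, masking out bits
# occluded by eyelids or specular reflections. Threshold is illustrative.

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits, counting only bits valid in both codes."""
    valid = [ma and mb for ma, mb in zip(mask_a, mask_b)]
    usable = sum(valid)
    disagreements = sum(
        1 for a, b, v in zip(code_a, code_b, valid) if v and a != b
    )
    return disagreements / usable

def same_eye(code_a, code_b, mask_a, mask_b, threshold=0.33):
    """Accept as a match when the masked distance falls below the threshold."""
    return hamming_distance(code_a, code_b, mask_a, mask_b) < threshold

# Toy 8-bit codes: one bit flipped (e.g. a post-surgical texture change),
# one bit masked out as invalid in both captures.
enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 0]
mask     = [1, 1, 1, 1, 1, 1, 1, 0]
```

With these toy inputs the distance is 1/7, well under the threshold, so the probe would still verify against the enrolled code.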

  15. Islamic Headdress Influences How Emotion is Recognized from the Eyes

    PubMed Central

    Kret, Mariska Esther; de Gelder, Beatrice

    2012-01-01

    Previous research has shown a negative bias in the perception of whole facial expressions from out-group members. Whether or not emotion recognition from the eyes is already sensitive to contextual information is presently a matter of debate. In three experiments we tested whether emotions can be recognized when just the eyes are visible and whether this recognition is affected by context cues, such as various Islamic headdresses vs. a cap or a scarf. Our results indicate that fear is still well recognized from a briefly flashed (100 ms) image of a woman wearing a burqa with less than 20% transparency of the eye region. Moreover, the type of headdress influences how emotions are recognized. In a group of participants from a non-Islamic background, fear was recognized better from women wearing a niqāb than from women wearing a cap and a shawl, whereas the opposite was observed for happy and sad expressions. The response patterns showed that fear and anger labels were more often attributed to women with a niqāb vs. a cap and a shawl, and again, an opposite pattern was observed for the happy response. However, there was no general response bias: both correct and incorrect responses were influenced by the facial expression as well. Anxiety levels and/or explicit negative associations with Islam, as measured via questionnaires, did not mediate the effects. Consistent with the face literature, we conclude that the recognition of emotions from the eyes is also influenced by context. PMID:22557983

  16. Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia.

    PubMed

    Fiset, Daniel; Blais, Caroline; Royer, Jessica; Richoz, Anne-Raphaëlle; Dugas, Gabrielle; Caldara, Roberto

    2017-08-01

    Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images. © The Author (2017). Published by Oxford University Press.

  17. Scene perception and memory revealed by eye movements and receiver-operating characteristic analyses: does a cultural difference truly exist?

    PubMed

    Evans, Kris; Rotello, Caren M; Li, Xingshan; Rayner, Keith

    2009-02-01

    Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to the background information more than did American participants. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver-operating characteristic (ROC) curves show that both sensitivity and response bias were changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the response bias shift differed with culture. The eye movement patterns were also similar across cultural groups. Both groups made longer and more fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that both Americans and Chinese use the same strategies in scene perception and memory.
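The sensitivity and response-bias changes read off the ROC curves in the study above can be summarized by the standard signal-detection indices d' and criterion c, computed from hit and false-alarm rates. A sketch with illustrative rates, not the study's data:

```python
# Signal-detection indices behind an ROC analysis of recognition memory:
# sensitivity d' and criterion c from hit and false-alarm rates.
# The rates below are illustrative, not taken from the study.

from statistics import NormalDist

def d_prime_and_criterion(hit_rate, fa_rate):
    """d' = z(H) - z(FA); c = -(z(H) + z(FA)) / 2, with z the probit."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Objects tested in their studied context vs. a new one: sensitivity drops
# and the criterion shifts, the qualitative pattern the abstract describes.
dp_old, c_old = d_prime_and_criterion(0.85, 0.20)
dp_new, c_new = d_prime_and_criterion(0.70, 0.30)
```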

  18. Loneliness and the social monitoring system: Emotion recognition and eye gaze in a real-life conversation.

    PubMed

    Lodder, Gerine M A; Scholte, Ron H J; Goossens, Luc; Engels, Rutger C M E; Verhagen, Maaike

    2016-02-01

    Based on the belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's faces in a real-life conversation. Study 1 examined 170 college students (Mage = 19.26; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (Mage = 19.33; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all emotion recognition tasks, but that it was related to increased gaze towards their conversation partner's faces. Implications for the belongingness regulation system of lonely individuals are discussed. © 2015 The British Psychological Society.

  19. Multivariate fMRI and Eye Tracking Reveal Differential Effects of Visual Interference on Recognition Memory Judgments for Objects and Scenes.

    PubMed

    O'Neil, Edward B; Watson, Hilary C; Dhillon, Sonya; Lobaugh, Nancy J; Lee, Andy C H

    2015-09-01

    Recent work has demonstrated that the perirhinal cortex (PRC) supports conjunctive object representations that aid object recognition memory following visual object interference. It is unclear, however, how these representations interact with other brain regions implicated in mnemonic retrieval and how congruent and incongruent interference influences the processing of targets and foils during object recognition. To address this, multivariate partial least squares was applied to fMRI data acquired during an interference match-to-sample task, in which participants made object or scene recognition judgments after object or scene interference. This revealed a pattern of activity sensitive to object recognition following congruent (i.e., object) interference that included PRC, prefrontal, and parietal regions. Moreover, functional connectivity analysis revealed a common pattern of PRC connectivity across interference and recognition conditions. Examination of eye movements during the same task in a separate study revealed that participants gazed more at targets than foils during correct object recognition decisions, regardless of interference congruency. By contrast, participants viewed foils more than targets for incorrect object memory judgments, but only after congruent interference. Our findings suggest that congruent interference makes object foils appear familiar and that a network of regions, including PRC, is recruited to overcome the effects of interference.

  20. Modulations of eye movement patterns by spatial filtering during the learning and testing phases of an old/new face recognition task.

    PubMed

    Lemieux, Chantal L; Collin, Charles A; Nelson, Elizabeth A

    2015-02-01

    In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (≈<5 cpf) or high-band (≈>20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
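
    The band-pass manipulation described above (e.g., a middle band of roughly 5-20 cycles/face) can be illustrated with a simple FFT mask. This is not the authors' stimulus-generation code; it is a minimal sketch assuming a square grayscale image cropped so that one face spans the image, making cycles/image approximately equal to cycles/face.

```python
import numpy as np

def bandpass_face(img, low_cpf, high_cpf):
    """Keep spatial frequencies whose radius lies in [low_cpf, high_cpf],
    measured in cycles per image (~ cycles per face for a tight crop)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h          # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w          # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpf) & (radius <= high_cpf)
    # Zero out frequencies outside the band, then invert the transform.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))
```

    For example, `bandpass_face(face, 5, 20)` would approximate the middle band, and the low and high bands follow by moving the cutoffs below 5 or above 20 cycles/face.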

  1. Personal identification by eyes.

    PubMed

    Marinović, Dunja; Njirić, Sanja; Coklo, Miran; Muzić, Vedrana

    2011-09-01

    Identification of persons through the eyes falls within the field of biometric science. Many security systems are based on biometric methods of personal identification, which determine whether a person truly is who they claim to be. The human eye contains an extremely large number of individual characteristics that make it particularly suitable for the process of identifying a person. Today, the eye is considered to be one of the most reliable body parts for human identification. Systems using iris recognition are among the most secure biometric systems.

  2. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    PubMed

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.

  3. An Investigation of Emotion Recognition and Theory of Mind in People with Chronic Heart Failure

    PubMed Central

    Habota, Tina; McLennan, Skye N.; Cameron, Jan; Ski, Chantal F.; Thompson, David R.; Rendell, Peter G.

    2015-01-01

    Objectives Cognitive deficits are common in patients with chronic heart failure (CHF), but no study has investigated whether these deficits extend to social cognition. The present study provided the first empirical assessment of emotion recognition and theory of mind (ToM) in patients with CHF. In addition, it assessed whether each of these social cognitive constructs was associated with more general cognitive impairment. Methods A group comparison design was used, with 31 CHF patients compared to 38 demographically matched controls. The Ekman Faces test was used to assess emotion recognition, and the Mind in the Eyes test to measure ToM. Measures assessing global cognition, executive functions, and verbal memory were also administered. Results There were no differences between groups on emotion recognition or ToM. The CHF group’s performance was poorer on some executive measures, but memory was relatively preserved. In the CHF group, both emotion recognition performance and ToM ability correlated moderately with global cognition (r = .38, p = .034; r = .49, p = .005, respectively), but not with executive function or verbal memory. Conclusion CHF patients with lower cognitive ability were more likely to have difficulty recognizing emotions and inferring the mental states of others. Clinical implications of these findings are discussed. PMID:26529409

  4. Micrometer-level naked-eye detection of caesium particulates in the solid state

    NASA Astrophysics Data System (ADS)

    Mori, Taizo; Akamatsu, Masaaki; Okamoto, Ken; Sumita, Masato; Tateyama, Yoshitaka; Sakai, Hideki; Hill, Jonathan P.; Abe, Masahiko; Ariga, Katsuhiko

    2013-02-01

    Large amounts of radioactive material were released from the Fukushima Daiichi nuclear plant in Japan, contaminating the local environment. During the early stages of such nuclear accidents, iodine I-131 (half-life 8.02 d) is usually detectable in the surrounding atmosphere and bodies of water. On the other hand, in the long-term, soil and water contamination by Cs-137, which has a half-life of 30.17 years, is a serious problem. In Japan, the government is planning and carrying out radioactive decontamination operations, with not only public agencies but also non-governmental organizations making radiation measurements within the country. If caesium (also radiocaesium) could be detected by the naked eye then its environmental remediation would be facilitated. Supramolecular material approaches, such as host-guest chemistry, are useful in the design of high-resolution molecular sensors and can be used to convert molecular-recognition processes into optical signals. In this work, we have developed molecular materials (here, phenols) as an optical probe for caesium cation-containing particles with implementation based on simple spray-on reagents and a commonly available fluorescent lamp for naked-eye detection in the solid state. This chemical optical probe provides a higher spatial resolution than existing radioscopes and gamma-ray cameras.

  5. The effect of inversion on face recognition in adults with autism spectrum disorder.

    PubMed

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-05-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD performed worse than controls on the recognition task but did not show an advantage for inverted face recognition. Both groups directed more visual attention to the eye than the mouth region and gaze patterns were not found to be associated with recognition performance. These results provide evidence of a normal effect of inversion on face recognition in adults with ASD.

  6. Disconnection mechanism and regional cortical atrophy contribute to impaired processing of facial expressions and theory of mind in multiple sclerosis: a structural MRI study.

    PubMed

    Mike, Andrea; Strammer, Erzsebet; Aradi, Mihaly; Orsi, Gergely; Perlaki, Gabor; Hajnal, Andras; Sandor, Janos; Banati, Miklos; Illes, Eniko; Zaitsev, Alexander; Herold, Robert; Guttmann, Charles R G; Illes, Zsolt

    2013-01-01

    Successful socialization requires the ability to understand others' mental states. This ability, called mentalization (Theory of Mind), may become deficient and contribute to everyday life difficulties in multiple sclerosis. We aimed to explore the impact of brain pathology on mentalization performance in multiple sclerosis. Mentalization performance of 49 patients with multiple sclerosis was compared to 24 age- and gender-matched healthy controls. T1- and T2-weighted three-dimensional brain MRI images were acquired at 3 Tesla from patients with multiple sclerosis and 18 gender- and age-matched healthy controls. We assessed overall brain cortical thickness in patients with multiple sclerosis and the scanned healthy controls, and measured the total and regional T1 and T2 white matter lesion volumes in patients with multiple sclerosis. Performances in tests of recognition of mental states and emotions from facial expressions and eye gazes correlated with both total T1-lesion load and regional T1-lesion load of association fiber tracts interconnecting cortical regions related to visual and emotion processing (genu and splenium of corpus callosum, right inferior longitudinal fasciculus, right inferior fronto-occipital fasciculus, uncinate fasciculus). Both of these tests showed correlations with specific cortical areas involved in emotion recognition from facial expressions (right and left fusiform face area, frontal eye field), processing of emotions (right entorhinal cortex) and socially relevant information (left temporal pole). Thus, both a disconnection mechanism due to white matter lesions and cortical thinning of specific brain areas may result in cognitive deficits in multiple sclerosis, affecting emotion and mental state processing from facial expressions and contributing to everyday and social life difficulties of these patients.

  7. Eye-Movement Parameters and Reading Speed.

    ERIC Educational Resources Information Center

    Sovik, Nils; Arntzen, Oddvar; Samuelstuen, Marit

    2000-01-01

    Addresses the relationship between four eye movement parameters and reading speed of 20 twelve-year-old children during silent and oral reading. Predicts reading speed by the following variables: recognition span, average fixation duration, and number of regressive saccades. Indicates that in terms of reading speed, significant interrelationships…

  8. Oncologists' non-verbal behavior and analog patients' recall of information.

    PubMed

    Hillen, Marij A; de Haes, Hanneke C J M; van Tienhoven, Geertjan; van Laarhoven, Hanneke W M; van Weert, Julia C M; Vermeulen, Daniëlle M; Smets, Ellen M A

    2016-06-01

    Background Information in oncological consultations is often excessive. Patients who recall information better are more satisfied, less anxious and more adherent. Optimal recall may be enhanced by the oncologist's non-verbal communication. We tested the influence of three non-verbal behaviors, i.e. eye contact, body posture and smiling, on patients' recall of information and perceived friendliness of the oncologist. Moreover, the influence of patient characteristics on recall was examined, both directly and as a moderator of non-verbal communication. Material and methods Non-verbal communication of an oncologist was experimentally varied using video vignettes. In total 194 breast cancer patients/survivors and healthy women participated as 'analog patients', viewing a randomly selected video version while imagining themselves in the role of the patient. Directly after viewing, they evaluated the oncologist. From 24 to 48 hours later, participants' passive recall, i.e. recognition, and free recall of information provided by the oncologist were assessed. Results Participants' recognition was higher if the oncologist maintained more consistent eye contact (β = 0.17). More eye contact and smiling led to a perception of the oncologist as more friendly. Body posture and smiling did not significantly influence recall. Older age predicted significantly worse recognition (β = -0.28) and free recall (β = -0.34) of information. Conclusion Oncologists may be able to facilitate their patients' recall functioning through consistent eye contact. This seems particularly relevant for older patients, whose recall is significantly worse. These findings can be used in training, focused on how to maintain eye contact while managing computer tasks.

  9. Age Deficits in Facial Affect Recognition: The Influence of Dynamic Cues.

    PubMed

    Grainger, Sarah A; Henry, Julie D; Phillips, Louise H; Vanman, Eric J; Allen, Roy

    2017-07-01

    Older adults have difficulties in identifying most facial expressions of emotion. However, most aging studies have presented static photographs of intense expressions, whereas in everyday experience people see emotions that develop and change. The present study was designed to assess whether age-related difficulties with emotion recognition are reduced when more ecologically valid (i.e., dynamic) stimuli are used. We examined the effect of stimuli format (i.e., static vs. dynamic) on facial affect recognition in two separate studies that included independent samples and distinct stimuli sets. In addition to younger and older participants, a middle-aged group was included in Study 1 and eye gaze patterns were assessed in Study 2. Across both studies, older adults performed worse than younger adults on measures of facial affect recognition. In Study 1, older and middle-aged adults benefited from dynamic stimuli, but only when the emotional displays were subtle. Younger adults gazed more at the eye region of the face relative to older adults (Study 2), but dynamic presentation increased attention towards the eye region for younger adults only. Together, these studies provide important and novel insights into the specific circumstances in which older adults may be expected to experience difficulties in perceiving facial emotions. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. The living eye "disarms" uncommitted autoreactive T cells by converting them to Foxp3(+) regulatory cells following local antigen recognition.

    PubMed

    Zhou, Ru; Horai, Reiko; Silver, Phyllis B; Mattapallil, Mary J; Zárate-Bladés, Carlos R; Chong, Wai Po; Chen, Jun; Rigden, Rachael C; Villasmil, Rafael; Caspi, Rachel R

    2012-02-15

    Immune privilege is used by the eye, brain, reproductive organs, and gut to preserve structural and functional integrity in the face of inflammation. The eye is arguably the most vulnerable and, therefore, also the most "privileged" of tissues; paradoxically, it remains subject to destructive autoimmunity. It has been proposed, although never proven in vivo, that the eye can induce T regulatory cells (Tregs) locally. Using Foxp3-GFP reporter mice expressing a retina-specific TCR, we now show that uncommitted T cells rapidly convert in the living eye to Foxp3(+) Tregs in a process involving retinal Ag recognition, de novo Foxp3 induction, and proliferation. This takes place within the ocular tissue and is supported by retinoic acid, which is normally present in the eye because of its function in the chemistry of vision. Nonconverted T cells showed evidence of priming but appeared restricted from expressing effector function in the eye. Pre-existing ocular inflammation impeded conversion of uncommitted T cells into Tregs. Importantly, retina-specific T cells primed in vivo before introduction into the eye were resistant to Treg conversion in the ocular environment and, instead, caused severe uveitis. Thus, uncommitted T cells can be disarmed, but immune privilege is unable to protect from uveitogenic T cells that have acquired effector function prior to entering the eye. These findings shed new light on the phenomenon of immune privilege and on its role, as well as its limitations, in actively controlling immune responses in the tissue.

  11. The Living Eye “Disarms” Uncommitted Autoreactive T Cells by Converting Them to FoxP3+ Regulatory Cells Following Local Antigen Recognition

    PubMed Central

    Zhou, Ru; Horai, Reiko; Silver, Phyllis B; Mattapallil, Mary J; Zárate-Bladés, Carlos R; Chong, Wai Po; Chen, Jun; Rigden, Rachael C; Villasmil, Rafael; Caspi, Rachel R

    2011-01-01

    Immune privilege is used by the eye, brain, reproductive organs and gut to preserve structural and functional integrity in the face of inflammation. The eye is arguably the most vulnerable, and therefore also the most “privileged” of tissues, but paradoxically, remains subject to destructive autoimmunity. It has been proposed, although never proven in vivo, that the eye can induce T regulatory cells (Tregs) locally. Using FoxP3-GFP reporter mice expressing a retina-specific T cell receptor, we now show that uncommitted T cells rapidly convert in the living eye to FoxP3+ Tregs in a process involving retinal antigen recognition, de novo FoxP3 induction and proliferation. This takes place within the ocular tissue and is supported by retinoic acid, which is normally present in the eye due to its function in the chemistry of vision. Non-converted T cells showed evidence of priming, but appeared restricted from expressing effector function in the eye. Preexisting ocular inflammation impeded conversion of uncommitted T cells into Tregs. Importantly, retina-specific T cells primed in vivo before introduction into the eye were resistant to Treg conversion in the ocular environment, and instead caused severe uveitis. Thus, uncommitted T cells can be disarmed, but immune privilege is unable to protect from uveitogenic T cells that have acquired effector function prior to entering the eye. These findings shed new light on the phenomenon of immune privilege and on its role, as well as its limitations, in actively controlling immune responses in the tissue. PMID:22238462

  12. Improved Open-Microphone Speech Recognition

    NASA Astrophysics Data System (ADS)

    Abrash, Victor

    2002-12-01

    Many current and future NASA missions make extreme demands on mission personnel both in terms of work load and in performing under difficult environmental conditions. In situations where hands are impeded or needed for other tasks, eyes are busy attending to the environment, or tasks are sufficiently complex that ease of use of the interface becomes critical, spoken natural language dialog systems offer unique input and output modalities that can improve efficiency and safety. They also offer new capabilities that would not otherwise be available. For example, many NASA applications require astronauts to use computers in micro-gravity or while wearing space suits. Under these circumstances, command and control systems that allow users to issue commands or enter data in hands- and eyes-busy situations become critical. Speech recognition technology designed for current commercial applications limits the performance of the open-ended state-of-the-art dialog systems being developed at NASA. For example, today's recognition systems typically listen to user input only during short segments of the dialog, and user input outside of these short time windows is lost. Mistakes detecting the start and end times of user utterances can lead to mistakes in the recognition output, and the dialog system as a whole has no way to recover from this, or any other, recognition error. Systems also often require the user to signal when that user is going to speak, which is impractical in a hands-free environment, or only allow a system-initiated dialog requiring the user to speak immediately following a system prompt. In this project, SRI has developed software to enable speech recognition in a hands-free, open-microphone environment, eliminating the need for a push-to-talk button or other signaling mechanism. The software continuously captures a user's speech and makes it available to one or more recognizers. By constantly monitoring and storing the audio stream, it provides the spoken dialog manager extra flexibility to recognize the signal with no audio gaps between recognition requests, as well as to rerecognize portions of the signal, or to rerecognize speech with different grammars, acoustic models, recognizers, start times, and so on. SRI expects that this new open-mic functionality will enable NASA to develop better error-correction mechanisms for spoken dialog systems, and may also enable new interaction strategies.

  13. Improved Open-Microphone Speech Recognition

    NASA Technical Reports Server (NTRS)

    Abrash, Victor

    2002-01-01

    Many current and future NASA missions make extreme demands on mission personnel both in terms of work load and in performing under difficult environmental conditions. In situations where hands are impeded or needed for other tasks, eyes are busy attending to the environment, or tasks are sufficiently complex that ease of use of the interface becomes critical, spoken natural language dialog systems offer unique input and output modalities that can improve efficiency and safety. They also offer new capabilities that would not otherwise be available. For example, many NASA applications require astronauts to use computers in micro-gravity or while wearing space suits. Under these circumstances, command and control systems that allow users to issue commands or enter data in hands- and eyes-busy situations become critical. Speech recognition technology designed for current commercial applications limits the performance of the open-ended state-of-the-art dialog systems being developed at NASA. For example, today's recognition systems typically listen to user input only during short segments of the dialog, and user input outside of these short time windows is lost. Mistakes detecting the start and end times of user utterances can lead to mistakes in the recognition output, and the dialog system as a whole has no way to recover from this, or any other, recognition error. Systems also often require the user to signal when that user is going to speak, which is impractical in a hands-free environment, or only allow a system-initiated dialog requiring the user to speak immediately following a system prompt. In this project, SRI has developed software to enable speech recognition in a hands-free, open-microphone environment, eliminating the need for a push-to-talk button or other signaling mechanism. The software continuously captures a user's speech and makes it available to one or more recognizers. By constantly monitoring and storing the audio stream, it provides the spoken dialog manager extra flexibility to recognize the signal with no audio gaps between recognition requests, as well as to rerecognize portions of the signal, or to rerecognize speech with different grammars, acoustic models, recognizers, start times, and so on. SRI expects that this new open-mic functionality will enable NASA to develop better error-correction mechanisms for spoken dialog systems, and may also enable new interaction strategies.
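
    The buffering scheme this abstract describes (continuous capture, gap-free hand-off, re-recognition of earlier audio) can be sketched as a timestamped chunk store. This is a hypothetical illustration, not SRI's software; the class and method names are invented.

```python
class OpenMicBuffer:
    """Store timestamped audio chunks so that any time span can later be
    handed to one or more recognizers, including re-recognition of the
    same span with different grammars or acoustic models."""

    def __init__(self):
        self._chunks = []  # list of (start_time_s, duration_s, samples)

    def append(self, start_time, duration, samples):
        """Record one captured chunk; chunks are assumed contiguous."""
        self._chunks.append((start_time, duration, samples))

    def segment(self, t0, t1):
        """Concatenate samples from every chunk overlapping [t0, t1);
        gap-free as long as chunks were appended contiguously."""
        out = []
        for start, dur, samples in self._chunks:
            if start < t1 and start + dur > t0:
                out.extend(samples)
        return out
```

    A dialog manager could call `segment(utt_start, utt_end)` once for a first-pass recognizer and again for a second pass with a different grammar, with no audio lost between recognition requests.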

  14. A light-up probe with aggregation-induced emission characteristics (AIE) for selective imaging, naked-eye detection and photodynamic killing of Gram-positive bacteria.

    PubMed

    Feng, Guangxue; Yuan, Youyong; Fang, Hu; Zhang, Ruoyu; Xing, Bengang; Zhang, Guanxin; Zhang, Deqing; Liu, Bin

    2015-08-11

    We report the design and synthesis of a red fluorescent AIE light-up probe for selective recognition, naked-eye detection, and image-guided photodynamic killing of Gram-positive bacteria, including vancomycin-resistant Enterococcus strains.

  15. Lexical Processes in the Recognition of Japanese Horizontal and Vertical Compounds

    ERIC Educational Resources Information Center

    Miwa, Koji; Dijkstra, Ton

    2017-01-01

    This lexical decision eye-tracking study investigated whether horizontal and vertical readings elicit comparable behavioral patterns and whether reading directions modulate lexical processes. Response times and eye movements were recorded during a lexical decision task with Japanese bimorphemic compound words presented vertically. The data were…

  16. Scene perception and memory revealed by eye movements and receiver-operating characteristic analyses: Does a cultural difference truly exist?

    PubMed Central

    Evans, Kris; Rotello, Caren M.; Li, Xingshan; Rayner, Keith

    2009-01-01

    Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to the background information more than American participants do. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver-operating characteristic (ROC) curves show that both sensitivity and response bias were changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the response bias shift differed with culture. The eye movement patterns were also similar across cultural groups. Both groups made more and longer fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that both Americans and Chinese use the same strategies in scene perception and memory. PMID:18785074
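
    The ROC analysis this record relies on is built from cumulative hit and false-alarm rates computed at progressively laxer confidence criteria. The sketch below is illustrative only (it is not the authors' analysis code) and assumes old/new recognition ratings on a 1-6 scale, with 6 = "sure old".

```python
import numpy as np

def roc_points(old_conf, new_conf, n_levels=6):
    """One (false-alarm rate, hit rate) point per confidence criterion,
    from strictest ('rated >= 6') to most lax ('rated >= 2')."""
    old_conf = np.asarray(old_conf)
    new_conf = np.asarray(new_conf)
    fas, hits = [], []
    for c in range(n_levels, 1, -1):
        hits.append(np.mean(old_conf >= c))   # "old" responses to studied items
        fas.append(np.mean(new_conf >= c))    # "old" responses to new items
    return np.array(fas), np.array(hits)
```

    On such a plot, sensitivity appears as the curve's distance above the chance diagonal, while a response-bias shift moves the operating point along the curve, which is how the two effects reported above can be separated.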

  17. Social Experience Does Not Abolish Cultural Diversity in Eye Movements

    PubMed Central

    Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto

    2011-01-01

    Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and facial expressions of emotion categorization) differs across cultural groups. Currently, many of the differences reported in previous studies have asserted that culture itself is responsible for shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. PMID:21886626

  18. Eye-tracking the own-race bias in face recognition: revealing the perceptual and socio-cognitive mechanisms.

    PubMed

    Hills, Peter J; Pake, J Michael

    2013-12-01

    Own-race faces are recognised more accurately than other-race faces and may even be viewed differently as measured by an eye-tracker (Goldinger, Papesh, & He, 2009). Alternatively, observer race might direct eye-movements (Blais, Jack, Scheepers, Fiset, & Caldara, 2008). Observer differences in eye-movements are likely to be based on experience of the physiognomic characteristics that are differentially discriminating for Black and White faces. Two experiments are reported that employed standard old/new recognition paradigms in which Black and White observers viewed Black and White faces with their eye-movements recorded. Experiment 1 showed that there were observer race differences in terms of the features scanned but observers employed the same strategy across different types of faces. Experiment 2 demonstrated that other-race faces could be recognised more accurately if participants had their first fixation directed to more diagnostic features using fixation crosses. These results are entirely consistent with those presented by Blais et al. (2008) and with the perceptual interpretation that the own-race bias is due to inappropriate attention allocated to the facial features (Hills & Lewis, 2006, 2011). Copyright © 2013 Elsevier B.V. All rights reserved.

  19. The Two-Systems Account of Theory of Mind: Testing the Links to Social-Perceptual and Cognitive Abilities

    PubMed Central

    Meinhardt-Injac, Bozana; Daum, Moritz M.; Meinhardt, Günter; Persike, Malte

    2018-01-01

    According to the two-systems account of theory of mind (ToM), understanding mental states of others involves both fast social-perceptual processes and slower, reflexive cognitive operations (Frith and Frith, 2008; Apperly and Butterfill, 2009). To test the respective roles of specific abilities in either of these processes, we administered 15 experimental procedures to a large sample of 343 participants, testing ability in face recognition and holistic perception, language, and reasoning. ToM was measured by a set of tasks requiring the ability to track and to infer complex emotional and mental states of others from faces, eyes, spoken language, and prosody. We used structural equation modeling to test the relative strengths of a social-perceptual (face processing related) and reflexive-cognitive (language and reasoning related) path in predicting ToM ability. The two paths accounted for 58% of ToM variance, thus validating a general two-systems framework. Testing specific predictor paths revealed language and face recognition as strong and significant predictors of ToM. For reasoning, there were neither direct nor mediated effects, albeit reasoning was strongly associated with language. Holistic face perception also failed to show a direct link with ToM ability, while there was a mediated effect via face recognition. These results highlight the respective roles of face recognition and language for the social brain, and contribute a closer empirical specification of the general two-systems account. PMID:29445336

  20. Lexical Competition in Non-Native Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Weber, Andrea; Cutler, Anne

    2004-01-01

    Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name ("pencil," given target "panda") than on less confusable distractors…

  1. The Ontogeny of Face Recognition: Eye Contact and Sweet Taste Induce Face Preference in 9- and 12-Week-Old Human Infants.

    ERIC Educational Resources Information Center

    Blass, Elliott M.; Camp, Carole A.

    2001-01-01

    Calm or crying 9- and 12-week-olds sat facing a researcher who gazed into their eyes or at their forehead and delivered either a sucrose solution or pacifier or delivered nothing. Found that combining sweet taste and eye contact was necessary and sufficient for calm 9- and 12-week-olds to form a preference for the researcher, but not for crying…

  2. Dry Eye: an Inflammatory Ocular Disease

    PubMed Central

    Hessen, Michelle; Akpek, Esen Karamursel

    2014-01-01

    Keratoconjunctivitis sicca, or dry eye, is a common ocular disease prompting millions of individuals to seek ophthalmological care. Regardless of the underlying etiology, dry eye has been shown to be associated with abnormalities in the pre-corneal tear film and subsequent inflammatory changes in the entire ocular surface, including the adnexa, conjunctiva and cornea. Since the recognition of the role of inflammation in dry eye, a number of novel treatments designed to inhibit various inflammatory pathways have been investigated. Currently used medications, including cyclosporine A, corticosteroids, tacrolimus, tetracycline derivatives and autologous serum, are effective for the management of dry eye and lead to measurable clinical improvement. PMID:25279127

  3. Perceiving and Remembering Events Cross-Linguistically: Evidence from Dual-Task Paradigms

    ERIC Educational Resources Information Center

    Trueswell, John C.; Papafragou, Anna

    2010-01-01

    What role does language play during attention allocation in perceiving and remembering events? We recorded adults' eye movements as they studied animated motion events for a later recognition task. We compared native speakers of two languages that use different means of expressing motion (Greek and English). In Experiment 1, eye movements revealed…

  4. Revisiting Huey: on the importance of the upper part of words during reading.

    PubMed

    Perea, Manuel

    2012-12-01

    Recent research has shown that the upper part of words enjoys an advantage over the lower part of words in the recognition of isolated words. The goal of the present article was to examine how removing the upper/lower part of the words influences eye movement control during silent normal reading. The participants' eye movements were monitored when reading intact sentences and when reading sentences in which the upper or the lower portion of the text was deleted. Results showed a greater reading cost (longer fixations) when the upper part of the text was removed than when the lower part of the text was removed (i.e., it influenced when to move the eyes). However, there was little influence on the initial landing position on a target word (i.e., on the decision as to where to move the eyes). In addition, lexical-processing difficulty (as inferred from the magnitude of the word frequency effect on a target word) was affected by text degradation. The implications of these findings for models of visual-word recognition and reading are discussed.

  5. Predicting the Valence of a Scene from Observers’ Eye Movements

    PubMed Central

    R.-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J.; Nefti-Meziani, Samia; Heikkilä, Janne

    2015-01-01

    Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene, such as its valence. In order to determine the emotional category of images using eye movements, existing methods often learn a classifier using several features extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well studied. To address this issue, we study the contribution of features extracted from eye movements to the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of the features, learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and the angular behavior of eye movements in the recognition of the valence of images. PMID:26407322
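
    The histogram-style features described above can be sketched from a raw fixation sequence; the bin counts, normalization, and the `saccade_features` name below are illustrative choices, not the paper's implementation:

```python
import math

def saccade_features(fixations, n_bins=8, max_len=500.0):
    """Histogram features over the saccades implied by a fixation sequence.

    fixations: list of (x, y, duration_ms) tuples from an eye tracker.
    Returns a concatenated feature vector: a histogram of saccade
    orientation followed by a histogram of saccade length (amplitude).
    """
    orient = [0.0] * n_bins
    length = [0.0] * n_bins
    # Each consecutive pair of fixations implies one saccade.
    for (x0, y0, _), (x1, y1, _) in zip(fixations, fixations[1:]):
        dx, dy = x1 - x0, y1 - y0
        amp = math.hypot(dx, dy)
        ang = math.atan2(dy, dx) % (2 * math.pi)  # orientation in [0, 2*pi)
        orient[min(int(ang / (2 * math.pi) * n_bins), n_bins - 1)] += 1
        length[min(int(amp / max_len * n_bins), n_bins - 1)] += 1
    n = max(len(fixations) - 1, 1)  # normalize by saccade count
    return [v / n for v in orient] + [v / n for v in length]
```

    Vectors like these, one per image viewing, would then be concatenated with the other features and fed to the support vector machine under whichever fusion scheme is being tested.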

  6. Differences between Dyslexic and Non-Dyslexic Children in the Performance of Phonological Visual-Auditory Recognition Tasks: An Eye-Tracking Study

    PubMed Central

    Tiadi, Aimé; Seassau, Magali; Gerard, Christophe-Loïc; Bucci, Maria Pia

    2016-01-01

    The objective of this study was to further explore phonological visual-auditory recognition tasks in a group of fifty-six healthy children (mean age: 9.9 ± 0.3 years) and to compare these data to those recorded in twenty-six age-matched dyslexic children (mean age: 9.8 ± 0.2 years). Eye movements from both eyes were recorded using an infrared video-oculography system (MobileEBT® e(y)e BRAIN). The recognition task was performed under four conditions in which the target object was displayed either with phonologically unrelated objects (baseline condition), with cohort or rhyme objects (cohort and rhyme conditions, respectively), or with both together (rhyme + cohort condition). The percentage of the total time spent on the targets and the latency of the first saccade on the target were measured. Results in healthy children showed that the percentage of the total time spent in the baseline condition was significantly longer than in the other conditions, and that the latency of the first saccade in the cohort condition was significantly longer than in the other conditions; interestingly, the latency decreased significantly with the increasing age of the children. The developmental trend of phonological awareness was also observed in healthy children only. In contrast, we observed that for dyslexic children the total time spent on the target was similar in all four conditions tested, and also that they had similar latency values in both cohort and rhyme conditions. These findings suggest a different sensitivity to the phonological competitors between dyslexic and non-dyslexic children. Also, the eye-tracking technique provides online information about phonological awareness capabilities in children. PMID:27438352
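
    The two outcome measures above (percentage of total time on the target and latency of the first saccade to it) can be approximated from raw gaze samples. In this sketch the first sample falling inside the target's area of interest stands in for the first saccade landing, a simplification of what a video-oculography system actually reports; the function and parameter names are illustrative:

```python
def target_metrics(samples, target_aoi, stim_onset_ms=0.0):
    """Approximate gaze metrics for a rectangular target area of interest.

    samples: list of (t_ms, x, y) gaze samples, time-ordered.
    target_aoi: (x0, y0, x1, y1) bounding box of the target object.
    Returns (percent_time_on_target, first_entry_latency_ms or None).
    """
    x0, y0, x1, y1 = target_aoi
    # Samples whose gaze position falls inside the target box.
    on_target = [s for s in samples if x0 <= s[1] <= x1 and y0 <= s[2] <= y1]
    pct = 100.0 * len(on_target) / len(samples) if samples else 0.0
    latency = on_target[0][0] - stim_onset_ms if on_target else None
    return pct, latency
```

    With equal-rate sampling, the sample-count ratio is equivalent to a time ratio; a dwell-time weighted version would be needed for variable-rate trackers.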

  7. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    PubMed

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features observers rely on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth made it possible to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.

  8. Mapping the emotional face. How individual face parts contribute to successful emotion recognition

    PubMed Central

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have been frequently shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features observers rely on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, making it possible to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth made it possible to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation. PMID:28493921
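
    One plausible way to compute a per-tile contribution map of the kind described in this record (the abstract does not spell out the exact metric) is the fraction of correct trials in which each tile had been uncovered at the moment of recognition. The `tile_importance` name and trial encoding below are illustrative:

```python
def tile_importance(trials, n_tiles=48):
    """Per-tile contribution to recognition, sketched as the fraction of
    correct trials in which each tile was visible when the participant
    stopped the sequence. This is a plausible stand-in for the paper's
    measure, which is not specified in the abstract.

    trials: list of (visible_tiles, correct) pairs, where visible_tiles
    is the set of tile indices uncovered at the moment of response.
    """
    counts = [0] * n_tiles
    n_correct = 0
    for visible, correct in trials:
        if correct:
            n_correct += 1
            for t in visible:
                counts[t] += 1
    if n_correct == 0:
        return [0.0] * n_tiles
    return [c / n_correct for c in counts]
```

    The resulting vector can be reshaped to the tile grid and overlaid on the face image to visualize which regions carried the recognition.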

  9. Sensitive fluorescence on-off probes for the fast detection of a chemical warfare agent mimic.

    PubMed

    Khan, Muhammad Shar Jhahan; Wang, Ya-Wen; Senge, Mathias O; Peng, Yu

    2018-01-15

    Two highly sensitive probes bearing a nucleophilic imine moiety have been utilized for the selective detection of chemical warfare agent (CWA) mimics. Diethyl chlorophosphate (DCP) was used as the CWA mimic. Both iminocoumarin-benzothiazole-based probes not only demonstrated a remarkable fluorescence ON-OFF response and good recognition, but also exhibited fast response times (10 s) along with color changes upon addition of DCP. Limits of detection for the two sensors 1 and 2 were calculated as 0.065 μM and 0.21 μM, respectively, which are much lower than those of most other reported probes. These two probes not only show high sensitivity and selectivity in solution, but can also be applied for the recognition of DCP in the gas phase, with significant color changes easily observed by the naked eye. Copyright © 2017 Elsevier B.V. All rights reserved.
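
    Assuming the common 3σ/slope convention for fluorimetric detection limits (the abstract does not state which convention the authors used), LOD values like those quoted can be computed from blank measurements and a calibration slope. All numbers and names here are illustrative:

```python
def limit_of_detection(blank_signals, slope):
    """Limit of detection via the common 3*sigma/slope convention.

    blank_signals: repeated fluorescence readings of the blank.
    slope: sensitivity of the calibration curve (signal per uM).
    Returns the LOD in the concentration units of the slope.
    """
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    # Sample standard deviation of the blank measurements.
    sigma = (sum((s - mean) ** 2 for s in blank_signals) / (n - 1)) ** 0.5
    return 3.0 * sigma / slope
```

    A steeper calibration slope or a quieter blank both lower the LOD, which is why highly responsive ON-OFF probes tend to report sub-micromolar limits.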

  10. Rapid assessment method for prevalence and intensity of Loa loa infection.

    PubMed Central

    Takougang, Innocent; Meremikwu, Martin; Wandji, Samuel; Yenshu, Emmanuel V.; Aripko, Ben; Lamlenn, Samson B.; Eka, Braide L.; Enyong, Peter; Meli, Jean; Kale, Oladele; Remme, Jan H.

    2002-01-01

    OBJECTIVE: To assess the validity of observations on eye worm and Calabar swellings for the rapid assessment of the prevalence and intensity of loiasis at the community level. METHOD: A total of 12895 individuals over the age of 15 years living in 102 communities in Cameroon and Nigeria took part in the study. A standardized questionnaire was administered to participants from whom finger-prick blood samples were collected and examined for Loa loa microfilariae. Rapid assessments of the prevalence and intensity of loiasis were made on the basis of a history of eye worm or Calabar swellings. FINDINGS: There was a strong correlation between the indices of the rapid assessment procedures and the parasitological indices of L. loa endemicity. The rapid assessment indices were effective in diagnosing high-risk communities (sensitivity 94-100%; specificity 66-92%). The highest sensitivity (100%) and specificity (92%) were obtained with a rapid assessment procedure based on a history of eye worm lasting 1-7 days together with confirmation by the guided recognition of a photograph of adult L. loa in the eye. CONCLUSION: Rapid assessment of the prevalence and intensity of loiasis at the community level can be achieved using a procedure based on the history of eye worm lasting 1-7 days together with confirmation by the guided recognition of a photograph of an adult L. loa in the eye. PMID:12481206
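
    The sensitivity and specificity ranges quoted above follow from the standard contingency-table definitions, applied here at the community level (rapid-assessment classification versus the parasitological gold standard). A minimal helper:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity, in percent, from contingency counts.

    tp: high-risk communities correctly flagged by the rapid assessment.
    fn: high-risk communities missed; tn: low-risk correctly cleared;
    fp: low-risk communities incorrectly flagged.
    """
    sens = 100.0 * tp / (tp + fn)
    spec = 100.0 * tn / (tn + fp)
    return sens, spec
```

    The counts here are hypothetical inputs, not the study's data; the study reports only the resulting percentages.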

  11. Visual acuity at 10 years in Cryotherapy for Retinopathy of Prematurity (CRYO-ROP) study eyes: effect of retinal residua of retinopathy of prematurity.

    PubMed

    Dobson, Velma; Quinn, Graham E; Summers, C Gail; Hardy, Robert J; Tung, Betty

    2006-02-01

    To describe recognition (letter) acuity at age 10 years in eyes with and without retinal residua of retinopathy of prematurity (ROP). Presence and severity of ROP residua were documented by a study ophthalmologist. Masked testers measured monocular recognition visual acuity (Early Treatment of Diabetic Retinopathy Study) when the children were 10 years old. Two hundred forty-seven of 255 surviving Cryotherapy for Retinopathy of Prematurity (CRYO-ROP) randomized trial patients participated. A reference group of 102 of 104 Philadelphia-based CRYO-ROP study participants who did not develop ROP was also tested. More severe retinal residua were associated with worse visual acuity, regardless of whether retinal ablation was performed to treat the severe acute-phase ROP. However, within each ROP residua category, there was a wide range of visual acuity results. This is the first report of the relation between visual acuity (Early Treatment of Diabetic Retinopathy Study charts) and structural abnormalities related to ROP in a large group of eyes that developed threshold ROP in the perinatal period. Visual deficits are greater in eyes with more severe retinal residua than in eyes with mild or no residua. However, severity of ROP residua does not predict the visual acuity of an individual eye because within a single residua category, acuity may range from near normal to blind.

  12. Selective Attention in Vision: Recognition Memory for Superimposed Line Drawings.

    ERIC Educational Resources Information Center

    Goldstein, E. Bruce; Fink, Susan I.

    1981-01-01

    Four experiments show that observers can selectively attend to one of two stationary superimposed pictures. Selective recognition occurred with large displays in which observers were free to make eye movements during a 3-sec exposure and with small displays in which observers were instructed to fixate steadily on a point. (Author/RD)

  13. Development of Face Recognition in Infant Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Myowa-Yamakoshi, M.; Yamaguchi, M.K.; Tomonaga, M.; Tanaka, M.; Matsuzawa, T.

    2005-01-01

    In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1-18 weeks, using preferential-looking procedures that measured the infants' eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of the mother of each infant and an "average" chimpanzee face using…

  14. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    PubMed

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: Contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories

    PubMed Central

    Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang

    2014-01-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  16. [Eyes test performance among unaffected mothers of patients with schizophrenia].

    PubMed

    Birdal, Seval; Yıldırım, Ejder Akgün; Arslan Delice, Mehtap; Yavuz, Kasım Fatih; Kurt, Erhan

    2015-01-01

    Theory of Mind (ToM) deficit is a widely accepted feature of schizophrenia. A number of studies have examined ToM deficits of first-degree relatives of schizophrenic patients as genetic markers of schizophrenia. Examination of mentalization capacity among mothers of schizophrenia patients may improve our understanding of theory of mind impairments in schizophrenia. The aim of this study is to use the "Reading the Mind in the Eyes" test to examine theory of mind capacity among mothers of schizophrenic patients. Performance on the "Reading the Mind in the Eyes" test (Eyes Test) was compared between the mothers of schizophrenic patients (n=47) and mothers whose children have no psychotic mental illness (n=47). Test results were analyzed based on the categorization of test items as positive, negative, and neutral. Mothers of schizophrenic patients displayed poorer performance on the Eyes Test compared to mothers in the control group, particularly in the recognition of positive and neutral mental representations. There was no statistically significant difference in the recognition of negative mental representations between mothers of patients and the control group. The results of this study indicate that mothers of schizophrenic patients differ in some theory of mind patterns. Theory of mind may be an important developmental or endophenotypic factor in the pathogenesis of schizophrenia and should be further evaluated using other biological markers.

  17. Study of optical design of three-dimensional digital ophthalmoscopes.

    PubMed

    Fang, Yi-Chin; Yen, Chih-Ta; Chu, Chin-Hsien

    2015-10-01

    This study primarily involves using optical zoom structures to design a three-dimensional (3D) human-eye optical sensory system with infrared and visible light. According to experimental data on two-dimensional (2D) and 3D images, human-eye recognition of 3D images is substantially higher (approximately 13.182%) than that of 2D images. Thus, 3D images are more effective than 2D images when they are used at work or in high-recognition devices. In the optical system design, infrared and visible light wavebands were incorporated as light sources to perform simulations. The results can be used to facilitate the design of optical systems suitable for 3D digital ophthalmoscopes.

  18. Using Eye Tracking to Understand the Responses of Learners to Vocabulary Learning Strategy Instruction and Use

    ERIC Educational Resources Information Center

    Liu, Pei-Lin

    2014-01-01

    This study examined the influence of morphological instruction in an eye-tracking English vocabulary recognition task. Sixty-eight freshmen enrolled in an English course and received either traditional or morphological instruction for learning English vocabulary. The experimental part of the study was conducted over two-hour class periods for…

  19. Can Gaze Avoidance Explain Why Individuals with Asperger's Syndrome Can't Recognise Emotions from Facial Expressions?

    ERIC Educational Resources Information Center

    Sawyer, Alyssa C. P.; Williamson, Paul; Young, Robyn L.

    2012-01-01

    Research has shown that individuals with Autism Spectrum Disorders (ASD) have difficulties recognising emotions from facial expressions. Since eye contact is important for accurate emotion recognition, and individuals with ASD tend to avoid eye contact, this tendency for gaze aversion has been proposed as an explanation for the emotion recognition…

  20. The Eyes Know Time: A Novel Paradigm to Reveal the Development of Temporal Memory

    ERIC Educational Resources Information Center

    Pathman, Thanujeni; Ghetti, Simona

    2014-01-01

    Temporal memory in 7-year-olds, 10-year-olds, and young adults (N = 78) was examined introducing a novel eye-movement paradigm. Participants learned object sequences and were tested under three conditions: temporal order, temporal context, and recognition. Age-related improvements in accuracy were found across conditions; accuracy in the temporal…

  1. Theories of Spoken Word Recognition Deficits in Aphasia: Evidence from Eye-Tracking and Computational Modeling

    ERIC Educational Resources Information Center

    Mirman, Daniel; Yee, Eiling; Blumstein, Sheila E.; Magnuson, James S.

    2011-01-01

    We used eye-tracking to investigate lexical processing in aphasic participants by examining the fixation time course for rhyme (e.g., "carrot-parrot") and cohort (e.g., "beaker-beetle") competitors. Broca's aphasic participants exhibited larger rhyme competition effects than age-matched controls. A re-analysis of previously reported data (Yee,…

  2. Familiarity and Recollection Produce Distinct Eye Movement, Pupil and Medial Temporal Lobe Responses when Memory Strength Is Matched

    ERIC Educational Resources Information Center

    Kafkas, Alexandros; Montaldi, Daniela

    2012-01-01

    Two experiments explored eye measures (fixations and pupil response patterns) and brain responses (BOLD) accompanying the recognition of visual object stimuli based on familiarity and recollection. In both experiments, the use of a modified remember/know procedure led to high confidence and matched accuracy levels characterising strong familiarity…

  3. Eye-Tracking Study on Facial Emotion Recognition Tasks in Individuals with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tsang, Vicky

    2018-01-01

    The eye-tracking experiment was carried out to assess fixation duration and scan paths that individuals with and without high-functioning autism spectrum disorders employed when identifying simple and complex emotions. Participants viewed human photos of facial expressions and decided on the identification of emotion, the negative-positive emotion…

  4. [Neurological disease and facial recognition].

    PubMed

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by a unilateral right occipitotemporal lesion and right cerebral dominance of facial recognition. Further, circumscribed lesions and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered as Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the reading the mind in the eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damage to the amygdalae and surrounding limbic system. The social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage; for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Further, patients with myotonic dystrophy type 1 (DM 1), a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that facial expression recognition impairment in DM 1 patients is associated with lesions in the amygdalae and insulae. Our study results indicate that behaviors and personality traits in DM 1 patients, which are revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.

  5. 'Reading the Mind in the Eyes': an fMRI study of adolescents with autism and their siblings.

    PubMed

    Holt, R J; Chura, L R; Lai, M-C; Suckling, J; von dem Hagen, E; Calder, A J; Bullmore, E T; Baron-Cohen, S; Spencer, M D

    2014-11-01

    Mentalizing deficits are a hallmark of the autism spectrum condition (ASC) and a potential endophenotype for atypical social cognition in ASC. Differences in performance and neural activation on the 'Reading the Mind in the Eyes' task (the Eyes task) have been identified in individuals with ASC in previous studies. Performance on the Eyes task along with the associated neural activation was examined in adolescents with ASC (n = 50), their unaffected siblings (n = 40) and typically developing controls (n = 40). Based on prior literature that males and females with ASC display different cognitive and associated neural characteristics, analyses were stratified by sex. Three strategies were applied to test for endophenotypes at the level of neural activation: (1) identifying and locating conjunctions of ASC-control and sibling-control differences; (2) examining whether the sibling group is comparable to the ASC or intermediate between the ASC and control groups; and (3) examining spatial overlaps between ASC-control and sibling-control differences across multiple thresholds. Impaired behavioural performance on the Eyes task was observed in males with ASC compared to controls, but only at trend level in females; and no difference in performance was identified between sibling and same-sex control groups in both sexes. Neural activation showed a substantial endophenotype effect in the female groups but this was only modest in the male groups. Behavioural impairment on complex emotion recognition associated with mental state attribution is a phenotypic, rather than an endophenotypic, marker of ASC. However, the neural response during the Eyes task is a potential endophenotypic marker for ASC, particularly in females.

  6. The optimal viewing position effect in printed versus cursive words: Evidence of a reading cost for the cursive font.

    PubMed

    Danna, Jérémy; Massendari, Delphine; Furnari, Benjamin; Ducrot, Stéphanie

    2018-06-13

    Two eye-movement experiments were conducted to examine the effects of font type on the recognition of words presented in central vision, using a variable-viewing-position technique. Two main questions were addressed: (1) Is the optimal viewing position (OVP) for word recognition modulated by font type? (2) Is the cursive font more appropriate than the printed font for word recognition in children who exclusively write using a cursive script? In order to disentangle the role of perceptual difficulty associated with the cursive font and the impact of writing habits, we tested French adults (Experiment 1) and second-grade French children, the latter having exclusively learned to write in cursive (Experiment 2). Results revealed that the printed font is more appropriate than the cursive font for recognizing words in both adults and children: adults were slightly less accurate at recognizing cursive than printed stimuli, and children were slower to identify cursive stimuli than printed stimuli. Eye-movement measures also revealed that the OVP curves were flattened for the cursive font in both adults and children. We conclude that the perceptual difficulty of the cursive font degrades word recognition by impacting OVP stability. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Natural user interface as a supplement of the holographic Raman tweezers

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel

    2014-09-01

    Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via the mouse or joystick. Several recent attempts have instead exploited touch tablets, 2D cameras or the Kinect game console. We proposed a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the low-cost "Leap Motion" and "MyGaze" sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes signals from the sensors and sends control commands to the HRT, which subsequently controls the positions of the trapping beams, the micropositioning stage and the Raman spectra acquisition system. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") for manipulating particles are displayed on a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful if both hands are busy. The proposed methods make manual control of HRT more efficient and also provide a good platform for its future semi-automated and fully automated operation.
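    The abstract describes the NUI software only at the level of "processes signals from the sensors and sends the control commands to HRT". As a rough illustration of that multimodal dispatch idea (all event and command names below are hypothetical, not taken from the authors' software), a minimal sketch:

```python
# Minimal sketch of multimodal event-to-command dispatch, as described
# conceptually in the abstract. All event and command names are hypothetical.

def dispatch(event):
    """Map a (modality, payload) event to a trap-control command."""
    modality, payload = event
    if modality == "hand":          # dominant-hand tracking moves the active trap
        return ("move_trap", payload)       # payload: (x, y) position
    if modality == "gesture":       # secondary-hand gesture toggles the virtual tool
        return ("select_tool", payload)     # payload: "pin" or "tweezers"
    if modality == "gaze":          # eye tracker supplies the observed particle
        return ("autofocus", payload)       # payload: (x, y) fixation point
    if modality == "speech":        # speech commands when both hands are busy
        return ("run_command", payload)     # payload: e.g. "acquire spectrum"
    return ("ignore", payload)

events = [("hand", (10, 20)), ("gesture", "tweezers"), ("speech", "acquire spectrum")]
print([dispatch(e) for e in events])
```

    In a real system each branch would translate into calls to the trap-control, stage, and spectrometer interfaces; the point of the sketch is only the routing of heterogeneous input modalities onto a single command stream.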

  8. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD

    ERIC Educational Resources Information Center

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-01-01

    This study examined the extent to which a computer-based social skills intervention called "FaceSay"™ was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). "FaceSay"™ offers students simulated practice with eye gaze, joint attention,…

  9. The Role of Clarity and Blur in Guiding Visual Attention in Photographs

    ERIC Educational Resources Information Center

    Enns, James T.; MacDonald, Sarah C.

    2013-01-01

    Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…

  10. An Inner Face Advantage in Children's Recognition of Familiar Peers

    ERIC Educational Resources Information Center

    Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang

    2008-01-01

    Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…

  11. No differences in emotion recognition strategies in children with autism spectrum disorder: evidence from hybrid faces.

    PubMed

    Evers, Kris; Kerkhof, Inneke; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2014-01-01

    Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.

  12. Social skills training for children with autism spectrum disorder using a robotic behavioral intervention system.

    PubMed

    Yun, Sang-Seok; Choi, JongSuk; Park, Sung-Kee; Bong, Gui-Young; Yoo, HeeJeong

    2017-07-01

    We designed a robot system that assisted in behavioral intervention programs for children with autism spectrum disorder (ASD). The eight-session intervention program was based on the discrete trial teaching protocol and focused on two basic social skills: eye contact and facial emotion recognition. The robotic interactions occurred in four modules: training element query, recognition of human activity, coping-mode selection, and follow-up action. Children with ASD who were between 4 and 7 years old and who had verbal IQ ≥ 60 were recruited and randomly assigned to the treatment group (TG, n = 8, 5.75 ± 0.89 years) or control group (CG, n = 7; 6.32 ± 1.23 years). The therapeutic robot facilitated the treatment intervention in the TG, and a human assistant facilitated the treatment intervention in the CG; the intervention procedures were identical in both groups. The primary outcome measures included parent-completed questionnaires, the Autism Diagnostic Observation Schedule (ADOS), and frequency of eye contact, which was measured with the partial interval recording method. After completing treatment, the eye contact percentages were significantly increased in both groups. For facial emotion recognition, the percentages of correct answers increased in similar patterns in both groups compared to baseline (p > 0.05), with no difference between the TG and CG (p > 0.05). The subjects' difficulties in play and general behavioral and emotional symptoms were significantly diminished after treatment (p < 0.05). These results showed that the robot-facilitated and human-facilitated behavioral interventions had similar positive effects on eye contact and facial emotion recognition, which suggests that robots are useful mediators of social skills training for children with ASD. Autism Res 2017, 10: 1306-1323. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
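    The partial interval recording method used above divides a session into fixed-length intervals and scores an interval as positive if the target behaviour (here, eye contact) occurs at any point within it; the reported percentage is the share of positive intervals. A minimal sketch, with an assumed 10 s interval length and illustrative timestamps (neither is specified in the abstract):

```python
# Partial interval recording: split a session into fixed intervals and score
# an interval as positive if the target behaviour occurs at any time within it.
# The 10 s interval length and the event timestamps are illustrative only.

def partial_interval_score(event_times, session_length, interval=10.0):
    """Return the percentage of intervals containing at least one event."""
    n_intervals = int(session_length // interval)
    hits = sum(
        any(i * interval <= t < (i + 1) * interval for t in event_times)
        for i in range(n_intervals)
    )
    return 100.0 * hits / n_intervals

# Eye-contact onsets (seconds) in a 60 s session: events fall in 3 of 6 intervals.
print(partial_interval_score([2.0, 14.5, 15.0, 41.0], session_length=60.0))  # → 50.0
```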

  13. Action and Emotion Recognition from Point Light Displays: An Investigation of Gender Differences

    PubMed Central

    Alaerts, Kaat; Nackaerts, Evelien; Meyns, Pieter; Swinnen, Stephan P.; Wenderoth, Nicole

    2011-01-01

    Folk psychology advocates the existence of gender differences in socio-cognitive functions such as ‘reading’ the mental states of others or discerning subtle differences in body-language. A female advantage has been demonstrated for emotion recognition from facial expressions, but virtually nothing is known about gender differences in recognizing bodily stimuli or body language. The aim of the present study was to investigate potential gender differences in a series of tasks involving the recognition of distinct features from point light displays (PLDs) depicting bodily movements of a male and female actor. Although recognition scores were considerably high at the overall group level, female participants were more accurate than males in recognizing the depicted actions from PLDs. Response times were significantly longer for males than for females on PLD recognition tasks involving (i) the general recognition of ‘biological’ versus ‘non-biological’ (or ‘scrambled’) motion; or (ii) the recognition of the ‘emotional state’ of the PLD-figures. No gender differences were revealed for a control test (involving the identification of a color change in one of the dots) or for recognizing the gender of the PLD-figure. In addition, previous findings of a female advantage on a facial emotion recognition test (the ‘Reading the Mind in the Eyes Test’ (Baron-Cohen, 2001)) were replicated in this study. Interestingly, a strong correlation was revealed between emotion recognition from bodily PLDs versus facial cues. This relationship indicates that inter-individual or gender-dependent differences in recognizing emotions are relatively generalized across facial and bodily emotion perception. Moreover, the tight correlation between a subject's ability to discern subtle emotional cues from PLDs and the subject's ability to basically discriminate biological from non-biological motion indicates that differences in emotion recognition may, at least to some degree, be related to more basic differences in processing biological motion per se. PMID:21695266

  14. Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze.

    PubMed

    Wells, Laura Jean; Gillespie, Steven Mark; Rotshtein, Pia

    2016-01-01

    The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down.

  15. Decoding facial expressions based on face-selective and motion-sensitive areas.

    PubMed

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.

  16. Identification of Emotional Facial Expressions: Effects of Expression, Intensity, and Sex on Eye Gaze

    PubMed Central

    Rotshtein, Pia

    2016-01-01

    The identification of emotional expressions is vital for social interaction, and can be affected by various factors, including the expressed emotion, the intensity of the expression, the sex of the face, and the gender of the observer. This study investigates how these factors affect the speed and accuracy of expression recognition, as well as dwell time on the two most significant areas of the face: the eyes and the mouth. Participants were asked to identify expressions from female and male faces displaying six expressions (anger, disgust, fear, happiness, sadness, and surprise), each with three levels of intensity (low, moderate, and normal). Overall, responses were fastest and most accurate for happy expressions, but slowest and least accurate for fearful expressions. More intense expressions were also classified most accurately. Reaction time showed a different pattern, with slowest response times recorded for expressions of moderate intensity. Overall, responses were slowest, but also most accurate, for female faces. Relative to male observers, women showed greater accuracy and speed when recognizing female expressions. Dwell time analyses revealed that attention to the eyes was about three times greater than on the mouth, with fearful eyes in particular attracting longer dwell times. The mouth region was attended to the most for fearful, angry, and disgusted expressions and least for surprise. These results extend previous findings to show important effects of expression, emotion intensity, and sex on expression recognition and gaze behaviour, and may have implications for understanding the ways in which emotion recognition abilities break down. PMID:27942030

  17. Effects of Saccadic Bilateral Eye Movements on Memory in Children and Adults: An Exploratory Study

    ERIC Educational Resources Information Center

    Parker, Andrew; Dagnall, Neil

    2012-01-01

    The effects of saccadic bilateral (horizontal) eye movements on true and false memory in adults and children were investigated. Both adults and children encoded lists of associated words in the Deese-Roediger-McDermott paradigm followed by a test of recognition memory. Just prior to retrieval, participants were asked to engage in 30 s of bilateral…

  18. Effects of Aging and Noise on Real-Time Spoken Word Recognition: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Ben-David, Boaz M.; Chambers, Craig G.; Daneman, Meredyth; Pichora-Fuller, M. Kathleen; Reingold, Eyal M.; Schneider, Bruce A.

    2011-01-01

    Purpose: To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. Method: Twenty-four younger and 24 older adults followed spoken instructions referring to depicted…

  19. Ventromedial prefrontal cortex mediates visual attention during facial emotion recognition.

    PubMed

    Wolf, Richard C; Philippi, Carissa L; Motzkin, Julian C; Baskaya, Mustafa K; Koenigs, Michael

    2014-06-01

    The ventromedial prefrontal cortex is known to play a crucial role in regulating human social and emotional behaviour, yet the precise mechanisms by which it subserves this broad function remain unclear. Whereas previous neuropsychological studies have largely focused on the role of the ventromedial prefrontal cortex in higher-order deliberative processes related to valuation and decision-making, here we test whether ventromedial prefrontal cortex may also be critical for more basic aspects of orienting attention to socially and emotionally meaningful stimuli. Using eye tracking during a test of facial emotion recognition in a sample of lesion patients, we show that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. This finding demonstrates a heretofore unrecognized function of the ventromedial prefrontal cortex: the basic attentional process of controlling eye movements to faces expressing emotion. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Relations between scanning and recognition of own- and other-race faces in 6- and 9-month-old infants.

    PubMed

    Liu, Shaoying; Quinn, Paul C; Xiao, Naiqi G; Wu, Zhijun; Liu, Guangxi; Lee, Kang

    2018-06-01

    Infants typically see more own-race faces than other-race faces. Existing evidence shows that this difference in face race experience has profound consequences for face processing: as early as 6 months of age, infants scan own- and other-race faces differently and display superior recognition for own- relative to other-race faces. However, it is unclear whether scanning of own-race faces is related to the own-race recognition advantage in infants. To bridge this gap in the literature, the current study used eye tracking to investigate the relation between own-race face scanning and recognition in 6- and 9-month-old Asian infants (N = 82). The infants were familiarized with dynamic own- and other-race faces, and then their face recognition was tested with static face images. Both age groups recognized own- but not other-race faces. Also, regardless of race, the more infants scanned the eyes of the novel versus familiar faces at test, the better their face-recognition performance. In addition, both 6- and 9-month-olds fixated significantly longer on the nose of own-race faces, and greater fixation on the nose during test trials correlated positively with individual novelty preference scores in the own- but not other-race condition. The results suggest that some aspects of the relation between recognition and scanning are independent of differential experience with face race, whereas other aspects are affected by such experience. More broadly, the findings imply that scanning and recognition may become linked during infancy at least in part through the influence of perceptual experience. © 2018 The Institute of Psychology, Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.

  1. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    PubMed

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Effects of orthographic consistency on eye movement behavior: German and English children and adults process the same words differently.

    PubMed

    Rau, Anne K; Moll, Kristina; Snowling, Margaret J; Landerl, Karin

    2015-02-01

    The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading-possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Eye-closure-triggered paroxysmal activity and cognitive impairment: a case report.

    PubMed

    Termine, Cristiano; Rubboli, Guido; Veggiotti, Pierangelo

    2006-01-01

    To study the neuropsychological status of an epileptic patient presenting with epileptic activity triggered by eye closure over a 14-year follow-up period. The patient was studied at 12 (T0) and 26 (T1) years of age; during this period he underwent periodic clinical evaluations and EEG investigations; brain magnetic resonance imaging (MRI) was performed at 12 years of age. A neuropsychological assessment was carried out at both T0 and T1. At T0 and T1, neuropsychological tests (digit and word span, graphoesthesia, reaction times to auditory stimuli, sentence repetition, word repetition, digital gnosis, backward counting [i.e., 100-0]) were performed during video-EEG monitoring with either eyes closed or eyes open, to evaluate possible transitory effects related to ongoing epileptic activity. Moreover, at T0 the patient underwent the Wechsler Intelligence Scale for Children-Revised and, at T1, the Wechsler Adult Intelligence Scale-Revised. EEG recordings showed continuous epileptic activity triggered by eye closure, disappearing only on eye opening, at both T0 and T1 (in the latter case, anteriorly predominant). Neuropsychological performance with eyes closed did not differ significantly from performance with eyes open, at T0 or at T1. The Wechsler Intelligence scales showed a deterioration of performance at T1 with respect to T0; in addition, at T1, attention and short-term memory abnormalities, impairment in facial recognition and block design, and defective results on the Continuous Performance Test and Wisconsin Card Sorting Test were observed. The lack of difference between the neuropsychological test results obtained with eyes closed and those obtained with eyes open suggests that in our patient the epileptic activity did not cause transitory cognitive abnormalities. The deterioration on the Wechsler Intelligence Scales over the follow-up period might be interpreted as the result of a disruption of cognitive processes related to the persistence of continuous epileptic activity during eye closure over the years. We speculate that a dysfunction of posterior cortical areas involved in visual processing might underlie the impairment on the face recognition and block design tests as well as the eye closure sensitivity.

  4. Investigating an Application of Speech-to-Text Recognition: A Study on Visual Attention and Learning Behaviour

    ERIC Educational Resources Information Center

    Huang, Y-M.; Liu, C-J.; Shadiev, Rustam; Shen, M-H.; Hwang, W-Y.

    2015-01-01

    One major drawback of previous research on speech-to-text recognition (STR) is that most findings showing the effectiveness of STR for learning were based upon subjective evidence. Very few studies have used eye-tracking techniques to investigate visual attention of students on STR-generated text. Furthermore, not much attention was paid to…

  5. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  6. Autistic trait interactions underlie sex-dependent facial recognition abilities in the normal population.

    PubMed

    Valla, Jeffrey M; Maendel, Jeffrey W; Ganzel, Barbara L; Barsky, Andrew R; Belmonte, Matthew K

    2013-01-01

    Autistic face processing difficulties have been attributed either to a uniquely social deficit or to a piecemeal cognitive "style." Co-morbidity of social deficits and piecemeal cognition in autism makes teasing apart these accounts difficult. These traits vary normally, and are more separable, in the general population, suggesting another way to compare the accounts. Participants completed the Autism Quotient survey of autistic traits, and one of three face recognition tests: full-face, eyes-only, or mouth-only. Social traits predicted performance in the full-face condition in both sexes. Eyes-only males' performance was predicted by a social × cognitive trait interaction: attention to detail boosted face recognition in males with few social traits, but hindered performance in those reporting many social traits. This suggests social/non-social Autism Spectrum Conditions (ASC) trait interactions at the behavioral level. In the presence of few ASC-like difficulties in social reciprocity, an ASC-like attention to detail may confer advantages on typical males' face recognition skills. On the other hand, when attention to detail co-occurs with difficulties in social reciprocity, a detailed focus may exacerbate such already present social difficulties, as is thought to occur in autism.

  7. Recognition memory strength is predicted by pupillary responses at encoding while fixation patterns distinguish recollection from familiarity.

    PubMed

    Kafkas, Alexandros; Montaldi, Daniela

    2011-10-01

    Thirty-five healthy participants incidentally encoded a set of man-made and natural object pictures, while their pupil response and eye movements were recorded. At retrieval, studied and new stimuli were rated as novel, familiar (strong, moderate, or weak), or recollected. We found that both pupil response and fixation patterns at encoding predict later recognition memory strength. The extent of pupillary response accompanying incidental encoding was found to be predictive of subsequent memory. In addition, the number of fixations was also predictive of later recognition memory strength, suggesting that the accumulation of greater visual detail, even for single objects, is critical for the creation of a strong memory. Moreover, fixation patterns at encoding distinguished between recollection and familiarity at retrieval, with more dispersed fixations predicting familiarity and more clustered fixations predicting recollection. These data reveal close links between the autonomic control of pupil responses and eye movement patterns on the one hand and memory encoding on the other. Moreover, the data illustrate quantitative as well as qualitative differences in the incidental visual processing of stimuli, which are differentially predictive of the strength and the kind of memory experienced at recognition.

  8. Breaking object correspondence across saccades impairs object recognition: The role of color and luminance.

    PubMed

    Poth, Christian H; Schneider, Werner X

    2016-09-01

    Rapid saccadic eye movements bring the foveal region of the eye's retina onto objects for high-acuity vision. Saccades change the location and resolution of objects' retinal images. To perceive objects as visually stable across saccades, correspondence between the objects before and after the saccade must be established. We have previously shown that breaking object correspondence across the saccade causes a decrement in object recognition (Poth, Herwig, & Schneider, 2015). Color and luminance can establish object correspondence, but it is unknown how these surface features contribute to transsaccadic visual processing. Here, we investigated whether changing the surface features color-and-luminance or color alone across saccades impairs postsaccadic object recognition. Participants made saccades to peripheral objects, which either maintained or changed their surface features across the saccade. After the saccade, participants briefly viewed a letter within the saccade target object (terminated by a pattern mask). Postsaccadic object recognition was assessed as participants' accuracy in reporting the letter. Experiment A used the colors green and red with different luminances as surface features; Experiment B used blue and yellow with approximately equal luminances. Changing the surface features across the saccade impaired postsaccadic object recognition in both experiments. These findings reveal a link between object recognition and object correspondence relying on the surface features color and luminance, which is currently not addressed in theories of transsaccadic perception. We interpret the findings within a recent theory ascribing this link to visual attention (Schneider, 2013).

  9. Eye movement identification based on accumulated time feature

    NASA Astrophysics Data System (ADS)

    Guo, Baobao; Wu, Qiang; Sun, Jiande; Yan, Hua

    2017-06-01

    Eye movement is a new kind of feature for biometric recognition; it has many advantages compared with other features such as fingerprint, face, and iris. It is not only a static characteristic but also a combination of brain activity and muscle behavior, which makes it effective in preventing spoofing attacks. In addition, eye movements can be incorporated together with faces, irises, and other features recorded from the face region into multimodal systems. In this paper, we present an exploratory study of eye movement identification based on the eye movement datasets provided by Komogortsev et al. in 2011, using different classification methods. Saccade and fixation durations are extracted from the eye movement data as features. Furthermore, a performance analysis is conducted on different classification methods, such as BP, RBF, and Elman neural networks and the SVM, in order to provide a reference for future research in this field.
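    The feature extraction described above can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a simple velocity-threshold (I-VT) segmentation of gaze samples into fixations and saccades, and the sampling rate, threshold, and data layout are all illustrative assumptions.

```python
# Sketch (assumed, not from the paper): velocity-threshold (I-VT) segmentation
# of gaze samples, from which total fixation and saccade durations are derived
# as per-recording features for a downstream classifier.

def segment_ivt(xs, ys, hz=100, vel_threshold=50.0):
    """Label each inter-sample interval as 'saccade' or 'fixation' by velocity.

    xs, ys: gaze coordinates in degrees; hz: sampling rate in Hz;
    vel_threshold: deg/s above which movement counts as a saccade.
    """
    dt = 1.0 / hz
    labels = []
    for i in range(1, len(xs)):
        vel = ((xs[i] - xs[i-1]) ** 2 + (ys[i] - ys[i-1]) ** 2) ** 0.5 / dt
        labels.append("saccade" if vel > vel_threshold else "fixation")
    return labels

def duration_features(labels, hz=100):
    """Total fixation time and saccade time (seconds)."""
    dt = 1.0 / hz
    return {"fixation_time": labels.count("fixation") * dt,
            "saccade_time": labels.count("saccade") * dt}

# Example: 0.05 s of still gaze followed by a rapid 5-degree jump.
xs = [0.0] * 5 + [5.0] * 5
ys = [0.0] * 10
labels = segment_ivt(xs, ys)
feats = duration_features(labels)
```

    The resulting feature dictionary (here one saccade interval and eight fixation intervals) would then be fed to any of the classifiers named in the abstract.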

  10. Reading the mind in the infant eyes: paradoxical effects of oxytocin on neural activity and emotion recognition in watching pictures of infant faces.

    PubMed

    Voorthuis, Alexandra; Riem, Madelon M E; Van IJzendoorn, Marinus H; Bakermans-Kranenburg, Marian J

    2014-09-11

    The neuropeptide oxytocin facilitates parental caregiving and is involved in the processing of infant vocal cues. In this randomized controlled trial with functional magnetic resonance imaging, we examined the influence of intranasally administered oxytocin on neural activity during emotion recognition in infant faces. Blood oxygenation level dependent (BOLD) responses during emotion recognition were measured in 50 women who were administered 16 IU of oxytocin or a placebo. Participants performed an adapted version of the Infant Facial Expressions of Emotions from Looking at Pictures (IFEEL Pictures), a task developed to assess the perception and interpretation of infants' facial expressions. Experimentally induced oxytocin increased activation in the inferior frontal gyrus (IFG), the middle temporal gyrus (MTG), and the superior temporal gyrus (STG). However, oxytocin decreased performance on the IFEEL Pictures task. Our findings suggest that oxytocin enhances the processing of facial cues to the emotional state of infants at the neural level, but at the same time may decrease the correct interpretation of infants' facial expressions at the behavioral level. This article is part of a Special Issue entitled Oxytocin and Social Behav. © 2013 Published by Elsevier B.V.

  11. View-Invariant Object Category Learning, Recognition, and Search: How Spatial and Object Attention are Coordinated Using Surface-Based Attentional Shrouds

    ERIC Educational Resources Information Center

    Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio

    2009-01-01

    How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified…

  12. Encoding Strategies in Primary School Children: Insights from an Eye-Tracking Approach and the Role of Individual Differences in Attentional Control

    ERIC Educational Resources Information Center

    Roebers, Claudia M.; Schmid, Corinne; Roderer, Thomas

    2010-01-01

    The authors explored different aspects of encoding strategy use in primary school children by including (a) an encoding strategy task in which children's encoding strategy use was recorded through a remote eye-tracking device and, later, free recall and recognition for target items was assessed; and (b) tasks measuring resistance to interference…

  13. Codebook-based electrooculography data analysis towards cognitive activity recognition.

    PubMed

    Lagodzinski, P; Shirahama, K; Grzegorzek, M

    2018-04-01

    With the advancement of mobile/wearable technology, people have started to use a variety of sensing devices to track their daily activities as well as their health and fitness in order to improve quality of life. This work addresses eye movement analysis, which, owing to its strong correlation with cognitive tasks, can be successfully utilized in activity recognition. Eye movements are recorded using an electrooculographic (EOG) system built into the frames of glasses, which can be worn more unobtrusively and comfortably than other devices. Since the obtained information is low-level sensor data expressed as a sequence of values sampled at constant intervals (100 Hz), the cognitive activity recognition problem is formulated as sequence classification. However, it is unclear what kinds of features are useful for accurate cognitive activity recognition. Thus, a codebook approach is applied which, instead of relying on feature engineering, describes sequences of recorded EOG data by a distribution of characteristic subsequences (codewords), where the codewords are obtained by clustering a large number of subsequences. Further, statistical analysis of the codeword distribution reveals features that are characteristic of a certain activity class. Experimental results demonstrate good accuracy of codebook-based cognitive activity recognition, reflecting the effective usage of the codewords. Copyright © 2017 Elsevier Ltd. All rights reserved.
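    The codebook idea can be sketched roughly as follows, as a toy illustration rather than the paper's pipeline: fixed-size windows of the signal are clustered (here with a tiny k-means), and each recording is then described by a histogram of its nearest codewords. The window size, codebook size, and synthetic signals are all illustrative assumptions.

```python
# Sketch (assumed parameters): codewords from clustering signal windows, then
# per-recording codeword histograms as classification features.
import random

def windows(seq, size, step):
    return [tuple(seq[i:i+size]) for i in range(0, len(seq) - size + 1, step)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means standing in for a proper clustering library."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist(p, centers[j]))].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def codeword_histogram(seq, codebook, size=4, step=2):
    """Normalized distribution of nearest codewords over the signal's windows."""
    hist = [0] * len(codebook)
    for w in windows(seq, size, step):
        hist[min(range(len(codebook)), key=lambda j: dist(w, codebook[j]))] += 1
    total = sum(hist)
    return [h / total for h in hist]

# Two toy EOG-like recordings: a flat signal and an oscillating one.
flat = [0.0] * 40
osc = [(-1.0) ** i for i in range(40)]
codebook = kmeans(windows(flat, 4, 2) + windows(osc, 4, 2), k=2)
h_flat = codeword_histogram(flat, codebook)
h_osc = codeword_histogram(osc, codebook)
```

    The two recordings end up with clearly different codeword distributions, which is exactly the property the statistical analysis in the abstract exploits to separate activity classes.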

  14. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    PubMed Central

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  15. Eye Tracking Reveals a Crucial Role for Facial Motion in Recognition of Faces by Infants

    ERIC Educational Resources Information Center

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was…

  16. Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills.

    PubMed

    Bennetts, Rachel J; Mole, Joseph; Bate, Sarah

    2017-09-01

    Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.

  17. Cyclosporine ophthalmic emulsions for the treatment of dry eye: a review of the clinical evidence

    PubMed Central

    Ames, Philip; Galor, Anat

    2015-01-01

    Dry eye has gained recognition as a public health problem given its high prevalence, morbidity and cost implications. Although dry eye is common and affects patients’ quality of life, only one medication, cyclosporine 0.05% emulsion, has been approved by the US FDA for its treatment. In this review, we summarize the basic science and clinical data regarding the use of cyclosporine in the treatment of dry eye. Randomized controlled trials showed that cyclosporine emulsion outperformed vehicles in the majority of trials, consistently decreasing corneal staining and increasing Schirmer scores. Symptom improvement was more variable, however, with ocular dryness shown to be the most consistently improved symptom over vehicle. PMID:25960865

  18. Disk space and load time requirements for eye movement biometric databases

    NASA Astrophysics Data System (ADS)

    Kasprowski, Pawel; Harezlak, Katarzyna

    2016-06-01

    Biometric identification is a very popular area of interest nowadays. Problems with so-called physiological methods such as fingerprint or iris recognition have resulted in increased attention to methods measuring behavioral patterns. Eye movement based biometric (EMB) identification is one such behavioral method, and the intensive development of eye tracking devices has made it possible to define new methods for processing the eye movement signal. Such a method should be supported by efficient storage used to collect eye movement data and provide it for further analysis. The aim of this research was to evaluate various storage setups. Several aspects were taken into consideration, including disk space usage and the time required for loading and saving the whole data set or chosen parts of it.
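    The kind of comparison described above can be sketched as follows: the same eye movement samples are stored as text (CSV) and as a packed binary file, and disk size plus save/load times are measured. The file names, sample layout, and formats are illustrative assumptions, not the setups evaluated in the paper.

```python
# Sketch (assumed formats): disk space and save/load timing for gaze samples
# stored as CSV text versus packed little-endian doubles.
import csv, os, struct, tempfile, time

samples = [(i / 1000.0, 0.1 * i % 5, 0.2 * i % 5) for i in range(10000)]  # (t, x, y)

def bench_csv(path):
    t0 = time.perf_counter()
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(samples)
    save = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, newline="") as f:
        data = [(float(t), float(x), float(y)) for t, x, y in csv.reader(f)]
    load = time.perf_counter() - t0
    return os.path.getsize(path), save, load, data

def bench_binary(path):
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for s in samples:
            f.write(struct.pack("<3d", *s))  # 3 doubles = 24 bytes per sample
    save = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        raw = f.read()
    data = [struct.unpack_from("<3d", raw, i) for i in range(0, len(raw), 24)]
    load = time.perf_counter() - t0
    return os.path.getsize(path), save, load, data

with tempfile.TemporaryDirectory() as d:
    csv_size, _, _, csv_data = bench_csv(os.path.join(d, "gaze.csv"))
    bin_size, _, _, bin_data = bench_binary(os.path.join(d, "gaze.bin"))
```

    Both round trips are lossless for double-precision samples; the trade-off the study weighs is between human-readable text and the fixed, predictable footprint of the binary layout.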

  19. Surviving blind decomposition: A distributional analysis of the time-course of complex word recognition.

    PubMed

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-11-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er), and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2; 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. The Perception of Multiple Images

    ERIC Educational Resources Information Center

    Goldstein, E. Bruce

    1975-01-01

    A discussion of visual field, foveal and peripheral vision, eye fixations, recognition and recall of pictures, memory for meaning of pictures, and the relation between speed of presentation and memory. (Editor)

  1. Pattern Recognition Analysis of Age-Related Retinal Ganglion Cell Signatures in the Human Eye

    PubMed Central

    Yoshioka, Nayuta; Zangerl, Barbara; Nivison-Smith, Lisa; Khuu, Sieu K.; Jones, Bryan W.; Pfeiffer, Rebecca L.; Marc, Robert E.; Kalloniatis, Michael

    2017-01-01

    Purpose: To characterize macular ganglion cell layer (GCL) changes with age and provide a framework to assess changes in ocular disease. This study used data clustering to analyze macular GCL patterns from optical coherence tomography (OCT) in a large cohort of subjects without ocular disease. Methods: Single eyes of 201 patients evaluated at the Centre for Eye Health (Sydney, Australia) were retrospectively enrolled (age range, 20–85); 8 × 8 grid locations obtained from Spectralis OCT macular scans were analyzed with unsupervised classification into statistically separable classes sharing common GCL thickness and change with age. The resulting classes and gridwise data were fitted with linear and segmented linear regression curves. Additionally, normalized data were analyzed to determine regression as a percentage. The accuracy of each model was examined through comparison of the predicted 50-year-old equivalent macular GCL thickness for the entire cohort to a true 50-year-old reference cohort. Results: Pattern recognition clustered GCL thickness across the macula into five to eight spatially concentric classes. An F-test demonstrated segmented linear regression to be the most appropriate model for macular GCL change. The pattern recognition–derived and normalized models revealed less difference between the predicted macular GCL thickness and the reference cohort (average ± SD 0.19 ± 0.92 and −0.30 ± 0.61 μm) than a gridwise model (average ± SD 0.62 ± 1.43 μm). Conclusions: Pattern recognition successfully identified statistically separable macular areas that undergo a segmented linear reduction with age. This regression model better predicted macular GCL thickness. The various unique spatial patterns revealed by pattern recognition, combined with core GCL thickness data, provide a framework to analyze GCL loss in ocular disease. PMID:28632847
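    A segmented linear fit of the kind preferred by the F-test above can be sketched as a one-breakpoint model: a grid search over candidate breakpoints, with an ordinary least-squares line on each side. The breakpoint grid and the toy thickness-versus-age data are illustrative assumptions, not the study's data.

```python
# Sketch (assumed data): one-breakpoint segmented linear regression chosen by
# minimizing the summed squared error of the two ordinary least-squares pieces.

def ols(xs, ys):
    """Slope and intercept of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def sse(xs, ys, slope, intercept):
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))

def segmented_fit(xs, ys, breakpoints):
    """Return (error, breakpoint, left_fit, right_fit) with minimal total SSE."""
    best = None
    for bp in breakpoints:
        left = [(x, y) for x, y in zip(xs, ys) if x <= bp]
        right = [(x, y) for x, y in zip(xs, ys) if x > bp]
        if len(left) < 2 or len(right) < 2:
            continue
        lx, ly = zip(*left)
        rx, ry = zip(*right)
        lfit, rfit = ols(lx, ly), ols(rx, ry)
        err = sse(lx, ly, *lfit) + sse(rx, ry, *rfit)
        if best is None or err < best[0]:
            best = (err, bp, lfit, rfit)
    return best

# Toy GCL-like data: flat thickness until age 50, then a linear decline.
ages = list(range(20, 86, 5))
thick = [50.0 if a <= 50 else 50.0 - 0.2 * (a - 50) for a in ages]
err, bp, (ls, li), (rs, ri) = segmented_fit(ages, thick, range(30, 76, 5))
```

    On such data the fitted model recovers a near-zero slope before the breakpoint and the decline rate after it, which is the shape of change the study reports for macular GCL thickness.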

  2. Local ICA for the Most Wanted face recognition

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Szu, Harold H.; Markowitz, Zvi

    2000-04-01

    Facial disguises of FBI Most Wanted criminals are inevitable and anticipated in the design of automatic/aided target recognition (ATR) imaging systems. For example, a man's facial hair may hide his mouth and chin but not necessarily the nose and eyes; sunglasses will cover the eyes but not the nose, mouth, and chin. This fact motivates us to build sets of independent component analysis (ICA) bases separately for each facial region of the entire alleged criminal group. Then, given an alleged criminal face, collective votes are obtained from all facial regions in terms of 'yes, no, abstain' and are tallied for a potential alarm. Moreover, an innocent outsider should fall below the alarm threshold and be allowed to pass the checkpoint. In this way a probability of detection (PD) versus false alarm rate (FAR) curve, i.e., an ROC curve, is obtained.

  3. Time in the eye of the beholder: Gaze position reveals spatial-temporal associations during encoding and memory retrieval of future and past.

    PubMed

    Martarelli, Corinna S; Mast, Fred W; Hartmann, Matthias

    2017-01-01

    Time is grounded in various ways, and previous studies point to a "mental time line" with past associated with the left, and future with the right side. In this study, we investigated whether spontaneous eye movements on a blank screen would follow a mental timeline during encoding, free recall, and recognition of past and future items. In all three stages of processing, gaze position was more rightward during future items compared to past items. Moreover, horizontal gaze position during encoding predicted horizontal gaze position during free recall and recognition. We conclude that mental time line and the stored gaze position during encoding assist memory retrieval of past versus future items. Our findings highlight the spatial nature of temporal representations.

  4. Upconverting device for enhanced recognition of certain wavelengths of light

    DOEpatents

    Kross, Brian; McKIsson, John E; McKisson, John; Weisenberger, Andrew; Xi, Wenze; Zorn, Carl

    2013-05-21

    An upconverting device for enhanced recognition of selected wavelengths is provided. The device comprises a transparent light transmitter in combination with a plurality of upconverting nanoparticles. The device may be a lens in eyewear or, alternatively, a transparent panel such as a window in an instrument or machine. In use, the upconverting device is positioned between a light source and the eye(s) of the user of the upconverting device.

  5. Objects predict fixations better than early saliency.

    PubMed

    Einhäuser, Wolfgang; Spain, Merrielle; Perona, Pietro

    2008-11-20

    Humans move their eyes while looking at scenes and pictures. Eye movements correlate with shifts in attention and are thought to be a consequence of optimal resource allocation for high-level tasks such as visual recognition. Models of attention, such as "saliency maps," are often built on the assumption that "early" features (color, contrast, orientation, motion, and so forth) drive attention directly. We explore an alternative hypothesis: Observers attend to "interesting" objects. To test this hypothesis, we measure the eye position of human observers while they inspect photographs of common natural scenes. Our observers perform different tasks: artistic evaluation, analysis of content, and search. Immediately after each presentation, our observers are asked to name objects they saw. Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task. Also, saliency combined with object positions predicts which objects are frequently named. This suggests that early saliency has only an indirect effect on attention, acting through recognized objects. Consequently, rather than treating attention as a mere preprocessing step for object recognition, models of both need to be integrated.

  6. Looking but Not Seeing: Atypical Visual Scanning and Recognition of Faces in 2 and 4-Year-Old Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Shic, Frederick

    2009-01-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age,…

  7. Automatic detection and recognition of multiple macular lesions in retinal optical coherence tomography images with multi-instance multilabel learning

    NASA Astrophysics Data System (ADS)

    Fang, Leyuan; Yang, Liumao; Li, Shutao; Rabbani, Hossein; Liu, Zhimin; Peng, Qinghua; Chen, Xiangdong

    2017-06-01

    Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for the diagnosis and treatment of retinal diseases. As one retinal disease (e.g., diabetic retinopathy) may produce multiple lesions (e.g., edema, exudates, and microaneurysms) and patients may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. Therefore, a single-lesion-based detector may not support the diagnosis of clinical eye diseases. To address this issue, we propose a multi-instance multilabel-based lesion recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The proposed MIML-LR method was tested on 823 clinically labeled OCT images of normal maculae and maculae with three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, our MIML-LR method can automatically identify the number of lesions and assign the class labels, achieving an average accuracy of 88.72% for cases with multiple lesions, which better assists macular disease diagnosis and treatment.
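    Steps (3) and (4) of the pipeline above can be illustrated schematically: each image is a bag of ROI feature vectors, and a lesion label is assigned whenever any instance in the bag is close enough to that lesion's model. The nearest-centroid detectors, 2-D features, and threshold below are illustrative assumptions standing in for the paper's actual MIML-LR detectors.

```python
# Sketch (assumed detectors): per-lesion centroid models over bags of ROI
# feature vectors; an image gets a label if any of its ROIs matches that lesion.

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def train_detectors(labeled_instances):
    """labeled_instances: {lesion_name: [feature_vector, ...]} -> centroids."""
    return {name: centroid(vecs) for name, vecs in labeled_instances.items()}

def recognize(bag, detectors, threshold=1.0):
    """Assign every lesion label whose centroid lies within `threshold`
    of at least one instance (ROI) in the bag."""
    labels = set()
    for name, c in detectors.items():
        for inst in bag:
            d = sum((a - b) ** 2 for a, b in zip(inst, c)) ** 0.5
            if d <= threshold:
                labels.add(name)
                break
    return labels

# Toy 2-D "features" for two lesion types.
detectors = train_detectors({
    "edema": [(0.0, 0.0), (0.2, 0.0)],
    "drusen": [(5.0, 5.0), (5.2, 5.0)],
})
one_lesion = recognize([(0.1, 0.1)], detectors)
two_lesions = recognize([(0.1, 0.0), (5.1, 5.0)], detectors)
```

    The multilabel output is what distinguishes this setting from a single-lesion detector: a bag containing ROIs near both centroids receives both labels at once.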

  8. The Impact of Early Bilingualism on Face Recognition Processes.

    PubMed

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation.

  9. A field study of the accuracy and reliability of a biometric iris recognition system.

    PubMed

    Latman, Neal S; Herb, Emily

    2013-06-01

    The iris of the eye appears to satisfy the criteria for a good anatomical characteristic for use in a biometric system. The purpose of this study was to evaluate a biometric iris recognition system: Mobile-Eyes™. The enrollment, verification, and identification applications were evaluated in a field study for accuracy and reliability using both irises of 277 subjects. Independent variables included a wide range of subject demographics, ambient light, and ambient temperature. A subset of 35 subjects had alcohol-induced nystagmus. There were 2710 identification and verification attempts, which resulted in 1,501,340 and 5540 iris comparisons, respectively. In this study, the system successfully enrolled all subjects on the first attempt. All 277 subjects were successfully verified and identified on the first day of enrollment. None of the current or prior eye conditions prevented enrollment, verification, or identification. All 35 subjects with alcohol-induced nystagmus were successfully verified and identified. There were no false verifications or false identifications. Two conditions were identified that could potentially circumvent the use of iris recognition systems in general. The Mobile-Eyes™ iris recognition system exhibited accurate and reliable enrollment, verification, and identification applications in this study. It may have special applications in subjects with nystagmus. Copyright © 2012 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  10. Empathy and aversion: the neural signature of mentalizing in Tourette syndrome.

    PubMed

    Eddy, C M; Cavanna, A E; Hansen, P C

    2017-02-01

    Previous studies suggest that adults with Tourette syndrome (TS) can respond unconventionally on tasks involving social cognition. We therefore hypothesized that these patients would exhibit different neural responses to healthy controls in response to emotionally salient expressions of human eyes. Twenty-five adults with TS and 25 matched healthy controls were scanned using fMRI during the standard version of the Reading the Mind in the Eyes Task which requires mental state judgements, and a novel comparison version requiring judgements about age. During prompted mental state recognition, greater activity was apparent in TS within left orbitofrontal cortex, posterior cingulate, right amygdala and right temporo-parietal junction (TPJ), while reduced activity was apparent in regions including left inferior parietal cortex. Age judgement elicited greater activity in TS within precuneus, medial prefrontal and temporal regions involved in mentalizing. The interaction between group and task revealed differential activity in areas including right inferior frontal gyrus. Task-related activity in the TPJ covaried with global ratings of the urge to tic. While recognizing mental states, adults with TS exhibit greater activity than controls in brain areas involved in the processing of negative emotion, in addition to reduced activity in regions associated with the attribution of agency. In addition, increased recruitment of areas involved in mental state reasoning is apparent in these patients when mentalizing is not a task requirement. Our findings highlight differential neural reactivity in response to emotive social cues in TS, which may interact with tic expression.

  11. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    PubMed

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  12. Gender Classification Based on Eye Movements: A Processing Effect During Passive Face Viewing

    PubMed Central

    Sammaknejad, Negar; Pouretemad, Hamidreza; Eslahchi, Changiz; Salahirad, Alireza; Alinejad, Ashkan

    2017-01-01

    Studies have revealed superior face recognition skills in females, partially due to their different eye movement strategies when encoding faces. In the current study, we utilized these slight but important differences and proposed a model that estimates the gender of viewers and classifies them into two subgroups, males and females. An eye tracker recorded participants' eye movements while they viewed images of faces. Regions of interest (ROIs) were defined for each face. Results showed that the gender dissimilarity in eye movements was not due to differences in the frequency of fixations in the ROIs per se. Instead, it was caused by dissimilarity in saccade paths between the ROIs. The difference was enhanced when saccades were directed towards the eyes. Females showed a significant increase in transitions from other ROIs to the eyes. Consequently, the extraction of temporal transition information from saccade paths through a transition probability matrix, similar to a first-order Markov chain model, significantly improved the accuracy of the gender classification results. PMID:29071007

  13. Gender Classification Based on Eye Movements: A Processing Effect During Passive Face Viewing.

    PubMed

    Sammaknejad, Negar; Pouretemad, Hamidreza; Eslahchi, Changiz; Salahirad, Alireza; Alinejad, Ashkan

    2017-01-01

Studies have revealed superior face recognition skills in females, partially due to their different eye movement strategies when encoding faces. In the current study, we utilized these slight but important differences and proposed a model that estimates the gender of the viewers and classifies them into two subgroups, males and females. An eye tracker recorded participants' eye movements while they viewed images of faces. Regions of interest (ROIs) were defined for each face. Results showed that the gender dissimilarity in eye movements was not due to differences in the frequency of fixations in the ROIs per se. Instead, it was caused by dissimilarity in saccade paths between the ROIs. The difference was enhanced when saccades were towards the eyes. Females showed a significant increase in transitions from other ROIs to the eyes. Consequently, the extraction of temporal transient information from saccade paths through a transition probability matrix, similar to a first-order Markov chain model, significantly improved the accuracy of the gender classification results.
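    The transition-probability estimate this record describes can be sketched in a few lines. This is a minimal illustration, not the authors' code: the ROI labels and the example scanpath are invented.

    ```python
    ROIS = ["eyes", "nose", "mouth", "other"]

    def transition_matrix(scanpath):
        """Estimate a first-order Markov transition matrix from a sequence
        of fixated ROIs: entry [i][j] is P(next ROI = j | current ROI = i)."""
        idx = {roi: i for i, roi in enumerate(ROIS)}
        counts = [[0.0] * len(ROIS) for _ in ROIS]
        for src, dst in zip(scanpath, scanpath[1:]):
            counts[idx[src]][idx[dst]] += 1.0
        for row in counts:  # normalise each row that saw any transitions
            total = sum(row)
            if total:
                for j in range(len(row)):
                    row[j] /= total
        return counts

    # Hypothetical scanpath with frequent transitions into the eyes,
    # the pattern the study reports for female viewers.
    P = transition_matrix(["nose", "eyes", "mouth", "eyes", "nose", "eyes"])
    ```

    A gender classifier would then take the flattened matrix, one per viewing trial, as its feature vector.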

  14. Neuropathic ocular pain: an important yet underevaluated feature of dry eye

    PubMed Central

    Galor, A; Levitt, R C; Felix, E R; Martin, E R; Sarantopoulos, C D

    2015-01-01

    Dry eye has gained recognition as a public health problem given its prevalence, morbidity, and cost implications. Dry eye can have a variety of symptoms including blurred vision, irritation, and ocular pain. Within dry eye-associated ocular pain, some patients report transient pain whereas others complain of chronic pain. In this review, we will summarize the evidence that chronicity is more likely to occur in patients with dysfunction in their ocular sensory apparatus (ie, neuropathic ocular pain). Clinical evidence of dysfunction includes the presence of spontaneous dysesthesias, allodynia, hyperalgesia, and corneal nerve morphologic and functional abnormalities. Both peripheral and central sensitizations likely play a role in generating the noted clinical characteristics. We will further discuss how evaluating for neuropathic ocular pain may affect the treatment of dry eye-associated chronic pain. PMID:25376119

  15. Autoimmunity in the pathogenesis and treatment of keratoconjunctivitis sicca.

    PubMed

    Liu, Katy C; Huynh, Kyle; Grubbs, Joseph; Davis, Richard M

    2014-01-01

    Dry eye is a chronic corneal disease that impacts the quality of life of many older adults. Keratoconjunctivitis sicca (KCS), a form of aqueous-deficient dry eye, is frequently associated with Sjögren's syndrome and mechanisms of autoimmunity. For KCS and other forms of dry eye, current treatments are limited, with many medications providing only symptomatic relief rather than targeting the pathophysiology of disease. Here, we review proposed mechanisms in the pathogenesis of autoimmune-based KCS: genetic susceptibility and disruptions in antigen recognition, immune response, and immune regulation. By understanding the mechanisms of immune dysfunction through basic science and translational research, potential drug targets can be identified. Finally, we discuss current dry eye therapies as well as promising new treatment options and drug therapy targets.

  16. Females scan more than males: a potential mechanism for sex differences in recognition memory.

    PubMed

    Heisz, Jennifer J; Pottruff, Molly M; Shore, David I

    2013-07-01

    Recognition-memory tests reveal individual differences in episodic memory; however, by themselves, these tests provide little information regarding the stage (or stages) in memory processing at which differences are manifested. We used eye-tracking technology, together with a recognition paradigm, to achieve a more detailed analysis of visual processing during encoding and retrieval. Although this approach may be useful for assessing differences in memory across many different populations, we focused on sex differences in face memory. Females outperformed males on recognition-memory tests, and this advantage was directly related to females' scanning behavior at encoding. Moreover, additional exposures to the faces reduced sex differences in face recognition, which suggests that males may be able to improve their recognition memory by extracting more information at encoding through increased scanning. A strategy of increased scanning at encoding may prove to be a simple way to enhance memory performance in other populations with memory impairment.

  17. Face Recognition Using Local Quantized Patterns and Gabor Filters

    NASA Astrophysics Data System (ADS)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. Many methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to recognize the person accurately in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition using local quantized patterns and Gabor filters. Estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach yields an increase of about 20% in correct recognition accuracy compared with known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve robustness to changes in lighting conditions.
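    As a concrete illustration of the Gabor-filter component: the real part of a Gabor kernel is a Gaussian-windowed sinusoid. This is a generic sketch; the parameter values below are arbitrary, and the paper's actual filter-bank settings are not reproduced here.

    ```python
    import math

    def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5, psi=0.0):
        """Real part of a Gabor kernel. Convolving a face image with a
        bank of such kernels at several orientations (theta) and
        wavelengths yields texture features robust to lighting changes."""
        half = size // 2
        kernel = []
        for y in range(-half, half + 1):
            row = []
            for x in range(-half, half + 1):
                # rotate coordinates into the filter's orientation
                xr = x * math.cos(theta) + y * math.sin(theta)
                yr = -x * math.sin(theta) + y * math.cos(theta)
                envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                                    / (2 * sigma * sigma))
                carrier = math.cos(2 * math.pi * xr / wavelength + psi)
                row.append(envelope * carrier)
            kernel.append(row)
        return kernel

    k = gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0)
    ```

    A filter bank applies kernels at several orientations and wavelengths; local quantized patterns would then be computed on the filtered responses.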

  18. How strongly do word reading times and lexical decision times correlate? Combining data from eye movement corpora and megastudies.

    PubMed

    Kuperman, Victor; Drieghe, Denis; Keuleers, Emmanuel; Brysbaert, Marc

    2013-01-01

We assess the amount of shared variance between three measures of visual word recognition latencies: eye movement latencies, lexical decision times, and naming times. After partialling out the effects of word frequency and word length, two well-documented predictors of word recognition latencies, we see that 7-44% of the variance is uniquely shared between lexical decision times and naming times, depending on the frequency range of the words used. A similar analysis of eye movement latencies shows that the percentage of variance they uniquely share either with lexical decision times or with naming times is much lower. It is 5-17% for gaze durations and lexical decision times in studies with target words presented in neutral sentences, but drops to 0.2% for corpus studies in which eye movements to all words are analysed. Correlations between gaze durations and naming latencies are lower still. These findings suggest that processing times in isolated word processing and continuous text reading are affected by specific task demands and presentation format, and that lexical decision times and naming times are not very informative in predicting eye movement latencies in text reading once the effects of word frequency and word length are taken into account. The difference between controlled experiments and natural reading suggests that reading strategies and stimulus materials may determine the degree to which the immediacy-of-processing assumption and the eye-mind assumption apply. Fixation times are more likely to exclusively reflect the lexical processing of the currently fixated word in controlled studies with unpredictable target words than in natural reading of sentences or texts.
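    The "uniquely shared variance after partialling out frequency and length" computation can be sketched as follows: regress each latency measure on the covariates, then square the correlation of the residuals. This is a generic sketch of the statistical procedure, not the authors' code, and the toy data are invented (constructed so both measures carry the same extra-covariate component).

    ```python
    def ols_residuals(y, X):
        """Residuals of y after regressing on the columns of X (intercept
        added), via the normal equations and Gaussian elimination."""
        n = len(y)
        Xd = [[1.0] + list(row) for row in X]
        k = len(Xd[0])
        A = [[sum(Xd[i][p] * Xd[i][q] for i in range(n)) for q in range(k)]
             for p in range(k)]
        c = [sum(Xd[i][p] * y[i] for i in range(n)) for p in range(k)]
        for p in range(k):  # forward elimination with partial pivoting
            piv = max(range(p, k), key=lambda r: abs(A[r][p]))
            A[p], A[piv] = A[piv], A[p]
            c[p], c[piv] = c[piv], c[p]
            for r in range(p + 1, k):
                f = A[r][p] / A[p][p]
                for q in range(p, k):
                    A[r][q] -= f * A[p][q]
                c[r] -= f * c[p]
        beta = [0.0] * k
        for p in reversed(range(k)):  # back substitution
            beta[p] = (c[p] - sum(A[p][q] * beta[q]
                                  for q in range(p + 1, k))) / A[p][p]
        return [y[i] - sum(beta[q] * Xd[i][q] for q in range(k))
                for i in range(n)]

    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5

    # Invented toy data: covariates are (log frequency, length); both
    # latency measures contain the same nonlinear "shared" component.
    covariates = [(f, l) for f in range(1, 6) for l in range(3, 8)]
    shared = [((f * 7 + l * 3) % 5) - 2.0 for f, l in covariates]
    lexical = [100 - 8 * f + 4 * l + s for (f, l), s in zip(covariates, shared)]
    naming = [90 - 5 * f + 3 * l + s for (f, l), s in zip(covariates, shared)]

    r = pearson(ols_residuals(lexical, covariates),
                ols_residuals(naming, covariates))
    # Both residual vectors equal the shared component's residual, so the
    # uniquely shared variance is ~1 by construction in this toy example.
    unique_shared_variance = r * r
    ```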

  19. Efficient visual information for unfamiliar face matching despite viewpoint variations: It's not in the eyes!

    PubMed

    Royer, Jessica; Blais, Caroline; Barnabé-Lortie, Vincent; Carré, Mélissa; Leclerc, Josiane; Fiset, Daniel

    2016-06-01

    Faces are encountered in highly diverse angles in real-world settings. Despite this considerable diversity, most individuals are able to easily recognize familiar faces. The vast majority of studies in the field of face recognition have nonetheless focused almost exclusively on frontal views of faces. Indeed, a number of authors have investigated the diagnostic facial features for the recognition of frontal views of faces previously encoded in this same view. However, the nature of the information useful for identity matching when the encoded face and test face differ in viewing angle remains mostly unexplored. The present study addresses this issue using individual differences and bubbles, a method that pinpoints the facial features effectively used in a visual categorization task. Our results indicate that the use of features located in the center of the face, the lower left portion of the nose area and the center of the mouth, are significantly associated with individual efficiency to generalize a face's identity across different viewpoints. However, as faces become more familiar, the reliance on this area decreases, while the diagnosticity of the eye region increases. This suggests that a certain distinction can be made between the visual mechanisms subtending viewpoint invariance and face recognition in the case of unfamiliar face identification. Our results further support the idea that the eye area may only come into play when the face stimulus is particularly familiar to the observer. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Human-Computer Interaction in Smart Environments

    PubMed Central

    Paravati, Gianluca; Gatteschi, Valentina

    2015-01-01

    Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  1. 29 CFR 29.13 - Recognition of State Apprenticeship Agencies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

§ 29.13 Recognition of State Apprenticeship Agencies. (a) Recognition. The Department may exercise its authority to grant recognition to a State Apprenticeship Agency. Recognition confers non-exclusive...

  2. [Developmental change in facial recognition by premature infants during infancy].

    PubMed

    Konishi, Yukihiko; Kusaka, Takashi; Nishida, Tomoko; Isobe, Kenichi; Itoh, Susumu

    2014-09-01

Premature infants are thought to be at increased risk for developmental disorders. We evaluated facial recognition by premature infants during early infancy, as this ability has been reported to be impaired commonly in developmentally disabled children. In premature infants and full-term infants at the age of 4 months (4 corrected months for premature infants), visual behaviors while performing facial recognition tasks were determined and analyzed using an eye-tracking system (Tobii T60, manufactured by Tobii Technology, Sweden). Both groups of infants showed a preference towards normal facial expressions; however, no preference towards the upper face was observed in premature infants. Our study suggests that facial recognition ability in premature infants may develop differently from that in full-term infants.

  3. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD.

    PubMed

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-07-01

This study examined the extent to which a computer-based social skills intervention called FaceSay was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). FaceSay offers students simulated practice with eye gaze, joint attention, and facial recognition skills. This randomized controlled trial included school-aged children meeting educational criteria for autism (N = 31). Results demonstrated that participants who received the intervention improved their affect recognition and mentalizing skills, as well as their social skills. These findings suggest that, by targeting face-processing skills, computer-based interventions may produce changes in broader cognitive and social-skills domains in a cost- and time-efficient manner.

  4. Using computerized games to teach face recognition skills to children with autism spectrum disorder: the Let's Face It! program.

    PubMed

    Tanaka, James W; Wolf, Julie M; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D; Schultz, Robert T

    2010-08-01

An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. In a randomized clinical trial, children diagnosed with autism spectrum disorder were pre-screened with a battery of subtests (the Let's Face It! Skills battery) examining face and object processing abilities. Participants who were significantly impaired in their face processing abilities were assigned to either a treatment or a waitlist group. Children in the treatment group (N = 42) received 20 hours of face training with the Let's Face It! (LFI!) computer-based intervention. The LFI! program comprises seven interactive computer games that target the specific face impairments associated with autism, including the recognition of identity across image changes in expression, viewpoint and features, analytic and holistic face processing strategies and attention to information in the eye region. Time 1 and Time 2 performance for the treatment and waitlist groups was assessed with the Let's Face It! Skills battery. The main finding was that relative to the control group (N = 37), children in the face training group demonstrated reliable improvements in their analytic recognition of mouth features and holistic recognition of a face based on its eye features. These results indicate that a relatively short-term intervention program can produce measurable improvements in the face recognition skills of children with autism. As a treatment for face processing deficits, the Let's Face It! program has the advantages of being cost-free, adaptable to the specific learning needs of the individual child and suitable for home and school applications.

  5. How a Hat May Affect 3-Month-Olds' Recognition of a Face: An Eye-Tracking Study

    PubMed Central

    Bulf, Hermann; Valenza, Eloisa; Turati, Chiara

    2013-01-01

    Recent studies have shown that infants’ face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants’ ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants’ face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants’ ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants’ attention, interfering with the recognition process and preventing the infants’ preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment. PMID:24349378

  6. How a hat may affect 3-month-olds' recognition of a face: an eye-tracking study.

    PubMed

    Bulf, Hermann; Valenza, Eloisa; Turati, Chiara

    2013-01-01

    Recent studies have shown that infants' face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants' ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants' face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants' ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants' attention, interfering with the recognition process and preventing the infants' preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment.

  7. Multi-modal low cost mobile indoor surveillance system on the Robust Artificial Intelligence-based Defense Electro Robot (RAIDER)

    NASA Astrophysics Data System (ADS)

    Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform 3D reconstruction from monocular vision, updating a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to one of a set of action classes chosen to cover ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as to become familiar with regular faces and actions so as to distinguish potentially dangerous behavior. In this paper, we present the various algorithms, and their modifications, which when implemented on the RAIDER serve the purpose of indoor surveillance.

  8. Tracking down the path of memory: eye scanpaths facilitate retrieval of visuospatial information.

    PubMed

    Bochynska, Agata; Laeng, Bruno

    2015-09-01

Recent research points to a crucial role, in the successful retrieval of stored information, of eye fixations on the same spatial locations where an item appeared during learning (e.g., Laeng et al. in Cognition 131:263-283, 2014. doi: 10.1016/j.cognition.2014.01.003 ). However, evidence about whether the specific temporal sequence (i.e., scanpath) of these eye fixations is also relevant for the accuracy of memory remains unclear. In the current study, eye fixations were recorded while participants looked at a checkerboard-like pattern. In a recognition session (48 h later), animations were shown in which each square that formed the pattern was presented one by one, either according to the same, idiosyncratic, temporal sequence in which they were originally viewed by each participant or in a shuffled sequence, although the squares were, in both conditions, always in their correct positions. Afterward, participants judged whether they had seen the same pattern before or not. Showing the elements serially according to the original scanpath's sequence yielded significantly better recognition performance than the shuffled condition. In a forced fixation condition, where the gaze was maintained on the center of the screen, the memory-accuracy advantage for same versus shuffled scanpaths disappeared. In conclusion, gaze scanpaths (i.e., the order of fixations and not simply their positions) are functional to visual memory, and physically reenacting the original, embodied perception can facilitate retrieval.

  9. Biometric recognition via fixation density maps

    NASA Astrophysics Data System (ADS)

    Rigas, Ioannis; Komogortsev, Oleg V.

    2014-05-01

This work introduces and evaluates a novel eye movement-driven biometric approach that employs eye fixation density maps for person identification. The proposed feature offers a dynamic representation of the biometric identity, storing rich information regarding the behavioral and physical eye movement characteristics of the individuals. The innate ability of fixation density maps to capture the spatial layout of the eye movements, in conjunction with their probabilistic nature, makes them a particularly suitable option as an eye movement biometric trait in cases when free-viewing stimuli are presented. In order to demonstrate the effectiveness of the proposed approach, the method is evaluated on three different datasets containing a wide gamut of stimulus types, such as static images, video and text segments. The obtained results indicate a minimum EER (Equal Error Rate) of 18.3%, demonstrating the potential of fixation density maps as an enhancing biometric cue in identification scenarios in dynamic visual environments.
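    A fixation density map and a similarity score between two maps can be sketched as below. The published method presumably smooths and compares the maps probabilistically; here a plain normalised histogram and histogram intersection stand in, and all coordinates are invented.

    ```python
    def fixation_density_map(fixations, width, height, bins=8):
        """Bin fixation coordinates on a coarse grid and normalise so
        the cells sum to 1, giving a probabilistic density map."""
        grid = [[0.0] * bins for _ in range(bins)]
        for x, y in fixations:
            gx = min(int(x * bins / width), bins - 1)
            gy = min(int(y * bins / height), bins - 1)
            grid[gy][gx] += 1.0
        n = float(len(fixations))
        return [[v / n for v in row] for row in grid]

    def intersection(map_a, map_b):
        """Histogram intersection: 1.0 for identical maps, 0.0 for
        maps with no overlapping cells."""
        return sum(min(a, b) for row_a, row_b in zip(map_a, map_b)
                   for a, b in zip(row_a, row_b))

    # Two invented viewing sessions on a 640x480 stimulus; a higher
    # intersection with an enrolled map supports a claimed identity.
    session_a = [(320, 240), (330, 250), (100, 400), (325, 245)]
    session_b = [(318, 238), (335, 255), (110, 410)]
    score = intersection(fixation_density_map(session_a, 640, 480),
                         fixation_density_map(session_b, 640, 480))
    ```

    Thresholding such a score against enrolled maps is what an EER evaluation would sweep over.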

  10. You Look Familiar: How Malaysian Chinese Recognize Faces

    PubMed Central

    Tan, Chrystalle B. Y.; Stephen, Ian D.; Whitehead, Ross; Sheppard, Elizabeth

    2012-01-01

    East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar. PMID:22253762

  11. Boldness psychopathic traits predict reduced gaze toward fearful eyes in men with a history of violence.

    PubMed

    Gillespie, Steven M; Rotshtein, Pia; Beech, Anthony R; Mitchell, Ian J

    2017-09-01

Research with developmental and adult samples has shown a relationship between psychopathic traits and reduced eye gaze. However, this relationship had yet to be investigated in forensic samples. Here we examined the eye movements of male violent offenders during an emotion recognition task. Violent offenders performed similarly to non-offending controls, and their eye movements varied with the emotion and intensity of the facial expression. In the violent offender group, Boldness psychopathic traits, but not Meanness or Disinhibition, were associated with reduced dwell time and fixation counts, and slower first fixation latencies, on the eyes compared with the mouth. These results are the first to show a relationship of psychopathic traits with reduced attention to the eyes in a forensic sample, and suggest that Boldness is associated with difficulties in orienting attention toward emotionally salient aspects of the face. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  12. The impact of fatigue on latent print examinations as revealed by behavioral and eye gaze testing.

    PubMed

    Busey, Thomas; Swofford, Henry J; Vanderkolk, John; Emerick, Brandi

    2015-06-01

    Eye tracking and behavioral methods were used to assess the effects of fatigue on performance in latent print examiners. Eye gaze was measured both before and after a fatiguing exercise involving fine-grained examination decisions. The eye tracking tasks used similar images, often laterally reversed versions of previously viewed prints, which holds image detail constant while minimizing prior recognition. These methods, as well as a within-subject design with fine grained analyses of the eye gaze data, allow fairly strong conclusions despite a relatively small subject population. Consistent with the effects of fatigue on practitioners in other fields such as radiology, behavioral performance declined with fatigue, and the eye gaze statistics suggested a smaller working memory capacity. Participants also terminated the search/examination process sooner when fatigued. However, fatigue did not produce changes in inter-examiner consistency as measured by the Earth Mover Metric. Implications for practice are discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
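    The Earth Mover Metric mentioned above compares two gaze distributions; in one dimension it reduces to the summed absolute difference of the cumulative histograms. A generic sketch of that reduction (the paper's exact formulation over gaze data is not reproduced here, and the example counts are invented):

    ```python
    def emd_1d(p, q):
        """Earth Mover's Distance between two 1-D histograms: normalise
        to equal mass, then accumulate the absolute difference of the
        running (cumulative) sums. Each unit of distance corresponds to
        one bin-width of 'earth' moved."""
        sp, sq = float(sum(p)), float(sum(q))
        p = [v / sp for v in p]
        q = [v / sq for v in q]
        distance = cum = 0.0
        for a, b in zip(p, q):
            cum += a - b
            distance += abs(cum)
        return distance

    # Invented example: two examiners' fixation counts over five regions
    # of the same print; a small EMD indicates consistent gaze behavior.
    examiner_1 = [10, 30, 40, 15, 5]
    examiner_2 = [12, 28, 35, 20, 5]
    d = emd_1d(examiner_1, examiner_2)
    ```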

  13. Autoimmunity in the Pathogenesis and Treatment of Keratoconjunctivitis Sicca

    PubMed Central

    Liu, Katy C.; Huynh, Kyle; Grubbs, Joseph; Davis, Richard M.

    2014-01-01

Dry eye is a chronic corneal disease that impacts the quality of life of many older adults. Keratoconjunctivitis sicca (KCS), a form of aqueous-deficient dry eye, is frequently associated with Sjögren’s syndrome and mechanisms of autoimmunity. For KCS and other forms of dry eye, current treatments are limited, with many medications providing only symptomatic relief rather than targeting the pathophysiology of disease. Here, we review proposed mechanisms in the pathogenesis of autoimmune-based KCS: genetic susceptibility and disruptions in antigen recognition, immune response, and immune regulation. By understanding the mechanisms of immune dysfunction through basic science and translational research, potential drug targets can be identified. Finally, we discuss current dry eye therapies as well as promising new treatment options and drug therapy targets. PMID:24395332

  14. Periodicity analysis on cat-eye reflected beam profiles of optical detectors

    NASA Astrophysics Data System (ADS)

    Gong, Mali; He, Sifeng

    2017-05-01

The cat-eye-effect reflected beam profiles of most optical detectors exhibit a characteristic periodicity, which is caused by the array arrangement of sensors at their optical focal planes. To our knowledge, this is the first demonstration that the reflected beam profile becomes several periodic spots at the reflected propagation distance corresponding to half the imaging distance of a CCD camera. Furthermore, the spatial cycle of these spots is approximately constant, independent of the CCD camera's imaging distance, and related only to the focal length and pixel size of the CCD sensor. Thus, the imaging distance and intrinsic parameters of an optical detector can be obtained by analyzing its cat-eye reflected beam profiles. This conclusion can be applied in the field of non-cooperative cat-eye target recognition.

  15. Can You See Me Now Visualizing Battlefield Facial Recognition Technology in 2035

    DTIC Science & Technology

    2010-04-01

...County Sheriff’s Department, use certain measurements such as the distance between the eyes, the length of the nose, or the shape of the ears. However... captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software... stream of data. High-resolution video systems, such as those described below, will be able to capture orders of magnitude more data in one video frame.

  16. Looking at My Own Face: Visual Processing Strategies in Self–Other Face Recognition

    PubMed Central

    Chakraborty, Anya; Chakrabarti, Bhismadev

    2018-01-01

We live in an age of ‘selfies.’ Yet, how we look at our own faces has seldom been systematically investigated. In this study we test if the visual processing of the highly familiar self-face is different from other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and the visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task from a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for self-face compared to other-face. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at the upper part of faces identified as ‘self’ vs. those identified as ‘other’. This result indicates that self-face representation can influence where we look when we process our own vs. others’ faces. We also investigated the association of autism-related traits with self-face processing metrics, since autism has previously been associated with atypical self-processing. The study did not find any self-face-specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain-specific manner. PMID:29487554

  17. The influence of variations in eating disorder-related symptoms on processing of emotional faces in a non-clinical female sample: An eye-tracking study.

    PubMed

    Sharpe, Emma; Wallis, Deborah J; Ridout, Nathan

    2016-06-30

    This study aimed to: (i) determine if the attention bias towards angry faces reported in eating disorders generalises to a non-clinical sample varying in eating disorder-related symptoms; (ii) examine if the bias occurs during initial orientation or later strategic processing; and (iii) confirm previous findings of impaired facial emotion recognition in non-clinical disordered eating. Fifty-two females viewed a series of face-pairs (happy or angry paired with neutral) whilst their attentional deployment was continuously monitored using an eye-tracker. They subsequently identified the emotion portrayed in a separate series of faces. The highest (n=18) and lowest scorers (n=17) on the Eating Disorders Inventory (EDI) were compared on the attention and facial emotion recognition tasks. Those with relatively high scores exhibited impaired facial emotion recognition, confirming previous findings in similar non-clinical samples. They also displayed biased attention away from emotional faces during later strategic processing, which is consistent with previously observed impairments in clinical samples. These differences were related to drive-for-thinness. Although we found no evidence of a bias towards angry faces, it is plausible that the observed impairments in emotion recognition and avoidance of emotional faces could disrupt social functioning and act as a risk factor for the development of eating disorders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. X-Eye: a novel wearable vision system

    NASA Astrophysics Data System (ADS)

    Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye

    2011-03-01

    This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for the application of photo capture and management. The wearable vision system is implemented on embedded hardware and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which is physically small but projects a large screen. A triple-buffering mechanism is designed for efficient memory management. Software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by a color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, a look-up table (LUT) technique is devised. Finally, fingertips are extracted and geometric features of the fingertip shapes are matched to recognize the user's gesture commands. To verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos, including the challenges of colorful backgrounds, low illumination, and flicker. The whole system, including gesture recognition, runs at a frame rate of 22.9 FPS, and experimental results show a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
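
    The colour-classification stage described above lends itself to a short illustration. The sketch below fits a Gaussian colour model (the one-component case of a GMM) to synthetic skin-tone samples and pre-computes scores into a look-up table, so per-pixel classification becomes a single array access. The training pixels, quantisation, and threshold are all assumptions for illustration, not details from the paper.

```python
import numpy as np

# Illustrative sketch of GMM-style colour classification sped up with a
# look-up table (LUT). For brevity a single Gaussian (the one-component
# case) is fitted; the training pixels are synthetic stand-ins.
rng = np.random.default_rng(0)
skin = rng.normal(loc=[180.0, 120.0, 100.0], scale=12.0, size=(500, 3))  # fake RGB samples

mean = skin.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(skin.T))

def log_density(x):
    """Gaussian log-density up to an additive constant."""
    d = x - mean
    return -0.5 * d @ cov_inv @ d

# Pre-compute scores over a quantised RGB cube (32 levels per channel), so
# classifying a pixel becomes a table lookup instead of a density evaluation.
levels = np.arange(4, 256, 8, dtype=float)  # bin centres of 8-wide bins
lut = np.empty((32, 32, 32))
for i, r in enumerate(levels):
    for j, g in enumerate(levels):
        for k, b in enumerate(levels):
            lut[i, j, k] = log_density(np.array([r, g, b]))

def is_skin(pixel, threshold=-6.0):
    """Classify one RGB pixel via the LUT; the threshold is an assumed tuning value."""
    r, g, b = (int(c) // 8 for c in pixel)
    return bool(lut[r, g, b] > threshold)
```

    The LUT trades a modest amount of memory (32³ entries here) for eliminating the per-pixel density evaluation, which is the essence of this kind of speed-up on embedded hardware.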

  19. Processing of Facial Emotion in Bipolar Depression and Euthymia.

    PubMed

    Robinson, Lucy J; Gray, John M; Burt, Mike; Ferrier, I Nicol; Gallagher, Peter

    2015-10-01

    Previous studies of facial emotion processing in bipolar disorder (BD) have reported conflicting findings. In independently conducted studies, we investigate facial emotion labeling in euthymic and depressed BD patients using tasks with static and dynamically morphed images of different emotions displayed at different intensities. Study 1 included 38 euthymic BD patients and 28 controls. Participants completed two tasks: labeling of static images of basic facial emotions (anger, disgust, fear, happy, sad) shown at different expression intensities; the Eyes Test (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001), which involves recognition of complex emotions using only the eye region of the face. Study 2 included 53 depressed BD patients and 47 controls. Participants completed two tasks: labeling of "dynamic" facial expressions of the same five basic emotions; the Emotional Hexagon test (Young, Perret, Calder, Sprengelmeyer, & Ekman, 2002). There were no significant group differences on any measures of emotion perception/labeling, compared to controls. A significant group by intensity interaction was observed in both emotion labeling tasks (euthymia and depression), although this effect did not survive the addition of measures of executive function/psychomotor speed as covariates. Only 2.6-15.8% of euthymic patients and 7.8-13.7% of depressed patients scored below the 10th percentile of the controls for total emotion recognition accuracy. There was no evidence of specific deficits in facial emotion labeling in euthymic or depressed BD patients. Methodological variations-including mood state, sample size, and the cognitive demands of the tasks-may contribute significantly to the variability in findings between studies.

  20. Callous-unemotional traits are associated with deficits in recognizing complex emotions in preadolescent children.

    PubMed

    Sharp, Carla; Vanwoerden, Salome; Van Baardewijk, Y; Tackett, J L; Stegge, H

    2015-06-01

    The aims of the current study were to show that the affective component of psychopathy (callous-unemotional traits) is related to deficits in recognizing emotions over and above other psychopathy dimensions and to show that this relationship is driven by a specific deficit in recognizing complex emotions more so than basic emotions. The authors administered the Child Eyes Test to assess emotion recognition in a community sample of preadolescent children between the ages of 10 and 12 (N = 417; 53.6% boys). The task required children to identify a broad array of emotions from photographic stimuli depicting the eye region of the face. Stimuli were then divided into complex or basic emotions. Results demonstrated a unique association between callous-unemotional traits and complex emotions, with weaker associations with basic emotion recognition, over and above other dimensions of psychopathy.

  1. Classification of facial-emotion expression in the application of psychotherapy using Viola-Jones and Edge-Histogram of Oriented Gradient.

    PubMed

    Candra, Henry; Yuwono, Mitchell; Rifai Chai; Nguyen, Hung T; Su, Steven

    2016-08-01

    Psychotherapy requires appropriate recognition of the patient's facial-emotion expression to provide proper treatment during a psychotherapy session. To address this need, this paper proposes a facial emotion recognition system combining the Viola-Jones detector with a feature descriptor we term Edge-Histogram of Oriented Gradients (E-HOG). The performance of the proposed method is compared across various feature sources, including the face, the eyes, the mouth, and both the eyes and the mouth. Seven classes of basic emotions were successfully identified with 96.4% accuracy using a multi-class Support Vector Machine (SVM). The proposed E-HOG descriptor is much leaner to compute than traditional HOG, as shown by a significant improvement in processing time of as much as 1833.33% (p-value = 2.43E-17) with a slight reduction in accuracy of only 1.17% (p-value = 0.0016).
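
    The abstract does not publish the exact definition of E-HOG, but one plausible reading, a HOG whose orientation histograms are restricted to strong-edge pixels, can be sketched in a few lines. Everything below (cell size, bin count, edge threshold) is an assumption for illustration, not the paper's specification.

```python
import numpy as np

def edge_hog(img, cell=8, bins=9, edge_thresh=0.2):
    """Toy HOG restricted to strong-edge pixels (one plausible reading of E-HOG).

    img: 2-D grayscale float array in [0, 1]. Returns a flat feature vector.
    """
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation in [0, pi)
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell]
            a = ang[y:y + cell, x:x + cell]
            keep = m > edge_thresh               # only strong edges contribute
            hist, _ = np.histogram(a[keep], bins=bins, range=(0, np.pi),
                                   weights=m[keep])
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

# A vertical step edge has gradients along x, i.e. orientation ~0 (mod pi),
# so the first orientation bin should dominate in the edge-containing cells.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
vec = edge_hog(img)
```

    Skipping pixels below the edge threshold is what would make such a descriptor cheaper than a dense HOG: most pixels contribute nothing to the histograms and need no further processing.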

  2. Poly(ionic liquid) based chemosensors for detection of basic amino acids in aqueous medium

    NASA Astrophysics Data System (ADS)

    Li, Xinjuan; Wang, Kai; Ma, Nana; Jia, Xianbin

    2017-09-01

    Naked-eye detection of amino acids in water is of great significance in the field of bio-analytical applications. Herein, polymerized ionic liquids (PILs) with controlled chain length structures were synthesized via reversible addition-fragmentation chain-transfer (RAFT) polymerization and post-quaternization approach. The amino acids recognition performance of PILs with different alkyl chain lengths and molecular weights was evaluated by naked-eye color change and ultraviolet-visible (UV-vis) spectral studies. These PILs were successfully used for highly sensitive and selective detection of Arg, Lys and His in water. The recognition performance was improved effectively with increased molecular weight of PILs. The biosensitivity of the PILs in water was strongly dependent on their aggregation effect and polarization effect. Highly sensitive and selective detection of amino acids was successfully accomplished by introducing positively charged pyridinium moieties and controlled RAFT radical polymerization.

  3. Social Cognition Psychometric Evaluation: Results of the Initial Psychometric Study

    PubMed Central

    Pinkham, Amy E.; Penn, David L.; Green, Michael F.; Harvey, Philip D.

    2016-01-01

    Measurement of social cognition in treatment trials remains problematic due to poor and limited psychometric data for many tasks. As part of the Social Cognition Psychometric Evaluation (SCOPE) study, the psychometric properties of 8 tasks were assessed. One hundred and seventy-nine stable outpatients with schizophrenia and 104 healthy controls completed the battery at baseline and a 2–4-week retest period at 2 sites. Tasks included the Ambiguous Intentions Hostility Questionnaire (AIHQ), Bell Lysaker Emotion Recognition Task (BLERT), Penn Emotion Recognition Task (ER-40), Relationships Across Domains (RAD), Reading the Mind in the Eyes Task (Eyes), The Awareness of Social Inferences Test (TASIT), Hinting Task, and Trustworthiness Task. Tasks were evaluated on: (i) test-retest reliability, (ii) utility as a repeated measure, (iii) relationship to functional outcome, (iv) practicality and tolerability, (v) sensitivity to group differences, and (vi) internal consistency. The BLERT and Hinting task showed the strongest psychometric properties across all evaluation criteria and are recommended for use in clinical trials. The ER-40, Eyes Task, and TASIT showed somewhat weaker psychometric properties and require further study. The AIHQ, RAD, and Trustworthiness Task showed poorer psychometric properties that suggest caution for their use in clinical trials. PMID:25943125

  4. ASERA: A spectrum eye recognition assistant for quasar spectra

    NASA Astrophysics Data System (ADS)

    Yuan, Hailong; Zhang, Haotong; Zhang, Yanxia; Lei, Yajuan; Dong, Yiqiao; Zhao, Yongheng

    2013-11-01

    Spectral type recognition is an important and fundamental step of large sky survey projects, part of the data reduction that precedes further scientific research such as parameter measurement and statistical analysis. Manually inspecting the low-quality spectra produced by a massive spectroscopic survey, where the automatic pipeline may not provide confident classifications, turns out to be a huge job. To improve the efficiency and effectiveness of spectral classification, we developed a semi-automated toolkit named ASERA, A Spectrum Eye Recognition Assistant. The main purpose of ASERA is to help the user with quasar spectral recognition and redshift measurement. It can also be used to recognize various types of spectra of stars, galaxies and AGNs (Active Galactic Nuclei). It is an interactive tool that allows the user to visualize observed spectra, superimpose template spectra from the Sloan Digital Sky Survey (SDSS), and interactively access related spectral line information. It is an efficient and user-friendly toolkit for the accurate classification of spectra observed by LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope). The toolkit is available in two modes: a Java standalone application and a Java applet. ASERA offers functions such as wavelength and flux scale setting, zooming, redshift estimation, and spectral line identification, which help the user improve spectral classification accuracy, especially for low-quality spectra, and reduce the labor of eyeball checks. The function and performance of the tool are demonstrated through the recognition of several quasar spectra and a late-type stellar spectrum from the LAMOST Pilot Survey. Its future expansion capabilities are also discussed.
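
    The redshift arithmetic behind this kind of template matching is simple: a line emitted at rest wavelength λ_rest and observed at λ_obs implies z = λ_obs/λ_rest − 1. The sketch below applies that relation to a few standard quasar emission lines; the observed wavelength and the visual redshift guess are invented for illustration, not taken from ASERA.

```python
# Rest-frame wavelengths (Angstrom) of some common quasar emission lines.
REST_LINES = {"Lya": 1216.0, "CIV": 1549.0, "MgII": 2798.0, "Hbeta": 4861.0}

def redshift(lambda_obs, lambda_rest):
    """z from the relation lambda_obs = (1 + z) * lambda_rest."""
    return lambda_obs / lambda_rest - 1.0

def identify_line(lambda_obs, z_guess):
    """Name the rest line whose implied redshift best matches a visual template guess."""
    return min(REST_LINES,
               key=lambda name: abs(redshift(lambda_obs, REST_LINES[name]) - z_guess))

# An emission feature seen at 5596 Angstrom, with the template roughly aligned
# near z ~ 1, is most consistent with Mg II: z = 5596/2798 - 1 = 1.0.
line = identify_line(5596.0, 1.0)
z = redshift(5596.0, REST_LINES[line])
```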

  5. Auditory noise increases the allocation of attention to the mouth, and the eyes pay the price: An eye-tracking study.

    PubMed

    Król, Magdalena Ewa

    2018-01-01

    We investigated the effect of auditory noise added to speech on patterns of looking at faces in 40 toddlers. We hypothesised that noise would increase the difficulty of processing speech, making children allocate more attention to the mouth of the speaker to gain visual speech cues from mouth movements. We also hypothesised that this shift would cause a decrease in fixation time to the eyes, potentially decreasing the ability to monitor gaze. We found that adding noise increased the number of fixations to the mouth area, at the price of a decreased number of fixations to the eyes. Thus, to our knowledge, this is the first study demonstrating a mouth-eyes trade-off between attention allocated to social cues coming from the eyes and linguistic cues coming from the mouth. We also found that children with higher word recognition proficiency and higher average pupil response had an increased likelihood of fixating the mouth, compared to the eyes and the rest of the screen, indicating stronger motivation to decode the speech.

  6. Auditory noise increases the allocation of attention to the mouth, and the eyes pay the price: An eye-tracking study

    PubMed Central

    2018-01-01

    We investigated the effect of auditory noise added to speech on patterns of looking at faces in 40 toddlers. We hypothesised that noise would increase the difficulty of processing speech, making children allocate more attention to the mouth of the speaker to gain visual speech cues from mouth movements. We also hypothesised that this shift would cause a decrease in fixation time to the eyes, potentially decreasing the ability to monitor gaze. We found that adding noise increased the number of fixations to the mouth area, at the price of a decreased number of fixations to the eyes. Thus, to our knowledge, this is the first study demonstrating a mouth-eyes trade-off between attention allocated to social cues coming from the eyes and linguistic cues coming from the mouth. We also found that children with higher word recognition proficiency and higher average pupil response had an increased likelihood of fixating the mouth, compared to the eyes and the rest of the screen, indicating stronger motivation to decode the speech. PMID:29558514

  7. The Influence of Shyness on the Scanning of Own- and Other-Race Faces in Adults

    PubMed Central

    Wang, Qiandong; Hu, Chao; Short, Lindsey A.; Fu, Genyue

    2012-01-01

    The current study explored the relationship between shyness and face scanning patterns for own- and other-race faces in adults. Participants completed a shyness inventory and a face recognition task in which their eye movements were recorded by a Tobii 1750 eye tracker. We found that: (1) Participants’ shyness scores were negatively correlated with the fixation proportion on the eyes, regardless of the race of face they viewed. The shyer the participants were, the less time they spent fixating on the eye region; (2) High shyness participants tended to fixate significantly more than low shyness participants on the regions just below the eyes as if to avoid direct eye contact; (3) When participants were recognizing own-race faces, their shyness scores were positively correlated with the normalized criterion. The shyer they were, the more apt they were to judge the faces as novel, regardless of whether they were target or foil faces. The present results support an avoidance hypothesis of shyness, suggesting that shy individuals tend to avoid directly fixating on others’ eyes, regardless of face race. PMID:23284933
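
    The "fixation proportion on the eyes" measure used here is straightforward to compute from tracker output: the share of total fixation duration that lands inside an eye-region area of interest (AOI). A minimal sketch, with invented screen coordinates and AOI bounds rather than values from the study:

```python
def aoi_proportion(fixations, aoi):
    """Share of total fixation duration inside a rectangular AOI.

    fixations: list of (x, y, duration_ms); aoi: (x0, y0, x1, y1) rectangle.
    """
    x0, y0, x1, y1 = aoi
    total = sum(d for _, _, d in fixations)
    inside = sum(d for x, y, d in fixations if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / total if total else 0.0

eye_region = (100, 80, 300, 140)                                  # hypothetical pixels
fixations = [(150, 100, 250), (200, 120, 300), (180, 400, 450)]   # (x, y, ms)
share = aoi_proportion(fixations, eye_region)                     # 550 / 1000 = 0.55
```

    Correlating such per-participant proportions with shyness scores is then an ordinary correlation analysis; the AOI definition is the only eye-tracking-specific step.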

  8. Oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor: Sensing ability, TD-DFT calculations and its application as an efficient solid state sensor

    NASA Astrophysics Data System (ADS)

    Lan, Linxin; Li, Tianduo; Wei, Tao; Pang, He; Sun, Tao; Wang, Enhua; Liu, Haixia; Niu, Qingfen

    2018-03-01

    An oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor, 3T-2CN, is reported. Sensor 3T-2CN showed both naked-eye recognition and a ratiometric fluorescence response for CN- with excellent selectivity and high sensitivity. The sensing mechanism, based on nucleophilic attack of CN- on the vinyl C=C bond, was confirmed by optical measurements, 1H NMR titration and FT-IR spectra, as well as DFT/TD-DFT calculations. Moreover, the detection limit was calculated to be 0.19 μM, much lower than the maximum permissible concentration in drinking water (1.9 μM). Importantly, test strips (filter paper and TLC plates) containing 3T-2CN were fabricated, which can act as a practical and efficient solid-state optical sensor for CN- in field measurements.
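
    Detection limits like the 0.19 μM figure above are conventionally derived with the 3σ rule for optical sensors, DL = 3σ_blank/k, where σ_blank is the standard deviation of blank measurements and k is the slope of the calibration curve. A minimal sketch of that arithmetic; the σ and slope values are invented for illustration, not taken from the paper.

```python
def detection_limit(sigma_blank, slope):
    """IUPAC-style 3-sigma limit of detection for a linear optical response."""
    return 3.0 * sigma_blank / slope

# With a blank noise of 0.0019 absorbance units and a calibration slope of
# 0.03 absorbance units per micromolar, the limit works out to 0.19 uM.
dl = detection_limit(sigma_blank=0.0019, slope=0.03)
```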

  9. Deaf children's use of clear visual cues in mindreading.

    PubMed

    Hao, Jian; Su, Yanjie

    2014-11-01

    Previous studies show that typically developing 4-year old children can understand other people's false beliefs but that deaf children of hearing families have difficulty in understanding false beliefs until the age of approximately 13. Because false beliefs are implicit mental states that are not expressed through clear visual cues in standard false belief tasks, the present study examines the hypothesis that the deaf children's developmental delay in understanding false beliefs may reflect their difficulty in understanding a spectrum of mental states that are not expressed through clear visual cues. Nine- to 13-year-old deaf children of hearing families and 4-6-year-old typically developing children completed false belief tasks and emotion recognition tasks under different cue conditions. The results indicated that after controlling for the effect of the children's language abilities, the deaf children inferred other people's false beliefs as accurately as the typically developing children when other people's false beliefs were clearly expressed through their eye-gaze direction. However, the deaf children performed worse than the typically developing children when asked to infer false beliefs with ambiguous or no eye-gaze cues. Moreover, the deaf children were capable of recognizing other people's emotions that were clearly conveyed by their facial or body expressions. The results suggest that although theory-based or simulation-based mental state understanding is typical of hearing children's theory of mind mechanism, for deaf children of hearing families, clear cue-based mental state understanding may be their specific theory of mind mechanism. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Immediate effects of anticipatory coarticulation in spoken-word recognition

    PubMed Central

    Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.

    2014-01-01

    Two visual-world experiments examined listeners’ use of pre word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles which contained natural anticipatory coarticulation pertaining to the onset of the target word (“ The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
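
    A "simple Gaussian classifier" of the kind described can be sketched as a per-class diagonal Gaussian over formant measurements, chosen by maximum likelihood. The two classes, formant means, and spreads below are synthetic stand-ins for illustration, not the study's measurements.

```python
import numpy as np

# Synthetic F1/F2 measurements (Hz) of the article's vowel, split by the
# initial sound of the following word ("l..." vs "b..."). Invented values.
rng = np.random.default_rng(1)
classes = {
    "l": rng.normal([400.0, 1800.0], [40.0, 80.0], size=(50, 2)),
    "b": rng.normal([450.0, 1200.0], [40.0, 80.0], size=(50, 2)),
}

# Per-class mean and variance of each formant (diagonal-covariance Gaussian).
params = {c: (x.mean(axis=0), x.var(axis=0)) for c, x in classes.items()}

def log_gauss(x, mean, var):
    """Diagonal-Gaussian log-likelihood of a formant vector."""
    return float(np.sum(-0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)))

def predict(x):
    """Return the class with the highest (equal-prior) Gaussian log-likelihood."""
    return max(params, key=lambda c: log_gauss(np.asarray(x, float), *params[c]))
```

    Applied frame-by-frame to the first pitch periods of the article's vowel, such a classifier quantifies how much anticipatory information the coarticulated signal already carries before word onset.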

  11. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    PubMed

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words during spoken word recognition, using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen: a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse while their eye movements are recorded. Experiment 1 manipulated phonological information at full phonological overlap; Experiment 2 manipulated it at partial phonological overlap; and Experiment 3 manipulated the phonological competitors to share either full or partial overlap with the targets directly. Across the three experiments, phonological competitor effects were observed in both the full-overlap and partial-overlap conditions: phonological competitors attracted more fixations than distractors, suggesting that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  12. Intranasal oxytocin improves emotion recognition for youth with autism spectrum disorders.

    PubMed

    Guastella, Adam J; Einfeld, Stewart L; Gray, Kylie M; Rinehart, Nicole J; Tonge, Bruce J; Lambert, Timothy J; Hickie, Ian B

    2010-04-01

    A diagnostic hallmark of autism spectrum disorders is a qualitative impairment in social communication and interaction. Deficits in the ability to recognize the emotions of others are believed to contribute to this. There is currently no effective treatment for these problems. In a double-blind, randomized, placebo-controlled, crossover design, we administered oxytocin nasal spray (18 or 24 IU) or a placebo to 16 male youth aged 12 to 19 who were diagnosed with Autistic or Asperger's Disorder. Participants then completed the Reading the Mind in the Eyes Task, a widely used and reliable test of emotion recognition. In comparison with placebo, oxytocin administration improved performance on the Reading the Mind in the Eyes Task. This effect was also shown when analysis was restricted to the younger participants aged 12 to 15 who received the lower dose. This study provides the first evidence that oxytocin nasal spray improves emotion recognition in young people diagnosed with autism spectrum disorders. Findings suggest the potential of earlier intervention and further evaluation of oxytocin nasal spray as a treatment to improve social communication and interaction in young people with autism spectrum disorders. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  13. A Multifunctional Bimetallic Molecular Device for Ultrasensitive Detection, Naked-Eye Recognition, and Elimination of Cyanide Ions.

    PubMed

    Chow, Cheuk-Fai; Ho, Pui-Yu; Wong, Wing-Leung; Gong, Cheng-Bin

    2015-09-07

    A new bimetallic Fe(II)-Cu(II) complex was synthesized, characterized, and applied as a selective and sensitive sensor for cyanide detection in water. This complex is the first multifunctional device that can simultaneously detect cyanide ions in real water samples, amplify the colorimetric signal upon detection for naked-eye recognition at the parts-per-billion (ppb) level, and convert the toxic cyanide ion into the much safer cyanate ion in situ. The mechanism of the bimetallic complex's highly selective recognition of and signaling toward cyanide ions was investigated through a series of binding-kinetics studies of the complex with different analytes, including CN⁻, SO₄²⁻, HCO₃⁻, HPO₄²⁻, N₃⁻, CH₃COO⁻, NCS⁻, NO₃⁻, and Cl⁻ ions. In addition, the use of the indicator/catalyst displacement assay (ICDA) is demonstrated in the present system, in which one metal center acts as a receptor and inhibitor and is bridged to another metal center responsible for signal transduction and catalysis, illustrating a versatile approach to the design of new multifunctional devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. 20 CFR 408.1215 - How do you establish eligibility for Federally administered State recognition payments?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Federally administered State recognition payments? 408.1215 Section 408.1215 Employees' Benefits SOCIAL... Recognition Payments § 408.1215 How do you establish eligibility for Federally administered State recognition... deemed to have filed an application for any Federally administered State recognition payments for which...

  15. Are readers of our face readers of our minds? Dogs (Canis familiaris) show situation-dependent recognition of human's attention.

    PubMed

    Gácsi, Márta; Miklósi, Adám; Varga, Orsolya; Topál, József; Csányi, Vilmos

    2004-07-01

    The ability of animals to use behavioral/facial cues to detect human attention has been widely investigated. In this test series we studied the ability of dogs to recognize human attention in different experimental situations (a ball-fetching game, fetching objects on command, and begging from humans). The attentional state of the humans was varied along two variables: (1) facing versus not facing the dog; (2) visible versus non-visible eyes. In the first set of experiments (fetching), the owners were told to take up different body positions (facing or not facing the dog) and to either cover or not cover their eyes with a blindfold. In the second set of experiments (begging), dogs had to choose between two eating humans based on either the visibility of the eyes or the direction of the face. Our results show that the efficiency of dogs in discriminating between "attentive" and "inattentive" humans depended on the context of the test, but they could rely on the orientation of the body, the orientation of the head, and the visibility of the eyes. With the exception of the fetching-game situation, they brought the object to the front of the human (even if he/she turned his/her back towards the dog), and preferentially begged from the facing (or seeing) human. There were also indications that dogs were sensitive to the visibility of the eyes: they showed more hesitant behavior when approaching a blindfolded owner, and they preferred to beg from the person with visible eyes. We conclude that dogs are able to rely on the same set of human facial cues for the detection of attention that forms the behavioral basis of understanding attention in humans. By recognizing human attention across different situations, dogs proved to be more flexible than chimpanzees investigated in similar circumstances.

  16. Emotion recognition impairment in traumatic brain injury compared with schizophrenia spectrum: similar deficits with different origins.

    PubMed

    Mancuso, Mauro; Magnani, Nadia; Cantagallo, Anna; Rossi, Giulia; Capitani, Donatella; Galletti, Vania; Cardamone, Giuseppe; Robertson, Ian Hamilton

    2015-02-01

    The aim of our study was to identify the common and separate mechanisms that might underpin emotion recognition impairment in patients with traumatic brain injury (TBI) and schizophrenia (Sz) compared with healthy controls (HCs). We recruited 21 Sz outpatients, 24 severe-TBI outpatients, and 38 HCs, and we used eye-tracking to compare facial emotion processing performance. Both Sz and TBI patients were significantly poorer at recognizing facial emotions than HCs. Sz patients showed a different way of exploring the Pictures of Facial Affect stimuli and were significantly worse at recognizing neutral expressions. Selective or sustained attention deficits in TBI may reduce efficient emotion recognition, whereas in Sz a more strategic deficit underlies the observed problem. There would seem to be scope for effective rehabilitative training focused on emotion recognition.

  17. Art critic: Multisignal vision and speech interaction system in a gaming context.

    PubMed

    Reale, Michael J; Liu, Peng; Yin, Lijun; Canavan, Shaun

    2013-12-01

    True immersion of a player within a game can occur only when the simulated world looks and behaves as close to reality as possible. This implies that the game must correctly read and understand, among other things, the player's focus, attitude toward the objects/persons in focus, gestures, and speech. In this paper, we propose a novel system that integrates eye-gaze estimation, head-pose estimation, facial expression recognition, speech recognition, and text-to-speech components for use in real-time games. Both the eye-gaze and head-pose components utilize underlying 3-D models, and our novel head-pose estimation algorithm uniquely combines scene flow with a generic head model. The facial expression recognition module uses the local binary patterns with three orthogonal planes approach on the 2-D shape index domain rather than the pixel domain, resulting in improved classification. Our system has also been extended to use a pan-tilt-zoom camera driven by the Kinect, allowing us to track a moving player. A test game, Art Critic, is also presented, which not only demonstrates the utility of our system but also provides a template for player/non-player character (NPC) interaction in a gaming context. The player alters his/her view of the 3-D world using head pose, looks at paintings/NPCs using eye gaze, and makes an evaluation based on his/her expression and speech. The NPC artist responds with facial expression and synthetic speech based on its personality. Both qualitative and quantitative evaluations of the system are performed to illustrate its effectiveness.

  18. Familiarity and recollection produce distinct eye movement, pupil and medial temporal lobe responses when memory strength is matched.

    PubMed

    Kafkas, Alexandros; Montaldi, Daniela

    2012-11-01

    Two experiments explored eye measures (fixations and pupil response patterns) and brain responses (BOLD) accompanying the recognition of visual object stimuli based on familiarity and recollection. In both experiments, the use of a modified remember/know procedure led to high confidence and matched accuracy levels characterising strong familiarity (F3) and recollection (R) responses. In Experiment 1, visual scanning behaviour at retrieval distinguished familiarity-based from recollection-based recognition. Recollection, relative to strength-matched familiarity, involved significantly larger pupil dilations and more dispersed fixation patterns. In Experiment 2, the hippocampus was selectively activated for recollected stimuli, while no evidence of activation was observed in the hippocampus for strong familiarity of matched accuracy. Recollection also activated the parahippocampal cortex (PHC), while the adjacent perirhinal cortex (PRC) was actively engaged in response to strong familiarity (than to recollection). Activity in prefrontal and parietal areas differentiated familiarity and recollection in both the extent and the magnitude of activity they exhibited, while the dorsomedial thalamus showed selective familiarity-related activity, and the ventrolateral and anterior thalamus selective recollection-related activity. These findings are consistent with the view that the hippocampus and PRC play contrasting roles in supporting recollection and familiarity and that these differences are not a result of differences in memory strength. Overall, the combined pupil dilation, eye movement and fMRI data suggest the operation of recognition mechanisms drawing differentially on familiarity and recollection, whose neural bases are distinct within the MTL. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Differential involvement of right and left hemisphere in individual recognition in the domestic chick.

    PubMed

    Vallortigara, G; Andrew, R J

    1994-12-01

    Right hemisphere advantage in individual recognition (as shown by differences between response to strangers and companions) is clear in the domestic chick. Chicks using the left eye (and so, thanks to the complete optic decussation, predominantly the right hemisphere) discriminate between stranger and companion. Chicks using the right eye discriminate less clearly or not at all. The ability of left-eyed chicks to respond to differences between stranger and companion stimuli is associated with a more general ability to detect and respond to novelty: this difference between left- and right-eyed chicks also holds for stimuli which are not social partners. The right hemisphere also shows advantage in tasks with a spatial component (topographical learning; response to change in the spatial context of a stimulus) in the chick, as in humans. Similar specialisations of the two hemispheres are also revealed in tests which involve olfactory cues presented by social partners. The special properties of the left hemisphere are less well established in the chick. Evidence reviewed here suggests that it tends to respond to selected properties of a stimulus and to use them to assign it to a category; such assignment then allows an appropriate response. When exposed to an imprinting stimulus (visual or auditory) a chick begins by using right eye or ear (suggesting left hemisphere control), and then shifts to the left eye or ear (suggesting right hemisphere control), as exposure continues. The left hemisphere here is thus involved whilst behaviour is dominated by vigorous response to releasing stimuli presented by an object. Subsequent learning about the full detailed properties of the stimulus, which is crucial for individual recognition, may explain the shift to right hemisphere control after prolonged exposure to the social stimulus. There is a marked sex difference in choice tests: females tend to choose companions in tests where males choose strangers. It is possible that this difference is specifically caused by stronger motivation to sustain social contact in female chicks, for which there is extensive evidence. However, sex differences in response to change in familiar stimuli are also marked in tests which do not involve social partners. Finally, in both sexes there are two periods during development in which there are age-dependent shifts in bias to use one or other hemisphere. These periods (days 3-5 and 8-11) coincide with two major changes in the social behaviour of chicks reared by a hen in a normal brood. It is argued that one function of these periods is to bring fully into play the hemisphere most appropriate to the type of response to, and learning about, social partners which is needed at particular points in development. Parallels are discussed between the involvement of lateralised processes in the recognition of social partners in chicks and humans. Copyright © 1994. Published by Elsevier B.V.

  20. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via image processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
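    The gaze-angle calculation described above can be sketched from its stated inputs: the two eye corners, the midpupil, and an eye-sphere radius taken from anthropometric data. This is a minimal illustrative sketch of the geometric idea, not the authors' implementation; the coordinate values, the arcsine relation, and the pure-horizontal simplification are assumptions.

    ```python
    import math

    def gaze_angle_deg(corner_left, corner_right, midpupil, eye_radius):
        """Estimate a horizontal gaze angle in degrees.

        corner_left, corner_right, midpupil: (x, y) image coordinates.
        eye_radius: eyeball radius in the same units (e.g. from
        anthropometric averages, as the record suggests).
        """
        # Eye centre approximated as the midpoint of the two corners.
        center_x = (corner_left[0] + corner_right[0]) / 2.0
        # Lateral displacement of the pupil from the eye centre.
        offset = midpupil[0] - center_x
        # On a sphere of radius r, offset = r * sin(angle).
        ratio = max(-1.0, min(1.0, offset / eye_radius))
        return math.degrees(math.asin(ratio))

    # Hypothetical example: pupil shifted 4 px laterally, eye radius 12 px.
    angle = gaze_angle_deg((100, 50), (140, 50), (124, 50), 12.0)
    ```

    Extending to variable head pose, as the paper does, would add a rotation of the eye-centre geometry before this per-eye calculation.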

  1. Age differences in accuracy and choosing in eyewitness identification and face recognition.

    PubMed

    Searcy, J H; Bartlett, J C; Memon, A

    1999-05-01

    Studies of aging and face recognition show age-related increases in false recognitions of new faces. To explore implications of this false alarm effect, we had young and senior adults perform (1) three eyewitness identification tasks, using both target-present and target-absent lineups, and (2) an old/new recognition task in which a study list of faces was followed by a test including old and new faces, along with conjunctions of old faces. Compared with the young, seniors had lower accuracy and higher choosing rates on the lineups, and they also falsely recognized more new faces on the recognition test. However, after screening for perceptual processing deficits, there was no age difference in false recognition of conjunctions, or in discriminating old faces from conjunctions. We conclude that the false alarm effect generalizes to lineup identification, but does not extend to conjunction faces. The findings are consistent with age-related deficits in recollection of context and relative age invariance in perceptual integrative processes underlying the experience of familiarity.

  2. A quasi-randomized feasibility pilot study of specific treatments to improve emotion recognition and mental-state reasoning impairments in schizophrenia.

    PubMed

    Marsh, Pamela Jane; Polito, Vince; Singh, Subba; Coltheart, Max; Langdon, Robyn; Harris, Anthony W

    2016-10-24

    Impaired ability to make inferences about what another person might think or feel (i.e., social cognition impairment) is recognised as a core feature of schizophrenia and a key determinant of the poor social functioning that characterizes this illness. The development of treatments to target social cognitive impairments as a causal factor of impaired functioning in schizophrenia is a high priority. In this study, we investigated the acceptability, feasibility, and limited efficacy of 2 programs targeted at specific domains of social cognition in schizophrenia: "SoCog" Mental-State Reasoning Training (SoCog-MSRT) and "SoCog" Emotion Recognition Training (SoCog-ERT). Thirty-one participants with schizophrenia or schizoaffective disorder were allocated to either SoCog-MSRT (n = 19) or SoCog-ERT (n = 12). Treatment comprised 12 twice-weekly sessions over 6 weeks. Participants underwent assessments of social cognition, neurocognition and symptoms at baseline, post-training and 3 months after completing training. Attendance at training sessions was high, averaging 89.29% in the SoCog-MSRT groups and 85.42% in the SoCog-ERT groups. Participants also reported the 2 programs as enjoyable and beneficial. Both SoCog-MSRT and SoCog-ERT groups showed increased scores on a false-belief reasoning task and the Reading the Mind in the Eyes test. The SoCog-MSRT group also showed reduced personalising attributional biases in a small number of participants, while the SoCog-ERT group showed improved emotion recognition. The results are promising and support the feasibility and acceptability of the 2 SoCog programs, as well as limited efficacy in improving social cognitive abilities in schizophrenia. There is also some evidence that skills for the recognition of basic facial expressions need specific training. Australian New Zealand Clinical Trials Registry ACTRN12613000978763. Retrospectively registered 3/09/2013.

  3. The influence of bilingualism on the preference for the mouth region of dynamic faces.

    PubMed

    Ayneto, Alba; Sebastian-Galles, Nuria

    2017-01-01

    Bilingual infants show an extended period of looking at the mouth of talking faces, which provides them with additional articulatory cues that can be used to boost the challenging situation of learning two languages (Pons, Bosch & Lewkowicz, 2015). However, the eye region also provides fundamental cues for emotion perception and recognition, as well as communication. Here, we explored whether the adaptations resulting from learning two languages are specific to linguistic content or if they also influence the focus of attention when looking at dynamic faces. We recorded the eye gaze of bilingual and monolingual infants (8- and 12-month-olds) while watching videos of infants and adults portraying different emotional states (neutral, crying, and laughing). When looking at infant faces, bilinguals looked longer at the mouth region as compared to monolinguals regardless of age. However, when presented with adult faces, 8-month-old bilingual infants looked longer at the mouth region and less at the eye region compared to 8-month-old monolingual infants, but no effect of language exposure was found at 12 months of age. These findings suggest that the bias to the mouth region in bilingual infants at 8 months of age can be generalized to other audiovisual dynamic faces that do not contain linguistic information. We discuss the potential implications of such bias in early social and communicative development. © 2016 John Wiley & Sons Ltd.

  4. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
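    The final classification step described above, a nearest-neighbour classifier with a cosine similarity measure over Gabor-derived feature vectors, can be sketched as follows. This is an illustrative sketch, not the system's code; the feature values and expression labels are invented, and the Gabor filtering that would produce the feature vectors is omitted.

    ```python
    import numpy as np

    def cosine_similarity(a, b):
        """Cosine of the angle between two feature vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def classify_expression(feature_vec, models):
        """Return the label of the most similar stored face model.

        models: dict mapping an expression label to a prototype
        feature vector (in the paper, Gabor responses sampled on a
        regular grid over the normalized face image).
        """
        return max(models, key=lambda lbl: cosine_similarity(feature_vec, models[lbl]))

    # Invented prototype vectors for two of the seven states.
    models = {
        "neutral": np.array([1.0, 0.0, 0.5]),
        "happy":   np.array([0.2, 1.0, 0.1]),
    }
    label = classify_expression(np.array([0.9, 0.1, 0.6]), models)
    ```

    Cosine similarity ignores overall vector magnitude, which is one reason it is a common choice when filter-response energy varies with illumination.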

  5. 20 CFR 408.1205 - How can a State have SSA administer its State recognition payment program?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... recognition payment program? 408.1205 Section 408.1205 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SPECIAL BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition Payments § 408.1205 How can a State have SSA administer its State recognition payment program? A State (or...

  6. Eye safety related to near infrared radiation exposure to biometric devices.

    PubMed

    Kourkoumelis, Nikolaos; Tzaphlidou, Margaret

    2011-03-01

    Biometrics has become an emerging field of technology due to its intrinsic security features concerning the identification of individuals by means of measurable biological characteristics. Two of the most promising biometric modalities are iris and retina recognition, which primarily use nonionizing radiation in the infrared region. Illumination of the eye is achieved by infrared light emitting diodes (LEDs). Although few LED sources are capable of causing direct eye damage, since they emit incoherent light, there is growing concern about the possible use of LED arrays that might pose a potential threat. Exposure to intense coherent infrared radiation has been proven to have significant effects on living tissues. The purpose of this study is to explore the biological effects arising from exposing the eye to near infrared radiation with reference to international legislation.

  7. Social emotion recognition, social functioning, and attempted suicide in late-life depression.

    PubMed

    Szanto, Katalin; Dombrovski, Alexandre Y; Sahakian, Barbara J; Mulsant, Benoit H; Houck, Patricia R; Reynolds, Charles F; Clark, Luke

    2012-03-01

    Lack of feeling connected and poor social problem solving have been described in suicide attempters. However, cognitive substrates of this apparent social impairment in suicide attempters remain unknown. One possible deficit, the inability to recognize others' complex emotional states, has been observed not only in disorders characterized by prominent social deficits (autism-spectrum disorders and frontotemporal dementia) but also in depression and normal aging. This study assessed the relationship between social emotion recognition, problem solving, social functioning, and attempted suicide in late-life depression. There were 90 participants: 24 older depressed suicide attempters, 38 nonsuicidal depressed elders, and 28 comparison subjects with no psychiatric history. We compared performance on the Reading the Mind in the Eyes test and measures of social networks, social support, social problem solving, and chronic interpersonal difficulties in these three groups. Suicide attempters committed significantly more errors in social emotion recognition and showed poorer global cognitive performance than elders with no psychiatric history. Attempters had restricted social networks: they were less likely to talk to their children, had fewer close friends, and did not engage in volunteer activities, compared to nonsuicidal depressed elders and those with no psychiatric history. They also reported a pattern of struggle against others and hostility in relationships, felt a lack of social support, perceived social problems as impossible to resolve, and displayed a careless/impulsive approach to problems. Suicide attempts in depressed elders were associated with poor social problem solving, constricted social networks, and disruptive interpersonal relationships. Impaired social emotion recognition in the suicide attempter group was related.

  8. On the invariance of EEG-based signatures of individuality with application in biometric identification.

    PubMed

    Yunqi Wang; Najafizadeh, Laleh

    2016-08-01

    One of the main challenges in EEG-based biometric systems is to extract reliable signatures of individuality from recorded EEG data that are also invariant over time. In this paper, we investigate the invariability of features that are extracted based on the spatial distribution of the spectral power of EEG data corresponding to 2-second eyes-closed resting-state (ECRS) recordings, in different scenarios. Eyes-closed resting-state EEG signals from 4 healthy adults are recorded in two different sessions with an interval of at least one week between sessions. The performance in terms of correct recognition rate (CRR) is examined when the training and testing datasets are chosen from the same recording session, and when the training and testing datasets are chosen from different sessions. It is shown that a CRR of 92% can be achieved based on the proposed features when the training and testing datasets are taken from different sessions. To reduce the number of recording channels, principal component analysis (PCA) is also employed to identify channels that carry the most discriminatory information across individuals. High CRR is obtained based on the data from channels mostly covering the occipital region. The results suggest that features based on the spatial distribution of the spectral power of short-time (e.g. 2-second) ECRS recordings have great potential in EEG-based biometric identification systems.
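    A minimal sketch of the kind of pipeline described above: per-channel spectral power over a 2-second eyes-closed epoch forms the feature vector, and a test epoch is identified by its nearest enrolled template. All specifics here (sampling rate, the alpha band, the distance metric, the synthetic signals) are assumptions for illustration, not the paper's parameters.

    ```python
    import numpy as np

    FS = 250  # assumed sampling rate, Hz

    def spectral_power_features(epoch):
        """epoch: (n_channels, n_samples) array for one 2-s EEG segment.
        Returns per-channel power in the 8-12 Hz alpha band, which
        dominates occipital eyes-closed recordings."""
        n = epoch.shape[1]
        freqs = np.fft.rfftfreq(n, d=1.0 / FS)
        psd = np.abs(np.fft.rfft(epoch, axis=1)) ** 2 / n
        band = (freqs >= 8) & (freqs <= 12)
        return psd[:, band].sum(axis=1)

    def identify(test_epoch, templates):
        """templates: dict subject_id -> enrolled feature vector."""
        f = spectral_power_features(test_epoch)
        return min(templates, key=lambda s: np.linalg.norm(f - templates[s]))

    # Synthetic demo: subject A has strong 10 Hz power on channel 0,
    # subject B on channel 1 (channels simply swapped).
    t = np.arange(2 * FS) / FS
    epoch_a = np.vstack([np.sin(2 * np.pi * 10 * t), 0.1 * np.sin(2 * np.pi * 10 * t)])
    epoch_b = epoch_a[::-1]
    templates = {"A": spectral_power_features(epoch_a), "B": spectral_power_features(epoch_b)}
    who = identify(epoch_a + 0.01, templates)
    ```

    The paper's PCA channel-selection step would operate on such feature vectors across subjects to keep only the most discriminative channels.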

  9. Opening a Window into Reading Development: Eye Movements’ Role Within a Broader Literacy Research Framework

    PubMed Central

    Miller, Brett; O’Donnell, Carol

    2013-01-01

    The cumulative body of eye movement research provides significant insight into how readers process text. The heart of this work spans roughly 40 years reflecting the maturity of both the topics under study and experimental approaches used to investigate reading. Recent technological advancements offer increased flexibility to the field providing the potential to more concertedly study reading and literacy from an individual differences perspective. Historically, eye movement research focused far less on developmental issues related to individual differences in reading; however, this issue and the broader change it represents signal a meaningful transition inclusive of individual differences. The six papers in this special issue signify the recent, increased attention to and recognition of eye movement research’s transition to emphasize individual differences in reading while appreciating early contributions (e.g., Rayner, 1986) in this direction. We introduce these six papers and provide some historical context for the use of eye movement methodology to examine reading and context for the eye movement field’s early transition to examining individual differences, culminating in future research recommendations. PMID:24391304

  10. Opening a Window into Reading Development: Eye Movements' Role Within a Broader Literacy Research Framework.

    PubMed

    Miller, Brett; O'Donnell, Carol

    2013-01-01

    The cumulative body of eye movement research provides significant insight into how readers process text. The heart of this work spans roughly 40 years reflecting the maturity of both the topics under study and experimental approaches used to investigate reading. Recent technological advancements offer increased flexibility to the field providing the potential to more concertedly study reading and literacy from an individual differences perspective. Historically, eye movement research focused far less on developmental issues related to individual differences in reading; however, this issue and the broader change it represents signal a meaningful transition inclusive of individual differences. The six papers in this special issue signify the recent, increased attention to and recognition of eye movement research's transition to emphasize individual differences in reading while appreciating early contributions (e.g., Rayner, 1986) in this direction. We introduce these six papers and provide some historical context for the use of eye movement methodology to examine reading and context for the eye movement field's early transition to examining individual differences, culminating in future research recommendations.

  11. Visual adaptation dominates bimodal visual-motor action adaptation

    PubMed Central

    de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.

    2016-01-01

    A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically (akin to simultaneous execution and observation of actions in social interactions), adaptation effects were modulated only by visual, not motor, adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781

  12. Keeping an eye on the truth? Pupil size changes associated with recognition memory.

    PubMed

    Heaver, Becky; Hutton, Sam B

    2011-05-01

    During recognition memory tests participants' pupils dilate more when they view old items compared to novel items. We sought to replicate this "pupil old/new effect" and to determine its relationship to participants' responses. We compared changes in pupil size during recognition when participants were given standard recognition memory instructions, instructions to feign amnesia, and instructions to report all items as new. Participants' pupils dilated more to old items compared to new items under all three instruction conditions. This finding suggests that the increase in pupil size that occurs when participants encounter previously studied items is not under conscious control. Given that pupil size can be reliably and simply measured, the pupil old/new effect may have potential in clinical settings as a means for determining whether patients are feigning memory loss.
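    The pupil old/new effect reported above amounts to a difference in mean pupil size between trials with previously studied (old) items and trials with novel items. A hedged sketch of that comparison follows; the data values are invented for illustration.

    ```python
    import numpy as np

    def pupil_old_new_effect(pupil_by_trial, is_old):
        """Mean pupil size on old-item trials minus mean on new-item trials.

        pupil_by_trial: per-trial mean pupil sizes.
        is_old: boolean mask, True where the trial's item was studied.
        A positive value indicates larger dilation to old items, the
        pattern described in the record above.
        """
        pupil = np.asarray(pupil_by_trial, dtype=float)
        mask = np.asarray(is_old, dtype=bool)
        return pupil[mask].mean() - pupil[~mask].mean()

    # Four invented trials: two old items, two new items.
    effect = pupil_old_new_effect([4.1, 3.8, 4.3, 3.7], [True, False, True, False])
    ```

    In practice this per-participant statistic would be computed for each instruction condition and tested across participants.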

  13. Automated facial recognition and candidate list rank change of computer generated facial approximations generated with multiple eye orb positions.

    PubMed

    Parks, Connie L; Monson, Keith L

    2016-09-01

    Expanding on research previously reported by the authors, this study further examines the recognizability of ReFace facial approximations generated with the following eye orb positions: (i) centrally within the bony eye socket, (ii) 1.0 mm superior and 2.0 mm lateral relative to center, and (iii) 1.0 mm superior and 2.5 mm lateral relative to center. Overall, 81% of the test subjects' approximation ranks improved with the use of either of the two supero-lateral eye orbs. Highly significant performance differences (p<0.01) were observed between the approximations with centrally positioned eye orbs (i) and approximations with the eye orbs placed in the supero-lateral positions (ii and iii). Noteworthy was the observation that in all cases when the best rank for an approximation was obtained with the eye orbs in position (iii), the second best rank was achieved with the eye orbs in position (ii). A similar pattern was also observed when the best rank was obtained with the eye orbs in position (ii), with 60% of the second best ranks observed in position (iii). It is argued, therefore, that an approximation constructed with the eye orbs placed in either of the two supero-lateral positions may be more effective and operationally informative than one with centrally positioned orbs. Copyright © 2016. Published by Elsevier Ireland Ltd.

  14. Development towards compact nitrocellulose interferometric biochips for dry eye diagnosis based on MMP9, S100A6 and CST4 biomarkers using a Point-of-Care device

    NASA Astrophysics Data System (ADS)

    Santamaría, Beatriz; Laguna, María. Fe; López-Romero, David; López-Hernandez, A.; Sanza, F. J.; Lavín, A.; Casquel, R.; Maigler, M.; Holgado, M.

    2018-02-01

    A novel compact optical biochip based on a thin-layer-sensing BICELL surface of nitrocellulose is used for in-situ label-free detection of dry eye disease (DED). In this work, the development of a compact biosensor that provides quantitative diagnosis from a limited sample volume is reported. The designed sensors can be analyzed with an optical integrated Point-of-Care read-out system based on the "Increase Relative Optical Power" principle, which enhances the performance and Limit of Detection. Several proteins involved in dry eye dysfunction have been validated as biomarkers. The presented biochip analyzes three of those biomarkers: MMP9, S100A6 and CST4. Nitrocellulose-based BICELLs permit the immobilization of antibodies for the recognition of each biomarker. The optical response obtained from the biosensor through the read-out platform is capable of specifically recognizing the desired proteins in the concentration ranges for control eyes (CE) and dry eye syndrome (DES). The preliminary results obtained will allow the development of a dry eye detection device useful in the area of ophthalmology and applicable to other possible diseases related to eye dysfunction.

  15. Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology

    PubMed Central

    Fernandez-Mendez, Felipe; Barcala-Furelos, Roberto; Padron-Cabo, Alexis; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio

    2017-01-01

    An anaphylactic shock is a time-critical emergency situation. Decision-making during emergencies is an important responsibility but difficult to study. Eye-tracking technology allows us to identify the visual patterns involved in decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was made to evaluate the identification and treatment of anaphylaxis. 50 subjects were randomly assigned to four groups: three groups watching different training videos with content supervised by health care personnel, and one control group who received face-to-face training during paediatric practice. To evaluate the learning, a simulation scenario represented by an anaphylaxis victim was designed. A device capturing eye movements, together with expert evaluation, was used to assess performance. The subjects who underwent paediatric face-to-face training achieved better and faster recognition of the anaphylaxis. They also used the adrenaline injector with better precision and fewer mistakes, and they needed a smaller number of visual fixations to recognise the anaphylaxis and to make the decision to inject epinephrine. Analysis of the different video formats produced mixed results; therefore, they should be tested for usability before implementation. PMID:28758128

  16. Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology.

    PubMed

    Fernandez-Mendez, Felipe; Saez-Gallego, Nieves Maria; Barcala-Furelos, Roberto; Abelairas-Gomez, Cristian; Padron-Cabo, Alexis; Perez-Ferreiros, Alexandra; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio

    2017-01-01

    An anaphylactic shock is a time-critical emergency situation. Decision-making during emergencies is an important responsibility but difficult to study. Eye-tracking technology allows us to identify the visual patterns involved in decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was made to evaluate the identification and treatment of anaphylaxis. 50 subjects were randomly assigned to four groups: three groups watching different training videos with content supervised by health care personnel, and one control group who received face-to-face training during paediatric practice. To evaluate the learning, a simulation scenario represented by an anaphylaxis victim was designed. A device capturing eye movements, together with expert evaluation, was used to assess performance. The subjects who underwent paediatric face-to-face training achieved better and faster recognition of the anaphylaxis. They also used the adrenaline injector with better precision and fewer mistakes, and they needed a smaller number of visual fixations to recognise the anaphylaxis and to make the decision to inject epinephrine. Analysis of the different video formats produced mixed results; therefore, they should be tested for usability before implementation.

  17. Children with Autism Spectrum Disorder scan own-race faces differently from other-race faces.

    PubMed

    Yi, Li; Quinn, Paul C; Fan, Yuebo; Huang, Dan; Feng, Cong; Joseph, Lisa; Li, Jiao; Lee, Kang

    2016-01-01

    It has been well documented that people recognize and scan other-race faces differently from faces of their own race. The current study examined whether this cross-racial difference in face processing found in the typical population also exists in individuals with Autism Spectrum Disorder (ASD). Participants included 5- to 10-year-old children with ASD (n=29), typically developing (TD) children matched on chronological age (n=29), and TD children matched on nonverbal IQ (n=29). Children completed a face recognition task in which they were asked to memorize and recognize both own- and other-race faces while their eye movements were tracked. We found no recognition advantage for own-race faces relative to other-race faces in any of the three groups. However, eye-tracking results indicated that, similar to TD children, children with ASD exhibited a cross-racial face-scanning pattern: they looked at the eyes of other-race faces longer than at those of own-race faces, whereas they looked at the mouth of own-race faces longer than at that of other-race faces. The findings suggest that although children with ASD have difficulty with processing some aspects of faces, their ability to process face race information is relatively spared. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. An ancient eye test--using the stars.

    PubMed

    Bohigian, George M

    2008-01-01

    Vision testing in ancient times was as important as it is today. The predominant vision testing in some cultures was the recognition and identification of constellations and celestial bodies of the night sky. A common ancient naked eye test used the double star of the Big Dipper in the constellation Ursa Major or the Big Bear. The second star from the end of the handle of the Big Dipper is an optical double star. The ability to perceive this separation of these two stars, Mizar and Alcor, was considered a test of good vision and was called the "test" or presently the Arab Eye Test. This article is the first report of the correlation of this ancient eye test to the 20/20 line in the current Snellen visual acuity test. This article describes the astronomy, origin, history, and the practicality of this test and how it correlates with the present day Snellen visual acuity test.

  19. Septic safe interactions with smart glasses in health care.

    PubMed

    Czuszynski, K; Ruminski, J; Kocejko, T; Wtorek, J

    2015-08-01

    In this paper, septic-safe methods of interaction with smart glasses are presented, with health care environment applications in mind. The main focus is on the capabilities of an optical, proximity-based gesture sensor and eye-tracker input systems. The design of both interfaces is being adapted to the open smart glasses platform that is being developed under the eGlasses project. Preliminary results obtained from the proximity sensor show that the recognition of different static and dynamic hand gestures is promising. The experiments performed for the eye-tracker module showed the possibility of interaction with a simple Graphical User Interface provided by the near-to-eye display. This research leads to the conclusion that collaborative interfaces are attractive for interaction with smart glasses.

  20. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    PubMed

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were implemented in order to assess potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is an SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but does show visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of both disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition

    PubMed Central

    Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135

  2. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 x 10(-4) on the available data sets.
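The verification step the abstract describes thresholds a distance metric between iris codes. As a generic illustration only (a plain masked fractional Hamming distance, not the authors' product-of-sum variant or their DCT feature extraction), the core comparison might be sketched as:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid (unmasked) in both codes."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

# Toy 16-bit codes for illustration; real iris codes use ~2048 bits.
a = np.array([1,0,1,1,0,0,1,0,1,1,0,1,0,0,1,1], dtype=bool)
b = np.array([1,0,1,0,0,0,1,0,1,1,0,1,0,1,1,1], dtype=bool)
m = np.ones(16, dtype=bool)
print(hamming_distance(a, b, m, m))  # 2 differing bits over 16 valid -> 0.125
```

For verification, a variable threshold on this score would then trade off FAR against FRR, as in the abstract.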

  3. The use of illustration to improve older adults' comprehension of health-related information: is it helpful?

    PubMed

    Liu, Chiung-ju; Kemper, Susan; McDowd, Joan

    2009-08-01

    To examine whether explanatory illustrations can improve older adults' comprehension of written health information. Six short health-related texts were selected from websites and pamphlets. Young and older adults were randomly assigned to read health-related texts alone or texts accompanied by explanatory illustrations. Eye movements were recorded while reading. Word recognition, text comprehension, and comprehension of the illustrations were assessed after reading. Older adults performed as well as or better than young adults on the word recognition and text comprehension measures. However, older adults performed less well than young adults on the illustration comprehension measures. Analysis of readers' eye movements showed that older adults spent more time reading illustration-related phrases and fixating on the illustrations than did young adults, yet had poorer comprehension of the illustrations. Older adults might not benefit from text illustrations because illustrations can be difficult to integrate with the text. Health practitioners should not assume that illustrations will increase older adults' comprehension of health information.

  4. Incidence of secondary glaucoma in behcet disease.

    PubMed

    Elgin, Ufuk; Berker, Nilufer; Batman, Aygen

    2004-12-01

To determine the incidence of secondary glaucoma in Behcet disease. A total of 230 eyes of 129 patients with Behcet disease were examined in the uveitis and glaucoma clinics of Ankara Social Security Eye Hospital between January 1997 and September 2002. The data from all patients were investigated both retrospectively and prospectively. The mean age of the 129 patients was 34.2 +/- 7.4 years (range, 18 to 55 years). In 22 patients (17%), the disease was diagnosed on the basis of the ocular findings, while in the remaining 107 patients (83%), the period between the diagnosis of Behcet disease and the onset of the ocular symptoms was 23.3 +/- 17 months (range, 1 month to 5.3 years); 122 eyes (53%) had episodes of acute recurrent iridocyclitis, while 108 eyes (47%) developed chronic posterior uveitis, including vitreitis, retinitis, vasculitis, or optic nerve involvement. Secondary glaucoma was diagnosed in 25 eyes (10.9%): 11 eyes (44%) with steroid- or inflammation-induced open angle glaucoma, 6 eyes (24%) with partial angle-closure glaucoma and peripheral anterior synechiae, 5 eyes (20%) with angle-closure glaucoma, peripheral anterior synechiae, and pupil block, and 3 eyes (12%) with neovascular glaucoma. The treatments included YAG-laser iridotomy in 5 eyes, diode-laser cyclodestruction in 3 eyes, primary trabeculectomy with mitomycin-C in 4 eyes, secondary trabeculectomy with mitomycin-C in 2 eyes, Ahmed valve implantation in 2 eyes, and cyclocryotherapy in 3 eyes. We suggest that secondary glaucoma is a common and serious complication of Behcet disease. It develops as a result of multiple factors, generally triggered by recurrent intraocular inflammation. Early recognition and treatment of these factors are of vital importance to avoid visual morbidity.

  5. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713

  6. Shape and Color Features for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.; Stubberud, Allen R.

    2012-01-01

A bio-inspired shape feature of an object of interest emulates the integration of saccadic eye movement and the horizontal layer of the vertebrate retina for object recognition search, in which a single object is processed at a time. An optimal computational model for shape-extraction-based principal component analysis (PCA) was also developed to reduce processing time and enable a real-time adaptive system capability. A color feature of the object is employed, as color segmentation, to strengthen shape-based recognition in heterogeneous environments, where a single technique, shape or color alone, may run into difficulties. To make the system effective, an adaptive architecture and autonomous mechanism were developed to recognize and adapt to the shape and color features of a moving object. Object recognition based on bio-inspired shape and color features can be effective for recognizing a person of interest in a heterogeneous environment where a single technique proves insufficient. Moreover, this work also demonstrates the mechanism and architecture of an autonomous adaptive system, enabling a realistic system for practical use in the future.
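Color segmentation gating a shape matcher, as the abstract describes, can be sketched with a simple per-pixel tolerance mask; the target colour and threshold below are hypothetical, and this is a generic sketch rather than the authors' segmentation method:

```python
import numpy as np

def color_mask(rgb, target, tol=30):
    """Keep pixels whose every channel is within tol of the target
    colour; downstream shape matching would run only inside this mask."""
    diff = np.abs(rgb.astype(int) - np.asarray(target, dtype=int))
    return diff.max(axis=2) <= tol

# A 4x4 toy image: left half red-ish, right half blue-ish.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = (200, 30, 30)
img[:, 2:] = (30, 30, 200)
mask = color_mask(img, target=(210, 40, 40), tol=30)
print(mask.sum())  # 8 red-ish pixels pass the gate
```

Restricting the shape feature to the masked region is what lets a colour cue rescue recognition when shape alone is ambiguous.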

  7. Social Cognition Psychometric Evaluation: Results of the Final Validation Study.

    PubMed

    Pinkham, Amy E; Harvey, Philip D; Penn, David L

    2018-06-06

    Social cognition is increasingly recognized as an important treatment target in schizophrenia; however, the dearth of well-validated measures that are suitable for use in clinical trials remains a significant limitation. The Social Cognition Psychometric Evaluation (SCOPE) study addresses this need by systematically evaluating the psychometric properties of promising measures. In this final phase of SCOPE, eight new or modified tasks were evaluated. Stable outpatients with schizophrenia (n = 218) and healthy controls (n = 154) completed the battery at baseline and 2-4 weeks later across three sites. Tasks included the Bell Lysaker Emotion Recognition Task (BLERT), Penn Emotion Recognition Task (ER-40), Reading the Mind in the Eyes Task (Eyes), The Awareness of Social Inferences Test (TASIT), Hinting Task, Mini Profile of Nonverbal Sensitivity (MiniPONS), Social Attribution Task-Multiple Choice (SAT-MC), and Intentionality Bias Task (IBT). BLERT and ER-40 modifications included response time and confidence ratings. The Eyes task was modified to include definitions of terms and TASIT to include response time. Hinting was scored with more stringent criteria. MiniPONS, SAT-MC, and IBT were new to this phase. Tasks were evaluated on (1) test-retest reliability, (2) utility as a repeated measure, (3) relationship to functional outcome, (4) practicality and tolerability, (5) sensitivity to group differences, and (6) internal consistency. Hinting, BLERT, and ER-40 showed the strongest psychometric properties and are recommended for use in clinical trials. Eyes, TASIT, and IBT showed somewhat weaker psychometric properties and require further study. MiniPONS and SAT-MC showed poorer psychometric properties that suggest caution for their use in clinical trials.

  8. Losing face: impaired discrimination of featural and configural information in the mouth region of an inverted face.

    PubMed

    Tanaka, James W; Kaiser, Martha D; Hagen, Simen; Pierce, Lara J

    2014-05-01

    Given that all faces share the same set of features-two eyes, a nose, and a mouth-that are arranged in similar configuration, recognition of a specific face must depend on our ability to discern subtle differences in its featural and configural properties. An enduring question in the face-processing literature is whether featural or configural information plays a larger role in the recognition process. To address this question, the face dimensions task was designed, in which the featural and configural properties in the upper (eye) and lower (mouth) regions of a face were parametrically and independently manipulated. In a same-different task, two faces were sequentially presented and tested in their upright or in their inverted orientation. Inversion disrupted the perception of featural size (Exp. 1), featural shape (Exp. 2), and configural changes in the mouth region, but it had relatively little effect on the discrimination of featural size and shape and configural differences in the eye region. Inversion had little effect on the perception of information in the top and bottom halves of houses (Exp. 3), suggesting that the lower-half impairment was specific to faces. Spatial cueing to the mouth region eliminated the inversion effect (Exp. 4), suggesting that participants have a bias to attend to the eye region of an inverted face. The collective findings from these experiments suggest that inversion does not differentially impair featural or configural face perceptions, but rather impairs the perception of information in the mouth region of the face.

  9. Recognizing Dynamic Faces in Malaysian Chinese Participants.

    PubMed

    Tan, Chrystalle B Y; Sheppard, Elizabeth; Stephen, Ian D

    2016-03-01

High performance levels in face recognition studies do not seem to be replicable in real-life situations, possibly because of the artificial nature of laboratory studies. Recognizing faces in natural social situations may be a more challenging task, as it involves constant examination of dynamic facial motions that may alter facial structure vital to the recognition of unfamiliar faces. Because of these inconsistencies in recognition performance, the current study developed stimuli that closely represent natural social situations, to yield results that more accurately reflect observers' performance in real-life settings. Naturalistic stimuli of African, East Asian, and Western Caucasian actors introducing themselves were presented to investigate Malaysian Chinese participants' recognition sensitivity and looking strategies when performing a face recognition task. When perceiving dynamic facial stimuli, participants fixated most on the nose, followed by the mouth, then the eyes. Focusing on the nose may have enabled participants to gain a more holistic view of actors' facial and head movements, which proved to be beneficial in recognizing identities. Participants recognized all three races of faces equally well. The current results, which differ from a previous static face recognition study, may be a more accurate reflection of observers' recognition abilities and looking strategies. © The Author(s) 2015.

  10. The recognition of emotional expression in prosopagnosia: decoding whole and part faces.

    PubMed

    Stephan, Blossom Christa Maree; Breen, Nora; Caine, Diana

    2006-11-01

    Prosopagnosia is currently viewed within the constraints of two competing theories of face recognition, one highlighting the analysis of features, the other focusing on configural processing of the whole face. This study investigated the role of feature analysis versus whole face configural processing in the recognition of facial expression. A prosopagnosic patient, SC made expression decisions from whole and incomplete (eyes-only and mouth-only) faces where features had been obscured. SC was impaired at recognizing some (e.g., anger, sadness, and fear), but not all (e.g., happiness) emotional expressions from the whole face. Analyses of his performance on incomplete faces indicated that his recognition of some expressions actually improved relative to his performance on the whole face condition. We argue that in SC interference from damaged configural processes seem to override an intact ability to utilize part-based or local feature cues.

  11. A gallery approach for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Yoldash, Rashiduddin; Boehnen, Christopher

    2015-05-01

It has been shown that the Hamming distance score between frontal and off-angle iris images of the same eye differs in an iris recognition system. This variation in Hamming distance score is caused by many factors, such as image acquisition angle, occlusion, pupil dilation, and the limbus effect. In this paper, we first study the effect of angle variation between the iris plane and the image acquisition system. We show how the Hamming distance changes for different off-angle iris images even when they come from the same iris, and observe that increasing the acquisition angle between compared iris images increases the Hamming distance. Second, we propose a new technique for off-angle iris recognition that creates a gallery of iris images at different off-angles (such as 0, 10, 20, 30, 40, and 50 degrees) and compares each probe image with these gallery images. We show the accuracy of the gallery approach for off-angle iris recognition.
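The gallery approach amounts to scoring a probe against codes enrolled at several acquisition angles and keeping the best match. A minimal sketch, assuming binary iris codes with validity masks (the toy codes and flip counts below are illustrative, not the paper's data):

```python
import numpy as np

def fractional_hd(a, b, mask_a, mask_b):
    """Fractional Hamming distance over bits valid in both codes."""
    valid = mask_a & mask_b
    return ((a ^ b) & valid).sum() / valid.sum()

def gallery_match(probe, probe_mask, gallery):
    """Score a probe against codes enrolled at several acquisition
    angles; return the best-matching angle and its (lowest) distance."""
    scores = {angle: fractional_hd(probe, code, probe_mask, mask)
              for angle, (code, mask) in gallery.items()}
    best = min(scores, key=scores.get)
    return best, scores[best]

# Toy 64-bit codes; real iris codes are far longer.
rng = np.random.default_rng(1)
enrolled = rng.integers(0, 2, 64).astype(bool)
mask = np.ones(64, dtype=bool)
near = enrolled.copy(); near[:8] ^= True   # 8 flipped bits -> HD 0.125
far = enrolled.copy(); far[:24] ^= True    # 24 flipped bits -> HD 0.375
gallery = {0: (enrolled, mask), 30: (near, mask), 50: (far, mask)}
angle, score = gallery_match(enrolled, mask, gallery)
print(angle, score)  # the 0-degree entry matches exactly: 0 0.0
```

The min over gallery angles is what absorbs the angle-dependent growth in Hamming distance that the abstract reports.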

  12. A Toolkit for Eye Recognition of LAMOST Spectroscopy

    NASA Astrophysics Data System (ADS)

    Yuan, H.; Zhang, H.; Zhang, Y.; Lei, Y.; Dong, Y.; Zhao, Y.

    2014-05-01

The Large sky Area Multi-Object fiber Spectroscopic Telescope (LAMOST, also named the Guo Shou Jing Telescope) finished its pilot survey and began the regular survey at the end of September 2012. Millions of targets have already been observed, including thousands of quasar candidates. Because of the difficulty of automatically identifying quasar spectra, eye recognition is always necessary and efficient; however, identifying massive numbers of spectra by eye is a huge job. In order to improve the efficiency and effectiveness of spectral identification, a toolkit for eye recognition of LAMOST spectroscopy was developed. Spectral cross-correlation templates from the Sloan Digital Sky Survey (SDSS) are applied as references, including O star, O/B transition star, B star, A star, F/A transition star, F star, G star, K star, M1 star, M3 star, M5 star, M8 star, L1 star, magnetic white dwarf, carbon star, white dwarf, B white dwarf, low-metallicity K sub-dwarf, "early-type" galaxy, galaxy, "later-type" galaxy, Luminous Red Galaxy, QSO, QSO with some BAL activity, and high-luminosity QSO. By adjusting the redshift and flux ratio of the template spectra in an interactive graphical interface, the spectral type of the target can be discriminated in an easy and feasible way, and the redshift is estimated at the same time with a precision of about one part in a thousand. The advantage of the tool in dealing with low-quality spectra is indicated. Spectra from the pilot survey of LAMOST are used as examples, and spectra from SDSS are also tested for comparison. Target spectra in both image format and FITS format are supported. For convenience, several spectra-access methods are provided. All the spectra from the LAMOST pilot survey can be located and acquired via VOTable files on the internet, as suggested by the International Virtual Observatory Alliance (IVOA). After the construction of the Simple Spectral Access Protocol (SSAP) service by the Chinese Astronomical Data Center (CAsDC), spectra can be obtained and analyzed in a more efficient way.
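The interactive step described, adjusting a template's redshift and flux scale until it overlays the observed spectrum, can be sketched numerically. The function names and toy spectrum below are illustrative, not part of the LAMOST toolkit:

```python
import numpy as np

def apply_redshift(wave_rest, z):
    """Observed wavelength of a rest-frame template at redshift z."""
    return wave_rest * (1.0 + z)

def residual(wave_obs, flux_obs, wave_rest, flux_tmpl, z, scale):
    """Mean squared mismatch after shifting and rescaling the template
    onto the observed wavelength grid (lower = better overlay)."""
    interp = np.interp(wave_obs, apply_redshift(wave_rest, z), flux_tmpl)
    return float(np.mean((flux_obs - scale * interp) ** 2))

# Toy check: a single fake emission line observed at redshift 0.1
# with twice the template flux.
wave_rest = np.linspace(4000, 7000, 600)
template = np.exp(-0.5 * ((wave_rest - 5000) / 20) ** 2)
wave_obs = wave_rest * 1.1
flux_obs = 2.0 * np.exp(-0.5 * ((wave_obs - 5500) / 22) ** 2)
# The residual is minimized at the true (z, scale) pair.
print(residual(wave_obs, flux_obs, wave_rest, template, 0.1, 2.0))
```

In an interactive tool, the user's slider adjustments of z and scale play the role of minimizing this residual by eye.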

  13. [Synthesis and Spectroscopic Study of a Chemosensor for Naked Eye Recognition of Cu2+ and Hg2+].

    PubMed

    Cao, Li; Qian, Ya-ao; Huang, Yan; Cao, Juan; Jia, Chun-man; Liu, Chun-ling; Zhang, Qi; Lu, Zheng-rong

    2015-07-01

Compound L, as a procedural sensor for the detection of Cu2+ and Hg2+, was designed and synthesized based on a coumarin-modified rhodamine derivative. The structure of compound L was characterized by NMR, high-resolution mass spectrometry, and infrared spectroscopy. Its sensing behavior toward various metal ions was investigated by absorbance methods. The study found that L had good selectivity and sensitivity for Cu2+. Upon addition of various metal ions (Zn2+, Hg2+, Cu2+, Fe3+, Cd2+, Co2+, Ni2+, Mg2+, Ca2+, Al3+, La3+, K+, Na+, Mn2+, Pb2+ and Ag+), only Cu2+ induced a visible change of the solution from colourless to pink and the appearance of a new absorption band centered at 534 nm, which indicated that compound L could be used for the naked-eye detection of Cu2+. From UV titration, the detection limit was about 1.9 x 10(-8) mol x L(-1). Test strips based on L were fabricated, and these strips could act as a convenient and efficient Cu2+ test kit. The binding ratio of the L-Cu2+ complex was 1:1 according to the Job's plot and high-resolution mass spectrometry (HRMS) experiments. Moreover, upon addition of 1 equiv. of EDTA to a mixture of L and Cu2+ in DMSO solution, the colour changed from pink to almost colourless, indicating that EDTA displaced the receptor L in coordinating with Cu2+. Therefore, L can be classified as a reversible sensor for Cu2+. In addition, when Hg2+ was added to the L-Cu2+ complex, a visible change of the solution from pink to colourless was observed, while other metal ions did not cause this change. Thus, the L-Cu2+ complex could also be used for the naked-eye recognition of Hg2+, with a detection limit of about 2.9 x 10(-1) mol x L(-1) according to the UV titration. Consequently, this procedural sensor L can be used for the sequential naked-eye recognition of Cu2+ and Hg2+.
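Detection limits of the kind quoted are routinely derived from titration data via the 3-sigma criterion, LOD = 3·s(blank)/slope. A generic sketch of that arithmetic (the blank readings and calibration slope below are hypothetical, not the paper's data):

```python
import numpy as np

def detection_limit(blank_signals, slope):
    """3-sigma detection limit: LOD = 3 * s(blank) / calibration slope."""
    return 3.0 * np.std(blank_signals, ddof=1) / slope

# Hypothetical blank absorbance readings and calibration slope (L mol^-1).
blanks = [0.100, 0.110, 0.090]
slope = 1.5e6
print(detection_limit(blanks, slope))  # 3 * 0.01 / 1.5e6 = 2e-8 mol/L
```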

  14. Interaction between Phonological and Semantic Representations: Time Matters

    ERIC Educational Resources Information Center

    Chen, Qi; Mirman, Daniel

    2015-01-01

    Computational modeling and eye-tracking were used to investigate how phonological and semantic information interact to influence the time course of spoken word recognition. We extended our recent models (Chen & Mirman, 2012; Mirman, Britt, & Chen, 2013) to account for new evidence that competition among phonological neighbors influences…

  15. Censorship or Selection?

    ERIC Educational Resources Information Center

    Kelly, Patricia P., Ed.; Small, Robert C., Jr., Ed.

    1986-01-01

    Representing the views of persons from a variety of fields including parents, educators, authors, librarians, and publishers, the papers in this journal issue explore the fine line between censorship (with an eye toward silencing ideas) and selection (with the recognition that just as literature can enlighten it can also degrade). Following an…

  16. Latent variable method for automatic adaptation to background states in motor imagery BCI

    NASA Astrophysics Data System (ADS)

    Dagaev, Nikolay; Volkova, Ksenia; Ossadtchi, Alexei

    2018-02-01

Objective. Brain-computer interface (BCI) systems are known to be vulnerable to variability in the background states of a user. Usually, no detailed information on these states is available even during the training stage, so there is a need for a method capable of taking background states into account in an unsupervised way. Approach. We propose a latent variable method based on a probabilistic model with a discrete latent variable. To estimate the model's parameters, we suggest using the expectation maximization algorithm. The proposed method is aimed at assessing characteristics of background states without any corresponding data labeling. In the context of an asynchronous motor imagery paradigm, we applied this method to real data from twelve able-bodied subjects, with open/closed eyes serving as background states. Main results. We found that the latent variable method improved classification of target states compared to the baseline method (in seven of twelve subjects). In addition, we found that our method was also capable of recognizing background states (in six of twelve subjects). Significance. Without any supervised information on background states, the latent variable method provides a way to improve classification in BCI by taking background states into account at the training stage and then by weighting decisions on target states by the posterior probabilities of background states at the prediction stage.
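A probabilistic model with a discrete latent variable fitted by expectation maximization can be illustrated with a minimal 1-D Gaussian mixture, where each sample's latent state plays the role of a background state. This is a generic EM sketch, not the authors' BCI model or features:

```python
import numpy as np

def em_discrete_latent(x, n_states=2, n_iter=50):
    """Fit a 1-D Gaussian mixture (one discrete latent state per sample)
    with expectation maximization; returns parameters and posteriors."""
    mu = np.percentile(x, np.linspace(10, 90, n_states))  # spread-out init
    var = np.full(n_states, x.var())
    pi = np.full(n_states, 1.0 / n_states)
    for _ in range(n_iter):
        # E-step: posterior probability of each latent state per sample.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from the posteriors.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi, resp
```

At prediction time, `resp` corresponds to the posterior weights over background states that the abstract uses to weight decisions on target states.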

  17. Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition

    PubMed Central

    Wang, Xin; Deng, Zhongliang

    2017-01-01

To recognize indoor scenarios, we extract image features for detecting objects; however, computers can make unexpected mistakes. After visualizing histogram of oriented gradient (HOG) features, we find that the world through the eyes of a computer is indeed different from what human eyes see, which helps researchers understand why a computer makes errors. The visualization also shows that HOG features capture rich texture information, but a large amount of background interference is introduced as well. To enhance the robustness of the HOG feature, we propose an improved method for suppressing this background interference. On the basis of the original HOG feature, we introduce principal component analysis (PCA) to extract the principal components of the image colour information. A new hybrid feature descriptor, named HOG-PCA (HOGP), is then constructed by fusing these two features. Finally, HOGP is compared to the state-of-the-art HOG feature descriptor in four scenes under different illumination conditions. In simulation and experimental tests, qualitative and quantitative assessments indicate that the visualized HOGP features are close to what human observation yields, which is better than the original HOG feature for object detection. Furthermore, the runtime of the proposed algorithm is hardly increased in comparison to the classic HOG feature. PMID:28677635
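The fusion described, a gradient-orientation descriptor concatenated with principal components of the colour information, can be sketched as follows. This is a simplified stand-in (a single global orientation histogram instead of the real block-wise HOG, and explained-variance ratios of the colour covariance instead of the authors' exact PCA fusion), not the HOGP implementation:

```python
import numpy as np

def hog_like(gray, n_bins=9):
    """Very simplified global orientation histogram (stand-in for HOG)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientations
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-12)

def color_pca(rgb, n_components=2):
    """Explained-variance ratios of the image's colour distribution."""
    pixels = rgb.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order] / (vals.sum() + 1e-12)

def hogp(rgb):
    """Hybrid descriptor: orientation histogram + colour PCA summary."""
    gray = rgb.mean(axis=2)
    return np.concatenate([hog_like(gray), color_pca(rgb)])

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (32, 32, 3))
features = hogp(img)
print(features.shape)  # (11,): 9 orientation bins + 2 colour components
```

The colour components act as a low-dimensional summary that damps background texture the orientation histogram alone would pick up.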

  18. The perception and identification of facial emotions in individuals with autism spectrum disorders using the Let's Face It! Emotion Skills Battery.

    PubMed

    Tanaka, James W; Wolf, Julie M; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin S; South, Mikle; McPartland, James C; Kaiser, Martha D; Schultz, Robert T

    2012-12-01

    Although impaired social-emotional ability is a hallmark of autism spectrum disorder (ASD), the perceptual skills and mediating strategies contributing to the social deficits of autism are not well understood. A perceptual skill that is fundamental to effective social communication is the ability to accurately perceive and interpret facial emotions. To evaluate the expression processing of participants with ASD, we designed the Let's Face It! Emotion Skills Battery (LFI! Battery), a computer-based assessment composed of three subscales measuring verbal and perceptual skills implicated in the recognition of facial emotions. We administered the LFI! Battery to groups of participants with ASD and typically developing control (TDC) participants that were matched for age and IQ. On the Name Game labeling task, participants with ASD (N = 68) performed on par with TDC individuals (N = 66) in their ability to name the facial emotions of happy, sad, disgust and surprise and were only impaired in their ability to identify the angry expression. On the Matchmaker Expression task that measures the recognition of facial emotions across different facial identities, the ASD participants (N = 66) performed reliably worse than TDC participants (N = 67) on the emotions of happy, sad, disgust, frighten and angry. In the Parts-Wholes test of perceptual strategies of expression, the TDC participants (N = 67) displayed more holistic encoding for the eyes than the mouths in expressive faces whereas ASD participants (N = 66) exhibited the reverse pattern of holistic recognition for the mouth and analytic recognition of the eyes. In summary, findings from the LFI! Battery show that participants with ASD were able to label the basic facial emotions (with the exception of angry expression) on par with age- and IQ-matched TDC participants. 
However, participants with ASD were impaired in their ability to generalize facial emotions across different identities and showed a tendency to recognize the mouth feature holistically and the eyes as isolated parts. © 2012 The Authors. Journal of Child Psychology and Psychiatry © 2012 Association for Child and Adolescent Mental Health.

  19. 20 CFR 408.1220 - How do we pay Federally administered State recognition payments?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition Payments § 408.1220.... SSA will not administer State recognition payments in amounts less than $1 per month. Hence, recognition payment amounts of less than $1 will be raised to a dollar. ...

  20. 20 CFR 408.1220 - How do we pay Federally administered State recognition payments?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition Payments § 408.1220.... SSA will not administer State recognition payments in amounts less than $1 per month. Hence, recognition payment amounts of less than $1 will be raised to a dollar. ...

  1. 20 CFR 408.1220 - How do we pay Federally administered State recognition payments?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition Payments § 408.1220.... SSA will not administer State recognition payments in amounts less than $1 per month. Hence, recognition payment amounts of less than $1 will be raised to a dollar. ...

  2. 20 CFR 408.1220 - How do we pay Federally administered State recognition payments?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition Payments § 408.1220.... SSA will not administer State recognition payments in amounts less than $1 per month. Hence, recognition payment amounts of less than $1 will be raised to a dollar. ...

  3. 20 CFR 408.1220 - How do we pay Federally administered State recognition payments?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition Payments § 408.1220.... SSA will not administer State recognition payments in amounts less than $1 per month. Hence, recognition payment amounts of less than $1 will be raised to a dollar. ...

  4. [Neural mechanisms of facial recognition].

    PubMed

    Nagai, Chiyoko

    2007-01-01

We review recent research on the neural mechanisms of facial recognition in the light of three aspects: facial discrimination and identification, recognition of facial expressions, and face perception in itself. First, it has been demonstrated that the fusiform gyrus plays a main role in facial discrimination and identification. However, whether the FFA (fusiform face area) is truly specialized for facial processing is controversial; some researchers argue that the FFA is related to 'becoming an expert' with certain kinds of visual objects, including faces. The neural mechanisms of prosopagnosia are deeply relevant to this issue. Second, the amygdala appears to be closely involved in the recognition of facial expressions, especially fear. The amygdala, connected with the superior temporal sulcus and the orbitofrontal cortex, appears to modulate these cortical functions. The amygdala and the superior temporal sulcus are also involved in gaze recognition, which explains why a patient with bilateral amygdala damage could fail to recognize only the fear expression: information from the eyes is necessary for fear recognition. Finally, even a newborn infant can recognize a face as a face, which is congruent with the innate hypothesis of facial recognition. Some researchers speculate that the neural basis of such face perception is a subcortical network comprising the amygdala, the superior colliculus, and the pulvinar. This network may be related to the covert recognition that prosopagnosic patients retain.

  5. From state dissociation to status dissociatus.

    PubMed

    Antelmi, Elena; Ferri, Raffaele; Iranzo, Alex; Arnulf, Isabelle; Dauvilliers, Yves; Bhatia, Kailash P; Liguori, Rocco; Schenck, Carlos H; Plazzi, Giuseppe

    2016-08-01

The states of being are conventionally defined by the simultaneous occurrence of behavioral, neurophysiological and autonomic descriptors. State dissociation disorders are due to the intrusion of features typical of a different state into an ongoing state. Disorders related to these conditions are classified according to the ongoing main state and comprise: 1) dissociation from prevailing wakefulness, as seen in hypnagogic or hypnopompic hallucinations, automatic behaviors, sleep drunkenness, cataplexy and sleep paralysis; 2) dissociation from rapid eye movement (REM) sleep, as seen in REM sleep behavior disorder and lucid dreaming; and 3) dissociation from NREM sleep, as seen in the disorders of arousal. The extreme expression of state dissociation is characterized by the asynchronous occurrence of the various components of the different states, which prevents the recognition of any state of being. This condition has been named status dissociatus. According to the underlying disorders/diseases and their severity, within status dissociatus we may recognize disorders in which such extreme dissociation occurs only at night or intermittently (i.e., autoimmune encephalopathies, narcolepsy type 1 and IgLON5 parasomnia), and others in which it occurs nearly continuously, with complete loss of any conventionally defined state of being and of the circadian pattern (agrypnia excitata). Here, we provide a comprehensive review of all diseases/disorders associated with state dissociation and status dissociatus and propose a critical classification of this complex scenario. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor: Sensing ability, TD-DFT calculations and its application as an efficient solid state sensor.

    PubMed

    Lan, Linxin; Li, Tianduo; Wei, Tao; Pang, He; Sun, Tao; Wang, Enhua; Liu, Haixia; Niu, Qingfen

    2018-03-15

An oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor, 3T-2CN, was reported. Sensor 3T-2CN showed both naked-eye recognition and a ratiometric fluorescence response toward CN- with excellent selectivity and high sensitivity. The sensing mechanism, based on nucleophilic attack of CN- on the vinyl C=C bond, was confirmed by optical measurements, 1H NMR titration, FT-IR spectra and DFT/TD-DFT calculations. Moreover, the detection limit was calculated to be 0.19 μM, much lower than the maximum permitted concentration in drinking water (1.9 μM). Importantly, test strips (filter paper and TLC plates) containing 3T-2CN were fabricated, which could act as a practical and efficient solid-state optical sensor for CN- in field measurements. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. MR-Compatible Integrated Eye Tracking System

    DTIC Science & Technology

    2016-03-10

Report Title: MR-Compatible Integrated Eye Tracking System. This instrumentation grant was used to purchase a state-of-the-art, high-resolution video eye tracker that can be used to... Keywords: video eye tracking, eye movements, visual search; camouflage-breaking.

  8. Feasibility of utilizing a commercial eye tracker to assess electronic health record use during patient simulation.

    PubMed

    Gold, Jeffrey Allen; Stephenson, Laurel E; Gorsuch, Adriel; Parthasarathy, Keshav; Mohan, Vishnu

    2016-09-01

    Numerous reports describe unintended consequences of electronic health record implementation. Having previously described physicians' failures to recognize patient safety issues within our electronic health record simulation environment, we now report on our use of eye and screen-tracking technology to understand factors associated with poor error recognition during an intensive care unit-based electronic health record simulation. We linked performance on the simulation to standard eye and screen-tracking readouts including number of fixations, saccades, mouse clicks and screens visited. In addition, we developed an overall Composite Eye Tracking score which measured when, where and how often each safety item was viewed. For 39 participants, the Composite Eye Tracking score correlated with performance on the simulation (p = 0.004). Overall, the improved performance was associated with a pattern of rapid scanning of data manifested by increased number of screens visited (p = 0.001), mouse clicks (p = 0.03) and saccades (p = 0.004). Eye tracking can be successfully integrated into electronic health record-based simulation and provides a surrogate measure of cognitive decision making and electronic health record usability. © The Author(s) 2015.
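The record's central quantitative claim is a correlation between the Composite Eye Tracking score and simulation performance. As an illustration only (the abstract does not state which correlation statistic the authors used, and the data below are invented), a rank correlation between two per-participant series could be computed like this:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for tie-free data: the Pearson
    correlation of the ranks of x and y."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical per-participant values: composite eye-tracking scores
# and simulation performance scores (both invented for illustration).
composite = np.array([0.2, 0.5, 0.4, 0.9, 0.7])
performance = np.array([55.0, 70.0, 62.0, 95.0, 81.0])
rho = spearman(composite, performance)
```

Here the two series are perfectly rank-concordant, so `rho` is 1.0; real data would of course give an intermediate value.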

  9. The "Eye Avoidance" Hypothesis of Autism Face Processing.

    PubMed

    Tanaka, James W; Sung, Andrew

    2016-05-01

Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: (1) the holistic hypothesis, (2) the local perceptual bias hypothesis and (3) the eye avoidance hypothesis. A review of the literature indicates that, contrary to the holistic hypothesis, there is little evidence to suggest that individuals with autism do not perceive faces holistically. The local perceptual bias account also fails to explain the selective advantage that ASD individuals demonstrate for objects and their selective disadvantage for faces. The eye avoidance hypothesis provides a plausible explanation of face recognition deficits whereby individuals with ASD avoid the eye region because it is perceived as socially threatening. Direct eye contact elicits an increased physiological response, as indicated by heightened skin conductance and amygdala activity. For individuals with autism, avoiding the eyes is an adaptive strategy; however, this approach interferes with the ability to process facial cues of identity, expression and intention, exacerbating the social challenges for persons with ASD.

  10. The Role of Face Familiarity in Eye Tracking of Faces by Individuals with Autism Spectrum Disorders

    PubMed Central

    Dawson, Geraldine; Webb, Sara; Murias, Michael; Munson, Jeffrey; Panagiotides, Heracles; Aylward, Elizabeth

    2010-01-01

    It has been shown that individuals with autism spectrum disorders (ASD) demonstrate normal activation in the fusiform gyrus when viewing familiar, but not unfamiliar faces. The current study utilized eye tracking to investigate patterns of attention underlying familiar versus unfamiliar face processing in ASD. Eye movements of 18 typically developing participants and 17 individuals with ASD were recorded while passively viewing three face categories: unfamiliar non-repeating faces, a repeating highly familiar face, and a repeating previously unfamiliar face. Results suggest that individuals with ASD do not exhibit more normative gaze patterns when viewing familiar faces. A second task assessed facial recognition accuracy and response time for familiar and novel faces. The groups did not differ on accuracy or reaction times. PMID:18306030

  11. The “eye avoidance” hypothesis of autism face processing

    PubMed Central

    Tanaka, James W.; Sung, Andrew

    2013-01-01

Although a growing body of research indicates that children with autism spectrum disorder (ASD) exhibit selective deficits in their ability to recognize facial identities and expressions, the source of their face impairment is, as yet, undetermined. In this paper, we consider three possible accounts of the autism face deficit: 1) the holistic hypothesis, 2) the local perceptual bias hypothesis and 3) the eye avoidance hypothesis. A review of the literature indicates that, contrary to the holistic hypothesis, there is little evidence to suggest that individuals with autism do not perceive faces holistically. The local perceptual bias account also fails to explain the selective advantage that ASD individuals demonstrate for objects and their selective disadvantage for faces. The eye avoidance hypothesis provides a plausible explanation of face recognition deficits whereby individuals with ASD avoid the eye region because it is perceived as socially threatening. Direct eye contact elicits a heightened physiological response, as indicated by heightened skin conductance and increased amygdala activity. For individuals with autism, avoiding the eyes is an adaptive strategy; however, this “eye avoidance” strategy interferes with the ability to decode facial information about identity, expression, and intentions, exacerbating the social challenges for persons with ASD. PMID:24150885

  12. Eye center localization and gaze gesture recognition for human-computer interaction.

    PubMed

    Zhang, Wenhao; Smith, Melvyn L; Smith, Lyndon N; Farooq, Abdul

    2016-03-01

This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, thus allowing a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to boost the human-computer interaction (HCI) experience. This modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove the strong gradients from eyebrows, eye corners, and shadows, which defeat most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database, and it outperforms all compared algorithms in localization accuracy. Further tests on the Extended Yale Face Database B and self-collected data have shown this algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has demonstrated outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.
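The paper combines isophote and gradient features; as a hedged sketch only (not the authors' algorithm), the gradient half of such an approach can be illustrated with a simple gradient-voting objective in the spirit of means-of-gradients eye localizers: the estimated center is the point whose displacement vectors toward strong-gradient pixels best align with the image gradients there.

```python
import numpy as np

def locate_eye_center(gray):
    """Estimate an eye (pupil) center by gradient voting: score every
    candidate point by how well unit displacement vectors to strong-
    gradient pixels align with the unit image gradients at those pixels.
    A simplified illustration, not the paper's isophote+gradient method."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > np.percentile(mag, 90)          # keep only strong gradients
    ys, xs = np.nonzero(mask)
    gxn, gyn = gx[mask] / mag[mask], gy[mask] / mag[mask]

    h, w = gray.shape
    best, best_score = (0, 0), -1.0
    for cy in range(h):
        for cx in range(w):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy)
            ok = norm > 0
            # dot product of unit displacement and unit gradient
            dots = (dx[ok] * gxn[ok] + dy[ok] * gyn[ok]) / norm[ok]
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best_score:
                best_score, best = score, (cx, cy)
    return best

# Demo on a synthetic "pupil": a dark disk on a bright background.
img = np.full((40, 40), 200.0)
yy, xx = np.mgrid[0:40, 0:40]
img[(xx - 20) ** 2 + (yy - 18) ** 2 <= 36] = 30.0
center = locate_eye_center(img)   # should land near (20, 18)
```

The brute-force scan over all candidate centers is for clarity; practical implementations restrict candidates to a coarse-to-fine search, much as the paper's global-to-regional scheme suggests.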

  13. Candidate Socioemotional Remediation Program for Individuals with Intellectual Disability

    ERIC Educational Resources Information Center

    Glaser, Bronwyn; Lothe, Amelie; Chabloz, Melanie; Dukes, Daniel; Pasca, Catherine; Redoute, Jerome; Eliez, Stephan

    2012-01-01

    The authors developed a computerized program, Vis-a-Vis (VAV), to improve socioemotional functioning and working memory in children with developmental disabilities. The authors subsequently tested whether participants showed signs of improving the targeted skills. VAV is composed of three modules: Focus on the Eyes, Emotion Recognition and…

  14. Cultural Competence and School Counselor Training: A Collective Case Study

    ERIC Educational Resources Information Center

    Nelson, Judith A.; Bustamante, Rebecca; Sawyer, Cheryl; Sloan, Eva D.

    2015-01-01

    This collective case study investigated the experiences of bilingual counselors-in-training who assessed school-wide cultural competence in public schools. Analysis and interpretation of data resulted in the identification of 5 themes: eye-opening experiences, recognition of strengths, the role of school leaders, road maps for change, and…

  15. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  16. Identifying People with Soft-Biometrics at Fleet Week

    DTIC Science & Technology

    2013-03-01

onboard sensors. This included a Color Camera: located in the right eye, Octavia stored 640x480 RGB images at ~4 Hz from a Point Grey Firefly camera. ... Face Detection: The Fleet Week experiments demonstrated the potential of soft biometrics for recognition, but all of the existing algorithms currently

  17. Distributional Effects of Word Frequency on Eye Fixation Durations

    ERIC Educational Resources Information Center

    Staub, Adrian; White, Sarah J.; Drieghe, Denis; Hollway, Elizabeth C.; Rayner, Keith

    2010-01-01

    Recent research using word recognition paradigms, such as lexical decision and speeded pronunciation, has investigated how a range of variables affect the location and shape of response time distributions, using both parametric and non-parametric techniques. In this article, we explore the distributional effects of a word frequency manipulation on…

  18. Individual Differences in Inhibitory Control Relate to Bilingual Spoken Word Processing

    ERIC Educational Resources Information Center

    Mercier, Julie; Pivneva, Irina; Titone, Debra

    2014-01-01

    We investigated whether individual differences in inhibitory control relate to bilingual spoken word recognition. While their eye movements were monitored, native English and native French English-French bilinguals listened to English words (e.g., "field") and looked at pictures corresponding to the target, a within-language competitor…

  19. The selective disruption of spatial working memory by eye movements

    PubMed Central

    Postle, Bradley R.; Idzikowski, Christopher; Sala, Sergio Della; Logie, Robert H.; Baddeley, Alan D.

    2005-01-01

    In the late 1970s/early 1980s, Baddeley and colleagues conducted a series of experiments investigating the role of eye movements in visual working memory. Although only described briefly in a book (Baddeley, 1986), these studies have influenced a remarkable number of empirical and theoretical developments in fields ranging from experimental psychology to human neuropsychology to nonhuman primate electrophysiology. This paper presents, in full detail, three critical studies from this series, together with a recently performed study that includes a level of eye movement measurement and control that was not available for the older studies. Together, the results demonstrate several facts about the sensitivity of visuospatial working memory to eye movements. First, it is eye movement control, not movement per se, that produces the disruptive effects. Second, these effects are limited to working memory for locations, and do not generalize to visual working memory for shapes. Third, they can be isolated to the storage/maintenance components of working memory (e.g., to the delay period of the delayed-recognition task). These facts have important implications for models of visual working memory. PMID:16556561

  20. Controlling a human-computer interface system with a novel classification method that uses electrooculography signals.

    PubMed

    Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng

    2013-08-01

    Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
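The classification idea described in this record (signed deflections on two channels map to eight directions plus blinking) can be sketched minimally. This is not the authors' algorithm: the 50 µV threshold, the sign conventions (positive = right/up), and the single-peak feature are illustrative assumptions, and blink detection is omitted.

```python
import numpy as np

def classify_eog_epoch(horiz, vert, thresh=50.0):
    """Label one eye-movement epoch with one of the eight directions
    (up, down, left, right and the four diagonals) or "rest", using the
    signed peak deflection of the horizontal and vertical EOG channels.

    The threshold and sign conventions are illustrative assumptions;
    blink handling is omitted in this sketch.
    """
    h = horiz[np.argmax(np.abs(horiz))]   # signed peak, horizontal channel
    v = vert[np.argmax(np.abs(vert))]     # signed peak, vertical channel
    lateral = "right" if h > thresh else "left" if h < -thresh else ""
    vertical = "up" if v > thresh else "down" if v < -thresh else ""
    if not lateral and not vertical:
        return "rest"
    if lateral and vertical:
        return f"{vertical}-{lateral}"    # e.g. "down-right"
    return lateral or vertical
```

For example, an epoch whose horizontal channel peaks at +80 µV while the vertical channel dips to -90 µV would be labeled "down-right".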

  1. The effect of emotionally valenced eye region images on visuocortical processing of surprised faces.

    PubMed

    Li, Shuaixia; Li, Ping; Wang, Wei; Zhu, Xiangru; Luo, Wenbo

    2018-05-01

    In this study, we presented pictorial representations of happy, neutral, and fearful expressions projected in the eye regions to determine whether the eye region alone is sufficient to produce a context effect. Participants were asked to judge the valence of surprised faces that had been preceded by a picture of an eye region. Behavioral results showed that affective ratings of surprised faces were context dependent. Prime-related ERPs with presentation of happy eyes elicited a larger P1 than those for neutral and fearful eyes, likely due to the recognition advantage provided by a happy expression. Target-related ERPs showed that surprised faces in the context of fearful and happy eyes elicited dramatically larger C1 than those in the neutral context, which reflected the modulation by predictions during the earliest stages of face processing. There were larger N170 with neutral and fearful eye contexts compared to the happy context, suggesting faces were being integrated with contextual threat information. The P3 component exhibited enhanced brain activity in response to faces preceded by happy and fearful eyes compared with neutral eyes, indicating motivated attention processing may be involved at this stage. Altogether, these results indicate for the first time that the influence of isolated eye regions on the perception of surprised faces involves preferential processing at the early stages and elaborate processing at the late stages. Moreover, higher cognitive processes such as predictions and attention can modulate face processing from the earliest stages in a top-down manner. © 2017 Society for Psychophysiological Research.

  2. It Takes Two–Skilled Recognition of Objects Engages Lateral Areas in Both Hemispheres

    PubMed Central

    Bilalić, Merim; Kiesel, Andrea; Pohl, Carsten; Erb, Michael; Grodd, Wolfgang

    2011-01-01

Our object recognition abilities, a direct product of our experience with objects, are fine-tuned to perfection. Left temporal and parietal lateral areas along the dorsal, action-related stream, as well as left infero-temporal areas along the ventral, object-related stream, are engaged in object recognition. Here we show that expertise modulates the activity of dorsal areas in the recognition of man-made objects with clearly specified functions. Expert chess players were faster than chess novices in identifying chess objects and their functional relations. The experts' advantage was domain-specific, as there were no differences between the groups in a control task featuring geometrical shapes. The pattern of eye movements supported the notion that experts' extensive knowledge about domain objects and their functions enabled superior recognition even when experts were not directly fixating the objects of interest. Functional magnetic resonance imaging (fMRI) implicated only areas along the dorsal stream in chess-specific object recognition. Besides the commonly involved left temporal and parietal lateral brain areas, we found that only in experts were homologous areas in the right hemisphere also engaged in chess-specific object recognition. Based on these results, we discuss whether skilled object recognition involves not only a more efficient version of the processes found in non-skilled recognition, but also qualitatively different cognitive processes that engage additional brain areas. PMID:21283683

  3. Accommodation and the Visual Regulation of Refractive State in Marmosets

    PubMed Central

    Troilo, David; Totonelly, Kristen; Harb, Elise

    2009-01-01

Purpose: To determine the effects of imposed anisometropic retinal defocus on accommodation, ocular growth, and refractive state changes in marmosets. Methods: Marmosets were raised with extended-wear soft contact lenses for an average duration of 10 wks beginning at an average age of 76 d. Experimental animals wore either a positive or negative contact lens over one eye and a plano lens or no lens over the other. Another group wore binocular lenses of equal magnitude but opposite sign. Untreated marmosets served as controls, and three wore plano lenses monocularly. Cycloplegic refractive state, corneal curvature, and vitreous chamber depth were measured before, during, and after the period of lens wear. To investigate the accommodative response, the effective refractive state was measured through each anisometropic condition at varying accommodative stimulus positions using an infrared refractometer. Results: Eye growth and refractive state were significantly correlated with the sign and power of the contact lens worn. The eyes of marmosets reared with monocular negative-power lenses had longer vitreous chambers and were myopic relative to contralateral control eyes (p<0.01). Monocular positive-power lenses produced a significant reduction in vitreous chamber depth and hyperopia relative to the contralateral control eyes (p<0.05). In marmosets reared binocularly with lenses of opposite sign, we found larger interocular differences in vitreous chamber depths and refractive state (p<0.001). Accommodation influences the defocus experienced through the lenses; however, the mean effective refractive state was still hyperopic in the negative-lens-treated eyes and myopic in the positive-lens-treated eyes. Conclusions: Imposed anisometropia effectively alters marmoset eye growth and refractive state to compensate for the imposed defocus. The response to imposed hyperopia is larger and faster than the response to imposed myopia. The pattern of accommodation under imposed anisometropia produces effective refractive states that are consistent with the observed changes in eye growth and refractive state. PMID:19104464

  4. Are adolescents with anorexia nervosa better at reading minds?

    PubMed

    Laghi, Fiorenzo; Pompili, Sara; Zanna, Valeria; Castiglioni, Maria Chiara; Criscuolo, Michela; Chianello, Ilenia; Baumgartner, Emma; Baiocco, Roberto

    2015-01-01

The present study aimed to investigate mindreading abilities in female adolescent patients with anorexia nervosa (AN) compared to healthy controls (HCs), analysing differences by the emotional valence of facial stimuli. The Eating Disorder Inventory (evaluating psychological traits associated with eating disorders) and the children's version of the Reading the Mind in the Eyes Test (evaluating mindreading abilities) were administered to 40 Italian female patients (mean age = 14.93; SD = 1.48) with a restrictive diagnosis of AN and 40 healthy females (mean age = 14.88; SD = 0.56). No significant differences between the AN group and HCs were found for the Eyes Total score. Even when analysing the emotional valence of the items, the two groups were equally successful in the facial recognition of positive, negative and neutral emotions. A significant difference was revealed for the percentage of correct responses on item 10 and item 15, where the AN group was less able than HCs to correctly identify the target descriptor (Not believing) over the foils. A significant difference was also revealed in discriminating affective emotions versus cognitive states; only for affective, but not cognitive, states were patients with AN found to perform better than controls on the mindreading task. Our study highlights the importance of analysing and discriminating different valences of facial stimuli when assessing mindreading abilities in adolescents with AN, so that more precise and specific treatment approaches can be developed for female adolescents with AN.

  5. Effect of UV-A and UV-B irradiation on the metabolic profile of aqueous humor in rabbits analyzed by 1H NMR spectroscopy.

    PubMed

    Tessem, May-Britt; Bathen, Tone F; Cejková, Jitka; Midelfart, Anna

    2005-03-01

This study was conducted to investigate metabolic changes in aqueous humor from rabbit eyes exposed to either UV-A or UV-B radiation, using 1H nuclear magnetic resonance (NMR) spectroscopy and unsupervised pattern recognition methods. Both eyes of adult albino rabbits were irradiated with UV-A (366 nm, 0.589 J/cm²) or UV-B (312 nm, 1.667 J/cm²) radiation for 8 minutes, once a day for 5 days. Three days after the last irradiation, samples of aqueous humor were aspirated, and the metabolic profiles were analyzed with 1H NMR spectroscopy. The metabolite concentrations in the exposed and control materials were statistically analyzed and compared with multivariate methods and one-way ANOVA. UV-B radiation caused statistically significant alterations of betaine, glucose, ascorbate, valine, isoleucine, and formate in the rabbit aqueous humor. Using principal component analysis, the UV-B-irradiated samples were clearly separated from the UV-A-irradiated samples and the control group. No significant metabolic changes were detected in UV-A-irradiated samples. This study demonstrates the potential of using unsupervised pattern recognition methods to extract valuable metabolic information from complex 1H NMR spectra. UV-B irradiation of rabbit eyes led to significant metabolic changes in the aqueous humor detected 3 days after the last exposure.
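The unsupervised pattern recognition step this record describes (principal component analysis separating UV-B samples from controls) can be sketched with plain numpy. The data below are simulated stand-ins for a metabolite concentration table, not the study's measurements:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Scores of the samples (rows of X) on the top principal
    components, computed via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Simulated metabolite tables (6 samples x 6 metabolites per group):
# the "UV-B" effect is mimicked by a uniform concentration shift.
rng = np.random.default_rng(0)
control = rng.normal(0.0, 0.1, (6, 6))
uvb = rng.normal(1.0, 0.1, (6, 6))
scores = pca_scores(np.vstack([control, uvb]))
pc1 = scores[:, 0]   # the two groups separate along the first component
```

With a group difference this large relative to the noise, the first component captures the exposure effect and the two groups occupy disjoint ranges of PC1, mirroring the clear separation the study reports.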

  6. Readability of product ingredient labels can be improved by simple means: an experimental study.

    PubMed

    Yazar, Kerem; Seimyr, Gustaf Ö; Novak, Jiri A; White, Ian R; Lidén, Carola

    2014-10-01

    Ingredient labels on products used by consumers and workers every day, such as food, cosmetics, and detergents, can be difficult to read and understand. To assess whether typographical design and ordering of ingredients can improve the readability of product ingredient labels. The study subjects (n = 16) had to search for two target ingredients in 30 cosmetic product labels and three alternative formats of each. Outcome measures were completion time (reading speed), recognition rate, eye movements, task load and subjective rating when the reading of ingredient labels was assessed by video recording, an eye tracking device, and questionnaires. The completion time was significantly lower (p < 0.001) when subjects were reading all alternative formats than when they were reading the original. The recognition rate was generally high, and improved slightly with the alternative formats. The eye movement measures confirmed that the alternative formats were easier to read than the original product labels. Mental and physical demand and effort were significantly lower (p < 0.036) and experience rating was higher (p < 0.042) for the alternative formats. There were also differences between the alternative formats. Simple adjustments in the design of product ingredient labels would significantly improve their readability, benefiting the many allergic individuals and others in their daily struggle to avoid harmful or unwanted exposure. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. More than meets the eye: the role of self-identity in decoding complex emotional states.

    PubMed

    Stevenson, Michael T; Soto, José A; Adams, Reginald B

    2012-10-01

Folk wisdom asserts that "the eyes are the window to the soul," and empirical science corroborates a prominent role for the eyes in the communication of emotion. Herein we examine variation in the ability to "read" the eyes of others as a function of social group membership, employing a widely used emotional state decoding task: "Reading the Mind in the Eyes." This task has documented impaired emotional state decoding across racial groups, with cross-race performance on par with that previously reported as a function of autism spectrum disorders. The present study extended this work by examining the moderating role of social identity in such impairments. For college students more highly identified with their university, cross-race performance differences were not found for judgments of "same-school" eyes but remained for "rival-school" eyes. These findings suggest that impaired emotional state decoding across groups may thus be more amenable to remediation than previously realized.

  8. Synthesis and anion recognition studies of novel bis (4-hydroxycoumarin) methane azo dyes

    NASA Astrophysics Data System (ADS)

    Panitsiri, Amorn; Tongkhan, Sukanya; Radchatawedchakoon, Widchaya; Sakee, Uthai

    2016-03-01

Four new bis(4-hydroxycoumarin)methane azo dyes were synthesized by the condensation of 4-hydroxycoumarin with four different azo salicylaldehydes, and their structures were characterized by FT-IR, 1H NMR, 13C NMR and HRMS. Anion binding ability in dimethyl sulfoxide (DMSO) solutions with tetrabutylammonium (TBA) salts (F-, Cl-, Br-, I-, AcO- and H2PO4-) was investigated by the naked eye, as well as by UV-visible spectroscopy. The sensors show selective recognition towards fluoride and acetate. The binding affinity of the sensors with fluoride and acetate was calculated using UV-visible spectroscopy.

  9. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
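The face-space modeling this record discusses can be illustrated with the classic eigenface recipe associated with this line of work: PCA over flattened face images, followed by nearest-neighbour matching in the low-dimensional projection. The tiny 4-pixel "images" below are synthetic stand-ins for real face data:

```python
import numpy as np

def train_eigenfaces(faces, k=2):
    """PCA on flattened training images: returns the mean face and the
    top-k eigenfaces (principal components of the image vectors)."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, eigenfaces):
    """Coordinates of a face in the low-dimensional 'face space'."""
    return eigenfaces @ (face - mean)

def identify(face, mean, eigenfaces, gallery):
    """Nearest-neighbour identification in face space; gallery maps
    name -> stored face-space coordinates."""
    w = project(face, mean, eigenfaces)
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - w))

# Toy demo: 4-pixel "images" of two identities with small pixel noise.
rng = np.random.default_rng(1)
a = np.array([1.0, 0.0, 0.0, 1.0])
b = np.array([0.0, 1.0, 1.0, 0.0])
train = np.array([a + rng.normal(0, 0.05, 4) for _ in range(3)]
                 + [b + rng.normal(0, 0.05, 4) for _ in range(3)])
mean, eigenfaces = train_eigenfaces(train, k=2)
gallery = {"A": project(a, mean, eigenfaces),
           "B": project(b, mean, eigenfaces)}
```

A noisy probe of either identity then lands nearer its own gallery point than the other's, which is the whole recognition mechanism; real systems differ only in image size, the number of components kept, and the distance threshold used to reject non-faces.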

  10. Pyrazolone as a recognition site: Rhodamine 6G-based fluorescent probe for the selective recognition of Fe3+ in acetonitrile-aqueous solution.

    PubMed

    Parihar, Sanjay; Boricha, Vinod P; Jadeja, R N

    2015-03-01

Two novel Rhodamine-pyrazolone-based colorimetric off-on fluorescent chemosensors for Fe(3+) ions were designed and synthesized using pyrazolone as the recognition moiety and Rhodamine 6G as the signalling moiety. The photophysical properties and Fe(3+)-binding properties of sensors L(1) and L(2) in acetonitrile-aqueous solution were also investigated. Both sensors exhibit a remarkable 'turn-on' response toward Fe(3+), attributed to 1:2 complex formation between Fe(3+) and L(1)/L(2). The fluorescent and colorimetric response to Fe(3+) can be detected by the naked eye, providing a facile method for the visual detection of Fe(3+). Copyright © 2014 John Wiley & Sons, Ltd.

  11. Implicit prosody mining based on the human eye image capture technology

    NASA Astrophysics Data System (ADS)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    Eye-tracker technology has become one of the main methods of analyzing recognition issues in human-computer interaction, and human eye image capture is the key problem in eye tracking. Building on this work, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of Implicit Prosody mining based on human eye image capture: parameters are extracted from images of the human eyes during reading to control and drive prosody generation in speech synthesis, establishing a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea: obtaining the gaze duration of the eyes during reading from captured eye images, and synchronously controlling this duration and the pronunciation duration in speech synthesis. Eye movement during reading is a comprehensive, multi-factor interactive process involving fixations, saccades, and regressions. Therefore, which information to extract from the eye images must be considered, and the gaze regularities of the eyes must be obtained as references for modeling. Based on an analysis of three current eye-movement control models and the characteristics of Implicit Prosody reading, the relative independence between the text speech-processing system and the eye-movement control system is discussed. It is shown that, under the same text-familiarity condition, the gaze duration of the eyes during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented to replace previous methods of machine learning and probability forecasting, to obtain readers' real internal reading rhythm, and to synthesize speech with personalized rhythm. This research enriches the forms of human-computer interaction and has practical significance and application prospects for assisted speech interaction for the disabled. Experiments show that Implicit Prosody mining based on human eye image capture gives the synthesized speech more flexible expression.

  12. An eye movement corpus study of the age-of-acquisition effect.

    PubMed

    Dirix, Nicolas; Duyck, Wouter

    2017-12-01

    In the present study, we investigated the effects of word-level age of acquisition (AoA) on natural reading. Previous studies, using multiple language modalities, showed that earlier-learned words are recognized, read, spoken, and responded to faster than words learned later in life. Until now, in visual word recognition the experimental materials were limited to single-word or sentence studies. We analyzed the data of the Ghent Eye-tracking Corpus (GECO; Cop, Dirix, Drieghe, & Duyck, in press), an eyetracking corpus of participants reading an entire novel, resulting in the first eye movement megastudy of AoA effects in natural reading. We found that the ages at which specific words were learned indeed influenced reading times, above other important (correlated) lexical variables, such as word frequency and length. Shorter fixations for earlier-learned words were consistently found throughout the reading process, in both early (single-fixation durations, first-fixation durations, gaze durations) and late (total reading times) measures. Implications for theoretical accounts of AoA effects and eye movements are discussed.

  13. Portrait of an Asian stalk-eyed fly

    NASA Astrophysics Data System (ADS)

    de La Motte, Ingrid; Burkhardt, Dietrich

    1983-09-01

    Diopsid flies have eyes set on stalks which are in some cases so long that the distance between the eyes exceeds the body length. These conspicuous structures have given rise to much speculation about their adaptive value, but there are very few actual observations by which to judge these hypotheses. Cyrtodiopsis whitei Curran lives in the tropical rainforest of Malaysia. We describe a number of aspects of its morphology and biology, some functional properties of the eye, and the ritualized fights between males, by which harems are acquired. The evolutionary significance of the eyestalks is discussed: they represent structures subjected to a double selection pressure. They are an adaptation by which a sensory system is better matched to the special problems encountered in a densely structured habitat (in that the field of view is extended and the ability to estimate distance and size and to identify objects at a large distance is improved), and they also act as a key stimulus for species recognition and as a releaser for intraspecific behaviour.

  14. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a unique pattern that can be used for biometric recognition. Texture in an image can be identified using texture analysis; one such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used were Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five wavelets was performed and a comparative analysis was conducted, from which conclusions were drawn. Several steps were required. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using normalized Euclidean distance. The comparative analysis is based on the recognition-rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, tests were conducted using energy compaction for all five wavelet types. The highest recognition rate was achieved using Haar; for coefficient cutting at C(i) < 0.1, the Haar wavelet also had the highest percentage, so the retention rate, i.e. the fraction of significant coefficients retained, is lower for Haar than for the other wavelet types (db5, coif3, sym4, and bior2.4).
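    The energy-based pipeline this record describes (wavelet decomposition, subband energies as features, normalized Euclidean distance for matching) can be sketched as follows. This is a minimal numpy illustration under stated assumptions: a hand-rolled Haar decomposition stands in for a wavelet library, and the function names and the three-level choice are illustrative, not the paper's actual implementation.

    ```python
    import numpy as np

    def haar2d(img):
        """One level of a 2-D Haar decomposition: average/difference pairs of
        rows, then pairs of columns (assumes even dimensions)."""
        a = (img[0::2] + img[1::2]) / 2.0   # row averages
        d = (img[0::2] - img[1::2]) / 2.0   # row differences
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # approximation
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
        return ll, (lh, hl, hh)

    def energy_features(img, levels=3):
        """Energy of each detail subband plus the final approximation,
        normalized so the feature vector sums to 1."""
        feats, ll = [], np.asarray(img, dtype=float)
        for _ in range(levels):
            ll, details = haar2d(ll)
            feats.extend(float(np.sum(d ** 2)) for d in details)
        feats.append(float(np.sum(ll ** 2)))
        v = np.array(feats)
        return v / v.sum()

    def normalized_euclidean(a, b):
        """Distance used to match a probe template against references."""
        return float(np.linalg.norm(a - b) / np.sqrt(len(a)))
    ```

    Matching a segmented, equalized probe iris against stored reference templates would then amount to picking the reference with the smallest normalized Euclidean distance.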

  15. Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.

    PubMed

    Shillcock, R; Ellison, T M; Monaghan, P

    2000-10-01

    Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.

  16. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    PubMed

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. 20 CFR 408.1235 - How does the State transfer funds to SSA to administer its recognition payment program?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... administer its recognition payment program? 408.1235 Section 408.1235 Employees' Benefits SOCIAL SECURITY... United States Department of the Treasury. (c) State audit. Any State entering into an agreement with SSA which provides for Federal administration of the State's recognition payments has the right to an audit...

  18. 20 CFR 408.1235 - How does the State transfer funds to SSA to administer its recognition payment program?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... administer its recognition payment program? 408.1235 Section 408.1235 Employees' Benefits SOCIAL SECURITY... United States Department of the Treasury. (c) State audit. Any State entering into an agreement with SSA which provides for Federal administration of the State's recognition payments has the right to an audit...

  19. The Relationship between Child Maltreatment and Emotion Recognition

    PubMed Central

    Koizumi, Michiko; Takagishi, Haruto

    2014-01-01

    Child abuse and neglect affect the development of social cognition in children and inhibit social adjustment. The purpose of this study was to compare the ability to identify the emotional states of others between abused and non-abused children. The participants, 129 children (44 abused and 85 non-abused children), completed a children’s version of the Reading the Mind in the Eyes Test (RMET). Results showed that the mean accuracy rate on the RMET for abused children was significantly lower than the rate of the non-abused children. In addition, the accuracy rates for positive emotion items (e.g., hoping, interested, happy) were significantly lower for the abused children, but negative emotion and neutral items were not different across the groups. This study found a negative relationship between child abuse and the ability to understand others’ emotions, especially positive emotions. PMID:24465891

  20. Humor in the eye tracker: attention capture and distraction from context cues.

    PubMed

    Strick, Madelijn; Holland, Rob W; Van Baaren, Rick; Van Knippenberg, Ad

    2010-01-01

    The humor effect refers to a robust finding in memory research that humorous information is easily recalled, at the expense of recall of nonhumorous information that was encoded in close temporal proximity. Previous research suggests that memory retrieval processes underlie this effect. That is, free recall is biased toward humorous information, which interferes with the retrieval of nonhumorous information. The present research tested an additional explanation that has not been specifically addressed before: Humor receives enhanced attention during information encoding, which decreases attention for context information. Participants observed humorous, nonhumorous positive, and nonhumorous neutral texts paired with novel consumer brands, while their eye movements were recorded using eye-tracker technology. The results confirmed that humor receives prolonged attention relative to both positive and neutral nonhumorous information. This enhanced attention correlated with impaired brand recognition.

  1. Can gaze-contingent mirror-feedback from unfamiliar faces alter self-recognition?

    PubMed

    Estudillo, Alejandro J; Bindemann, Markus

    2017-05-01

    This study focuses on learning of the self, by examining how human observers update internal representations of their own face. For this purpose, we present a novel gaze-contingent paradigm, in which an onscreen face mimics observers' own eye-gaze behaviour (in the congruent condition), moves its eyes in different directions to that of the observers (incongruent condition), or remains static and unresponsive (neutral condition). Across three experiments, the mimicry of the onscreen face did not affect observers' perceptual self-representations. However, this paradigm influenced observers' reports of their own face. This effect was such that observers felt the onscreen face to be their own and that, if the onscreen gaze had moved on its own accord, observers expected their own eyes to move too. The theoretical implications of these findings are discussed.

  2. Shy Children Are Less Sensitive to Some Cues to Facial Recognition

    ERIC Educational Resources Information Center

    Brunet, Paul M.; Mondloch, Catherine J.; Schmidt, Louis A.

    2010-01-01

    Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about…

  3. Individual Differences in Language Ability Are Related to Variation in Word Recognition, Not Speech Perception: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    McMurray, Bob; Munson, Cheyenne; Tomblin, J. Bruce

    2014-01-01

    Purpose: The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Method: Adolescents with a range of language abilities (N = 74, including…

  4. Processing Trade-Offs in the Reading of Dutch Derived Words

    ERIC Educational Resources Information Center

    Kuperman, Victor; Bertram, Raymond; Baayen, R. Harald

    2010-01-01

    This eye-tracking study explores visual recognition of Dutch suffixed words (e.g., "plaats+ing" "placing") embedded in sentential contexts, and provides new evidence on the interplay between storage and computation in morphological processing. We show that suffix length crucially moderates the use of morphological properties. In words with shorter…

  5. Relationships between Lexical Processing Speed, Language Skills, and Autistic Traits in Children

    ERIC Educational Resources Information Center

    Abrigo, Erin

    2012-01-01

    According to current models of spoken word recognition listeners understand speech as it unfolds over time. Eye tracking provides a non-invasive, on-line method to monitor attention, providing insight into the processing of spoken language. In the current project a spoken lexical processing assessment (LPA) confirmed current theories of spoken…

  6. Semantic Size and Contextual Congruency Effects during Reading: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    Wei, Wei; Cook, Anne E.

    2016-01-01

    Recent lexical decision studies have produced conflicting evidence about whether an object's semantic size influences word recognition. The present study examined this variable in online reading. Target words representing small and large objects were embedded in sentence contexts that were either neutral, congruent, or incongruent with respect to…

  7. The Role of Executive Control of Attention and Selective Encoding for Preschoolers' Learning

    ERIC Educational Resources Information Center

    Roderer, Thomas; Krebs, Saskia; Schmid, Corinne; Roebers, Claudia M.

    2012-01-01

    Selectivity in encoding, aspects of attentional control and their contribution to learning performance were explored in a sample of preschoolers. While the children are performing a learning task, their encoding of relevant and attention towards irrelevant information was recorded through an eye-tracking device. Recognition of target items was…

  8. Evolution of reticular pseudodrusen.

    PubMed

    Sarks, John; Arnold, Jennifer; Ho, I-Van; Sarks, Shirley; Killingsworth, Murray

    2011-07-01

    To report observations relating to the clinical recognition and possible basis of reticular pseudodrusen (RPD). This retrospective study reports the evolution of RPD in 166 patients who had follow-up of over 1 year using multiple imaging techniques. Mean age when first seen was 73.3 years and the mean period of observation was 4.9 years (range 1-18 years). Associated macular changes were recorded. RPD were first identified in the upper fundus as a reticular network, which then became less obvious, developing a diffuse yellowish appearance. RPD also faded around choroidal neovascularisation (CNV). RPD therefore could be transient but the pattern often remained visible outside the macula or nasal to the discs. Manifestations of age-related macular degeneration (AMD) were present in nearly all eyes and there was a particularly high association with CNV (52.1%). In one clinicopathological case abnormal material was found in the subretinal space. The prevalence of RPD may be underestimated because their recognition depends upon the imaging method used, the area of fundus examined and the confusion with typical drusen. The pathology of one eye suggests that RPD may correspond to material in the subretinal space.

  9. Using Eye Movement Analysis to Study Auditory Effects on Visual Memory Recall

    PubMed Central

    Marandi, Ramtin Zargari; Sabzpoushan, Seyed Hojjat

    2014-01-01

    Recent studies in affective computing are focused on sensing human cognitive context using biosignals. In this study, electrooculography (EOG) was utilized to investigate memory recall accessibility via eye movement patterns. Twelve subjects participated in our experiment, in which pictures from four categories were presented. Each category contained nine pictures, of which three were presented twice and the rest were presented once only. Each picture presentation took five seconds with an adjoining three-second interval. Similarly, this task was performed with new pictures together with related sounds. The task was free viewing, and participants were not informed about the task's purpose. Using pattern recognition techniques, participants’ EOG signals in response to repeated and non-repeated pictures were classified for the with-sound and without-sound stages. The method was validated with eight different participants. The recognition rate in the “with sound” stage was significantly reduced compared with the “without sound” stage. The results demonstrate that the familiarity of visual-auditory stimuli can be detected from EOG signals and that auditory input potentially improves the visual recall process. PMID:25436085
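    The classification step in this record (EOG signals classified as responses to repeated vs. non-repeated pictures) can be illustrated with a toy sketch. The assumptions here are mine, not the paper's: synthetic noise traces stand in for EOG recordings, three summary statistics stand in for real eye-movement features, and a nearest-centroid rule stands in for whatever classifier the authors used.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def eog_features(trace):
        """Three illustrative features of a 1-D trace: variance, mean absolute
        velocity (first difference), and peak amplitude."""
        vel = np.diff(trace)
        return np.array([trace.var(), np.abs(vel).mean(), np.abs(trace).max()])

    def fit_centroids(X, y):
        """Nearest-centroid classifier: one mean feature vector per class."""
        return {int(c): X[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(centroids, x):
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

    # Synthetic stand-in data: "repeated picture" traces (class 1) are modelled
    # with lower amplitude than "non-repeated" traces (class 0).
    X = np.array([eog_features(amp * rng.standard_normal(500))
                  for amp in [1.0] * 20 + [3.0] * 20])
    y = np.array([1] * 20 + [0] * 20)

    centroids = fit_centroids(X, y)
    accuracy = np.mean([predict(centroids, x) == t for x, t in zip(X, y)])
    ```

    Comparing such an accuracy score between with-sound and without-sound recordings mirrors the paper's comparison of recognition rates across the two stages.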

  10. A combination of green tea extract and l-theanine improves memory and attention in subjects with mild cognitive impairment: a double-blind placebo-controlled study.

    PubMed

    Park, Sang-Ki; Jung, In-Chul; Lee, Won Kyung; Lee, Young Sun; Park, Hyoung Kook; Go, Hyo Jin; Kim, Kiseong; Lim, Nam Kyoo; Hong, Jin Tae; Ly, Sun Yung; Rho, Seok Seon

    2011-04-01

    A combination of green tea extract and l-theanine (LGNC-07) has been reported to have beneficial effects on cognition in animal studies. In this randomized, double-blind, placebo-controlled study, the effect of LGNC-07 on memory and attention in subjects with mild cognitive impairment (MCI) was investigated. Ninety-one MCI subjects whose Mini Mental State Examination-K (MMSE-K) scores were between 21 and 26 and who were in either stage 2 or 3 on the Global Deterioration Scale were enrolled in this study. The treatment group (13 men, 32 women; 57.58 ± 9.45 years) took 1,680 mg of LGNC-07, and the placebo group (12 men, 34 women; 56.28 ± 9.92 years) received an equivalent amount of maltodextrin and lactose for 16 weeks. Neuropsychological tests (Rey-Kim memory test and Stroop color-word test) and electroencephalography were conducted to evaluate the effect of LGNC-07 on memory and attention. Further analyses were stratified by baseline severity to evaluate treatment response on the degree of impairment (MMSE-K 21-23 and 24-26). LGNC-07 led to improvements in memory by marginally increasing delayed recognition in the Rey-Kim memory test (P = .0572). Stratified analyses showed that LGNC-07 improved memory and selective attention by significantly increasing the Rey-Kim memory quotient and word reading in the subjects with MMSE-K scores of 21-23 (LGNC-07, n = 11; placebo, n = 9). Electroencephalograms were recorded in 24 randomly selected subjects hourly for 3 hours in eye-open, eye-closed, and reading states after a single dose of LGNC-07 (LGNC-07, n = 12; placebo, n = 12). Brain theta waves, an indicator of cognitive alertness, were increased significantly in the temporal, frontal, parietal, and occipital areas after 3 hours in the eye-open and reading states. Therefore, this study suggests that LGNC-07 has potential as an intervention for cognitive improvement.

  11. Rethinking dry eye disease: a perspective on clinical implications.

    PubMed

    Bron, Anthony J; Tomlinson, Alan; Foulks, Gary N; Pepose, Jay S; Baudouin, Christophe; Geerling, Gerd; Nichols, Kelly K; Lemp, Michael A

    2014-04-01

    Publication of the DEWS report in 2007 established the state of the science of dry eye disease (DED). Since that time, new evidence suggests that a rethinking of traditional concepts of dry eye disease is in order. Specifically, new evidence on the epidemiology of the disease, as well as strategies for diagnosis, has changed the understanding of DED, which is a heterogeneous disease associated with considerable variability in presentation. These advances, along with implications for clinical care, are summarized herein. The most widely used signs of DED are poorly correlated with each other and with symptoms. While symptoms are thought to be characteristic of DED, recent studies have shown that less than 60% of subjects with other objective evidence of DED are symptomatic. Thus the use of symptoms alone in diagnosis will likely result in missing a significant percentage of DED patients, particularly those with early/mild disease. This could have considerable impact on patients undergoing cataract or refractive surgery, as patients with DED have less than optimal visual results. The most widely used objective signs for diagnosing DED all show greater variability between eyes, and in the same eye over time, compared with normal subjects. This variability is thought to be a manifestation of tear film instability, which results in rapid breakup of the tear film between blinks and is an identifier of patients with DED. This feature emphasizes the bilateral nature of the disease in most subjects not suffering from unilateral lid or other unilateral destabilizing surface disorders. Instability of the composition of the tears also occurs in dry eye disease and shows the same variance between eyes. Finally, elevated tear osmolarity has been reported to be a global marker, present in both subtypes of the disease: aqueous-deficient dry eye and evaporative dry eye. Clinically, osmolarity has been shown to be the best single metric for diagnosis of DED and is directly related to increasing severity of disease. Clinical examination and other assessments differentiate which subtype of disease is present. With effective treatment, the tear osmolarity returns to normal, and its variability between eyes and with time disappears. Other promising markers include objective measures of visual deficits, proinflammatory molecular markers and other molecular markers specific to each disease subtype, and panels of tear proteins. As yet, however, no single protein or panel of markers has been shown to discriminate between the major forms of DED. With the advent of new tests and technology, improved endpoints for clinical trials may be established, which in turn may allow new therapeutic agents to emerge in the foreseeable future. Accurate recognition of disease is now possible, and successful management of DED appears to be within our grasp for a majority of our patients. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. 38 CFR 52.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Adult Day Health Care in State Homes § 52.20 Application for recognition based on certification. To apply for recognition and certification of a State home for adult day health care, a State...

  13. 38 CFR 52.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Adult Day Health Care in State Homes § 52.20 Application for recognition based on certification. To apply for recognition and certification of a State home for adult day health care, a State...

  14. 38 CFR 52.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR ADULT DAY HEALTH CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Adult Day Health Care in State Homes § 52.20 Application for recognition based on certification. To apply for recognition and certification of a State home for adult day health care, a State...

  15. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send a...

  16. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send a...

  17. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send a...

  18. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send a...

  19. 38 CFR 51.20 - Application for recognition based on certification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... VETERANS AFFAIRS (CONTINUED) PER DIEM FOR NURSING HOME CARE OF VETERANS IN STATE HOMES Obtaining Per Diem for Nursing Home Care in State Homes § 51.20 Application for recognition based on certification. To apply for recognition and certification of a State home for nursing home care, a State must: (a) Send a...

  20. The memory state heuristic: A formal model based on repeated recognition judgments.

    PubMed

    Castela, Marta; Erdfelder, Edgar

    2017-02-01

    The recognition heuristic (RH) theory predicts that, in comparative judgment tasks, if one object is recognized and the other is not, the recognized one is chosen. The memory-state heuristic (MSH) extends the RH by assuming that choices are not affected by recognition judgments per se, but by the memory states underlying these judgments (i.e., recognition certainty, uncertainty, or rejection certainty). Specifically, the larger the discrepancy between memory states, the larger the probability of choosing the object in the higher state. The typical RH paradigm does not allow estimation of the underlying memory states because it is unknown whether the objects were previously experienced or not. Therefore, we extended the paradigm by repeating the recognition task twice. In line with high threshold models of recognition, we assumed that inconsistent recognition judgments result from uncertainty whereas consistent judgments most likely result from memory certainty. In Experiment 1, we fitted 2 nested multinomial models to the data: an MSH model that formalizes the relation between memory states and binary choices explicitly and an approximate model that ignores the (unlikely) possibility of consistent guesses. Both models provided converging results. As predicted, reliance on recognition increased with the discrepancy in the underlying memory states. In Experiment 2, we replicated these results and found support for choice consistency predictions of the MSH. Additionally, recognition and choice latencies were in agreement with the MSH in both experiments. Finally, we validated critical parameters of our MSH model through a cross-validation method and a third experiment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Different foveal schisis patterns in each retinal layer in eyes with hereditary juvenile retinoschisis evaluated by en-face optical coherence tomography.

    PubMed

    Yoshida-Uemura, Tomoyo; Katagiri, Satoshi; Yokoi, Tadashi; Nishina, Sachiko; Azuma, Noriyuki

    2017-04-01

    To analyze the structures of schisis in eyes with hereditary juvenile retinoschisis using en-face optical coherence tomography (OCT) imaging. In this retrospective observational study, we reviewed the medical records of patients with hereditary juvenile retinoschisis who underwent comprehensive ophthalmic examinations including swept-source OCT. OCT images were obtained from 16 eyes of nine boys (mean age ± standard deviation, 10.6 ± 4.0 years). The horizontal OCT images at the fovea showed inner nuclear layer (INL) schisis in one eye (6.3 %), ganglion cell layer (GCL) and INL schisis in 12 eyes (75.0 %), INL and outer plexiform layer (OPL) schisis in two eyes (12.5 %), and GCL, INL, and OPL schisis in one eye (6.3 %). En-face OCT images showed characteristic schisis patterns in each retinal layer, which were represented by multiple hyporeflective holes in the parafoveal region in the GCL, a spoke-like pattern in the foveal region, a reticular pattern in the parafoveal region in the INL, and multiple hyporeflective polygonal cavities with partitions in the OPL. Our results using en-face OCT imaging clarified different patterns of schisis formation among the GCL, INL, and OPL, which lead to further recognition of structure in hereditary juvenile retinoschisis.

  2. REM sleep and emotional face memory in typically-developing children and children with autism.

    PubMed

    Tessier, Sophie; Lambert, Andréane; Scherzer, Peter; Jemel, Boutheina; Godbout, Roger

    2015-09-01

The relationship between REM sleep and memory was assessed in 13 neurotypical children and 13 children with autism spectrum disorder (ASD). A neutral/positive/negative face recognition task was administered the evening before (learning and immediate recognition) and the morning after (delayed recognition) sleep. The number of rapid eye movements (REMs) and beta and theta EEG activity over the visual areas were measured during REM sleep. Compared to neurotypical children, children with ASD showed more theta activity and longer reaction times (RTs) for correct responses in delayed recognition of neutral faces. Both groups showed a positive correlation between sleep and performance, but different patterns emerged: in neurotypical children, accuracy for recalling neutral faces and overall RT improvement overnight were correlated with EEG activity and REMs; in children with ASD, overnight RT improvement for positive and negative faces correlated with theta and beta activity, respectively. These results suggest that neurotypical children and children with ASD use different sleep-related brain networks to process faces. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Social anxiety and detection of facial untrustworthiness: Spatio-temporal oculomotor profiles.

    PubMed

    Gutiérrez-García, Aida; Calvo, Manuel G; Eysenck, Michael W

    2018-04-01

    Cognitive models posit that social anxiety is associated with biased attention to and interpretation of ambiguous social cues as threatening. We investigated attentional bias (selective early fixation on the eye region) to account for the tendency to distrust ambiguous smiling faces with non-happy eyes (interpretative bias). Eye movements and fixations were recorded while observers viewed video-clips displaying dynamic facial expressions. Low (LSA) and high (HSA) socially anxious undergraduates with clinical levels of anxiety judged expressers' trustworthiness. Social anxiety was unrelated to trustworthiness ratings for faces with congruent happy eyes and a smile, and for neutral expressions. However, social anxiety was associated with reduced trustworthiness rating for faces with an ambiguous smile, when the eyes slightly changed to neutrality, surprise, fear, or anger. Importantly, HSA observers looked earlier and longer at the eye region, whereas LSA observers preferentially looked at the smiling mouth region. This attentional bias in social anxiety generalizes to all the facial expressions, while the interpretative bias is specific for ambiguous faces. Such biases are adaptive, as they facilitate an early detection of expressive incongruences and the recognition of untrustworthy expressers (e.g., with fake smiles), with no false alarms when judging truly happy or neutral faces. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. [Dissociated nystagmus in side gaze. Major symptoms in the diagnosis of an internuclear ophthalmoplegia].

    PubMed

    Neugebauer, P; Neugebauer, A; Fricke, J; Michel, O

    2004-07-01

    A prerequisite for a qualified analysis of nystagmus is the recognition of uncommon forms of this condition. In internuclear ophthalmoplegia (INO), a dissociated nystagmus in side gaze is typical. This is accompanied by limited medial excursion of the adducted eye together with a dissociated nystagmus, which is stronger in the abducting fellow eye. This motility disturbance stems from a lesion in the medial longitudinal fasciculus running in the brain stem between the sixth and the third nerve nuclei. The lesion is often due to multiple sclerosis, but can also be ischemic, traumatic, neoplastic or inflammatory (e.g. HIV infection).

  5. A Comparative Study of Standardized Infinity Reference and Average Reference for EEG of Three Typical Brain States

    PubMed Central

    Zheng, Gaoxing; Qi, Xiaoying; Li, Yuzhu; Zhang, Wei; Yu, Yuguo

    2018-01-01

    The choice of different reference electrodes plays an important role in deciphering the functional meaning of electroencephalography (EEG) signals. In recent years, the infinity zero reference using the reference electrode standard technique (REST) has been increasingly applied, while the average reference (AR) was generally advocated as the best available reference option in previous classical EEG studies. Here, we designed EEG experiments and performed a direct comparison between the influences of REST and AR on EEG-revealed brain activity features for three typical brain behavior states (eyes-closed, eyes-open and music-listening). The analysis results revealed the following observations: (1) there is no significant difference in the alpha-wave-blocking effect during the eyes-open state compared with the eyes-closed state for both REST and AR references; (2) there was clear frontal EEG asymmetry during the resting state, and the degree of lateralization under REST was higher than that under AR; (3) the global brain functional connectivity density (FCD) and local FCD have higher values for REST than for AR under different behavior states; and (4) the value of the small-world network characteristic in the eyes-closed state is significantly (in full, alpha, beta and gamma frequency bands) higher than that in the eyes-open state, and the small-world effect under the REST reference is higher than that under AR. In addition, the music-listening state has a higher small-world network effect than the eyes-closed state. The above results suggest that typical EEG features might be more clearly presented by applying the REST reference than by applying AR when using a 64-channel recording. PMID:29593490
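
    Of the two references compared, the average reference (AR) is the simpler: at every time sample, the mean over all channels is subtracted from each channel. A minimal sketch (the REST infinity reference, by contrast, requires a head model / lead-field matrix and is not reproduced here):

```python
# Minimal sketch of average-reference (AR) re-referencing for multichannel
# EEG: at every time sample, subtract the mean over all channels from each
# channel, so the channel mean at every sample becomes (numerically) zero.

def average_reference(eeg):
    """eeg: list of channels, each a list of samples (channels x time)."""
    n_ch = len(eeg)
    n_t = len(eeg[0])
    rereferenced = [list(ch) for ch in eeg]  # copy, leave input intact
    for t in range(n_t):
        mean_t = sum(ch[t] for ch in eeg) / n_ch
        for ch in rereferenced:
            ch[t] -= mean_t
    return rereferenced

# Tiny 3-channel, 2-sample example: each column of the result sums to zero.
data = [[1.0, 2.0], [3.0, 4.0], [5.0, 9.0]]
ar = average_reference(data)
```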

  6. A comparison study of visually stimulated brain-computer and eye-tracking interfaces

    NASA Astrophysics Data System (ADS)

    Suefusa, Kaori; Tanaka, Toshihisa

    2017-06-01

    Objective. Brain-computer interfacing (BCI) based on visual stimuli detects the target on a screen on which a user is focusing. The detection of the gazing target can be achieved by tracking gaze positions with a video camera, which is called eye-tracking or eye-tracking interfaces (ETIs). The two types of interface have been developed in different communities. Thus, little work on a comprehensive comparison between these two types of interface has been reported. This paper quantitatively compares the performance of these two interfaces on the same experimental platform. Specifically, our study is focused on two major paradigms of BCI and ETI: steady-state visual evoked potential-based BCIs and dwelling-based ETIs. Approach. Recognition accuracy and the information transfer rate were measured by giving subjects the task of selecting one of four targets by gazing at it. The targets were displayed in three different sizes (with sides 20, 40 and 60 mm long) to evaluate performance with respect to the target size. Main results. The experimental results showed that the BCI was comparable to the ETI in terms of accuracy and the information transfer rate. In particular, when the size of a target was relatively small, the BCI had significantly better performance than the ETI. Significance. The results on which of the two interfaces works better in different situations would not only enable us to improve the design of the interfaces but would also allow for the appropriate choice of interface based on the situation. Specifically, one can choose an interface based on the size of the screen that displays the targets.

  7. 20 CFR 408.1235 - How does the State transfer funds to SSA to administer its recognition payment program?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... ADMINISTRATION SPECIAL BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition... end of each calendar month, SSA will provide the State with a statement showing, cumulatively, the... charged by SSA to administer such recognition payments; the State's total liability; and the end-of-month...

  8. 20 CFR 408.1235 - How does the State transfer funds to SSA to administer its recognition payment program?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... ADMINISTRATION SPECIAL BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition... end of each calendar month, SSA will provide the State with a statement showing, cumulatively, the... charged by SSA to administer such recognition payments; the State's total liability; and the end-of-month...

  9. 20 CFR 408.1235 - How does the State transfer funds to SSA to administer its recognition payment program?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... ADMINISTRATION SPECIAL BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition... end of each calendar month, SSA will provide the State with a statement showing, cumulatively, the... charged by SSA to administer such recognition payments; the State's total liability; and the end-of-month...

  10. Representation, Classification and Information Fusion for Robust and Efficient Multimodal Human States Recognition

    ERIC Educational Resources Information Center

    Li, Ming

    2013-01-01

The goal of this work is to enhance the robustness and efficiency of the multimodal human states recognition task. Human states recognition can be considered as a joint term for identifying/verifying various kinds of human-related states, such as biometric identity, language spoken, age, gender, emotion, intoxication level, physical activity, vocal…

  11. Patient Awareness of Cataract and Age-related Macular Degeneration among the Korean Elderly: A Population-based Study.

    PubMed

    Lee, Hankil; Jang, Yong Jung; Lee, Hyung Keun; Kang, Hye Young

    2017-12-01

    Age-related eye disease is often considered part of natural aging. Lack of awareness of eye conditions can result in missed treatment. We investigated the rates of awareness of cataract and age-related macular degeneration, the most common age-related eye-diseases, and the associated factors among elderly Koreans. We identified 7,403 study subjects (≥40 years old) with cataract or age-related macular degeneration based on ophthalmic examination results during the 5th Korean National Health and Nutrition Examination Survey conducted between 2010 and 2012. We assessed whether patients were aware of their eye condition based on a previous diagnosis by a physician. The average awareness rate over the 3-year study period was 23.69% in subjects with cataract and 1.45% in subjects with age-related macular degeneration. Logistic regression analysis showed that patients with cataract were more likely to recognize their condition if they had myopia (odds ratio, 2.08), hyperopia (odds ratio, 1.33), family history of eye disease (odds ratio, 1.44), or a past eye examination (odds ratio, 4.07-29.10). The presence of diabetes mellitus was also a significant predictor of patient awareness of cataract (odds ratio, 1.88). Poor patient recognition of eye disease among the Korean elderly highlights the seriousness of this potential public health problem in our aging society. Pre-existing eye-related conditions and diabetes were significant predictors of awareness; therefore, patients in frequent contact with their doctors have a greater chance of detecting eye disease. © 2017 The Korean Ophthalmological Society

  12. Bayesian microsaccade detection

    PubMed Central

    Mihali, Andra; van Opheusden, Bas; Ma, Wei Ji

    2017-01-01

    Microsaccades are high-velocity fixational eye movements, with special roles in perception and cognition. The default microsaccade detection method is to determine when the smoothed eye velocity exceeds a threshold. We have developed a new method, Bayesian microsaccade detection (BMD), which performs inference based on a simple statistical model of eye positions. In this model, a hidden state variable changes between drift and microsaccade states at random times. The eye position is a biased random walk with different velocity distributions for each state. BMD generates samples from the posterior probability distribution over the eye state time series given the eye position time series. Applied to simulated data, BMD recovers the “true” microsaccades with fewer errors than alternative algorithms, especially at high noise. Applied to EyeLink eye tracker data, BMD detects almost all the microsaccades detected by the default method, but also apparent microsaccades embedded in high noise—although these can also be interpreted as false positives. Next we apply the algorithms to data collected with a Dual Purkinje Image eye tracker, whose higher precision justifies defining the inferred microsaccades as ground truth. When we add artificial measurement noise, the inferences of all algorithms degrade; however, at noise levels comparable to EyeLink data, BMD recovers the “true” microsaccades with 54% fewer errors than the default algorithm. Though unsuitable for online detection, BMD has other advantages: It returns probabilities rather than binary judgments, and it can be straightforwardly adapted as the generative model is refined. We make our algorithm available as a software package. PMID:28114483
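
    The "default method" that BMD is compared against — flagging samples where smoothed eye velocity exceeds a threshold — can be sketched in a few lines. Parameter values below (threshold, smoothing window, sample rate) are illustrative, not taken from the paper:

```python
# Sketch of a velocity-threshold microsaccade detector: smooth the eye
# position trace with a moving average, differentiate to get velocity, and
# flag samples whose speed exceeds a threshold. Parameters are illustrative.

def detect_microsaccades(x, dt=0.001, threshold=10.0, window=3):
    """Return indices of samples whose smoothed speed exceeds `threshold`.

    x: 1-D eye position samples (degrees); dt: sample interval (s).
    """
    # Moving-average smoothing.
    half = window // 2
    smoothed = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        smoothed.append(sum(x[lo:hi]) / (hi - lo))
    # Central-difference velocity (deg/s), thresholded on absolute speed.
    hits = []
    for i in range(1, len(smoothed) - 1):
        v = (smoothed[i + 1] - smoothed[i - 1]) / (2 * dt)
        if abs(v) > threshold:
            hits.append(i)
    return hits

# A flat trace with one abrupt 0.5-degree step: only samples around the
# step are flagged.
trace = [0.0] * 50 + [0.5] * 50
events = detect_microsaccades(trace)
```

    The weakness BMD addresses is visible here: with measurement noise added to `trace`, spurious threshold crossings appear, whereas BMD's generative model absorbs noise into the drift state.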

  13. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion.

    PubMed

    Otero-Millan, Jorge; Roberts, Dale C; Lasker, Adrian; Zee, David S; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines.
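
    The template-matching idea behind this torsion measurement can be reduced to a 1-D toy: sample iris intensity along a circle, then find the circular shift that best aligns the current profile with a reference profile; the shift maps directly to a torsion angle. This sketch omits the 2-D OpenCV template matching and the eyelid-occlusion handling of the actual method; all values are made up:

```python
# Toy 1-D analogue of torsion measurement by iris template matching: find
# the circular shift of the current intensity profile that minimizes the
# summed squared difference against a reference profile. With n samples
# around the circle, a shift of k samples corresponds to k * (360 / n)
# degrees of torsion.

def torsion_shift(reference, current):
    """Return the circular shift (in samples) best aligning the profiles."""
    n = len(reference)
    best_shift, best_err = 0, float("inf")
    for shift in range(n):
        err = sum((reference[i] - current[(i + shift) % n]) ** 2
                  for i in range(n))
        if err < best_err:
            best_shift, best_err = shift, err
    return best_shift

# A profile rotated forward by 3 samples is recovered exactly.
ref = [0, 1, 4, 9, 2, 7, 5, 3]
rotated = ref[-3:] + ref[:-3]   # circular rotation by 3 samples
shift = torsion_shift(ref, rotated)
```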

  14. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion

    PubMed Central

    Otero-Millan, Jorge; Roberts, Dale C.; Lasker, Adrian; Zee, David S.; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines. PMID:26587699

  15. Quantifying Novice and Expert Differences in Visual Diagnostic Reasoning in Veterinary Pathology Using Eye-Tracking Technology.

    PubMed

    Warren, Amy L; Donnon, Tyrone L; Wagg, Catherine R; Priest, Heather; Fernandez, Nicole J

    2018-01-18

Visual diagnostic reasoning is the cognitive process by which pathologists reach a diagnosis based on visual stimuli (cytologic, histopathologic, or gross imagery). Currently, there is little to no literature examining visual reasoning in veterinary pathology. The objective of the study was to use eye tracking to establish baseline quantitative and qualitative differences between the visual reasoning processes of novice and expert veterinary pathologists viewing cytology specimens. Novice and expert participants were each shown 10 cytology images and asked to formulate a diagnosis while wearing eye-tracking equipment (10 slides) and while concurrently verbalizing their thought processes using the think-aloud protocol (5 slides). Compared to novices, experts demonstrated significantly higher diagnostic accuracy (p<.017), shorter time to diagnosis (p<.017), and a higher percentage of time spent viewing areas of diagnostic interest (p<.017). Experts elicited more key diagnostic features in the think-aloud protocol and had more efficient patterns of eye movement. These findings suggest that experts' fast time to diagnosis, efficient eye-movement patterns, and preference for viewing areas of interest reflect system 1 (pattern-recognition) reasoning and script-inductive knowledge structures, with system 2 (analytic) reasoning used to verify the diagnosis.

  16. Increasing the information acquisition volume in iris recognition systems.

    PubMed

    Barwick, D Shane

    2008-09-10

    A significant hurdle for the widespread adoption of iris recognition in security applications is that the typically small imaging volume for eye placement results in systems that are not user friendly. Separable cubic phase plates at the lens pupil have been shown to ameliorate this disadvantage by increasing the depth of field. However, these phase masks have limitations on how efficiently they can capture the information-bearing spatial frequencies in iris images. The performance gains in information acquisition that can be achieved by more general, nonseparable phase masks is demonstrated. A detailed design method is presented, and simulations using representative designs allow for performance comparisons.

  17. Disparities in eye care utilization among the United States adults with visual impairment: findings from the behavioral risk factor surveillance system 2006-2009.

    PubMed

    Chou, Chiu-Fang; Barker, Lawrence E; Crews, John E; Primo, Susan A; Zhang, Xinzhi; Elliott, Amanda F; McKeever Bullard, Kai; Geiss, Linda S; Saaddine, Jinan B

    2012-12-01

    To estimate the prevalence of annual eye care among visually impaired United States residents aged 40 years or older, by state, race/ethnicity, education, and annual income. Cross-sectional study. In analyses of 2006-2009 Behavioral Risk Factor Surveillance System data from 21 states, we used multivariate regression to estimate the state-level prevalence of yearly eye doctor visit in the study population by race/ethnicity (non-Hispanic white, non-Hispanic black, Hispanic, and other), annual income (≥$35,000 and <$35,000), and education (< high school, high school, and > high school). The age-adjusted state-level prevalence of yearly eye doctor visits ranged from 48% (Missouri) to 69% (Maryland). In Alabama, Colorado, Indiana, Iowa, New Mexico, and North Carolina, the prevalence was significantly higher among respondents with more than a high school education than among those with a high school education or less (P < .05). The prevalence was positively associated with annual income levels in Alabama, Georgia, New Mexico, New York, Texas, and West Virginia and negatively associated with annual income levels in Massachusetts. After controlling for age, sex, race/ethnicity, education, and income, we also found significant disparities in the prevalence of yearly eye doctor visits among states. Among visually impaired US residents aged 40 or older, the prevalence of yearly eye examinations varied significantly by race/ethnicity, income, and education, both overall and within states. Continued and possibly enhanced collection of eye care utilization data, such as we analyzed here, may help states address disparities in vision health and identify population groups most in need of intervention programs. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. It Takes Time to Prime: Semantic Priming in the Ocular Lexical Decision Task

    PubMed Central

    Hoedemaker, Renske S.; Gordon, Peter C.

    2014-01-01

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDT) was replaced with an eye-movement response through a sequence of three words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative LD on each word in the triplet. In Experiment 2, LD responses were delayed until all three letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, while limited during text reading as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of τ, meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases where the LD is difficult as indicated by longer response times. Compared to the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT. PMID:25181368
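
    The ex-Gaussian analysis that localized priming in τ (the slow tail) can be sketched with a moment-based recovery: an ex-Gaussian RT is a Gaussian (μ, σ) plus an independent exponential with mean τ, and the exponential alone carries the third central moment. This is a generic illustration of the distribution, not the fitting procedure of the study; all parameter values are invented:

```python
# Moment-based ex-Gaussian parameter recovery. For X = N(mu, sigma^2) + Exp(tau):
#   E[(X - EX)^3] = 2 * tau^3   (the Gaussian part is symmetric),
# so tau can be read off the sample third central moment, then mu and sigma
# follow from the mean and variance.
import math
import random

random.seed(7)

mu, sigma, tau, n = 400.0, 30.0, 100.0, 50000
rts = [random.gauss(mu, sigma) + random.expovariate(1.0 / tau)
       for _ in range(n)]

mean = sum(rts) / n
m2 = sum((r - mean) ** 2 for r in rts) / n   # sample variance
m3 = sum((r - mean) ** 3 for r in rts) / n   # sample third central moment

tau_hat = (m3 / 2.0) ** (1.0 / 3.0)          # from E[(X-EX)^3] = 2 tau^3
mu_hat = mean - tau_hat                      # E[X] = mu + tau
sigma_hat = math.sqrt(max(m2 - tau_hat ** 2, 0.0))  # Var = sigma^2 + tau^2
```

    A priming effect concentrated in τ, as reported, means the manipulation stretches the exponential tail rather than shifting the whole distribution (μ).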

  19. Off-Angle Iris Correction Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Thompson, Joseph T; Karakaya, Mahmut

In many real-world iris recognition systems, obtaining consistent frontal images is problematic due to inexperienced or uncooperative users, untrained operators, or distracting environments. As a result, many collected images are unusable by modern iris matchers. In this chapter we present four methods for correcting off-angle iris images to appear frontal, which makes them compatible with existing iris matchers. The methods include an affine correction, a ray-traced model of the human eye, measured displacements, and a genetic algorithm optimized correction. The affine correction represents a simple way to create an iris image that appears frontal, but it does not account for refractive distortions of the cornea. The other methods account for refraction: the ray-traced model simulates the optical properties of the cornea, while the remaining two methods are data driven. The first uses optical flow to measure the displacements of the iris texture when compared to frontal images of the same subject. The second uses a genetic algorithm to learn a mapping that optimizes the Hamming distance scores between off-angle and frontal images. We hypothesize that the biological model presented in our earlier work does not adequately account for all variations in eye anatomy and that the two data-driven approaches should therefore yield better performance. Results are presented using the commercial VeriEye matcher, showing that the genetic algorithm method clearly improves over prior work and makes iris recognition possible up to 50 degrees off-angle.
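
    The affine correction, the simplest of the four, can be sketched geometrically: a gaze rotation of θ about the vertical axis foreshortens the iris horizontally by cos θ, so the affine "unwarp" rescales x by 1/cos θ about the iris center. This is only the naive correction the record says ignores corneal refraction; the function name and parameters below are illustrative:

```python
# Minimal sketch of the affine off-angle correction idea: undo the
# horizontal foreshortening (by cos theta) that an off-angle gaze applies
# to the iris. Refraction by the cornea is deliberately ignored here, which
# is exactly the limitation the ray-traced and data-driven methods address.
import math

def affine_unwarp_point(x, y, theta_deg, cx=0.0):
    """Map an image point back to an approximately frontal view by undoing
    horizontal foreshortening about the iris center x-coordinate cx."""
    scale = 1.0 / math.cos(math.radians(theta_deg))
    return (cx + (x - cx) * scale, y)

# A point foreshortened at 50 degrees off-angle maps back to its frontal x.
frontal_x = 10.0
off_angle_x = frontal_x * math.cos(math.radians(50.0))  # what the camera sees
recovered = affine_unwarp_point(off_angle_x, 5.0, 50.0)
```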

  20. Lune/eye gone, a Pax-like protein, uses a partial paired domain and a homeodomain for DNA recognition.

    PubMed

    Jun, S; Wallen, R V; Goriely, A; Kalionis, B; Desplan, C

    1998-11-10

    Pax proteins, characterized by the presence of a paired domain, play key regulatory roles during development. The paired domain is a bipartite DNA-binding domain that contains two helix-turn-helix domains joined by a linker region. Each of the subdomains, the PAI and RED domains, has been shown to be a distinct DNA-binding domain. The PAI domain is the most critical, but in specific circumstances, the RED domain is involved in DNA recognition. We describe a Pax protein, originally called Lune, that is the product of the Drosophila eye gone gene (eyg). It is unique among Pax proteins, because it contains only the RED domain. eyg seems to play a role both in the organogenesis of the salivary gland during embryogenesis and in the development of the eye. A high-affinity binding site for the Eyg RED domain was identified by using systematic evolution of ligands by exponential enrichment techniques. This binding site is related to a binding site previously identified for the RED domain of the Pax-6 5a isoform. Eyg also contains another DNA-binding domain, a Prd-class homeodomain (HD), whose palindromic binding site is similar to other Prd-class HDs. The ability of Pax proteins to use the PAI, RED, and HD, or combinations thereof, may be one mechanism that allows them to be used at different stages of development to regulate various developmental processes through the activation of specific target genes.

  1. Lune/eye gone, a Pax-like protein, uses a partial paired domain and a homeodomain for DNA recognition

    PubMed Central

    Jun, Susie; Wallen, Robert V.; Goriely, Anne; Kalionis, Bill; Desplan, Claude

    1998-01-01

    Pax proteins, characterized by the presence of a paired domain, play key regulatory roles during development. The paired domain is a bipartite DNA-binding domain that contains two helix–turn–helix domains joined by a linker region. Each of the subdomains, the PAI and RED domains, has been shown to be a distinct DNA-binding domain. The PAI domain is the most critical, but in specific circumstances, the RED domain is involved in DNA recognition. We describe a Pax protein, originally called Lune, that is the product of the Drosophila eye gone gene (eyg). It is unique among Pax proteins, because it contains only the RED domain. eyg seems to play a role both in the organogenesis of the salivary gland during embryogenesis and in the development of the eye. A high-affinity binding site for the Eyg RED domain was identified by using systematic evolution of ligands by exponential enrichment techniques. This binding site is related to a binding site previously identified for the RED domain of the Pax-6 5a isoform. Eyg also contains another DNA-binding domain, a Prd-class homeodomain (HD), whose palindromic binding site is similar to other Prd-class HDs. The ability of Pax proteins to use the PAI, RED, and HD, or combinations thereof, may be one mechanism that allows them to be used at different stages of development to regulate various developmental processes through the activation of specific target genes. PMID:9811867

  2. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    PubMed

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  3. Utilizing a State Level Volunteer Recognition Program at the County Level

    ERIC Educational Resources Information Center

    McCall, Fran Korthaus; Culp, Ken, III

    2013-01-01

    Volunteer recognition is an important component of Extension programs. Most land-grant universities have implemented a state volunteer recognition program. Extension professionals, however, are too overburdened with meetings, programs, and activities to effectively recognize volunteers locally. Utilizing a state model is an efficient means of…

  4. 20 CFR 408.1230 - Can you waive State recognition payments?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Can you waive State recognition payments? 408.1230 Section 408.1230 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SPECIAL BENEFITS FOR CERTAIN WORLD WAR II VETERANS Federal Administration of State Recognition Payments § 408.1230 Can you waive...

  5. Using Singular Value Decomposition to Investigate Degraded Chinese Character Recognition: Evidence from Eye Movements during Reading

    ERIC Educational Resources Information Center

    Wang, Hsueh-Cheng; Schotter, Elizabeth R.; Angele, Bernhard; Yang, Jinmian; Simovici, Dan; Pomplun, Marc; Rayner, Keith

    2013-01-01

    Previous research indicates that removing initial strokes from Chinese characters makes them harder to read than removing final or internal ones. In the present study, we examined the contribution of important components to character configuration via singular value decomposition. The results indicated that when the least important segments, which…

  6. When Half a Word Is Enough: Infants Can Recognize Spoken Words Using Partial Phonetic Information.

    ERIC Educational Resources Information Center

    Fernald, Anne; Swingley, Daniel; Pinto, John P.

    2001-01-01

    Two experiments tracked infants' eye movements to examine use of word-initial information to understand fluent speech. Results indicated that 21- and 18-month-olds recognized partial words as quickly and reliably as whole words. Infants' productive vocabulary and reaction time were related to word recognition accuracy. Results show that…

  7. From containers to catalysts: supramolecular catalysis within cucurbiturils.

    PubMed

    Pemberton, Barry C; Raghunathan, Ramya; Volla, Sabine; Sivaguru, Jayaraman

    2012-09-24

    Cucurbiturils are a family of molecular container compounds with superior molecular recognition properties. The use of cucurbiturils for supramolecular catalysis is highlighted in this concept. Both photochemical reactions as well as thermal transformations are reviewed with an eye towards tailoring substrates for supramolecular catalysis mediated by cucurbiturils. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Accurate Memory for Object Location by Individuals with Intellectual Disability: Absolute Spatial Tagging Instead of Configural Processing?

    ERIC Educational Resources Information Center

    Giuliani, Fabienne; Favrod, Jerome; Grasset, Francois; Schenk, Francoise

    2011-01-01

    Using head-mounted eye tracker material, we assessed spatial recognition abilities (e.g., reaction to object permutation, removal or replacement with a new object) in participants with intellectual disabilities. The "Intellectual Disabilities (ID)" group (n = 40) obtained a score totalling a 93.7% success rate, whereas the "Normal Control" group…

  9. The Influence of Bilingualism on the Preference for the Mouth Region of Dynamic Faces

    ERIC Educational Resources Information Center

    Ayneto, Alba; Sebastian-Galles, Nuria

    2017-01-01

    Bilingual infants show an extended period of looking at the mouth of talking faces, which provides them with additional articulatory cues that can be used to boost the challenging situation of learning two languages (Pons, Bosch & Lewkowicz, 2015). However, the eye region also provides fundamental cues for emotion perception and recognition,…

  10. Fear Recognition Impairment in Early-Stage Alzheimer's Disease: When Focusing on the Eyes Region Improves Performance

    ERIC Educational Resources Information Center

    Hot, Pascal; Klein-Koerkamp, Yanica; Borg, Celine; Richard-Mornas, Aurelie; Zsoldos, Isabella; Adeline, Adeline Paignon; Anterion, Catherine Thomas; Baciu, Monica

    2013-01-01

    A decline in the ability to identify fearful expression has been frequently reported in patients with Alzheimer's disease (AD). In patients with severe destruction of the bilateral amygdala, similar difficulties have been reduced by using an explicit visual exploration strategy focusing on gaze. The current study assessed the possibility of…

  11. The Influence of Contextual Diversity on Eye Movements in Reading

    ERIC Educational Resources Information Center

    Plummer, Patrick; Perea, Manuel; Rayner, Keith

    2014-01-01

    Recent research has shown contextual diversity (i.e., the number of passages in which a given word appears) to be a reliable predictor of word processing difficulty. It has also been demonstrated that word-frequency has little or no effect on word recognition speed when accounting for contextual diversity in isolated word processing tasks. An…

  12. Visual speech influences speech perception immediately but not automatically.

    PubMed

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  13. Anterior ischemic optic neuropathy in a patient with Churg-Strauss syndrome.

    PubMed

    Lee, Ji Eun; Lee, Seung Uk; Kim, Soo Young; Jang, Tae Won; Lee, Sang Joon

    2012-12-01

    We describe a patient with Churg-Strauss syndrome who developed unilateral anterior ischemic optic neuropathy. A 54-year-old man with a history of bronchial asthma, allergic rhinitis, and sinusitis presented with sudden decreased visual acuity in his right eye that had begun 2 weeks previously. The visual acuity of his right eye was 20/50. Ophthalmoscopic examination revealed a diffusely swollen right optic disc and splinter hemorrhages at its margin. Goldmann perimetry showed central scotomas in the right eye and fluorescein angiography showed remarkable hyperfluorescence of the right optic nerve head. Marked peripheral eosinophilia, extravascular eosinophils in a bronchial biopsy specimen, and an increased sedimentation rate supported the diagnosis of Churg-Strauss syndrome. Therapy with methylprednisolone corrected the laboratory abnormalities, improved clinical features, and preserved vision, except for the right central visual field defect. Early recognition of this systemic disease by ophthalmologists may help in preventing severe ocular complications.

  14. Anterior Ischemic Optic Neuropathy in a Patient with Churg-Strauss Syndrome

    PubMed Central

    Lee, Ji Eun; Lee, Seung Uk; Kim, Soo Young; Jang, Tae Won

    2012-01-01

    We describe a patient with Churg-Strauss syndrome who developed unilateral anterior ischemic optic neuropathy. A 54-year-old man with a history of bronchial asthma, allergic rhinitis, and sinusitis presented with sudden decreased visual acuity in his right eye that had begun 2 weeks previously. The visual acuity of his right eye was 20/50. Ophthalmoscopic examination revealed a diffusely swollen right optic disc and splinter hemorrhages at its margin. Goldmann perimetry showed central scotomas in the right eye and fluorescein angiography showed remarkable hyperfluorescence of the right optic nerve head. Marked peripheral eosinophilia, extravascular eosinophils in a bronchial biopsy specimen, and an increased sedimentation rate supported the diagnosis of Churg-Strauss syndrome. Therapy with methylprednisolone corrected the laboratory abnormalities, improved clinical features, and preserved vision, except for the right central visual field defect. Early recognition of this systemic disease by ophthalmologists may help in preventing severe ocular complications. PMID:23204805

  15. Diamond Eye: a distributed architecture for image data mining

    NASA Astrophysics Data System (ADS)

    Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem

    1999-02-01

    Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.

  16. Psychopathic traits affect the visual exploration of facial expressions.

    PubMed

    Boll, Sabrina; Gamer, Matthias

    2016-05-01

    Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Self-organization and information in biosystems: a case study.

    PubMed

    Haken, Hermann

    2018-05-01

    Eigen's original molecular evolution equations are extended in two ways. (1) By an additional nonlinear autocatalytic term leading to new stability features; their dependence on the relative size of fitness parameters and on initial conditions is discussed in detail. (2) By noise terms that represent the spontaneous generation of molecules by mutations of substrate molecules; these terms are treated with both Langevin and Fokker-Planck equations. The steady-state solution of the latter provides us with a potential landscape giving a bird's eye view of all stable states (attractors). Two different types of evolutionary processes are suggested: (a) within a fixed attractor landscape and (b) within a landscape altered by changed fitness parameters. This may be related to Gould's concept of punctuated equilibria. External signals in the form of additional molecules may generate a new initial state within a specific basin of attraction. The corresponding attractor is then reached by self-organization. This approach allows me to define pragmatic information as signals causing a specific reaction of the receiver and to use equations equivalent to (1) as a model of (human) pattern recognition, as substantiated by the synergetic computer.
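
    A Langevin equation with an autocatalytic nonlinearity of the general kind described above can be integrated with the Euler-Maruyama scheme. The specific rate function below (linear growth plus a quadratic autocatalytic term and cubic saturation) and all parameter values are illustrative assumptions, not Haken's actual extended equations.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_langevin(x0, a=1.0, b=1.0, c=1.0, sigma=0.0,
                      dt=0.01, steps=5000):
    """Euler-Maruyama integration of an illustrative Langevin equation
    dx = (a*x + b*x**2 - c*x**3) dt + sigma dW, where b*x**2 plays the
    role of an autocatalytic term (generic sketch, not Eigen's model)."""
    x = x0
    for _ in range(steps):
        drift = a * x + b * x**2 - c * x**3
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# With sigma = 0 the dynamics settle into the stable attractor
# x* = (b + sqrt(b**2 + 4*a*c)) / (2*c): the "valley" of the
# corresponding potential landscape.
x_final = simulate_langevin(0.1)
```

    Setting sigma > 0 adds the noise term; averaging many such trajectories approximates the stationary distribution that the Fokker-Planck equation describes directly.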

  18. Early prediction of student goals and affect in narrative-centered learning environments

    NASA Astrophysics Data System (ADS)

    Lee, Sunyoung

    Recent years have seen a growing recognition of the role of goal and affect recognition in intelligent tutoring systems. Goal recognition is the task of inferring users' goals from a sequence of observations of their actions. Because of the uncertainty inherent in every facet of human computer interaction, goal recognition is challenging, particularly in contexts in which users can perform many actions in any order, as is the case with intelligent tutoring systems. Affect recognition is the task of identifying the emotional state of a user from a variety of physical cues, which are produced in response to affective changes in the individual. Accurately recognizing student goals and affect states could contribute to more effective and motivating interactions in intelligent tutoring systems. By exploiting knowledge of student goals and affect states, intelligent tutoring systems can dynamically modify their behavior to better support individual students. To create effective interactions in intelligent tutoring systems, goal and affect recognition models should satisfy two key requirements. First, because incorrectly predicted goals and affect states could significantly diminish the effectiveness of interactive systems, goal and affect recognition models should provide accurate predictions of user goals and affect states. When observations of users' activities become available, recognizers should make accurate "early" predictions. Second, goal and affect recognition models should be highly efficient so they can operate in real time. To address these issues, we present an inductive approach to recognizing student goals and affect states in intelligent tutoring systems by learning goal and affect recognition models. Our work focuses on goal and affect recognition in an important new class of intelligent tutoring systems, narrative-centered learning environments. We report the results of empirical studies of induced recognition models from observations of students' interactions in narrative-centered learning environments. Experimental results suggest that induced models can make accurate early predictions of student goals and affect states, and they are sufficiently efficient to meet the real-time performance requirements of interactive learning environments.

  19. Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.

    PubMed

    Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich

    2017-08-01

    Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms: Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.
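
    The headline metric here, the recognition rate when review is limited to the 10 highest-ranking images, can be computed from any query-to-database similarity matrix. A minimal sketch; the similarity scores below are synthetic, not outputs of the compared algorithms.

```python
import numpy as np

def top_k_recognition_rate(similarity, true_match, k=10):
    """Fraction of query images whose true match appears among the
    k highest-ranked candidates. `similarity` is (n_queries, n_database),
    higher = more similar; `true_match[i]` is the database index of
    query i's real individual."""
    # Rank database images for each query, best first.
    order = np.argsort(-similarity, axis=1)
    hits = [true_match[i] in order[i, :k] for i in range(len(true_match))]
    return float(np.mean(hits))

# Synthetic example: 5 queries, 20 database images. Queries 0-3 get a
# strong true-match score; query 4's true match scores worst of all.
rng = np.random.default_rng(1)
sim = rng.uniform(0.0, 0.5, size=(5, 20))
truth = np.array([3, 7, 11, 15, 19])
for q in range(4):
    sim[q, truth[q]] = 0.9   # strong true-match score
sim[4, truth[4]] = 0.0       # failed match: ranked last, outside top 10

rate = top_k_recognition_rate(sim, truth, k=10)  # 4 of 5 hit -> 0.8
```

    Sweeping k from 1 upward reproduces the "review the N highest-ranking images" trade-off the study evaluates.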

  20. Quantitative analysis on electrooculography (EOG) for neurodegenerative disease

    NASA Astrophysics Data System (ADS)

    Liu, Chang-Chia; Chaovalitwongse, W. Art; Pardalos, Panos M.; Seref, Onur; Xanthopoulos, Petros; Sackellares, J. C.; Skidmore, Frank M.

    2007-11-01

    Many studies have documented abnormal horizontal and vertical eye movements in human neurodegenerative disease as well as during altered states of consciousness (including drowsiness and intoxication) in healthy adults. Eye movement measurement may play an important role in measuring the progress of neurodegenerative diseases and the state of alertness in healthy individuals. There are several techniques for measuring eye movement: infrared detection (IR), video-oculography (VOG), scleral eye coil, and EOG. Among these recording techniques, EOG is a major method for monitoring abnormal eye movement. In this real-time quantitative analysis study, methods that capture the characteristics of eye movement were proposed to accurately categorize the state of neurodegenerative subjects. The EOG recordings were taken while 5 tested subjects were watching a short (>120 s) animation clip. In response to the animated clip the participants executed a number of eye movements, including smooth vertical pursuit (SVP), smooth horizontal pursuit (HVP) and random saccades (RS). Detection of abnormalities in ocular movement may improve our diagnosis and understanding of neurodegenerative disease and altered states of consciousness. A standard real-time quantitative analysis will improve detection and provide a better understanding of pathology in these disorders.
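
    A basic way to quantify such EOG traces is a velocity-threshold saccade detector: flag samples whose instantaneous velocity exceeds a threshold. The sampling rate, threshold and synthetic step-like signal below are illustrative assumptions, not the study's protocol.

```python
import numpy as np

fs = 250.0  # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic horizontal EOG trace: baseline with two step-like saccades.
eog = np.zeros_like(t)
eog[t >= 0.5] += 1.0   # saccade at ~0.5 s
eog[t >= 1.2] -= 1.0   # return saccade at ~1.2 s

def detect_saccades(signal, fs, vel_thresh=50.0):
    """Indices where absolute velocity (first difference scaled to
    signal units per second) exceeds a threshold - a generic
    velocity-threshold detector, not the paper's exact method."""
    velocity = np.diff(signal) * fs
    return np.flatnonzero(np.abs(velocity) > vel_thresh)

events = detect_saccades(eog, fs)  # two events, near 0.5 s and 1.2 s
```

    Real EOG is noisy, so in practice the trace is usually low-pass filtered before differentiation and nearby threshold crossings are merged into one event.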

  1. Fine-grained recognition of plants from images.

    PubMed

    Šulc, Milan; Matas, Jiří

    2017-01-01

    Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state of the art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks. The methods are evaluated and compared to the state of the art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide insight into the complexity of different plant recognition tasks. The proposed methods outperform the state of the art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
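
    One classic texture-analysis descriptor for tasks like bark and leaf recognition is the local binary pattern (LBP) histogram: each pixel is encoded by which of its 8 neighbours are at least as bright, and the image is summarized by the distribution of those 8-bit codes. The minimal numpy sketch below is a generic illustration, not the authors' exact feature pipeline.

```python
import numpy as np

def lbp_histogram(img):
    """Normalised 256-bin histogram of basic 8-neighbour local binary
    patterns, a classic texture descriptor (illustrative sketch)."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Set bit if the neighbour is at least as bright as the centre.
        codes |= ((n >= c).astype(np.uint8) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised so textures of any size compare

# Two textures can then be compared by, e.g., histogram distance.
rng = np.random.default_rng(2)
tex = rng.integers(0, 256, size=(64, 64))
h = lbp_histogram(tex)
```

    Grouping the 256 codes into rotation-invariant "uniform" patterns is the usual refinement when the texture orientation is not controlled.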

  2. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014

  3. Noisy Ocular Recognition Based on Three Convolutional Neural Networks.

    PubMed

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-12-17

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When images of the iris are obtained under unconstrained conditions, image quality is degraded by optical and motion blur, off-angle view (the user's eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods.

  4. Research on Attribute Reduction in Hoisting Motor State Recognition of Quayside Container Crane

    NASA Astrophysics Data System (ADS)

    Li, F.; Tang, G.; Hu, X.

    2017-07-01

    In view of the excessive number of attributes in hoisting motor state recognition for the quayside container crane, an attribute reduction method based on the discernibility matrix is introduced for attribute reduction of the hoisting motor state information table. A method of attribute reduction based on the combination of rough sets and a genetic algorithm is proposed to deal with the hoisting motor state decision table. Under the condition that the information system's decision-making ability is unchanged, redundant attributes are deleted, which reduces the complexity and computation of the recognition process for the hoisting motor and makes fast state recognition possible.
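
    The discernibility-matrix reduction mentioned here works by recording, for every pair of objects with different decisions, which condition attributes tell them apart, and then keeping a minimal attribute subset that intersects every such entry. A small sketch with a hypothetical motor-state decision table; the attribute names and values are invented, and the brute-force search stands in for the paper's genetic algorithm.

```python
from itertools import combinations

# Toy decision table (hypothetical motor-state data, not the paper's):
# each row = (condition attribute values, decision class).
table = [
    (('high', 'hot',  'loud'),  'fault'),
    (('high', 'cool', 'quiet'), 'ok'),
    (('low',  'hot',  'quiet'), 'ok'),
    (('low',  'cool', 'loud'),  'fault'),
]
attrs = ('vibration', 'temperature', 'noise')

def discernibility_matrix(table):
    """For every pair of objects with different decisions, record the
    set of condition-attribute indices that distinguish them."""
    entries = []
    for (x, dx), (y, dy) in combinations(table, 2):
        if dx != dy:
            entries.append({i for i in range(len(x)) if x[i] != y[i]})
    return entries

def minimal_reduct(table):
    """Smallest attribute subset hitting every discernibility entry
    (brute force - fine for tables this small)."""
    entries = discernibility_matrix(table)
    n = len(table[0][0])
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            if all(e & set(subset) for e in entries):
                return tuple(attrs[i] for i in subset)

reduct = minimal_reduct(table)  # here 'noise' alone decides fault vs ok
```

    Deleting the attributes outside the reduct leaves the table's decision-making ability unchanged, which is exactly the property the abstract describes.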

  5. The effects of state anxiety and thermal comfort on sleep quality and eye fatigue in shift work nurses.

    PubMed

    Dehghan, Habibollah; Azmoon, Hiva; Souri, Shiva; Akbari, Jafar

    2014-01-01

    Psychological problems such as state anxiety (SA) in the work environment have a negative effect on employees' lives, especially those of shift-work nurses, i.e. a negative effect on mental and physical health (sleep quality, eye fatigue and thermal comfort). The purpose of this study was to determine the effects of state anxiety and thermal comfort on sleep quality and eye fatigue in shift-work nurses. This cross-sectional research was conducted on 82 shift-work personnel of 18 nursing workstations of Isfahan hospitals in 2012. To measure SA, sleep quality, visual fatigue and thermal comfort, the Spielberger state-trait anxiety inventory, the Pittsburgh sleep quality index, an eye fatigue questionnaire and a thermal comfort questionnaire were used, respectively. The data were analyzed with descriptive statistics, Student's t-test and correlation analysis. The correlation between SA and sleep quality was -0.664 (P < 0.001), the Pearson correlation between SA and thermal comfort was -0.276 (P = 0.016), and between SA and eye fatigue it was 0.57 (P < 0.001). Based on these results, it can be concluded that improving thermal conditions and reducing the state anxiety level can reduce eye fatigue and increase sleep quality in shift-work nurses.
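
    The reported coefficients (e.g. the negative correlation between SA and sleep quality) are standard Pearson correlations, r = cov(x, y) / (sd(x) · sd(y)). A minimal sketch with synthetic scores, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient r = cov(x, y) / (std(x) * std(y))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Synthetic scores for illustration: higher anxiety paired with worse
# sleep quality yields a strongly negative r.
anxiety = [55, 40, 62, 35, 48, 58]
sleep_quality = [3, 7, 2, 8, 5, 3]
r = pearson_r(anxiety, sleep_quality)  # strongly negative
```

    The same value comes out of `np.corrcoef(anxiety, sleep_quality)[0, 1]`, which is the usual one-liner in practice.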

  6. The effects of state anxiety and thermal comfort on sleep quality and eye fatigue in shift work nurses

    PubMed Central

    Dehghan, Habibollah; Azmoon, Hiva; Souri, Shiva; Akbari, Jafar

    2014-01-01

    Psychological problems such as state anxiety (SA) in the work environment have a negative effect on employees' lives, especially those of shift-work nurses, i.e. a negative effect on mental and physical health (sleep quality, eye fatigue and thermal comfort). The purpose of this study was to determine the effects of state anxiety and thermal comfort on sleep quality and eye fatigue in shift-work nurses. Methods: This cross-sectional research was conducted on 82 shift-work personnel of 18 nursing workstations of Isfahan hospitals in 2012. To measure SA, sleep quality, visual fatigue and thermal comfort, the Spielberger state-trait anxiety inventory, the Pittsburgh sleep quality index, an eye fatigue questionnaire and a thermal comfort questionnaire were used, respectively. The data were analyzed with descriptive statistics, Student's t-test and correlation analysis. Results: The correlation between SA and sleep quality was −0.664 (P < 0.001), the Pearson correlation between SA and thermal comfort was −0.276 (P = 0.016), and between SA and eye fatigue it was 0.57 (P < 0.001). Conclusion: Based on these results, it can be concluded that improving thermal conditions and reducing the state anxiety level can reduce eye fatigue and increase sleep quality in shift-work nurses. PMID:25077165

  7. 355 Ocular Muscles Myopathy Associated with Autoimmune Thyroiditis. Case Reports

    PubMed Central

    Vargas-Camaño, Eugenia; Castrejon-Vázquez, Isabel; Plazola-Hernández, Sara I.; Moguel-Ancheita, Silvia

    2012-01-01

    Background Thyroid-associated orbitopathy is commonly associated with Graves' disease with lid retraction, exophthalmos, and periorbital swelling, but rarely with autoimmune thyroiditis or a euthyroid state. We reviewed 3 cases from our hospital in which anti-TSH-receptor antibodies were normal. Methods Case 1: a 60-year-old non-diabetic woman under treatment for bilateral glaucoma, with recurrent otitis media and euthyroidism, had acute onset of painless diplopia and lid ptosis in the left eye. Orbital MRI showed enlargement of the right third cranial nerve, and thyroid autoantibody (Tab) levels were high: anti-thyroglobulin (ATG) 115.1, anti-thyroid peroxidase (ATPO) 1751 U/mL. She started oral deflazacort 30 mg every 3 days. Sixty days later, complete remission of the eye symptoms correlated with lower autoantibody levels (ATG 19, ATPO 117). Case 2: a 10-year-old girl. At age 8, she had diplopia, lid ptosis and limitation of upper gaze in the left eye. The neurological study ruled out ocular myasthenia; with thyroid goiter and hypothyroidism, she started oral levothyroxine. At age 10, with a normal MRI, botulinum toxin was injected, without change. High Tab levels were found: ATG 2723, ATPO 10.7. She started oral deflazacort 30 mg every 3 days and azathioprine 100 mg daily. Currently, Tab levels are almost normal, but she remains with ocular alterations. Case 3: a 56-year-old woman with Graves' disease and exophthalmos in 1990, treated with I131 and immunosuppression with good outcome; obesity, hypertension and bilateral glaucoma in treatment. She suddenly presented with diplopia and fourth nerve paresis of the right eye. A year later, Tab were found slightly elevated (ATG 100, ATPO 227); despite treatment with prednisone 50 mg every 3 days and azathioprine 150 mg daily, a surgical procedure was required to relieve the ocular symptoms. Results We found only 3 previously reported cases of this type of thyroid eye disease; awareness of this atypical form of orbitopathy is important. Conclusions Early recognition facilitates successful treatment (Case 1), whereas delayed diagnosis leads to persistent disease (Cases 2 and 3).

  8. Role of adolescent and maternal depressive symptoms on transactional emotion recognition: context and state affect matter.

    PubMed

    Luebbe, Aaron M; Fussner, Lauren M; Kiel, Elizabeth J; Early, Martha C; Bell, Debora J

    2013-12-01

    Depressive symptomatology is associated with impaired recognition of emotion. Previous investigations have predominantly focused on emotion recognition of static facial expressions neglecting the influence of social interaction and critical contextual factors. In the current study, we investigated how youth and maternal symptoms of depression may be associated with emotion recognition biases during familial interactions across distinct contextual settings. Further, we explored if an individual's current emotional state may account for youth and maternal emotion recognition biases. Mother-adolescent dyads (N = 128) completed measures of depressive symptomatology and participated in three family interactions, each designed to elicit distinct emotions. Mothers and youth completed state affect ratings pertaining to self and other at the conclusion of each interaction task. Using multiple regression, depressive symptoms in both mothers and adolescents were associated with biased recognition of both positive affect (i.e., happy, excited) and negative affect (i.e., sadness, anger, frustration); however, this bias emerged primarily in contexts with a less strong emotional signal. Using actor-partner interdependence models, results suggested that youth's own state affect accounted for depression-related biases in their recognition of maternal affect. State affect did not function similarly in explaining depression-related biases for maternal recognition of adolescent emotion. Together these findings suggest a similar negative bias in emotion recognition associated with depressive symptoms in both adolescents and mothers in real-life situations, albeit potentially driven by different mechanisms.

  9. Outcome of solid-state 532 nm green laser in high-risk retinopathy of prematurity at a tertiary care centre in India.

    PubMed

    Chhabra, Kanika; Kaur, Prempal; Singh, Karamjit; Aggarwal, Anand; Chalia, Dharamvir

    2018-02-01

    The purpose of this study was to analyse the outcome of solid-state green laser in high-risk retinopathy of prematurity (ROP) at a tertiary centre in India. Fifty-nine eyes of 30 infants with high-risk ROP were recruited in this prospective, interventional study. High-risk ROP included prethreshold type 1 ROP and APROP. Laser photocoagulation was performed with 532 nm solid-state green laser (Novus Spectra, Lumenis, GmbH, Germany). Of the 30 infants, 18 were males (60%) and 12 were females (40%). The mean birth weight was 1102.83 ± 196.27 g. The mean gestational age was 29.5 ± 1.47 weeks. Zone 1 disease was present in 10 eyes (16.95%) and zone 2 disease in 49 (83.05%) eyes. Out of 57 eyes with prethreshold type 1 ROP, 39 eyes (68.42%) had stage 2 and 18 eyes (31.58%) had stage 3. The postconceptional age at the time of treatment was 36.03 ± 2.32 weeks. The infants received mean 2710.24 ± 747.97 laser spots. Fifty (84.8%) eyes underwent laser in a single sitting and 9 eyes (15.2%) required 2 laser sittings. Mean time for regression of ROP was 5.8 ± 3.8 weeks (range 3-11 weeks). Total ROP regression was seen in 55 eyes (93.22%). Despite laser treatment, 4 (6.78%) eyes of three infants had unfavourable outcome. One infant developed intra-procedural bradycardia. Vitreous haemorrhage was seen in five eyes (8.4%). Solid-state 532 nm green laser is a safe and effective treatment for high-risk retinopathy of prematurity.

  10. 42 CFR 403.322 - Termination of agreements for Medicare recognition of State systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 2 2010-10-01 2010-10-01 false Termination of agreements for Medicare recognition of State systems. 403.322 Section 403.322 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS SPECIAL PROGRAMS AND PROJECTS Recognition of State...

  11. Mechanisms and neural basis of object and pattern recognition: a study with chess experts.

    PubMed

    Bilalić, Merim; Langner, Robert; Erb, Michael; Grodd, Wolfgang

    2010-11-01

    Comparing experts with novices offers unique insights into the functioning of cognition, based on the maximization of individual differences. Here we used this expertise approach to disentangle the mechanisms and neural basis behind two processes that contribute to everyday expertise: object and pattern recognition. We compared chess experts and novices performing chess-related and -unrelated (visual) search tasks. As expected, the superiority of experts was limited to the chess-specific task, as there were no differences in a control task that used the same chess stimuli but did not require chess-specific recognition. The analysis of eye movements showed that experts immediately and exclusively focused on the relevant aspects in the chess task, whereas novices also examined irrelevant aspects. With random chess positions, when pattern knowledge could not be used to guide perception, experts nevertheless maintained an advantage. Experts' superior domain-specific parafoveal vision, a consequence of their knowledge about individual domain-specific symbols, enabled improved object recognition. Functional magnetic resonance imaging corroborated this differentiation between object and pattern recognition and showed that chess-specific object recognition was accompanied by bilateral activation of the occipitotemporal junction, whereas chess-specific pattern recognition was related to bilateral activations in the middle part of the collateral sulci. Using the expertise approach together with carefully chosen controls and multiple dependent measures, we identified object and pattern recognition as two essential cognitive processes in expert visual cognition, which may also help to explain the mechanisms of everyday perception.

  12. Iris Recognition Using Feature Extraction of Box Counting Fractal Dimension

    NASA Astrophysics Data System (ADS)

    Khotimah, C.; Juniati, D.

    2018-01-01

    Biometrics is a science that is now growing rapidly. Iris recognition is a biometric modality which captures a photo of the eye pattern. The markings of the iris are so distinctive that it has been proposed as a means of identification instead of fingerprints. Iris recognition was chosen for identification in this research because the iris of every individual is distinct, and because it is protected by the cornea it keeps a fixed shape. This iris recognition consists of three steps: pre-processing of the data, feature extraction, and feature matching. A Hough transformation is used in pre-processing to locate the iris area, and Daugman’s rubber sheet model to normalize the iris data set into rectangular blocks. To characterize the iris, the box counting method was used to obtain the fractal dimension value of the iris. Tests were carried out using the k-fold cross-validation method with k = 5, each test evaluating 10 different values of K for the K-Nearest Neighbor (KNN) classifier. The best recognition accuracy, 92.63%, was obtained for K = 3 with the KNN method.
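
    The pipeline in this abstract (box-counting fractal dimension as the iris feature, KNN as the matcher) can be sketched in pure Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the point set, box sizes, and the tiny training set are invented for the example.

```python
import math
from collections import Counter

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a set of 2-D pixel coordinates:
    count occupied s-by-s boxes at several scales, then fit the slope of
    log(count) against log(1/s) by least squares."""
    xs, ys = [], []
    for s in sizes:
        occupied = {(x // s, y // s) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; majority vote of
    the k nearest neighbours by squared Euclidean distance."""
    nearest = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, query)), label)
        for feat, label in train
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# A completely filled 32x32 region has dimension 2 by construction.
filled = [(x, y) for x in range(32) for y in range(32)]
# Hypothetical 1-D fractal-dimension features for two iris classes.
train = [((1.90,), "iris_A"), ((1.95,), "iris_A"),
         ((1.20,), "iris_B"), ((1.25,), "iris_B"), ((1.22,), "iris_B")]
predicted = knn_classify(train, (1.92,), k=3)
```

    In practice the features would be fractal dimensions computed from segmented, normalized iris blocks rather than a single toy value.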

  13. Bilateral corneal perforations and autoproptosis as self-induced manifestations of ocular Munchausen's syndrome.

    PubMed

    Lin, Joseph L; Servat, J Javier; Bernardino, Carlo R; Goldberg, Robert A; Levin, Flora

    2012-08-01

    To report a patient with bilateral corneal perforations and autoproptosis in a case of ocular Munchausen's syndrome. Case report. A 26-year-old white male was referred to the oculoplastics service with a one-month history of bilaterally decreased vision and a painful right eye. Multiple eyelid scars and a right corneal opacity were noted. The patient had previously been seen at another institution for rapid loss of vision in both eyes. An orbital decompression, among many procedures, failed to control the extreme pain and proptosis. Outcome measures were resolution of proptosis, stabilization of vision, and pain resolution. Three weeks after enucleation of the right eye was offered, the patient presented with a spontaneous left ruptured globe. After multiple episodes of self-mutilation and infections, both eyes were exenterated. Munchausen's syndrome can present with ophthalmic manifestations and should be considered in the differential diagnosis when ocular abnormalities cannot be explained after a thorough evaluation. Recognition of this psychiatric disease is not only important for correct medical diagnosis and treatment, but also essential in protecting patients from unnecessary invasive and aggressive medical procedures.

  14. [A new system of testing visual performance based on the cylindrical lens screen].

    PubMed

    Doege, E; Krause, O

    1983-09-01

    Using a special microoptical screen as a test-picture coating, a method for testing binocular function was developed. It offers the advantage of providing a separate visual impression to each eye from a diagnostic picture without using any device in front of the eyes. The person tested is unaware of this procedure, of which the diagnostic plate gives no hint. In addition to a description of its numerous uses and diagnostic possibilities, fusion pictures suitable for screening tests are described: Each eye is offered a separate impression with a completely different content. If fusion occurs correctly, a third motif with an entirely new meaning emerges. Several years of experience with this effective system (naked-eye tests) resulted in aids which are listed in the final section of the paper: exercise aids used for preparing the persons tested (especially infants) in the waiting room, recognition aids for the examination, and a partially kinetic picture for rapid, simple and very convincing representation of adjusting movements and of the squint position in cases of concomitant squint.

  15. Some effects of alcohol and eye movements on cross-race face learning.

    PubMed

    Harvey, Alistair J

    2014-01-01

    This study examines the impact of acute alcohol intoxication on visual scanning in cross-race face learning. The eye movements of a group of white British participants were recorded as they encoded a series of own- and different-race faces, under alcohol and placebo conditions. Intoxication reduced the rate and extent of visual scanning during face encoding, reorienting the focus of foveal attention away from the eyes and towards the nose. Differences in encoding eye movements also varied between own- and different-race face conditions as a function of alcohol. Fixations to both face types were less frequent and more lingering following intoxication, but in the placebo condition this was only the case for different-race faces. While reducing visual scanning, however, alcohol had no adverse effect on memory; only the encoding restrictions associated with sober different-race face processing led to poorer recognition. These results support perceptual expertise accounts of own-race face processing, but suggest the adverse effects of alcohol on face learning published previously are not caused by foveal encoding restrictions. The implications of these findings for alcohol myopia theory are discussed.

  16. A Method for Recognizing State of Finger Flexure and Extension

    NASA Astrophysics Data System (ADS)

    Terado, Toshihiko; Fujiwara, Osamu

    In our country, the number of handicapped people and of elderly people confined to bed is increasing rapidly. In a bedridden person's daily life, there may be limitations on physical movement and on the means of mutual communication. To support their comfortable daily lives, therefore, the development of human interface equipment is an important task. Equipment of this kind has already been developed using laser beams, eye tracking, breathing motion, and myoelectric signals, but its attachment and handling are usually not easy. In this study, paying attention to finger motion, we have developed human interface equipment that is easily attached to the body and that measures finger flexure and extension for mutual communication. The state of finger flexure and extension is identified by a threshold-level analysis of the 3-D locus data of the finger movement, which is measured through infrared rays from LED markers attached to a glove with the previously developed prototype system. We then confirmed experimentally that nearly 100% recognition of the finger movement can be achieved.
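
    A threshold-level analysis of one axis of the marker locus data, as described above, can be sketched as follows. The trajectory values, the smoothing window, and the midpoint threshold rule are illustrative assumptions, not details of the prototype system.

```python
def moving_average(signal, window=3):
    """Light smoothing to suppress marker jitter before thresholding."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def recognize_flexion(z_track, window=3):
    """Label each frame of one axis of the 3-D fingertip locus as
    'flexed' or 'extended', with the threshold placed midway between
    the smoothed extremes of the trajectory."""
    smooth = moving_average(z_track, window)
    threshold = (min(smooth) + max(smooth)) / 2.0
    return ["flexed" if z < threshold else "extended" for z in smooth]

# Synthetic fingertip-height trace: extended, one flexion, extended again.
states = recognize_flexion([10, 10, 10, 2, 2, 2, 10, 10])
```

    A real system would apply this per finger to the tracked LED marker positions, possibly with hysteresis to avoid chattering near the threshold.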

  17. The coupling of emotion and cognition in the eye: introducing the pupil old/new effect.

    PubMed

    Võ, Melissa L-H; Jacobs, Arthur M; Kuchinke, Lars; Hofmann, Markus; Conrad, Markus; Schacht, Annekathrin; Hutzler, Florian

    2008-01-01

    The study presented here investigated the effects of emotional valence on the memory for words by assessing both memory performance and pupillary responses during a recognition memory task. Participants had to make speeded judgments on whether a word presented in the test phase of the experiment had already been presented ("old") or not ("new"). An emotion-induced recognition bias was observed: Words with emotional content not only produced a higher amount of hits, but also elicited more false alarms than neutral words. Further, we found a distinct pupil old/new effect characterized as an elevated pupillary response to hits as opposed to correct rejections. Interestingly, this pupil old/new effect was clearly diminished for emotional words. We therefore argue that the pupil old/new effect is not only able to mirror memory retrieval processes, but also reflects modulation by an emotion-induced recognition bias.

  18. Enantiomer analysis of chiral carboxylic acids by AIE molecules bearing optically pure aminol groups.

    PubMed

    Zheng, Yan-Song; Hu, Yu-Jian; Li, Dong-Mi; Chen, Yi-Chang

    2010-01-15

    Pure enantiomers of carboxylic acids are a class of important biomolecules, chiral drugs, chiral reagents, etc. Analysis of the enantiomers usually requires expensive instruments or complex chiral receptors, and developing simple and reliable methods for the enantiomer analysis of acids is difficult. In this paper, chiral recognition of 2,3-dibenzoyltartaric acid and mandelic acid was first carried out with easily synthesized aggregation-induced emission (AIE) molecules bearing optically pure aminol groups. The chiral recognition is not only visible to the naked eye but can also be measured by fluorophotometer. The difference in fluorescence intensity between the two enantiomers of the acids, aroused by the aggregation-induced emission molecules, was up to 598. The chiral recognition could be applied to quantitative analysis of the enantiomer content of chiral acids. More chiral AIE amines need to be developed for the enantiomer analysis of more carboxylic acids.

  19. Optimal wavelength band clustering for multispectral iris recognition.

    PubMed

    Gong, Yazhuo; Zhang, David; Shi, Pengfei; Yan, Jingqi

    2012-07-01

    This work explores the possibility of clustering spectral wavelengths based on the maximum dissimilarity of iris textures. The eventual goal is to determine how many bands of spectral wavelengths will be enough for iris multispectral fusion and to find the bands that provide higher performance in iris multispectral recognition. A multispectral acquisition system was first designed for imaging the iris at narrow spectral bands in the range of 420 to 940 nm. Next, a set of 60 human iris images corresponding to the right and left eyes of 30 different subjects was acquired for analysis. Finally, we determined that 3 clusters were enough to represent the 10 feature bands of spectral wavelengths, using agglomerative clustering based on two-dimensional principal component analysis. The experimental results suggest (1) the number, center, and composition of the clusters of spectral wavelengths and (2) the higher performance of iris multispectral recognition based on a fusion of three wavelength bands.
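
    The band-grouping step (agglomerative clustering of per-band features down to three clusters) can be sketched in plain Python. The one-dimensional feature vectors below are invented stand-ins for the 2D-PCA texture features, and average linkage is an assumption of this sketch.

```python
import math

def agglomerative_clusters(features, n_clusters=3):
    """Bottom-up average-linkage clustering: start with one cluster per
    spectral band and repeatedly merge the closest pair of clusters
    until only n_clusters remain. Returns lists of band indices."""
    clusters = [[i] for i in range(len(features))]

    def linkage(ci, cj):
        # Average pairwise Euclidean distance between two clusters.
        return sum(math.dist(features[a], features[b])
                   for a in ci for b in cj) / (len(ci) * len(cj))

    while len(clusters) > n_clusters:
        _, i, j = min((linkage(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

# Six toy band features that fall into three well-separated groups.
groups = agglomerative_clusters([[0.0], [1.0], [10.0], [11.0], [20.0], [21.0]])
```

    With real data each feature vector would summarize the iris texture captured at one narrow wavelength band.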

  20. Making sense of self-conscious emotion: linking theory of mind and emotion in children with autism.

    PubMed

    Heerey, Erin A; Keltner, Dacher; Capps, Lisa M

    2003-12-01

    Self-conscious emotions such as embarrassment and shame are associated with 2 aspects of theory of mind (ToM): (a) the ability to understand that behavior has social consequences in the eyes of others and (b) an understanding of social norm violations. The present study aimed to link ToM with the recognition of self-conscious emotion. Children with and without autism identified facial expressions of self-conscious and non-self-conscious emotions from photographs. ToM was also measured. Children with autism performed more poorly than comparison children at identifying self-conscious emotions, though they did not differ in the recognition of non-self-conscious emotions. When ToM ability was statistically controlled, group differences in the recognition of self-conscious emotion disappeared. Discussion focused on the links between ToM and self-conscious emotion.

  1. Selective REM-sleep deprivation does not diminish emotional memory consolidation in young healthy subjects.

    PubMed

    Morgenthaler, Jarste; Wiesner, Christian D; Hinze, Karoline; Abels, Lena C; Prehn-Kristensen, Alexander; Göder, Robert

    2014-01-01

    Sleep enhances memory consolidation and it has been hypothesized that rapid eye movement (REM) sleep in particular facilitates the consolidation of emotional memory. The aim of this study was to investigate this hypothesis using selective REM-sleep deprivation. We used a recognition memory task in which participants were shown negative and neutral pictures. Participants (N=29 healthy medical students) were separated into two groups (undisturbed sleep and selective REM-sleep deprived). Both groups also worked on the memory task in a wake condition. Recognition accuracy was significantly better for negative than for neutral stimuli and better after the sleep than the wake condition. There was, however, no difference in the recognition accuracy (neutral and emotional) between the groups. In summary, our data suggest that REM-sleep deprivation was successful and that the resulting reduction of REM-sleep had no influence on memory consolidation whatsoever.

  2. Integrative understanding of macular morphologic patterns in diabetic retinopathy based on self-organizing map.

    PubMed

    Murakami, Tomoaki; Ueda-Arakawa, Naoko; Nishijima, Kazuaki; Uji, Akihito; Horii, Takahiro; Ogino, Ken; Yoshimura, Nagahisa

    2014-03-28

    To integrate parameters on spectral-domain optical coherence tomography (SD-OCT) in diabetic retinopathy (DR) based on the self-organizing map and objectively describe the macular morphologic patterns. A total of 336 consecutive eyes of 216 patients with DR for whom clear SD-OCT images were available were retrospectively reviewed. Eleven OCT parameters and the logarithm of the minimal angle of resolution (logMAR) were measured. These multidimensional data were analyzed based on the self-organizing map on which similar cases were near each other according to the degree of their similarities, followed by the objective clustering. Self-organizing maps indicated that eyes with greater retinal thickness in the central subfield had greater thicknesses in the superior and temporal subfields. Eyes with foveal serous retinal detachment (SRD) had greater thickness in the nasal or inferior subfield. Eyes with foveal cystoid spaces were arranged to the left upper corner on the two-dimensional map; eyes with foveal SRD to the left lower corner; eyes with thickened retinal parenchyma to the lower area. The following objective clustering demonstrated the unsupervised pattern recognition of macular morphologies in diabetic macular edema (DME) as well as the higher-resolution discrimination of DME per se. Multiple regression analyses showed better association of logMAR with retinal thickness in the inferior subfield in eyes with SRD and with external limiting membrane disruption in eyes with foveal cystoid spaces or thickened retinal parenchyma. The self-organizing map facilitates integrative understanding of the macular morphologic patterns and the structural/functional relationship in DR.
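
    The core of the approach above is a self-organizing map over multidimensional parameters. A minimal pure-Python 1-D SOM can be sketched as follows; the unit count, learning schedule, and the toy two-cluster data are assumptions for illustration, not the study's configuration.

```python
import math
import random

def train_som(data, n_units=4, epochs=100, seed=0):
    """Train a minimal 1-D self-organizing map: each sample pulls its
    best-matching unit (and, more weakly, that unit's neighbours)
    towards itself, so similar cases end up on nearby map units."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = 0.5 * (1.0 - epoch / epochs)          # decaying learning rate
        radius = max(1.0, (n_units / 2.0) * (1.0 - epoch / epochs))
        for x in rng.sample(data, len(data)):      # shuffled pass over data
            bmu = best_matching_unit(units, x)
            for u, w in enumerate(units):
                h = math.exp(-((u - bmu) ** 2) / (2.0 * radius ** 2))
                for j in range(dim):
                    w[j] += lr * h * (x[j] - w[j])
    return units

def best_matching_unit(units, x):
    return min(range(len(units)),
               key=lambda u: sum((units[u][j] - x[j]) ** 2
                                 for j in range(len(x))))

# Two toy "morphologic patterns" should land on different map units.
som = train_som([[0.0, 0.0]] * 5 + [[1.0, 1.0]] * 5)
```

    In the study's setting each sample would be a normalized vector of the eleven OCT parameters plus logMAR, and the map would be two-dimensional rather than one-dimensional.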

  3. Eye coding mechanisms in early human face event-related potentials.

    PubMed

    Rousselet, Guillaume A; Ince, Robin A A; van Rijsbergen, Nicola J; Schyns, Philippe G

    2014-11-10

    In humans, the N170 event-related potential (ERP) is an integrated measure of cortical activity that varies in amplitude and latency across trials. Researchers often conjecture that N170 variations reflect cortical mechanisms of stimulus coding for recognition. Here, to settle the conjecture and understand cortical information processing mechanisms, we unraveled the coding function of N170 latency and amplitude variations in possibly the simplest socially important natural visual task: face detection. On each experimental trial, 16 observers saw face and noise pictures sparsely sampled with small Gaussian apertures. Reverse-correlation methods coupled with information theory revealed that the presence of the eye specifically covaries with behavioral and neural measurements: the left eye strongly modulates reaction times and lateral electrodes represent mainly the presence of the contralateral eye during the rising part of the N170, with maximum sensitivity before the N170 peak. Furthermore, single-trial N170 latencies code more about the presence of the contralateral eye than N170 amplitudes and early latencies are associated with faster reaction times. The absence of these effects in control images that did not contain a face refutes alternative accounts based on retinal biases or allocation of attention to the eye location on the face. We conclude that the rising part of the N170, roughly 120-170 ms post-stimulus, is a critical time-window in human face processing mechanisms, reflecting predominantly, in a face detection task, the encoding of a single feature: the contralateral eye. © 2014 ARVO.

  4. RNA-binding proteins in eye development and disease: implication of conserved RNA granule components.

    PubMed

    Dash, Soma; Siddam, Archana D; Barnum, Carrie E; Janga, Sarath Chandra; Lachke, Salil A

    2016-07-01

    The molecular biology of metazoan eye development is an area of intense investigation. These efforts have led to the surprising recognition that although insect and vertebrate eyes have dramatically different structures, the orthologs or family members of several conserved transcription and signaling regulators such as Pax6, Six3, Prox1, and Bmp4 are commonly required for their development. In contrast, our understanding of posttranscriptional regulation in eye development and disease, particularly regarding the function of RNA-binding proteins (RBPs), is limited. We examine the present knowledge of RBPs in eye development in the insect model Drosophila as well as several vertebrate models such as fish, frog, chicken, and mouse. Interestingly, of the 42 RBPs that have been investigated for their expression or function in vertebrate eye development, 24 (~60%) are recognized in eukaryotic cells as components of RNA granules such as processing bodies, stress granules, or other specialized ribonucleoprotein (RNP) complexes. We discuss the distinct developmental and cellular events that may necessitate potential RBP/RNA granule-associated RNA regulon models to facilitate posttranscriptional control of gene expression in eye morphogenesis. In support of these hypotheses, three RBPs and RNP/RNA granule components Tdrd7, Caprin2, and Stau2 are linked to ocular developmental defects such as congenital cataract, Peters anomaly, and microphthalmia in human patients or animal models. We conclude by discussing the utility of interdisciplinary approaches such as the bioinformatics tool iSyTE (integrated Systems Tool for Eye gene discovery) to prioritize RBPs for deriving posttranscriptional regulatory networks in eye development and disease. WIREs RNA 2016, 7:527-557. doi: 10.1002/wrna.1355 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  5. Analysis of novel Sjogren's syndrome autoantibodies in patients with dry eyes.

    PubMed

    Everett, Sandra; Vishwanath, Sahana; Cavero, Vanessa; Shen, Long; Suresh, Lakshmanan; Malyavantham, Kishore; Lincoff-Cohen, Norah; Ambrus, Julian L

    2017-03-07

    Dry eye is a common problem in Ophthalmology and may occur for many reasons including Sjogren's syndrome (SS). Recent studies have identified autoantibodies, anti-salivary gland protein 1 (SP1), anti-carbonic anhydrase 6 (CA6) and anti-parotid secretory protein (PSP), which occur early in the course of SS. The current studies were designed to evaluate how many patients with idiopathic dry eye and no evidence of systemic diseases from a dry eye practice have these autoantibodies. Patients from a dry eye clinic and normal controls were assessed by Schirmer's test for tear flow. Sera were assessed for autoantibodies using ELISA assays. Statistical analysis was performed with Prism 7 software using Student's unpaired t test. In this study 60% of the dry eye patients expressed one of these autoantibodies. Only 30% expressed one of the autoantibodies associated with long-standing SS, which are included in the diagnostic criteria for SS, anti-Ro and anti-La. Patients with disease for less than 2 years and mild dry eyes did not express anti-Ro or anti-La, while 25% expressed anti-SP1. Similar observations, with smaller numbers, were made when patients had not only dry eye but also dry mouth. Antibodies to SP1, CA6 and PSP occur in some patients with idiopathic dry eyes. Further studies will be needed to determine how many of these patients go on to develop systemic manifestations of SS. Testing for these autoantibodies may allow early recognition of patients with SS. This will lead to improved management of the patients and the development of new strategies to maintain normal lacrimal and salivary gland function in patients with SS.

  6. RNA Binding Proteins in Eye Development and Disease: Implication of Conserved RNA Granule Components

    PubMed Central

    Dash, Soma; Siddam, Archana D.; Barnum, Carrie E.; Janga, Sarath Chandra

    2016-01-01

    The molecular biology of metazoan eye development is an area of intense investigation. These efforts have led to the surprising recognition that although insect and vertebrate eyes have dramatically different structures, the orthologs or family members of several conserved transcription and signaling regulators such as Pax6, Six3, Prox1 and Bmp4 are commonly required for their development. In contrast, our understanding of post-transcriptional regulation in eye development and disease, particularly regarding the function of RNA binding proteins (RBPs), is limited. We examine the present knowledge of RBPs in eye development in the insect model Drosophila, as well as several vertebrate models such as fish, frog, chicken and mouse. Interestingly, of the 42 RBPs that have been investigated for their expression or function in vertebrate eye development, 24 (~60%) are recognized in eukaryotic cells as components of RNA granules such as Processing bodies (P-bodies), Stress granules, or other specialized ribonucleoprotein (RNP) complexes. We discuss the distinct developmental and cellular events that may necessitate potential RBP/RNA granule-associated RNA regulon models to facilitate post-transcriptional control of gene expression in eye morphogenesis. In support of these hypotheses, three RBPs and RNP/RNA granule components Tdrd7, Caprin2 and Stau2 are linked to ocular developmental defects such as congenital cataract, Peters anomaly and microphthalmia in human patients or animal models. We conclude by discussing the utility of interdisciplinary approaches such as the bioinformatics tool iSyTE (integrated Systems Tool for Eye gene discovery) to prioritize RBPs for deriving post-transcriptional regulatory networks in eye development and disease. PMID:27133484

  7. Comprehensive Evaluation of Stand-Off Biometrics Techniques for Enhanced Surveillance during Major Events

    DTIC Science & Technology

    2011-02-01

    …transactions. Analysts used frame counts to measure the duration for which the Test Subject interacted with the iris recognition system camera… Figure 20: Frame extracted from HD CCTV video… the eyes are located and used as a frame of reference. Once the eyes are located, the face image can be rotated clockwise or counter-clockwise to…

  8. Contribution of Spaceflight Environmental Factors to Vision Risks

    NASA Technical Reports Server (NTRS)

    Zanello, Susana

    2012-01-01

    The recognition of a risk of visual impairment and intracranial pressure increase as a result of spaceflight has directed our attention and research efforts to the eye. While the alterations observed in astronauts returning from long duration missions include reportable vision and neuroanatomical changes observed by non-invasive methods, other effects and subsequent tissue responses at the molecular and cellular level can only be studied by accessing the tissue itself. As a result of this need, several studies are currently taking place that use animal models for eye research within the HHC Element. The implementation of these studies represents a significant addition to the capabilities of the biomedical research laboratories within the SK3 branch at JSC.

  9. Study of multi-channel optical system based on the compound eye

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Fu, Yuegang; Liu, Zhiying; Dong, Zhengchao

    2014-09-01

    As an important part of machine vision, compound eye optical systems have the characteristics of high resolution and large FOV. By applying compound eye optical systems to target detection and recognition, the contradiction between large FOV and high resolution in traditional single-aperture optical systems can be solved effectively, and the parallel processing ability of the optical systems can be fully exploited. In this paper, the imaging features of compound eye optical systems are analyzed. After discussing the relationship between the FOV of each subsystem and the overlap of the FOV across the whole system, a method to define the FOV of the subsystem is presented. A compound eye optical system is then designed, based on a large FOV synthesized from multiple channels. The compound eye optical system consists of a central optical system and an array subsystem, in which the array subsystem is used to capture the target. A high-resolution image of the target is obtained by the central optical system. With the advantages of small volume, light weight and rapid response speed, the optical system can detect objects within 3 km over a FOV of 60° without any scanning device. Objects in the central field (2w = 5.1°) can be imaged with high resolution so that they can be recognized.

  10. Episodic Short-Term Recognition Requires Encoding into Visual Working Memory: Evidence from Probe Recognition after Letter Report

    PubMed Central

    Poth, Christian H.; Schneider, Werner X.

    2016-01-01

    Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of 10 letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the 10 letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of 10 letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM. PMID:27713722

  11. Episodic Short-Term Recognition Requires Encoding into Visual Working Memory: Evidence from Probe Recognition after Letter Report.

    PubMed

    Poth, Christian H; Schneider, Werner X

    2016-01-01

    Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of 10 letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the 10 letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of 10 letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM.

  12. Bologna with Student Eyes 2015: Time to Meet the Expectations from 1999

    ERIC Educational Resources Information Center

    O'Driscoll, Cat; Fröhlich, Melanie; Gehrke, Elisabeth; Isoski, Tijana; O Maolain, Aengus; Meister, Lea; Nordal, Erin; Galan Palomares, Fernando Miguel; Pietkiewicz, Karolina; Sanchez, Ines; Todorovski, Blazhe

    2015-01-01

    Compared to previous years where every aspect of the Bologna process was analysed from a student perspective we have chosen to highlight some key issues for the future that are important for students. Some of the key areas for the European Students' Union in this edition are student-centred learning, the social dimension, recognition and the…

  13. Avoidance of Emotionally Arousing Stimuli Predicts Social-Perceptual Impairment in Asperger's Syndrome

    ERIC Educational Resources Information Center

    Corden, Ben; Chilvers, Rebecca; Skuse, David

    2008-01-01

    We combined eye-tracking technology with a test of facial affect recognition and a measure of self-reported social anxiety in order to explore the aetiology of social-perceptual deficits in Asperger's syndrome (AS). Compared to controls matched for age, IQ and visual-perceptual ability, we found a group of AS adults was impaired in their…

  14. Auditory Word Recognition of Nouns and Verbs in Children with Specific Language Impairment (SLI)

    ERIC Educational Resources Information Center

    Andreu, Llorenc; Sanz-Torrent, Monica; Guardia-Olmos, Joan

    2012-01-01

    Nouns are fundamentally different from verbs semantically and syntactically, since verbs can specify one, two, or three nominal arguments. In this study, 25 children with Specific Language Impairment (age 5;3-8;2 years) and 50 typically developing children (3;3-8;2 years) participated in an eye-tracking experiment of spoken language comprehension…

  15. Neural Dynamics of Object-Based Multifocal Visual Spatial Attention and Priming: Object Cueing, Useful-Field-of-View, and Crowding

    ERIC Educational Resources Information Center

    Foley, Nicholas C.; Grossberg, Stephen; Mingolla, Ennio

    2012-01-01

    How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued…

  16. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    NASA Astrophysics Data System (ADS)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA for two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study of 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows and mouth regions for the expressions of fear and disgust, respectively. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.
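    The appearance-based step described above can be sketched as follows: flatten a cropped facial region to an intensity vector and project it onto its leading principal components. This is a minimal illustration only; the array shapes, the random stand-in data, and the use of scikit-learn are assumptions, not details from the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical data: 100 cropped eyes-eyebrows regions, each 24x48 pixels,
    # flattened to 1152-dimensional intensity vectors.
    rng = np.random.default_rng(0)
    region_pixels = rng.random((100, 24 * 48))

    # Project onto the leading principal components; a cross-group analysis
    # would then compare these low-dimensional projections between groups.
    pca = PCA(n_components=10)
    features = pca.fit_transform(region_pixels)

    print(features.shape)  # (100, 10)
    ```

    The geometric-based variant would apply the same projection to the 125 landmark coordinates instead of pixel intensities.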

  17. A VidEo-Based Intelligent Recognition and Decision System for the Phacoemulsification Cataract Surgery.

    PubMed

    Tian, Shu; Yin, Xu-Cheng; Wang, Zhi-Bin; Zhou, Fang; Hao, Hong-Wei

    2015-01-01

    The phacoemulsification surgery is one of the most advanced surgical treatments for cataract. However, conventional surgery involves little automation and relies heavily on the surgeon's skill. Alternatively, one promising approach is to use video processing and pattern recognition technologies to automatically detect the cataract grade and intelligently control the release of the ultrasonic energy during the operation. Unlike cataract grading in a diagnosis system with static images, dynamic videos of the surgery always introduce complicated backgrounds, unexpected noise, and varied information. Here we develop a Video-Based Intelligent Recognition and Decision (VeBIRD) system, which breaks new ground by providing a generic framework for automatically tracking the operation process and classifying the cataract grade in microscope videos of the phacoemulsification cataract surgery. VeBIRD comprises a robust eye (iris) detector with a randomized Hough transform to precisely locate the eye against the noisy background, an effective probe tracker with Tracking-Learning-Detection to track the operation probe through the dynamic process, and an intelligent decider with discriminative learning to recognize the cataract grade in the complicated video. Experiments with a variety of real microscope videos of phacoemulsification verify VeBIRD's effectiveness.
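    The randomized Hough transform used for the eye (iris) detector can be illustrated on circle fitting: instead of voting over every edge pixel, it repeatedly samples three edge points, computes the unique circle through them, and accumulates votes for the quantized circle parameters. This is a generic sketch of the technique, not the VeBIRD implementation; the synthetic edge points and vote quantization are assumptions for illustration.

    ```python
    import math
    import random
    from collections import Counter

    def circle_from(p1, p2, p3):
        """Circumcircle of three points; returns (cx, cy, r) or None if collinear."""
        ax, ay = p1; bx, by = p2; cx, cy = p3
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-9:
            return None
        ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
              + (cx**2 + cy**2) * (ay - by)) / d
        uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
              + (cx**2 + cy**2) * (bx - ax)) / d
        return ux, uy, math.hypot(ax - ux, ay - uy)

    def randomized_hough_circle(points, iterations=2000, seed=1):
        """Vote for circles defined by random point triples; return the modal one."""
        rng = random.Random(seed)
        votes = Counter()
        for _ in range(iterations):
            c = circle_from(*rng.sample(points, 3))
            if c is not None:
                votes[tuple(round(v) for v in c)] += 1
        return votes.most_common(1)[0][0]

    # Synthetic "edge points" on a circle of radius 20 centred at (50, 40),
    # standing in for iris boundary pixels from an edge detector.
    pts = [(50 + 20 * math.cos(t / 10), 40 + 20 * math.sin(t / 10))
           for t in range(63)]
    print(randomized_hough_circle(pts))  # → (50, 40, 20)
    ```

    Because only a few hundred triples are needed rather than a full 3-D accumulator, the randomized variant is well suited to per-frame detection in video.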

  18. A VidEo-Based Intelligent Recognition and Decision System for the Phacoemulsification Cataract Surgery

    PubMed Central

    Yin, Xu-Cheng; Wang, Zhi-Bin; Zhou, Fang; Hao, Hong-Wei

    2015-01-01

    The phacoemulsification surgery is one of the most advanced surgical treatments for cataract. However, conventional surgery involves little automation and relies heavily on the surgeon's skill. Alternatively, one promising approach is to use video processing and pattern recognition technologies to automatically detect the cataract grade and intelligently control the release of the ultrasonic energy during the operation. Unlike cataract grading in a diagnosis system with static images, dynamic videos of the surgery always introduce complicated backgrounds, unexpected noise, and varied information. Here we develop a Video-Based Intelligent Recognition and Decision (VeBIRD) system, which breaks new ground by providing a generic framework for automatically tracking the operation process and classifying the cataract grade in microscope videos of the phacoemulsification cataract surgery. VeBIRD comprises a robust eye (iris) detector with a randomized Hough transform to precisely locate the eye against the noisy background, an effective probe tracker with Tracking-Learning-Detection to track the operation probe through the dynamic process, and an intelligent decider with discriminative learning to recognize the cataract grade in the complicated video. Experiments with a variety of real microscope videos of phacoemulsification verify VeBIRD's effectiveness. PMID:26693249

  19. EEG Brain Activity in Dynamic Health Qigong Training: Same Effects for Mental Practice and Physical Training?

    PubMed

    Henz, Diana; Schöllhorn, Wolfgang I

    2017-01-01

    In recent years, there has been significant uptake of meditation and related relaxation techniques, as a means of alleviating stress and fostering an attentive mind. Several electroencephalogram (EEG) studies have reported changes in spectral band frequencies during Qigong meditation indicating a relaxed state. Much less is reported on effects of brain activation patterns induced by Qigong techniques involving bodily movement. In this study, we tested whether (1) physical Qigong training alters EEG theta and alpha activation, and (2) mental practice induces the same effect as a physical Qigong training. Subjects performed the dynamic Health Qigong technique Wu Qin Xi (five animals) physically and by mental practice in a within-subjects design. Experimental conditions were randomized. Two 2-min (eyes-open, eyes-closed) EEG sequences under resting conditions were recorded before and immediately after each 15-min exercise. Analyses of variance were performed for spectral power density data. Increased alpha power was found in posterior regions in mental practice and physical training for eyes-open and eyes-closed conditions. Theta power was increased after mental practice in central areas in eyes-open conditions, decreased in fronto-central areas in eyes-closed conditions. Results suggest that mental, as well as physical Qigong training, increases alpha activity and therefore induces a relaxed state of mind. The observed differences in theta activity indicate different attentional processes in physical and mental Qigong training. No difference in theta activity was obtained in physical and mental Qigong training for eyes-open and eyes-closed resting state. In contrast, mental practice of Qigong entails a high degree of internalized attention that correlates with theta activity, and that is dependent on eyes-open and eyes-closed resting state.

  20. Eye closure in darkness animates olfactory and gustatory cortical areas.

    PubMed

    Wiesmann, M; Kopietz, R; Albrecht, J; Linn, J; Reime, U; Kara, E; Pollatos, O; Sakar, V; Anzinger, A; Fesl, G; Brückmann, H; Kobal, G; Stephan, T

    2006-08-01

    In two previous fMRI studies, it was reported that eyes-open and eyes-closed conditions in darkness had differential effects on brain activity, and typical patterns of cortical activity were identified. Without external stimulation, ocular motor and attentional systems were activated when the eyes were open. In contrast, the visual, somatosensory, vestibular, and auditory systems were activated when the eyes were closed. In this study, we investigated whether cortical areas related to the olfactory and gustatory system are also animated by eye closure without any other external stimulation. In a first fMRI experiment (n = 22), we identified cortical areas, including the piriform cortex, activated by olfactory stimulation. In a second experiment (n = 12), subjects lying in darkness in the MRI scanner alternately opened and closed their eyes. In accordance with previous studies, we found activation clusters bilaterally in visual, somatosensory, vestibular and auditory cortical areas for the contrast eyes-closed vs. eyes-open. In addition, we were able to show that cortical areas related to the olfactory and gustatory system were also animated by eye closure. These results support the hypothesis that there are two different states of mental activity: with the eyes closed, an "interoceptive" state characterized by imagination and multisensory activity, and with the eyes open, an "exteroceptive" state characterized by attention and ocular motor activity. Our study also suggests that the chosen baseline condition may have a considerable impact on activation patterns and on the interpretation of brain activation studies. This needs to be considered for studies of the olfactory and gustatory system.

  1. Heuristics in primary care for recognition of unreported vision loss in older people: a technology development study.

    PubMed

    Wijeyekoon, Skanda; Kharicha, Kalpa; Iliffe, Steve

    2015-09-01

    To evaluate heuristics (rules of thumb) for recognition of undetected vision loss in older patients in primary care. Vision loss is associated with ageing, and its prevalence is increasing. Visual impairment has a broad impact on health, functioning and well-being. Unrecognised vision loss remains common, and screening interventions have yet to reduce its prevalence. An alternative approach is to enhance practitioners' skills in recognising undetected vision loss, by building a more detailed picture of those who are likely not to act on vision changes, report symptoms or have eye tests. This paper describes a qualitative technology development study to evaluate heuristics for recognition of undetected vision loss in older patients in primary care. Using a previous modelling study, two heuristics in the form of mnemonics were developed to aid pattern recognition and allow general practitioners to identify potential cases of unreported vision loss. These heuristics were then analysed with experts. Findings: It was concluded that their implementation in modern general practice was unsuitable and that an alternative solution should be sought.

  2. Research on improving image recognition robustness by combining multiple features with associative memory

    NASA Astrophysics Data System (ADS)

    Guo, Dongwei; Wang, Zhe

    2018-05-01

    Convolutional neural networks (CNNs) have achieved great success in computer vision: they can learn hierarchical representations from raw pixels and show outstanding performance in various image recognition tasks [1]. However, CNNs are easy to fool, in that it is possible to produce images totally unrecognizable to human eyes that CNNs believe with near certainty are familiar objects [2]. In this paper, an associative memory model based on multiple features is proposed. Within this model, feature extraction and classification are carried out by a CNN, t-SNE and an exponential bidirectional associative memory neural network (EBAM). The geometric features extracted by the CNN and the digital features extracted by t-SNE are associated by the EBAM. Thus we ensure recognition robustness through a comprehensive assessment of the two features. With our model, we get only an 8% error rate on fraudulent data. In systems that require a high safety factor, or in certain key areas, strong robustness is extremely important: if image recognition robustness can be ensured, network security will be greatly improved and social production efficiency greatly enhanced.
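    The association step can be illustrated with the classic (non-exponential) bidirectional associative memory, which stores feature pairs as a Hebbian sum of outer products and recalls one code from the other by a signed matrix product. This is a minimal sketch of the general BAM idea, not the paper's EBAM; the tiny bipolar vectors standing in for the two feature streams are assumptions for illustration.

    ```python
    import numpy as np

    def train_bam(pairs):
        """Hebbian weight matrix W = sum of outer products x·yᵀ over bipolar pairs."""
        return sum(np.outer(x, y) for x, y in pairs)

    def recall(W, x):
        """One forward pass x -> y: y = sign(xᵀ W)."""
        return np.sign(x @ W).astype(int)

    # Hypothetical bipolar codes: x stands in for a thresholded CNN feature,
    # y for a code derived from the t-SNE embedding of the same image.
    pairs = [
        (np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
        (np.array([-1, 1, 1, 1]),  np.array([-1, 1, 1])),
    ]
    W = train_bam(pairs)
    print(recall(W, pairs[0][0]))  # recovers the code associated with the first x
    ```

    An EBAM replaces the linear sum with exponentially weighted terms to enlarge the basin of attraction of each stored pair, but the store/recall structure is the same.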

  3. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  4. The Apple of the mind's eye: Everyday attention, metamemory, and reconstructive memory for the Apple logo.

    PubMed

    Blake, Adam B; Nazarian, Meenely; Castel, Alan D

    2015-01-01

    People are regularly bombarded with logos in an attempt to improve brand recognition, and logos are often designed with the central purpose of memorability. The ubiquitous Apple logo is a simple design and is often referred to as one of the most recognizable logos in the world. The present study examined recall and recognition for this simple and pervasive logo and to what degree metamemory (confidence judgements) match memory performance. Participants showed surprisingly poor memory for the details of the logo as measured through recall (drawings) and forced-choice recognition. Only 1 participant out of 85 correctly recalled the Apple logo, and fewer than half of all participants correctly identified the logo. Importantly, participants indicated higher levels of confidence for both recall and recognition, and this overconfidence was reduced if participants made the judgements after, rather than before, drawing the logo. The general findings did not differ between Apple and PC users. The results provide novel support for theories of attentional saturation, inattentional amnesia, and reconstructive memory; additionally they show how an availability heuristic can lead to overconfidence in memory for logos.

  5. The Memory State Heuristic: A Formal Model Based on Repeated Recognition Judgments

    ERIC Educational Resources Information Center

    Castela, Marta; Erdfelder, Edgar

    2017-01-01

    The recognition heuristic (RH) theory predicts that, in comparative judgment tasks, if one object is recognized and the other is not, the recognized one is chosen. The memory-state heuristic (MSH) extends the RH by assuming that choices are not affected by recognition judgments per se, but by the memory states underlying these judgments (i.e.,…

  6. Changes in Reported Sexual Orientation Following US States Recognition of Same-Sex Couples

    PubMed Central

    Corliss, Heather L.; Spiegelman, Donna; Williams, Kerry; Austin, S. Bryn

    2016-01-01

    Objectives. To compare changes in self-reported sexual orientation of women living in states with any recognition of same-sex relationships (e.g., hospital visitation, domestic partnerships) with those of women living in states without such recognition. Methods. We calculated the likelihood of women in the Nurses’ Health Study II (n = 69 790) changing their reported sexual orientation between 1995 and 2009. Results. We used data from the Nurses’ Health Study II and found that living in a state with same-sex relationship recognition was associated with changing one’s reported sexual orientation, particularly from heterosexual to sexual minority. Individuals who reported being heterosexual in 1995 were 30% more likely to report a minority orientation (i.e., bisexual or lesbian) in 2009 (risk ratio = 1.30; 95% confidence interval = 1.05, 1.61) if they lived in a state with any recognition of same-sex relationships compared with those who lived in a state without such recognition. Conclusions. Policies recognizing same-sex relationships may encourage women to report a sexual minority orientation. Future research is needed to clarify how other social and legal policies may affect sexual orientation self-reports. PMID:27736213

  7. 29 CFR 29.13 - Recognition of State Apprenticeship Agencies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... its authority to grant recognition to a State Apprenticeship Agency. Recognition confers non-exclusive... carry out the functions of a Registration Agency, including: Outreach and education; registration of... the areas of non-conformity, require corrective action, and offer technical assistance. After the...

  8. On the other side of the fence: effects of social categorization and spatial grouping on memory and attention for own-race and other-race faces.

    PubMed

    Kloth, Nadine; Shields, Susannah E; Rhodes, Gillian

    2014-01-01

    The term "own-race bias" refers to the phenomenon that humans are typically better at recognizing faces from their own than a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing motivation to individuate them. Supporting sociocognitive accounts, a recent study has reported improved recognition for other-race faces when these were categorized as belonging to the participants' in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups, containing both own-race and other-race faces, half of each labeled as in-group and out-group, respectively. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed at extending this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups, containing an equal number of own-university and other-university Asian and European faces. Between participants, faces were grouped either according to race or university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. The race of the study faces significantly affected participants' memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own-university faces. 
Results are discussed in light of the theoretical accounts of the own-race bias.

  9. On the Other Side of the Fence: Effects of Social Categorization and Spatial Grouping on Memory and Attention for Own-Race and Other-Race Faces

    PubMed Central

    Kloth, Nadine; Shields, Susannah E.; Rhodes, Gillian

    2014-01-01

    The term “own-race bias” refers to the phenomenon that humans are typically better at recognizing faces from their own than a different race. The perceptual expertise account assumes that our face perception system has adapted to the faces we are typically exposed to, equipping it poorly for the processing of other-race faces. Sociocognitive theories assume that other-race faces are initially categorized as out-group, decreasing motivation to individuate them. Supporting sociocognitive accounts, a recent study has reported improved recognition for other-race faces when these were categorized as belonging to the participants' in-group on a second social dimension, i.e., their university affiliation. Faces were studied in groups, containing both own-race and other-race faces, half of each labeled as in-group and out-group, respectively. When study faces were spatially grouped by race, participants showed a clear own-race bias. When faces were grouped by university affiliation, recognition of other-race faces from the social in-group was indistinguishable from own-race face recognition. The present study aimed at extending this singular finding to other races of faces and participants. Forty Asian and 40 European Australian participants studied Asian and European faces for a recognition test. Faces were presented in groups, containing an equal number of own-university and other-university Asian and European faces. Between participants, faces were grouped either according to race or university affiliation. Eye tracking was used to study the distribution of spatial attention to individual faces in the display. The race of the study faces significantly affected participants' memory, with better recognition of own-race than other-race faces. However, memory was unaffected by the university affiliation of the faces and by the criterion for their spatial grouping on the display. Eye tracking revealed strong looking biases towards both own-race and own-university faces. 
Results are discussed in light of the theoretical accounts of the own-race bias. PMID:25180902

  10. Eye-tracking novice and expert geologist groups in the field and laboratory

    NASA Astrophysics Data System (ADS)

    Cottrell, R. D.; Evans, K. M.; Jacobs, R. A.; May, B. B.; Pelz, J. B.; Rosen, M. R.; Tarduno, J. A.; Voronov, J.

    2010-12-01

    We are using an Active Vision approach to learn how novices and expert geologists acquire visual information in the field. The Active Vision approach emphasizes that visual perception is an active process wherein new information is acquired about a particular environment through exploratory eye movements. Eye movements are not only influenced by physical stimuli, but are also strongly influenced by high-level perceptual and cognitive processes. Eye-tracking data were collected on ten novices (undergraduate geology students) and 3 experts during a 10-day field trip across California focused on neotectonics. In addition, high-resolution panoramic images were captured at each key locality for use in a semi-immersive laboratory environment. Examples of each data type will be presented. The number of observers will be increased in subsequent field trips, but expert/novice differences are already apparent in the first set of individual eye-tracking records, including gaze time, gaze pattern and object recognition. We will review efforts to quantify these patterns, and development of semi-immersive environments to display geologic scenes. The research is a collaborative effort between Earth scientists, Cognitive scientists and Imaging scientists at the University of Rochester and the Rochester Institute of Technology and with funding from the National Science Foundation.

  11. [The research advances and applications of genome editing in hereditary eye diseases].

    PubMed

    Cai, S W; Zhang, Y; Hou, M Z; Liu, Y; Li, X R

    2017-05-11

    Genome editing is a cutting-edge technology that generates DNA double strand breaks at the specific genomic DNA sequence through nuclease recognition and cleavage, and then achieves insertion, replacement, or deletion of the target gene via endogenous DNA repair mechanisms, such as non-homologous end joining, homology directed repair, and homologous recombination. So far, more than 600 human hereditary eye diseases and systemic hereditary diseases with ocular phenotypes have been found. However, most of these diseases are of incompletely elucidated pathogenesis and without effective therapies. Genome editing technology can precisely target and alter the genomes of animals, establish animal models of the hereditary diseases, and elucidate the relationship between the target gene and the disease phenotype, thereby providing a powerful approach to studying the pathogenic mechanisms underlying the hereditary eye diseases. In addition, correction of gene mutations by the genome editing brings a new hope to gene therapy for the hereditary eye diseases. This review introduces the molecular characteristics of 4 major enzymes used in the genome editing, including homing endonucleases, zinc finger nucleases, transcription activator-like effector nucleases, and clustered regularly interspaced short palindromic repeats (CRISPR)/ CRISPR-associated protein 9 (Cas9), and summarizes the current applications of this technology in investigating the pathogenic mechanisms underlying the hereditary eye diseases. (Chin J Ophthalmol, 2017, 53: 386-371).

  12. Refractive states of eyes and associations between ametropia and age, breed, and axial globe length in domestic cats.

    PubMed

    Konrade, Kricket A; Hoffman, Allison R; Ramey, Kelli L; Goldenberg, Ruby B; Lehenbauer, Terry W

    2012-02-01

    To determine the refractive states of eyes in domestic cats and to evaluate correlations between refractive error and age, breed, and axial globe measurements. 98 healthy ophthalmologically normal domestic cats. The refractive state of 196 eyes (2 eyes/cat) was determined by use of streak retinoscopy. Cats were considered ametropic when the mean refractive state was ≥ ± 0.5 diopter (D). Amplitude-mode ultrasonography was used to determine axial globe length, anterior chamber length, and vitreous chamber depth. Mean ± SD refractive state of all eyes was -0.78 ± 1.37 D. Mean refractive error of cats changed significantly as a function of age. Mean refractive state of kittens (≤ 4 months old) was -2.45 ± 1.57 D, and mean refractive state of adult cats (> 1 year old) was -0.39 ± 0.85 D. Mean axial globe length, anterior chamber length, and vitreous chamber depth were 19.75 ± 1.59 mm, 4.66 ± 0.86 mm, and 7.92 ± 0.86 mm, respectively. Correlations were detected between age and breed and between age and refractive states of feline eyes. Mean refractive error changed significantly as a function of age, and kittens had greater negative refractive error than did adult cats. Domestic shorthair cats were significantly more likely to be myopic than were domestic mediumhair or domestic longhair cats. Domestic cats should be included among the animals in which myopia can be detected at a young age, with a likelihood of progression to emmetropia as cats mature.

  13. The Split Fovea Theory and the Leicester critique: what do the data say?

    PubMed

    Van der Haegen, Lise; Drieghe, Denis; Brysbaert, Marc

    2010-01-01

    According to the Split Fovea Theory (SFT) recognition of foveally presented words involves interhemispheric transfer. This is because letters to the left of the fixation location are initially sent to the right hemisphere, whereas letters to the right of the fixation position are projected to the left hemisphere. Both sources of information must be integrated for words to be recognized. Evidence for the SFT comes from the Optimal Viewing Position (OVP) paradigm, in which foveal word recognition is examined as a function of the letter fixated. OVP curves are different for left and right language dominant participants, indicating a time cost when information is presented in the half-field ipsilateral to the dominant hemisphere (Hunter, Brysbaert, & Knecht, 2007). The methodology of the SFT research has recently been questioned, because not enough efforts were made to ensure adequate fixation. The aim of the present study is to test the validity of this argument. Experiment 1 replicated the OVP effect in a naming task by presenting words at different fixation positions, with the experimental settings applied in previous OVP research. Experiment 2 monitored and controlled eye fixations of the participants and presented the stimuli within the boundaries of the fovea. Exactly the same OVP curve was obtained. In Experiment 3, the eyes were also tracked and monocular viewing was used. Results again revealed the same OVP effect, although latencies were remarkably higher than in the previous experiments. From these results we can conclude that although noise is present in classical SFT studies without eye-tracking, this does not change the OVP effect observed with left dominant individuals.

  14. Making a difference with Vision 2020: The Right to Sight? Lessons from two states of North Western Nigeria.

    PubMed

    Muhammad, N; Adamu, M D

    2014-01-01

    Settings and Aim: The World Health Organization launched in 1999 an initiative to eliminate global avoidable blindness and prevent the projected doubling of avoidable visual impairment between 1990 and 2020 (Vision 2020: The Right to Sight). The World Health Assembly (WHA) adopted resolutions WHA 59.25 and WHA 56.26 urging member states to adopt the Vision 2020 principles. More than 90 nongovernmental development organizations, agencies, and institutions, together with a number of major corporations, are now working together in this global partnership. Two neighboring states in North Western Nigeria provide eye care services using different approaches; one state uses the principles of Vision 2020, the other uses a different strategy. The aim of the study was to assess awareness and utilization of eye care services in two Nigerian states. A population-based cross-sectional interview of households was conducted in two neighboring states using a structured questionnaire. Data analysis was performed using SPSS version 21, and P < 0.05 was considered significant. Participation rate was 97% in the two states. The population in the Vision 2020-compliant state was significantly more aware of general eye care services (80% vs. 44%, P < 0.0005); had a smaller proportion of households unaware of any eye care service (55% vs. 69%, P < 0.0005); and had a significantly higher felt need to utilize eye care services (47% vs. 5.9%, P < 0.0005). The service utilization rate was however low in the two states. The principles of Vision 2020: The Right to Sight are adaptable to different cultures/societies and have demonstrated a potential to increase awareness and a felt need for eye care in poor resource settings.

  15. Hybrid generative-discriminative approach to age-invariant face recognition

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad; Shafique, Tamoor

    2018-03-01

    Age-invariant face recognition is still a challenging research problem due to the complex aging process involving types of facial tissues, skin, fat, muscles, and bones. Most of the related studies that have addressed the aging problem are focused on generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach taking into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform a hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations compared with the rest of the facial regions, which are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract relative features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on the morphological face database II (MORPH II), the face and gesture recognition network (FG-NET), and the verification subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.

  16. Noisy Ocular Recognition Based on Three Convolutional Neural Networks

    PubMed Central

    Lee, Min Beom; Hong, Hyung Gil; Park, Kang Ryoung

    2017-01-01

    In recent years, the iris recognition system has been gaining increasing acceptance for applications such as access control and smartphone security. When the images of the iris are obtained under unconstrained conditions, an issue of undermined quality is caused by optical and motion blur, off-angle view (the user’s eyes looking somewhere else, not into the front of the camera), specular reflection (SR) and other factors. Such noisy iris images increase intra-individual variations and, as a result, reduce the accuracy of iris recognition. A typical iris recognition system requires a near-infrared (NIR) illuminator along with an NIR camera, which are larger and more expensive than fingerprint recognition equipment. Hence, many studies have proposed methods of using iris images captured by a visible light camera without the need for an additional illuminator. In this research, we propose a new recognition method for noisy iris and ocular images by using one iris and two periocular regions, based on three convolutional neural networks (CNNs). Experiments were conducted by using the noisy iris challenge evaluation-part II (NICE.II) training dataset (selected from the university of Beira iris (UBIRIS).v2 database), mobile iris challenge evaluation (MICHE) database, and institute of automation of Chinese academy of sciences (CASIA)-Iris-Distance database. As a result, the method proposed by this study outperformed previous methods. PMID:29258217

  17. From the Teacher's Eyes: Facilitating Teachers Noticings on Informal Formative Assessments (IFAS) and Exploring the Challenges to Effective Implementation

    ERIC Educational Resources Information Center

    Sezen-Barrie, Asli; Kelly, Gregory J.

    2017-01-01

    This study focuses on teachers' use of informal formative assessments (IFAs) aimed at improving students' learning and teachers' recognition of students' learning processes. The study was designed as an explorative case study of four middle school teachers and their students at a charter school in the northeastern U.S.A. The data collected for the…

  18. Independent Influences of Verbalization and Race on the Configural and Featural Processing of Faces: A Behavioral and Eye Movement Study

    ERIC Educational Resources Information Center

    Nakabayashi, Kazuyo; Lloyd-Jones, Toby J.; Butcher, Natalie; Liu, Chang Hong

    2012-01-01

    Describing a face in words can either hinder or help subsequent face recognition. Here, the authors examined the relationship between the benefit from verbally describing a series of faces and the same-race advantage (SRA) whereby people are better at recognizing unfamiliar faces from their own race as compared with those from other races.…

  19. Speech Acquisition and Automatic Speech Recognition for Integrated Spacesuit Audio Systems

    NASA Technical Reports Server (NTRS)

    Huang, Yiteng; Chen, Jingdong; Chen, Shaoyan

    2010-01-01

    A voice-command human-machine interface system has been developed for spacesuit extravehicular activity (EVA) missions. A multichannel acoustic signal processing method has been created for distant speech acquisition in noisy and reverberant environments. This technology reduces noise by exploiting differences in the statistical nature of signal (i.e., speech) and noise that exists in the spatial and temporal domains. As a result, the automatic speech recognition (ASR) accuracy can be improved to the level at which crewmembers would find the speech interface useful. The developed speech human/machine interface will enable both crewmember usability and operational efficiency. It can enjoy a fast rate of data/text entry, small overall size, and can be lightweight. In addition, this design will free the hands and eyes of a suited crewmember. The system components and steps include beam forming/multi-channel noise reduction, single-channel noise reduction, speech feature extraction, feature transformation and normalization, feature compression, model adaption, ASR HMM (Hidden Markov Model) training, and ASR decoding. A state-of-the-art phoneme recognizer can obtain an accuracy rate of 65 percent when the training and testing data are free of noise. When it is used in spacesuits, the rate drops to about 33 percent. With the developed microphone array speech-processing technologies, the performance is improved and the phoneme recognition accuracy rate rises to 44 percent. The recognizer can be further improved by combining the microphone array and HMM model adaptation techniques and using speech samples collected from inside spacesuits. In addition, arithmetic complexity models for the major HMMbased ASR components were developed. They can help real-time ASR system designers select proper tasks when in the face of constraints in computational resources.

  20. Eye Movement Indices in the Study of Depressive Disorder

    PubMed Central

    LI, Yu; XU, Yangyang; XIA, Mengqing; ZHANG, Tianhong; WANG, Junjie; LIU, Xu; HE, Yongguang; WANG, Jijun

    2016-01-01

    Background Impaired cognition is one of the most common core symptoms of depressive disorder. Eye movement testing mainly reflects patients’ cognitive functions, such as cognition, memory, attention, recognition, and recall. This type of testing has great potential to improve theories related to cognitive functioning in depressive episodes as well as potential in its clinical application. Aims This study investigated whether eye movement indices of patients with unmedicated depressive disorder were abnormal or not, as well as the relationship between these indices and mental symptoms. Methods Sixty patients with depressive disorder and sixty healthy controls (who were matched by gender, age and years of education) were recruited, and completed eye movement tests including three tasks: fixation task, saccade task and free-view task. The EyeLink desktop eye tracking system was employed to collect eye movement information, and analyze the eye movement indices of the three tasks between the two groups. Results (1) In the fixation task, compared to healthy controls, patients with depressive disorder showed more fixations, shorter fixation durations, more saccades and longer saccadic lengths; (2) In the saccade task, patients with depressive disorder showed longer anti-saccade latencies and smaller anti-saccade peak velocities; (3) In the free-view task, patients with depressive disorder showed fewer saccades and longer mean fixation durations; (4) Correlation analysis showed that there was a negative correlation between the pro-saccade amplitude and anxiety symptoms, and a positive correlation between the anti-saccade latency and anxiety symptoms. The depression symptoms were negatively correlated with fixation times, saccades, and saccadic paths respectively in the free-view task; while the mean fixation duration and depression symptoms showed a positive correlation. 
Conclusion Compared to healthy controls, patients with depressive disorder showed significantly abnormal eye movement indices. In addition patients’ anxiety and depression symptoms and eye movement indices were correlated. The pathological meaning of these phenomena deserve further exploration. PMID:28638208

  1. Eye Movement Indices in the Study of Depressive Disorder.

    PubMed

    Li, Yu; Xu, Yangyang; Xia, Mengqing; Zhang, Tianhong; Wang, Junjie; Liu, Xu; He, Yongguang; Wang, Jijun

    2016-12-25

    Impaired cognition is one of the most common core symptoms of depressive disorder. Eye movement testing mainly reflects patients' cognitive functions, such as cognition, memory, attention, recognition, and recall. This type of testing has great potential to improve theories related to cognitive functioning in depressive episodes as well as potential in its clinical application. This study investigated whether eye movement indices of patients with unmedicated depressive disorder were abnormal or not, as well as the relationship between these indices and mental symptoms. Sixty patients with depressive disorder and sixty healthy controls (who were matched by gender, age and years of education) were recruited, and completed eye movement tests including three tasks: fixation task, saccade task and free-view task. The EyeLink desktop eye tracking system was employed to collect eye movement information, and analyze the eye movement indices of the three tasks between the two groups. (1) In the fixation task, compared to healthy controls, patients with depressive disorder showed more fixations, shorter fixation durations, more saccades and longer saccadic lengths; (2) In the saccade task, patients with depressive disorder showed longer anti-saccade latencies and smaller anti-saccade peak velocities; (3) In the free-view task, patients with depressive disorder showed fewer saccades and longer mean fixation durations; (4) Correlation analysis showed that there was a negative correlation between the pro-saccade amplitude and anxiety symptoms, and a positive correlation between the anti-saccade latency and anxiety symptoms. The depression symptoms were negatively correlated with fixation times, saccades, and saccadic paths respectively in the free-view task; while the mean fixation duration and depression symptoms showed a positive correlation. Compared to healthy controls, patients with depressive disorder showed significantly abnormal eye movement indices. 
In addition patients' anxiety and depression symptoms and eye movement indices were correlated. The pathological meaning of these phenomena deserve further exploration.

  2. Food-Induced Emotional Resonance Improves Emotion Recognition.

    PubMed

    Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia

    2016-01-01

    The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce-which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one.

  3. Food-Induced Emotional Resonance Improves Emotion Recognition

    PubMed Central

    Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia

    2016-01-01

    The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce—which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one. PMID:27973559

  4. The "hypnotic state" and eye movements: Less there than meets the eye?

    PubMed Central

    Nordhjem, Barbara; Marcusson-Clavertz, David; Holmqvist, Kenneth

    2017-01-01

    Responsiveness to hypnotic procedures has been related to unusual eye behaviors for centuries. Kallio and collaborators claimed recently that they had found a reliable index for "the hypnotic state" through eye-tracking methods. Whether or not hypnotic responding involves a special state of consciousness has been part of a contentious debate in the field, so the potential validity of their claim would constitute a landmark. However, their conclusion was based on 1 highly hypnotizable individual compared with 14 controls who were not measured on hypnotizability. We sought to replicate their results with a sample screened for High (n = 16) or Low (n = 13) hypnotizability. We used a factorial 2 (high vs. low hypnotizability) x 2 (hypnosis vs. resting conditions) counterbalanced order design with these eye-tracking tasks: Fixation, Saccade, Optokinetic nystagmus (OKN), Smooth pursuit, and Antisaccade (the first three tasks has been used in Kallio et al.'s experiment). Highs reported being more deeply in hypnosis than Lows but only in the hypnotic condition, as expected. There were no significant main or interaction effects for the Fixation, OKN, or Smooth pursuit tasks. For the Saccade task both Highs and Lows had smaller saccades during hypnosis, and in the Antisaccade task both groups had slower Antisaccades during hypnosis. Although a couple of results suggest that a hypnotic condition may produce reduced eye motility, the lack of significant interactions (e.g., showing only Highs expressing a particular eye behavior during hypnosis) does not support the claim that eye behaviors (at least as measured with the techniques used) are an indicator of a "hypnotic state.” Our results do not preclude the possibility that in a more spontaneous or different setting the experience of being hypnotized might relate to specific eye behaviors. PMID:28846696

  5. Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors

    PubMed Central

    Pham, Tuyen Danh; Park, Young Ho; Nguyen, Dat Tien; Kwon, Seung Yong; Park, Kang Ryoung

    2015-01-01

    Biometrics is a technology that enables an individual person to be identified based on human physiological and behavioral characteristics. Among biometrics technologies, face recognition has been widely used because of its advantages in terms of convenience and non-contact operation. However, its performance is affected by factors such as variation in the illumination, facial expression, and head pose. Therefore, fingerprint and iris recognitions are preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness. In addition, the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracies according to various factors has not received much attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near infrared (NIR) image sensor and analyze its accuracies considering various factors. The experimental results obtained with three databases showed that our system can be operated in real applications with high accuracy; and the dissimilarity of the finger-veins of different people is larger than that of the finger types and hands. PMID:26184214

  6. Nonintrusive Finger-Vein Recognition System Using NIR Image Sensor and Accuracy Analyses According to Various Factors.

    PubMed

    Pham, Tuyen Danh; Park, Young Ho; Nguyen, Dat Tien; Kwon, Seung Yong; Park, Kang Ryoung

    2015-07-13

    Biometrics is a technology that enables an individual person to be identified based on human physiological and behavioral characteristics. Among biometrics technologies, face recognition has been widely used because of its advantages in terms of convenience and non-contact operation. However, its performance is affected by factors such as variation in the illumination, facial expression, and head pose. Therefore, fingerprint and iris recognitions are preferred alternatives. However, the performance of the former can be adversely affected by the skin condition, including scarring and dryness. In addition, the latter has the disadvantages of high cost, large system size, and inconvenience to the user, who has to align their eyes with the iris camera. In an attempt to overcome these problems, finger-vein recognition has been vigorously researched, but an analysis of its accuracies according to various factors has not received much attention. Therefore, we propose a nonintrusive finger-vein recognition system using a near infrared (NIR) image sensor and analyze its accuracies considering various factors. The experimental results obtained with three databases showed that our system can be operated in real applications with high accuracy; and the dissimilarity of the finger-veins of different people is larger than that of the finger types and hands.

  7. Darwin revisited: The vagus nerve is a causal element in controlling recognition of other's emotions.

    PubMed

    Colzato, Lorenza S; Sellaro, Roberta; Beste, Christian

    2017-07-01

    Charles Darwin proposed that via the vagus nerve, the tenth cranial nerve, emotional facial expressions are evolved, adaptive and serve a crucial communicative function. In line with this idea, the later-developed polyvagal theory assumes that the vagus nerve is the key phylogenetic substrate that regulates emotional and social behavior. The polyvagal theory assumes that optimal social interaction, which includes the recognition of emotion in faces, is modulated by the vagus nerve. So far, in humans, it has not yet been demonstrated that the vagus plays a causal role in emotion recognition. To investigate this we employed transcutaneous vagus nerve stimulation (tVNS), a novel non-invasive brain stimulation technique that modulates brain activity via bottom-up mechanisms. A sham/placebo-controlled, randomized cross-over within-subjects design was used to infer a causal relation between the stimulated vagus nerve and the related ability to recognize emotions as indexed by the Reading the Mind in the Eyes Test in 38 healthy young volunteers. Active tVNS, compared to sham stimulation, enhanced emotion recognition for easy items, suggesting that it promoted the ability to decode salient social cues. Our results confirm that the vagus nerve is causally involved in emotion recognition, supporting Darwin's argumentation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. CACNA1C risk variant affects facial emotion recognition in healthy individuals.

    PubMed

    Nieratschker, Vanessa; Brückmann, Christof; Plewnia, Christian

    2015-11-27

    Recognition and correct interpretation of facial emotion is essential for social interaction and communication. Previous studies have shown that impairments in this cognitive domain are common features of several psychiatric disorders. Recent association studies identified CACNA1C as one of the most promising genetic risk factors for psychiatric disorders and previous evidence suggests that the most replicated risk variant in CACNA1C (rs1006737) is affecting emotion recognition and processing. However, studies investigating the influence of rs1006737 on this intermediate phenotype in healthy subjects at the behavioral level are largely missing to date. Here, we applied the "Reading the Mind in the Eyes" test, a facial emotion recognition paradigm in a cohort of 92 healthy individuals to address this question. Whereas accuracy was not affected by genotype, CACNA1C rs1006737 risk-allele carries (AA/AG) showed significantly slower mean response times compared to individuals homozygous for the G-allele, indicating that healthy risk-allele carriers require more information to correctly identify a facial emotion. Our study is the first to provide evidence for an impairing behavioral effect of the CACNA1C risk variant rs1006737 on facial emotion recognition in healthy individuals and adds to the growing number of studies pointing towards CACNA1C as affecting intermediate phenotypes of psychiatric disorders.

  9. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    PubMed

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  10. Feature saliency in judging the sex and familiarity of faces.

    PubMed

    Roberts, T; Bruce, V

    1988-01-01

    Two experiments are reported on the effect of feature masking on judgements of the sex and familiarity of faces. In experiment 1 the effect of masking the eyes, nose, or mouth of famous and nonfamous, male and female faces on response times in two tasks was investigated. In the first, recognition, task only masking of the eyes had a significant effect on response times. In the second, sex-judgement, task masking of the nose gave rise to a significant and large increase in response times. In experiment 2 it was found that when facial features were presented in isolation in a sex-judgement task, responses to noses were at chance level, unlike those for eyes or mouths. It appears that visual information available from the nose in isolation from the rest of the face is not sufficient for sex judgement, yet masking of the nose may disrupt the extraction of information about the overall topography of the face, information that may be more useful for sex judgement than for identification of a face.

  11. Eye Development Genes and Known Syndromes

    PubMed Central

    Slavotinek, Anne M.

    2011-01-01

    Anophthalmia and microphthalmia (A/M) are significant eye defects because they can have profound effects on visual acuity. A/M is associated with non-ocular abnormalities in an estimated 33–95% of cases and around 25% of patients have an underlying genetic syndrome that is diagnosable. Syndrome recognition is important for targeted molecular genetic testing, prognosis and for counseling regarding recurrence risks. This review provides clinical and molecular information for several of the commonest syndromes associated with A/M: Anophthalmia-Esophageal-Genital syndrome, caused by SOX2 mutations, Anophthalmia and pituitary abnormalities caused by OTX2 mutations, Matthew-Wood syndrome caused by STRA6 mutations, Oculocardiafaciodental syndrome and Lenz microphthalmia caused by BCOR mutations, Microphthalmia Linear Skin pigmentation syndrome caused by HCCS mutations, Anophthalmia, pituitary abnormalities, polysyndactyly caused by BMP4 mutations and Waardenburg anophthalmia caused by mutations in SMOC1. In addition, we briefly discuss the ocular and extraocular phenotypes associated with several other important eye developmental genes, including GDF6, VSX2, RAX, SHH, SIX6 and PAX6. PMID:22005280

  12. The Effectiveness of Gaze-Contingent Control in Computer Games.

    PubMed

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

    Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments, in which we concentrated on one aspect of gaze-contingent interaction: Its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on "the time of recognition" the game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present the analysis of the differences as a function of the number of game units. Our results indicate that performance of gaze-contingent interaction is typically higher than mouse manipulation in a visual searching task. When tested on 60 subjects, the results showed that the effectiveness of gaze-contingent systems over 1.5 times higher. In addition, we obtained that eye behavior stays quite stabile with or without mouse interaction. © The Author(s) 2015.

  13. Improving information recognition and performance of recycling chimneys.

    PubMed

    Durugbo, Christopher

    2013-01-01

    The aim of this study was to assess and improve how recyclers (individuals carrying out the task of recycling) make use of visual cues to carryout recycling tasks in relation to 'recycling chimneys' (repositories for recycled waste). An initial task analysis was conducted through an activity sampling study and an eye tracking experiment using a mobile eye tracker to capture fixations of recyclers during recycling tasks. Following data collection using the eye tracker, a set of recommendations for improving information representation were then identified using the widely researched skills, rules, knowledge framework, and for a comparative study to assess the performance of improved interfaces for recycling chimneys based on Ecological Interface Design principles. Information representation on recycling chimneys determines how we recycle waste. This study describes an eco-ergonomics-based approach to improve the design of interfaces for recycling chimneys. The results are valuable for improving the performance of waste collection processes in terms of minimising contamination and increasing the quantity of recyclables.

  14. Oxytocin increases attention to the eyes and selectively enhances self-reported affective empathy for fear.

    PubMed

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-11-01

    Oxytocin (OXT) has previously been implicated in a range of prosocial behaviors such as trust and emotion recognition. Nevertheless, recent studies have questioned the evidence for this link. In addition, there has been relatively little conclusive research on the effect of OXT on empathic ability and such studies as there are have not examined the mechanisms through which OXT might affect empathy, or whether OXT selectively facilitates empathy for specific emotions. In the current study, we used eye-tracking to assess attention to socially relevant information while participants viewed dynamic, empathy-inducing video clips, in which protagonists expressed sadness, happiness, pain or fear. In a double-blind, within-subjects, randomized control trial, 40 healthy male participants received 24 IU intranasal OXT or placebo in two identical experimental sessions, separated by a 2-week interval. OXT led to an increase in time spent fixating upon the eye-region of the protagonist's face across emotions. OXT also selectively enhanced self-reported affective empathy for fear, but did not affect cognitive or affective empathy for other emotions. Nevertheless, there was no positive relationship between eye-gaze patterns and affective empathy, suggesting that although OXT influences eye-gaze and may enhance affective empathy for fear, these two systems are independent. Future studies need to further examine the effect of OXT on eye-gaze to fully ascertain whether this can explain the improvements in emotional behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. The role of optometrists in India: An integral part of an eye health team

    PubMed Central

    De Souza, Neilsen; Cui, Yu; Looi, Stephanie; Paudel, Prakash; Shinde, Lakshmi; Kumar, Krishna; Berwal, Rajbir; Wadhwa, Rajesh; Daniel, Vinod; Flanagan, Judith; Holden, Brien

    2012-01-01

    India has a proud tradition of blindness prevention, being the first country in the world to implement a blindness control programme which focused on a model to address blinding eye disease. However, with 133 million people blind or vision impaired due to the lack of an eye examination and provision of an appropriate pair of spectacles, it is imperative to establish a cadre of eye care professionals to work in conjunction with ophthalmologists to deliver comprehensive eye care. The integration of highly educated four year trained optometrists into primary health services is a practical means of correcting refractive error and detecting ocular disease, enabling co-managed care between ophthalmologists and optometrists. At present, the training of optometrists varies from two year trained ophthalmic assistants/optometrists or refractionists to four year degree trained optometrists. The profession of optometry in India is not regulated, integrated into the health care system or recognised by the majority of people in India as provider of comprehensive eye care services. In the last two years, the profession of optometry in India is beginning to take the necessary steps to gain recognition and regulation to become an independent primary health care profession. The formation of the Indian Optometry Federation as the single peak body of optometry in India and the soon to be established Optometry Council of India are key organisations working towards the development and regulation of optometry. PMID:22944749

  16. Paroxysmal eye–head movements in Glut1 deficiency syndrome

    PubMed Central

    Engelstad, Kristin; Kane, Steven A.; Goldberg, Michael E.; De Vivo, Darryl C.

    2017-01-01

    Objective: To describe a characteristic paroxysmal eye–head movement disorder that occurs in infants with Glut1 deficiency syndrome (Glut1 DS). Methods: We retrospectively reviewed the medical charts of 101 patients with Glut1 DS to obtain clinical data about episodic abnormal eye movements and analyzed video recordings of 18 eye movement episodes from 10 patients. Results: A documented history of paroxysmal abnormal eye movements was found in 32/101 patients (32%), and a detailed description was available in 18 patients, presented here. Episodes started before age 6 months in 15/18 patients (83%), and preceded the onset of seizures in 10/16 patients (63%) who experienced both types of episodes. Eye movement episodes resolved, with or without treatment, by 6 years of age in 7/8 patients with documented long-term course. Episodes were brief (usually <5 minutes). Video analysis revealed that the eye movements were rapid, multidirectional, and often accompanied by a head movement in the same direction. Eye movements were separated by clear intervals of fixation, usually ranging from 200 to 800 ms. The movements were consistent with eye–head gaze saccades. These movements can be distinguished from opsoclonus by the presence of a clear intermovement fixation interval and the association of a same-direction head movement. Conclusions: Paroxysmal eye–head movements, for which we suggest the term aberrant gaze saccades, are an early symptom of Glut1 DS in infancy. Recognition of the episodes will facilitate prompt diagnosis of this treatable neurodevelopmental disorder. PMID:28341645

  17. Low-grade inflammation decreases emotion recognition - Evidence from the vaccination model of inflammation.

    PubMed

    Balter, Leonie J T; Hulsken, Sasha; Aldred, Sarah; Drayson, Mark T; Higgs, Suzanne; Veldhuijzen van Zanten, Jet J C S; Raymond, Jane E; Bosch, Jos A

    2018-05-06

    The ability to adequately interpret the mental state of another person is key to complex human social interaction. Recent evidence suggests that this ability, considered a hallmark of 'theory of mind' (ToM), becomes impaired by inflammation. However, extant supportive empirical evidence is based on experiments that induce not only inflammation but also discomfort and sickness, factors that could also account for temporary social impairment. Hence, an experimental inflammation manipulation was applied that avoided this confound, isolating the effects of inflammation on social interaction. Forty healthy male participants (mean age = 25, SD = 5 years) participated in this double-blind placebo-controlled crossover trial. Inflammation was induced using Salmonella Typhi vaccination (0.025 mg; Typhim Vi, Sanofi Pasteur, UK); saline injection was used as a control. About 6 h 30 m after injection in each condition, participants completed the Reading the Mind in the Eyes Test (RMET), a validated test for assessing how well the mental states of others can be inferred through observation of the eyes region of the face. Vaccination induced systemic inflammation, elevating IL-6 by +419% (p < .001), without fever, sickness symptoms (e.g., nausea, light-headedness), or mood changes (all p's > .21). Importantly, compared to placebo, vaccination significantly reduced RMET accuracy (p < .05). RMET stimuli selected on valence (positive, negative, neutral) provided no evidence of a selective impact of treatment. By utilizing an inflammation-induction procedure that avoided concurrent sickness or symptoms in a double-blinded design, the present study provides further support for the hypothesis that immune activation impairs ToM. Such impairment may provide a mechanistic link explaining social-cognitive deficits in psychopathologies that exhibit low-grade inflammation, such as major depression.

  18. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    PubMed

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high- and low-level factors in eye guidance.
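    The saliency-at-fixation analysis this abstract describes can be illustrated with a minimal sketch: compute a crude center-surround (difference-of-Gaussians) saliency map and compare its value at fixated pixels against the image-wide average. The synthetic image, fixation coordinates, and filter scales below are illustrative assumptions, not the study's actual model or data.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)

    # Synthetic grayscale scene: noise background with one high-contrast patch
    img = rng.random((128, 128))
    img[40:60, 40:60] += 2.0

    # Center-surround saliency: difference of Gaussians at two spatial scales
    center = gaussian_filter(img, sigma=2)
    surround = gaussian_filter(img, sigma=12)
    saliency = np.abs(center - surround)
    saliency /= saliency.max()  # normalize to [0, 1]

    # Hypothetical fixation coordinates (row, col) landing on the patch
    fixations = [(50, 50), (45, 58)]
    fix_sal = np.mean([saliency[r, c] for r, c in fixations])
    rand_sal = saliency.mean()  # chance-level baseline over the whole image

    print(fix_sal > rand_sal)
    ```

    A higher mean saliency at fixated locations than at the image-wide baseline is the kind of "closer match between fixations and low-level saliency" the analysis quantifies; published work typically uses richer saliency models and proper chance baselines (e.g., shuffled fixations).
    
    
    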

  19. Facial Expressions and Ability to Recognize Emotions From Eyes or Mouth in Children

    PubMed Central

    Guarnera, Maria; Hichy, Zira; Cascio, Maura I.; Carrubba, Stefano

    2015-01-01

    This research aims to contribute to the literature on the ability to recognize anger, happiness, fear, surprise, sadness, disgust and neutral emotions from facial information. By investigating children’s performance in detecting these emotions from a specific face region, we were interested to know whether children would show differences in recognizing these expressions from the upper or lower face, and if any difference between specific facial regions depended on the emotion in question. For this purpose, a group of 6- to 7-year-old children was selected. Participants were asked to recognize emotions by using a labeling task with three stimulus types (region of the eyes, of the mouth, and full face). The findings seem to indicate that children correctly recognize basic facial expressions when pictures represent the whole face, except for a neutral expression, which was recognized from the mouth, and sadness, which was recognized from the eyes. Children are also able to identify anger from the eyes as well as from the whole face. With respect to gender differences, there is no female advantage in emotional recognition. The results indicate a significant interaction ‘gender × face region’ only for anger and neutral emotions. PMID:27247651

  20. Artifact Removal from Biosignal using Fixed Point ICA Algorithm for Pre-processing in Biometric Recognition

    NASA Astrophysics Data System (ADS)

    Mishra, Puneet; Singla, Sunil Kumar

    2013-01-01

    In the modern world of automation, biological signals, especially the electroencephalogram (EEG) and electrocardiogram (ECG), are gaining wide attention as a source of biometric information. Earlier studies have shown that EEG and ECG vary across individuals, and every individual has a distinct EEG and ECG spectrum. The EEG (which can be recorded from the scalp owing to the activity of millions of neurons) may contain noise signals such as eye blinks, eye movements, muscular movement, and line noise. Similarly, the ECG may contain artifacts such as line noise, tremor artifacts, and baseline wandering. These noise signals must be separated from the EEG and ECG signals to obtain accurate results. This paper proposes a technique for removing the eye-blink artifact from EEG and ECG signals using the fixed-point (FastICA) algorithm of Independent Component Analysis (ICA). For validation, the FastICA algorithm has been applied to a synthetic signal prepared by adding random noise to an ECG signal. The algorithm separates the signal into two independent components: the pure ECG signal and the artifact signal. Similarly, the same algorithm has been applied to remove artifacts (electrooculogram, or eye blink) from the EEG signal.
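    The separation step this abstract describes — fixed-point ICA unmixing a recording into a clean component and an artifact component — can be sketched with scikit-learn's `FastICA`. The synthetic "ECG-like" and "blink-like" sources and the mixing matrix below are illustrative assumptions standing in for real recordings, not the paper's data.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 10, 2000)

    # Two independent, non-Gaussian sources: a periodic "ECG-like" wave and a
    # square-wave "blink-like" artifact (both hypothetical stand-ins)
    ecg = np.sin(2 * np.pi * 1.2 * t)
    blink = np.sign(np.sin(2 * np.pi * 0.5 * t))
    sources = np.column_stack([ecg, blink])

    # Linear mixing simulates two channels that each pick up both sources
    A = np.array([[1.0, 0.6],
                  [0.4, 1.0]])
    mixed = sources @ A.T

    # Fixed-point ICA unmixes the channels into independent components;
    # recovered column order and sign are arbitrary, as in any ICA
    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(mixed)

    # Match each true source to its best-correlated recovered component
    corr = np.abs(np.corrcoef(sources.T, recovered.T)[:2, 2:])
    print(corr.max(axis=1))  # near 1.0 for both sources when separation succeeds
    ```

    In the artifact-removal setting, the component identified as the blink would be zeroed out and the remaining components projected back to the channel space to reconstruct a cleaned signal.
    
    
    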
